mirror of
https://github.com/rqlite/rqlite.git
synced 2022-10-30 02:37:32 +03:00
Add support for DNS-based autoclustering (#979)
A new hybrid of Discovery and bootstrapping for autoclustering, which uses DNS A records to find other nodes.
@@ -1,4 +1,6 @@
-## 7.1.1 (unreleased)
+## 7.2.0 (unreleased)
+### New features
+- [PR #979](https://github.com/rqlite/rqlite/pull/979): Add support for DNS-based autoclustering.
+
 ### Implementation changes and bug fixes
 - [PR #976](https://github.com/rqlite/rqlite/pull/976): Improve `readyz/` response.

@@ -6,6 +6,7 @@ This document describes various ways to dynamically form rqlite clusters, which
 ## Contents
 * [Quickstart](#quickstart)
 * [Automatic Bootstrapping](#automatic-bootstrapping)
+* [Using DNS for Bootstrapping](#using-dns-for-bootstrapping)
 * [Consul](#consul)
 * [etcd](#etcd)
 * [More details](#more-details)

@@ -22,18 +23,18 @@ For simplicity, let's assume you want to run a 3-node rqlite cluster. To bootstr

 Node 1:
 ```bash
-rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
-  -bootstrap-expect 3 -join http://$IP1:HTTP_PORT,http://$IP2:HTTP_PORT,http://$IP2:HTTP_PORT data
+rqlited -node-id $ID1 -http-addr=$IP1:4001 -raft-addr=$IP1:4002 \
+  -bootstrap-expect 3 -join http://$IP1:4001,http://$IP2:4001,http://$IP3:4001 data
 ```
 Node 2:
 ```bash
-rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
-  -bootstrap-expect 3 -join http://$IP1:HTTP_PORT,http://$IP2:HTTP_PORT,http://$IP2:HTTP_PORT data
+rqlited -node-id $ID2 -http-addr=$IP2:4001 -raft-addr=$IP2:4002 \
+  -bootstrap-expect 3 -join http://$IP1:4001,http://$IP2:4001,http://$IP3:4001 data
 ```
 Node 3:
 ```bash
-rqlited -http-addr=$IP3:$HTTP_PORT -raft-addr=$IP3:$RAFT_PORT \
-  -bootstrap-expect 3 -join http://$IP1:HTTP_PORT,http://$IP2:HTTP_PORT,http://$IP2:HTTP_PORT data
+rqlited -node-id $ID3 -http-addr=$IP3:4001 -raft-addr=$IP3:4002 \
+  -bootstrap-expect 3 -join http://$IP1:4001,http://$IP2:4001,http://$IP3:4001 data
 ```

 `-bootstrap-expect` should be set to the number of nodes that must be available before the bootstrapping process will commence, in this case 3. You also set `-join` to the HTTP URLs of all 3 nodes in the cluster. **Each launch command must use the same values for `-bootstrap-expect` and `-join`.**
@@ -43,10 +44,20 @@ After the cluster has formed, you can launch more nodes with the same options. A
 #### Docker
 With Docker you can launch every node identically:
 ```bash
-docker run rqlite/rqlite -bootstrap-expect 3 -join http://$IP1:HTTP_PORT,http://$IP2:HTTP_PORT,http://$IP2:HTTP_PORT
+docker run rqlite/rqlite -bootstrap-expect 3 -join http://$IP1:4001,http://$IP2:4001,http://$IP3:4001
 ```
 where `$IP[1-3]` are the expected network addresses of the containers.

+### Using DNS for Bootstrapping
+You can also use the Domain Name System (DNS) to bootstrap a cluster. This is similar to automatic clustering, but doesn't require you to specify the network addresses of other nodes at the command line. Instead you create a DNS record for the host `rqlite`, with an [A Record](https://www.cloudflare.com/learning/dns/dns-records/dns-a-record/) for each rqlite node's IP address.
+
+To launch a node using DNS bootstrap, execute the following command:
+```bash
+rqlited -node-id $ID1 -http-addr=$IP1:4001 -raft-addr=$IP1:4002 \
+  -disco-mode=dns -bootstrap-expect 3 data
+```
+You would launch two other nodes similarly.
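The mapping from A-record answers to join targets can be sketched in Go. This is an illustrative helper only, not rqlite's actual client (which lives in the rqlite-disco-clients repository); `joinTargets` and the `localhost` lookup are assumptions made for the example:

```go
package main

import (
	"fmt"
	"net"
)

// joinTargets turns resolved A-record IPs into rqlite HTTP join URLs.
// Hypothetical helper, shown only to illustrate the idea.
func joinTargets(ips []string, httpPort int) []string {
	targets := make([]string, 0, len(ips))
	for _, ip := range ips {
		targets = append(targets, fmt.Sprintf("http://%s:%d", ip, httpPort))
	}
	return targets
}

func main() {
	// A real deployment would resolve the "rqlite" hostname instead.
	ips, err := net.LookupHost("localhost")
	if err != nil {
		ips = []string{"127.0.0.1"} // fall back for offline environments
	}
	fmt.Println(joinTargets(ips, 4001))
}
```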

 ### Consul
 Another approach uses [Consul](https://www.consul.io/) to coordinate clustering. The advantage of this approach is that you do not need to know the network addresses of the nodes ahead of time.
@@ -54,17 +65,17 @@ Let's assume your Consul cluster is running at `http://example.com:8500`. Let's

 Node 1:
 ```bash
-rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
+rqlited -node-id $ID1 -http-addr=$IP1:4001 -raft-addr=$IP1:4002 \
   -disco-mode consul-kv -disco-config '{"address": "example.com:8500"}' data
 ```
 Node 2:
 ```bash
-rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
+rqlited -node-id $ID2 -http-addr=$IP2:4001 -raft-addr=$IP2:4002 \
   -disco-mode consul-kv -disco-config '{"address": "example.com:8500"}' data
 ```
 Node 3:
 ```bash
-rqlited -http-addr=$IP3:$HTTP_PORT -raft-addr=$IP3:$RAFT_PORT \
+rqlited -node-id $ID3 -http-addr=$IP3:4001 -raft-addr=$IP3:4002 \
   -disco-mode consul-kv -disco-config '{"address": "example.com:8500"}' data
 ```
@@ -83,17 +94,17 @@ Let's assume etcd is available at `example.com:2379`.

 Node 1:
 ```bash
-rqlited -http-addr=$IP1:$HTTP_PORT -raft-addr=$IP1:$RAFT_PORT \
+rqlited -node-id $ID1 -http-addr=$IP1:4001 -raft-addr=$IP1:4002 \
   -disco-mode etcd-kv -disco-config '{"endpoints": ["example.com:2379"]}' data
 ```
 Node 2:
 ```bash
-rqlited -http-addr=$IP2:$HTTP_PORT -raft-addr=$IP2:$RAFT_PORT \
+rqlited -node-id $ID2 -http-addr=$IP2:4001 -raft-addr=$IP2:4002 \
   -disco-mode etcd-kv -disco-config '{"endpoints": ["example.com:2379"]}' data
 ```
 Node 3:
 ```bash
-rqlited -http-addr=$IP3:$HTTP_PORT -raft-addr=$IP3:$RAFT_PORT \
+rqlited -node-id $ID3 -http-addr=$IP3:4001 -raft-addr=$IP3:4002 \
   -disco-mode etcd-kv -disco-config '{"endpoints": ["example.com:2379"]}' data
 ```
 Like with Consul autoclustering, the cluster Leader will continually report its address to etcd.
@@ -104,13 +115,14 @@ docker run rqlite/rqlite -disco-mode=etcd-kv -disco-config '{"endpoints": ["exam
 ```

 ## More Details
-### Controlling Consul and etcd configuration
-For both Consul and etcd, `-disco-confg` can either be an actual JSON string, or a path to a file containing a JSON-formatted configuration. The former option may be more convenient if the configuration you need to supply is very short, as in the example above.
+### Controlling Discovery configuration
+For detailed control over Discovery configuration, `-disco-config` can either be an actual JSON string, or a path to a file containing a JSON-formatted configuration. The former option may be more convenient if the configuration you need to supply is very short, as in the example above.

-The example above demonstrates a simple configuration, and most real deployments will require more configuration information for Consul and etcd. For example, your Consul system might be reachable over HTTPS. To more fully configure rqlite for Discovery, consult the relevant configuration specification below. You must create a JSON-formatted file which matches that described in the source code.
+The example above demonstrates a simple configuration, and most real deployments may require more detailed configuration. For example, your Consul system might be reachable over HTTPS. To more fully configure rqlite for Discovery, consult the relevant configuration specification below. You must create a JSON-formatted file which matches that described in the source code.

 - [Full Consul configuration description](https://github.com/rqlite/rqlite-disco-clients/blob/main/consul/config.go)
 - [Full etcd configuration description](https://github.com/rqlite/rqlite-disco-clients/blob/main/etcd/config.go)
+- [Full DNS configuration description](https://github.com/rqlite/rqlite-disco-clients/blob/main/dns/config.go)

 #### Running multiple different clusters
 If you wish a single Consul or etcd system to support multiple rqlite clusters, then set the `-disco-key` command line argument to a different value for each cluster.
@@ -119,3 +131,5 @@ If you wish a single Consul or etcd system to support multiple rqlite clusters,

 When using Automatic Bootstrapping, each node notifies all other nodes of its existence. The first node to have a record of enough nodes (set by `-bootstrap-expect`) forms the cluster. Only one node can ever form a cluster; any nodes that attempt to do so later will fail, and instead become Followers in the new cluster.
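The rendezvous rule described above — a node may bootstrap only once it knows of `-bootstrap-expect` nodes, itself included — can be sketched as follows. This is an illustration of the rule only, not rqlite's actual code; the `tracker` type is invented for the example:

```go
package main

import "fmt"

// tracker records peers that have announced themselves. Once expect
// peers are known, bootstrapping may commence.
type tracker struct {
	expect int
	seen   map[string]bool
}

// announce records a peer's Raft address and reports whether enough
// nodes are now known to form the cluster.
func (t *tracker) announce(raftAddr string) bool {
	t.seen[raftAddr] = true
	return len(t.seen) >= t.expect
}

func main() {
	t := &tracker{expect: 3, seen: map[string]bool{}}
	fmt.Println(t.announce("10.0.0.1:4002")) // false
	fmt.Println(t.announce("10.0.0.2:4002")) // false
	fmt.Println(t.announce("10.0.0.3:4002")) // true: enough nodes known
}
```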
 When using either Consul or etcd for automatic clustering, rqlite uses the key-value store of each system. In each case the Leader atomically sets its HTTP URL, allowing other nodes to discover it. To prevent multiple nodes updating the Leader key at once, nodes use a check-and-set operation, only updating the Leader key if its value has not changed since it was last read by the node. See [this blog post](https://www.philipotoole.com/rqlite-7-0-designing-node-discovery-and-automatic-clustering/) for more details on the design.
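The check-and-set semantics can be sketched with a toy in-memory store; Consul and etcd expose equivalent primitives over their APIs. This is an illustration of the semantics only, not rqlite's implementation:

```go
package main

import (
	"fmt"
	"sync"
)

// kv is a toy in-memory key-value store supporting check-and-set.
type kv struct {
	mu   sync.Mutex
	data map[string]string
}

// cas writes newVal under key only if the current value still equals
// old, and reports whether the write happened.
func (s *kv) cas(key, old, newVal string) bool {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.data[key] != old {
		return false // another node updated the key since our read
	}
	s.data[key] = newVal
	return true
}

func main() {
	s := &kv{data: map[string]string{"rqlite/leader": ""}}
	fmt.Println(s.cas("rqlite/leader", "", "http://10.0.0.1:4001")) // true
	fmt.Println(s.cas("rqlite/leader", "", "http://10.0.0.2:4001")) // false: stale read
}
```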
+
+For DNS-based discovery, the rqlite nodes simply resolve the hostname and use the returned network addresses, once the number of returned addresses is at least as great as the `-bootstrap-expect` value.
@@ -91,7 +91,7 @@ func (b *Bootstrapper) Boot(id, raftAddr string, done func() bool, timeout time.

 		targets, err := b.provider.Lookup()
 		if err != nil {
-			b.logger.Printf("provider loopup failed %s", err.Error())
+			b.logger.Printf("provider lookup failed %s", err.Error())
 		}
 		if len(targets) < b.expect {
 			continue
@@ -1,9 +1,11 @@
 package main

 import (
+	"bytes"
 	"errors"
 	"flag"
 	"fmt"
+	"io"
 	"net"
 	"net/url"
 	"os"
@@ -16,6 +18,7 @@ const (
 	DiscoModeNone     = ""
 	DiscoModeConsulKV = "consul-kv"
 	DiscoModeEtcdKV   = "etcd-kv"
+	DiscoModeDNS      = "dns"
 )

 // Config represents the configuration as set by command-line flags.
@@ -240,8 +243,13 @@ func (c *Config) Validate() error {
 		if c.BootstrapExpect > 0 {
 			return fmt.Errorf("bootstrapping not applicable when using %s", c.DiscoMode)
 		}
+	case DiscoModeDNS:
+		if c.BootstrapExpect == 0 {
+			return fmt.Errorf("bootstrap-expect value required when using %s", c.DiscoMode)
+		}
 	default:
-		return fmt.Errorf("disco mode must be %s or %s", DiscoModeConsulKV, DiscoModeEtcdKV)
+		return fmt.Errorf("disco mode must be %s, %s, or %s",
+			DiscoModeConsulKV, DiscoModeEtcdKV, DiscoModeDNS)
 	}

 	return nil
@@ -266,6 +274,25 @@ func (c *Config) HTTPURL() string {
 	return fmt.Sprintf("%s://%s", apiProto, c.HTTPAdv)
 }

+// DiscoConfigReader returns a ReadCloser providing access to the Disco config.
+// The caller must call close on the ReadCloser when finished with it. If no
+// config was supplied, it returns nil.
+func (c *Config) DiscoConfigReader() io.ReadCloser {
+	var rc io.ReadCloser
+	if c.DiscoConfig == "" {
+		return nil
+	}
+
+	// Open config file. If opening fails, assume string is the literal config.
+	cfgFile, err := os.Open(c.DiscoConfig)
+	if err != nil {
+		rc = io.NopCloser(bytes.NewReader([]byte(c.DiscoConfig)))
+	} else {
+		rc = cfgFile
+	}
+	return rc
+}
+
 // BuildInfo is build information for display at command line.
 type BuildInfo struct {
 	Version string
@@ -2,7 +2,6 @@
 package main

 import (
-	"bytes"
 	"crypto/tls"
 	"crypto/x509"
 	"fmt"
@@ -18,6 +17,7 @@ import (
 	"time"

 	consul "github.com/rqlite/rqlite-disco-clients/consul"
+	"github.com/rqlite/rqlite-disco-clients/dns"
 	etcd "github.com/rqlite/rqlite-disco-clients/etcd"
 	"github.com/rqlite/rqlite/auth"
 	"github.com/rqlite/rqlite/cluster"
@@ -202,44 +202,35 @@ func createStore(cfg *Config, ln *tcp.Layer) (*store.Store, error) {
 func createDiscoService(cfg *Config, str *store.Store) (*disco.Service, error) {
 	var c disco.Client
 	var err error
-	var reader io.Reader
+	var rc io.ReadCloser

-	if cfg.DiscoConfig != "" {
-		// Open config file. If opening fails, assume the config is a JSON string.
-		cfgFile, err := os.Open(cfg.DiscoConfig)
-		if err != nil {
-			reader = bytes.NewReader([]byte(cfg.DiscoConfig))
-		} else {
-			reader = cfgFile
-			defer cfgFile.Close()
-		}
-	}
-
+	rc = cfg.DiscoConfigReader()
+	defer func() {
+		if rc != nil {
+			rc.Close()
+		}
+	}()
 	if cfg.DiscoMode == DiscoModeConsulKV {
 		var consulCfg *consul.Config
-		if reader != nil {
-			consulCfg, err = consul.NewConfigFromReader(reader)
-			if err != nil {
-				return nil, err
-			}
-		}
+		consulCfg, err = consul.NewConfigFromReader(rc)
+		if err != nil {
+			return nil, fmt.Errorf("create Consul config: %s", err.Error())
+		}

 		c, err = consul.New(cfg.DiscoKey, consulCfg)
 		if err != nil {
-			return nil, err
+			return nil, fmt.Errorf("create Consul client: %s", err.Error())
 		}
 	} else if cfg.DiscoMode == DiscoModeEtcdKV {
 		var etcdCfg *etcd.Config
-		if reader != nil {
-			etcdCfg, err = etcd.NewConfigFromReader(reader)
-			if err != nil {
-				return nil, err
-			}
-		}
+		etcdCfg, err = etcd.NewConfigFromReader(rc)
+		if err != nil {
+			return nil, fmt.Errorf("create etcd config: %s", err.Error())
+		}

 		c, err = etcd.New(cfg.DiscoKey, etcdCfg)
 		if err != nil {
-			return nil, err
+			return nil, fmt.Errorf("create etcd client: %s", err.Error())
 		}
 	} else {
 		return nil, fmt.Errorf("invalid disco service: %s", cfg.DiscoMode)
@@ -377,54 +368,82 @@ func createCluster(cfg *Config, tlsConfig *tls.Config, hasPeers bool, str *store
 		// existing Raft state.
 		return nil
 	}

 	log.Printf("discovery mode: %s", cfg.DiscoMode)
-	discoService, err := createDiscoService(cfg, str)
-	if err != nil {
-		return fmt.Errorf("failed to start discovery service: %s", err.Error())
-	}
-
-	if !hasPeers {
-		log.Println("no preexisting nodes, registering with discovery service")
-
-		leader, addr, err := discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
-		if err != nil {
-			return fmt.Errorf("failed to register with discovery service: %s", err.Error())
-		}
-		if leader {
-			log.Println("node registered as leader using discovery service")
-			if err := str.Bootstrap(store.NewServer(str.ID(), str.Addr(), true)); err != nil {
-				return fmt.Errorf("failed to bootstrap single new node: %s", err.Error())
-			}
-		} else {
-			for {
-				log.Printf("discovery service returned %s as join address", addr)
-				if err := addJoinCreds([]string{addr}, cfg.JoinAs, credStr); err != nil {
-					return fmt.Errorf("failed too add auth creds: %s", err.Error())
-				}
-
-				if j, err := cluster.Join(cfg.JoinSrcIP, []string{addr}, str.ID(), cfg.RaftAdv, !cfg.RaftNonVoter,
-					cfg.JoinAttempts, cfg.JoinInterval, tlsConfig); err != nil {
-					log.Printf("failed to join cluster at %s: %s", addr, err.Error())
-
-					time.Sleep(time.Second)
-					_, addr, err = discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
-					if err != nil {
-						log.Printf("failed to get updated leader: %s", err.Error())
-					}
-					continue
-				} else {
-					log.Println("successfully joined cluster at", j)
-					break
-				}
-			}
-		}
-	} else {
-		log.Println("preexisting node configuration detected, not registering with discovery service")
-	}
-
-	go discoService.StartReporting(cfg.NodeID, cfg.HTTPURL(), cfg.RaftAdv)
-	httpServ.RegisterStatus("disco", discoService)
+	// DNS disco is special.
+	if cfg.DiscoMode == DiscoModeDNS {
+		if hasPeers {
+			log.Printf("preexisting node configuration detected, ignoring %s", cfg.DiscoMode)
+			return nil
+		}
+		rc := cfg.DiscoConfigReader()
+		defer func() {
+			if rc != nil {
+				rc.Close()
+			}
+		}()
+		dnsCfg, err := dns.NewConfigFromReader(rc)
+		if err != nil {
+			return fmt.Errorf("error reading DNS configuration: %s", err.Error())
+		}
+		dnsClient := dns.New(dnsCfg)
+
+		bs := cluster.NewBootstrapper(dnsClient, cfg.BootstrapExpect, tlsConfig)
+		done := func() bool {
+			leader, _ := str.LeaderAddr()
+			return leader != ""
+		}
+
+		httpServ.RegisterStatus("disco", dnsClient)
+		return bs.Boot(str.ID(), cfg.RaftAdv, done, cfg.BootstrapExpectTimeout)
+	} else {
+		discoService, err := createDiscoService(cfg, str)
+		if err != nil {
+			return fmt.Errorf("failed to start discovery service: %s", err.Error())
+		}
+
+		if !hasPeers {
+			log.Println("no preexisting nodes, registering with discovery service")
+
+			leader, addr, err := discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
+			if err != nil {
+				return fmt.Errorf("failed to register with discovery service: %s", err.Error())
+			}
+			if leader {
+				log.Println("node registered as leader using discovery service")
+				if err := str.Bootstrap(store.NewServer(str.ID(), str.Addr(), true)); err != nil {
+					return fmt.Errorf("failed to bootstrap single new node: %s", err.Error())
+				}
+			} else {
+				for {
+					log.Printf("discovery service returned %s as join address", addr)
+					if err := addJoinCreds([]string{addr}, cfg.JoinAs, credStr); err != nil {
+						return fmt.Errorf("failed to add auth creds: %s", err.Error())
+					}
+
+					if j, err := cluster.Join(cfg.JoinSrcIP, []string{addr}, str.ID(), cfg.RaftAdv, !cfg.RaftNonVoter,
+						cfg.JoinAttempts, cfg.JoinInterval, tlsConfig); err != nil {
+						log.Printf("failed to join cluster at %s: %s", addr, err.Error())
+
+						time.Sleep(time.Second)
+						_, addr, err = discoService.Register(str.ID(), cfg.HTTPURL(), cfg.RaftAdv)
+						if err != nil {
+							log.Printf("failed to get updated leader: %s", err.Error())
+						}
+						continue
+					} else {
+						log.Println("successfully joined cluster at", j)
+						break
+					}
+				}
+			}
+		} else {
+			log.Println("preexisting node configuration detected, not registering with discovery service")
+		}
+
+		go discoService.StartReporting(cfg.NodeID, cfg.HTTPURL(), cfg.RaftAdv)
+		httpServ.RegisterStatus("disco", discoService)
+	}
 	return nil
 }
@@ -129,7 +129,7 @@ func (s *Service) Stats() (map[string]interface{}, error) {
 	defer s.mu.Unlock()

 	return map[string]interface{}{
-		"name":              s.c.String(),
+		"mode":              s.c.String(),
 		"register_interval": s.RegisterInterval,
 		"report_interval":   s.ReportInterval,
 		"last_contact":      s.lastContact,
go.mod
@@ -21,15 +21,15 @@ require (
 	github.com/mkideal/pkg v0.1.3
 	github.com/rqlite/go-sqlite3 v1.22.0
 	github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce
-	github.com/rqlite/rqlite-disco-clients v0.0.0-20220126132740-4d4f660bbdf0
+	github.com/rqlite/rqlite-disco-clients v0.0.0-20220130234129-c05d8e6f4a92
 	go.etcd.io/bbolt v1.3.6
 	go.uber.org/atomic v1.9.0 // indirect
 	go.uber.org/multierr v1.7.0 // indirect
 	go.uber.org/zap v1.20.0 // indirect
-	golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce
-	golang.org/x/net v0.0.0-20220121210141-e204ce36a2ba
-	golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 // indirect
-	google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 // indirect
+	golang.org/x/crypto v0.0.0-20220128200615-198e4374d7ed
+	golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd
+	golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27 // indirect
+	google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 // indirect
 	google.golang.org/grpc v1.44.0 // indirect
 	google.golang.org/protobuf v1.27.1
 )
go.sum
@@ -251,8 +251,8 @@ github.com/rqlite/go-sqlite3 v1.22.0 h1:twqvKzylJXG62Qe0rcqdy5ClGhc0YRc2vvA3nEXw
 github.com/rqlite/go-sqlite3 v1.22.0/go.mod h1:ml55MVv28UP7V8zrxILd2EsrI6Wfsz76YSskpg08Ut4=
 github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce h1:sVlzmCJiaM0LGK3blAHOD/43QxJZ8bLCDcsqZRatnFE=
 github.com/rqlite/raft-boltdb v0.0.0-20211018013422-771de01086ce/go.mod h1:mc+WNDHyskdViYAoPnaMXEBnSKBmoUgiEZjrlAj6G34=
-github.com/rqlite/rqlite-disco-clients v0.0.0-20220126132740-4d4f660bbdf0 h1:FCi46URP/KrvJSXrWz/PR9ntzj5dJ8Tp+i0BRHBHTfs=
-github.com/rqlite/rqlite-disco-clients v0.0.0-20220126132740-4d4f660bbdf0/go.mod h1:pym85nj6JnCI7rM9RxTZ4cubkTQyyg7uLwVydso9B80=
+github.com/rqlite/rqlite-disco-clients v0.0.0-20220130234129-c05d8e6f4a92 h1:Pfq4lSuNANFRRQDODU2Rcy+rJH7ezDKJiM6rKqDpBAk=
+github.com/rqlite/rqlite-disco-clients v0.0.0-20220130234129-c05d8e6f4a92/go.mod h1:pym85nj6JnCI7rM9RxTZ4cubkTQyyg7uLwVydso9B80=
 github.com/ryanuber/columnize v0.0.0-20160712163229-9b3edd62028f/go.mod h1:sm1tb6uqfes/u+d4ooFouqFdy9/2g9QGwK3SQygK0Ts=
 github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529 h1:nn5Wsu0esKSJiIVhscUtVbo7ada43DJhG55ua/hjS5I=
 github.com/sean-/seed v0.0.0-20170313163322-e2103e2c3529/go.mod h1:DxrIzT+xaE7yg65j358z/aeFdxmN0P9QXhEzd20vsDc=
@@ -303,8 +303,8 @@ golang.org/x/crypto v0.0.0-20190923035154-9ee001bba392/go.mod h1:/lpIB1dKB+9EgE3
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
 golang.org/x/crypto v0.0.0-20201221181555-eec23a3978ad/go.mod h1:jdWPYTVW3xRLrWPugEBEK3UY2ZEsg3UU495nc5E+M+I=
-golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce h1:Roh6XWxHFKrPgC/EQhVubSAGQ6Ozk6IdxHSzt1mR0EI=
-golang.org/x/crypto v0.0.0-20220112180741-5e0467b6c7ce/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
+golang.org/x/crypto v0.0.0-20220128200615-198e4374d7ed h1:YoWVYYAfvQ4ddHv3OKmIvX7NCAhFGTj62VP2l2kfBbA=
+golang.org/x/crypto v0.0.0-20220128200615-198e4374d7ed/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
 golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
 golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
 golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
@@ -335,8 +335,8 @@ golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v
 golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
 golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy7fQ90B1CfIiPueXVOjqfkSzI8=
 golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
-golang.org/x/net v0.0.0-20220121210141-e204ce36a2ba h1:6u6sik+bn/y7vILcYkK3iwTBWN7WtBvB0+SZswQnbf8=
-golang.org/x/net v0.0.0-20220121210141-e204ce36a2ba/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
+golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd h1:O7DYs+zxREGLKzKoMQrtrEacpb0ZVXA5rIwylE2Xchk=
+golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
 golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
 golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
 golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
@@ -385,8 +385,8 @@ golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBc
 golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.0.0-20220114195835-da31bd327af9 h1:XfKQ4OlFl8okEOr5UvAqFRVj8pY/4yfcXrddB8qAbU0=
-golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
+golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27 h1:XDXtA5hveEEV8JB2l7nhMTp3t3cHp9ZpwcdjqyEWLlo=
+golang.org/x/sys v0.0.0-20220128215802-99c3d69c2c27/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 h1:JGgROgKl9N8DuW20oFS5gxc+lE67/N3FcwmBPMe7ArY=
@@ -423,8 +423,8 @@ google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98
 google.golang.org/genproto v0.0.0-20200513103714-09dca8ec2884/go.mod h1:55QSHmfGQM9UVYDPBsyGGes0y52j32PQ3BqQfXhyH3c=
 google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
 google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c/go.mod h1:UODoCrxHCcBojKKwX1terBiRUaqAsFqJiF615XL43r0=
-google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5 h1:zzNejm+EgrbLfDZ6lu9Uud2IVvHySPl8vQzf04laR5Q=
-google.golang.org/genproto v0.0.0-20220118154757-00ab72f36ad5/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
+google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350 h1:YxHp5zqIcAShDEvRr5/0rVESVS+njYF68PSdazrNLJo=
+google.golang.org/genproto v0.0.0-20220126215142-9970aeb2e350/go.mod h1:5CzLGKJ67TSI2B9POpiiyGha0AjJvZIUgRMt1dSmuhc=
 google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
 google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
 google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
@@ -217,7 +217,7 @@ class Node(object):

     def disco_mode(self):
         try:
-            return self.status()['disco']['name']
+            return self.status()['disco']['mode']
         except requests.exceptions.ConnectionError:
             return ''