Consul server setup
Installing Consul
Download the appropriate version from the official website.
On Windows, just download and unzip it.
On Linux, you can download it with wget:
# 1. Download the archive
wget https://releases.hashicorp.com/consul/1.4.2/consul_1.4.2_linux_amd64.zip
# 2. Unzip
unzip consul_1.4.2_linux_amd64.zip
# 3. Verify the installation
./consul -v
Consul v1.4.2
Protocol 2 spoken by default, understands 2 to 3 (agent will automatically use protocol >2 when speaking to compatible agents)
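To run consul from anywhere, it helps to put the binary on your PATH; a minimal sketch, assuming /usr/local/bin is an acceptable target (any directory already on PATH works just as well):
# Optional: move the binary onto the PATH (target directory is an assumption)
sudo mv consul /usr/local/bin/
consul -v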
Starting Consul
Development server mode (-dev)
Consul supports a development server mode for quickly starting a single-node Consul:
consul agent -dev
In this mode the server keeps everything in memory and nothing is persisted; it is intended for development only.
Open http://localhost:8500 in a browser to see the Consul management UI, as shown in the figure below:
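Besides the UI, a quick sanity check can be done against the agent's HTTP API; a minimal sketch using two standard status endpoints (curl is assumed to be available):
# Which node is the current Raft leader
curl http://localhost:8500/v1/status/leader
# The current set of Raft peers
curl http://localhost:8500/v1/status/peers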
Cluster mode
Consul uses the Raft protocol for consensus, so a cluster needs at least three server nodes.
For a cluster deployment, each agent is started on its own first; the cluster is then formed by joining the agents together.
Cluster plan:
consul server1: 172.16.22.2
consul server2: 172.16.22.51
consul server3: 10.45.82.76
Start the Consul servers
172.16.22.2
[test@node1 consul]$ nohup ./consul agent -server -bind=172.16.22.2 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/usr/test/consul/data -node=server-xh-1 -ui &
172.16.22.51
[test@node2 consul]$nohup ./consul agent -server -bind=172.16.22.51 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/usr/test/consul/data -node=server-xh-2 -ui &
10.45.82.76
[test@node3 consul]$nohup ./consul agent -server -bind=10.45.82.76 -client=0.0.0.0 -bootstrap-expect=3 -data-dir=/usr/test/consul/data -node=server-xh-3 -ui &
The startup parameters are documented under Command-line Options; the flags used above are summarized briefly below.
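This is roughly what each flag does (paraphrased; see the official documentation for the authoritative descriptions):
-server: run the agent in server mode so it participates in the Raft quorum
-bind: the address used for internal cluster communication
-client: the address the client interfaces (HTTP, DNS, gRPC) bind to; 0.0.0.0 exposes them on all interfaces
-bootstrap-expect: how many servers to wait for before bootstrapping the cluster (3 here)
-data-dir: the directory where the agent persists its state
-node: the node name shown in the cluster (defaults to the hostname)
-ui: enable the built-in web UI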
If you check the Consul agents at this point, you will see that each agent only lists itself; in other words, the agents are still independent of one another.
172.16.22.2
[test@node1 consul]$ ./consul members
Node Address Status Type Build Protocol DC Segment
server-xh-1 172.16.22.2:8301 alive server 1.4.2 2 dc1 <all>
172.16.22.51
[test@node2 consul]$./consul members
Node Address Status Type Build Protocol DC Segment
server-xh-2 172.16.22.51:8301 alive server 1.4.2 2 dc1 <all>
10.45.82.76
[test@node3 consul]$./consul members
Node Address Status Type Build Protocol DC Segment
server-xh-3 10.45.82.76:8301 alive server 1.4.2 2 dc1 <all>
Form the cluster by joining 172.16.22.2 and 172.16.22.51 to 10.45.82.76:
[test@node1 consul]$./consul join 10.45.82.76
Successfully joined cluster by contacting 1 nodes.
[test@node1 consul]$./consul members
Node Address Status Type Build Protocol DC Segment
server-01 172.16.22.2:8301 alive server 1.4.2 2 dc1 <all>
server-03 10.45.82.76:8301 alive server 1.4.2 2 dc1 <all>
[test@node2 consul]$./consul join 10.45.82.76
Successfully joined cluster by contacting 1 nodes.
[test@node2 consul]$./consul members
Node Address Status Type Build Protocol DC Segment
server-01 172.16.22.2:8301 alive server 1.4.2 2 dc1 <all>
server-02 172.16.22.51:8301 alive server 1.4.2 2 dc1 <all>
server-03 10.45.82.76:8301 alive server 1.4.2 2 dc1 <all>
At this point a Consul cluster with three nodes is up and running.
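To confirm that leader election has completed, you can also list the Raft peers from any node; the command below should show all three servers, with one of them marked as leader (exact output omitted here):
./consul operator raft list-peers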
Note: during cluster deployment it is worth watching the startup logs, which show the Leader election process quite clearly.
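Since the servers above were started with nohup and their output was not redirected, the logs go to nohup.out in the working directory by default; a simple way to follow them:
tail -f nohup.out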
Important information printed when Consul starts
Below is part of the log from a development-mode startup:
==> Starting Consul agent...
==> Consul agent running!
// Consul version number
Version: 'v1.4.2'
// The agent node's ID, a randomly generated unique ID
Node ID: '4e02ef52-9690-5fc6-b16c-de9724422f84'
// The agent node's unique name; it defaults to the hostname and can be set with -node
Node name: 'localhost.localdomain'
// Datacenter. Consul supports multiple datacenters; every agent must specify the datacenter it belongs to (default dc1), which can be set with -datacenter.
Datacenter: 'dc1' (Segment: '<all>')
// Server: true means this agent runs in server mode, false means client mode.
// Bootstrap mode is enabled with the -bootstrap startup flag. At most one agent per datacenter may run in bootstrap mode, because an agent in this mode can elect itself Raft leader; more than one would cause inconsistency.
Server: true (Bootstrap: false)
// The IP address and ports that clients use to connect to the agent. By default it binds only to localhost.
Client Addr: [127.0.0.1] (HTTP: 8500, HTTPS: -1, gRPC: 8502, DNS: 8600)
// The IP address and ports used for communication between Consul agents in the cluster
Cluster Addr: 127.0.0.1 (LAN: 8301, WAN: 8302)
// Whether gossip encryption and incoming/outgoing TLS are enabled
Encrypt: Gossip: false, TLS-Outgoing: false, TLS-Incoming: false
==> Log data will now stream in as it occurs:
2019/02/18 18:49:41 [DEBUG] agent: Using random ID "4e02ef52-9690-5fc6-b16c-de9724422f84" as node ID
2019/02/18 18:49:41 [WARN] agent: Node name "localhost.localdomain" will not be discoverable via DNS due to invalid characters. Valid characters include all alpha-numerics and dashes.
2019/02/18 18:49:41 [INFO] raft: Initial configuration (index=1): [{Suffrage:Voter ID:4e02ef52-9690-5fc6-b16c-de9724422f84 Address:127.0.0.1:8300}]
2019/02/18 18:49:41 [INFO] raft: Node at 127.0.0.1:8300 [Follower] entering Follower state (Leader: "")
2019/02/18 18:49:41 [INFO] serf: EventMemberJoin: localhost.localdomain.dc1 127.0.0.1
2019/02/18 18:49:41 [INFO] serf: EventMemberJoin: localhost.localdomain 127.0.0.1
2019/02/18 18:49:41 [INFO] consul: Handled member-join event for server "localhost.localdomain.dc1" in area "wan"
2019/02/18 18:49:41 [DEBUG] agent/proxy: managed Connect proxy manager started
2019/02/18 18:49:41 [INFO] consul: Adding LAN server localhost.localdomain (Addr: tcp/127.0.0.1:8300) (DC: dc1)
2019/02/18 18:49:41 [INFO] agent: Started DNS server 127.0.0.1:8600 (udp)
2019/02/18 18:49:41 [INFO] agent: Started DNS server 127.0.0.1:8600 (tcp)
2019/02/18 18:49:41 [INFO] agent: Started HTTP server on 127.0.0.1:8500 (tcp)
2019/02/18 18:49:41 [INFO] agent: started state syncer
2019/02/18 18:49:41 [INFO] agent: Started gRPC server on 127.0.0.1:8502 (tcp)
2019/02/18 18:49:41 [WARN] raft: Heartbeat timeout from "" reached, starting election
2019/02/18 18:49:41 [INFO] raft: Node at 127.0.0.1:8300 [Candidate] entering Candidate state in term 2
2019/02/18 18:49:41 [DEBUG] raft: Votes needed: 1
2019/02/18 18:49:41 [DEBUG] raft: Vote granted from 4e02ef52-9690-5fc6-b16c-de9724422f84 in term 2. Tally: 1
2019/02/18 18:49:41 [INFO] raft: Election won. Tally: 1
2019/02/18 18:49:41 [INFO] raft: Node at 127.0.0.1:8300 [Leader] entering Leader state
2019/02/18 18:49:41 [INFO] consul: cluster leadership acquired
2019/02/18 18:49:41 [INFO] consul: New leader elected: localhost.localdomain
2019/02/18 18:49:41 [INFO] connect: initialized primary datacenter CA with provider "consul"
2019/02/18 18:49:41 [DEBUG] consul: Skipping self join check for "localhost.localdomain" since the cluster is too small
2019/02/18 18:49:41 [INFO] consul: member 'localhost.localdomain' joined, marking health alive
2019/02/18 18:49:42 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/02/18 18:49:42 [INFO] agent: Synced node info
2019/02/18 18:49:42 [DEBUG] agent: Node info in sync
2019/02/18 18:49:44 [DEBUG] agent: Skipping remote check "serfHealth" since it is managed automatically
2019/02/18 18:49:44 [DEBUG] agent: Node info in sync
From the startup logs we can see:
1. It is recommended to use the -node startup parameter to give the agent an explicit node name (see the example after this list), because the default hostname may contain invalid characters. Valid characters for node names are alphanumerics and dashes.
2. The consensus protocol is Raft.
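For example, a minimal way to name the node explicitly when starting the agent (the node name below is just a placeholder):
./consul agent -dev -node=dev-node-1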
Consul web UI
We can view cluster information through the web UI.
Note: starting the agent with the -ui parameter enables Consul's built-in web management UI, which listens on port 8500 by default.
http://10.45.82.76:8500
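The same information is also exposed over the HTTP API that the UI itself consumes; for example, listing the nodes in the cluster (illustrative curl call):
curl http://10.45.82.76:8500/v1/catalog/nodes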
consul command
Let's take a look at what the consul command supports:
[test@node3 consul]$./consul --help
Usage: consul [--version] [--help] <command> [<args>]
Available commands are:
acl Interact with Consul's ACLs
agent Runs a Consul agent
catalog Interact with the catalog
connect Interact with Consul Connect
debug Records a debugging archive for operators
event Fire a new event
exec Executes a command on Consul nodes
force-leave Forces a member of the cluster to enter the "left" state
info Provides debugging information for operators.
intention Interact with Connect service intentions
//Join a Consul server to the Consul cluster: ./consul join 172.16.22.2
join Tell Consul agent to join cluster
keygen Generates a new encryption key
keyring Manages gossip layer encryption keys
kv Interact with the key-value store
//Gracefully stop a Consul server: ./consul leave
leave Gracefully leaves the Consul cluster and shuts down
lock Execute a command holding a lock
maint Controls node or service maintenance mode
//List the members of the current cluster: ./consul members
members Lists the members of a Consul cluster
monitor Stream logs from a Consul agent
operator Provides cluster-level tools for Consul operators
reload Triggers the agent to reload configuration files
rtt Estimates network round trip time between nodes
services Interact with services
snapshot Saves, restores and inspects snapshots of Consul server state
tls Builtin helpers for creating CAs and certificates
validate Validate config files/directories
version Prints the Consul version
watch Watch for changes in Consul
You can run ./consul <command> --help to see the parameters supported by a specific command, for example ./consul agent --help.
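As a small end-to-end example, the kv subcommand writes to and reads from the cluster's key-value store (the key and value below are arbitrary):
# Write a key
./consul kv put config/demo hello
# Read it back
./consul kv get config/demo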