High Availability in Docker Swarm

2016-07-12  zerolinke

In Docker Swarm, the Swarm manager is responsible for the entire cluster and manages the resources of many Docker hosts at scale. If the Swarm manager dies, you must create a new one and deal with the interruption of service.

Docker Swarm provides a High Availability feature: when a manager instance fails, Docker Swarm can gracefully fail over. To use this feature, you create a single primary manager instance and multiple replica instances.

A primary manager is the main point of contact with the Docker Swarm cluster. You can also create replica instances and talk to them; requests issued on a replica are automatically proxied to the primary manager. If the primary manager fails, a replica takes over the lead. With this mechanism, you always keep a point of contact with the cluster.

Set up the primary and replicas

This section explains how to set up a Docker Swarm with multiple managers.

Prerequisites

You need at least one of Consul, etcd, or ZooKeeper available to the cluster. This procedure is based on a Consul server running at 192.168.42.10:8500. All hosts run a Docker Engine listening on port 2375, and all managers operate on port 4000. The Swarm hosts in this example are:

manager-1: 192.168.42.200
manager-2: 192.168.42.201
manager-3: 192.168.42.202
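If you still need to bring up the discovery backend and the Engines, a minimal sketch might look like the following; the progrium/consul image and the exact Engine invocation are assumptions, and any Consul, etcd, or ZooKeeper instance reachable by every host will do:

# On 192.168.42.10: run a single-node Consul server (image and flags are an assumption)
user@discovery $ docker run -d -p 8500:8500 --name=consul progrium/consul -server -bootstrap

# On every Swarm host: have the Docker Engine listen on TCP port 2375 as well as the local socket
# (older Engines use "docker daemon" instead of "dockerd")
user@host $ dockerd -H tcp://0.0.0.0:2375 -H unix:///var/run/docker.sock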

Create the primary manager

You create a primary manager with the swarm manage command and the --replication and --advertise flags:

user@manager-1 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.200:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP                            addr=:4000 proto=tcp
INFO[0000] Cluster leadership acquired
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

The --replication flag tells Swarm that this manager is part of a multi-manager configuration and that it competes with the other manager instances for the primary role. The primary manager has the authority to manage the cluster, replicate logs, and replicate the events happening inside the cluster.

The --advertise flag specifies this manager's address. Swarm uses this address to advertise to the cluster once the node is elected as the primary. As you can see in the command output, the address elected as the primary manager is the one you provided.

Create two replicas

Now that you have a primary manager, you can create replicas.

user@manager-2 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.201:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP                            addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

This command creates a replica manager at 192.168.42.201:4000, while the primary manager runs at 192.168.42.200:4000.

Create an additional, third manager instance:

user@manager-3 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.202:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP                            addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
[...]

Once you have set up your primary manager and the replicas, create Swarm agents as you normally would.
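For reference, a joining sketch for the three agents that appear in the docker info output below; the agent hostnames are illustrative, and the discovery URL is the same one the managers use:

user@swarm-agent-0 $ swarm join --advertise=192.168.42.100:2375 consul://192.168.42.10:8500/nodes
user@swarm-agent-1 $ swarm join --advertise=192.168.42.101:2375 consul://192.168.42.10:8500/nodes
user@swarm-agent-2 $ swarm join --advertise=192.168.42.102:2375 consul://192.168.42.10:8500/nodes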

List the machines in the cluster

Running docker info should give you output similar to the following:

user@my-machine $ export DOCKER_HOST=192.168.42.200:4000 # Points to manager-1
user@my-machine $ docker info
Containers: 0
Images: 25
Storage Driver:
Role: Primary  <--------- manager-1 is the Primary manager
Primary: 192.168.42.200
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3
 swarm-agent-0: 192.168.42.100:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
 swarm-agent-1: 192.168.42.101:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
 swarm-agent-2: 192.168.42.102:2375
  └ Containers: 0
  └ Reserved CPUs: 0 / 1
  └ Reserved Memory: 0 B / 2.053 GiB
  └ Labels: executiondriver=native-0.2, kernelversion=3.13.0-49-generic, operatingsystem=Ubuntu 14.04.2 LTS, storagedriver=aufs
Execution Driver:
Kernel Version:
Operating System:
CPUs: 3
Total Memory: 6.158 GiB
Name:
ID:
Http Proxy:
Https Proxy:
No Proxy:

This information shows that manager-1 is the current primary.

Test the failover mechanism

To test the failover mechanism, shut down the designated primary manager: hit Ctrl-C or send a kill signal to the current primary manager (manager-1) to shut it down.
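Assuming swarm manage is running in the foreground on manager-1, as in the commands above, either of the following works; the pkill pattern is only an illustration:

user@manager-1 $ # press Ctrl-C in the terminal running swarm manage, or:
user@manager-1 $ pkill -f "swarm manage"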

Wait for automated failover

After a short period of time, the other instances detect the failure and a replica takes over the lead, becoming the primary manager.

Look at manager-2's logs:

user@manager-2 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.201:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP                            addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
INFO[0038] New leader elected: 192.168.42.201:4000
INFO[0038] Cluster leadership acquired               <--- We have been elected as the new Primary Manager
[...]

Because the primary manager, manager-1, failed, the replica with the address 192.168.42.201:4000, manager-2, recognized the failure and attempted to take over the lead. Because manager-2 was fast enough, it was effectively elected as the primary manager. As a result, manager-2 became the primary manager of the cluster.

If we look at manager-3, we see the following logs:

user@manager-3 $ swarm manage -H :4000 <tls-config-flags> --replication --advertise 192.168.42.202:4000 consul://192.168.42.10:8500/nodes
INFO[0000] Listening for HTTP                            addr=:4000 proto=tcp
INFO[0000] Cluster leadership lost
INFO[0000] New leader elected: 192.168.42.200:4000
INFO[0036] New leader elected: 192.168.42.201:4000   <--- manager-3 sees the new Primary Manager
[...]

At this point, we need to export a new DOCKER_HOST value.

Switch to the primary

To switch to manager-2, export DOCKER_HOST as follows:

user@my-machine $ export DOCKER_HOST=192.168.42.201:4000 # Points to manager-2
user@my-machine $ docker info
Containers: 0
Images: 25
Storage Driver:
Role: Replica  <--------- manager-2 is a Replica
Primary: 192.168.42.200
Strategy: spread
Filters: affinity, health, constraint, port, dependency
Nodes: 3

You can use the docker command against any Docker Swarm primary manager or any replica.
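For example, a container started through a replica is proxied to the primary and is therefore visible through any other manager; the image and container name below are only illustrations:

user@my-machine $ docker -H tcp://192.168.42.202:4000 run -d --name ha-test nginx
user@my-machine $ docker ps --filter name=ha-test     # the same container shows up via the current DOCKER_HOST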

If you like, you can use a custom mechanism to always point DOCKER_HOST at the current primary manager. That way, you never lose contact with your Docker Swarm.
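One possible sketch of such a mechanism, assuming the three manager addresses used throughout this example and relying on the Primary: line that docker info prints (shown above):

# Ask each manager who the primary is, then point DOCKER_HOST at it.
for m in 192.168.42.200 192.168.42.201 192.168.42.202; do
  primary=$(docker -H tcp://$m:4000 info 2>/dev/null | awk '/Primary:/ {print $2; exit}')
  if [ -n "$primary" ]; then
    export DOCKER_HOST=$primary:4000
    break
  fi
done
echo "DOCKER_HOST now points to $DOCKER_HOST"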
