Ansible: Implementing Active/Standby High Availability

2018-09-10  by Net夜风

The core work of ops:

OS installation (physical and virtual machines) --> application package deployment (install, configure, start the service) --> batch operations --> business application deployment (install, run, and release) --> monitoring

Rolling maintenance: take a batch of hosts out of rotation on the scheduler (maintenance) --> stop the service --> deploy the new application version --> start the service --> put the batch back into rotation on the scheduler;

A lightweight ops tool: Ansible

Ansible features
Ansible architecture
(figure: Ansible architecture diagram)
Installing and using Ansible
  1. ansible ships in the EPEL repository, so configure the EPEL yum repo before installing

    [root@localhost ~]# yum info ansible
    Loaded plugins: fastestmirror
    Loading mirror speeds from cached hostfile
     * extras: ftp.sjtu.edu.cn
    Available Packages
    Name        : ansible
    Arch        : noarch
    Version     : 2.6.3
    Release     : 1.el7
    Size        : 10 M
    Repo        : epel
    Summary     : SSH-based configuration management, deployment, and task execution system
    URL         : http://ansible.com
    License     : GPLv3+
    Description : Ansible is a radically simple model-driven configuration management,
              : multi-node deployment, and remote task execution system. Ansible works
              : over SSH and does not require any software or daemons to be installed
              : on remote nodes. Extension modules can be written in any language and
              : are transferred to managed machines automatically.
    
    [root@localhost ~]# rpm -ql ansible |less
    /etc/ansible/ansible.cfg   # Ansible main configuration file
    /etc/ansible/hosts         # inventory (host list) file
    /etc/ansible/roles         # roles directory
    
  2. Using ansible:

  1. ansible syntax: ansible <host-pattern> [options]

  2. common invocation form: ansible HOST-PATTERN -m MOD_NAME -a MOD_ARGS -f FORKS -C -u USERNAME -c CONNECTION
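A few concrete ad-hoc invocations in that form (a sketch; the websrvs group is the inventory group defined below, and the tree package is just a stand-in):

```shell
# ping every host in the inventory
ansible all -m ping

# run a shell command on the websrvs group with 5 parallel forks
ansible websrvs -m shell -a 'uptime' -f 5

# dry-run (-C) a package install as root over the default ssh connection
ansible websrvs -m yum -a 'name=tree state=present' -u root -C -c ssh
```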

  3. Set up key-based SSH connections to the two managed hosts node1 and node2

    [root@localhost ~]# ssh-keygen -t rsa -P ""   # generate a key pair
    [root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.43.143   
        # push the public key to node1
    [root@localhost ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.43.129   
        # push the public key to node2
    
    [root@localhost ~]# vim /etc/ansible/hosts   # edit the inventory and add the hosts
     [websrvs]
     192.168.43.129
     192.168.43.143
     
     [dbsrvs]
     192.168.43.129
     [root@localhost ~]# ansible all -m ping -C  # test node1 and node2 with the ping module; -C: check mode (dry run)
     192.168.43.129 | SUCCESS => {
         "changed": false, 
         "ping": "pong"
     }
     192.168.43.143 | SUCCESS => {
         "changed": false, 
         "ping": "pong"
     }
    
  4. Common ansible modules:

[root@localhost ~]# ansible all -m copy -a "src=/etc/fstab dest=/tmp/fstab.ansible mode=600"
192.168.43.129 | SUCCESS => {
    "changed": true, 
    "checksum": "413996796bccca42104b6769612d2b57d8085210", 
    "dest": "/tmp/fstab.ansible", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "95b7fa684cc5066d06f284ce029cddb5", 
    "mode": "0600", 
    "owner": "root", 
    "size": 541, 
    "src": "/root/.ansible/tmp/ansible-tmp-1536476031.13-129897102565078/source", 
    "state": "file", 
    "uid": 0
}
192.168.43.143 | SUCCESS => {
    "changed": true, 
    "checksum": "413996796bccca42104b6769612d2b57d8085210", 
    "dest": "/tmp/fstab.ansible", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "95b7fa684cc5066d06f284ce029cddb5", 
    "mode": "0600", 
    "owner": "root", 
    "size": 541, 
    "src": "/root/.ansible/tmp/ansible-tmp-1536476031.16-86269165700494/source", 
    "state": "file", 
    "uid": 0
}

node1:
[root@node1 ~]# ll -d /tmp/fstab.ansible 
-rw------- 1 root root 541 Sep  9 02:53 /tmp/fstab.ansible

Note: the service module has two related options here: enabled sets whether the service starts at boot, and runlevel sets which runlevel that boot-time start applies to;
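For instance, starting a service and enabling it at boot in one ad-hoc call (a sketch; assumes httpd is already installed on the websrvs hosts):

```shell
ansible websrvs -m service -a 'name=httpd state=started enabled=yes'
```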

  1. Core elements of a playbook:
The main purpose of a playbook:

It ties multiple related tasks together in one YAML file: tasks, variables, templates, and handlers are all declared there, and the whole set is then executed in a single batch run;

For example:
    A demo of the basic playbook components hosts and tasks:
    [root@localhost ~]# mkdir -pv playbook
    [root@localhost ~]# cd playbook/
    [root@localhost playbook]# vim first.yaml
             - hosts: all
               remote_user: root
               tasks:
               - name: install redis
                 yum: name=redis state=latest
               - name: start redis
                 service: name=redis state=started

    [root@localhost playbook]# ansible-playbook --check first.yaml
    
    PLAY [all] *************************************************************************************
    
    TASK [Gathering Facts] # fact gathering shows ok whenever it succeeds *************************************************************************
    ok: [192.168.43.143]
    ok: [192.168.43.129]
    
    TASK [install redis]  # the first task defined in the playbook ***************************************************************************
    changed: [192.168.43.129]
    changed: [192.168.43.143]
    
    TASK [start redis] # the second task defined in the playbook *****************************************************************************
    changed: [192.168.43.129]
    changed: [192.168.43.143]
    
    PLAY RECAP  # the summary report *************************************************************************************
    192.168.43.129             : ok=3    changed=2    unreachable=0    failed=0   
    192.168.43.143             : ok=3    changed=2    unreachable=0    failed=0 
    
    [root@localhost playbook]# ansible-playbook --list-hosts first.yaml
    # list the hosts this playbook targets
    playbook: first.yaml
    
      play #1 (all): all    TAGS: []
        pattern: [u'all']
        hosts (2):
          192.168.43.143
          192.168.43.129
  
  [root@localhost playbook]# ansible-playbook -C first.yaml  # dry run first
  [root@localhost playbook]# ansible-playbook  first.yaml   # then run for real
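The maintenance workflow sketched at the top of this article (take hosts out of rotation, upgrade, put them back) maps naturally onto a play with serial, which runs the play through a few hosts at a time. A sketch, using httpd as a stand-in application:

```yaml
- hosts: websrvs
  remote_user: root
  serial: 1               # upgrade one host at a time
  tasks:
  - name: stop the service
    service: name=httpd state=stopped
  - name: upgrade the application package
    yum: name=httpd state=latest
  - name: start the service again
    service: name=httpd state=started
```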
Example 1: install httpd, install its configuration file, and start the httpd service
[root@localhost playbook]# mkdir working
[root@localhost playbook]# cd working/
[root@localhost working]# cp /etc/httpd/conf/httpd.conf ./
[root@localhost working]# vim httpd.conf
    Listen 8080
[root@localhost playbook]# cd ..
[root@localhost playbook]# vim web.yaml
        - hosts: websrvs
          remote_user: root
          tasks:
          - name: install httpd package
            yum: name=httpd state=present
          - name: install configure file
            copy: src=working/httpd.conf dest=/etc/httpd/conf/
          - name: start httpd service
            service: name=httpd state=started
          - name: execute ss command
            shell: ss -tnl | grep 8080
[root@localhost playbook]# ansible-playbook --check web.yaml  # dry run
    PLAY [websrvs] *********************************************************************************

    TASK [Gathering Facts] *************************************************************************
    ok: [192.168.43.143]
    ok: [192.168.43.129]
    
    TASK [install httpd package] *******************************************************************
    changed: [192.168.43.143]
    changed: [192.168.43.129]
    
    TASK [install configure file] ******************************************************************
    changed: [192.168.43.129]
    changed: [192.168.43.143]
    
    TASK [start httpd service] *********************************************************************
    changed: [192.168.43.129]
    changed: [192.168.43.143]
    
    TASK [execute ss command] **********************************************************************
    skipping: [192.168.43.129]
    skipping: [192.168.43.143]
    
    PLAY RECAP *************************************************************************************
    192.168.43.129             : ok=4    changed=3    unreachable=0    failed=0   
    192.168.43.143             : ok=4    changed=3    unreachable=0    failed=0 
[root@localhost playbook]# ansible-playbook web.yaml  # run for real
Note: a query such as ss -tnl | grep 8080 run from ansible-playbook does not display its results, so commands whose only purpose is to show output are generally not worth running inside a playbook;
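If you do need to see such a command's output, one common workaround (not in the original) is to register the result and print it with the debug module:

```yaml
- hosts: websrvs
  remote_user: root
  tasks:
  - name: check the listening port
    shell: ss -tnl | grep 8080
    register: ss_out          # capture stdout/stderr/rc of the command
    ignore_errors: yes        # grep exits non-zero when nothing matches
  - name: show what ss reported
    debug: var=ss_out.stdout_lines
```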

...

Example 2: handlers, i.e. triggered execution;
If you change the listen port to 808 and run the playbook again, the change does not take effect, because the service is already running and only a restart would pick it up; this is exactly what handlers are for
[root@localhost playbook]# vim web-2.yaml 
        - hosts: websrvs
          remote_user: root
          tasks:
          - name: install httpd package
            yum: name=httpd state=present
          - name: install configure file
            copy: src=working/httpd.conf dest=/etc/httpd/conf/
            notify: restart httpd
          - name: start httpd service
            service: name=httpd state=started
          handlers:
          - name: restart httpd
            service: name=httpd state=restarted
        [root@localhost playbook]# vim working/httpd.conf 
                Listen 808
        [root@localhost playbook]# ansible-playbook --check web-2.yaml
        [root@localhost playbook]# ansible-playbook web-2.yaml
        [root@localhost playbook]# ansible websrvs -m shell -a "ss -tnl | grep 808"
        192.168.1.113 | SUCCESS | rc=0 >>
        LISTEN     0      128         :::808                     :::*                  
        
        192.168.1.114 | SUCCESS | rc=0 >>
        LISTEN     0      128         :::808                     :::*   

...

 Example 3: following on from the previous example, if only the configuration file changed it is wasteful to start over from step one and reinstall the package. Tags solve this: attach tags to tasks; with no tag specified all tasks run, and with a tag specified only the tagged tasks run;
 [root@localhost playbook]# vim web-3.yaml   
        - hosts: websrvs
          remote_user: root
          tasks:
          - name: install httpd package
            yum: name=httpd state=present
            tags: insthttpd
          - name: install configure file
            copy: src=working/httpd.conf dest=/etc/httpd/conf/
            tags: instconfig
            notify: restart httpd
          - name: start httpd service
            service: name=httpd state=started
            tags: starthttpd
          handlers:
          - name: restart httpd
            service: name=httpd state=restarted
       [root@localhost playbook]# vim working/httpd.conf
               Listen 80
      [root@localhost playbook]# ansible-playbook -t insthttpd --check web-3.yaml 
      [root@localhost playbook]# ansible-playbook -t instconfig,insthttpd --check web-3.yaml  # invoke multiple tags (they must match the tags defined in the playbook)

8. Variables
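Variables can be defined per play with vars and referenced with {{ }}; a minimal sketch (the pkg_name variable and the memcached package are purely illustrative):

```yaml
- hosts: websrvs
  remote_user: root
  vars:
    pkg_name: memcached        # hypothetical example variable
  tasks:
  - name: install the package named by a variable
    yum: name={{ pkg_name }} state=present
```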


  1. Other playbook components:
  1. The template module: generates a file from a template and copies it to a remote host;
    *src= path to the local jinja2 template file
    *dest= path on the remote host
    owner= owner
    group= group
    mode= permissions

Template demo: use ansible to install nginx on two hosts and supply its configuration file, where the worker_processes value must match each host's CPU core count. Ship the configuration file as a template: in place of the worker_processes value, put a jinja2 variable. The template module substitutes in the value reported by ansible facts, renders the file on the control node, and copies the result to the target host; that is what templates are for;

Example:
[root@localhost files]# ansible all -m yum --check -a "name=nginx state=latest"  # dry-run the nginx install
[root@localhost files]# ansible all -m yum -a "name=nginx state=latest"   # install nginx
[root@localhost ~]# mkdir files
[root@localhost ~]# cp /etc/nginx/nginx.conf /root/files/nginx.conf.j2
[root@localhost files]# vim nginx.conf.j2
Change:
worker_processes {{ ansible_processor_vcpus }};

[root@localhost files]# vim nginx.yaml
        - hosts: websrvs
          remote_user: root
          tasks:
          - name: Install nginx
            yum: name=nginx state=present
          - name: Install config file
            template: src=/root/files/nginx.conf.j2 dest=/etc/nginx/nginx.conf
            notify: restart nginx
          - name: start service
            service: name=nginx state=started
          handlers:
          - name: restart nginx
            service: name=nginx state=restarted

[root@localhost files]# ansible-playbook  nginx.yaml --check
[root@localhost files]# ansible-playbook  nginx.yaml 

Host variables can give different hosts different listen ports:
[root@localhost files]# vim /etc/ansible/hosts
[websrvs]
192.168.255.3 http_port=80    # define host variables
192.168.255.4 http_port=8080
[root@localhost files]# vim nginx.conf.j2
Change:
 listen       {{ http_port }}
 [root@localhost files]# ansible-playbook nginx.yaml --check
 [root@localhost files]# ansible-playbook nginx.yaml 
 
Conditional tests: a when example
    ]# scp root@192.168.1.105:/etc/nginx/nginx.conf files/nginx.conf.c6.j2  # fetch an nginx config file from a CentOS 6 host
    ]# vim files/nginx.conf.c6.j2
    worker_processes  {{ ansible_processor_vcpus }};
    
    ]# vim nginx.yaml
    - hosts: all
      remote_user: root
      tasks:
      - name: install nginx
        yum: name=nginx state=present
      - name: install conf file to c7
        template: src=files/nginx.conf.j2 dest=/etc/nginx/nginx.conf
        when: ansible_distribution_major_version == "7"
        notify: restart nginx
        tags: instconf
      - name: install conf file to c6
        template: src=files/nginx.conf.c6.j2 dest=/etc/nginx/nginx.conf
        when: ansible_distribution_major_version == "6"
        notify: restart nginx
        tags: instconf
      - name: start nginx service
        service: name=nginx state=started
      handlers: 
      - name: restart nginx
        service: name=nginx state=restarted

    Example:
    Install the nginx, memcached, and php-fpm packages in one task, using a loop
    ]# vim iter.yaml
    - hosts: all
      remote_user: root
      tasks:
      - name: install some packages
        yum: name={{ item }} state=present
        with_items:
        - nginx
        - memcached
        - php-fpm
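Since Ansible 2.5 the same iteration can also be written with the loop keyword (with_items still works in the 2.6.3 version installed above):

```yaml
- name: install some packages
  yum: name={{ item }} state=present
  loop:
  - nginx
  - memcached
  - php-fpm
```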

  1. Roles
    Suppose three groups of servers (web, db, ha) all need time synchronization: writing three separate yaml files, one per group, means repeating the time-sync tasks in each. Or suppose group one is both web and db, group two is db only, and group three is web only; now you need a db playbook, a web playbook, and a combined db+web one, and if memcached is then added to some db hosts and some web hosts, the combinations multiply further. In scenarios like these the code has to be recombined flexibly across host roles, and the earlier approach of hard-coding everything into one yaml file clearly no longer fits;
    Instead, make each functional configuration independent and self-contained, so that whoever needs it simply calls it. These reusable units are divided up by function: a server that has a given function installed is said to play that role. So define one role for the db configuration, one for web, one for memcached, and so on; prepare whatever is needed ahead of time under a dedicated directory;
    When a host then needs configuring, write a short yaml playbook that states which hosts it applies to, which remote_user to run as, and which roles to invoke;
    This is the role mechanism. A role is self-contained: all the code and files a server needs in order to take on a given function are collected in one specific location, and that unit is called a role;
    The benefit is that roles are decoupled from hosts: whoever needs one calls it;
    From a playbook's point of view, a role is the same collection of components you would otherwise define inside the playbook itself, except that instead of living in one playbook file it becomes a separate directory, and the role's name is the directory name;
    Every role follows a fixed layout, and no role may reference resources outside its own directory, so the directory can be copied to any host and still work; that is what self-contained means. Static files go in the files subdirectory; all templates in the templates subdirectory; all tasks in tasks; all handlers in handlers; all variables in vars; plus a supplementary meta subdirectory;
    Not all subdirectories are mandatory; create only the ones you actually use. That is the role directory layout;
    Roles live under /etc/ansible/roles by default (the path can be changed in ansible.cfg);
    each role is one subdirectory there;
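Put together, a hypothetical role named web would be laid out and invoked roughly like this (a sketch; the role name is illustrative):

```yaml
# /etc/ansible/roles/web/     <- role name = directory name
#   files/            static files referenced by copy
#   templates/        jinja2 templates referenced by template
#   tasks/main.yml    the role's task list entry point
#   handlers/main.yml
#   vars/main.yml
#   meta/main.yml
#
# and a playbook that applies the role to a host group:
- hosts: websrvs
  remote_user: root
  roles:
  - web
```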

Implementing Active/Standby High Availability with Ansible

(figure: ansible active/standby HA topology)
  1. Install ansible

    [root@localhost ~]# yum -y install ansible keepalived

  2. Edit the inventory

      [root@localhost ~]# vim /etc/ansible/hosts
        [websrvs]
        192.168.1.115
        192.168.1.116 
        [hasrvs]
        192.168.1.10
        192.168.1.11
    
  3. Create the fixed role directory structure

    [root@localhost ~]# mkdir -pv /etc/ansible/roles/{keepalived,nginx}/{files,tasks,templates,handlers,vars,default,meta}
    [root@localhost ~]# tree /etc/ansible/roles/
       /etc/ansible/roles/
     ├── keepalived
     │   ├── default
     │   ├── files
     │   ├── handlers    
     │   ├── meta
     │   ├── tasks
     │   ├── templates
     │   └── vars
     └── nginx
         ├── default
         ├── files
         ├── handlers
         ├── meta
         ├── tasks
         │   └── main.yml
         ├── templates
         │   └── index.html.j2
         └── vars
    

4. Set up key-based SSH connections to node1, node2, r1, and r2

[root@localhost ~]# ssh-keygen -t rsa -P ""
[root@localhost ~]#  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.10
[root@localhost ~]#  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.11
[root@localhost ~]#  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.115
[root@localhost ~]#  ssh-copy-id -i ~/.ssh/id_rsa.pub root@192.168.1.116

5. Edit the roles

   [root@localhost ~]#  vim /etc/ansible/roles/keepalived/tasks/main.yml
# with the following content
        - name: install keepalived
          yum: name=keepalived state=latest
          when: ansible_os_family == "RedHat"
        - name: install conf
          template: src=kl.conf.j2 dest=/etc/keepalived/keepalived.conf
          tags: conf
          notify: restart keepalived
        - name: start keepalived
          service: name=keepalived state=started enabled=yes

    [root@localhost ~]# vim /etc/ansible/roles/keepalived/handlers/main.yml 
        - name: restart keepalived
          service: name=keepalived state=restarted
  6. Edit the keepalived configuration template and define its variables

     [root@localhost ~]# vim /etc/ansible/roles/keepalived/templates/kl.conf.j2 
     ! Configuration File for keepalived
     global_defs {
                notification_email {
                 root@localhost
                }
                 
            notification_email_from keepalived@localhost
            smtp_server 127.0.0.1
            smtp_connect_timeout 30
            router_id {{ ansible_fqdn }}
            vrrp_mcast_group4 224.1.105.33
         }
         
         vrrp_instance VI_1 {
             state {{ kl_status }}
             interface ens33
             virtual_router_id 33
             priority {{ kl_priority }}
             advert_int 1
             authentication {
                 auth_type PASS
                 auth_pass XXXX1111
             }
             virtual_ipaddress {
                 192.168.1.99 dev ens33 label ens33:0
             }
              notify_master "/etc/keepalived/notify.sh master"
              notify_backup "/etc/keepalived/notify.sh backup"
              notify_fault "/etc/keepalived/notify.sh fault"
         }
         virtual_server 192.168.1.99 80 {
             delay_loop 1
             lb_algo wrr
             lb_kind DR
             protocol TCP
             sorry_server 127.0.0.1 80
         
             real_server 192.168.1.115 80 {
                 weight 1
                 HTTP_GET {
                     url {
                         path /index.html
                         status_code 200
                         }
                     nb_get_retry 3
                     delay_before_retry 2
                     connect_timeout 3
                     }
             }
             real_server 192.168.1.116 80 {
                 weight 1
                 HTTP_GET {
                     url {
                         path /index.html
                         status_code 200
                         }
                     nb_get_retry 3
                     delay_before_retry 2
                     connect_timeout 3
                     }
             }
                 
     }
    
     [root@localhost files]# vim /etc/ansible/hosts 
     [hasrvs]
     192.168.1.10 kl_status=MASTER kl_priority=100
     192.168.1.11 kl_status=BACKUP kl_priority=96
    
  7. Configure the nginx role

    [root@localhost files]# vim /etc/ansible/roles/nginx/tasks/main.yml
     - name: Install nginx
       yum: name=nginx state=latest
     - name: Install conf
       template: src=index.html.j2 dest=/usr/share/nginx/html/index.html
       notify: reload nginx
     - name: start script
       script: /root/files/setkl.sh start
       notify: reload nginx
     - name: start nginx
       service: name=nginx state=started
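The setkl.sh script invoked above is not shown in the original; on an LVS-DR real server such a script typically binds the VIP to the loopback interface and tunes the arp kernel parameters, roughly as follows (an assumed sketch, not the author's actual script):

```shell
#!/bin/bash
# hypothetical sketch of a DR real-server setup script (run as root)
vip=192.168.1.99

case $1 in
start)
    # do not answer/announce ARP for the VIP on real servers
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    # bind the VIP to lo so the host accepts traffic addressed to it
    ifconfig lo:0 $vip netmask 255.255.255.255 broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    ;;
esac
```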
     
       [root@localhost files]# vim /etc/ansible/roles/nginx/templates/index.html.j2  
         <h1> {{ ansible_fqdn }} </h1>
         
       [root@localhost files]# vim /etc/ansible/roles/nginx/handlers/main.yml
         - name: reload nginx
           service: name=nginx state=reloaded
    

8. Write the playbooks for keepalived and nginx

[root@localhost ~]# cd files
[root@localhost files]# vim kl.yml
    - hosts: hasrvs
      remote_user: root
      roles:
      - keepalived
[root@localhost files]# vim nginx.yml
    - hosts: websrvs
      remote_user: root
      roles:
      - nginx
  9. Dry-run and execute

    [root@localhost files]# ansible-playbook --check kl.yml
    [root@localhost files]# ansible-playbook kl.yml
    [root@localhost files]# ansible-playbook --check nginx.yml
    [root@localhost files]# ansible-playbook nginx.yml
    
  10. Access test

     [root@localhost files]# curl http://192.168.1.99
     <h1> rs1.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs2.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs1.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs2.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs1.ilinux.com </h1>
    
     node1: the ipvs rules have been generated
     [root@node1 ~]# ipvsadm -ln
     IP Virtual Server version 1.2.1 (size=4096)
     Prot LocalAddress:Port Scheduler Flags
       -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
     TCP  192.168.1.99:80 wrr
       -> 192.168.1.115:80             Route   1      0          3         
       -> 192.168.1.116:80             Route   1      0          2    
    [root@node1 ~]# ifconfig
    ens33: ...   
    ens33:0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
         inet 192.168.1.99  netmask 255.255.255.255  broadcast 0.0.0.0
         ether 00:0c:29:6d:e2:f7  txqueuelen 1000  (Ethernet)
         
    
    [root@node1 ~]# systemctl stop keepalived.service   # simulate a failure; the VIP should move to node2
    
    node2:
    [root@node2 ~]# ipvsadm -ln
    IP Virtual Server version 1.2.1 (size=4096)
    Prot LocalAddress:Port Scheduler Flags
      -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
    TCP  192.168.1.99:80 wrr
      -> 192.168.1.115:80             Route   1      0          0         
      -> 192.168.1.116:80             Route   1      0          0      
    
    # access from the client again:
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs2.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs1.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs2.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs1.ilinux.com </h1>
     [root@localhost files]# curl http://192.168.1.99
     <h1> rs2.ilinux.com </h1>
    