Docker Template and Usage Instructions

2018-01-30  加菲老猫

Docker Template

Tags (space-separated): docker template

To quickly set up a single-VM environment for demonstrating many of Docker's features, you can use the following template to create it on Azure.

Template content:

#cloud-config
package_upgrade: true
write_files:
  - path: /etc/systemd/system/docker.service.d/docker.conf
    content: |
      [Service]
      ExecStart=
      ExecStart=/usr/bin/dockerd
  - path: /etc/docker/daemon.json
    content: |
      {
        "hosts": ["fd://","tcp://0.0.0.0:2375"]
      }
runcmd:
  
  - curl -sSL https://get.docker.com/ | sh
  - usermod -aG docker chengzh
  - apt-get -y install docker-compose 

  - curl -L git.io/scope -o /usr/local/bin/scope
  - chmod a+x /usr/local/bin/scope
  - scope launch 10.0.0.4 10.0.0.5 10.0.0.6 
  - docker run -d -p 9000:9000 --restart always -v /var/run/docker.sock:/var/run/docker.sock -v /opt/portainer:/data --name=portainer portainer/portainer -H unix:///var/run/docker.sock
  - docker run --volume=/:/rootfs:ro --volume=/var/run:/var/run:rw --volume=/sys:/sys:ro --volume=/var/lib/docker/:/var/lib/docker:ro --publish=8090:8080 --detach=true --restart always --name=cadvisor google/cadvisor:latest
  - docker run -d -p 9100:9100 -v "/proc:/host/proc" -v "/sys:/host/sys" -v "/:/rootfs" --net=host --restart always --name=promnx prom/node-exporter --path.procfs /host/proc --path.sysfs /host/sys --collector.filesystem.ignored-mount-points "^/(sys|proc|dev|host|etc)($|/)"
  - docker run -d -i -p 3000:3000 -e "GF_SERVER_ROOT_URL=http://grafana.server.name"  -e "GF_SECURITY_ADMIN_PASSWORD=admin"  --net=host --restart always --name=grafana grafana/grafana
  
  - git clone https://github.com/microservices-demo/microservices-demo
  - cd microservices-demo
  - docker-compose -f deploy/docker-compose/docker-compose.yml up -d 

  - sysctl -w vm.max_map_count=262144
  - docker run -d --name busybox busybox sh -c 'while true; do echo "This is a log message from container busybox!"; sleep 10; done;'
  - curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-5.4.0-amd64.deb
  - dpkg -i filebeat-5.4.0-amd64.deb
  - docker run -d -p 5601:5601 -p 9200:9200 -p 5044:5044 -it --name elk sebp/elk 

  - echo "deb [arch=amd64] https://packages.microsoft.com/repos/azure-cli/ wheezy main" | tee /etc/apt/sources.list.d/azure-cli.list
  - apt-key adv --keyserver packages.microsoft.com --recv-keys 52E16F86FEE04B979B07E28DB02C46DF417A0893
  - apt-get -y install apt-transport-https
  - apt-get update && apt-get -y install azure-cli
  - snap install kubectl --classic
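A typo in the daemon.json fragment above would leave the Docker daemon unable to start, so it is worth validating the JSON before baking it into the template. A minimal sketch (assumes python3 is available; /tmp/daemon.json is just a scratch path):

```shell
# Write the daemon.json content from the template to a scratch file.
cat > /tmp/daemon.json <<'EOF'
{
  "hosts": ["fd://","tcp://0.0.0.0:2375"]
}
EOF

# python3 -m json.tool exits non-zero on malformed JSON.
python3 -m json.tool /tmp/daemon.json > /dev/null && echo "daemon.json: valid JSON"
```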

Notes on the template

Template usage steps
The following steps are run in an Ubuntu environment with the Azure CLI installed.

  1. Run sensible-editor sockshop.txt, paste the template code above into it, and save it as the answer file.
  2. Run az group create --name socklab --location westus2 to create the Azure resource group.
  3. Create the VM in that resource group, passing the answer file as cloud-init data (az vm create ... --custom-data sockshop.txt). Once the VM has been created, open its ports with the following commands:

    az vm open-port --resource-group socklab --name mysockVM01 --port 4040 --priority 1001
    az vm open-port --resource-group socklab --name mysockVM01 --port 9000 --priority 1002
    az vm open-port --resource-group socklab --name mysockVM01 --port 80 --priority 1003
    az vm open-port --resource-group socklab --name mysockVM01 --port 443 --priority 1004
    az vm open-port --resource-group socklab --name mysockVM01 --port 8080 --priority 1005
    az vm open-port --resource-group socklab --name mysockVM01 --port 8000 --priority 1006
    az vm open-port --resource-group socklab --name mysockVM01 --port 3000 --priority 1007
    az vm open-port --resource-group socklab --name mysockVM01 --port 5601 --priority 1008
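The eight az vm open-port calls above differ only in port and priority, so they can also be generated with a small loop. A sketch (it writes the commands to a review file, open-ports.sh — a hypothetical name — rather than executing them directly, since az requires a logged-in session):

```shell
# Generate the az vm open-port commands; priorities count up from 1001
# in the same order as the port list above.
priority=1001
for port in 4040 9000 80 443 8080 8000 3000 5601; do
  echo "az vm open-port --resource-group socklab --name mysockVM01 --port $port --priority $priority"
  priority=$((priority + 1))
done > open-ports.sh

echo "generated $(wc -l < open-ports.sh) commands"
```

After reviewing open-ports.sh, run it with sh open-ports.sh from a shell where az login has already been done.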
    

Manual steps
Before running Prometheus you need to create prometheus.yml; a template for that file follows.

# my global config
global:
  scrape_interval:     15s # Set the scrape interval to every 15 seconds. Default is every 1 minute.
  evaluation_interval: 15s # Evaluate rules every 15 seconds. The default is every 1 minute.
  # scrape_timeout is set to the global default (10s).

# Alertmanager configuration
alerting:
  alertmanagers:
  - static_configs:
    - targets:
      # - alertmanager:9093

# Load rules once and periodically evaluate them according to the global 'evaluation_interval'.
rule_files:
  # - "first_rules.yml"
  # - "second_rules.yml"

# A scrape configuration containing exactly one endpoint to scrape:
# Here it's Prometheus itself.
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'

    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.

    static_configs:
      - targets: ['localhost:9090','localhost:8090','localhost:9100']

Note: the last section is the key part. It shows that besides scraping itself, Prometheus also pulls metrics from cAdvisor (localhost:8090) and node-exporter (localhost:9100). As you can see, to monitor additional machines you would add their endpoints here.
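For example, to also scrape the node-exporters on the other two VMs referenced by the scope launch line in the template (assuming they expose node-exporter on port 9100 at those 10.0.0.x addresses), the targets list could be extended like this:

```yaml
scrape_configs:
  - job_name: 'prometheus'
    static_configs:
      - targets: ['localhost:9090','localhost:8090','localhost:9100',
                  '10.0.0.5:9100','10.0.0.6:9100']
```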

Create the configuration file with sensible-editor prometheus.yml.

Run the following command to create the Prometheus container (note that with --net=host the -p 9090:9090 mapping is ignored and Prometheus listens on port 9090 of the host directly):

    docker run -d -p 9090:9090 \
      -v /home/chengzh/prometheus.yml:/etc/prometheus/prometheus.yml \
      --restart always \
      --name prometheus \
      --net=host \
      prom/prometheus

Manual ELK initialization
Assuming the ELK part of the script above ran correctly, first edit the configuration file with sensible-editor /etc/filebeat/filebeat.yml and replace the relevant path entries with:

- /var/lib/docker/containers/*/*.log 
- /var/log/syslog
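For reference, the relevant portion of a Filebeat 5.x filebeat.yml with those paths wired to the Logstash port published by the sebp/elk container (5044) looks roughly like this; treat it as a sketch of the two sections to edit, not the full shipped config:

```yaml
filebeat.prospectors:
- input_type: log
  paths:
    - /var/lib/docker/containers/*/*.log
    - /var/log/syslog

output.logstash:
  hosts: ["localhost:5044"]
```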

Then start the filebeat service with the following command:

systemctl start filebeat.service

Recovering ELK after a reboot
Apart from Weave Scope, all the other containers restart automatically, but ELK needs manual intervention. After the Docker host reboots, run the following commands as root:

sysctl -w vm.max_map_count=262144
docker start elk
systemctl start filebeat.service
docker start busybox
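Since these four commands are needed after every reboot, it may be worth saving them as a small root-owned script; a sketch (the name elk-recover.sh is just an example):

```shell
# Bundle the post-reboot recovery steps into a reusable script.
cat > elk-recover.sh <<'EOF'
#!/bin/sh
# Elasticsearch inside the elk container needs this kernel setting
sysctl -w vm.max_map_count=262144
docker start elk
systemctl start filebeat.service
docker start busybox
EOF
chmod +x elk-recover.sh
echo "elk-recover.sh created"
```

Run it as sudo ./elk-recover.sh after each reboot; alternatively, persisting vm.max_map_count=262144 in /etc/sysctl.conf would make the first step unnecessary.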