Load Testing Docker & Kubernetes Microservices

k8s Cluster Performance Testing

2019-10-24  hiph

Performance testing against an ordinary cluster runs into tight hardware resources and long startup and upgrade cycles across all nodes; for large-scale cluster tests, a real cluster is "unacceptable" in terms of both resources and time. The Kubernetes project therefore provides the kubemark tool for cluster performance testing, which lets realistic performance tests run on top of only a few real physical nodes.

kubemark architecture

kubemark design document: kubemark

kubemark consists of two parts:

  1. A real kubemark master (the control plane of the kubemark cluster) that the hollow nodes register with.
  2. Hollow nodes, which run as pods in an external cluster and simulate real nodes toward that master.

A normal Kubernetes node talks to the master through two kinds of components: kubelet and kube-proxy. In a Hollow node:

  1. HollowKubelet mocks the kubelet: it registers the node and reports status, but never actually starts containers.
  2. HollowProxy mocks kube-proxy, swapping the real proxier for a no-op one, so no real iptables rules are programmed.
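Both components are the same kubemark binary, chosen via the --morph flag. A minimal sketch of the two invocations (the complete argument lists appear in hollow-node.yaml below):

/kubemark --morph=kubelet --name=$(NODE_NAME) --kubeconfig=/kubeconfig/kubelet.kubeconfig
/kubemark --morph=proxy --name=$(NODE_NAME) --use-real-proxier=false --kubeconfig=/kubeconfig/kubeproxy.kubeconfig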

Setting up kubemark

Detailed setup walkthrough: kubemark guide

Two setup approaches are covered here:

  1. Quick setup from an existing image
  2. Building the kubernetes source and setting up the kubemark cluster

The tests use community release 1.14.0.

Quick setup from an existing image

This approach suits the case where a kubemark image already exists and you want to deploy the kubemark cluster quickly.

  1. Deploy two real clusters: the kubemark cluster (master node only) and the external cluster
  2. Point your local kubeconfig at the external cluster
  3. Create the resources:
# create the kubemark namespace
kubectl create ns kubemark

# create the node-configmap
kubectl create configmap node-configmap -n kubemark --from-literal=content.type="test-cluster"

# create the secret from the kubemark cluster's kubeconfig
kubectl create secret generic kubeconfig --type=Opaque --namespace=kubemark --from-file=kubelet.kubeconfig={kubemark_kubeconfig_file_path} --from-file=kubeproxy.kubeconfig={kubemark_kubeconfig_file_path}
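
A quick sanity check that both objects exist in the kubemark namespace:

kubectl get configmap node-configmap -n kubemark
kubectl get secret kubeconfig -n kubemark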

Run the hollow pods:
kubectl apply -f hollow-node.yaml -n kubemark
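
You can watch the hollow-node pods come up with:

kubectl get pods -n kubemark -w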

An example hollow-node.yaml:

apiVersion: v1
kind: ReplicationController
metadata:
  name: hollow-node
  labels:
    name: hollow-node
spec:
  replicas: 3
  selector:
    name: hollow-node
  template:
    metadata:
      labels:
        name: hollow-node
    spec:
      initContainers:
      - name: init-inotify-limit
        image: busybox
        command: ['sysctl', '-w', 'fs.inotify.max_user_instances=1000']
        securityContext:
          privileged: true
      volumes:
      - name: kubeconfig-volume
        secret:
          secretName: kubeconfig
      - name: logs-volume
        hostPath:
          path: /var/log
      containers:
      - name: hollow-kubelet
        image: zhang598/kubemark:1.14.0
        ports:
        - containerPort: 4194
        - containerPort: 10250
        - containerPort: 10255
        env:
        - name: CONTENT_TYPE
          valueFrom:
            configMapKeyRef:
              name: node-configmap
              key: content.type
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command:
        - /bin/sh
        - -c
        - /kubemark --morph=kubelet --name=$(NODE_NAME) --kubeconfig=/kubeconfig/kubelet.kubeconfig $(CONTENT_TYPE) --alsologtostderr 1>>/var/log/kubelet-$(NODE_NAME).log 2>&1
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        resources:
          requests:
            cpu: 40m
            memory: 100M
        securityContext:
          privileged: true
      - name: hollow-proxy
        image: zhang598/kubemark:1.14.0
        env:
        - name: CONTENT_TYPE
          valueFrom:
            configMapKeyRef:
              name: node-configmap
              key: content.type
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        command:
        - /bin/sh
        - -c
        - /kubemark --morph=proxy --name=$(NODE_NAME)  --use-real-proxier=false  --kubeconfig=/kubeconfig/kubeproxy.kubeconfig $(CONTENT_TYPE) --alsologtostderr 1>>/var/log/kubeproxy-$(NODE_NAME).log 2>&1
        volumeMounts:
        - name: kubeconfig-volume
          mountPath: /kubeconfig
          readOnly: true
        - name: logs-volume
          mountPath: /var/log
        resources:
          requests:
            cpu: 20m
            memory: 102450Ki
  4. The corresponding hollow nodes should now show up as registered in the kubemark cluster.
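
For example, pointing kubectl at the kubemark cluster's kubeconfig (the same file the secret was created from):

kubectl --kubeconfig={kubemark_kubeconfig_file_path} get nodes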

Setting up from the kubernetes source

  1. Build kubernetes on one node of the external cluster (two ways):
mkdir -p $GOPATH/src/k8s.io
cd $GOPATH/src/k8s.io
git clone https://github.com/kubernetes/kubernetes
cd kubernetes
#git branch -a
# pick a suitable release branch
#git checkout -b release-1.14  remotes/origin/release-1.14
make

If you are only building kubemark, make kubemark kubectl is all you need.

Alternatively, build a full release:

git clone https://github.com/kubernetes/kubernetes
cd kubernetes
#git branch -a
# pick a suitable release branch
#git checkout -b release-1.14  remotes/origin/release-1.14
make quick-release
  2. Configure environment variables (used while the scripts run):
chmod +x /home/src/k8s.io/kubernetes/_output/bin/kubectl
mv /home/src/k8s.io/kubernetes/_output/bin/kubectl /usr/local/bin/kubectl
export KUBECTL_PATH=/usr/local/bin/kubectl
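
A quick check that the scripts will find a working binary:

$KUBECTL_PATH version --client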
  3. Adapt the files
    3.1 By default the Hollow nodes are created on GCE (Google Compute Engine); modify test/kubemark/cloud-provider-config.sh:
CLOUD_PROVIDER="pre-existing"
KUBEMARK_IMAGE_MAKE_TARGET="push"
# CONTAINER_REGISTRY and PROJECT are where the kubemark image built during this process is pushed; log in to that registry locally beforehand
CONTAINER_REGISTRY=docker.io
PROJECT="rthallisey" 
MASTER_IP="192.168.121.29:6443" 
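
Since the build pushes the image to CONTAINER_REGISTRY/PROJECT, log in beforehand (docker.io and rthallisey are the example values from above):

docker login docker.io -u rthallisey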

3.2 In kubernetes/test/kubemark/start-kubemark.sh, comment out the parts that create the kube-master and keep only the parts that create the hollow nodes.

# example (release 1.12.0)
############################### Main Function ########################################
#detect-project &> /dev/null
#find-release-tars

# We need master IP to generate PKI and kubeconfig for cluster.
#get-or-create-master-ip
#generate-pki-config
#write-local-kubeconfig

# Setup for master.
#function start-master {
#  echo -e "${color_yellow}STARTING SETUP FOR MASTER${color_norm}"
#  create-master-environment-file
#  create-master-instance-with-resources
#  wait-for-master-reachability
#  write-pki-config-to-master
#  copy-resource-files-to-master
#  start-master-components
#}
#start-master &

# Setup for hollow-nodes.
function start-hollow-nodes {
  echo -e "${color_yellow}STARTING SETUP FOR HOLLOW-NODES${color_norm}"
  create-and-upload-hollow-node-image
  create-kube-hollow-node-resources
  wait-for-hollow-nodes-to-run-or-timeout
}
start-hollow-nodes

3.3 Generate the secret from the kubemark cluster's kubeconfig (kubeconfig.kubemark); variables that are no longer used can be deleted.

# before
  "${KUBECTL}" create secret generic "kubeconfig" --type=Opaque --namespace="kubemark" \
    --from-literal=kubelet.kubeconfig="${KUBELET_KUBECONFIG_CONTENTS}" \
    --from-literal=kubeproxy.kubeconfig="${KUBEPROXY_KUBECONFIG_CONTENTS}" \
    --from-literal=heapster.kubeconfig="${HEAPSTER_KUBECONFIG_CONTENTS}" \
    --from-literal=cluster_autoscaler.kubeconfig="${CLUSTER_AUTOSCALER_KUBECONFIG_CONTENTS}" \
    --from-literal=npd.kubeconfig="${NPD_KUBECONFIG_CONTENTS}" \
    --from-literal=dns.kubeconfig="${KUBE_DNS_KUBECONFIG_CONTENTS}"

# after
  LOCAL_KUBECONFIG="${RESOURCE_DIRECTORY}/kubeconfig.kubemark"
  "${KUBECTL}" create secret generic "kubeconfig" --type=Opaque --namespace="kubemark" \
    --from-file=kubelet.kubeconfig="${LOCAL_KUBECONFIG}" \
    --from-file=kubeproxy.kubeconfig="${LOCAL_KUBECONFIG}" \
    --from-file=heapster.kubeconfig="${LOCAL_KUBECONFIG}" \
    --from-file=cluster_autoscaler.kubeconfig="${LOCAL_KUBECONFIG}" \
    --from-file=npd.kubeconfig="${LOCAL_KUBECONFIG}" \
    --from-file=dns.kubeconfig="${LOCAL_KUBECONFIG}"

3.4 Comment out the kubectl create calls that install add-ons (Heapster, Cluster Autoscaler, and so on).

3.5 Copy the kubemark cluster's kubeconfig into the external cluster at test/kubemark/resources/kubeconfig.kubemark.

3.6 Adapt as needed for your external cluster. For example, make changes in hollow-node_template.yaml (changes to hollow-node.yaml get overwritten from the template): the pod's service account, the node-problem-detector image, the busybox image, and so on; see the sketch below.
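
For instance, repointing the busybox init image at a private registry could look like this (a sketch: privatetest.com is the hypothetical mirror from the daemon.json example below, and the sed pattern assumes the template names the image exactly as in the hollow-node.yaml above):

sed -i 's#image: busybox#image: privatetest.com/library/busybox#' test/kubemark/resources/hollow-node_template.yaml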

Building the hollow image pulls other base images, by default from docker.io. To avoid build failures you can give docker a local registry mirror: edit /etc/docker/daemon.json, add the registry-mirrors key, and restart the docker service.

{
  "registry-mirrors": ["https://privatetest.com"]
}
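
Restarting docker and verifying the mirror was picked up:

sudo systemctl restart docker
docker info | grep -A1 'Registry Mirrors'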

  4. Run the test/kubemark/start-kubemark.sh script.

  5. When it finishes, log in to the kubemark cluster and check the hollow node status.
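
To tear the hollow nodes down again later, the repo also ships a matching stop script:

test/kubemark/stop-kubemark.sh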

Troubleshooting

  1. kubeproxy does not start
    The kubeproxy container logs contain the following:
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
    panic: runtime error: invalid memory address or nil pointer dereference

Fix: modify kube-proxy's start command to add --use-real-proxier=false (already included in the hollow-node.yaml above).

use-real-proxier defaults to true; per the flag's help text: "Set to true if you want to use real proxier inside hollow-proxy."

e2e tests

The e2e test script lives at test/kubemark/run-e2e-tests.sh; depending on whether the environment is containerized, it executes in one of two ways:

if [[ -f /.dockerenv ]]; then
    # Running inside a dockerized runner.
    go run ./hack/e2e.go -- --check-version-skew=false --test --test_args="--e2e-verify-service-account=false --dump-logs-on-failure=false ${ARGS[*]}"
else
    # Running locally.
    for ((i=0; i < ${#ARGS[@]}; i++)); do
      ARGS[$i]="$(echo "${ARGS[$i]}" | sed -e 's/\[/\\\[/g' -e 's/\]/\\\]/g' )"
    done
    "${KUBE_ROOT}/hack/ginkgo-e2e.sh" "--e2e-verify-service-account=false" "--dump-logs-on-failure=false" "${ARGS[@]}"
fi

Both modes can also be invoked by hand:

  1. Running inside a dockerized runner
    The code automatically downloads the e2e test runner with go get -u k8s.io/test-infra/kubetest along with its dependencies, then runs:
# declare variables
export KUBECTL_PATH=/usr/local/bin/kubectl   # kubectl, not kubelet (this tripped me up)

# build ginkgo
cd $GOPATH/src/k8s.io/kubernetes
make ginkgo

make WHAT=test/e2e/e2e.test
# If you want to run your e2e testing framework without re-provisioning the e2e setup, you can do so via make WHAT=test/e2e/e2e.test, and then re-running the ginkgo tests.

# Performance tests
go run hack/e2e.go -- -v --test  --test_args="--host=http://172.20.0.113:8080 --ginkgo.focus=\[Feature:Performance\]" --provider=local   # --provider defaults to gce

While compiling kubetest, go reported the errors below. A GitHub issue suggested the k8s.io/klog version was wrong, but after correcting it other problems appeared; in the end, enabling Go modules with export GO111MODULE=on made the build pass.

go get k8s.io/test-infra/kubetest
# k8s.io/client-go/transport
client-go/transport/round_trippers.go:70:11: cannot convert klog.V(9) (type klog.Verbose) to type bool
client-go/transport/round_trippers.go:72:11: cannot convert klog.V(8) (type klog.Verbose) to type bool
client-go/transport/round_trippers.go:74:11: cannot convert klog.V(7) (type klog.Verbose) to type bool
client-go/transport/round_trippers.go:76:11: cannot convert klog.V(6) (type klog.Verbose) to type bool
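
The workaround from above as commands (with module mode on, go get resolves the dependency versions kubetest declares):

export GO111MODULE=on
go get k8s.io/test-infra/kubetest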
  2. Running locally
# build ginkgo
cd $GOPATH/src/k8s.io/kubernetes
make ginkgo

make WHAT=test/e2e/e2e.test
# If you want to run your e2e testing framework without re-provisioning the e2e setup, you can do so via make WHAT=test/e2e/e2e.test, and then re-running the ginkgo tests.

Copy the kubeconfig of the cluster under test to test/kubemark/resources/kubeconfig.kubemark, then run:

# Performance tests
KUBE_MASTER_IP="192.168.3.184:6443" test/kubemark/run-e2e-tests.sh --ginkgo.focus=\[Feature:Performance\] --gather-metrics-at-teardown=true --output-print-type=json
