k8s deployment

2022-09-06  与狼共舞666

The default update strategy for a Kubernetes Deployment is a rolling update: a new ReplicaSet (RS) is created and new pods are scheduled under it; once a new pod reaches Running, a pod under the old RS is terminated, and the cycle repeats until every pod has been replaced with the new version.
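A rolling update is typically triggered by changing the pod template, most commonly the image. A minimal sketch of how the transcripts below could have been produced (the v2 tag is an assumption for illustration):

kubectl set image deployment/myapp-v1 myapp=janakiramm/myapp:v2   # change the container image; kicks off a rolling update
kubectl rollout status deployment/myapp-v1                        # block until the rollout finishes or fails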

[root@master1 ~]# kubectl rollout history deployment myapp-v1
deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
1         <none>
2         <none>

[root@master1 ~]# 
[root@master1 ~]# kubectl rollout undo deployment myapp-v1 --to-revision=1
deployment.apps/myapp-v1 rolled back
[root@master1 ~]# kubectl rollout history deployment myapp-v1
deployment.apps/myapp-v1 
REVISION  CHANGE-CAUSE
2         <none>
3         <none>

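Note that rolling back does not restore the old revision number: rolling back to revision 1 re-creates its template as a new revision 3, which is why revision 1 disappears from the history above. The CHANGE-CAUSE column stays <none> unless a change cause is recorded; one way to record it (the message text here is an arbitrary example) is the kubernetes.io/change-cause annotation:

kubectl annotate deployment/myapp-v1 kubernetes.io/change-cause="roll back to image v1"   # shown in rollout history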
[root@master1 ~]# kubectl describe deployment myapp-v1
Name:                   myapp-v1
Namespace:              default
CreationTimestamp:      Tue, 06 Sep 2022 11:00:16 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=myapp,version=v1
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
           version=v1
  Containers:
   myapp:
    Image:        janakiramm/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-v1-8448d48797 (3/3 replicas created)
Events:
  Type    Reason             Age                 From                   Message
  ----    ------             ----                ----                   -------
  Normal  ScalingReplicaSet  22m                 deployment-controller  Scaled up replica set myapp-v1-8448d48797 to 2
  Normal  ScalingReplicaSet  17m                 deployment-controller  Scaled up replica set myapp-v1-8448d48797 to 4
  Normal  ScalingReplicaSet  15m                 deployment-controller  Scaled down replica set myapp-v1-8448d48797 to 3
  Normal  ScalingReplicaSet  11m                 deployment-controller  Scaled up replica set myapp-v1-69d5787956 to 1
  Normal  ScalingReplicaSet  11m                 deployment-controller  Scaled down replica set myapp-v1-8448d48797 to 2
  Normal  ScalingReplicaSet  11m                 deployment-controller  Scaled up replica set myapp-v1-69d5787956 to 2
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled down replica set myapp-v1-8448d48797 to 1
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled up replica set myapp-v1-69d5787956 to 3
  Normal  ScalingReplicaSet  10m                 deployment-controller  Scaled down replica set myapp-v1-8448d48797 to 0
  Normal  ScalingReplicaSet  2m16s               deployment-controller  Scaled up replica set myapp-v1-8448d48797 to 1
  Normal  ScalingReplicaSet  95s (x4 over 2m2s)  deployment-controller  (combined from similar events): Scaled up replica set myapp-v1-8448d48797 to 3
  Normal  ScalingReplicaSet  9s                  deployment-controller  Scaled down replica set myapp-v1-69d5787956 to 0
[root@master1 ~]# kubectl get pods --show-labels
NAME                        READY   STATUS    RESTARTS   AGE   LABELS
myapp-v1-8448d48797-7cn4p   1/1     Running   0          15m   app=myapp,pod-template-hash=8448d48797,version=v1
myapp-v1-8448d48797-7mhxk   1/1     Running   0          15m   app=myapp,pod-template-hash=8448d48797,version=v1
myapp-v1-8448d48797-fkb46   1/1     Running   0          15m   app=myapp,pod-template-hash=8448d48797,version=v1
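
The pod-template-hash label shown above is what ties each pod to its ReplicaSet; the same hash (8448d48797) appears in both the RS name and the pod names. To see both generations of ReplicaSets side by side:

kubectl get rs -l app=myapp   # the old RS should show 0 desired/current after the rollout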

During a rolling update, the new and old pods are created and terminated as follows:

[root@master1 ~]# kubectl get pods -l app=myapp -w
NAME                        READY   STATUS    RESTARTS   AGE
myapp-v1-8448d48797-phjwf   1/1     Running   0          4m25s
myapp-v1-8448d48797-r5sn8   1/1     Running   0          9m40s
myapp-v1-8448d48797-vz4jj   1/1     Running   0          9m37s
# Image updated; watch the new pods being created
myapp-v1-69d5787956-x2vtr   0/1     Pending   0          0s
myapp-v1-69d5787956-x2vtr   0/1     Pending   0          0s
myapp-v1-69d5787956-x2vtr   0/1     ContainerCreating   0          4s
myapp-v1-69d5787956-x2vtr   0/1     ContainerCreating   0          13s
myapp-v1-69d5787956-x2vtr   1/1     Running             0          24s
myapp-v1-8448d48797-phjwf   1/1     Terminating         0          6m2s
myapp-v1-69d5787956-vcjsb   0/1     Pending             0          0s
myapp-v1-69d5787956-vcjsb   0/1     Pending             0          0s
myapp-v1-69d5787956-vcjsb   0/1     ContainerCreating   0          0s
myapp-v1-69d5787956-vcjsb   0/1     ContainerCreating   0          13s
myapp-v1-8448d48797-phjwf   0/1     Terminating         0          6m16s
myapp-v1-8448d48797-phjwf   0/1     Terminating         0          6m19s
myapp-v1-8448d48797-phjwf   0/1     Terminating         0          6m20s
myapp-v1-69d5787956-vcjsb   1/1     Running             0          24s
myapp-v1-8448d48797-vz4jj   1/1     Terminating         0          11m
myapp-v1-69d5787956-qq58n   0/1     Pending             0          5s
myapp-v1-69d5787956-qq58n   0/1     Pending             0          5s
myapp-v1-69d5787956-qq58n   0/1     ContainerCreating   0          11s
myapp-v1-69d5787956-qq58n   0/1     ContainerCreating   0          25s
myapp-v1-8448d48797-vz4jj   0/1     Terminating         0          12m
myapp-v1-8448d48797-vz4jj   0/1     Terminating         0          12m
myapp-v1-8448d48797-vz4jj   0/1     Terminating         0          12m
myapp-v1-69d5787956-qq58n   1/1     Running             0          31s
myapp-v1-8448d48797-r5sn8   1/1     Terminating         0          12m
myapp-v1-8448d48797-r5sn8   0/1     Terminating         0          12m
myapp-v1-8448d48797-r5sn8   0/1     Terminating         0          12m
myapp-v1-8448d48797-r5sn8   0/1     Terminating         0          12m

## Rolled back to revision 1; watch the new pods being created
myapp-v1-8448d48797-7cn4p   0/1     Pending             0          0s
myapp-v1-8448d48797-7cn4p   0/1     Pending             0          0s
myapp-v1-8448d48797-7cn4p   0/1     ContainerCreating   0          0s
myapp-v1-8448d48797-7cn4p   0/1     ContainerCreating   0          8s
myapp-v1-8448d48797-7cn4p   1/1     Running             0          12s
myapp-v1-69d5787956-qq58n   1/1     Terminating         0          8m53s
myapp-v1-8448d48797-7mhxk   0/1     Pending             0          0s
myapp-v1-8448d48797-7mhxk   0/1     Pending             0          2s
myapp-v1-8448d48797-7mhxk   0/1     ContainerCreating   0          3s
myapp-v1-8448d48797-7mhxk   0/1     ContainerCreating   0          15s
myapp-v1-69d5787956-qq58n   0/1     Terminating         0          9m9s
myapp-v1-69d5787956-qq58n   0/1     Terminating         0          9m10s
myapp-v1-69d5787956-qq58n   0/1     Terminating         0          9m10s
myapp-v1-8448d48797-7mhxk   1/1     Running             0          22s
myapp-v1-69d5787956-vcjsb   1/1     Terminating         0          9m43s
myapp-v1-8448d48797-fkb46   0/1     Pending             0          0s
myapp-v1-8448d48797-fkb46   0/1     Pending             0          1s
myapp-v1-8448d48797-fkb46   0/1     ContainerCreating   0          12s
myapp-v1-69d5787956-vcjsb   0/1     Terminating         0          10m
myapp-v1-69d5787956-vcjsb   0/1     Terminating         0          10m
myapp-v1-8448d48797-fkb46   0/1     ContainerCreating   0          45s
myapp-v1-69d5787956-vcjsb   0/1     Terminating         0          10m
myapp-v1-8448d48797-fkb46   1/1     Running             0          59s
myapp-v1-69d5787956-x2vtr   1/1     Terminating         0          11m
myapp-v1-69d5787956-x2vtr   0/1     Terminating         0          11m
myapp-v1-69d5787956-x2vtr   0/1     Terminating         0          11m
myapp-v1-69d5787956-x2vtr   0/1     Terminating         0          11m
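
If several template changes need to go out as a single rollout, the Deployment can be paused first so that intermediate edits do not each trigger their own update (standard kubectl subcommands):

kubectl rollout pause deployment myapp-v1    # freeze rollouts; template edits accumulate
# ... apply one or more changes to the pod template ...
kubectl rollout resume deployment myapp-v1   # resume; all accumulated changes roll out together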

Changing the rolling update strategy:

[root@master1 deployment]# kubectl describe deployment myapp-v1 -n default
Name:                   myapp-v1
Namespace:              default
CreationTimestamp:      Tue, 06 Sep 2022 11:00:16 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=myapp,version=v1
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=myapp
           version=v1
  Containers:
   myapp:
    Image:        janakiramm/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-v1-8448d48797 (3/3 replicas created)
Events:          <none>
[root@master1 deployment]# kubectl patch deployment myapp-v1 -p '{"spec":{"strategy":{"rollingUpdate":{"maxSurge":1,"maxUnavailable":1}}}}' -n default
deployment.apps/myapp-v1 patched
[root@master1 deployment]# kubectl describe deployment myapp-v1 -n default
Name:                   myapp-v1
Namespace:              default
CreationTimestamp:      Tue, 06 Sep 2022 11:00:16 +0800
Labels:                 <none>
Annotations:            deployment.kubernetes.io/revision: 3
Selector:               app=myapp,version=v1
Replicas:               3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  1 max unavailable, 1 max surge
Pod Template:
  Labels:  app=myapp
           version=v1
  Containers:
   myapp:
    Image:        janakiramm/myapp:v1
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:       <none>
  Volumes:        <none>
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   myapp-v1-8448d48797 (3/3 replicas created)
Events:          <none>

Above you can see RollingUpdateStrategy: 1 max unavailable, 1 max surge.
The rollingUpdate strategy is now what we just patched in. Since the desired replica count is 3, maxUnavailable=1 and maxSurge=1 mean that during an update there are never fewer than 2 pods or more than 4 pods running.
Rolling update behavior is tuned entirely through this RollingUpdateStrategy field, as the manifest excerpt below shows.
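
The same strategy can also be declared directly in the Deployment manifest instead of patching it afterwards; a minimal excerpt of the relevant stanza:

spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1         # at most 1 pod above the desired count (4 total)
      maxUnavailable: 1   # at most 1 pod below the desired count (2 minimum)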

A Deployment manifest in detail

apiVersion: apps/v1
kind: Deployment 
metadata:
  name: portal
  namespace: ms 
spec:
  replicas: 1
  selector:
    matchLabels:
      project: ms
      app: portal
  template:
    metadata:
      labels:
        project: ms 
        app: portal
    spec:
      containers:
      - name: portal
        image: xianchao/portal:v1
        imagePullPolicy: Always
        ports:
          - protocol: TCP
            containerPort: 8080 
        resources:            # resource requests and limits
          limits:             # upper bound: the most CPU and memory the container may use
            cpu: 1
            memory: 1Gi
          requests:           # lower bound: the minimum resources needed to schedule the Pod
            cpu: 0.5
            memory: 1Gi
        readinessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 8080
          initialDelaySeconds: 60
          periodSeconds: 10
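
To create or update this Deployment from the manifest and verify it (the file name is assumed for illustration, and the ms namespace must already exist):

kubectl apply -f portal-deployment.yaml   # file name assumed
kubectl get deployment portal -n ms       # confirm 1/1 replicas become available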

livenessProbe: liveness probe
Used to decide whether the container is still alive, i.e., whether the Pod should remain Running. If the liveness probe detects that the container is unhealthy, the kubelet kills the container and then follows the container's restart policy to decide whether to restart it. If a container defines no livenessProbe, the kubelet treats the probe as permanently successful.
tcpSocket:
  port: 8080             # check whether port 8080 is open
initialDelaySeconds: 60  # run the first check 60s after the Pod starts
periodSeconds: 10        # after the first check, probe every 10s
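
tcpSocket is only one of the probe mechanisms; httpGet and exec probes take the same timing fields. A hedged httpGet variant for comparison (the /healthz path is an assumption; the application must actually serve it):

livenessProbe:
  httpGet:
    path: /healthz             # assumed health endpoint; adjust to your app
    port: 8080
  initialDelaySeconds: 60
  periodSeconds: 10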

readinessProbe: readiness probe

Sometimes an application is temporarily unable to accept requests, for example when the Pod is already Running but the application inside the container has not finished starting. Without a readinessProbe, Kubernetes would assume the Pod can handle requests and route traffic to it even though the program is not ready; the readinessProbe exists precisely to keep traffic away until the application is up.

readinessProbe and livenessProbe can use the same probe mechanisms; they differ only in how a failure is handled. When a readiness probe fails, the Pod's IP:Port is removed from the corresponding Endpoints list so that no traffic is forwarded to it; when a liveness probe fails, the container is killed and handled according to the Pod's restart policy. With the configuration above, Kubernetes keeps probing port 8080 every 10s for as long as the Pod runs.
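
The readiness effect is easy to observe: while a pod's readiness probe is failing, its IP vanishes from the Service's Endpoints even though the pod itself keeps running. Assuming a Service named myapp selects these pods:

kubectl get endpoints myapp -n default   # lists only the ready pods' IP:Port pairs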
