Analysis of kube-controller-manager: NodeLifecycleController

2022-12-22  zhangzhifei

Functionality

Monitors the health of nodes and, when a node becomes abnormal, adds the corresponding NoExecute taints; the TaintManager then evicts the pods on the abnormal node.
kube-controller-manager exposes many parameters for configuring NodeLifecycleController. Evicting pods is a high-risk action that can easily take down a cluster or make business services unavailable: for example, the network between a node and the masters may be broken while the pods on that node are still serving fine, or the external load balancer in front of the apiserver (or the apiserver itself) may misbehave, and unpredictable problems can follow. Still, the capability cannot simply be switched off, because when a node really does fail the cluster needs this fault tolerance. This article analyses and explains all of NodeLifecycleController's parameters at the source-code level; hopefully it is useful.

Related configuration

NodeLifecycleControllerConfiguration


// NodeLifecycleControllerConfiguration contains elements describing NodeLifecycleController.
type NodeLifecycleControllerConfiguration struct {
   // If set to true enables NoExecute Taints and will evict all not-tolerating
   // Pod running on Nodes tainted with this kind of Taints.
   // Corresponds to --enable-taint-manager, default true. If true, NodeController starts the TaintManager: when a pod already scheduled onto a node cannot tolerate that node's taints, the TaintManager evicts it. If the feature is disabled, pods already scheduled onto the node keep running.
   EnableTaintManager bool 
   // nodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is healthy
   // Set via --node-eviction-rate, default 0.1: the number of nodes per second from which pods are evicted while the zone is healthy, i.e. by default 1 node every 10s.
   NodeEvictionRate float32
   // secondaryNodeEvictionRate is the number of nodes per second on which pods are deleted in case of node failure when a zone is unhealthy
   // Set via --secondary-node-eviction-rate, default 0.01. When the fraction of unhealthy nodes in a zone exceeds --unhealthy-zone-threshold (default 0.55), the eviction rate is reduced: if the cluster is small (at most --large-cluster-size-threshold nodes, default 50) evictions stop entirely, otherwise the rate drops to --secondary-node-eviction-rate nodes per second (default 0.01).
   SecondaryNodeEvictionRate float32
   // nodeStartupGracePeriod is the amount of time which we allow starting a node to
   // be unresponsive before marking it unhealthy.
   // --node-startup-grace-period, default 60s: how long a node that is still starting up may be unresponsive before it is marked unhealthy.
   NodeStartupGracePeriod metav1.Duration
   // NodeMonitorGracePeriod is the amount of time which we allow a running node to be
   // unresponsive before marking it unhealthy. Must be N times more than kubelet's
   // nodeStatusUpdateFrequency, where N means number of retries allowed for kubelet
   // to post node status.
   // Set via --node-monitor-grace-period, default 40s: how long a running node may be unresponsive before it is marked unhealthy.
   NodeMonitorGracePeriod metav1.Duration
   // podEvictionTimeout is the grace period for deleting pods on failed nodes.
   // Set via --pod-eviction-timeout, default 5 minutes: grace period before pods on a failed node are force-deleted. Only effective when TaintBasedEvictions is not enabled, so in practice it is no longer used.
   PodEvictionTimeout metav1.Duration
   // secondaryNodeEvictionRate is implicitly overridden to 0 for clusters smaller than or equal to largeClusterSizeThreshold
   // Set via --large-cluster-size-threshold, default 50: a zone with more nodes than this threshold is considered a large cluster. For clusters with at most largeClusterSizeThreshold nodes, secondaryNodeEvictionRate is implicitly overridden to 0, i.e. no eviction takes place.
   LargeClusterSizeThreshold int32
   // Zone is treated as unhealthy in nodeEvictionRate and secondaryNodeEvictionRate when at least
   // unhealthyZoneThreshold (no less than 3) of Nodes in the zone are NotReady
   // --unhealthy-zone-threshold, default 0.55: the unhealthy-zone threshold that decides when the secondary eviction rate kicks in; a zone is considered unhealthy once more than 55% of its nodes are down.
   UnhealthyZoneThreshold float32
}
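
For reference, a minimal sketch (not taken from the Kubernetes source) that writes out the documented defaults explicitly; metav1 is the usual alias for k8s.io/apimachinery/pkg/apis/meta/v1:

// Documented defaults of NodeLifecycleControllerConfiguration, filled in by hand.
defaults := NodeLifecycleControllerConfiguration{
   EnableTaintManager:        true,                                        // --enable-taint-manager
   NodeEvictionRate:          0.1,                                         // --node-eviction-rate: 1 node every 10s
   SecondaryNodeEvictionRate: 0.01,                                        // --secondary-node-eviction-rate
   NodeStartupGracePeriod:    metav1.Duration{Duration: 60 * time.Second}, // --node-startup-grace-period
   NodeMonitorGracePeriod:    metav1.Duration{Duration: 40 * time.Second}, // --node-monitor-grace-period
   PodEvictionTimeout:        metav1.Duration{Duration: 5 * time.Minute},  // --pod-eviction-timeout
   LargeClusterSizeThreshold: 50,                                          // --large-cluster-size-threshold
   UnhealthyZoneThreshold:    0.55,                                        // --unhealthy-zone-threshold
}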

NodeMonitorPeriod

// KubeCloudSharedConfiguration contains elements shared by both kube-controller manager
// and cloud-controller manager, but not genericconfig.
type KubeCloudSharedConfiguration struct {

    // nodeMonitorPeriod is the period for syncing NodeStatus in NodeController.    
    // Set via --node-monitor-period, default 5s: the period at which NodeController syncs NodeStatus, i.e. how often the controller checks. It should be smaller than --node-monitor-grace-period.
    NodeMonitorPeriod metav1.Duration
}

Code

The code below is from Kubernetes 1.21.

startNodeLifecycleController->
    NewNodeLifecycleController->
    lifecycleController.Run->
        nc.taintManager.Run->
        nc.doNodeProcessingPassWorker->
        nc.doPodProcessingWorker->
        nc.doNoExecuteTaintingPass(EnableTaintManager)/nc.doEvictionPass->
        nc.monitorNodeHealth->
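
A simplified, paraphrased sketch of how Run wires these pieces together (not a verbatim copy of the 1.21 source; stopCh, the worker-count constants and the periods come from the real controller):

func (nc *Controller) Run(stopCh <-chan struct{}) {
   // The taint manager runs in its own goroutine and performs the actual pod deletion.
   if nc.runTaintManager {
      go nc.taintManager.Run(stopCh)
   }

   // Workers draining the node and pod update queues.
   for i := 0; i < scheduler.UpdateWorkerSize; i++ {
      go wait.Until(nc.doNodeProcessingPassWorker, time.Second, stopCh)
   }
   for i := 0; i < podUpdateWorkerSize; i++ {
      go wait.Until(nc.doPodProcessingWorker, time.Second, stopCh)
   }

   // Taint-based NoExecute tainting (EnableTaintManager) or the legacy eviction pass.
   if nc.runTaintManager {
      go wait.Until(nc.doNoExecuteTaintingPass, scheduler.NodeEvictionPeriod, stopCh)
   } else {
      go wait.Until(nc.doEvictionPass, scheduler.NodeEvictionPeriod, stopCh)
   }

   // Node health monitoring, every nodeMonitorPeriod (--node-monitor-period).
   go wait.Until(func() {
      if err := nc.monitorNodeHealth(); err != nil {
         klog.Errorf("Error monitoring node health: %v", err)
      }
   }, nc.nodeMonitorPeriod, stopCh)

   <-stopCh
}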

taintManager

Main responsibility: deleting pods.

  1. Watches every Node and Pod in the cluster. When a node has taints that a pod running on it does not tolerate, or the pod tolerates them only with a tolerationSeconds and that countdown finishes, the pod is deleted (see the sketch after this list).
  2. The related tolerations below are added by the apiserver's admission control, and the times are configurable:
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
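
An illustrative sketch of the per-pod decision the taint manager makes (this is not the real taint_manager.go; it only relies on the upstream helper Toleration.ToleratesTaint from k8s.io/api/core/v1, plus math and time from the standard library):

// noExecuteAction decides, for a node's NoExecute taints and a pod's tolerations,
// whether the pod must be deleted immediately or only after the smallest matching
// tolerationSeconds has elapsed.
func noExecuteAction(taints []v1.Taint, tolerations []v1.Toleration) (deleteNow bool, deleteAfter time.Duration) {
   minSeconds := int64(math.MaxInt64)
   for i := range taints {
      if taints[i].Effect != v1.TaintEffectNoExecute {
         continue
      }
      tolerated := false
      for j := range tolerations {
         if !tolerations[j].ToleratesTaint(&taints[i]) {
            continue
         }
         tolerated = true
         // A nil TolerationSeconds means "tolerate this taint forever".
         if tolerations[j].TolerationSeconds != nil && *tolerations[j].TolerationSeconds < minSeconds {
            minSeconds = *tolerations[j].TolerationSeconds
         }
      }
      if !tolerated {
         return true, 0 // some NoExecute taint is not tolerated at all: delete now
      }
   }
   if minSeconds == int64(math.MaxInt64) {
      return false, 0 // every taint is tolerated without a deadline: keep the pod
   }
   return false, time.Duration(minSeconds) * time.Second // delete after the countdown
}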

doNodeProcessingPassWorker

Runs doNoScheduleTaintingPass for the node, which sets NoSchedule taints based on the conditions in the node's status (see the map and the sketch below).


// map {NodeConditionType: {ConditionStatus: TaintKey}}
// represents which NodeConditionType under which ConditionStatus should be
// tainted with which TaintKey
// for certain NodeConditionType, there are multiple {ConditionStatus,TaintKey} pairs
nodeConditionToTaintKeyStatusMap = map[v1.NodeConditionType]map[v1.ConditionStatus]string{
   v1.NodeReady: {
      v1.ConditionFalse:   v1.TaintNodeNotReady,
      v1.ConditionUnknown: v1.TaintNodeUnreachable,
   },
   v1.NodeMemoryPressure: {
      v1.ConditionTrue: v1.TaintNodeMemoryPressure,
   },
   v1.NodeDiskPressure: {
      v1.ConditionTrue: v1.TaintNodeDiskPressure,
   },
   v1.NodeNetworkUnavailable: {
      v1.ConditionTrue: v1.TaintNodeNetworkUnavailable,
   },
   v1.NodePIDPressure: {
      v1.ConditionTrue: v1.TaintNodePIDPressure,
   },
}
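
A simplified sketch of how doNoScheduleTaintingPass uses this map (paraphrased from the 1.21 logic, error handling omitted): the node's conditions are translated into the set of NoSchedule taints the node should carry, and the difference is then applied.

// Build the desired NoSchedule taints from the node's current conditions.
var taints []v1.Taint
for _, condition := range node.Status.Conditions {
   if taintMap, found := nodeConditionToTaintKeyStatusMap[condition.Type]; found {
      if taintKey, found := taintMap[condition.Status]; found {
         taints = append(taints, v1.Taint{Key: taintKey, Effect: v1.TaintEffectNoSchedule})
      }
   }
}
// A cordoned node (spec.unschedulable) additionally gets node.kubernetes.io/unschedulable.
if node.Spec.Unschedulable {
   taints = append(taints, v1.Taint{Key: v1.TaintNodeUnschedulable, Effect: v1.TaintEffectNoSchedule})
}
// The controller then reconciles the node so it carries exactly this set of
// NoSchedule taints, adding the missing ones and removing the stale ones.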

doPodProcessingWorker

The main job here is to set a pod's Ready condition to false when its node's Ready condition is not true. This is important: once a node goes NotReady, all pods on it are marked not ready as well, so they are removed from the endpoints of their Service objects; anything that depends on those Services needs to be aware of this behaviour (a sketch of what the marking looks like follows the code below).

// If the pod's spec.NodeName is not empty and differs from the old pod's (or the pod is new), the pod is added to podUpdateQueue.
func (nc *Controller) podUpdated(oldPod, newPod *v1.Pod) {
   if newPod == nil {
      return
   }
   if len(newPod.Spec.NodeName) != 0 && (oldPod == nil || newPod.Spec.NodeName != oldPod.Spec.NodeName) {
      podItem := podUpdateItem{newPod.Namespace, newPod.Name}
      nc.podUpdateQueue.Add(podItem)
   }
}
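
A hedged sketch of what "marking a pod NotReady" amounts to; the controller uses the helper nodeutil.MarkPodsNotReady, while the function below is only an illustration that talks to the apiserver directly via client-go (imports assumed: context, v1 "k8s.io/api/core/v1", metav1 "k8s.io/apimachinery/pkg/apis/meta/v1", "k8s.io/client-go/kubernetes"):

// markPodNotReady flips the PodReady condition to False and writes the status
// back; the endpoints controller then drops the pod from its Service endpoints.
func markPodNotReady(ctx context.Context, c kubernetes.Interface, pod *v1.Pod) error {
   for i, cond := range pod.Status.Conditions {
      if cond.Type != v1.PodReady {
         continue
      }
      if cond.Status == v1.ConditionFalse {
         return nil // already NotReady, nothing to do
      }
      pod.Status.Conditions[i].Status = v1.ConditionFalse
      pod.Status.Conditions[i].LastTransitionTime = metav1.Now()
      _, err := c.CoreV1().Pods(pod.Namespace).UpdateStatus(ctx, pod, metav1.UpdateOptions{})
      return err
   }
   return nil
}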

doNoExecuteTaintingPass

Handles taint-based evictions. The eviction itself is not rate limited, so the rate has to be limited here, when the taints are added (the limiter values themselves are set on the monitorNodeHealth side). The actual pod eviction still happens in the taintManager. zoneNoExecuteTainter is a RateLimitedTimedQueue, i.e. a token-bucket rate-limited queue (see the sketch after the code below).


func (nc *Controller) doNoExecuteTaintingPass() {
   nc.evictorLock.Lock()
   defer nc.evictorLock.Unlock()
   for k := range nc.zoneNoExecuteTainter {
      // Function should return 'false' and a time after which it should be retried, or 'true' if it shouldn't (it succeeded).
      nc.zoneNoExecuteTainter[k].Try(func(value scheduler.TimedValue) (bool, time.Duration) {
         node, err := nc.nodeLister.Get(value.Value)
         if apierrors.IsNotFound(err) {
            klog.Warningf("Node %v no longer present in nodeLister!", value.Value)
            return true, 0
         } else if err != nil {
            klog.Warningf("Failed to get Node %v from the nodeLister: %v", value.Value, err)
            // retry in 50 millisecond
            return false, 50 * time.Millisecond
         }
         _, condition := nodeutil.GetNodeCondition(&node.Status, v1.NodeReady)
         // Because we want to mimic NodeStatus.Condition["Ready"] we make "unreachable" and "not ready" taints mutually exclusive.
         taintToAdd := v1.Taint{}
         oppositeTaint := v1.Taint{}
         switch condition.Status {
         case v1.ConditionFalse:
            taintToAdd = *NotReadyTaintTemplate
            oppositeTaint = *UnreachableTaintTemplate
         case v1.ConditionUnknown:
            taintToAdd = *UnreachableTaintTemplate
            oppositeTaint = *NotReadyTaintTemplate
         default:
            // It seems that the Node is ready again, so there's no need to taint it.
            klog.V(4).Infof("Node %v was in a taint queue, but it's ready now. Ignoring taint request.", value.Value)
            return true, 0
         }

         result := nodeutil.SwapNodeControllerTaint(nc.kubeClient, []*v1.Taint{&taintToAdd}, []*v1.Taint{&oppositeTaint}, node)
         if result {
            //count the evictionsNumber
            zone := utilnode.GetZoneKey(node)
            evictionsNumber.WithLabelValues(zone).Inc()
         }

         return result, 0
      })
   }
}
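
For context, the per-zone queue is created roughly like this when a new zone is first seen (a paraphrased sketch; the token-bucket limiter comes from k8s.io/client-go/util/flowcontrol and the queue from the nodelifecycle scheduler package):

// One RateLimitedTimedQueue per zone, driven by a token-bucket limiter whose QPS
// is later swapped by monitorNodeHealth/handleDisruption depending on the zone state.
nc.zoneNoExecuteTainter[zone] = scheduler.NewRateLimitedTimedQueue(
   flowcontrol.NewTokenBucketRateLimiter(nc.evictionLimiterQPS, scheduler.EvictionRateLimiterBurst))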

monitorNodeHealth

Every nodeMonitorPeriod, monitorNodeHealth runs once. It maintains node and zone state, updates a node's taints when the node becomes abnormal, and sets the per-zone eviction rate according to the overall state of the cluster.
NodeLifecycleController assigns every node to a zone and tracks a zoneState per zone; different zoneStates correspond to different eviction rates.


switch {
case currentReadyCondition.Status != v1.ConditionTrue && observedReadyCondition.Status == v1.ConditionTrue:
   // Report node event only once when status changed.
   nodeutil.RecordNodeStatusChange(nc.recorder, node, "NodeNotReady")
   fallthrough
case needsRetry && observedReadyCondition.Status != v1.ConditionTrue:
   if err = nodeutil.MarkPodsNotReady(nc.kubeClient, nc.recorder, pods, node.Name); err != nil {
      utilruntime.HandleError(fmt.Errorf("unable to mark all pods NotReady on node %v: %v; queuing for retry", node.Name, err))
      nc.nodesToRetry.Store(node.Name, struct{}{})
      continue
   }
}

tryUpdateNodeHealth

  1. Fetch the nodeHealth (nodeHealthData) saved from the previous pass out of nodeHealthMap:
type nodeHealthData struct {
   probeTimestamp           metav1.Time
   readyTransitionTimestamp metav1.Time
   status                   *v1.NodeStatus
   lease                    *coordv1.Lease
}
  2. Get the currentReadyCondition (the Ready condition) from the node object fetched from the apiserver. If it is nil, neither the kubelet nor the node controller (i.e. NodeLifecycleController) has ever reported a status; this also shows that controller-manager itself updates a node's status.
  3. Update nodeHealth's probeTimestamp based on savedCondition, currentReadyCondition and observedLease.
  4. Check whether nodeHealth's probeTimestamp has exceeded the gracePeriod (nodeMonitorGracePeriod, or nodeStartupGracePeriod while the node is still starting). If it has, set the node's conditions to "Unknown" and push the update to the apiserver (see the sketch below).
  5. Return gracePeriod, observedReadyCondition and currentReadyCondition.
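
A hedged sketch of the grace-period check in step 4 (paraphrased, not the exact 1.21 code):

// Pick the grace period: a node whose kubelet has never posted a status is
// treated as still starting up, so it gets the startup grace period instead.
gracePeriod := nc.nodeMonitorGracePeriod
if currentReadyCondition == nil {
   gracePeriod = nc.nodeStartupGracePeriod
}
// If the last probe is older than the grace period, mark the node's conditions
// as "Unknown" and push the updated status to the apiserver.
if nc.now().After(nodeHealth.probeTimestamp.Add(gracePeriod)) {
   // set NodeReady (and the other conditions) to v1.ConditionUnknown,
   // then update the node status via the kube client
}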

handleDisruption

Based on how many unhealthy nodes each zone contains (derived from zoneToNodeConditions), a different eviction rate is set per zone.
Key cases:
  1. allAreFullyDisrupted is false and allWasFullyDisrupted is true: every zone was fully down before, but not any more; the controller leaves full-disruption mode, resets the node probe timestamps and restores each zone's rate limiter according to its new state.
  2. allAreFullyDisrupted is true and allWasFullyDisrupted is false: the zones were not all down before, but now every zone is fully disrupted; the controller enters full-disruption mode and stops all evictions (the limiters are swapped to 0).
// ComputeZoneState returns the number of not-Ready Nodes and the ZoneState for a given zone.
// The zone is considered:
// - fullyDisrupted if there are no Ready Nodes,
// - partiallyDisrupted if at least nc.unhealthyZoneThreshold percent of Nodes are not Ready,
// - normal otherwise
func (nc *Controller) ComputeZoneState(nodeReadyConditions []*v1.NodeCondition) (int, ZoneState) {
        readyNodes := 0
        notReadyNodes := 0
        for i := range nodeReadyConditions {
                if nodeReadyConditions[i] != nil && nodeReadyConditions[i].Status == v1.ConditionTrue {
                        readyNodes++
                } else {
                        notReadyNodes++
                }
        }
        switch {
        case readyNodes == 0 && notReadyNodes > 0:
                return notReadyNodes, stateFullDisruption
        case notReadyNodes > 2 && float32(notReadyNodes)/float32(notReadyNodes+readyNodes) >= nc.unhealthyZoneThreshold:
                return notReadyNodes, statePartialDisruption
        default:
                return notReadyNodes, stateNormal
        }
}
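
The zone state computed above is then mapped to an eviction rate. A simplified sketch of that mapping, paraphrased from setLimiterInZone (taint-manager path only):

func (nc *Controller) setLimiterInZone(zone string, zoneSize int, state ZoneState) {
   switch state {
   case stateNormal:
      // Healthy zone: evict at --node-eviction-rate (default 0.1/s, i.e. 1 node per 10s).
      nc.zoneNoExecuteTainter[zone].SwapLimiter(nc.evictionLimiterQPS)
   case statePartialDisruption:
      if zoneSize > nc.largeClusterThreshold {
         // Large zone: slow down to --secondary-node-eviction-rate (default 0.01/s).
         nc.zoneNoExecuteTainter[zone].SwapLimiter(nc.secondaryEvictionLimiterQPS)
      } else {
         // Small zone (at most --large-cluster-size-threshold nodes): stop evicting entirely.
         nc.zoneNoExecuteTainter[zone].SwapLimiter(0)
      }
   case stateFullDisruption:
      // When every zone is fully down, handleDisruption stops evictions cluster-wide;
      // a single fully-disrupted zone otherwise keeps the normal rate.
      nc.zoneNoExecuteTainter[zone].SwapLimiter(nc.evictionLimiterQPS)
   }
}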

isNodeExcludedFromDisruptionChecks

Only used to exclude certain nodes so that they do not take part in the rate-limiting-related calculations.
