2020-08-02 Understanding Service in Depth (Connectivity with the Outside World)

2020-08-05  阿丧小威

1. Why do we need a Service

Pods are constantly being destroyed and recreated, and each Pod is assigned its own IP address, so Pod churn means the IPs keep changing and it becomes very hard for clients (for example, a frontend) to keep track of them. A Service solves exactly this problem: it watches the set of Pods for changes and exposes a fixed access entry point that load-balances across them.

2. What a Service provides

In short, a Service provides service discovery (a stable entry point that does not change as Pods are destroyed and recreated) and load balancing (an access policy for a group of Pods, with traffic distributed across them).

3. The relationship between Pods and Services

A Service is associated with a group of Pods through a label selector: every Pod whose labels match the selector becomes a backend of the Service, and requests sent to the Service's stable virtual IP are load-balanced across those Pods.
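To make the relationship concrete, here is a minimal sketch (the name my-app, the nginx image and port 80 are illustrative assumptions, not taken from the original article). The Service's selector must match the labels on the Pod template; the Service then load-balances across all Pods carrying those labels.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app              # the label the Service selects on
    spec:
      containers:
      - name: web
        image: nginx:1.19
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  selector:
    app: my-app                  # must match the Pod labels above
  ports:
  - port: 80                     # port exposed on the Service
    targetPort: 80               # containerPort the traffic is forwarded to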

4. The three commonly used Service types

ClusterIP: the default type. The Service is assigned a stable internal IP address, i.e. a VIP, that is reachable only from inside the cluster (for example from other Pods in the cluster, regardless of namespace).

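Since the sketch above does not specify a type, it defaults to ClusterIP; the type can also be declared explicitly. A minimal sketch, reusing the assumed my-app labels from the previous example:

apiVersion: v1
kind: Service
metadata:
  name: my-app-clusterip
spec:
  type: ClusterIP                # the default type; reachable only inside the cluster
  selector:
    app: my-app
  ports:
  - port: 80                     # port on the virtual IP (VIP)
    targetPort: 80               # port on the backend Pods

Inside the cluster the Service is reached through the VIP shown by kubectl get svc, or through its DNS name (see section 6).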

NodePort: exposes the Service on a port on every node so it can be reached from outside the cluster (it is best to pin this port rather than let it be allocated at random). A stable internal ClusterIP address is allocated as well.
Access address: <NodeIP>:<NodePort>

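A minimal NodePort sketch, again assuming the my-app labels; nodePort: 30080 is an arbitrary example within the default 30000-32767 range, pinned explicitly as recommended above:

apiVersion: v1
kind: Service
metadata:
  name: my-app-nodeport
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80                     # ClusterIP port, still usable inside the cluster
    targetPort: 80               # container port on the backend Pods
    nodePort: 30080              # fixed port opened on every node (default range 30000-32767)

The Service is then reachable from outside the cluster at <NodeIP>:30080 on any node.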

LoadBalancer: like NodePort, it exposes the Service on a port on every node. In addition, Kubernetes requests a load balancer from the underlying cloud platform and adds every node (<NodeIP>:<NodePort>) to it as a backend.

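A minimal LoadBalancer sketch (this only works on platforms with a cloud load-balancer integration; the names are the same assumptions as above):

apiVersion: v1
kind: Service
metadata:
  name: my-app-lb
spec:
  type: LoadBalancer             # asks the cloud platform to provision an external load balancer
  selector:
    app: my-app
  ports:
  - port: 80                     # port exposed on the external load balancer
    targetPort: 80               # container port on the backend Pods

The external address allocated by the cloud platform appears in the EXTERNAL-IP column of kubectl get svc, and the load balancer forwards traffic to <NodeIP>:<NodePort> on each node.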

5. Service proxy modes

Every node in a Kubernetes cluster runs a kube-proxy process, and kube-proxy is responsible for implementing the VIP (virtual IP) form of access for Services.
The proxy mode is set in the kube-proxy configuration on each node: cat /opt/kubernetes/cfg/kube-proxy.conf
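The exact layout of that file depends on how the cluster was installed; in many binary installations it points to a KubeProxyConfiguration file in which the proxy mode is chosen. A sketch of such a configuration, with assumed address and CIDR values (the mode field is the relevant part):

apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
bindAddress: 0.0.0.0
clusterCIDR: 10.244.0.0/16       # assumed Pod network CIDR
mode: "ipvs"                     # "iptables" (the default) or "ipvs"
ipvs:
  scheduler: "rr"                # round-robin scheduling when IPVS mode is used

kube-proxy must be restarted after changing the mode. IPVS mode also requires the ip_vs kernel modules and the ipset tool on each node; if they are missing, kube-proxy falls back to iptables mode.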
The underlying traffic forwarding and load balancing is implemented in one of two ways:

iptables

In iptables mode, if the first Pod a connection is directed to does not respond, the connection fails; kube-proxy does not retry against another Pod (readiness probes are what keep unhealthy Pods out of the backend set).
kube-proxy creates a large number of iptables rules, and updates rewrite the rule set rather than being applied incrementally.
Rules are matched one by one from top to bottom, so lookup latency grows as the number of Services increases.
Use iptables -L to view the rules.

IPVS

Compared with iptables mode, kube-proxy in IPVS mode redirects traffic with lower latency and synchronizes its proxy rules with much better performance, because IPVS uses hash tables as its underlying data structure.
Use ipvsadm -ln to view the IPVS rules.

(Figure: comparison of the iptables and IPVS proxy modes)

6. The in-cluster DNS service

The DNS service watches the Kubernetes API and creates a DNS record for every Service so that Services can be resolved by name.
CoreDNS is currently the default in-cluster DNS service for Kubernetes. A reference coredns.yaml manifest:

# Warning: This is a file generated from the base underscore template file: coredns.yaml.base
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        log
        errors
        health {
            lameduck 5s
        }
        ready
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            fallthrough in-addr.arpa ip6.arpa
            ttl 30
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        loop
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. So that the Addon Manager does not reconcile this replicas parameter.
  # 2. The default is 1.
  # 3. It will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'runtime/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        kubernetes.io/os: linux
      containers:
      - name: coredns
        image: lizhenliang/coredns:1.6.7
        imagePullPolicy: IfNotPresent
        resources:
          limits:
            memory: 512Mi 
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /ready
            port: 8181
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.0.0.2 
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP

Deploy it with kubectl apply -f coredns.yaml.
Check the result with kubectl get pods -n kube-system.

A record format for a ClusterIP Service: <service-name>.<namespace-name>.svc.cluster.local
Example: my-svc.my-namespace.svc.cluster.local
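As a quick check that in-cluster resolution works, a throwaway Pod can run nslookup against such a name (my-svc and my-namespace are the placeholder names from the example above; the busybox:1.28 image is an assumption):

apiVersion: v1
kind: Pod
metadata:
  name: dns-test
spec:
  restartPolicy: Never
  containers:
  - name: dns-test
    image: busybox:1.28
    command: ["nslookup", "my-svc.my-namespace.svc.cluster.local"]

After applying it, kubectl logs dns-test should show the name resolving to the Service's ClusterIP.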

(Figure: how DNS resolution works inside the cluster)