Ingress: routing to services based on a request header

2022-02-17  dozenx

Goal

The goal is to forward requests to different backend services based on the request path (and, later, a request header).


Deploying the ingress controller

mandatory.yaml

apiVersion: v1  # API version
kind: Namespace  # this creates a Namespace
metadata:  # metadata
  name: ingress-nginx  # the namespace to create is ingress-nginx
  labels:  # labels for this resource
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---

kind: ConfigMap # a ConfigMap stores non-confidential data as key-value pairs, usable as environment variables, command-line arguments, or config files in a volume
apiVersion: v1 # version
metadata:  # metadata
  name: nginx-configuration # the nginx configuration
  namespace: ingress-nginx # namespace
  labels: # labels
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
kind: ConfigMap
apiVersion: v1
metadata:
  name: udp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
      - "networking.k8s.io"
    resources:
      - ingresses/status
    verbs:
      - update

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      # Defaults to "<election-id>-<ingress-class>"
      # Here: "<ingress-controller-leader>-<nginx>"
      # This has to be adapted if you change either parameter
      # when launching the nginx-ingress-controller.
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: ingress-nginx

---

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller  # the name of this Deployment
  namespace: ingress-nginx # namespace
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec: 
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: ingress-nginx
      app.kubernetes.io/part-of: ingress-nginx
  template:
    metadata:
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
      annotations:
        prometheus.io/port: "10254"
        prometheus.io/scrape: "true"
    spec:
      # wait up to five minutes for the drain of connections
      terminationGracePeriodSeconds: 300
      serviceAccountName: nginx-ingress-serviceaccount
      nodeSelector:
        kubernetes.io/os: linux
      containers:
        - name: nginx-ingress-controller
          image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller:0.30.0
          args:
            - /nginx-ingress-controller
            - --configmap=$(POD_NAMESPACE)/nginx-configuration
            - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
            - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
            - --publish-service=$(POD_NAMESPACE)/ingress-nginx
            - --annotations-prefix=nginx.ingress.kubernetes.io
          securityContext:
            allowPrivilegeEscalation: true
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
            # www-data -> 101
            runAsUser: 101
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: https
              containerPort: 443
              protocol: TCP
          livenessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /healthz
              port: 10254
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 10
          lifecycle:
            preStop:
              exec:
                command:
                  - /wait-shutdown

---

apiVersion: v1
kind: LimitRange
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  limits:
  - min:
      memory: 90Mi
      cpu: 100m
    type: Container

serviceNode.yaml

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: NodePort
  ports:
    - name: http
      port: 80
      targetPort: 80
      nodePort: 80
      protocol: TCP
    - name: https
      port: 443
      targetPort: 443
      nodePort: 443
      protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx

---
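Note that nodePort values 80 and 443 fall outside the default NodePort range (30000-32767), so the Service above will be rejected unless the API server was started with an extended --service-node-port-range; that is also consistent with the URL later in this article using port 30955. A sketch of the ports section with in-range values (the specific port numbers here are illustrative assumptions):

```yaml
# Hypothetical in-range nodePorts; the default range only allows 30000-32767.
ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080   # illustrative
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443   # illustrative
    protocol: TCP
```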

ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.kube.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx
          servicePort: 80
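The extensions/v1beta1 Ingress API used above was removed in Kubernetes 1.22; on newer clusters the same rule would be written against networking.k8s.io/v1 instead. A sketch of the equivalent manifest, assuming the same nginx service:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
spec:
  rules:
  - host: nginx.kube.com
    http:
      paths:
      - path: /
        pathType: Prefix   # required in v1
        backend:
          service:
            name: nginx
            port:
              number: 80
```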


Implementing the proxying with a Lua script

Viewing logs

access_log /var/log/nginx/access.log upstreaminfo if=$loggable;
error_log  /var/log/nginx/error.log notice;

After the controller starts, edit its nginx.conf.

Plugin mechanism

https://blog.csdn.net/qq_42914720/article/details/114675596

The proxied URL exposed to clients is

http://nginx.kube.com:30955/stream/quality/config/add

while the actual backend request path is

http://vc-stream-quality-provider:8080/stream/quality/config/add

balance()

upstream upstream_balancer {
        ### Attention!!!
        #
        # We no longer create "upstream" section for every backend.
        # Backends are handled dynamically using Lua. If you would like to debug
        # and see what backends ingress-nginx has in its memory you can
        # install our kubectl plugin https://kubernetes.github.io/ingress-nginx/kubectl-plugin.
        # Once you have the plugin you can use "kubectl ingress-nginx backends" command to
        # inspect current backends.
        #
        ###
        server 0.0.0.1; # placeholder
        balancer_by_lua_block {
                balancer.balance()
        }
        keepalive 32;
        keepalive_timeout  60s;
        keepalive_requests 100;
}

balancer.lua:262

function _M.balance()
  local balancer = get_balancer()
  if not balancer then
    return
  end

  local peer = balancer:balance()
  if not peer then
    ngx.log(ngx.WARN, "no peer was returned, balancer: " .. balancer.name)
    return
  end

  ngx_balancer.set_more_tries(1)

  local ok, err = ngx_balancer.set_current_peer(peer)
  if not ok then
    ngx.log(ngx.ERR, "error while setting current upstream peer ", peer,
            ": ", err)
  end
end

get_balancer

balancer.lua:226:

local function get_balancer()
  if ngx.ctx.balancer then
    return ngx.ctx.balancer
  end

  local backend_name = ngx.var.proxy_upstream_name  -- when does this get set?

  local balancer = balancers[backend_name]
  if not balancer then
    return
  end

  if route_to_alternative_balancer(balancer) then
    local alternative_backend_name = balancer.alternative_backends[1]
    ngx.var.proxy_alternative_upstream_name = alternative_backend_name

    balancer = balancers[alternative_backend_name]
  end

  ngx.ctx.balancer = balancer

  return balancer
end

At request time, ngx.var.proxy_upstream_name resolves to

default-vc-stream-quality-provider-8080

and the controller then finds balancers["default-vc-stream-quality-provider-8080"].
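That backend key follows ingress-nginx's <namespace>-<service>-<port> naming convention. A minimal Python sketch of the convention (the function name is mine, not part of ingress-nginx):

```python
def backend_name(namespace: str, service: str, port: int) -> str:
    """Build an ingress-nginx style backend key: <namespace>-<service>-<port>."""
    return f"{namespace}-{service}-{port}"

# The service in this article maps to the key seen in ngx.var.proxy_upstream_name:
print(backend_name("default", "vc-stream-quality-provider", 8080))
# default-vc-stream-quality-provider-8080
```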

The exact content I copied out of the controller is shown below.

It splits into two files: nginx.conf and a Lua script.

To locate the controller pod:

kubectl get pods -A | grep ingress

location / {
    set_by_lua_file $cur_ups /etc/nginx/lua/zhangzw.lua;
    proxy_next_upstream off;
    proxy_set_header Host $host:$server_port;
    proxy_set_header Remote_Addr $remote_addr;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #proxy_pass http://$cur_ups;
    proxy_pass http://upstream_balancer;
}

vi lua/zhangzw.lua

-- read the request headers
local headers_tab = ngx.req.get_headers()
-- take the target backend name from the "serviceName" header
ngx.var.proxy_upstream_name = headers_tab["serviceName"]
return ""
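In effect, zhangzw.lua reduces backend selection to a dictionary lookup keyed by the serviceName request header. A minimal Python sketch of that selection logic (all names and values here are hypothetical):

```python
def select_backend(headers: dict, balancers: dict):
    """Pick a backend by the serviceName request header, as zhangzw.lua does."""
    return balancers.get(headers.get("serviceName"))

# Hypothetical in-memory backend table, keyed like ngx.var.proxy_upstream_name:
balancers = {"default-vc-stream-quality-provider-8080": "vc-stream-quality-provider:8080"}
headers = {"serviceName": "default-vc-stream-quality-provider-8080"}
print(select_backend(headers, balancers))
# vc-stream-quality-provider:8080
```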

Wrapping up

How do we make the code above take effect through YAML configuration?

The idea is to have ngx.var.proxy_upstream_name set dynamically via YAML configuration.

We already know that this variable's value comes from the backend section of ingress.yaml:

    backend:
      serviceName: vc-stream-quality-provider
      servicePort: 8080

This article is a useful reference:

https://blog.csdn.net/nangonghen/article/details/117759125

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-nginx
  annotations:
    # use the shared ingress-nginx
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/configuration-snippet: |
      ngx.var.proxy_upstream_name=$serviceName;

spec:
  rules:
  - host: nginx.kube.com
    http:
      paths:
      - path: /
        backend:
          serviceName: vc-stream-quality-provider
          servicePort: 8080
      - path: /hello
        backend:
          serviceName: rdp-svnadmin
          servicePort: 8080
  - host: nengli.kube.com
    http:
      paths:
      - path: /
        backend:
          serviceName: open-platform-portal
          servicePort: 80
