
Setting Up a Continuous Integration Environment (Part 2): Kubernetes Dynamic Provisioning

2019-04-29  一个全栈的小白

The goal of Dynamic Provisioning is to fully automate the lifecycle management of storage resources, so that users no longer have to manage storage by hand: storage can be created and resized dynamically, on demand. Dynamic provisioning was promoted to stable in the Kubernetes 1.6 release.


StorageClass is the key to dynamic provisioning. It is essentially an abstraction over the underlying storage medium: different storage backends are given a uniform representation and behavior. A StorageClass uses the physical medium that a specific storage platform or cloud provider exposes to Kubernetes to generate PVs dynamically, so when using one you must name the internal provisioner or external provisioner that determines which physical medium backs the generated PVs. The internal provisioners currently supported by Kubernetes:

| Volume Plugin        | Internal Provisioner | Config Example   |
|----------------------|----------------------|------------------|
| AWSElasticBlockStore | ✓                    | AWS EBS          |
| AzureFile            | ✓                    | Azure File       |
| AzureDisk            | ✓                    | Azure Disk       |
| CephFS               | -                    | -                |
| Cinder               | ✓                    | OpenStack Cinder |
| FC                   | -                    | -                |
| Flexvolume           | -                    | -                |
| Flocker              | ✓                    | -                |
| GCEPersistentDisk    | ✓                    | GCE PD           |
| Glusterfs            | ✓                    | Glusterfs        |
| iSCSI                | -                    | -                |
| Quobyte              | ✓                    | Quobyte          |
| NFS                  | -                    | -                |
| RBD                  | ✓                    | Ceph RBD         |
| VsphereVolume        | ✓                    | vSphere          |
| PortworxVolume       | ✓                    | Portworx Volume  |
| ScaleIO              | ✓                    | ScaleIO          |
| StorageOS            | ✓                    | StorageOS        |
| Local                | -                    | Local            |

Because this environment runs locally, I use an NFS server as the physical medium behind the StorageClass. NFS has no internal provisioner in Kubernetes, so we rely on the external Kubernetes NFS-Client Provisioner.

Part 1 of this series, 持续集成环境搭建(一):kubernetes安装, already set up the K8S cluster. You still need an NFS server (see Linux下NFS服务器的搭建与配置 for setup details) and must make sure every node has network connectivity to the NFS server.
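
For reference, a typical export for this setup might look like the following (a sketch only; the subnet and export options are my assumptions — no_root_squash is commonly needed because the provisioner creates directories as root):

~]#  cat /etc/exports
/data/localk8s  192.168.120.0/24(rw,sync,no_root_squash)
~]#  exportfs -r        # reload the export table after editing /etc/exports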

NFS-Client Provisioner

Step 1: Install the NFS client on every node

~]#  yum -y install nfs-utils        # NFS client utilities (mount.nfs, showmount, ...)
~]#  systemctl start rpcbind         # nfs-utils is a package, not a service; the client side needs rpcbind running
~]#  systemctl enable rpcbind
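
Before going further, it is worth confirming from each node that the export is actually reachable (a quick sanity check; 192.168.120.169 and /data/localk8s/ are the server address and export path used later in this article):

~]#  showmount -e 192.168.120.169                                      # should list /data/localk8s
~]#  mount -t nfs 192.168.120.169:/data/localk8s/ /mnt && umount /mnt  # trial mount, then clean up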

Step 2: Create the provisioner resources from the master

1. Create the ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner

2. Grant the ServiceAccount the required RBAC permissions:

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get","list","watch","create","delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get","list","watch","update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create","update","patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata: 
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
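
Assuming the manifests above are saved as serviceaccount.yaml and rbac.yaml (the file names are arbitrary), apply them from the master and check that the objects exist:

~]#  kubectl apply -f serviceaccount.yaml -f rbac.yaml
~]#  kubectl get serviceaccount nfs-client-provisioner       # in the default namespace
~]#  kubectl get clusterrole nfs-client-provisioner-runner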

3. Create the StorageClass:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: fuseim.pri/ifs   # must match the PROVISIONER_NAME env of the provisioner Deployment below
parameters:
  archiveOnDelete: "false"    # "false": delete the backing directory with the PVC; "true": archive it instead
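
Apply it the same way (storageclass.yaml is again an arbitrary file name). Optionally, the standard is-default-class annotation makes it the cluster default, so PVCs that omit storageClassName also go through it:

~]#  kubectl apply -f storageclass.yaml
~]#  kubectl get storageclass           # managed-nfs-storage should list fuseim.pri/ifs as its provisioner
~]#  kubectl patch storageclass managed-nfs-storage \
        -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'   # optional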

4. Create the Deployment:

apiVersion: apps/v1               # extensions/v1beta1 is deprecated; apps/v1 requires the explicit selector below
kind: Deployment
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate                # kill the old Pod before starting a new one, so only one provisioner runs at a time
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest # hosted on quay.io; a mirror or proxy may be needed in mainland China
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs # must match the provisioner field of the StorageClass above
            - name: NFS_SERVER
              value: 192.168.120.169
            - name: NFS_PATH
              value: /data/localk8s/
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.120.169
            path: /data/localk8s/
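
Apply the Deployment (saved here as deployment.yaml, another arbitrary name) and make sure the Pod reaches Running; if it hangs in ContainerCreating, the NFS mount of the export is the usual suspect:

~]#  kubectl apply -f deployment.yaml
~]#  kubectl get pods -l app=nfs-client-provisioner
~]#  kubectl logs -l app=nfs-client-provisioner       # should show the provisioner starting its watch for claims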

In the NFS-Client Provisioner, PVs are created dynamically by Go code running inside the container. Looking at the YAML above, the only link between the StorageClass and the Deployment is a container environment variable, which raises a question: how does the StorageClass reach the NFS-Client Provisioner Pod through its provisioner field? The answer is that there is no direct connection. The provisioner process watches PersistentVolumeClaim objects through the API server; whenever it sees a claim whose StorageClass carries a provisioner string equal to its own PROVISIONER_NAME (fuseim.pri/ifs here), it creates a subdirectory on the NFS export plus a PV pointing at it, and binds the PV to the claim.
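
This loop is easy to observe with a throwaway claim (a minimal sketch: the name test-claim and the 1Mi request are arbitrary, modeled on the test claim shipped with the upstream project):

~]#  kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # routes the claim to the fuseim.pri/ifs provisioner
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
EOF
~]#  kubectl get pvc test-claim     # should turn Bound once the provisioner has created a PV
~]#  kubectl get pv                 # the PV is backed by a fresh subdirectory under /data/localk8s
~]#  kubectl delete pvc test-claim  # with archiveOnDelete "false" the directory is deleted too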
