
Setting Up an NFS StorageClass on OpenShift

2019-03-25  潘晓华Michael

What Is Dynamic Storage

OpenShift persistent volumes (PVs) come in two flavors: statically provisioned, where an administrator creates each PV by hand, and dynamically provisioned, where PVs are created on demand.

What Is a StorageClass

A StorageClass names the provisioner responsible for creating PVs on demand; a PVC that references it gets a matching PV created and bound automatically.

Using NFS Before StorageClass

Every claim required an administrator to create a PV by hand. In a word: tedious.
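For comparison, this is roughly what had to be written out by hand for every single claim; a minimal sketch, where the PV name, server address, and path are placeholders:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manual-nfs-pv-1          # one of these per claim, created by hand
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: nfs.example.com      # placeholder NFS server hostname
    path: /nfsdata/manual-pv-1   # a directory you also had to create by hand
```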

Enter the StorageClass

Configure once, provision automatically forever, no manual PV creation. In a word: convenient.

How the NFS Provisioner Works

  1. A new PVC either falls through to the default StorageClass or explicitly names the NFS StorageClass.
  2. The pod running nfs-client-provisioner creates a new subdirectory under the shared NFS export and creates a new PV pointing at that directory.
  3. The PVC is bound to the PV created in step 2, completing the claim.
  4. The PVC can now be mounted by any pod that references it.
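The subdirectory created in step 2 follows nfs-client-provisioner's naming convention, `${namespace}-${pvcName}-${pvName}`. A pure-shell illustration, using the example names that appear later in this article:

```shell
# Directory name the provisioner creates under the NFS export:
#   ${namespace}-${pvcName}-${pvName}
NAMESPACE=test
PVC_NAME=hello-pvc
PV_NAME=pvc-fb952566-4bed-11e9-9007-525400ad3b43
echo "${NAMESPACE}-${PVC_NAME}-${PV_NAME}"
# → test-hello-pvc-pvc-fb952566-4bed-11e9-9007-525400ad3b43
```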

Configuring the NFS StorageClass Step by Step

  1. Prepare the NFS server
$ yum install nfs-utils -y
$ mkdir -p /nfsdata/share
$ chown nfsnobody:nfsnobody /nfsdata/share
$ chmod 700 /nfsdata/share

$ # Open the ports NFS needs
$ iptables -A INPUT -p tcp --dport 111 -j ACCEPT
$ iptables -A INPUT -p udp --dport 111 -j ACCEPT
$ iptables -A INPUT -p tcp --dport 2049 -j ACCEPT
$ iptables -A INPUT -p udp --dport 2049 -j ACCEPT

$ # Configure the NFS export
$ echo "/nfsdata/share *(rw,async,no_root_squash)" >> /etc/exports
$ exportfs -a # reload the export configuration
$ showmount -e # list the currently exported directories

$ # Start NFS
$ systemctl restart nfs
  2. Choose the project (namespace) the provisioner will run in (the default is default)
    To use the default project:
$ oc project default

To deploy it in a custom project instead, create a new one:

$ oc new-project nfs-provisioner
  3. If the provisioner is deployed in a project other than default, update the namespace in rbac.yaml, then create the resources and grant the SCC
$ cat rbac.yaml
kind: ServiceAccount
apiVersion: v1
metadata:
  name: nfs-client-provisioner
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

$ NAMESPACE=`oc project -q`
$ sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" ./deploy/rbac.yaml

$ oc create -f deploy/rbac.yaml
$ oc adm policy add-scc-to-user hostmount-anyuid system:serviceaccount:$NAMESPACE:nfs-client-provisioner
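The sed substitution above can be sanity-checked locally before touching the real rbac.yaml; a throwaway sketch, where the temp file path and namespace value are only illustrative:

```shell
# Write a minimal fragment with the same `namespace:` field as rbac.yaml,
# apply the same sed expression, and confirm the rewrite took effect.
NAMESPACE=nfs-provisioner
cat > /tmp/rbac-check.yaml <<'EOF'
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
EOF
sed -i'' "s/namespace:.*/namespace: $NAMESPACE/g" /tmp/rbac-check.yaml
grep 'namespace:' /tmp/rbac-check.yaml  # the namespace line now reads nfs-provisioner
```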
  4. Update deploy/deployment.yaml with your NFS server settings
$ cat deploy/deployment.yaml
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-client-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: docker.io/xhuaustc/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: fuseim.pri/ifs
            - name: NFS_SERVER
              value: <YOUR NFS SERVER HOSTNAME>
            - name: NFS_PATH
              value: /nfsdata/share
      volumes:
        - name: nfs-client-root
          nfs:
            server: <YOUR NFS SERVER HOSTNAME>
            path: /nfsdata/share

$ oc create -f deploy/deployment.yaml
  5. Create the StorageClass
$ cat deploy/class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
  annotations:
    storageclass.kubernetes.io/is-default-class: "true" # make this the default StorageClass for new PVCs
provisioner: fuseim.pri/ifs # must match the PROVISIONER_NAME env var in the deployment
parameters:
  archiveOnDelete: "true" # "false" discards the data when the PVC is deleted; "true" keeps (archives) it
reclaimPolicy: Delete
$ oc create -f deploy/class.yaml

Using the NFS StorageClass

  1. Create a PVC
$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: managed-nfs-storage
    volume.beta.kubernetes.io/storage-provisioner: fuseim.pri/ifs
  name: testpvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ oc create -f pvc.yaml

If the StorageClass carries the annotation storageclass.kubernetes.io/is-default-class: "true", the PVC can be even simpler:

$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: hello-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
$ oc create -f pvc.yaml
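Once bound, the claim is consumed like any other volume; a hypothetical pod mounting hello-pvc (the pod name, image, and mount path are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod              # hypothetical pod name
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data     # NFS-backed directory inside the container
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: hello-pvc   # the PVC created above
```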
  2. Check the PV and PVC
$ oc get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM                            STORAGECLASS          REASON    AGE
pvc-fb952566-4bed-11e9-9007-525400ad3b43   1Gi        RWO            Delete           Bound      test/hello-pvc                 managed-nfs-storage             5m

$ oc get pvc
NAME        STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE
hello-pvc   Bound     pvc-fb952566-4bed-11e9-9007-525400ad3b43   1Gi        RWO            managed-nfs-storage   4m
  3. If the StorageClass sets archiveOnDelete: "true", deleting the PVC archives its data directory
$ ls /nfsdata/share
test-hello-pvc-pvc-fb952566-4bed-11e9-9007-525400ad3b43

$ oc delete pvc hello-pvc
$ ls /nfsdata/share
archived-test-hello-pvc-pvc-fb952566-4bed-11e9-9007-525400ad3b43
$ # The data directory was renamed with an archived- prefix, and the corresponding PV and PVC were deleted
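The archive step is essentially a rename on the share; a local sketch of the effect, using an illustrative /tmp path and directory name:

```shell
# Simulate what the provisioner does on delete when archiveOnDelete is "true":
# the PV's directory is renamed with an "archived-" prefix instead of removed.
DEMO=/tmp/nfsdata-demo
mkdir -p "$DEMO/test-hello-pvc-pvc-123"
mv "$DEMO/test-hello-pvc-pvc-123" "$DEMO/archived-test-hello-pvc-pvc-123"
ls "$DEMO"
# → archived-test-hello-pvc-pvc-123
```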

Summary

With an NFS StorageClass in place, provisioning storage on OpenShift becomes simple and convenient.
OpenShift NFS dynamic provisioner source code: https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client

Source: https://mp.weixin.qq.com/s/HgDCDgYjkX5en7ORNeG0yA
