Using local-path-provisioner in k3s to Implement Local PV
2020-10-26
万州客
Local PV was introduced in Kubernetes 1.10, essentially to address the shortcomings of hostPath. By combining the PV controller with the scheduler, local PVs receive special scheduling logic, so that a Pod is placed back on the same node every time it is rescheduled.
With the release of Kubernetes v1.14.0, local persistent volume management (Local Storage Management) officially went GA (stable).
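For comparison, without a dynamic provisioner a local PV has to be declared by hand and pinned to a single node through nodeAffinity. The following is only an illustrative sketch (the PV name, StorageClass name, path, and node name are placeholders, not part of the k3s setup described below):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/vol1            # directory must already exist on the node
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - k3s-agent.localdomain    # placeholder node name

The local-path-provisioner shipped with k3s creates PVs like this automatically, which is what the rest of this article walks through.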
I. Installing local-path-provisioner
1. Once k3s has finished installing normally, local-path-provisioner is already pre-installed.
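The pre-installed system pods can be listed, for example, with:

kubectl get pods -n kube-system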
NAME READY STATUS RESTARTS AGE
svclb-traefik-wwr4q 2/2 Running 6 19h
svclb-traefik-tkzjl 2/2 Running 4 8h
helm-install-traefik-zdsx4 0/1 Completed 0 33m
traefik-758cd5fc85-m8j5n 1/1 Running 0 33m
coredns-7944c66d8d-bdk4f 1/1 Running 2 8h
local-path-provisioner-6d59f47c7-l2zc9 1/1 Running 3 8h
metrics-server-7566d596c8-csdqb 1/1 Running 3 8h
The second-to-last pod is local-path-provisioner-6d59f47c7-l2zc9.
2. The local-storage configuration
/var/lib/rancher/k3s/server/manifests/local-storage.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: local-path-provisioner-service-account
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: local-path-provisioner-role
rules:
- apiGroups: [""]
  resources: ["nodes", "persistentvolumeclaims"]
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["endpoints", "persistentvolumes", "pods"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["events"]
  verbs: ["create", "patch"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: local-path-provisioner-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: local-path-provisioner-role
subjects:
- kind: ServiceAccount
  name: local-path-provisioner-service-account
  namespace: kube-system
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-path-provisioner
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: local-path-provisioner
  template:
    metadata:
      labels:
        app: local-path-provisioner
    spec:
      serviceAccountName: local-path-provisioner-service-account
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      - key: "node-role.kubernetes.io/master"
        operator: "Exists"
        effect: "NoSchedule"
      containers:
      - name: local-path-provisioner
        image: rancher/local-path-provisioner:v0.0.11
        imagePullPolicy: IfNotPresent
        command:
        - local-path-provisioner
        - start
        - --config
        - /etc/config/config.json
        volumeMounts:
        - name: config-volume
          mountPath: /etc/config/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      volumes:
      - name: config-volume
        configMap:
          name: local-path-config
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-path
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: rancher.io/local-path
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: local-path-config
  namespace: kube-system
data:
  config.json: |-
    {
      "nodePathMap":[
        {
          "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
          "paths":["/var/lib/rancher/k3s/storage"]
        }
      ]
    }
/var/lib/rancher/k3s/storage is the storage directory used on each node.
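If a particular node should use a different directory, config.json supports additional per-node entries in nodePathMap. A sketch (the node name is taken from later in this article and the extra path is a placeholder):

{
  "nodePathMap":[
    {
      "node":"DEFAULT_PATH_FOR_NON_LISTED_NODES",
      "paths":["/var/lib/rancher/k3s/storage"]
    },
    {
      "node":"k3s-agent.localdomain",
      "paths":["/data/k3s-storage"]
    }
  ]
}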
3. Confirm that the StorageClass resource has been created
[root@localhost yaml]# kubectl get storageclass -n kube-system
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-path (default) rancher.io/local-path Delete WaitForFirstConsumer false 20h
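The (default) marker comes from the storageclass.kubernetes.io/is-default-class annotation in the manifest above. If you ever need to set (or unset) that flag on an existing StorageClass, kubectl patch can be used, for example:

kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'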
II. Creating a PVC
1. The contents of pvc.yaml are as follows:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 50Mi
2. Apply the YAML file to the k3s cluster
kubectl apply -f pvc.yaml
3. Check the PV
kubectl get pv -A
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd 50Mi RWO Delete Bound default/local-path-pvc local-path 25m
4. Check the PVC
kubectl get pvc -A
NAMESPACE NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
default local-path-pvc Bound pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd 50Mi RWO local-path 27m
III. Creating a test Pod
1. The contents of pvc-pod.yaml are as follows:
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:1.18-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: local-path-pvc
2. Apply the YAML file to the k3s cluster
kubectl apply -f pvc-pod.yaml
3. Check which node the pod was scheduled to
kubectl get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-deployment-559fdddb7b-cb8xz 1/1 Running 2 7h39m 10.42.2.19 localhost.localdomain <none> <none>
nginx-deployment-559fdddb7b-qw6pn 1/1 Running 2 7h39m 10.42.1.23 k3s-agent.localdomain <none> <none>
volume-test 1/1 Running 0 34s 10.42.1.27 k3s-agent.localdomain <none> <none>
4. Check the directory on the k3s-agent.localdomain node
ll /var/lib/rancher/k3s/
total 4
drwx------. 3 root root 4096 Oct 24 21:46 agent
drwxr-xr-x. 3 root root 78 Oct 24 21:46 data
drwxr-xr-x. 3 root root 54 Oct 25 17:10 storage
As you can see, the directory defined in local-storage.yaml has been created.
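The binding of the volume to this specific node can also be confirmed from the PV object itself, whose spec records the host path and a nodeAffinity pinning it to k3s-agent.localdomain. For example (using the PV name from the output above):

kubectl describe pv pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd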
IV. Verification
1. Write a file inside the pod
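The prompts below come from a shell inside the pod; one way to open it (using the pod name from above) is:

kubectl exec -it volume-test -n default -- sh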
/data # echo "hello, local PV" > pvc-test
/data # cat pvc-test
hello, local PV
/data # pwd
/data
2. Check whether the same file exists on the local PV
pwd
/var/lib/rancher/k3s/storage/pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd
[root@k3s-agent pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd]# ls
pvc-test
[root@k3s-agent pvc-1d6d8a75-93cd-4f89-bef7-2251ce1695bd]# cat pvc-test
hello, local PV
As you can see, the Local PV test was successful.
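As an optional further check, the pod can be deleted and recreated; because the PV carries node affinity, the new pod should land on the same node and still see the file. A sketch (run the last command once the new pod is Running):

kubectl delete pod volume-test
kubectl apply -f pvc-pod.yaml
kubectl exec -it volume-test -- cat /data/pvc-test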