Deploying EFK on OCP 4.4 with local-volume persistence
Overview
To run EFK in production we need to prepare the right resources: memory, persistent storage, pinning the Elasticsearch nodes, and so on. For pinning nodes, the official documentation uses taints. Most customer sites have no storage like Ceph RBD, only NAS, but NAS cannot satisfy Elasticsearch in either file-system semantics or performance, and ES also expects a StorageClass. From a performance standpoint, local-volume is therefore a good fit.
Using NFS storage as a volume or a persistent volume (or via NAS such as Gluster) is not supported for Elasticsearch storage, as Lucene relies on file system behavior that NFS does not supply. Data corruption and other problems can occur.
Deploying the local-volume StorageClass
- Create the local-storage project
oc new-project local-storage
- Install the Local Storage Operator
Operators → OperatorHub → Local Storage Operator → click Install → select the local-storage namespace → click Subscribe. (A CLI sketch of the same subscription follows.)
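The same installation can be done from the CLI instead of the console. A minimal sketch, assuming the local-storage-operator package and the "4.4" channel from the redhat-operators catalog (verify with oc get packagemanifests local-storage-operator -n openshift-marketplace):
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: local-operator-group
  namespace: local-storage
spec:
  targetNamespaces:
  - local-storage
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: local-storage
spec:
  channel: "4.4"                 # assumed channel for OCP 4.4; check the packagemanifest
  name: local-storage-operator   # assumed package name
  source: redhat-operators
  sourceNamespace: openshift-marketplace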
- Check the pod status
# oc -n local-storage get pods
NAME READY STATUS RESTARTS AGE
local-storage-operator-7cd4799b4b-6bzg4 1/1 Running 0 12h
- Add a disk to each of the three ES nodes (here /dev/sdb, 50G; 200G is recommended), then create localvolume.yaml:
The nodeSelector picks the ES nodes, and storageClassDevices specifies the disk device, the file system type, and the StorageClass.
apiVersion: "local.storage.openshift.io/v1"
kind: "LocalVolume"
metadata:
name: "local-disks"
namespace: "local-storage"
spec:
nodeSelector:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- worker02.ocp44.cluster1.com
- worker03.ocp44.cluster1.com
- worker04.ocp44.cluster1.com
storageClassDevices:
- storageClassName: "local-sc"
volumeMode: Filesystem
fsType: xfs
devicePaths:
- /dev/sdb
- Create it
oc create -f localvolume.yaml
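To confirm that the new disk is actually visible on each node (and, after the LocalVolume is created, that the diskmaker has linked it under /mnt/local-storage), a quick check via a debug pod, using worker02 from the example above:
# oc debug node/worker02.ocp44.cluster1.com -- chroot /host lsblk /dev/sdb
# oc debug node/worker02.ocp44.cluster1.com -- chroot /host ls -l /mnt/local-storage/local-sc/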
- Check the pods
# oc get all -n local-storage
NAME READY STATUS RESTARTS AGE
pod/local-disks-local-diskmaker-7p448 1/1 Running 0 43m
pod/local-disks-local-diskmaker-grkjx 1/1 Running 0 43m
pod/local-disks-local-diskmaker-lmknj 1/1 Running 0 43m
pod/local-disks-local-provisioner-5s9nk 1/1 Running 0 43m
pod/local-disks-local-provisioner-hv42l 1/1 Running 0 43m
pod/local-disks-local-provisioner-tzlkt 1/1 Running 0 43m
pod/local-storage-operator-7cd4799b4b-6bzg4 1/1 Running 0 12h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/local-storage-operator ClusterIP 172.30.93.34 <none> 60000/TCP 12h
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/local-disks-local-diskmaker 3 3 3 3 3 <none> 11h
daemonset.apps/local-disks-local-provisioner 3 3 3 3 3 <none> 11h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/local-storage-operator 1/1 1 1 12h
NAME DESIRED CURRENT READY AGE
replicaset.apps/local-storage-operator-7cd4799b4b 1 1 1 12h
- Check the PVs
# oc get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
local-pv-2337578c 50Gi RWO Delete Available local-sc 4m42s
local-pv-77162aba 50Gi RWO Delete Available local-sc 4m38s
local-pv-cc7b7951 50Gi RWO Delete Available local-sc 4m46s
- PV contents
oc get pv local-pv-2337578c -oyaml
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/provisioned-by: local-volume-provisioner-worker02.ocp44.cluster1.com-e1f9a639-6872-43d7-b53c-d6255b3d7976
  creationTimestamp: "2020-05-25T15:29:46Z"
  finalizers:
  - kubernetes.io/pv-protection
  labels:
    storage.openshift.com/local-volume-owner-name: local-disks
    storage.openshift.com/local-volume-owner-namespace: local-storage
  name: local-pv-2337578c
  resourceVersion: "5661501"
  selfLink: /api/v1/persistentvolumes/local-pv-2337578c
  uid: 7f72ebb4-7212-4f0f-9f1a-d0af103ed70e
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 50Gi
  local:
    fsType: xfs
    path: /mnt/local-storage/local-sc/sdb
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker02.ocp44.cluster1.com
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-sc
  volumeMode: Filesystem
status:
  phase: Available
- Check the StorageClass
# oc get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
local-sc kubernetes.io/no-provisioner Delete WaitForFirstConsumer false 11h
- StorageClass contents
# oc get sc local-sc -oyaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  creationTimestamp: "2020-05-25T04:09:31Z"
  labels:
    local.storage.openshift.io/owner-name: local-disks
    local.storage.openshift.io/owner-namespace: local-storage
  name: local-sc
  resourceVersion: "5273371"
  selfLink: /apis/storage.k8s.io/v1/storageclasses/local-sc
  uid: 0c625dad-3879-43b1-9b0a-f0606de91e5a
provisioner: kubernetes.io/no-provisioner
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
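Note the combination of the kubernetes.io/no-provisioner provisioner and volumeBindingMode: WaitForFirstConsumer: a PVC against local-sc stays Pending until a pod actually mounts it, which is expected behavior rather than an error. A hypothetical test claim (test-local-pvc is not part of the EFK deployment) would look like this and only bind once a consuming pod is scheduled:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-local-pvc
  namespace: local-storage
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-sc
  resources:
    requests:
      storage: 10Gi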
Deploying the Elasticsearch Operator
Operators → OperatorHub → Elasticsearch Operator → click Install → for Installation Mode select All namespaces on the cluster → for Installed Namespace select openshift-operators-redhat → check Enable operator recommended cluster monitoring on this namespace → pick an Update Channel and Approval Strategy → click Subscribe → then verify under Operators → Installed Operators that the Elasticsearch Operator status is Succeeded. (A CLI sketch of the same installation follows.)
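A CLI sketch of the equivalent objects, assuming the elasticsearch-operator package and the "4.4" channel from the redhat-operators catalog (an OperatorGroup with an empty spec watches all namespaces):
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-operators-redhat
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-operators-redhat
  namespace: openshift-operators-redhat
spec: {}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: elasticsearch-operator
  namespace: openshift-operators-redhat
spec:
  channel: "4.4"               # assumed; verify with oc get packagemanifests elasticsearch-operator -n openshift-marketplace
  name: elasticsearch-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace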
Deploying the Cluster Logging Operator
Operators → OperatorHub → Cluster Logging Operator → click Install → for Installation Mode select A specific namespace on the cluster → for Installed Namespace select openshift-logging → check Enable operator recommended cluster monitoring on this namespace → pick an Update Channel and Approval Strategy → click Subscribe → verify the status under Installed Operators → check the pods under Workloads → Pods. (A CLI sketch follows.)
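The CLI equivalent mirrors the Elasticsearch Operator sketch above, but scoped to openshift-logging; the cluster-logging package name and the "4.4" channel are assumptions to verify against the catalog:
apiVersion: v1
kind: Namespace
metadata:
  name: openshift-logging
  labels:
    openshift.io/cluster-monitoring: "true"
---
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  targetNamespaces:
  - openshift-logging
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: cluster-logging
  namespace: openshift-logging
spec:
  channel: "4.4"               # assumed
  name: cluster-logging
  source: redhat-operators
  sourceNamespace: openshift-marketplace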
Installing EFK
Administration → Custom Resource Definitions → ClusterLogging → Custom Resource Definition Overview page → Instances → click Create ClusterLogging, using the following content:
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: "openshift-logging"
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
elasticsearch:
nodeCount: 3
storage:
storageClassName: local-sc
size: 48G
resources:
limits:
cpu: "4"
memory: "16Gi"
requests:
cpu: "4"
memory: "16Gi"
redundancyPolicy: "SingleRedundancy"
visualization:
type: "kibana"
kibana:
replicas: 1
curation:
type: "curator"
curator:
schedule: "30 3 * * *"
collection:
logs:
type: "fluentd"
fluentd: {}
For Elasticsearch, the main settings are the node count, the StorageClass name, the storage size, and the resource quota (give it as much memory as you can). With three nodes, SingleRedundancy, i.e. one replica per primary shard, is usually enough; higher redundancy levels consume much more storage, so adjust to your situation. Curator is scheduled here to run a cleanup every day at 03:30; by default it deletes data older than 30 days, and it can also be configured per index or per project index: https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-curator.html
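Once the instance is created, a quick way to verify the rollout is to check that the logging pods come up, that the Elasticsearch PVCs bound to the local PVs, and that the ES cluster reports green. The pod name below is a placeholder; es_util is the query helper shipped in the elasticsearch container of the cluster-logging images:
# oc get pods -n openshift-logging
# oc get pvc -n openshift-logging
# oc get pv | grep local-sc
# oc exec -n openshift-logging -c elasticsearch <elasticsearch-pod> -- es_util --query=_cluster/health?pretty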
Additional notes
Pinning EFK to specific nodes
EFK components can be pinned to specific nodes with taints/tolerations or with a nodeSelector, but since the local-volume PVs already bind the ES pods to their nodes, no extra pinning is needed in this setup. One caveat with taints/tolerations: after a node is tainted, some infra pods get evicted, for example the dns and machine-config-daemon pods, because they do not tolerate the taint we added. Their operators expose no corresponding toleration setting; you can edit those DaemonSets directly and the change is not reverted, but that approach is not standard and may cause problems.
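If you do go the taint route, the taint that the tolerations below match (key logging, effect NoExecute; operator Exists tolerates any value) could be applied like this, where the node name and the value reserved are placeholders:
# oc adm taint nodes worker02.ocp44.cluster1.com logging=reserved:NoExecute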
- tolerations
apiVersion: "logging.openshift.io/v1"
kind: "ClusterLogging"
metadata:
name: "instance"
namespace: openshift-logging
spec:
managementState: "Managed"
logStore:
type: "elasticsearch"
elasticsearch:
nodeCount: 1
tolerations:
- key: "logging"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
resources:
limits:
memory: 8Gi
requests:
cpu: 100m
memory: 1Gi
storage: {}
redundancyPolicy: "ZeroRedundancy"
visualization:
type: "kibana"
kibana:
tolerations:
- key: "logging"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 1Gi
replicas: 1
curation:
type: "curator"
curator:
tolerations:
- key: "logging"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
resources:
limits:
memory: 200Mi
requests:
cpu: 100m
memory: 100Mi
schedule: "*/5 * * * *"
collection:
logs:
type: "fluentd"
fluentd:
tolerations:
- key: "logging"
operator: "Exists"
effect: "NoExecute"
tolerationSeconds: 6000
resources:
limits:
memory: 2Gi
requests:
cpu: 100m
memory: 1Gi
- nodeSelector
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
....
spec:
  collection:
    logs:
      fluentd:
        resources: null
      type: fluentd
  curation:
    curator:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      resources: null
      schedule: 30 3 * * *
    type: curator
  logStore:
    elasticsearch:
      nodeCount: 3
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      redundancyPolicy: SingleRedundancy
      resources:
        limits:
          cpu: 500m
          memory: 16Gi
        requests:
          cpu: 500m
          memory: 16Gi
      storage: {}
    type: elasticsearch
  managementState: Managed
  visualization:
    kibana:
      nodeSelector:
        node-role.kubernetes.io/infra: ''
      proxy:
        resources: null
      replicas: 1
      resources: null
    type: kibana
....
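For the nodeSelector variant above to schedule anything, the target nodes must carry the infra role label, for example (the node name is just the example ES node from earlier):
# oc label node worker02.ocp44.cluster1.com node-role.kubernetes.io/infra=""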
Further notes
After dedicating a few nodes to ES, ordinary application pods can still be scheduled onto them. You can label the real application nodes with an app label and inject a nodeSelector into the project template, so that newly created projects land on the application nodes without configuring a nodeSelector on every deployment (a sketch follows).
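A sketch of that project-template approach, following the configuring-project-creation doc in the references; the app=true label and the <app-node> placeholder are illustrative assumptions:
# label the real application nodes
oc label node <app-node> app=true
# generate the default bootstrap project template and edit it
oc adm create-bootstrap-project-template -o yaml > project-template.yaml
# in the Project object of the template, add:
#   metadata:
#     annotations:
#       openshift.io/node-selector: "app=true"
# create the template and point project creation at it
oc create -f project-template.yaml -n openshift-config
oc edit project.config.openshift.io/cluster     # set spec.projectRequestTemplate.name to the template's name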
If ES uses storage such as Ceph RBD, then a nodeSelector or taints are required, otherwise the ES pods will drift across nodes. The same applies to Prometheus.
References
https://docs.openshift.com/container-platform/4.4/logging/config/cluster-logging-tolerations.html
https://docs.openshift.com/container-platform/4.4/logging/cluster-logging-moving-nodes.html
https://docs.openshift.com/container-platform/4.4/applications/projects/configuring-project-creation.html
https://docs.openshift.com/container-platform/4.4/networking/configuring-networkpolicy.html#nw-networkpolicy-creating-default-networkpolicy-objects-for-a-new-project
https://access.redhat.com/solutions/4946861