Deploying Application Services on Kubernetes
2022-06-19
想成为大师的学徒小纪
Following on from the previous article, Kubernetes High-Availability Deployment, this article walks through the steps for deploying an application service to a k8s platform.
I. Install the Network File System (NFS)
This is a single-node installation and provides no high-availability redundancy.
- Install the packages
yum -y install nfs-utils
- Write the configuration file
cat > /etc/exports <<'EOF'
/data/share *(rw,sync,root_squash,all_squash,anonuid=65534,anongid=65534,insecure,no_wdelay,hide,no_subtree_check)
EOF
- Create the directory and set ownership
mkdir -p /data/share
chown nfsnobody. /data/share
- Start the services
systemctl start rpcbind
systemctl enable rpcbind
systemctl start nfs
systemctl enable nfs
showmount -e 192.168.87.154
- Test that it works
<!== Run on another machine ==>
$ yum -y install nfs-utils
$ mount -t nfs 192.168.87.154:/data/share /mnt
$ echo 123 > /mnt/text
$ ssh -p 60025 192.168.87.154 "cat /data/share/text"
123
II. Install a Highly Available Harbor Registry with helm
1. Install an external PostgreSQL database
- Install the software
yum -y install gcc* readline-devel wget zlib zlib-devel
cd /usr/local/src && wget --no-check-certificate https://ftp.postgresql.org/pub/source/v13.3/postgresql-13.3.tar.gz
tar zxf postgresql-13.3.tar.gz
mkdir -p /data/svc
cd postgresql-13.3 && ./configure --prefix=/data/svc/pgsql
gmake && gmake install
cat >> /etc/profile <<'EOF'
export PGHOME="/data/svc/pgsql"
export PATH="$PATH:${PGHOME}/bin"
EOF
source /etc/profile
- Create the user and data directory and set ownership
groupadd postgres
useradd -g postgres postgres
mkdir -p /data/svc/pgsql/data
touch /data/svc/pgsql/.pgsql_history
chown -R postgres. /data/svc/pgsql
- Initialize the database
su - postgres
initdb -D /data/svc/pgsql/data
- Set up systemd management
cp /usr/local/src/postgresql-13.3/contrib/start-scripts/linux /etc/init.d/postgresql
chmod a+x /etc/init.d/postgresql
vim /etc/init.d/postgresql
# set these two variables inside the script:
prefix=/data/svc/pgsql
PGDATA="/data/svc/pgsql/data"
systemctl daemon-reload
systemctl start postgresql
systemctl enable postgresql
- Create the databases and user, and grant privileges
psql -U postgres
postgres=# alter user postgres with password 'Postgresql123';
ALTER ROLE
postgres=# create user harbor with password 'harbor123';
CREATE ROLE
postgres=# create database registry owner harbor;
CREATE DATABASE
postgres=# create database notary_server owner harbor;
CREATE DATABASE
postgres=# create database notary_signer owner harbor;
CREATE DATABASE
postgres=# grant all on database registry to harbor;
GRANT
postgres=# grant all on database notary_server to harbor;
GRANT
postgres=# grant all on database notary_signer to harbor;
GRANT
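As a quick sanity check, the new role should be able to log straight in to one of its databases (run as the postgres OS user, while local connections are still trusted):
psql -U harbor -d registry -c '\conninfo'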
- Edit the client authentication configuration
vim /data/svc/pgsql/data/pg_hba.conf
# "local" is for Unix domain socket connections only
local   all   all                  password
# IPv4 local connections:
host    all   all   127.0.0.1/32   password
host    all   all   0.0.0.0/0      password
# IPv6 local connections:
host    all   all   ::1/128        password

vim /data/svc/pgsql/data/postgresql.conf
listen_addresses = '10.81.56.217'
max_connections = 1024

systemctl restart postgresql
2. Create PVs and PVCs
- Create the shared directories on the NFS server
<!== Run on the NFS server host ==>
mkdir -p /data/share/harbor/jobservice
mkdir -p /data/share/harbor/redis
mkdir -p /data/share/harbor/registry
mkdir -p /data/share/harbor/trivy
chown -R nfsnobody. /data/share/harbor
cat >> /etc/exports <<'EOF'
/data/share/harbor/jobservice *(rw,sync,root_squash,all_squash,anonuid=65534,anongid=65534,insecure,no_wdelay,hide,no_subtree_check)
/data/share/harbor/redis *(rw,sync,root_squash,all_squash,anonuid=65534,anongid=65534,insecure,no_wdelay,hide,no_subtree_check)
/data/share/harbor/registry *(rw,sync,root_squash,all_squash,anonuid=65534,anongid=65534,insecure,no_wdelay,hide,no_subtree_check)
/data/share/harbor/trivy *(rw,sync,root_squash,all_squash,anonuid=65534,anongid=65534,insecure,no_wdelay,hide,no_subtree_check)
EOF
systemctl restart nfs
- Create the PV storage
harbor-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-jobservice
  labels:
    app: harbor-jobservice
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  nfs:
    path: /data/share/harbor/jobservice
    server: 192.168.87.154
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-jobservice
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-redis
  labels:
    app: harbor-redis
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  nfs:
    path: /data/share/harbor/redis
    server: 192.168.87.154
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-redis
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-registry
  labels:
    app: harbor-registry
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Gi
  nfs:
    path: /data/share/harbor/registry
    server: 192.168.87.154
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-registry
  volumeMode: Filesystem
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: harbor-trivy
  labels:
    app: harbor-trivy
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 50Gi
  nfs:
    path: /data/share/harbor/trivy
    server: 192.168.87.154
  persistentVolumeReclaimPolicy: Retain
  storageClassName: harbor-trivy
  volumeMode: Filesystem
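Apply the manifest and confirm that all four PVs show up as Available:
kubectl apply -f harbor-pv.yaml
kubectl get pv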
- Create the harbor namespace and PodPreset
kubectl create ns harbor
cat > /data/k8s/install/podpreset.yaml <<'EOF'
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: tz-env
  namespace: harbor
spec:
  selector:
    matchLabels: {}
  env:
    - name: TZ
      value: Asia/Shanghai
EOF
kubectl apply -f /data/k8s/install/podpreset.yaml
- Create PVCs to claim the PVs
harbor-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-jobservice
  namespace: harbor
  labels:
    app: harbor-jobservice
spec:
  selector:
    matchLabels:
      app: harbor-jobservice
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: harbor-jobservice
  volumeMode: Filesystem
  volumeName: harbor-jobservice
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-redis
  namespace: harbor
  labels:
    app: harbor-redis
spec:
  selector:
    matchLabels:
      app: harbor-redis
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: harbor-redis
  volumeMode: Filesystem
  volumeName: harbor-redis
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-registry
  namespace: harbor
  labels:
    app: harbor-registry
spec:
  selector:
    matchLabels:
      app: harbor-registry
  accessModes:
    - ReadWriteMany   # must match the PV's ReadWriteMany; the registry also runs two replicas
  resources:
    requests:
      storage: 100Gi
  storageClassName: harbor-registry
  volumeMode: Filesystem
  volumeName: harbor-registry
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: harbor-trivy
  namespace: harbor
  labels:
    app: harbor-trivy
spec:
  selector:
    matchLabels:
      app: harbor-trivy
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 50Gi
  storageClassName: harbor-trivy
  volumeMode: Filesystem
  volumeName: harbor-trivy
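Apply the claims and check that each one reports Bound against its matching PV:
kubectl apply -f harbor-pvc.yaml
kubectl get pvc -n harbor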
3. Install Harbor with helm
- Add the helm repo
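Harbor's published chart repository is https://helm.goharbor.io, so adding and refreshing it looks like:
helm repo add harbor https://helm.goharbor.io
helm repo update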
- Upload the existing certificate to the k8s cluster
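Assuming the same eminxing.crt/eminxing.key pair that is used for the apps namespace later on, the tls-harbor-ingress secret referenced in the values below can be created with:
kubectl create secret tls tls-harbor-ingress --cert=eminxing.crt --key=eminxing.key -n harbor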
- Modify the values configuration
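One way to get a baseline to edit is to dump the chart's defaults and then override the fields shown below:
helm show values harbor/harbor > values.yaml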
values.yaml
caSecretName: ''
chartmuseum:
  absoluteUrl: false
  affinity: {}
  automountServiceAccountToken: false
  enabled: false
  image:
    repository: goharbor/chartmuseum-photon
    tag: v2.5.1
  indexLimit: 0
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 1
  revisionHistoryLimit: 10
  serviceAccountName: ''
  tolerations: []
core:
  affinity: {}
  artifactPullAsyncFlushDuration: null
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-core
    tag: v2.5.1
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 2
  revisionHistoryLimit: 10
  secret: ''
  secretName: 'tls-harbor-ingress'
  serviceAccountName: ''
  startupProbe:
    enabled: true
    initialDelaySeconds: 10
  tolerations: []
  xsrfKey: ''
database:
  external:
    coreDatabase: registry
    host: 10.81.56.217
    notaryServerDatabase: notary_server
    notarySignerDatabase: notary_signer
    password: harbor123
    port: '5432'
    sslmode: disable
    username: harbor
  internal:
    affinity: {}
    automountServiceAccountToken: false
    image:
      repository: goharbor/harbor-db
      tag: v2.5.1
    initContainer:
      migrator: {}
      permissions: {}
    nodeSelector: {}
    password: changeit
    priorityClassName: null
    serviceAccountName: ''
    shmSizeLimit: 512Mi
    tolerations: []
  maxIdleConns: 100
  maxOpenConns: 2048
  podAnnotations: {}
  type: external
enableMigrateHelmHook: false
exporter:
  affinity: {}
  automountServiceAccountToken: false
  cacheCleanInterval: 14400
  cacheDuration: 23
  image:
    repository: goharbor/harbor-exporter
    tag: v2.5.1
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 1
  revisionHistoryLimit: 10
  serviceAccountName: ''
  tolerations: []
expose:
  clusterIP:
    annotations: {}
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
  ingress:
    annotations:
      ingress.kubernetes.io/proxy-body-size: '0'
      ingress.kubernetes.io/ssl-redirect: 'true'
      nginx.ingress.kubernetes.io/proxy-body-size: '0'
      nginx.ingress.kubernetes.io/ssl-redirect: 'true'
    className: 'nginx'
    controller: default
    harbor:
      annotations: {}
      labels: {}
    hosts:
      core: harbor.eminxing.com
      notary: notary-harbor.eminxing.com
    kubeVersionOverride: ''
    notary:
      annotations: {}
      labels: {}
  loadBalancer:
    IP: ''
    annotations: {}
    name: harbor
    ports:
      httpPort: 80
      httpsPort: 443
      notaryPort: 4443
    sourceRanges: []
  nodePort:
    name: harbor
    ports:
      http:
        nodePort: 30002
        port: 80
      https:
        nodePort: 30003
        port: 443
      notary:
        nodePort: 30004
        port: 4443
  tls:
    auto:
      commonName: ''
    certSource: secret
    enabled: true
    secret:
      notarySecretName: 'tls-harbor-ingress'
      secretName: 'tls-harbor-ingress'
  type: ingress
externalURL: 'https://harbor.eminxing.com'
harborAdminPassword: Harbor12345
imagePullPolicy: IfNotPresent
imagePullSecrets: null
internalTLS:
  certSource: auto
  chartmuseum:
    crt: ''
    key: ''
    secretName: ''
  core:
    crt: ''
    key: ''
    secretName: ''
  enabled: false
  jobservice:
    crt: ''
    key: ''
    secretName: ''
  portal:
    crt: ''
    key: ''
    secretName: ''
  registry:
    crt: ''
    key: ''
    secretName: ''
  trivy:
    crt: ''
    key: ''
    secretName: ''
  trustCa: ''
ipFamily:
  ipv4:
    enabled: true
  ipv6:
    enabled: true
jobservice:
  affinity: {}
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-jobservice
    tag: v2.5.1
  jobLoggers:
    - file
  loggerSweeperDuration: 7
  maxJobWorkers: 10
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 1
  revisionHistoryLimit: 10
  secret: ''
  serviceAccountName: ''
  tolerations: []
logLevel: info
metrics:
  core:
    path: /metrics
    port: 8001
  enabled: true
  exporter:
    path: /metrics
    port: 8001
  jobservice:
    path: /metrics
    port: 8001
  registry:
    path: /metrics
    port: 8001
  serviceMonitor:
    additionalLabels: {}
    enabled: false
    interval: ''
    metricRelabelings: []
    relabelings: []
nginx:
  affinity: {}
  automountServiceAccountToken: false
  image:
    repository: goharbor/nginx-photon
    tag: v2.5.1
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 1
  revisionHistoryLimit: 10
  serviceAccountName: ''
  tolerations: []
notary:
  enabled: true
  secretName: ''
  server:
    affinity: {}
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-server-photon
      tag: v2.5.1
    nodeSelector: {}
    podAnnotations: {}
    priorityClassName: null
    replicas: 1
    serviceAccountName: ''
    tolerations: []
  signer:
    affinity: {}
    automountServiceAccountToken: false
    image:
      repository: goharbor/notary-signer-photon
      tag: v2.5.1
    nodeSelector: {}
    podAnnotations: {}
    priorityClassName: null
    replicas: 1
    serviceAccountName: ''
    tolerations: []
persistence:
  enabled: true
  imageChartStorage:
    azure:
      accountkey: base64encodedaccountkey
      accountname: accountname
      container: containername
    disableredirect: false
    filesystem:
      rootdirectory: /storage
    gcs:
      bucket: bucketname
      encodedkey: base64-encoded-json-key-file
    oss:
      accesskeyid: accesskeyid
      accesskeysecret: accesskeysecret
      bucket: bucketname
      region: regionname
    s3:
      bucket: bucketname
      region: us-west-1
    swift:
      authurl: 'https://storage.myprovider.com/v3/auth'
      container: containername
      password: password
      username: username
    type: filesystem
  persistentVolumeClaim:
    chartmuseum:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: ''
      size: 5Gi
      storageClass: ''
      subPath: ''
    database:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: ''
      size: 1Gi
      storageClass: ''
      subPath: ''
    jobservice:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: 'harbor-jobservice'
      size: 50Gi
      storageClass: 'harbor-jobservice'
      subPath: ''
    redis:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: 'harbor-redis'
      size: 50Gi
      storageClass: 'harbor-redis'
      subPath: ''
    registry:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: 'harbor-registry'
      size: 100Gi
      storageClass: 'harbor-registry'
      subPath: ''
    trivy:
      accessMode: ReadWriteOnce
      annotations: {}
      existingClaim: 'harbor-trivy'
      size: 50Gi
      storageClass: 'harbor-trivy'
      subPath: ''
  resourcePolicy: keep
portal:
  affinity: {}
  automountServiceAccountToken: false
  image:
    repository: goharbor/harbor-portal
    tag: v2.5.1
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  replicas: 2
  revisionHistoryLimit: 10
  serviceAccountName: ''
  tolerations: []
proxy:
  components:
    - core
    - jobservice
    - trivy
  httpProxy: null
  httpsProxy: null
  noProxy: '127.0.0.1,localhost,.local,.internal'
redis:
  external:
    addr: '192.168.0.2:6379'
    chartmuseumDatabaseIndex: '3'
    coreDatabaseIndex: '0'
    jobserviceDatabaseIndex: '1'
    password: ''
    registryDatabaseIndex: '2'
    sentinelMasterSet: ''
    trivyAdapterIndex: '5'
  internal:
    affinity: {}
    automountServiceAccountToken: false
    image:
      repository: goharbor/redis-photon
      tag: v2.5.1
    nodeSelector: {}
    priorityClassName: null
    serviceAccountName: ''
    tolerations: []
  podAnnotations: {}
  type: internal
registry:
  affinity: {}
  automountServiceAccountToken: false
  controller:
    image:
      repository: goharbor/harbor-registryctl
      tag: v2.5.1
  credentials:
    password: harbor123
    username: harbor_registry_user
  middleware:
    cloudFront:
      baseurl: example.cloudfront.net
      duration: 3000s
      ipfilteredby: none
      keypairid: KEYPAIRID
      privateKeySecret: my-secret
    enabled: false
    type: cloudFront
  nodeSelector: {}
  podAnnotations: {}
  priorityClassName: null
  registry:
    image:
      repository: goharbor/registry-photon
      tag: v2.5.1
    relativeurls: false
  replicas: 2
  revisionHistoryLimit: 10
  secret: ''
  serviceAccountName: ''
  tolerations: []
  upload_purging:
    age: 168h
    dryrun: false
    enabled: false
    interval: 24h
secretKey: not-a-secure-key
trace:
  enabled: false
  jaeger:
    endpoint: 'http://hostname:14268/api/traces'
  otel:
    compression: false
    endpoint: 'hostname:4318'
    insecure: true
    timeout: 10s
    url_path: /v1/traces
  provider: jaeger
  sample_rate: 1
trivy:
  affinity: {}
  automountServiceAccountToken: false
  debugMode: false
  enabled: true
  gitHubToken: ''
  ignoreUnfixed: false
  image:
    repository: goharbor/trivy-adapter-photon
    tag: v2.5.1
  insecure: false
  nodeSelector: {}
  offlineScan: false
  podAnnotations: {}
  priorityClassName: null
  replicas: 1
  resources:
    limits:
      cpu: 1
      memory: 1Gi
    requests:
      cpu: 200m
      memory: 512Mi
  serviceAccountName: ''
  severity: 'UNKNOWN,LOW,MEDIUM,HIGH,CRITICAL'
  skipUpdate: false
  timeout: 5m0s
  tolerations: []
  vulnType: 'os,library'
updateStrategy:
  type: RollingUpdate
- Install Harbor
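A minimal install sketch, assuming the repo alias and values file from the steps above (pick the chart release that ships app v2.5.1):
helm install harbor harbor/harbor -n harbor -f values.yaml
kubectl get pods -n harbor -w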
- Test access
Bind the domain name in the local hosts file:
10.81.0.101 harbor.eminxing.com
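With the hosts entry in place, a quick reachability check (skipping certificate verification) might be:
curl -kIs https://harbor.eminxing.com | head -1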
III. Configure Docker to Use Harbor
- Create a regular Harbor user (via the Harbor web UI)
- Create a project (via the Harbor web UI)
- Configure Docker to use the Harbor registry
<!== Run on all k8s hosts ==>
echo '10.81.0.101 harbor.eminxing.com' >> /etc/hosts
vim /etc/docker/daemon.json
{
    "registry-mirrors": [
        "https://xigwl1gq.mirror.aliyuncs.com",
        "https://harbor.eminxing.com"
    ],
    "exec-opts": ["native.cgroupdriver=systemd"],
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "200m",
        "max-file": "7"
    },
    "data-root": "/data/docker/data",
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"],
    "dns": ["192.168.94.94", "192.168.94.95", "192.168.109.104"]
}
systemctl restart docker
systemctl status docker
- Log Docker in to the registry
<!== Run on all k8s hosts ==>
$ docker login https://harbor.eminxing.com
Username: fbu-uat
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded
IV. Build an Image and Push It to the Private Registry
- Write the Dockerfile
FROM openjdk:8-jdk-alpine
ARG Version
ARG System
ARG Author
ARG JAR_FILE
LABEL Version=${Version:-v1.0.0} \
      System=${System:-public} \
      Author=${Author}
ENV ENVS=${ENVS:-pt}
ENV Xmx=${Xmx:-1024m}
ENV TZ=Asia/Shanghai
RUN set -eux; \
    apk add --no-cache --update tzdata; \
    ln -snf /usr/share/zoneinfo/$TZ /etc/localtime; \
    echo $TZ > /etc/timezone
COPY ${JAR_FILE}.jar app.jar
EXPOSE 9102
ENTRYPOINT ["sh","-c","java -Xms1024m -Xmx${Xmx} -XX:MetaspaceSize=256M -XX:MaxMetaspaceSize=512M -XX:+HeapDumpOnOutOfMemoryError -XX:AutoBoxCacheMax=20000 -Xloggc:/dev/shm/JVM.log -XX:+PrintGCDateStamps -XX:+PrintGCDetails -XX:+PrintGCApplicationStoppedTime -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=1M -Dspring.profiles.active=${ENVS} -jar app.jar"]
- Build the image and test-run it
docker build --build-arg Version=v1.0.0 --build-arg System=FPS --build-arg Author=zt17879 \
    --build-arg JAR_FILE=fps-task-worker -t fbu-fps-task-worker:v1.0.0 .
# Note: with --net=host the -p mapping is ignored; the container binds port 9102 on the host directly
docker run --net=host -e ENVS=test -e Xmx=2048m -p 9102:9102 -t --name fps-task-worker fbu-fps-task-worker:v1.0.0
- Push the image to the Harbor registry
# Tag the image into the project
docker tag fbu-fps-task-worker:v1.0.0 harbor.eminxing.com/fbu-fps-task-worker/fbu-fps-task-worker:v1.0.0
# Push the image to the current project
docker push harbor.eminxing.com/fbu-fps-task-worker/fbu-fps-task-worker:v1.0.0
V. Deploy the Service on the k8s Cluster
1. Create the namespace the services run in
kubectl create ns apps
cat > /data/k8s/install/podpreset.yaml <<'EOF'
apiVersion: settings.k8s.io/v1alpha1
kind: PodPreset
metadata:
  name: tz-env
  namespace: apps
spec:
  selector:
    matchLabels: {}
  env:
    - name: TZ
      value: Asia/Shanghai
EOF
kubectl apply -f /data/k8s/install/podpreset.yaml
2. Create the secret for Harbor registry authentication
First log in to Harbor with docker login so the config.json file containing the auth token is saved, then run:
kubectl create secret generic harbor-login \
--from-file=.dockerconfigjson=/root/.docker/config.json \
--type=kubernetes.io/dockerconfigjson -n apps
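Confirm the secret exists with the expected type:
kubectl get secret harbor-login -n apps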
3. Create the Deployment resource
fbu-fps-task-worker-test_deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
    field.cattle.io/description: Export Billing Services
  labels:
    business: fbu
    environment: test
    service: fps-task-worker
    system: fps
    tier: backend
  name: fbu-fps-task-worker-test
  namespace: apps
spec:
  progressDeadlineSeconds: 600
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      service: fps-task-worker
    matchExpressions:
      - key: environment
        operator: In
        values: [test]
      - key: business
        operator: In
        values: [fbu]
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      labels:
        business: fbu
        environment: test
        service: fps-task-worker
        system: fps
        tier: backend
    spec:
      containers:
        - env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            - name: ENVS
              value: test
            - name: Xmx
              value: 2048m
          name: fbu-fps-task-worker
          image: harbor.eminxing.com/fbu-fps-task-worker/fbu-fps-task-worker:v1.0.0
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 5
            periodSeconds: 1
            successThreshold: 1
            tcpSocket:
              port: 9102
            timeoutSeconds: 1
          ports:
            - containerPort: 9102
              name: web
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 9102
              scheme: HTTP
            initialDelaySeconds: 2
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 2
          resources:
            limits:
              cpu: "2"
              memory: 3Gi
            requests:
              cpu: 500m
              memory: 1Gi
          startupProbe:
            failureThreshold: 3
            httpGet:
              path: /health
              port: 9102
              scheme: HTTP
            initialDelaySeconds: 45
            periodSeconds: 2
            successThreshold: 1
            timeoutSeconds: 2
          volumeMounts:
            - mountPath: /opt/logs/fps-task-worker
              name: logs
              subPathExpr: $(POD_NAME)
      dnsConfig:
        options:
          - name: ndots
            value: "2"
      dnsPolicy: ClusterFirst
      imagePullSecrets:
        - name: harbor-login
      restartPolicy: Always
      terminationGracePeriodSeconds: 30
      volumes:
        - hostPath:
            path: /opt/logs/fps-task-worker
            type: DirectoryOrCreate
          name: logs
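Apply the manifest and watch the rollout complete:
kubectl apply -f fbu-fps-task-worker-test_deployment.yaml
kubectl -n apps rollout status deployment/fbu-fps-task-worker-test
kubectl -n apps get pods -l service=fps-task-worker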
4. Create the Service resource
fbu-fps-task-worker-test_service.yaml
apiVersion: v1
kind: Service
metadata:
  name: fbu-fps-task-worker-test
  annotations:
    field.cattle.io/description: Export Billing Services
  labels:
    business: fbu
    environment: test
    service: fps-task-worker
    system: fps
    tier: backend
  namespace: apps
spec:
  selector:
    business: fbu
    environment: test
    service: fps-task-worker
  type: ClusterIP
  ports:
    - name: web
      port: 9102
      protocol: TCP
      targetPort: 9102
  sessionAffinity: None
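Apply the manifest and confirm the Service got a cluster IP and endpoints:
kubectl apply -f fbu-fps-task-worker-test_service.yaml
kubectl -n apps get svc,endpoints fbu-fps-task-worker-test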
5. Create the Ingress resource
- Upload the existing certificate to the k8s cluster
kubectl create secret tls tls-fps-ingress --cert=eminxing.crt --key=eminxing.key -n apps
- Write the Ingress resource file
fbu-fps-task-worker-test_ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: fbu-fps-task-worker-test
  annotations:
    field.cattle.io/description: Export Billing Services
    nginx.org/redirect-to-https: "true"
  namespace: apps
  labels:
    business: fbu
    environment: test
    service: fps-task-worker
    system: fps
    tier: backend
spec:
  ingressClassName: nginx
  rules:
    - host: fps-task-api.eminxing.com
      http:
        paths:
          - backend:
              service:
                name: fbu-fps-task-worker-test
                port:
                  name: web
            pathType: ImplementationSpecific
  tls:
    - hosts:
        - fps-task-api.eminxing.com
      secretName: tls-fps-ingress
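Apply it and check that the Ingress picked up the expected host:
kubectl apply -f fbu-fps-task-worker-test_ingress.yaml
kubectl -n apps get ingress fbu-fps-task-worker-test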
- Verify that access works
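Assuming fps-task-api.eminxing.com resolves to the ingress controller (via DNS or a hosts entry), the /health endpoint already used by the probes makes a convenient smoke test:
curl -k https://fps-task-api.eminxing.com/health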