Service Mesh - Istio in Practice (Part 2)
Collecting Metrics and Monitoring Applications
Among the observability signals, metrics are the one that best reflects how a system is behaving from multiple angles: because there are many kinds of metrics, multi-dimensional data analysis lets us measure and monitor every dimension of the system.
By default, Istio collects and visualizes metrics with its bundled Prometheus and Grafana components. However, a monitoring system is such a basic piece of infrastructure that almost every production environment already runs one, so deploying Istio's bundled components would be redundant.
The best solution is therefore to integrate Istio with the monitoring system you already have, and that is what this section demonstrates: wiring an existing monitoring stack into Istio's metrics collection.
Istio's Metrics Endpoints
First, we need to understand how Istio exposes its metrics. It provides two main endpoints:
- /metrics: exposes metrics about the operational state of Istio itself
- /stats/prometheus: exposed by Envoy, provides metrics about network traffic
We can query the /stats/prometheus endpoint to see the metrics it provides:
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl http://httpbin.demo:15090/stats/prometheus
The /metrics endpoint of the istiod service exposes control plane metrics, which can be fetched like this:
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl http://istiod.istio-system:15014/metrics
How Prometheus Is Configured
- Static configuration is quite limited and adapts poorly to change, so dynamic configuration is generally used instead
Dynamic configuration is built on Prometheus's service discovery mechanism:
- Service discovery ensures that Prometheus can find the metrics endpoints exposed by the targets it should scrape
- The role field of kubernetes_sd_configs defines which kind of target to collect from:
  - node: cluster nodes
  - service: services, commonly used for black-box monitoring
  - pod: the containers running inside Pods
  - endpoints: endpoints
  - ingress: ingress gateways
- The relabel_configs section defines filtering rules applied to the discovered targets (a minimal example follows this list)
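As a minimal sketch of these two settings (separate from the full configuration used later in this section), a scrape job that discovers Pods and keeps only those carrying a hypothetical prometheus.io/scrape annotation could look like this:
# Minimal sketch: discover Pods, keep only those annotated prometheus.io/scrape: "true"
scrape_configs:
  - job_name: example-pods
    kubernetes_sd_configs:
      - role: pod                # scrape containers inside Pods
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        regex: "true"
        action: keep             # drop every discovered target without the annotation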
Hands-on
Let's first build a monitoring stack and then integrate it with Istio. Start by deploying Prometheus; the manifest is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
name: prometheus
namespace: monitoring
labels:
app: prometheus
spec:
selector:
matchLabels:
app: prometheus
template:
metadata:
labels:
app: prometheus
spec:
serviceAccount: appmesh-prometheus
serviceAccountName: appmesh-prometheus
containers:
- image: prom/prometheus:latest
name: prometheus
command:
- "/bin/prometheus"
args:
- "--config.file=/etc/prometheus/prometheus.yml"
- "--storage.tsdb.path=/prometheus"
- "--storage.tsdb.retention=24h"
- "--web.enable-admin-api"
- "--web.enable-lifecycle"
ports:
- containerPort: 9090
protocol: TCP
name: http
volumeMounts:
- mountPath: /etc/prometheus
name: config-volume
- mountPath: /prometheus/data
name: data-volume
resources:
requests:
cpu: 100m
memory: 512Mi
limits:
cpu: 100m
memory: 512Mi
securityContext:
runAsUser: 0
volumes:
- configMap:
name: prometheus-config
name: config-volume
- emptyDir: {}
name: data-volume
---
apiVersion: v1
kind: Service
metadata:
name: prometheus
namespace: monitoring
labels:
app: prometheus
spec:
selector:
app: prometheus
type: NodePort
ports:
- name: web
port: 9090
targetPort: http
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: appmesh-prometheus
namespace: monitoring
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
namespace: monitoring
name: appmesh-prometheus
rules:
- apiGroups:
- ""
resources:
- nodes
- nodes/proxy
- nodes/metrics
- services
- endpoints
- pods
- ingresses
- configmaps
verbs:
- get
- list
- watch
- apiGroups:
- "extensions"
- "networking.k8s.io"
resources:
- ingresses/status
- ingresses
verbs:
- get
- list
- watch
- nonResourceURLs:
- "/metrics"
verbs:
- get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: appmesh-prometheus
subjects:
- kind: ServiceAccount
name: appmesh-prometheus
namespace: monitoring
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: appmesh-prometheus
Create the ConfigMap for Prometheus:
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitoring
data:
prometheus.yml: |
global:
scrape_interval: 15s
scrape_timeout: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
Then deploy Grafana with the following manifest:
apiVersion: apps/v1
kind: Deployment
metadata:
labels:
app: grafana
name: grafana
namespace: monitoring
spec:
replicas: 1
selector:
matchLabels:
app: grafana
template:
metadata:
labels:
app: grafana
spec:
containers:
- name: grafana
image: grafana/grafana:latest
imagePullPolicy: IfNotPresent
ports:
- containerPort: 3000
name: grafana
env:
- name: GRAFANA_PORT
value: "3000"
- name: GF_AUTH_BASIC_ENABLED
value: "false"
- name: GF_AUTH_ANONYMOUS_ENABLED
value: "true"
- name: GF_AUTH_ANONYMOUS_ORG_ROLE
value: Admin
resources:
limits:
cpu: 100m
memory: 256Mi
requests:
cpu: 100m
memory: 256Mi
volumeMounts:
- mountPath: /var/lib/grafana
name: grafana-storage
volumes:
- name: grafana-storage
emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
name: grafana
namespace: monitoring
labels:
app: grafana
spec:
selector:
app: grafana
type: NodePort
ports:
- name: http
port: 3000
targetPort: 3000
nodePort: 32000
---
apiVersion: v1
kind: ServiceAccount
metadata:
name: grafana
namespace: monitoring
Confirm that everything has started correctly:
[root@m1 ~]# kubectl get all -n monitoring
NAME READY STATUS RESTARTS AGE
pod/grafana-86f5dc96d-6hsmz 1/1 Running 0 20m
pod/prometheus-9dd6bd8bb-wcdrw 1/1 Running 0 2m30s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/grafana NodePort 10.101.215.111 <none> 3000:32000/TCP 20m
service/prometheus NodePort 10.101.113.122 <none> 9090:31053/TCP 13m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/grafana 1/1 1 1 20m
deployment.apps/prometheus 1/1 1 1 13m
NAME DESIRED CURRENT READY AGE
replicaset.apps/grafana-86f5dc96d 1 1 1 20m
replicaset.apps/prometheus-9dd6bd8bb 1 1 1 13m
[root@m1 ~]#
Check which worker node prometheus and grafana were scheduled onto:
[root@m1 ~]# kubectl get po -l app=grafana -n monitoring -o jsonpath='{.items[0].status.hostIP}'
192.168.243.139
[root@m1 ~]# kubectl get po -l app=prometheus -n monitoring -o jsonpath='{.items[0].status.hostIP}'
192.168.243.139
Open Prometheus in a browser and check that its configuration matches what we defined in the ConfigMap.
As the configuration shows, Prometheus currently uses only static configuration. Next we switch it to dynamic configuration by changing the ConfigMap as follows:
apiVersion: v1
kind: ConfigMap
metadata:
name: prometheus-config
namespace: monitoring
data:
prometheus.yml: |-
global:
scrape_interval: 15s
scrape_timeout: 15s
scrape_configs:
- job_name: 'prometheus'
static_configs:
- targets: ['localhost:9090']
      # The Istio integration configuration starts here
- job_name: envoy-stats
honor_timestamps: true
metrics_path: /stats/prometheus
scheme: http
kubernetes_sd_configs:
- role: pod
relabel_configs:
- source_labels: [__meta_kubernetes_pod_container_port_name]
separator: ;
regex: .*-envoy-prom
replacement: $1
action: keep
- source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
separator: ;
regex: ([^:]+)(?::\d+)?;(\d+)
target_label: __address__
replacement: $1:15090
action: replace
- separator: ;
regex: __meta_kubernetes_pod_label_(.+)
replacement: $1
action: labeldrop
- source_labels: [__meta_kubernetes_namespace]
separator: ;
regex: (.*)
target_label: namespace
replacement: $1
action: replace
- source_labels: [__meta_kubernetes_pod_name]
separator: ;
regex: (.*)
target_label: pod_name
replacement: $1
action: replace
Tip: the scrape configuration above is copied from the Prometheus configuration file that ships with Istio, so it may differ between versions. The file is located at $ISTIO_HOME/samples/addons/prometheus.yaml.
Then force Prometheus to be recreated with a patch command:
[root@m1 ~]# kubectl patch deployment prometheus -n monitoring -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
deployment.apps/prometheus patched
[root@m1 ~]#
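Since the Deployment above starts Prometheus with --web.enable-lifecycle, an alternative to recreating the Pod is to ask Prometheus to reload its configuration over HTTP once the updated ConfigMap has propagated into the container (this reuses the ${sleep_pod_name} variable from earlier; any Pod that can reach the service works):
$ kubectl exec -it -n demo ${sleep_pod_name} -c sleep -- curl -X POST http://prometheus.monitoring:9090/-/reload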
Check in the Prometheus UI that the new configuration has taken effect. Istio's metrics can now be queried in Prometheus.
On the Grafana side, you only need to export Istio's built-in dashboards and import them into your own Grafana instance; this is straightforward, so it is not demonstrated step by step here.
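If you do want to script the import, a dashboard JSON exported from the Istio-bundled Grafana can be pushed through Grafana's HTTP API. A rough sketch, assuming the exported file is saved as istio-dashboard.json and that jq is available (with anonymous Admin access enabled above no API token is needed; exported dashboards may still need their datasource placeholders adjusted):
$ jq '{dashboard: ., overwrite: true}' istio-dashboard.json > payload.json
$ curl -X POST http://192.168.243.139:32000/api/dashboards/db \
    -H "Content-Type: application/json" \
    -d @payload.json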
Integrating the ELK Stack Logging Suite
In a distributed system, application logs are scattered across the nodes, which makes them hard to inspect and manage. The usual solution is a centralized logging architecture that aggregates all log data into a single logging platform for unified management, and the best-known such platform is the ELK Stack.
Centralized Logging Architecture
Its main responsibilities are:
- Collection
- Processing
- Presentation
ELK Stack Logging Architecture
- Elasticsearch: data storage and search
- Logstash: the data collection pipeline, providing filtering and preprocessing
- Kibana: chart-based visualization of the data
- Beats (libbeat): lightweight data shippers
ELK Deployment Topologies
Hands-on
Next we install the ELK suite to collect the log data of Istio's Envoy proxies. First, create a namespace in the cluster:
[root@m1 ~]# kubectl create ns elk
namespace/elk created
[root@m1 ~]#
Then deploy Elasticsearch and Kibana using the following manifest:
kind: List
apiVersion: v1
items:
- apiVersion: apps/v1
kind: Deployment
metadata:
name: kibana
spec:
selector:
matchLabels:
app: kibana
replicas: 1
template:
metadata:
name: kibana
labels:
app: kibana
spec:
containers:
- image: docker.elastic.co/kibana/kibana:6.4.0
name: kibana
env:
- name: ELASTICSEARCH_URL
value: "http://elasticsearch:9200"
ports:
- name: http
containerPort: 5601
- apiVersion: v1
kind: Service
metadata:
name: kibana
spec:
type: NodePort
ports:
- name: http
port: 5601
targetPort: 5601
nodePort: 32001
selector:
app: kibana
- apiVersion: apps/v1
kind: Deployment
metadata:
name: elasticsearch
spec:
selector:
matchLabels:
app: elasticsearch
replicas: 1
template:
metadata:
name: elasticsearch
labels:
app: elasticsearch
spec:
initContainers:
- name: init-sysctl
image: busybox
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- image: docker.elastic.co/elasticsearch/elasticsearch:6.4.0
name: elasticsearch
env:
- name: network.host
value: "_site_"
- name: node.name
value: "${HOSTNAME}"
- name: discovery.zen.ping.unicast.hosts
value: "${ELASTICSEARCH_NODEPORT_SERVICE_HOST}"
- name: cluster.name
value: "test-single"
- name: ES_JAVA_OPTS
value: "-Xms128m -Xmx128m"
volumeMounts:
- name: es-data
mountPath: /usr/share/elasticsearch/data
volumes:
- name: es-data
emptyDir: {}
- apiVersion: v1
kind: Service
metadata:
name: elasticsearch-nodeport
spec:
type: NodePort
ports:
- name: http
port: 9200
targetPort: 9200
nodePort: 32002
- name: tcp
port: 9300
targetPort: 9300
nodePort: 32003
selector:
app: elasticsearch
- apiVersion: v1
kind: Service
metadata:
name: elasticsearch
spec:
clusterIP: None
ports:
- name: http
port: 9200
- name: tcp
port: 9300
selector:
app: elasticsearch
Deploy it into that namespace:
[root@m1 ~]# kubectl apply -f elk/deploy.yaml -n elk
deployment.apps/kibana created
service/kibana created
deployment.apps/elasticsearch created
service/elasticsearch-nodeport created
service/elasticsearch created
[root@m1 ~]#
The above only deploys Elasticsearch and Kibana. To actually collect Envoy's logs we still need Logstash or FileBeat; here FileBeat is used as the example, with the following manifest:
kind: List
apiVersion: v1
items:
- apiVersion: v1
kind: ConfigMap
metadata:
name: filebeat-config
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
app: filebeat-config
data:
filebeat.yml: |
processors:
- add_cloud_metadata:
filebeat.modules:
- module: system
filebeat.inputs:
- type: log
paths:
- /var/log/containers/*.log
symlinks: true
output.elasticsearch:
hosts: ['elasticsearch:9200']
logging.level: info
- apiVersion: apps/v1
kind: Deployment
metadata:
name: filebeat
labels:
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
selector:
matchLabels:
app: filebeat
replicas: 1
template:
metadata:
name: filebeat
labels:
app: filebeat
k8s-app: filebeat
kubernetes.io/cluster-service: "true"
spec:
containers:
- image: docker.elastic.co/beats/filebeat:6.4.0
name: filebeat
args: [
"-c", "/home/filebeat-config/filebeat.yml",
"-e",
]
securityContext:
runAsUser: 0
volumeMounts:
- name: filebeat-storage
mountPath: /var/log/containers
- name: varlogpods
mountPath: /var/log/pods
- name: varlibdockercontainers
mountPath: /var/lib/docker/containers
- name: "filebeat-volume"
mountPath: "/home/filebeat-config"
volumes:
- name: filebeat-storage
hostPath:
path: /var/log/containers
- name: varlogpods
hostPath:
path: /var/log/pods
- name: varlibdockercontainers
hostPath:
path: /var/lib/docker/containers
- name: filebeat-volume
configMap:
name: filebeat-config
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: filebeat
subjects:
- kind: ServiceAccount
name: filebeat
namespace: elk
roleRef:
kind: ClusterRole
name: filebeat
apiGroup: rbac.authorization.k8s.io
- apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: filebeat
labels:
k8s-app: filebeat
rules:
- apiGroups: [""] # "" indicates the core API group
resources:
- namespaces
- pods
verbs:
- get
- watch
- list
- apiVersion: v1
kind: ServiceAccount
metadata:
name: filebeat
namespace: elk
labels:
k8s-app: filebeat
Confirm that all components have been deployed successfully:
[root@m1 ~]# kubectl get all -n elk
NAME READY STATUS RESTARTS AGE
pod/elasticsearch-697c88cd76-xvn4j 1/1 Running 0 4m53s
pod/filebeat-8646b847b7-f58zg 1/1 Running 0 32s
pod/kibana-fc98677d7-9z5dl 1/1 Running 0 8m14s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 8m14s
service/elasticsearch-nodeport NodePort 10.96.106.229 <none> 9200:32002/TCP,9300:32003/TCP 8m14s
service/kibana NodePort 10.105.91.140 <none> 5601:32001/TCP 8m14s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/elasticsearch 1/1 1 1 8m14s
deployment.apps/filebeat 1/1 1 1 32s
deployment.apps/kibana 1/1 1 1 8m14s
NAME DESIRED CURRENT READY AGE
replicaset.apps/elasticsearch-697c88cd76 1 1 1 4m53s
replicaset.apps/filebeat-8646b847b7 1 1 1 32s
replicaset.apps/kibana-fc98677d7 1 1 1 8m14s
[root@m1 ~]#
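Before moving on to Kibana, you can optionally confirm that FileBeat is already writing data into Elasticsearch by listing its indices through the elasticsearch-nodeport service (any cluster node IP works for a NodePort; <node-ip> is a placeholder):
$ curl "http://<node-ip>:32002/_cat/indices?v"
You should see indices whose names start with filebeat-.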
Next, create a simple Index Pattern in Kibana for the FileBeat indices.
On the Discover page you can then browse the log data that FileBeat has collected and stored in Elasticsearch.
Integrating a Distributed Tracing Tool
Distributed Tracing in Istio
- Istio's distributed tracing is implemented on top of Envoy
- The application is responsible for propagating the trace headers (the b3 trace headers), so tracing is not completely transparent to the application: it must forward the headers itself (see the curl sketch after this list)
- The b3 header format was originally proposed by OpenZipkin: https://github.com/openzipkin/b3-propagation
- Sampling rates are supported
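As a rough illustration (the service name and header values below are made up), these are the headers an application must copy from each inbound request onto any outbound request it makes, so that Envoy can stitch the individual spans into a single trace:
# Hypothetical outbound call that forwards the tracing headers it received
$ curl http://downstream-service/api \
    -H "x-request-id: 3e2b9a4f-8762-4f2a-9d3e-111111111111" \
    -H "x-b3-traceid: 80f198ee56343ba864fe8b2a57d3eff7" \
    -H "x-b3-spanid: e457b5a2e4d86bd1" \
    -H "x-b3-parentspanid: 05e3ac9a4f6e3b90" \
    -H "x-b3-sampled: 1"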
The flow of Envoy-based distributed tracing is as follows:
- First, a RequestId and the trace headers are generated for each request that flows through the Envoy proxy
- TraceSpans are generated from the request and response metadata and sent to the tracing backend
- Finally, the trace headers are forwarded to the application behind the proxy
Deploying Jaeger
Next we install Jaeger using its Operator, to demonstrate how Istio integrates with an existing distributed tracing system. A quick recap of what an Operator is:
- a toolkit for deploying and managing Kubernetes applications
- deployed inside the cluster, it manages the application through the Kubernetes API
- the Operator Framework consists of the Operator SDK and the Operator Lifecycle Manager
First, clone the jaeger-operator repository:
[root@m1 ~]# cd /usr/local/src
[root@m1 /usr/local/src]# git clone https://github.com/jaegertracing/jaeger-operator.git
Edit the operator configuration and set the value of WATCH_NAMESPACE to empty, so that it watches all namespaces:
[root@m1 /usr/local/src]# vim jaeger-operator/deploy/operator.yaml
...
env:
- name: WATCH_NAMESPACE
value:
...
Create the Jaeger CRD:
[root@m1 /usr/local/src]# kubectl apply -f jaeger-operator/deploy/crds/jaegertracing.io_jaegers_crd.yaml
customresourcedefinition.apiextensions.k8s.io/jaegers.jaegertracing.io created
[root@m1 /usr/local/src]#
Then create a namespace and create the remaining Jaeger operator resources in it:
$ kubectl create ns observability
$ kubectl apply -f jaeger-operator/deploy/role.yaml -n observability
$ kubectl apply -f jaeger-operator/deploy/role_binding.yaml -n observability
$ kubectl apply -f jaeger-operator/deploy/service_account.yaml -n observability
$ kubectl apply -f jaeger-operator/deploy/cluster_role.yaml -n observability
$ kubectl apply -f jaeger-operator/deploy/cluster_role_binding.yaml -n observability
$ kubectl apply -f jaeger-operator/deploy/operator.yaml -n observability
Confirm that the operator has started correctly:
[root@m1 /usr/local/src]# kubectl get all -n observability
NAME READY STATUS RESTARTS AGE
pod/jaeger-operator-7f76698d98-x9wkh 1/1 Running 0 105s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/jaeger-operator-metrics ClusterIP 10.100.189.227 <none> 8383/TCP,8686/TCP 11s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/jaeger-operator 1/1 1 1 105s
NAME DESIRED CURRENT READY AGE
replicaset.apps/jaeger-operator-7f76698d98 1 1 1 105s
[root@m1 /usr/local/src]#
Now create a jaegers custom resource; based on it, the operator automatically deploys Jaeger for us:
[root@m1 /usr/local/src]# kubectl apply -f jaeger-operator/examples/simplest.yaml -n observability
jaeger.jaegertracing.io/simplest created
[root@m1 /usr/local/src]# kubectl get jaegers -n observability
NAME STATUS VERSION STRATEGY STORAGE AGE
simplest Running 1.21.0 allinone memory 3m8s
[root@m1 /usr/local/src]#
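For reference, the simplest example resource is only a few lines; at the time of writing it looks roughly like this:
apiVersion: jaegertracing.io/v1
kind: Jaeger
metadata:
  name: simplest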
Integrating Jaeger with Istio
With Jaeger deployed, the next step is to integrate it with Istio. The integration is simple: just set a few configuration values with the istioctl tool, as follows:
[root@m1 ~]# istioctl install --set profile=demo -y \
--set values.global.tracer.zipkin.address=simplest-collector.observability:9411 \
--set values.pilot.traceSampling=100
Tip: the profile value must match the one used when Istio was originally installed, otherwise Istio will be reinstalled with the default profile.
Once Jaeger is integrated with Istio, one final step remains: injecting the Jaeger agent into our applications. The Jaeger Operator supports automatic injection; we only need to add an injection flag to the annotations.
As mentioned earlier, Istio's tracing is not completely transparent to the application: the application must handle the trace headers itself. For convenience we therefore use the official Bookinfo application for the demonstration. Deploy Bookinfo:
[root@m1 ~]# kubectl apply -f /usr/local/istio-1.8.1/samples/bookinfo/platform/kube/bookinfo.yaml
[root@m1 ~]# kubectl apply -f /usr/local/istio-1.8.1/samples/bookinfo/networking/bookinfo-gateway.yaml
Jaeger supports injection per namespace or per Deployment. Taking the productpage Deployment as an example, we only need to add the line sidecar.jaegertracing.io/inject: "true" to its annotations:
[root@m1 ~]# kubectl edit deployments.apps/productpage-v1
apiVersion: apps/v1
kind: Deployment
metadata:
annotations:
sidecar.jaegertracing.io/inject: "true"
...
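If you prefer not to edit the Deployment interactively, the same annotation can be added with a patch; a sketch equivalent to the edit above:
$ kubectl patch deployment productpage-v1 --type merge \
    -p '{"metadata":{"annotations":{"sidecar.jaegertracing.io/inject":"true"}}}'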
Then force productpage to be recreated with a patch command:
[root@m1 ~]# kubectl patch deployment productpage-v1 -p "{\"spec\":{\"template\":{\"metadata\":{\"labels\":{\"date\":\"`date +'%s'`\"}}}}}"
deployment.apps/productpage-v1 patched
[root@m1 ~]#
The productpage Pod now runs 3 containers, which shows that the Jaeger agent has been injected:
[root@m1 ~]# kubectl get pods
NAME READY STATUS RESTARTS AGE
productpage-v1-5c75dcd69f-g9sjh 3/3 Running 0 96s
...
Use the following command to expose the Jaeger web UI:
[root@m1 ~]# kubectl port-forward svc/simplest-query -n observability 16686:16686 --address 192.168.243.138
Forwarding from 192.168.243.138:16686 -> 16686
On the UI you can see that Jaeger is already able to detect the productpage service.
Debugging Tools and Techniques: How Do You Debug the Mesh?
The most common ways to debug Istio are:
- the istioctl command line
- ControlZ, the control plane's self-inspection tool
- the Envoy admin interface
- the Pilot debug interface
The istioctl Command Line
Use the --help flag to see the help for the istioctl command:
$ istioctl --help
Installation and Deployment
- istioctl verify-install: verifies whether the current Kubernetes cluster environment is able to run Istio
- istioctl install [flags]: installs Istio into the current cluster
- istioctl profile [list / diff / dump]: works with Istio configuration profiles
- istioctl kube-inject: injects the Envoy sidecar into Pod specs
- istioctl dashboard [command]: opens the specified Istio dashboard web UI (controlz / envoy / grafana / jaeger / kiali / prometheus / zipkin)
Checking Mesh Configuration Status
- istioctl ps <pod-name>: shows the mesh configuration synchronization status, which can be one of:
  - SYNCED: the configuration is in sync
  - NOT SENT: the configuration has not been pushed
  - STALE: the configuration was pushed, but the Pod has not acknowledged (ACKed) it
- istioctl pc [cluster/route/…] <pod-name.namespace>: dumps the mesh configuration details of the specified resource type (examples follow below)
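For example, against the productpage Pod used later in this section (the Pod name here is illustrative), the two commands look like this:
# Synchronization status of every proxy in the mesh
$ istioctl proxy-status
# Route and cluster configuration pushed to one specific sidecar
$ istioctl proxy-config route productpage-v1-65576bb7bf-4bwwr.default
$ istioctl proxy-config cluster productpage-v1-65576bb7bf-4bwwr.default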
Inspecting a Pod's Mesh Configuration
istioctl x (experimental) describe pod <pod-name>:
- verifies whether the Pod is inside the mesh
- verifies its VirtualServices
- verifies its DestinationRules
- verifies its routes
- …
Example:
[root@m1 ~]# istioctl x describe pod productpage-v1-65576bb7bf-4bwwr
Pod: productpage-v1-65576bb7bf-4bwwr
Pod Ports: 9080 (productpage), 15090 (istio-proxy)
--------------------
Service: productpage
Port: http 9080/HTTP targets pod port 9080
Exposed on Ingress Gateway http://192.168.243.140
VirtualService: bookinfo
/productpage, /static*, /login, /logout, /api/v1/products*
[root@m1 ~]#
Diagnosing Mesh Configuration
- istioctl analyze [-n <namespace> / --all-namespaces]: checks the mesh configuration in the specified namespaces and prints warnings or errors for any problems found
- istioctl analyze a.yaml b.yaml my-app-config/: analyzes a single configuration file or every configuration file in a directory
- istioctl analyze --use-kube=false a.yaml: analyzes the given configuration files while ignoring the live cluster
ControlZ: the Control Plane Self-Inspection Tool
ControlZ is a visual self-inspection tool for the control plane. Its main features are:
- adjusting the log output level
- inspecting memory usage
- inspecting environment variables
- inspecting process information
It is used as follows:
istioctl d controlz <istiod-podname> -n istio-system
The Envoy Admin API
The Envoy admin API can inspect and operate on the data plane. Its main features are:
- adjusting log levels
- performance profiling
- viewing configuration and other information
- viewing metrics
Use the following command to open the Envoy admin API of a specific Pod:
istioctl d envoy <pod-name>.[namespace] --address ${ip}
or expose its port directly:
kubectl port-forward <pod-name> 15000:15000 --address ${ip}
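With the port open, a few of the standard Envoy admin endpoints can be queried directly with curl, for example:
# Dump the full Envoy configuration of the Pod
$ curl http://${ip}:15000/config_dump
# View runtime statistics
$ curl http://${ip}:15000/stats
# Raise the log level to debug (requires POST)
$ curl -X POST "http://${ip}:15000/logging?level=debug"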
The admin web page is then reachable at the forwarded address.
The Pilot Debug Interface
The main features of the Pilot debug interface are:
- xDS and configuration information
- performance analysis
- configuration synchronization status
Expose its port with the following command:
kubectl port-forward service/istiod -n istio-system 15014:15014 --address ${ip}
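A few of the debug endpoints can then be fetched with curl (the endpoint names below are the ones commonly exposed by istiod; they may vary between versions):
# Proxy configuration synchronization status (the data behind istioctl ps)
$ curl http://${ip}:15014/debug/syncz
# Services and endpoints known to the service registry
$ curl http://${ip}:15014/debug/registryz
$ curl http://${ip}:15014/debug/endpointz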
The debug page is then reachable at the forwarded address.