
Istio 1.8 Bookinfo Traffic Routing

2021-01-16  泥人冷风

With the cluster set up as described in the earlier post on installing Istio 1.8 on a K8S cluster, this article demonstrates the classic first example: the Istio Bookinfo application.

1. Application Overview

The Bookinfo application is split into four separate microservices:

- productpage: renders the page, calling details and reviews (Python)
- details: book information (Ruby)
- reviews: book reviews, in three versions; v2 and v3 call ratings (Java)
- ratings: book ranking data (Node.js)

The Bookinfo microservices are written in different languages. They have no dependency on Istio, but together they form a representative service mesh example: multiple services, multiple languages, and a reviews service with multiple versions.

[Figure: Bookinfo application architecture]

2. Environment Setup

Node domain          Role     IP
kmaster.local.com    master   192.168.8.121

Windows

C:\Windows\System32\drivers\etc\hosts

192.168.8.121 kmaster.local.com

CentOS

$ echo 192.168.8.121 kmaster.local.com >> /etc/hosts
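
A quick optional check that the hostname now resolves:

$ ping -c 1 kmaster.local.com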

3. Deploying the Istio Bookinfo Application

3.1 Change into the Istio installation directory

$ cd /usr/local/istio-1.8.1/

3.2 Enable automatic sidecar injection

Istio injects the sidecar automatically once the namespace is labeled:

$ kubectl label namespace default istio-injection=enabled
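
To confirm the label took effect (an optional check; the -L flag prints the label value as an extra column):

$ kubectl get namespace default -L istio-injection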

3.3 Deploy the application

istio-1.8.1]# kubectl apply -f samples/bookinfo/platform/kube/bookinfo.yaml
service/details created
serviceaccount/bookinfo-details created
deployment.apps/details-v1 created
service/ratings created
serviceaccount/bookinfo-ratings created
deployment.apps/ratings-v1 created
service/reviews created
serviceaccount/bookinfo-reviews created
deployment.apps/reviews-v1 created
deployment.apps/reviews-v2 created
deployment.apps/reviews-v3 created
service/productpage created
serviceaccount/bookinfo-productpage created
deployment.apps/productpage-v1 created

The command above starts all four services, including the three versions of the reviews service (v1, v2, and v3).

In a real deployment, the microservices take a while to come up; they do not all finish starting at the same time.
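
To wait for a specific deployment to finish rolling out, kubectl's standard rollout status subcommand can be used, for example:

$ kubectl rollout status deployment/reviews-v1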

3.4 Confirm that all the services and Pods are correctly defined and running:

$ kubectl get services
NAME          TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
details       ClusterIP   10.110.213.183   <none>        9080/TCP   3m39s
kubernetes    ClusterIP   10.96.0.1        <none>        443/TCP    17d
productpage   ClusterIP   10.105.81.93     <none>        9080/TCP   3m38s
ratings       ClusterIP   10.108.169.202   <none>        9080/TCP   3m39s
reviews       ClusterIP   10.107.62.95     <none>        9080/TCP   3m38s

And the Pods:

$ kubectl get pods
NAME                              READY   STATUS    RESTARTS   AGE
details-v1-79c697d759-5shh5       2/2     Running   0          5m42s
productpage-v1-65576bb7bf-s5psx   2/2     Running   0          5m41s
ratings-v1-7d99676f7f-5d4x6       2/2     Running   0          5m42s
reviews-v1-987d495c-wlhpl         2/2     Running   0          5m42s
reviews-v2-6c5bf657cf-rdttt       2/2     Running   0          5m42s
reviews-v3-5f7b9f4f77-rdd52       2/2     Running   0          5m42s

3.5 Confirm the application is serving

The command below execs into the ratings Pod and requests the productpage from inside the mesh:

$ kubectl exec -it $(kubectl get pod -l app=ratings -o jsonpath='{.items[0].metadata.name}') -c ratings -- curl productpage:9080/productpage | grep -o "<title>.*</title>"

<title>Simple Bookstore App</title>

3.6 Expose the application through the Istio gateway

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
gateway.networking.istio.io/bookinfo-gateway created
virtualservice.networking.istio.io/bookinfo created
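
For reference, the Gateway half of bookinfo-gateway.yaml binds HTTP port 80 on the default Istio ingress gateway (excerpt from the stock sample):

apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: bookinfo-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"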

3.7 Find the externally exposed port mapped to port 80

First determine whether your Kubernetes cluster environment supports an external load balancer; in this setup the ingress gateway is exposed through a NodePort instead.


[Figure: istio-ingressgateway Service showing port 80 mapped to NodePort 32550]

Externally exposed port: 32550
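
The same port can also be read directly from the ingress gateway Service; a sketch using the standard istio-ingressgateway Service in istio-system:

$ kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].nodePort}'
32550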

4. Viewing the Bookinfo Application

Open http://kmaster.local.com:32550/productpage in a browser.

4.1 First visit

[Figure: productpage screenshot]

4.2 Refresh (or press F5)

[Figure: productpage screenshot, a different reviews version]

4.3 Refresh again

[Figure: productpage screenshot, yet another reviews version]

So here is the question: in the demo above, every refresh switches to a different reviews version. How do I direct traffic to a specific version?

5. Creating Routing Rules

5.1 List the subsets of all services; the labels correspond to the labels defined on the Pods (initially the list is empty):

$ kubectl get destinationrules -o yaml
apiVersion: v1
items: []
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

5.2 Apply the destination rules

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/destination-rule-all.yaml
destinationrule.networking.istio.io/productpage created
destinationrule.networking.istio.io/reviews created
destinationrule.networking.istio.io/ratings created
destinationrule.networking.istio.io/details created
istio-1.8.1]# kubectl get destinationrules -o yaml
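
For reference, the reviews entry in destination-rule-all.yaml defines the three subsets that the routing rules below refer to, keyed on the version label of the Pods (excerpt from the stock sample):

apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: reviews
spec:
  host: reviews
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
  - name: v3
    labels:
      version: v3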

5.2.1 Route all traffic to v1

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/virtual-service-all-v1.yaml
virtualservice.networking.istio.io/productpage created
virtualservice.networking.istio.io/reviews created
virtualservice.networking.istio.io/ratings created
virtualservice.networking.istio.io/details created

virtual-service-all-v1.yaml (excerpt):

...
  - route:
    - destination:
        host: productpage/reviews/ratings/details # shorthand: the file defines a separate VirtualService for each of these hosts
        subset: v1
...

From now on, no matter how many times you refresh the page, you always land on v1.


[Figure: productpage pinned to reviews v1 (no stars)]

5.2.2 Route user jason to v2

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml
virtualservice.networking.istio.io/reviews configured

Now, after logging in as jason, every refresh lands on v2; all other users still see v1.
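
For reference, virtual-service-reviews-test-v2.yaml matches on the end-user header, which productpage sets for the logged-in user; requests that do not match fall through to v1 (excerpt from the stock sample):

  http:
  - match:
    - headers:
        end-user:
          exact: jason
    route:
    - destination:
        host: reviews
        subset: v2
  - route:
    - destination:
        host: reviews
        subset: v1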


[Figure: logged in as jason, reviews v2 (black stars)]

5.2.3 Route user admin to v3

istio-1.8.1]# cp samples/bookinfo/networking/virtual-service-reviews-test-v2.yaml samples/bookinfo/networking/virtual-service-reviews-test-v3.yaml

Edit the copy so that it matches user admin and routes to v3. After editing, the file looks like this:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - match:
    - headers:
        end-user:
          exact: admin
    route:
    - destination:
        host: reviews
        subset: v3
  - route:
    - destination:
        host: reviews
        subset: v1

Apply it:

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-test-v3.yaml
virtualservice.networking.istio.io/reviews configured

Now, after logging in as admin, every refresh lands on v3.


[Figure: logged in as admin, reviews v3 (red stars)]

5.2.4 Distribute v1:v2 traffic at an 80:20 ratio

Take a look at virtual-service-reviews-80-20.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 80
    - destination:
        host: reviews
        subset: v2
      weight: 20

Apply it:

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-80-20.yaml
virtualservice.networking.istio.io/reviews configured

Check the effect:

Out of 10 refreshes (requests), only about 2 land on v2; most land on v1. The weights are probabilistic, so the exact counts vary per run.
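
A rough way to measure the split from the command line, assuming the stock productpage HTML, where star ratings rendered by reviews v2 use the glyphicon-star CSS class and v1 renders no stars (a sketch, not part of the official walkthrough):

$ for i in $(seq 1 10); do
    curl -s http://kmaster.local.com:32550/productpage \
      | grep -q glyphicon-star && echo "v2 (stars)" || echo "v1 (no stars)"
  done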

5.2.5 Distribute v1:v2 traffic at a 90:10 ratio

Take a look at virtual-service-reviews-90-10.yaml:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v1
      weight: 90
    - destination:
        host: reviews
        subset: v2
      weight: 10

Apply it:

istio-1.8.1]# kubectl apply -f samples/bookinfo/networking/virtual-service-reviews-90-10.yaml
virtualservice.networking.istio.io/reviews configured

Check the effect: now only about 1 request in 10 lands on v2.

Now imagine gradually increasing one version's share of the traffic while reducing the other's: shifting the weights step by step like this is the basis of a canary release.
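
When the new version is fully trusted, the final cutover is just a single route with full weight. A hand-written, hypothetical example sending everything to v2 (the samples ship similar files, e.g. virtual-service-reviews-v3.yaml for v3):

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
  - route:
    - destination:
        host: reviews
        subset: v2   # hypothetical cutover: all traffic to v2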
