HA Clusters and keepalived Master/Backup, Master/Master High-Availability Setup


1) A brief overview of how an HA cluster works

i). The definition of an HA cluster

A cluster is a group of computers working as a whole to provide a set of network resources to users.

Each computer system in the cluster is called a node. As the business grows, the cluster can be scaled out by adding nodes. Clusters come in three types: Load Balancing (LB), High Availability (HA), and High Performance (HP); the "HA cluster" discussed here is the high-availability type.

Cluster types: LB (lvs/nginx (http/upstream, stream/upstream)), HA, HP

ii). Measuring HA cluster availability

The availability formula for an HA cluster:

HA = MTBF / (MTBF + MTTR) * 100%

MTBF: mean time between failures
MTTR: mean time to repair
The result ranges from 0 to 1; the closer it is to 1, the more reliable the HA cluster.

Common targets: 99%, ..., 99.999%, 99.9999%

99% allows roughly 3.65 days of downtime per year; 99.9% roughly 8.8 hours; 99.99% roughly 53 minutes; 99.999% roughly 5.3 minutes.
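These downtime budgets follow directly from the availability figure; a quick arithmetic check (illustrative shell one-liner):

for a in 0.99 0.999 0.9999 0.99999; do
    # downtime per year = (1 - availability) * 365 days * 24 hours
    awk -v a="$a" 'BEGIN { printf "%.3f%% -> %.2f hours/year\n", a*100, (1-a)*365*24 }'
done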

iii). HA cluster working modes

Master/backup (active/standby) mode

The nodes run in a master/backup arrangement: the master handles the workload while the backup monitors it on standby. When the master fails, the backup takes over all of its work; once the master recovers, a preconfigured policy decides whether the service is switched back to it.

Master/master (active/active) mode

All nodes run as masters, each serving and maintaining its own workload while monitoring the others. When any one of them fails, another host takes over its entire workload so the service keeps running.

iv). How an HA cluster operates

Auto-Detect phase: software on each host probes its peer over redundant detection links, using monitoring programs and logic checks. Items checked include the host hardware (CPU and peripherals), the network, the operating system, the database engine and other applications, and the host-to-disk-array connection. To avoid false positives, the detection interval and retry count can be tuned, and the information gathered over the redundant links is logged for later maintenance reference.

Auto-Switch phase: once a host confirms its peer has failed, it keeps running its own tasks and, following the configured fault-tolerance mode, takes over the peer's preconfigured jobs and subsequent services. This switchover is known as failover.

Auto-Recovery phase: after the healthy host has taken over, the failed host can be repaired offline. Once repaired, it reconnects over the redundant link and the service can be switched back automatically. The whole recovery can be fully automatic, or, per configuration, semi-automatic or disabled. Moving resources back to a repaired node after they were transferred elsewhere is usually called failback.


2) Implementing master/backup and master/master architectures with keepalived

Test environment: 5 hosts in total
RealServer1: 192.168.10.114/24
RealServer2: 192.168.10.224/24
DirectorServer1: 192.168.10.226/24 VirtualServer: 192.168.10.10/24
DirectorServer2: 192.168.10.228/24 VirtualServer: 192.168.10.10/24

keepalived master/backup architecture

i). Prepare the RealServer environment (sync time, stop the firewall, disable SELinux)
[root@rs1 ~]#ntpdate ntp1.aliyun.com
31 Dec 23:50:12 ntpdate[1617]: step time server 120.25.115.20 offset 20.688191 sec
[root@rs1 ~]#systemctl stop firewalld.service
[root@rs1 ~]#systemctl disable firewalld.service
[root@rs1 ~]#getenforce
Disabled
ii). Configure an nginx test page (RS1 and RS2 are configured alike)
[root@rs1 ~]#yum install nginx -y
[root@rs1 ~]#vim /usr/share/nginx/html/index.html
<h1> 192.168.10.114 RS1_Server </h1>
(on RS2, its index.html reads: <h1> 192.168.10.224 RS2_Server </h1>)
[root@rs1 html]#systemctl start nginx.service
[root@rs1 html]#ss -tnl
State       Recv-Q Send-Q                   Local Address:Port                                  Peer Address:Port              
LISTEN      0      128                                  *:111                                              *:*                  
LISTEN      0      128                                  *:80
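Before adding the directors, it is worth confirming each RS serves its own page directly (expected output per the pages configured above):

[root@rs1 html]#curl http://192.168.10.114/index.html
<h1> 192.168.10.114 RS1_Server </h1>
[root@rs1 html]#curl http://192.168.10.224/index.html
<h1> 192.168.10.224 RS2_Server </h1>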
iii). Configure the LVS-DR RS-side script
[root@rs1 html]#vim RS.sh

#!/bin/bash
# Bind the VIP to lo and tune kernel ARP behaviour for LVS-DR
vip=192.168.10.10
mask=255.255.255.255

case $1 in
start)
    # arp_ignore=1: only answer ARP requests for addresses configured on the
    # incoming interface, so the RS never answers ARP for the VIP
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    # arp_announce=2: always announce the best local source address,
    # never the VIP configured on lo
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:0 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:0
    ;;
stop)
    ifconfig lo:0 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac


[root@rs1 html]#bash -n RS.sh
[root@rs1 html]#bash -x RS.sh start
[root@rs1 html]#scp RS.sh 192.168.10.224:/root/
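After the script runs, you can verify the VIP is bound to the loopback alias and the ARP sysctls took effect (values as set by the script above):

[root@rs1 html]#ip addr show lo | grep 192.168.10.10
    inet 192.168.10.10/32 brd 192.168.10.10 scope global lo:0
[root@rs1 html]#cat /proc/sys/net/ipv4/conf/all/arp_ignore
1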
iv). Configure the DirectorServer side (DR1 and DR2 are configured alike)
[root@dr1 ~]#ntpdate ntp.aliyun.com
 1 Jan 00:35:12 ntpdate[1653]: step time server 203.107.6.88 offset 20.667238 sec
[root@dr1 ~]#systemctl stop firewalld.service
[root@dr1 ~]#systemctl disable firewalld.service
[root@dr1 ~]#getenforce
Disabled
v). Configure the keepalived file

(DR2 needs matching adjustments: its own router_id and IPs, state BACKUP, and a lower priority)

[root@dr1 ~]#yum install ipvsadm keepalived -y
[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id 192.168.10.226
   vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10/24 dev ens33 label ens33:0
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            } 
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }   
    }   

}
[root@dr1 ~]#systemctl start keepalived
[root@dr1 ~]#ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:66:40:a6 brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.226/24 brd 192.168.10.255 scope global noprefixroute dynamic ens33
       valid_lft 11937sec preferred_lft 11937sec
    inet 192.168.10.10/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
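Optionally, confirm the directors are exchanging VRRP advertisements on the multicast group configured above (a sample tcpdump line; timestamp omitted):

[root@dr1 ~]#tcpdump -nn -i ens33 host 224.0.100.19
... IP 192.168.10.226 > 224.0.100.19: VRRPv2, Advertisement, vrid 1, prio 100, authtype simple, intvl 1s, length 20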

Configure and start DR2 the same way, with the adjustments noted above.

vi). Test from the client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>

Stop the keepalived service on dr1 and observe that dr2 has changed state:

[root@dr1 ~]#systemctl stop keepalived            
[root@dr2 keepalived]#systemctl status keepalived     
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: active (running) since 二 2019-01-01 02:17:37 CST; 8s ago
  Process: 49596 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 49597 (keepalived)
    Tasks: 3
   CGroup: /system.slice/keepalived.service
           ├─49597 /usr/sbin/keepalived -D
           ├─49598 /usr/sbin/keepalived -D
           └─49599 /usr/sbin/keepalived -D

1月 01 02:17:37 dr2 Keepalived_vrrp[49599]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
1月 01 02:17:42 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Transition to MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Entering MASTER STATE
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) setting protocol VIPs.
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10
1月 01 02:17:43 dr2 Keepalived_vrrp[49599]: Sending gratuitous ARP on ens33 for 192.168.10.10

Scheduling still works normally, which shows the keepalived master/backup configuration has taken effect; the same holds in the opposite direction.

[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>

keepalived master/master architecture

Make the corresponding adjustments on top of the master/backup setup above.

i). Adjust the RS-side script for the second VIP
[root@rs1 html]#cat RS2.sh 
#!/bin/bash
#
vip=192.168.10.99
mask=255.255.255.255

case $1 in
start)
    echo 1 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 1 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 2 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 2 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ifconfig lo:1 $vip netmask $mask broadcast $vip up
    route add -host $vip dev lo:1
    ;;
stop)
    ifconfig lo:1 down
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_ignore
    echo 0 > /proc/sys/net/ipv4/conf/all/arp_announce
    echo 0 > /proc/sys/net/ipv4/conf/lo/arp_announce
    ;;
*)
    echo "Usage: $(basename $0) start|stop"
    exit 1
    ;;
esac

Copy it to the RS2 host and run the script on both:

[root@rs1 html]#scp RS2.sh 192.168.10.224:/root/
[root@rs1 html]#bash -n RS2.sh 
[root@rs1 html]#bash -x RS2.sh start
ii). Add the corresponding master/backup instances to the DR configuration files

DR1's configuration file:

[root@dr1 ~]#vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id 192.168.10.226
   vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 1
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            } 
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }   
    }   

}

vrrp_instance VI_2 {
    state BACKUP
    interface ens33
    virtual_router_id 2
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 572f97b2
    }
    virtual_ipaddress {
        192.168.10.99
    }
}

virtual_server 192.168.10.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }

}
[root@dr1 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.10:80 rr
  -> 192.168.10.114:80            Route   1      0          0         
  -> 192.168.10.224:80            Route   1      0          0         
TCP  192.168.10.99:80 rr
  -> 192.168.10.114:80            Route   1      0          0         
  -> 192.168.10.224:80            Route   1      0          0 

DR2's configuration file:

[root@dr2 ~]#cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   notification_email {
    root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id 192.168.10.228
   vrrp_mcast_group4 224.0.100.19
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 1
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 571f97b2
    }
    virtual_ipaddress {
        192.168.10.10
    }
}

virtual_server 192.168.10.10 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            } 
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }   
    }   

}

vrrp_instance VI_2 {
    state MASTER
    interface ens33
    virtual_router_id 2
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 572f97b2
    }
    virtual_ipaddress {
        192.168.10.99
    }
}

virtual_server 192.168.10.99 80 {
    delay_loop 6
    lb_algo rr
    lb_kind DR
    protocol TCP

    real_server 192.168.10.114 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            }
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.10.224 80 {
        weight 1
        HTTP_GET {
            url {
              path /index.html
              status_code 200
            } 
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }   
    }   
    
}
[root@dr2 ~]#ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.10.10:80 rr
  -> 192.168.10.114:80            Route   1      0          0         
  -> 192.168.10.224:80            Route   1      0          0         
TCP  192.168.10.99:80 rr
  -> 192.168.10.114:80            Route   1      0          0         
  -> 192.168.10.224:80            Route   1      0          0 
iii). Test from the client
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.10/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
[root@CentOS6 ~]#for i in {1..20}; do curl http://192.168.10.99/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>
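To exercise the active/active failover itself, stop keepalived on either director; both VIPs should keep answering from the surviving one (a sketch of the check):

[root@dr1 ~]#systemctl stop keepalived
[root@CentOS6 ~]#for vip in 192.168.10.10 192.168.10.99; do curl -s http://$vip/index.html; done
<h1> 192.168.10.224 RS2_Server </h1>
<h1> 192.168.10.114 RS1_Server </h1>

In production, the two VIPs would typically be published as two A records of the same service name so client traffic is split across both directors.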

3) HTTP caching: how it works, and common header fields

Program execution exhibits locality:

Temporal locality: data that has been accessed is likely to be accessed again soon
Spatial locality: when a piece of data is accessed, the data around it is likely to be accessed as well

cache: hit

Hot zone: locality

Cache hit ratio: hit/(hit+miss)

Whether a response may be cached at all, and for how long, is governed by the HTTP headers described below.

How HTTP caching works

When nginx acts as a reverse proxy, caching can be enabled on it to speed things up. But if that same nginx is also the load balancer, making it carry the caching workload too will hit a bandwidth bottleneck under high concurrency. So at larger scale a dedicated caching tier is placed behind the reverse proxy: the proxy only proxies, and the cache servers only cache. The front-end proxy's upstream is then no longer the real servers but the cache servers, and the two tiers talk to each other via ordinary HTTP requests and responses.

When the proxy asks the cache tier for a resource and the cache misses, the cache server fetches the data from the backend, stores it locally if the caching policy allows, and returns it to the front end. On a hit, the cache server responds directly, skipping the round trip to the backend.
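A minimal nginx proxy-cache sketch of this idea (the paths, zone name, and upstream address are illustrative, not part of the original setup):

# /etc/nginx/conf.d/cache.conf (hypothetical; proxy_cache_path lives in the http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=webcache:10m max_size=1g inactive=60m;

server {
    listen 80;
    location / {
        proxy_pass http://192.168.10.114:80;              # backend real server (example)
        proxy_cache webcache;                             # use the zone defined above
        proxy_cache_valid 200 301 10m;                    # cache good answers for 10 minutes
        proxy_cache_valid any 1m;
        add_header X-Proxy-Cache $upstream_cache_status;  # expose HIT/MISS for debugging
    }
}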

Common header fields

- Cache-related Header Fields
    - The most important caching header fields are:
        - Expires: expiration time
            - Expires: Thu, 22 Oct 2026 06:34:30 GMT
            - Cache-Control: max-age=
            - ETag
            - If-None-Match
            - Last-Modified
            - If-Modified-Since
            - Vary
            - Age
    - How cache validity is decided:
        - Expiration time: Expires
            - HTTP/1.0
                - Expires
            - HTTP/1.1
                - Cache-Control: max-age=
                - Cache-Control: s-maxage=
        - Conditional requests:
            - Last-Modified/If-Modified-Since
            - ETag/If-None-Match
        - Expires: Thu, 13 Aug 2026 02:05:12 GMT
        - Cache-Control: max-age=315360000
        - ETag: "1ec5-502264e2ae4c0"
        - Last-Modified: Wed, 03 Sep 2014 10:00:27 GMT
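The conditional-request pair just listed can be demonstrated with curl against one of the RS nginx servers above: fetch the validators first, then revalidate with them; an unchanged resource comes back as 304 Not Modified (the exact ETag value will differ):

[root@CentOS6 ~]#curl -sI http://192.168.10.114/index.html | egrep 'ETag|Last-Modified'
ETag: "5c2a...-48"
Last-Modified: Tue, 01 Jan 2019 00:10:00 GMT
[root@CentOS6 ~]#curl -sI -H 'If-None-Match: "5c2a...-48"' http://192.168.10.114/index.html | head -1
HTTP/1.1 304 Not Modified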

4) Origin fetch (back-to-source) and common multi-level CDN caching

I. CDN origin fetch

How origin fetch works

i). When the origin's content is updated, the origin can proactively push the content to CDN nodes.

ii). Conventional CDNs pull from the origin: when a user requests a URL and the CDN node they are routed to has no cached copy of the corresponding content, or the cached copy has expired, the node fetches it from the origin. If nobody requests the content, the CDN node never goes to the origin on its own.

iii). "Origin domain" is a term of art in the CDN field. Normally a CDN fetches from the origin by IP, but if a customer origin has multiple IPs that change frequently, the CDN vendor will fetch via an origin domain name instead of reconfiguring origin IPs each time; that way, even if the origin's IPs change, the existing configuration is unaffected.

II. Common multi-level CDN caching

1. CDN concept

2. How a CDN works

(Typical CDN topology diagram)

3. CDN caching

When the browser's local cache expires, the browser sends the request to a CDN edge node. Much like the browser cache, the edge node has a caching mechanism of its own.
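You can usually see this edge cache at work in the response headers; the header names vary by vendor, but Age, X-Cache, and Via are common (a sketch against a hypothetical CDN-fronted URL):

~]# curl -sI https://cdn.example.com/logo.png | egrep -i '^(age|x-cache|via|cache-control)'
Cache-Control: max-age=86400
Age: 1325
X-Cache: HIT from edge-node-01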

4. Drawbacks of CDN caching

The CDN's traffic offloading both reduces user-perceived latency and lightens the load on the origin. The drawback is equally clear: when the site is updated but a CDN node has not refreshed its copy, users will see stale content even if they force-refresh the browser cache with Ctrl+F5, because the edge node itself has not synchronized the latest data.

5. CDN caching policy

6. CDN cache refresh

CDN edge nodes are transparent to developers. Instead of relying on users pressing Ctrl+F5 to invalidate their browser caches, a developer can call the cache-purge interface offered by the CDN vendor to invalidate content on the edge nodes. After pushing an update, the developer triggers a purge so the edge caches expire immediately and clients fetch the latest data.


5) Caching objects and reverse-proxying backend hosts with varnish

Request directives tell a caching service how it may use cached responses to answer the request:

cache-request-directive =
    "no-cache"
    "no-store"
    "max-age" "=" delta-seconds
    "max-stale" [ "=" delta-seconds ]
    "min-fresh" "=" delta-seconds
    "no-transform"
    "only-if-cached"
    cache-extension

Response directives tell a caching server how it may store the content returned by the upstream server; a short nginx illustration follows the list:

cache-response-directive =
    "public"
    "private" [ "=" <"> 1#field-name <"> ]
    "no-cache" [ "=" <"> 1#field-name <"> ], cacheable, but must be revalidated before being served to a client
    "no-store", the response must not be stored in any cache
    "no-transform"
    "must-revalidate"
    "proxy-revalidate"
    "max-age" "=" delta-seconds
    "s-maxage" "=" delta-seconds
    cache-extension
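As a concrete illustration, an origin server can emit these response directives itself; a minimal nginx sketch (the location and the values are invented for the example):

location /static/ {
    # shared caches (e.g. varnish) may keep this 600s, browsers 300s
    add_header Cache-Control "public, s-maxage=600, max-age=300";
}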

Open-source solutions (varnish is used below):

~]# varnish_reload_vcl

The management CLI is reached with varnishadm:
~]# varnishadm -S /etc/varnish/secret -T [ADDRESS:]PORT

Commands available in the CLI:
help [<command>]
ping [<timestamp>]
auth <response>
quit
banner
status
start
stop
vcl.load <configname> <filename>
vcl.inline <configname> <quoted_VCLstring>
vcl.use <configname>
vcl.discard <configname>
vcl.list
param.show [-i] [<param>]
param.set <param> <value>
panic.show
panic.clear
storage.list
vcl.show [-v] <configname>
backend.list [<backend_expression>]
backend.set_health <backend_expression> <state>
ban <field> <operator> <arg> [&& <field> <oper> <arg>]...
ban.list

VCL:

sub vcl_recv {
    if (req.method == "PRI") {
        /* We do not support SPDY or HTTP/2.0 */
        return (synth(405));
    }
    if (req.method != "GET" &&
      req.method != "HEAD" &&
      req.method != "PUT" &&
      req.method != "POST" &&
      req.method != "TRACE" &&
      req.method != "OPTIONS" &&
      req.method != "DELETE") {
        /* Non-RFC2616 or CONNECT which is weird. */
        return (pipe);
    }

    if (req.method != "GET" && req.method != "HEAD") {
        /* We only deal with GET and HEAD by default */
        return (pass);
    }
    if (req.http.Authorization || req.http.Cookie) {
        /* Not cacheable by default */
        return (pass);
    }
    return (hash);
}

VCL syntax:

The VCL Finite State Machine

Three main constructs

sub subroutine {
    ...
}

if CONDITION {
    ...
}else{
    ...
}

return(),hash_data()

VCL Built-in Functions and Keywords

Operators:

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "HIT via " + server.ip;
    } else {
        set resp.http.X-Cache = "MISS via " + server.ip;
    }
}

Variable types:

Built-in variables:

req.*: request; related to the request message sent by the client
    req.http.*
        req.http.User-Agent, req.http.Referer, ...

bereq.*: related to the HTTP request varnish sends to the BE (backend) host
    bereq.http.*

beresp.*: related to the response the BE host returns to varnish
    beresp.http.*

resp.*: related to the response varnish sends back to the client

obj.*: attributes of a cached object stored in the cache; read-only

Commonly used variables:

bereq.*, req.*:
    bereq.http.HEADERS
    bereq.request: the request method
    bereq.url: the requested URL
    bereq.proto: the protocol version of the request
    bereq.backend: the backend host to use

    req.http.Cookie: the value of the Cookie header in the client's request
    req.http.User-Agent ~ "chrome"


beresp.*, resp.*:
    beresp.http.HEADERS
    beresp.status: the response status code
    beresp.proto: the protocol version
    beresp.backend.name: the BE host's name
    beresp.ttl: the remaining cacheable lifetime of the BE host's response

obj.*
    obj.hits: the number of times this object has been hit in the cache
    obj.ttl: the object's TTL

server.*
    server.ip
    server.hostname

client.*
    client.ip

User-defined variables

Example 1: force requests for certain resources to bypass the cache lookup
sub vcl_recv {
    if (req.url ~ "(?i)^/(login|admin)") {
        return(pass);
    }
}
Example 2: for specific resource types, e.g. public images, strip the private markers and force a varnish-cacheable TTL (this logic runs in sub vcl_backend_response, where beresp is available)
if (beresp.http.cache-control !~ "s-maxage") {
    if (bereq.url ~ "(?i)\.(jpg|jpeg|png|gif|css|js)$") {
        unset beresp.http.Set-Cookie;
        set beresp.ttl = 3600s;
    }
}
Example 3 (in sub vcl_recv): append the client address to X-Forwarded-For
if (req.restarts == 0) {
    if (req.http.X-Forwarded-For) {
        set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
    } else {
        set req.http.X-Forwarded-For = client.ip;
    }
}

Trimming cached objects: purge and ban

sub vcl_purge {
    return(synth(200,"Purged"));
}
sub vcl_recv {
    if(req.method == "PURGE") {
        return(purge);
    }
    ...
}       
acl purgers {
    "127.0.0.1";
    "192.168.0.0"/24;
}

sub vcl_recv {
    # allow PURGE from localhost and 192.168.0...
    if (req.method == "PURGE") {
        if (!client.ip ~ purgers) {
            return (synth(405, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }
}

sub vcl_purge {
    set req.method = "GET";
    return (restart);
}
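With the ACL and the vcl_recv hook above in place, a purge is triggered by sending the custom PURGE method from an allowed address, e.g. from the varnish host itself (which the purgers ACL permits; path is illustrative):

~]# curl -X PURGE http://127.0.0.1/index.html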

Banning

ban <field> <operator> <arg>

Examples:

ban req.url ~ ^/javascripts

Hooked into sub vcl_recv:
if (req.method == "BAN") {
    ban("req.http.host == " + req.http.host + " && req.url == " + req.url);
    # Throw a synthetic page so the request won't go to the backend.
    return(synth(200, "Ban added"));
}

How to configure multiple backend hosts

backend default {
    .host = "172.16.100.6";
    .port = "80";
}

backend appsrv {
    .host = "172.16.100.7";
    .port = "80";
}

sub vcl_recv {
    if (req.url ~ "(?i)\.php$") {
        set req.backend_hint = appsrv;
    } else {
        set req.backend_hint = default;
    }
    ...
}

Director

backend server1 {
    .host =
    .port =
}

backend server2 {
    .host =
    .port =
}

import directors;   # the round-robin/hash directors live in this VMOD

sub vcl_init {
    new GROUP_NAME = directors.round_robin();
    GROUP_NAME.add_backend(server1);
    GROUP_NAME.add_backend(server2);
}

sub vcl_recv {
    set req.backend_hint = GROUP_NAME.backend();
}

Cookie-based session stickiness (the hash director, also from the directors VMOD)

sub vcl_init {
    new h = directors.hash();
    h.add_backend(one, 1);   // backend 'one' with weight '1'
    h.add_backend(two, 1);   // backend 'two' with weight '1'
}

sub vcl_recv {
    // pick a backend based on the cookie header of the client
    set req.backend_hint = h.backend(req.http.cookie);
}

BE Health Check

backend BE_NAME {
    .host =
    .probe = {
        .url =
        .timeout =
        .interval =
        .window =
        .threshold =
    }
}

Two ways to configure health checks:

probe PB_NAME {
    ...
}
backend NAME {
    .probe = PB_NAME;
    ...
}

backend NAME {
    .probe = {
        ...
    }
}

Example:

probe check {
    .url = "/healthcheck.html";
    .timeout = 1s;
    .interval = 2s;
    .window = 5;
    .threshold = 4;
}

backend default {
    .host = "10.1.0.68";
    .port = "80";
    .probe = check;
}

backend appsrv {
    .host = "10.1.0.69";
    .port = "80";
    .probe = check;
}
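With probes attached, backend health can be checked from the management CLI using the backend.list command from the command list earlier (output format varies by varnish version; this sketch assumes the probe above, window 5 / threshold 4):

~]# varnishadm backend.list
Backend name                   Refs   Admin      Probe
default(10.1.0.68,,80)         1      probe      Healthy 5/5
appsrv(10.1.0.69,,80)          1      probe      Healthy 5/5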


Setting backend host properties

backend BE_NAME {
    ...
    .connect_timeout = 0.5s;
    .first_byte_timeout = 20s;
    .between_bytes_timeout = 5s;
    .max_connections = 50;
}

Varnish runtime parameters:

Thread-related parameters:

Inside a thread pool, each request is handled by one thread, so the maximum number of worker threads determines varnish's concurrent response capacity.
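These thread parameters can be inspected and tuned live through the management CLI shown earlier (param.show / param.set); the parameter names are varnish's own, and the value below is only illustrative:

~]# varnishadm param.show thread_pools
~]# varnishadm param.show thread_pool_min
~]# varnishadm param.set thread_pool_max 2000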

Timer-related parameters:

The varnish log area (varnishstat reads its counters from the shared-memory log):

~]# varnishstat -1 -f MAIN.cache_hit -f MAIN.cache_miss
~]# varnishstat -l -f MAIN -f MEMPOOL
