20171107 Tomcat (Part 2)
- Session persistence
- Tomcat Cluster
- Session stickiness in Tomcat
- Session clustering in Tomcat
- Session server for Tomcat
一、Session persistence
Three ways to implement it:
- session sticky: session stickiness
    - source_ip: bind by source IP address
        - nginx: ip_hash
        - haproxy: source
        - lvs: sh
    - cookie: bind by cookie
        - nginx: hash
        - haproxy: cookie
- session cluster: replicate sessions among the nodes
    - delta session manager
- session server: a dedicated session store
    - redis (store), memcached (cache)
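The source-IP binding above can be sketched in a few lines: the scheduler hashes the client address to pick a backend, so the same client always lands on the same server. A minimal sketch (the hash is illustrative; nginx's ip_hash uses its own internal algorithm):

```python
# Source-IP session stickiness sketch: hash the client IP to choose a
# backend, so a given client is always routed to the same server.
# (Illustrative only; nginx's ip_hash uses a different internal hash.)
import hashlib

BACKENDS = ["192.168.136.130:8080", "192.168.136.131:8080"]

def pick_backend(client_ip: str) -> str:
    # A stable digest of the IP always yields the same backend index.
    digest = hashlib.md5(client_ip.encode()).digest()
    return BACKENDS[digest[0] % len(BACKENDS)]

# The same client IP is always mapped to the same backend.
assert pick_backend("10.0.0.7") == pick_backend("10.0.0.7")
```

This is why source-IP binding breaks down behind a NAT gateway: many clients share one source address and all stick to a single backend.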
二、Tomcat Cluster
With httpd or nginx as the scheduler, there are three setups:
- nginx + tomcat cluster
- httpd + tomcat cluster, over HTTP
    - httpd: mod_proxy, mod_proxy_http, mod_proxy_balancer
    - tomcat cluster: http connector
- httpd + tomcat cluster, over AJP
    - httpd: mod_proxy, mod_proxy_ajp, mod_proxy_balancer
    - tomcat cluster: ajp connector
(一)nginx as the scheduler
(1)Configure the tomcat cluster
ntpdate 172.18.0.1
// name the two hosts node1.hellopeiyang.com and node2.hellopeiyang.com respectively
hostnamectl set-hostname node1.hellopeiyang.com
hostnamectl set-hostname node2.hellopeiyang.com
yum install java-1.8.0-openjdk-devel tomcat tomcat-webapps tomcat-admin-webapps tomcat-docs-webapp
systemctl start tomcat
mkdir -pv /usr/share/tomcat/webapps/myapp/WEB-INF
vim /usr/share/tomcat/webapps/myapp/index.jsp
// contents on node1
<%@ page language="java" %>
<html>
<head><title>TomcatA</title></head>
<body>
<h1><font color="red">TomcatA.magedu.com</font></h1>
<table align="center" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
// contents on node2
<%@ page language="java" %>
<html>
<head><title>TomcatB</title></head>
<body>
<h1><font color="blue">TomcatB.magedu.com</font></h1>
<table align="center" border="1">
<tr>
<td>Session ID</td>
<% session.setAttribute("magedu.com","magedu.com"); %>
<td><%= session.getId() %></td>
</tr>
<tr>
<td>Created on</td>
<td><%= session.getCreationTime() %></td>
</tr>
</table>
</body>
</html>
Test the tomcat service on node1 and node2; both run successfully
![](https://img.haomeiwen.com/i6851458/d93d110bfb4287db.png)
![](https://img.haomeiwen.com/i6851458/dc8a544f9f9b4b6e.png)
(2)Configure the nginx scheduler
ntpdate 172.18.0.1
yum install nginx
systemctl start nginx
hostnamectl set-hostname www.hellopeiyang.com
vim /etc/nginx/nginx.conf
upstream tcsrvs {
    server 192.168.136.130:8080;
    server 192.168.136.131:8080;
}
server {
    listen 80 default_server;
    server_name www.hellopeiyang.com;
    root /usr/share/nginx/html;
    location / {
        proxy_pass http://tcsrvs;
    }
}
nginx -t
nginx -s reload
Access the web service through the nginx server; requests are scheduled successfully
![](https://img.haomeiwen.com/i6851458/cd28cb8a9088bb5b.png)
![](https://img.haomeiwen.com/i6851458/8bf24152746e9bc6.png)
(二)httpd as the scheduler, over the HTTP protocol
- BalancerMember: defines a backend server
    Syntax: BalancerMember [balancerurl] url [key=value [key=value ...]]
    - status: server status
        - D: the worker is disabled and accepts no requests
        - H: hot standby; serves only when all other workers are unavailable
    - loadfactor: load factor, i.e. the weight
- lbmethod: scheduling algorithm
    - byrequests: the default; schedules by request count according to the configured weights, similar to wrr
    - bytraffic: schedules by traffic volume
    - bybusyness: schedules by the number of active connections, similar to lc
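The effect of byrequests with loadfactor weights is weighted round-robin. A minimal sketch of that behaviour, assuming a smooth-credit variant (mod_proxy_balancer's internal bookkeeping differs, but the resulting ratio is the same):

```python
# Sketch of weighted request scheduling (like lbmethod=byrequests with
# loadfactor): each worker accumulates credit proportional to its weight,
# and the worker with the most credit serves the next request.
workers = {"node1": 2, "node2": 1}  # name -> loadfactor (weight)
credit = {name: 0 for name in workers}

def next_worker() -> str:
    for name, weight in workers.items():
        credit[name] += weight          # everyone earns its weight
    chosen = max(credit, key=credit.get)  # richest worker serves
    credit[chosen] -= sum(workers.values())
    return chosen

picks = [next_worker() for _ in range(6)]
# node1 (weight 2) serves twice as many requests as node2 (weight 1).
assert picks.count("node1") == 4 and picks.count("node2") == 2
```

Over any full cycle the request counts match the loadfactor ratio, which is exactly what the balancer-manager statistics show later in this article.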
Scheduling through httpd over HTTP:
- The tomcat cluster stays unchanged; now configure the httpd service
ntpdate 172.18.0.1
yum install httpd
vim /etc/httpd/conf.d/httpd-tomcat.conf
<Proxy balancer://tcsrvs>
    BalancerMember http://192.168.136.130:8080
    BalancerMember http://192.168.136.131:8080
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName www.hellopeiyang.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / balancer://tcsrvs/
    ProxyPassReverse / balancer://tcsrvs/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
systemctl start httpd
Testing shows requests are scheduled successfully
![](https://img.haomeiwen.com/i6851458/8e036dc6417ea0a5.png)
(三)httpd as the scheduler, over the AJP protocol
- The tomcat cluster stays unchanged; now configure the httpd service
mv /etc/httpd/conf.d/httpd-tomcat.conf /etc/httpd/conf.d/httpd-tomcat.conf.bak
vim /etc/httpd/conf.d/ajp-tomcat.conf
<Proxy balancer://tcsrvs>
    BalancerMember ajp://192.168.136.130:8009 loadfactor=2
    BalancerMember ajp://192.168.136.131:8009 loadfactor=1
    ProxySet lbmethod=byrequests
</Proxy>
<VirtualHost *:80>
    ServerName www.hellopeiyang.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / balancer://tcsrvs/
    ProxyPassReverse / balancer://tcsrvs/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
httpd -t
systemctl restart httpd
Testing confirms requests are scheduled according to the weights
![](https://img.haomeiwen.com/i6851458/2bf3495768cd345b.png)
The scheduler performs health checks: when a backend server fails and can no longer serve,
it is taken out of rotation. When tomcat on node2 is stopped, all requests are automatically scheduled to node1
![](https://img.haomeiwen.com/i6851458/f78581b5899fcb4f.png)
(四)Enable httpd's balancer management interface
vim /etc/httpd/conf.d/ajp-tomcat.conf // append the following lines
<Location /balancer-manager>
    SetHandler balancer-manager
    ProxyPass !
    # for demonstration only; in production, restrict access and require authentication
    Require all granted
</Location>
systemctl restart httpd
![](https://img.haomeiwen.com/i6851458/48fc23fa168b08f1.png)
三、Session stickiness in Tomcat (session sticky)
- httpd implements session stickiness via cookies
- Principle: when a client's first request has been scheduled to a backend host, the scheduler adds a cookie to the response identifying which host served it. Every later request from that client carries the cookie, and the scheduler uses it to route the request back to that same host, achieving session stickiness.
- Implementation: on the httpd server, give each balancer member a distinct route ID carried in a cookie
vim /etc/httpd/conf.d/ajp-tomcat.conf
# add the custom cookie attribute ROUTEID
Header add Set-Cookie "ROUTEID=.%{BALANCER_WORKER_ROUTE}e;path=/" env=BALANCER_ROUTE_CHANGED
<Proxy balancer://tcsrvs>
    # route= sets the value that the ROUTEID cookie will carry for each member
    BalancerMember ajp://192.168.136.130:8009 loadfactor=2 route=TomcatA
    BalancerMember ajp://192.168.136.131:8009 loadfactor=1 route=TomcatB
    ProxySet lbmethod=byrequests
    # use ROUTEID as the basis for session stickiness
    ProxySet stickysession=ROUTEID
</Proxy>
<VirtualHost *:80>
    ServerName www.hellopeiyang.com
    ProxyVia On
    ProxyRequests Off
    ProxyPreserveHost On
    <Proxy *>
        Require all granted
    </Proxy>
    ProxyPass / balancer://tcsrvs/
    ProxyPassReverse / balancer://tcsrvs/
    <Location />
        Require all granted
    </Location>
</VirtualHost>
<Location /balancer-manager>
    SetHandler balancer-manager
    ProxyPass !
    Require all granted
</Location>
httpd -t
systemctl reload httpd
Tested in a browser: from the client's second request onward, every request carries the cookie containing the configured ROUTEID
![](https://img.haomeiwen.com/i6851458/68f18935e61345b0.png)
- Load balancing with session stickiness over HTTP is almost identical to the AJP setup; just switch the protocol to http and adjust the port number, so it is not repeated here
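The stickiness mechanism can be sketched: on the first response the scheduler sets ROUTEID, and on later requests it reads ROUTEID from the Cookie header and routes accordingly. A simplified illustration with hypothetical helper names (mod_proxy_balancer's real parsing and failover handling are more involved):

```python
# Sketch of cookie-based stickiness: route by the ROUTEID cookie if the
# request carries one, otherwise pick a default backend and set the cookie.
ROUTES = {"TomcatA": "ajp://192.168.136.130:8009",
          "TomcatB": "ajp://192.168.136.131:8009"}

def parse_cookies(header: str) -> dict:
    # "k1=v1; k2=v2" -> {"k1": "v1", "k2": "v2"}
    pairs = (p.split("=", 1) for p in header.split(";") if "=" in p)
    return {k.strip(): v.strip() for k, v in pairs}

def route(cookie_header: str, default_route: str = "TomcatA"):
    route_id = parse_cookies(cookie_header).get("ROUTEID", default_route)
    # Only set the cookie when the client does not have one yet.
    set_cookie = (None if "ROUTEID" in cookie_header
                  else f"ROUTEID={route_id}; path=/")
    return ROUTES.get(route_id, ROUTES[default_route]), set_cookie

backend, cookie = route("")  # first request: the cookie gets set
assert cookie == "ROUTEID=TomcatA; path=/"
backend, cookie = route("JSESSIONID=abc; ROUTEID=TomcatB")
assert backend == "ajp://192.168.136.131:8009" and cookie is None
```

Unlike source-IP binding, this still works when many clients sit behind one NAT address, because each browser carries its own cookie.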
四、Session clustering in Tomcat (session cluster)
- Principle: unlike session stickiness, which is configured on the scheduler, a session cluster is configured on the backend tomcat servers themselves: the cluster members synchronize session data with one another over multicast, which keeps every session available on every node.
- Implementation: apply the following configuration on every tomcat server
// enable clustering; place the following configuration inside <Engine> or <Host>
vim /etc/tomcat/server.xml
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
  <Manager className="org.apache.catalina.ha.session.DeltaManager"
           expireSessionsOnShutdown="false"
           notifyListenersOnReplication="true"/>
  <Channel className="org.apache.catalina.tribes.group.GroupChannel">
    <!-- the multicast address must be the same on every node -->
    <Membership className="org.apache.catalina.tribes.membership.McastService"
                address="228.58.5.8"
                port="45564"
                frequency="500"
                dropTime="3000"/>
    <!-- address is each server's own IP -->
    <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
              address="192.168.136.132"
              port="4000"
              autoBind="100"
              selectorTimeout="5000"
              maxThreads="6"/>
    <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
      <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
    </Sender>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
    <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
  </Channel>
  <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
  <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
  <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
            tempDir="/tmp/war-temp/"
            deployDir="/tmp/war-deploy/"
            watchDir="/tmp/war-listen/"
            watchEnabled="false"/>
  <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
  <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
// make sure the Engine's jvmRoute attribute is set correctly (a different value on each server)
<Engine name="Catalina" defaultHost="localhost" jvmRoute="TomcatA">
// configure the webapp
vim /usr/share/tomcat/webapps/myapp/WEB-INF/web.xml
<distributable/> <!-- add under the <web-app> element -->
// restart the tomcat service
systemctl restart tomcat
Either nginx or httpd can serve as the scheduler. Tested in a browser: the session cluster works
![](https://img.haomeiwen.com/i6851458/cddd2510246dac61.png)
![](https://img.haomeiwen.com/i6851458/9ab0567a065f48ca.png)
The test shows that although the scheduler distributes requests across both backend tomcat servers, the two servers serve the same session ID, proving that the cluster achieves session persistence
五、Session server for Tomcat (session server)
(一)memcached: a high-performance, distributed in-memory object caching system
(1)Features:
- k/v cache: stores only serializable data, as key/value items
- Half the intelligence lives in the client (programs written against the memcached API), half in the server
- Distributed cache: the cluster nodes do not communicate with one another
- Request routing in the distributed system: modulo hashing or consistent hashing; O(1) complexity
- Cleaning up expired items:
    - cache full: LRU (least recently used) eviction
    - item expired: lazy expiration (expired items are marked rather than deleted; new items simply overwrite them, reducing overhead)
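The two routing methods above differ in what happens when a node is added: with modulo hashing most keys are remapped, while consistent hashing moves only the keys that fall into the new node's arc of the ring. A toy comparison (the ring with virtual nodes is a sketch, not real memcached client code):

```python
# Compare key remapping after adding a node: modulo hashing vs. a toy
# consistent-hash ring with virtual nodes.
import hashlib

def h(s: str) -> int:
    return int(hashlib.md5(s.encode()).hexdigest(), 16)

def mod_route(key: str, nodes: list) -> str:
    return nodes[h(key) % len(nodes)]

def ring_route(key: str, nodes: list, vnodes: int = 64) -> str:
    # Each node is placed on the ring at many virtual points; a key is
    # owned by the first node point clockwise from the key's hash.
    ring = sorted((h(f"{n}#{i}"), n) for n in nodes for i in range(vnodes))
    kh = h(key)
    for point, node in ring:
        if kh <= point:
            return node
    return ring[0][1]  # wrap around the ring

keys = [f"session-{i}" for i in range(300)]
old, new = ["n1", "n2", "n3"], ["n1", "n2", "n3", "n4"]
moved_mod = sum(mod_route(k, old) != mod_route(k, new) for k in keys)
moved_ring = sum(ring_route(k, old) != ring_route(k, new) for k in keys)
# Consistent hashing remaps far fewer keys (roughly 1/4 here, versus
# roughly 3/4 for modulo hashing).
assert moved_ring < moved_mod
```

For a session store this matters: every remapped key is a lost session, so consistent hashing loses far fewer sessions when the memcached pool changes.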
(2)Installation and configuration:
- Provided directly by the CentOS 7 base repository:
    yum install memcached
- Listening ports: 11211/tcp, 11211/udp
- File layout:
    - main program: /usr/bin/memcached
    - configuration file: /etc/sysconfig/memcached
    - unit file: memcached.service
- Protocol: the memcached protocol
    - text format: easy to read, less efficient
    - binary format: more efficient
(3)Commands:
- statistics: stats, stats items, stats slabs, stats sizes
- storage: set, add, replace, append, prepend
    Command format: <command name> <key> <flags> <exptime> <bytes> <cas unique>
- retrieval: get, delete, incr/decr
- flush: flush_all
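The storage command format above can be made concrete by assembling the raw bytes of a text-protocol request. A small sketch (it only formats the request; connecting to a live memcached on port 11211 is not shown):

```python
# Build a memcached text-protocol storage request:
#   <command name> <key> <flags> <exptime> <bytes>\r\n<data block>\r\n
def storage_request(command: str, key: str, value: bytes,
                    flags: int = 0, exptime: int = 0) -> bytes:
    header = f"{command} {key} {flags} {exptime} {len(value)}\r\n"
    return header.encode() + value + b"\r\n"

# A set command that stores 8 bytes under the key "session-abc",
# never expiring (exptime 0):
req = storage_request("set", "session-abc", b"user=tom")
assert req == b"set session-abc 0 0 8\r\nuser=tom\r\n"
```

The same five-field header is shared by set, add, replace, append and prepend; only the command name changes.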
(4)Common memcached options:
- -m <num>: maximum memory for the cache, 64MB by default
- -c <num>: maximum number of concurrent connections, 1024 by default
- -u <username>: run the daemon as the given user
- -l <ip_addr>: IP address to listen on; all local addresses by default
- -p <num>: TCP port to listen on, 11211 by default
- -U <num>: UDP port to listen on, 11211 by default; 0 disables UDP
- -M: when memory is exhausted, refuse new items instead of evicting with LRU, until space becomes available
- -f <factor>: growth factor; 1.25 by default
- -t <threads>: number of threads for handling client requests
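To illustrate the -f option: slab chunk sizes form a geometric sequence, each slab class being the previous size times the growth factor. A rough sketch (the 96-byte base and the cap are illustrative; real memcached also rounds and aligns chunk sizes):

```python
# Sketch of slab class growth under -f: each class's chunk size is the
# previous one multiplied by the growth factor.
def slab_sizes(base: int = 96, factor: float = 1.25,
               max_size: int = 1024) -> list:
    sizes, size = [], float(base)
    while size <= max_size:
        sizes.append(int(size))
        size *= factor
    return sizes

sizes = slab_sizes()
# With the default factor 1.25, each class is ~25% larger than the last:
# 96, 120, 150, 187, ...
assert sizes[:3] == [96, 120, 150]
```

A smaller factor gives more slab classes and less wasted space per item, at the cost of more memory fragmentation across classes.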
(二)A session server backed by memcached
(1)Environment:
Use memcached-session-manager as the session manager; the corresponding jar packages must be downloaded.
Project page: https://github.com/magro/memcached-session-manager
(2)Steps:
- Download the relevant jars from the project page and copy them into tomcat's library directory
cp memcached-session-manager-${version}.jar /usr/share/java/tomcat/
cp memcached-session-manager-tc${6,7,8}-${version}.jar /usr/share/java/tomcat/
cp spymemcached-${version}.jar /usr/share/java/tomcat/
cp msm-kryo-serializer-${version}.jar /usr/share/java/tomcat/
cp kryo-serializers-${0.34+}.jar /usr/share/java/tomcat/
cp kryo-${3.x}.jar /usr/share/java/tomcat/
cp minlog.${version}.jar /usr/share/java/tomcat/
cp reflectasm.${version}.jar /usr/share/java/tomcat/
cp asm-${5.x}.jar /usr/share/java/tomcat/
cp objenesis-${2.x}.jar /usr/share/java/tomcat/
- On a host of each of the two tomcats, define a Context for testing and create a session manager inside it
vim /etc/tomcat/server.xml
<Context path="/myapp" docBase="/usr/share/tomcat/webapps/myapp" reloadable="true">
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
           memcachedNodes="n1:192.168.136.131:11211,n2:192.168.136.132:11211"
           failoverNodes="n2"
           requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
           transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory"
           />
</Context>
- The test case is still the myapp application used earlier; start the memcached and tomcat services
systemctl start memcached
systemctl start tomcat
- On the scheduler, configure forwarding with nginx or httpd as described above, so it is not repeated here
Tested in a browser: the session ID remains unchanged across the load-balanced backends
![](https://img.haomeiwen.com/i6851458/779f4455b87f6e56.png)
![](https://img.haomeiwen.com/i6851458/6677d4160dc1475f.png)