Ongoing Shadowsocks Optimization
2018-01-09
治部少辅
Enabling TCP Fast Open
If both ends support TCP Fast Open (TFO), enabling it can reduce latency. On the server there are two ways to turn it on: either add fast_open set to true in config.json, or pass --fast-open when running ssserver. Then enable TFO in the kernel by running the following on the command line (as root):
echo 3 > /proc/sys/net/ipv4/tcp_fastopen
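For reference, a minimal config.json sketch with fast_open enabled might look like this (the address, port, password, and cipher shown here are placeholders, not values from this article):

```json
{
    "server": "0.0.0.0",
    "server_port": 8388,
    "password": "your-password",
    "method": "aes-256-cfb",
    "fast_open": true
}
```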
Further Optimization
The following optimization applies to every Shadowsocks version. Create the file /etc/sysctl.d/local.conf and add the following content to it:
# max open files
fs.file-max = 51200
# max read buffer
net.core.rmem_max = 67108864
# max write buffer
net.core.wmem_max = 67108864
# default read buffer
net.core.rmem_default = 65536
# default write buffer
net.core.wmem_default = 65536
# max processor input queue
net.core.netdev_max_backlog = 4096
# max backlog
net.core.somaxconn = 4096
# resist SYN flood attacks
net.ipv4.tcp_syncookies = 1
# reuse timewait sockets when safe
net.ipv4.tcp_tw_reuse = 1
# turn off fast timewait sockets recycling
net.ipv4.tcp_tw_recycle = 0
# short FIN timeout
net.ipv4.tcp_fin_timeout = 30
# short keepalive time
net.ipv4.tcp_keepalive_time = 1200
# outbound port range
net.ipv4.ip_local_port_range = 10000 65000
# max SYN backlog
net.ipv4.tcp_max_syn_backlog = 4096
# max timewait sockets held by system simultaneously
net.ipv4.tcp_max_tw_buckets = 5000
# turn on TCP Fast Open on both client and server side
net.ipv4.tcp_fastopen = 3
# TCP receive buffer
net.ipv4.tcp_rmem = 4096 87380 67108864
# TCP write buffer
net.ipv4.tcp_wmem = 4096 65536 67108864
# turn on path MTU discovery
net.ipv4.tcp_mtu_probing = 1
# for high-latency network
net.ipv4.tcp_congestion_control = hybla
# for low-latency network, use cubic instead
# net.ipv4.tcp_congestion_control = cubic
Then run
sysctl --system
to apply the settings above. Finally, in the startup script, add the following line before the ssserver command:
ulimit -n 51200
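Once sysctl --system has run, you can read the tunables back from /proc as a quick sanity check (the values printed reflect your current kernel state; on a server configured as above they should match local.conf):

```shell
# read back two of the tunables to confirm they were applied;
# with the local.conf above these should be 3 and 4096
cat /proc/sys/net/ipv4/tcp_fastopen
cat /proc/sys/net/core/somaxconn
```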
These settings consume noticeably more memory, but in exchange they buy a substantial increase in speed.