Reverse-proxying Tornado HTTP requests with Nginx

2019-11-27  逐风细雨

Notes from one reverse-proxy exercise:
A single web service handles a concurrency of around 2000; with Nginx reverse-proxying 4 web services, concurrency reaches 7000+, and the web server's CPU usage sits at about 70%.
Web server (a personal PC) environment:


(screenshot of the web server's environment details omitted)

Reverse proxy server:
CentOS 7
4 cores (Intel(R) Xeon(R) CPU E5-2630 v3 @ 2.40GHz), 32 GB RAM

Web server: Python 3.7.3 + Tornado 6.0.3
The Tornado script:

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    app = make_app()
    app.listen(21985)
    tornado.ioloop.IOLoop.current().start()
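
The script above listens on a single port, while the upstream block in the Nginx configuration below expects four backends on ports 21982-21985. One way to run four copies of the same app is to take the port from the command line; this is a minimal sketch, and the command-line argument and the file name app.py are my assumptions, not part of the original script:

import sys

import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, world")

def make_app():
    return tornado.web.Application([
        (r"/", MainHandler),
    ])

if __name__ == "__main__":
    # Start one instance per port, e.g.:
    #   python app.py 21982 &  python app.py 21983 &
    #   python app.py 21984 &  python app.py 21985 &
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 21985
    app = make_app()
    app.listen(port)
    tornado.ioloop.IOLoop.current().start()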

Nginx configuration:

user  nobody;
worker_processes 4;
worker_cpu_affinity 00000001 00000010 00000100 00001000;  # pin one worker to each of the 4 cores
error_log  logs/error.log;
pid        logs/nginx.pid;
worker_rlimit_nofile 102400;

events {
    use epoll;
    worker_connections  51200;
}

http {
    include       mime.types;
    default_type  application/octet-stream;
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" $upstream_cache_status $request_time';

    #include proxy.conf;
    access_log  logs/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    #keepalive_timeout  0;
    keepalive_timeout  45;
    tcp_nodelay     on;
    
    gzip    on;
    gzip_min_length   1k;
    gzip_buffers   4 8k;
    gzip_http_version  1.1;
    gzip_types   text/plain application/x-javascript text/css  application/xml;
    gzip_disable "MSIE [1-6]\.";


    proxy_redirect          off; 
    proxy_set_header        Host $host; 
    proxy_set_header        X-Real-IP $remote_addr;  
    client_max_body_size    100m; 
    client_body_buffer_size 128k; 

    proxy_connect_timeout 300;
    proxy_read_timeout 300;
    proxy_send_timeout 300;
    proxy_buffer_size 64k;
    proxy_buffers 4 64k;
    proxy_busy_buffers_size 128k;
    proxy_temp_file_write_size 128k;
        
    #proxy_temp_path  /tmp/nginx_temp_cache;
    #proxy_cache_path /tmp/nginx_cache/cache levels=1:2 keys_zone=cache_one:200m inactive=1d max_size=30g;
    
    upstream ws_server {
        server 192.168.52.235:21982;
        server 192.168.52.235:21983;
        server 192.168.52.235:21984;
        server 192.168.52.235:21985;
    }


    server {
        listen       21980;
        server_name  localhost;
        charset utf-8;
        client_max_body_size 1024M;

        location /test_ws/ {
            proxy_pass http://ws_server/;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
        }
    }
}
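
Before benchmarking, it is worth confirming that a request through the proxy actually reaches a Tornado backend. A minimal check (a sketch; the proxy address 192.168.52.21:21980 is taken from the wrk command below):

import urllib.request

# The trailing slash in "proxy_pass http://ws_server/" strips the /test_ws/
# prefix, so the Tornado backend sees the request as "/".
resp = urllib.request.urlopen("http://192.168.52.21:21980/test_ws/", timeout=5)
body = resp.read().decode()
print(resp.status, body)   # expect: 200 Hello, world
assert body == "Hello, world"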

The wrk benchmark command: ./wrk -c 200 -t 20 -d 100s http://192.168.52.21:21980/test_ws/ --latency
Test results:

[root@lrplatformtokafkaserver wrk-4.1.0]# ./wrk -c 200 -t 20 -d 100s http://192.168.52.21:21980/test_ws/ --latency
Running 2m test @ http://192.168.52.21:21980/test_ws/
  20 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    29.83ms   20.48ms 142.47ms   69.05%
    Req/Sec   356.55     51.31   760.00     70.01%
  Latency Distribution
     50%   25.72ms
     75%   41.67ms
     90%   59.14ms
     99%   89.28ms
  710405 requests in 1.67m, 151.73MB read
Requests/sec:   7098.97
Transfer/sec:      1.52MB
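
A rough cross-check of the reported throughput (my arithmetic, not part of the wrk output):

# 710405 requests over the ~100 s run (-d 100s, reported by wrk as 1.67m)
total_requests = 710405
duration_seconds = 100
print(total_requests / duration_seconds)  # ~7104 req/s, close to the reported 7098.97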