High-Performance Web Services and Concurrency Testing

Concurrency Testing Tools

1. Apache ab
2. Python locust (a minimal sketch follows this list)
3. Node.js loadtest
4. weighttp
5. Apache JMeter
6. wrk
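
Of these, locust is the only one driven from Python. As a minimal, illustrative sketch (not from the original post; the task and target host are placeholders, assuming locust 1.x or newer):

locustfile.py

# coding: utf-8
# minimal locust user: each simulated user repeatedly issues GET /
from locust import HttpUser, task, between

class BenchmarkUser(HttpUser):
    # short random pause between requests of one simulated user
    wait_time = between(0.0, 0.1)

    @task
    def index(self):
        self.client.get("/")

Run it headless against the server under test with something like: locust -f locustfile.py --host http://127.0.0.1:8080 --headless -u 100 -r 100 -t 30s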

Concurrent Web Development

1. Concurrent services in Python: Flask + gevent + multiprocess WSGI (without gunicorn). The cpp.la blog post credited in the code header already analyzed and benchmarked this approach; the code is copied here in case that link goes dead:

server.py

# coding: utf-8
# code by https://cpp.la, 2020-04-20
# flask + gevent + multiprocess + wsgi

from gevent import monkey
from gevent.pywsgi import WSGIServer
monkey.patch_all()

from multiprocessing import cpu_count, Process

from flask import Flask


app = Flask(__name__)

@app.route("/", methods=['GET'])
def function_benchmark():
    return "hello"

def run(MULTI_PROCESS):
    if not MULTI_PROCESS:
        # single process + coroutines
        WSGIServer(('0.0.0.0', 8080), app).serve_forever()
    else:
        # multi-process + coroutines: the listening socket is created once
        # in the parent, then each forked worker accepts on it
        mulserver = WSGIServer(('0.0.0.0', 8080), app)
        mulserver.start()

        def serve_forever():
            mulserver.start_accepting()
            mulserver._stop_event.wait()

        for i in range(cpu_count()):
            p = Process(target=serve_forever)
            p.start()

if __name__ == "__main__":
    # single process + coroutines
    run(False)
    # multi-process + coroutines
    # run(True)
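
To confirm that the multi-process variant really spreads requests across the forked workers, one option (not part of the original code, purely illustrative) is to add a route to the same Flask app that reports the worker's PID:

# hypothetical extra route for the Flask app above: each forked worker
# answers with its own process id, so repeated requests under run(True)
# should show several different PIDs
import os

@app.route("/pid", methods=['GET'])
def function_pid():
    return str(os.getpid())

Hitting http://127.0.0.1:8080/pid repeatedly (e.g. with curl in a loop) should then cycle through roughly cpu_count() distinct PIDs.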

2. Concurrent services in Rust: actix-web

use actix_web::{get, web, App, HttpServer, Responder};

// NOTE: the route is "/", but the handler still tries to extract {id} and
// {name} path parameters; that extraction fails for every request, so the
// server answers with an error status. This is why the wrk run below
// counts all responses as non-2xx.
#[get("/")]
async fn index(web::Path((id, name)): web::Path<(u32, String)>) -> impl Responder {
    format!("Hello {}! id:{}", name, id)
}

#[actix_web::main]
async fn main() -> std::io::Result<()> {
    HttpServer::new(|| App::new().service(index))
        .bind("127.0.0.1:8080")?
        .run()
        .await
}

actix test results (every response is counted as non-2xx because of the path-extraction issue noted in the code above):

wrk -c100 -t16 -d5s http://127.0.0.1:8080
Running 5s test @ http://127.0.0.1:8080
  16 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   692.92us  671.89us  27.97ms   98.85%
    Req/Sec     9.29k   829.44    20.49k    97.03%
  746546 requests in 5.10s, 58.38MB read
  Non-2xx or 3xx responses: 746546
Requests/sec: 146385.73
Transfer/sec:     11.45MB
./wrk -c1000 -t16 -d5s http://127.0.0.1:8080
Running 5s test @ http://127.0.0.1:8080
  16 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     6.37ms    1.98ms  78.58ms   93.94%
    Req/Sec     9.54k     3.48k   86.90k    92.16%
  763414 requests in 5.10s, 59.70MB read
  Non-2xx or 3xx responses: 763414
Requests/sec: 149702.90
Transfer/sec:     11.71MB

The test machine is a server with 8 cores and 16 hardware threads, and the numbers show that actix's performance is quite strong. This also raised a side question: on Linux, does socket traffic between two processes on the same host go through the network card?
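
One way to answer this empirically (a small sketch, not from the original post) is to snapshot the per-interface byte counters in /proc/net/dev before and after a local benchmark run: traffic to 127.0.0.1 is accounted against the loopback interface lo and never reaches the physical NIC.

iface_bytes.py

# coding: utf-8
# print per-interface rx/tx byte counters from /proc/net/dev;
# run before and after a local wrk run and compare "lo" with the real NIC
def iface_bytes():
    counters = {}
    with open("/proc/net/dev") as f:
        for line in f.readlines()[2:]:      # skip the two header lines
            name, rest = line.split(":", 1)
            fields = rest.split()
            # field 0 = received bytes, field 8 = transmitted bytes
            counters[name.strip()] = (int(fields[0]), int(fields[8]))
    return counters

if __name__ == "__main__":
    for name, (rx, tx) in sorted(iface_bytes().items()):
        print(f"{name}: rx={rx} tx={tx}")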

Concurrency Testing

loadtest was tried first; its documentation already discusses the performance of Apache ab and weighttp, and none of them felt ideal. wrk turned out to drive noticeably more load, so wrk was used for the tests instead. wrk supports embedded Lua scripts, handles POST, GET, and other requests, and a Lua script can also exercise several endpoints in a single run.

mul.lua, a multi-endpoint test script

-- Resource: http://czerasz.com/2015/07/19/wrk-http-benchmarking-tool-example/
-- Module instantiation
local cjson = require "cjson"
local cjson2 = cjson.new()
local cjson_safe = require "cjson.safe"

-- Initialize the pseudo random number generator
-- Resource: http://lua-users.org/wiki/MathLibraryTutorial
math.randomseed(os.time())
math.random(); math.random(); math.random()

-- Shuffle array
-- Returns a randomly shuffled array
function shuffle(paths)
  local j, k
  local n = #paths

  for i = 1, n do
    j, k = math.random(n), math.random(n)
    paths[j], paths[k] = paths[k], paths[j]
  end

  return paths
end

-- Load URL paths from the file
function load_request_objects_from_file(file)
  local data = {}
  local content

  -- Check if the file exists
  -- Resource: http://stackoverflow.com/a/4991602/325852
  local f=io.open(file,"r")
  if f~=nil then
    content = f:read("*all")

    io.close(f)
  else
    -- File is missing: return the (still empty) data table
    return data
  end

  -- Translate Lua value to/from JSON
  data = cjson.decode(content)


  return shuffle(data)
end

-- Load URL requests from file
requests = load_request_objects_from_file("./data/requests.json")

-- Check if at least one path was found in the file
if #requests <= 0 then
  print("multiplerequests: No requests found.")
  os.exit()
end

print("multiplerequests: Found " .. #requests .. " requests")

-- Initialize the requests array iterator
counter = 1

request = function()
  -- Get the next requests array element
  local request_object = requests[counter]

  -- Increment the counter
  counter = counter + 1

  -- If the counter is longer than the requests array length then reset it
  if counter > #requests then
    counter = 1
  end

  -- Return the request object with the current URL path
  return wrk.format(request_object.method, request_object.path, request_object.headers, request_object.body)
end

The script above depends on the lua-cjson package; it parses ./data/requests.json and builds the corresponding requests from it.

./data/requests.json (the path field is the URL path on the web service under test):

[
  {
    "path": "/path-1",
    "body": "{\"quote\": \"\"}",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json"
    }
  },
  {
    "path": "/path-2",
    "body": "{\"quote\": \"\"}",
    "method": "POST",
    "headers": {
      "Content-Type": "application/json"
    }
  }
]
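
When many endpoints need to be tested, this file can also be generated programmatically; a small sketch (the endpoint list and request body are placeholders):

gen_requests.py

# coding: utf-8
# generate ./data/requests.json in the format expected by mul.lua
import json
import os

endpoints = ["/path-1", "/path-2"]  # placeholder URL paths

requests = [
    {
        "path": path,
        "method": "POST",
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"quote": ""}),
    }
    for path in endpoints
]

os.makedirs("data", exist_ok=True)
with open("data/requests.json", "w") as f:
    json.dump(requests, f, indent=2)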

Test command

./wrk -c100 -t16 -d5s -s ./mul.lua http://127.0.0.1:5000

-c (connections): number of HTTP connections to keep open

-t (threads): number of threads to use

-d (duration): duration of the test

For other usage details, see the documentation and examples: https://github.com/wg/wrk

Open question: while benchmarking actix, CPU, memory, and network usage were all very low (CPU usage under 1%), which suggests the program is still far from the hardware's performance ceiling.

Linux performance analysis tool: perf
