Nginx Load Balancing Configuration and Performance Tuning

Preface

After the project went live, traffic kept growing and a single server could no longer keep up.

So I put Nginx in front as a load balancer to spread requests across several servers.

These are my notes on the configuration process and a few tuning tricks.

Basic Setup

Installing Nginx

# Ubuntu/Debian
sudo apt update
sudo apt install nginx

# CentOS/RHEL
sudo yum install nginx

# Start Nginx and enable it at boot
sudo systemctl start nginx
sudo systemctl enable nginx

Basic Load Balancing Configuration

http {
    # upstream server pool
    upstream backend {
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;
        server 192.168.1.103:8080;
    }

    server {
        listen 80;
        server_name example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}

Load Balancing Strategies

Round robin (default)

Each request is handed to the next server in turn, in arrival order.

upstream backend {
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;
}

Weighted round robin

Distribute requests according to each server's capacity.

upstream backend {
    server 192.168.1.101:8080 weight=3;  # the more capable server gets more requests
    server 192.168.1.102:8080 weight=2;
    server 192.168.1.103:8080 weight=1;
}

IP hash

Requests are assigned by client IP, so the same IP always lands on the same server.

upstream backend {
    ip_hash;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;
}

Useful when sessions need to stick to one server.

Least connections

Each request goes to the server with the fewest active connections.

upstream backend {
    least_conn;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;
}

Generic hash

Hash requests on a custom key.

upstream backend {
    hash $request_uri consistent;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;
}
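
The hash key can be any variable or combination of variables. For example, hashing on a session cookie gives cookie-based stickiness without depending on the client IP; a minimal sketch, assuming the application keeps its session ID in a cookie named sessionid (the cookie name is just a placeholder):

upstream backend {
    # $cookie_sessionid reads the "sessionid" cookie from the request
    hash $cookie_sessionid consistent;
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;
}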

Health Checks

Passive health checks

upstream backend {
    server 192.168.1.101:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.102:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.103:8080 max_fails=3 fail_timeout=30s;
}
  • max_fails: number of failed attempts before the server is marked unavailable
  • fail_timeout: how long the server stays marked unavailable (and the window over which failures are counted)
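
What counts as a "failed attempt" is governed by proxy_next_upstream in the proxied location, which also decides whether a failed request is retried on another server. A small sketch (the status codes listed are just one reasonable choice):

location / {
    proxy_pass http://backend;
    # connection errors, timeouts and these status codes count as failures
    # and cause the request to be retried on the next upstream server
    proxy_next_upstream error timeout http_502 http_503 http_504;
    proxy_next_upstream_tries 2;
}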

Active health checks (requires NGINX Plus or a third-party module)

upstream backend {
    server 192.168.1.101:8080;
    server 192.168.1.102:8080;
    server 192.168.1.103:8080;

    # requires the third-party nginx_upstream_check_module
    check interval=3000 rise=2 fall=3 timeout=1000 type=http;
    check_http_send "HEAD /health HTTP/1.0\r\n\r\n";
    check_http_expect_alive http_2xx http_3xx;
}

Backup servers

upstream backend {
    server 192.168.1.101:8080;
    server 192.168.1.102:8080 backup;  # standby server
}

As long as the primary servers are healthy, the backup receives no traffic; it only takes requests when the primaries are down.

Performance Tuning

1. Worker processes

# match the number of CPU cores
worker_processes auto;

# pin worker processes to CPU cores
worker_cpu_affinity auto;

2. Connections

events {
    # maximum connections per worker process
    worker_connections 10240;

    # use epoll on Linux
    use epoll;

    # let a worker accept multiple new connections at once
    multi_accept on;
}
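
With worker_connections set this high, the per-worker file-descriptor limit usually has to be raised as well, otherwise workers run into "too many open files". A minimal sketch for the main (top-level) context; 65535 is just an example value:

# main context, outside the events/http blocks
worker_rlimit_nofile 65535;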

3. Caching

http {
    # cache storage: path, directory levels, shared key zone, size and expiry limits
    proxy_cache_path /path/to/cache levels=1:2 keys_zone=my_cache:10m max_size=1g inactive=60m;

    server {
        location / {
            proxy_cache my_cache;
            proxy_cache_valid 200 304 10m;
            proxy_cache_valid 301 1h;
            proxy_cache_valid any 1m;

            proxy_cache_key "$scheme$request_method$host$request_uri";

            proxy_cache_use_stale error timeout invalid_header updating;

            add_header X-Cache-Status $upstream_cache_status;

            proxy_pass http://backend;
        }
    }
}
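
Two optional cache directives worth considering (the values here are illustrative, not a recommendation): proxy_cache_lock collapses concurrent misses for the same key into a single upstream request, and proxy_cache_min_uses avoids caching one-off URLs.

location / {
    proxy_cache my_cache;
    proxy_cache_lock on;      # only one request populates a given cache key at a time
    proxy_cache_min_uses 2;   # cache an item only after it has been requested twice
    proxy_pass http://backend;
}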

4. Compression

http {
    # enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml text/javascript
               application/json application/javascript application/xml+rss
               application/rss+xml font/truetype font/opentype
               application/vnd.ms-fontobject image/svg+xml;
}
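
One caveat if another proxy or CDN sits in front of this Nginx: by default, responses to requests that arrive with a Via header are not compressed. If compression is still wanted in that case, gzip_proxied has to be relaxed; `any` is the bluntest choice:

# also compress responses for requests that came through another proxy
gzip_proxied any;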

5. keepalive

http {
    # keepalive towards clients
    keepalive_timeout 65;
    keepalive_requests 100;

    upstream backend {
        server 192.168.1.101:8080;
        server 192.168.1.102:8080;

        # keepalive connections towards the backend
        keepalive 32;
        keepalive_requests 100;
        keepalive_timeout 60s;
    }

    server {
        location / {
            # HTTP/1.1 and an empty Connection header are required for upstream keepalive
            proxy_http_version 1.1;
            proxy_set_header Connection "";
            proxy_pass http://backend;
        }
    }
}

6. Rate limiting

http {
    # request-rate limit zone: 10 requests per second per client IP
    limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

    # concurrent-connection limit zone
    limit_conn_zone $binary_remote_addr zone=conn_limit:10m;

    server {
        # allow bursts of up to 20 requests, processed without delay
        limit_req zone=api_limit burst=20 nodelay;

        # at most 10 concurrent connections per client IP
        limit_conn conn_limit 10;

        location / {
            proxy_pass http://backend;
        }
    }
}
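
By default, requests rejected by limit_req or limit_conn get a 503. If clients should see 429 Too Many Requests instead, the status code can be overridden (a small optional tweak):

server {
    limit_req_status 429;
    limit_conn_status 429;
}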

SSL/TLS Configuration

server {
    listen 443 ssl http2;
    server_name example.com;

    # certificate and private key
    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;

    # TLS protocol versions
    ssl_protocols TLSv1.2 TLSv1.3;

    # cipher suites
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5:!RC4:!DHE;
    ssl_prefer_server_ciphers on;

    # TLS session cache
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    # OCSP stapling
    ssl_stapling on;
    ssl_stapling_verify on;

    location / {
        proxy_pass http://backend;
    }
}

# redirect HTTP to HTTPS
server {
    listen 80;
    server_name example.com;
    return 301 https://$server_name$request_uri;
}
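
For OCSP stapling to actually work, Nginx also needs the CA/intermediate chain for verification and a DNS resolver to reach the OCSP responder; otherwise the stapling directives may have no effect. A minimal sketch for the HTTPS server block (the path and resolver address are placeholders):

# inside the listen 443 server block
ssl_trusted_certificate /path/to/chain.pem;   # CA + intermediate certificates
resolver 8.8.8.8 valid=300s;                  # any resolver reachable from the load balancer
resolver_timeout 5s;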

Monitoring and Logging

Access log

http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for" '
                    'rt=$request_time uct="$upstream_connect_time" '
                    'uht="$upstream_header_time" urt="$upstream_response_time"';

    access_log /var/log/nginx/access.log main;
}

Error log

error_log /var/log/nginx/error.log warn;

Status monitoring

location /nginx_status {
    stub_status on;
    access_log off;
    allow 192.168.1.0/24;
    deny all;
}

Visiting /nginx_status shows:

  • Active connections
  • Accepted connections
  • Handled connections
  • Total requests
  • Connections currently reading, writing, and waiting
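
The output is plain text and looks roughly like this (the numbers are obviously just an example):

Active connections: 291
server accepts handled requests
 16630948 16630948 31070465
Reading: 6 Writing: 179 Waiting: 106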

High Availability

Master/backup mode

Use keepalived to make Nginx itself highly available:

# install keepalived
sudo apt install keepalived

Master node configuration:

vrrp_script check_nginx {
    script "/etc/keepalived/check_nginx.sh"
    interval 2
    weight -20
}

vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 51
    priority 100
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass 1234
    }

    virtual_ipaddress {
        192.168.1.100
    }

    track_script {
        check_nginx
    }
}
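
The backup node uses essentially the same configuration; only the VRRP state and priority differ (the interface name, VIP and priority below simply mirror the master example):

vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 51    # must match the master
    priority 90             # lower than the master's 100
    advert_int 1

    authentication {
        auth_type PASS
        auth_pass 1234
    }

    virtual_ipaddress {
        192.168.1.100
    }

    track_script {
        check_nginx
    }
}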

Check script:

#!/bin/bash
# If nginx is not running, try to start it once; if it still is not running
# afterwards, exit non-zero so keepalived lowers this node's priority and the
# virtual IP fails over to the backup node.
if [ -z "`ps -C nginx --no-header | grep nginx`" ]; then
    systemctl start nginx
    sleep 3
    if [ -z "`ps -C nginx --no-header | grep nginx`" ]; then
        exit 1
    fi
fi
exit 0
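
The script has to be executable, and keepalived needs a restart to pick up the configuration:

sudo chmod +x /etc/keepalived/check_nginx.sh
sudo systemctl restart keepalived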

Common Commands

# test the configuration
sudo nginx -t

# reload the configuration without dropping connections
sudo nginx -s reload

# reopen log files (useful for log rotation)
sudo nginx -s reopen

# fast shutdown
sudo nginx -s stop

# graceful shutdown (finish serving in-flight requests)
sudo nginx -s quit

Summary

Setting up Nginx load balancing is not complicated, but tuning it well does take experience.

Key points:

  1. Pick a load-balancing strategy that fits the workload
  2. Configure health checks so failed nodes are taken out of rotation promptly
  3. Tune worker processes, connection limits, keepalive and related parameters
  4. Enable caching and compression to improve throughput
  5. Use rate limiting to protect the backend services
  6. Set up monitoring and logging

That's all for now.