# This configuration enables HTTP/2 and request rate limiting in Nginx
# Set global rate limiting log level
limit_req_log_level warn;
# Create a shared memory zone for rate limiting
limit_req_zone $binary_remote_addr zone=global:10m rate=10r/s;
# Server block for the main domain
server {
    listen 443 ssl http2 reuseport backlog=4096;
    listen [::]:443 ssl http2 reuseport backlog=4096;
    server_name example.com;

    ssl_certificate /path/to/subdomain/certificate.crt;
    ssl_certificate_key /path/to/subdomain/private-key.key;

    # Enable rate limiting for the entire server
    limit_req zone=global;

    # Location block for the login page
    location /login/ {
        # Apply stricter rate limiting for the login page
        limit_req zone=global burst=5 nodelay;
        # ...
    }

    # Location block for static content
    location /static/ {
        # Allow higher rate limits for static content
        limit_req zone=global burst=10 nodelay;
        # ...
    }

    # Default location block
    location / {
        # ...
    }
}
# Server block for a subdomain
# Note: the 'reuseport' and 'backlog' listen parameters may be specified only once
# per address:port pair. If Nginx reports a duplicate listen error, keep them in
# a single server block and remove them from the others.
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name subdomain.example.com;

    # Enable rate limiting for the entire subdomain server
    limit_req zone=global;

    # Location block for the subdomain's login page
    location /login/ {
        # Apply stricter rate limiting for the subdomain's login page
        limit_req zone=global burst=5 nodelay;
        # ...
    }

    # Location block for the subdomain's static content
    location /static/ {
        # Allow higher rate limits for subdomain's static content
        limit_req zone=global burst=10 nodelay;
        # ...
    }

    # Default location block
    location / {
        # ...
    }
}
Last active: December 24, 2025 12:18
NGINX RATELIMIT HTTP2 Example
An excellent gist despite being two years old. A small suggestion to make it clearer: add the following after the limit_req_log_level line so that rejected requests return the correct HTTP code, 429 Too Many Requests.
limit_req_status 429;
Without this, you keep getting a 503 Service Unavailable, which can be confused with the backend being stalled or not responding.
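In context, the suggested directive sits alongside the other global rate-limiting settings in the http block; a minimal sketch reusing the gist's zone definition:

```nginx
# http context
limit_req_log_level warn;
# Return 429 Too Many Requests instead of the default 503
limit_req_status 429;
limit_req_zone $binary_remote_addr zone=global:10m rate=10r/s;
```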
Nginx Rate Limiting: Managing Traffic Flow for Optimal Performance (By Google Bard)
Nginx, a versatile and high-performance web server, implements rate limiting to control the flow of requests from clients, ensuring optimal server performance and preventing abuse or malicious attacks. This mechanism is particularly crucial for handling high-traffic applications and protecting against denial-of-service (DoS) attacks.
The Leaky Bucket Algorithm: The Foundation of Nginx Rate Limiting
Nginx employs the leaky bucket algorithm, a simple yet effective technique for regulating the rate at which requests are processed. This algorithm simulates the behavior of a leaky bucket, where water (representing requests) is poured into the bucket at a certain rate, and water (processed requests) leaks out at a controlled rate.
The leaky bucket algorithm operates in three main stages:
Bucket Creation: A bucket with a defined capacity is established. This capacity determines the maximum number of requests that can be stored in the queue at any given time.
Request Handling: Requests from clients are added to the bucket. If the bucket is full, new requests are discarded, preventing the queue from overflowing and overwhelming the server.
Request Processing: Requests are processed from the bucket at a specified rate, ensuring that the server doesn't get overwhelmed by a sudden surge of requests.
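In Nginx, the bucket maps directly onto the limit_req parameters: rate is the drain rate and burst is the bucket capacity. A sketch for the http context (the zone name leaky and the numbers are illustrative, not taken from the gist):

```nginx
# Drain rate: each client IP may sustain 10 requests per second
limit_req_zone $binary_remote_addr zone=leaky:10m rate=10r/s;

server {
    listen 80;

    location / {
        # Bucket capacity: up to 20 excess requests are queued and
        # smoothed out at 10 r/s; anything beyond that is rejected
        limit_req zone=leaky burst=20;
    }
}
```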
Key Components of Nginx Rate Limiting
Nginx rate limiting configuration involves several key components:
Rate Limit: The rate at which requests are processed, typically defined in requests per second (RPS) or requests per minute (RPM).
Zone: A shared memory area where rate limiting information is stored. This allows for consistent rate limiting across multiple Nginx workers.
Log Level: The severity level of rate limiting logs, ensuring that only important messages are recorded.
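Each component above corresponds to a directive; an illustrative fragment for the http context (the zone name perip and the values are placeholders):

```nginx
# Zone: 10 MB of shared memory named "perip", visible to all worker
# processes, keyed by the client address; rate limit: 5 requests/second
limit_req_zone $binary_remote_addr zone=perip:10m rate=5r/s;

# Log level: rejected requests are logged at "error" by default;
# this lowers the severity of those messages to "notice"
limit_req_log_level notice;
```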
Benefits of Nginx Rate Limiting
Nginx rate limiting offers several advantages:
Prevents Abuse: Rate limiting thwarts malicious attempts to flood the server with requests, protecting against DoS attacks and resource exhaustion.
Manages Resources: Rate limiting ensures that server resources are allocated efficiently, preventing a single client from monopolizing resources and affecting other users' experience.
Enhances Security: Rate limiting can hinder brute-force attacks, password guessing attempts, and other unauthorized activities that aim to compromise security.
Improves Performance: By regulating the flow of requests, rate limiting prevents the server from becoming overloaded, ensuring optimal performance and responsiveness for all users.
Implementing Nginx Rate Limiting
Nginx rate limiting is configured with the limit_req_zone and limit_req directives in the nginx.conf file. These directives define the rate limit, the shared memory zone, and related parameters; the log level is set separately with limit_req_log_level.
Conclusion
Nginx rate limiting is a robust tool for regulating traffic flow, safeguarding against abuse, and optimizing server performance. Implementing rate limiting shields your Nginx servers from malicious attacks, enhances resource usage, and delivers a stable, responsive experience for all users.
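Putting the pieces discussed above together, a minimal self-contained configuration might look like the following (the zone name api, the rate, and the paths are illustrative, not part of the gist):

```nginx
http {
    # One shared zone keyed by client IP: 2 requests per second
    limit_req_zone $binary_remote_addr zone=api:10m rate=2r/s;
    # Report throttled clients with 429 rather than the default 503
    limit_req_status 429;

    server {
        listen 80;

        location /api/ {
            # Absorb short bursts of up to 4 extra requests without delay
            limit_req zone=api burst=4 nodelay;
        }
    }
}
```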
This configuration applies rate limiting on TLS listeners with HTTP/2 enabled via the 'ssl http2' parameters of each 'listen' directive. Note that rate limiting itself is protocol-agnostic: clients that negotiate HTTP/1.1 over the same TLS listener via ALPN are counted against the same zone.
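On newer Nginx releases (1.25.1 and later) the http2 parameter of listen is deprecated in favor of a standalone directive; if you run a recent version, the equivalent server block would begin roughly like this (certificate paths kept as placeholders from the gist):

```nginx
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;  # replaces the deprecated "listen ... http2" parameter
    server_name example.com;

    ssl_certificate /path/to/subdomain/certificate.crt;
    ssl_certificate_key /path/to/subdomain/private-key.key;
}
```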
https://www.nginx.com/blog/rate-limiting-nginx/