Nginx Manager for cPanel & WHM: Complete Guide
Discover how the Cpnginx Nginx Manager simplifies Nginx management for cPanel and WHM. Learn about dashboards, domain t…
Learn how to optimize Nginx for maximum performance. This complete guide covers Nginx tuning, kernel TCP/IP stack optimization, worker configuration, caching, compression, and HTTP/3 setup. Boost your server’s speed using Cpnginx on cPanel.
Nginx is one of the most popular web servers powering millions of websites worldwide. Known for its lightweight architecture and ability to handle high traffic with minimal resources, Nginx is often the preferred choice for businesses that demand speed and reliability.
However, the default Nginx configuration isn’t optimized for heavy workloads or modern web environments. That’s where performance tuning and kernel optimization play a crucial role.
Nginx (pronounced “Engine-X”) is a high-performance, open-source web server and reverse proxy server. It’s widely used for serving static content, handling load balancing, and acting as a reverse proxy for backend servers.
Its asynchronous event-driven architecture allows it to handle thousands of concurrent connections with minimal CPU and memory usage — making it ideal for high-traffic environments.
While Nginx performs efficiently out-of-the-box, real-world traffic demands often require advanced tuning. Optimizing Nginx helps to:
Cpnginx is an advanced Nginx manager for cPanel servers that simplifies configuration using template-based Nginx configuration.
Instead of manually editing multiple configuration files, system administrators can apply optimized templates for web, PHP, caching, and SSL setups.
Cpnginx automates optimization tasks such as:
This makes Nginx tuning accessible even to non-experts using cPanel/WHM.
Let’s look at the most effective ways to improve Nginx speed and stability.
Kernel tuning directly affects how Nginx handles network and disk I/O. Adjusting these parameters helps the server efficiently manage connections, memory, and file descriptors.
Below are the most important sysctl settings to fine-tune your Linux kernel for Nginx:
Increases the maximum number of pending connections in the listen queue. This is crucial for high-traffic sites to prevent connection drops during peak load.
Get the current value using the following command:
sysctl net.core.somaxconn
net.core.somaxconn = 1024
Increase the value of net.core.somaxconn as follows:
sysctl -w net.core.somaxconn=65536
Corresponding Nginx setting: The listen directive's backlog parameter in nginx.conf should be set to a value equal to or slightly lower than somaxconn.
server {
listen 80 backlog=65535;
# ...
}
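After reloading Nginx, you can confirm the backlog it actually received: for LISTEN sockets, `ss -lnt` reports the configured backlog in the Send-Q column. The sketch below parses a captured sample line so it is self-contained; on a live server you would run `ss -lnt 'sport = :80'` instead of the heredoc.

```shell
# Extract the Send-Q (configured backlog) field from a LISTEN socket line.
backlog=$(awk '$1 == "LISTEN" { print $3 }' <<'EOF'
LISTEN 0 65535 0.0.0.0:80 0.0.0.0:*
EOF
)
echo "listen backlog on port 80: $backlog"
```

If this number stays at 511 or lower after your changes, the `backlog=` parameter on the listen directive was not applied.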
Sets the maximum size of the receive queue for each network interface. Increasing this helps prevent packet loss under heavy network load.
Recommended value: net.core.netdev_max_backlog = 16384 (or higher).
Get the current value of net.core.netdev_max_backlog
sysctl net.core.netdev_max_backlog
net.core.netdev_max_backlog = 1000
Increase the value of net.core.netdev_max_backlog to improve nginx performance
sysctl -w net.core.netdev_max_backlog=16384
net.core.netdev_max_backlog = 16384
This enables the kernel to reuse sockets in the TIME_WAIT state for new outbound connections. This is especially useful for high-traffic servers that frequently close and re-establish connections to backends.
net.ipv4.tcp_tw_reuse accepts the following values:
0 - disabled
1 - enabled globally
2 - enabled for loopback traffic only
Recommended value: net.ipv4.tcp_tw_reuse = 1
Find the current value of net.ipv4.tcp_tw_reuse as follows:
sysctl net.ipv4.tcp_tw_reuse
net.ipv4.tcp_tw_reuse = 2
Change the value of net.ipv4.tcp_tw_reuse to increase nginx performance
sysctl -w net.ipv4.tcp_tw_reuse=1
This option limits the number of sockets that can be in the TIME_WAIT state at the same time. If this limit is exceeded, the oldest sockets are immediately destroyed.
Recommended value: net.ipv4.tcp_max_tw_buckets = 6000000 (or a higher value if needed).
See the current value of your server:
sysctl net.ipv4.tcp_max_tw_buckets
net.ipv4.tcp_max_tw_buckets = 65536
Set the value of net.ipv4.tcp_max_tw_buckets for Nginx kernel optimization as follows.
sysctl -w net.ipv4.tcp_max_tw_buckets=6000000
net.ipv4.tcp_max_tw_buckets = 6000000
Increases the maximum number of remembered connection requests (half-open connections that have not yet completed the TCP handshake).
See the current value of net.ipv4.tcp_max_syn_backlog
sysctl net.ipv4.tcp_max_syn_backlog
net.ipv4.tcp_max_syn_backlog = 1024
Increase the value of net.ipv4.tcp_max_syn_backlog for Nginx optimization as follows:
sysctl -w net.ipv4.tcp_max_syn_backlog=65535
net.ipv4.tcp_keepalive_time, net.ipv4.tcp_keepalive_probes, net.ipv4.tcp_keepalive_intvl - These settings help to manage idle connections efficiently.
See the current values of your system.
sysctl net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200
Now set the values as follows:
sysctl -w net.ipv4.tcp_keepalive_time=600 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=20
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
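To see what these values mean in practice: an unresponsive peer is dropped after roughly tcp_keepalive_time plus tcp_keepalive_probes times tcp_keepalive_intvl seconds. A quick check of the arithmetic:

```shell
# Total time before an unresponsive idle connection is torn down.
ktime=600 kintvl=60 kprobes=20
total=$(( ktime + kprobes * kintvl ))
echo "dead idle connection dropped after ~${total}s"   # 600 + 20*60 = 1800
```

So with the recommended settings, dead connections are reclaimed in about 30 minutes instead of the default two hours plus.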
Sets the system-wide maximum number of file descriptors (file handles) that the kernel can allocate. A file descriptor is created for each active connection; therefore, a high-traffic server requires a sufficient number.
Recommended value: fs.file-max = 655350 (or higher). You also need to increase the open file limit for Nginx, which is often done with the worker_rlimit_nofile directive.
Get the current value of fs.file-max
sysctl fs.file-max
fs.file-max = 5000
If you already see a very high value, no change is needed. A low value like the one above should be increased.
Set the kernel parameter fs.file-max to increase Nginx performance:
sysctl -w fs.file-max=655350
Sets the maximum number of file descriptors that a single process can open. file-max is a system-wide limit, while nr_open is per-process.
Recommended value: Set this to a high value, like fs.nr_open = 655350
Get the current value of fs.nr_open:
sysctl fs.nr_open
fs.nr_open = 10485
Increase the value of fs.nr_open to improve nginx optimization and performance.
sysctl -w fs.nr_open=655350
Configures the total memory available to all TCP sockets. The three values are the low, pressure, and high watermarks for memory usage, measured in memory pages (not bytes).
Recommended value: net.ipv4.tcp_mem = 8388608 8388608 8388608 (for a system with plenty of RAM).
See the current values of net.ipv4.tcp_mem on your server:
sysctl net.ipv4.tcp_mem
net.ipv4.tcp_mem = 189306 252408 378612
Increase the value of net.ipv4.tcp_mem to do Nginx optimization and performance tuning.
sysctl -w net.ipv4.tcp_mem="8388608 8388608 8388608"
Define the memory buffer sizes for receiving (rmem) and sending (wmem) TCP data. Larger buffer sizes can improve throughput on high-speed or high-latency networks.
Recommended values:
See the current values
sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
net.ipv4.tcp_rmem = 4096 131072 6291456
net.ipv4.tcp_wmem = 4096 16384 4194304
Modify these values to optimize Nginx:
sysctl -w net.ipv4.tcp_rmem="4096 873800 16777216" -w net.ipv4.tcp_wmem="4096 655360 16777216"
net.ipv4.tcp_rmem = 4096 873800 16777216
net.ipv4.tcp_wmem = 4096 655360 16777216
To make these settings persistent, create a configuration file in /etc/sysctl.d/, for example /etc/sysctl.d/99-cpnginx-tcp.conf, and add the following:
net.core.somaxconn = 65536
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 6000000
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
fs.file-max = 655350
fs.nr_open = 655350
net.ipv4.tcp_mem = 8388608 8388608 8388608
net.ipv4.tcp_rmem = 4096 873800 16777216
net.ipv4.tcp_wmem = 4096 655360 16777216
After making these changes, apply the values as follows:
sysctl --system
The settings take effect immediately and will also be applied automatically on every reboot.
Nginx performance can degrade if the system cannot open enough files for its connections. Find the current open-file limits of the Nginx master process using the following command.
cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files
Find the maximum open file limit of the Nginx worker process by using the following command.
ps --ppid $(cat /var/run/nginx.pid) -o pid= | xargs -I{} grep 'open files' /proc/{}/limits
If you see a very low limit, increase it in the systemd Nginx unit file as follows.
Edit the systemd unit file of Nginx, /lib/systemd/system/eenos-nginx.service, and add LimitNOFILE=500000 to the [Service] section:
[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat /run/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat /run/nginx.pid)"
PrivateTmp=true
LimitNOFILE=500000
Reload and restart Nginx:
systemctl daemon-reload
systemctl restart nginx
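After the restart, confirm that the new limit is visible to the running process. On a live server you would run `grep 'Max open files' /proc/$(cat /run/nginx.pid)/limits`; the sketch below parses a captured sample of that output so it is self-contained.

```shell
# Pull the soft limit out of a /proc/<pid>/limits-style listing.
soft=$(awk '/Max open files/ { print $4 }' <<'EOF'
Limit                     Soft Limit           Hard Limit           Units
Max open files            500000               500000               files
EOF
)
echo "soft open-file limit: $soft"
```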
Now that the kernel's TCP parameters are tuned, we can raise the worker_processes and worker_connections values.
Set these values in nginx.conf as follows:
worker_processes auto;
worker_connections 500000;
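Note that these two directives live in different contexts: worker_processes belongs in the main context, while worker_connections goes inside the events block. A minimal sketch (worker_rlimit_nofile is added here as an assumption, so each worker can actually open that many descriptors; match it to the limits configured earlier):

```nginx
# Main context
worker_processes auto;           # one worker per CPU core
worker_rlimit_nofile 500000;     # per-worker open-file limit

events {
    worker_connections 500000;   # max simultaneous connections per worker
}
```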
Purpose:
Defines the maximum size of the client request body (for example, POST data) that Nginx will store in memory before writing it to a temporary file on disk.
Explanation:
When a client uploads data (like form submissions or file uploads), Nginx first reads it into a memory buffer. If the data size exceeds this buffer, it is written to a temporary file on disk.
Keeping small uploads in memory reduces disk I/O and improves speed. However, too small a buffer will increase disk writes.
Recommended value:
Tip:
Increasing this value prevents Nginx from writing small POST requests to disk, thus enhancing performance.
Purpose:
Specifies the buffer size for reading the HTTP request header from the client.
Explanation:
Every HTTP request includes headers (like User-Agent, Cookies, etc.). Nginx allocates a small buffer to read them.
If the headers are larger than the buffer, Nginx uses additional buffers from the large_client_header_buffers directive.
Recommended value:
Tip:
Increasing this prevents “400 Bad Request” errors caused by large headers.
Purpose:
Sets the maximum size of the entire client request body — for example, the total size of a file upload.
Explanation:
If the client sends a request larger than this limit, Nginx returns a 413 (Request Entity Too Large) error.
This prevents users from uploading excessively large files that could affect server stability or disk space.
Recommended value:
Tip:
Make sure this value matches your backend (PHP, Node.js, etc.) upload limits for consistency.
Purpose:
Defines the number and size of buffers used for very large request headers, such as when clients send many cookies or long URLs.
Explanation:
If a request’s headers exceed client_header_buffer_size, Nginx uses these buffers instead.
Each buffer can store one header, and the total number of buffers determines how many large headers can be processed simultaneously.
Recommended value:
Tip:
Improper configuration here can lead to errors like 400 Bad Request - Request Header or Cookie Too Large.
Set these values in nginx.conf as follows:
http {
client_max_body_size 512m;
client_body_buffer_size 256k;
client_header_buffer_size 16k;
large_client_header_buffers 16 128k;
}
The multi_accept directive in Nginx controls how a worker process handles new incoming connections from clients.
By default, each Nginx worker process accepts only one new connection at a time before returning to handle existing requests. When multi_accept is enabled, the worker process will accept all available pending connections from the queue in a single event loop cycle.
This behavior affects how Nginx reacts under heavy load and how efficiently it handles bursts of incoming traffic.
Nginx uses an event-driven architecture where worker processes wait for events (like new connections or data availability).
This means fewer idle connections waiting in the queue and faster acceptance of new clients.
Recommended for:
Not recommended for:
When enabled, a worker may spend more CPU cycles accepting connections instead of processing them, which can slightly increase CPU load on lightly loaded systems.
For busy servers with multiple CPU cores and optimized kernel parameters (like net.core.somaxconn=65536), enabling multi_accept on can significantly improve Nginx’s ability to handle spikes in concurrent connections — especially for high-performance setups like Cpnginx on cPanel.
Allow worker processes to accept multiple new connections at once:
To enable this, add the following in the Nginx configuration:
events {
worker_connections 500000;
multi_accept on;
use epoll;
}
Timeout directives in Nginx define how long the server should wait for different stages of client interaction before closing the connection.
Properly tuning these values ensures responsiveness, prevents resource waste, and protects the server from slow clients or DoS attacks.
Purpose:
Defines how long Nginx waits to receive the client request headers (like GET /index.html, User-Agent, etc.).
Explanation:
When a client connects but takes too long to send the request headers, Nginx will close the connection after this timeout.
This prevents “slow header” attacks, where a client keeps a connection open indefinitely by sending headers very slowly.
Syntax:
client_header_timeout 10s;
Default:
60s
Recommended value:
Purpose:
Defines how long Nginx waits for the client to send the request body, such as POST data or file uploads.
Explanation:
If the client starts sending data but pauses for longer than this timeout, the connection is closed.
This helps prevent “slow POST” attacks (a variant of DoS) where the attacker sends data extremely slowly to exhaust server resources.
Default:
60s
Recommended value:
http {
client_body_timeout 20s;
}
Purpose:
Controls how long an idle keep-alive connection is kept open between the client and server.
Explanation:
Keep-alive connections allow multiple HTTP requests over a single TCP connection (reducing connection overhead).
After sending a response, Nginx keeps the connection open for this duration — if the client makes another request, it reuses the same connection.
If no new request is received within this time, the connection is closed.
Default:
75s
You can also specify two values:
keepalive_timeout 15s 15s;
Recommended value:
Purpose:
Specifies the maximum time Nginx will wait for the client to acknowledge or receive data when sending a response.
Explanation:
If a client stops reading the response (for example, due to a slow network or intentional delay), Nginx will close the connection after this timeout.
Default:
60s
Important:
This is not the total time to send the response, but the timeout between two successive write operations.
If no data is sent to the client within this time, the connection closes.
Recommended value:
http {
send_timeout 15s;
}
Example Optimized Configuration
http {
client_header_timeout 15s;
client_body_timeout 20s;
keepalive_timeout 30s 30s;
send_timeout 15s;
}
Gzip is a data compression method that reduces the size of files sent from the server to the client (browser).
When enabled in Nginx, it compresses text-based files such as HTML, CSS, JavaScript, XML, JSON, and others before sending them over the network.
The browser then decompresses the response automatically and displays the page normally, but the transfer happens much faster.
So instead of transferring large text files, only the compressed version is transmitted.
Compressed files are smaller, so they download faster, reducing page load time significantly.
Since compressed responses use less data, both the server and clients consume less bandwidth.
This helps reduce hosting and network costs, especially for high-traffic websites.
Faster page loads directly improve:
Google and other search engines consider page speed a ranking factor.
Gzip compression helps your website achieve higher Core Web Vitals scores, improving SEO visibility.
To enable Gzip, add the following to nginx.conf:
http {
    gzip on;
    gzip_min_length 1100;   # skip responses smaller than ~1 KB; tiny files gain little
    gzip_buffers 4 32k;     # number and size of in-memory compression buffers
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript application/rss+xml application/atom+xml image/svg+xml;
}
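To get a feel for the savings, you can compress a sample of repetitive text with the gzip command-line tool; real HTML, CSS, and JavaScript typically shrinks by 60-80%:

```shell
# Compress a repetitive text sample and compare sizes.
sample=$(printf 'Hello, Nginx! %.0s' $(seq 1 200))   # ~2.8 KB of repetitive text
raw=$(printf '%s' "$sample" | wc -c)
gz=$(printf '%s' "$sample" | gzip -c | wc -c)
echo "raw=${raw} bytes, gzipped=${gz} bytes"
```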
Nginx logs every request it processes in an access log.
By default, each request is written directly to disk immediately.
Access log buffering changes this behavior:
This reduces disk I/O and improves server performance, especially on high-traffic websites.
Why Enable Access Log Buffering?
Example Configuration
access_log /var/log/domlogs/example.com main buffer=32k flush=5m;
Explanation:
This setup ensures minimal disk writes and stable performance on high-traffic domains.
Access log buffering is a simple but powerful Nginx optimization technique.
It improves server responsiveness, reduces disk load, and is essential for high-traffic websites. Combined with other optimizations like Gzip, kernel tuning, and caching, it helps Nginx achieve maximum performance.
Browser caching is a mechanism where the browser stores static resources (like images, CSS, JavaScript) locally after the first visit.
When a user revisits the website, the browser loads these resources from the local cache instead of requesting them from the server, reducing page load time and server load.
Nginx can instruct browsers to cache static files using HTTP headers:
Example configuration:
location ~* \.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|iso|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|ogv|ogg|flv|swf|mpeg|mpg|mpeg4|mp4|avi|wmv|js|css|3gp|sis|sisx|nth)$ {
expires 30d;
add_header Pragma public;
add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}
Browser caching is a simple but highly effective Nginx optimization:
open_file_cache is an Nginx feature that caches information about frequently accessed files.
It stores file descriptors, metadata, and directory listings in memory so Nginx doesn’t need to check the file system repeatedly for every request.
This reduces disk I/O and file system lookups, which improves performance on high-traffic websites.
Example Configuration
open_file_cache max=2000 inactive=20s;   # cache up to 2000 entries; drop entries idle for 20s
open_file_cache_valid 60s;               # re-check cached entries every 60 seconds
open_file_cache_min_uses 5;              # only cache files requested at least 5 times
open_file_cache_errors off;              # do not cache file-lookup errors
Explanation:
open_file_cache is a highly effective Nginx optimization for serving static content.
Thread pooling is a technique where Nginx offloads potentially blocking or slow operations (like disk I/O) to a pool of worker threads, instead of handling them directly in the main worker process.
In Nginx, this is particularly useful for file operations, such as reading large static files, which can block a worker process and reduce the server’s ability to handle many concurrent connections.
This ensures high concurrency and better performance under heavy load.
Example Configuration
# Define thread pool
thread_pool default threads=32 max_queue=65536;
http {
# Enable asynchronous file operations using the thread pool
aio threads=default;
server {
listen 80;
server_name example.com;
location / {
root /var/www/html;
# Use thread pool for file operations
sendfile on;
}
}
}
Explanation:
Recommended for:
Not necessary for:
HTTP/2 is the second major version of the HTTP protocol, designed to improve the performance of web communications.
Key features:
Benefits: Faster page loads, reduced latency, better utilization of single TCP connections.
HTTP/3 is the latest version of HTTP, using QUIC instead of TCP as the transport protocol.
QUIC is a transport layer protocol built on UDP, designed by Google to address limitations of TCP and improve modern web performance.
Key Features:
By enabling both, your server can serve:
Example Nginx vhost configuration:
server {
listen 443 quic reuseport;
listen [::]:443 quic reuseport;
listen 443 ssl;
listen [::]:443 ssl;
# Enable HTTP/2 (the listen-level "http2" parameter is deprecated since Nginx 1.25.1)
http2 on;
# HTTP3/QUIC Support
http3 on;
http3_hq on; # optional: HTTP/0.9 over QUIC, mainly used for interoperability testing
# Add the Alt-Svc header to advertise HTTP/3 support to clients
add_header Alt-Svc 'h3=":$server_port"; ma=86400';
server_name example.com;
# Add your SSL/TLS certificate and key paths
ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
# QUIC requires TLS 1.3
ssl_protocols TLSv1.3 TLSv1.2;
ssl_early_data on; # Enable 0-RTT support for faster handshakes
# ... other server configurations ...
}
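HTTP/3 also requires an Nginx binary built with the QUIC module (and UDP port 443 open in your firewall). You can check the build with `nginx -V 2>&1 | grep -o with-http_v3_module`; the sketch below runs the same check against a captured sample of that output so it is self-contained.

```shell
# Check a captured `nginx -V` configure line for HTTP/3 support.
v_out='configure arguments: --with-http_ssl_module --with-http_v2_module --with-http_v3_module'
case "$v_out" in
  *with-http_v3_module*) http3=yes ;;
  *)                     http3=no  ;;
esac
echo "http3 support: $http3"
```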
Proxy caching in Nginx is used when Nginx acts as a reverse proxy for backend servers (like Apache, PHP-FPM, or any application server).
Benefits:
Example Proxy Cache Configuration
http {
# Define a cache zone
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m max_size=1g;
server {
location / {
proxy_pass http://backend_server;
proxy_cache my_cache;
proxy_cache_valid 200 302 10m; # Cache successful responses for 10 minutes
proxy_cache_valid 404 1m; # Cache 404 responses for 1 minute
proxy_cache_use_stale error timeout updating; # Serve stale cache on backend errors
add_header X-Proxy-Cache $upstream_cache_status;
}
}
}
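The X-Proxy-Cache header added in the configuration above reports MISS on the first request and HIT once the response has been cached, which makes it easy to verify the cache is working. On a live server you would run `curl -sI http://example.com/ | grep -i x-proxy-cache`; the sketch below parses a captured sample response so it is self-contained.

```shell
# Extract the cache status from an HTTP response header block.
status=$(awk -F': ' 'tolower($1) == "x-proxy-cache" { gsub(/\r/, ""); print $2 }' <<'EOF'
HTTP/1.1 200 OK
Content-Type: text/html
X-Proxy-Cache: HIT
EOF
)
echo "cache status: $status"
```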
FCGI caching is used when Nginx is serving dynamic content via FastCGI, typically with PHP-FPM.
Benefits:
Example FCGI Cache Configuration
http {
# Define FCGI cache
fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=php_cache:100m inactive=60m max_size=1g;
server {
location ~ \.php$ {
include fastcgi_params;
fastcgi_pass unix:/run/php/php8.1-fpm.sock;
fastcgi_index index.php;
# Enable FCGI caching
fastcgi_cache php_cache;
fastcgi_cache_valid 200 302 10m;
fastcgi_cache_valid 404 1m;
fastcgi_cache_use_stale error timeout invalid_header updating;
add_header X-FastCGI-Cache $upstream_cache_status;
}
}
}
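A common refinement, sketched here with a hypothetical cookie name, is to bypass the cache for logged-in users and POST requests inside the server block, so sessions always see fresh content. Adjust the cookie pattern (wordpress_logged_in is just an example) to your application:

```nginx
# Hypothetical sketch: skip the FastCGI cache for sessions and POSTs.
set $skip_cache 0;
if ($request_method = POST)                { set $skip_cache 1; }
if ($http_cookie ~* "wordpress_logged_in") { set $skip_cache 1; }

location ~ \.php$ {
    fastcgi_cache_bypass $skip_cache;   # do not serve these from cache
    fastcgi_no_cache     $skip_cache;   # do not store these in cache
    # ... existing fastcgi_* directives from the example above ...
}
```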
Optimizing Nginx is not a one-time task—it’s an ongoing process. With the right kernel tuning, buffer settings, and caching, you can significantly enhance server responsiveness.
Using Cpnginx, these optimizations become effortless through template-based configurations—delivering maximum performance with minimal manual tuning.
You can optimize Nginx by tuning kernel parameters, enabling caching, adjusting worker limits, and enabling compression.
Use Cpnginx to apply template-based optimizations automatically.
Parameters like net.core.somaxconn, tcp_tw_reuse, and fs.file-max improve TCP handling and connection throughput.
It reduces response size, saving bandwidth and speeding up content delivery.
Yes, HTTP/3 improves connection latency and performance on modern browsers.