• Oct. 8, 2025, 7:45 a.m.

Learn how to optimize Nginx for maximum performance. This complete guide covers Nginx tuning, kernel TCP/IP stack optimization, worker configuration, caching, compression, and HTTP/3 setup. Boost your server’s speed using Cpnginx on cPanel.

Nginx Optimization and Performance Tuning

Nginx is one of the most popular web servers powering millions of websites worldwide. Known for its lightweight architecture and ability to handle high traffic with minimal resources, Nginx is often the preferred choice for businesses that demand speed and reliability.

However, the default Nginx configuration isn’t optimized for heavy workloads or modern web environments. That’s where performance tuning and kernel optimization play a crucial role.

What is Nginx?

Nginx (pronounced “Engine-X”) is a high-performance, open-source web server and reverse proxy server. It’s widely used for serving static content, handling load balancing, and acting as a reverse proxy for backend servers.

Its asynchronous event-driven architecture allows it to handle thousands of concurrent connections with minimal CPU and memory usage — making it ideal for high-traffic environments.

Why Optimize Nginx?

While Nginx performs efficiently out-of-the-box, real-world traffic demands often require advanced tuning. Optimizing Nginx helps to:

  • Handle more concurrent connections
  • Improve content delivery speed
  • Reduce latency and resource usage
  • Prevent dropped connections during peak load
  • Achieve better performance under heavy web or API traffic

How Cpnginx Helps Optimize Nginx on cPanel

Cpnginx is an advanced Nginx manager for cPanel servers that simplifies setup through reusable, template-based configuration.

Instead of manually editing multiple configuration files, system administrators can apply optimized templates for web, PHP, caching, and SSL setups.

Cpnginx automates optimization tasks such as:

  • Managing worker processes and connections
  • Enabling compression and caching
  • Integrating HTTP/3 support
  • Applying kernel-level optimizations

This makes Nginx tuning accessible even to non-experts using cPanel/WHM.

Best Nginx Optimization Tricks

Let’s look at the most effective ways to improve Nginx speed and stability.

Kernel Optimization for Nginx

Kernel tuning directly affects how Nginx handles network and disk I/O. Adjusting these parameters helps the server efficiently manage connections, memory, and file descriptors.

Kernel TCP/IP Stack Optimization for Nginx

Below are the most important sysctl settings to fine-tune your Linux kernel for Nginx:

1. net.core.somaxconn

Increases the maximum number of pending connections in the listen queue. This is crucial for high-traffic sites to prevent connection drops during peak load.
Get the current value using the following command:

sysctl net.core.somaxconn
net.core.somaxconn = 1024

Increase the value of net.core.somaxconn as follows:

sysctl -w net.core.somaxconn=65536

Corresponding Nginx setting: The listen directive's backlog parameter in nginx.conf should be set to a value equal to or slightly lower than somaxconn.

server {
  listen 80 backlog=65535;
  # ...
}

2. net.core.netdev_max_backlog

Sets the maximum size of the receive queue for each network interface. Increasing this helps prevent packet loss under heavy network load.

Recommended value: net.core.netdev_max_backlog = 16384 (or higher).

Get the current value of net.core.netdev_max_backlog

sysctl net.core.netdev_max_backlog
net.core.netdev_max_backlog = 1000

Increase the value of net.core.netdev_max_backlog to improve nginx performance

sysctl -w net.core.netdev_max_backlog=16384
net.core.netdev_max_backlog = 16384

3. net.ipv4.tcp_tw_reuse

This enables the kernel to reuse sockets in the TIME_WAIT state for new outbound connections. This is especially useful for high-traffic servers that frequently close and re-establish connections to backends.

Possible values of net.ipv4.tcp_tw_reuse:

  • 0 - disabled
  • 1 - enabled globally
  • 2 - enabled for loopback traffic only

Recommended value: net.ipv4.tcp_tw_reuse = 1

Find the current value of net.ipv4.tcp_tw_reuse as follows,

sysctl net.ipv4.tcp_tw_reuse
net.ipv4.tcp_tw_reuse = 2

Change the value of net.ipv4.tcp_tw_reuse to increase nginx performance

sysctl -w net.ipv4.tcp_tw_reuse=1

4. net.ipv4.tcp_max_tw_buckets

This option limits the number of sockets that can be in the TIME_WAIT state at the same time. If this limit is exceeded, the oldest sockets are immediately destroyed.

Recommended value: net.ipv4.tcp_max_tw_buckets = 6000000 (or a higher value if needed).

See the current value of your server:

sysctl net.ipv4.tcp_max_tw_buckets
net.ipv4.tcp_max_tw_buckets = 65536

Set the value of net.ipv4.tcp_max_tw_buckets for Nginx kernel optimization as follows.

sysctl -w net.ipv4.tcp_max_tw_buckets=6000000
net.ipv4.tcp_max_tw_buckets = 6000000

5. net.ipv4.tcp_max_syn_backlog

Increases the maximum number of remembered connection requests that haven't received an acknowledgment.

See the current value of net.ipv4.tcp_max_syn_backlog

sysctl net.ipv4.tcp_max_syn_backlog
net.ipv4.tcp_max_syn_backlog = 1024

Increase the value of net.ipv4.tcp_max_syn_backlog for nginx optimization as follows,

sysctl -w net.ipv4.tcp_max_syn_backlog=65535

6. Kernel Keepalive Settings

net.ipv4.tcp_keepalive_time, net.ipv4.tcp_keepalive_probes, net.ipv4.tcp_keepalive_intvl - These settings help to manage idle connections efficiently.

See the current values of your system.

sysctl net.ipv4.tcp_keepalive_intvl net.ipv4.tcp_keepalive_probes net.ipv4.tcp_keepalive_time
net.ipv4.tcp_keepalive_intvl = 75
net.ipv4.tcp_keepalive_probes = 9
net.ipv4.tcp_keepalive_time = 7200

Now set the values as follows,

sysctl -w net.ipv4.tcp_keepalive_time=600 net.ipv4.tcp_keepalive_intvl=60 net.ipv4.tcp_keepalive_probes=20
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20

  • net.ipv4.tcp_keepalive_time = 600: Reduces the initial idle time from two hours to 10 minutes.
  • net.ipv4.tcp_keepalive_intvl = 60: Reduces the interval between probes from 75 seconds to 60 seconds.
  • net.ipv4.tcp_keepalive_probes = 20: Increases the number of probes to accommodate the more frequent probing.

With these values, a dead idle connection is detected after roughly 600 + 20 × 60 = 1800 seconds (30 minutes).

7. fs.file-max

Sets the system-wide maximum number of file descriptors (file handles) that the kernel can allocate. A file descriptor is created for each active connection; therefore, a high-traffic server requires a sufficient number.

Recommended value: fs.file-max = 655350 (or higher). You also need to increase the open file limit for Nginx, which is often done with the worker_rlimit_nofile directive.

Get the current value of fs.file-max

sysctl fs.file-max
fs.file-max = 5000

If you see a very high value, that is good.

Set the kernel parameter value of fs.file-max to increase nginx performance

sysctl -w fs.file-max=655350

8. fs.nr_open

Sets the maximum number of file descriptors that a single process can open. file-max is a system-wide limit, while nr_open is per-process.

Recommended value: Set this to a high value, like fs.nr_open = 655350

Get the current value of fs.nr_open:

sysctl fs.nr_open
fs.nr_open = 10485

Increase the value of fs.nr_open to improve nginx optimization and performance.

sysctl -w fs.nr_open=655350

9. net.ipv4.tcp_mem

This configures the total memory available to all TCP sockets. The three values represent the low, pressure, and high watermarks for memory usage, measured in memory pages (typically 4 KB each).

  • Low: Below this, no memory pressure is applied.
  • Pressure: The system starts applying memory pressure to sockets.
  • High: No new sockets can be created.

Recommended value: net.ipv4.tcp_mem = 8388608 8388608 8388608 (for a system with plenty of RAM; 8388608 pages of 4 KB is about 32 GB).

See the current values of net.ipv4.tcp_mem on your server:

sysctl net.ipv4.tcp_mem
net.ipv4.tcp_mem = 189306	252408	378612

Increase the value of net.ipv4.tcp_mem to do Nginx optimization and performance tuning.

sysctl -w net.ipv4.tcp_mem="8388608 8388608 8388608"

10. net.ipv4.tcp_rmem, net.ipv4.tcp_wmem

Define the memory buffer sizes for receiving (rmem) and writing (wmem) TCP data. Larger buffer sizes can improve throughput on high-speed or high-latency networks.

Recommended values:

  • net.ipv4.tcp_rmem = 4096 873800 16777216 and
  • net.ipv4.tcp_wmem = 4096 655360 16777216

See the current values

sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem
net.ipv4.tcp_rmem = 4096	131072	6291456
net.ipv4.tcp_wmem = 4096	16384	4194304

Modify these values to add Nginx optimization

sysctl -w net.ipv4.tcp_rmem="4096 873800 16777216" -w net.ipv4.tcp_wmem="4096 655360 16777216"
net.ipv4.tcp_rmem = 4096 873800 16777216
net.ipv4.tcp_wmem = 4096 655360 16777216

How to make these changes permanent in the kernel?

Create or edit a configuration file in /etc/sysctl.d/, for example /etc/sysctl.d/99-cpnginx-tcp.conf, and add the following settings.

net.core.somaxconn = 65536
net.core.netdev_max_backlog = 16384
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_max_tw_buckets = 6000000
net.ipv4.tcp_max_syn_backlog = 65535
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 60
net.ipv4.tcp_keepalive_probes = 20
fs.file-max = 655350
fs.nr_open = 655350
net.ipv4.tcp_mem = 8388608 8388608 8388608
net.ipv4.tcp_rmem = 4096 873800 16777216
net.ipv4.tcp_wmem = 4096 655360 16777216

After making these changes, apply the values as follows:

sysctl --system

These changes will automatically be applied during the next reboot.

Increase The Maximum Number Of Open Files (nofile limit) on Linux

Nginx performance can degrade if the system cannot open enough file descriptors for its connections. Find the current open-file limits of the Nginx master process using the following command.

cat /proc/$(cat /var/run/nginx.pid)/limits | grep open.files

Find the maximum open file limit of the Nginx worker process by using the following command.

ps --ppid $(cat /var/run/nginx.pid) -o %p | sed '1d' | xargs -I{} cat /proc/{}/limits | grep open.files

If you see a very low limit, increase it in the systemd nginx unit file as follows.

Edit the systemd unit file of nginx (for example /lib/systemd/system/nginx.service) and add LimitNOFILE=500000 to the [Service] section:

[Service]
Type=forking
PIDFile=/run/nginx.pid
ExecStart=/usr/sbin/nginx -c /etc/nginx/nginx.conf
ExecReload=/bin/sh -c "/bin/kill -s HUP $(/bin/cat /run/nginx.pid)"
ExecStop=/bin/sh -c "/bin/kill -s TERM $(/bin/cat /run/nginx.pid)"
PrivateTmp=true
LimitNOFILE=500000

Reload and restart Nginx:

systemctl daemon-reload
systemctl restart nginx

Worker Processes and Worker Connections

Now that we have applied the kernel changes for TCP connections, we can increase the worker_processes and worker_connections parameters.

  • worker_processes: Number of worker processes. Usually set to the number of CPU cores.
  • worker_connections: Maximum simultaneous connections per worker.

We may set these values in nginx.conf as follows (worker_connections belongs inside the events block):

worker_processes auto;

events {
    worker_connections 500000;
}

Tune Nginx Buffers

1. client_body_buffer_size

Purpose:
Defines the maximum size of the client request body (for example, POST data) that Nginx will store in memory before writing it to a temporary file on disk.

Explanation:
When a client uploads data (like form submissions or file uploads), Nginx first reads it into a memory buffer. If the data size exceeds this buffer, it is written to a temporary file on disk.
Keeping small uploads in memory reduces disk I/O and improves speed. However, too small a buffer will increase disk writes.

Recommended value:

  • 128k–512k for general websites
  • Higher (1M–2M) for applications handling large POST requests

Tip:
Increasing this value prevents Nginx from writing small POST requests to disk, thus enhancing performance.

2. client_header_buffer_size

Purpose:
Specifies the buffer size for reading the HTTP request header from the client.

Explanation:
Every HTTP request includes headers (like User-Agent, Cookies, etc.). Nginx allocates a small buffer to read them.
If the headers are larger than the buffer, Nginx uses additional buffers from the large_client_header_buffers directive.

Recommended value:

  • 1k–4k for simple sites
  • 8k–16k for sites with large cookies or many headers

Tip:
Increasing this prevents “400 Bad Request” errors caused by large headers.

3. client_max_body_size

Purpose:
Sets the maximum size of the entire client request body — for example, the total size of a file upload.

Explanation:
If the client sends a request larger than this limit, Nginx returns a 413 (Request Entity Too Large) error.
This prevents users from uploading excessively large files that could affect server stability or disk space.

Recommended value:

  • 10m–100m for typical websites
  • 500m–1g for servers handling large file uploads

Tip:
Make sure this value matches your backend (PHP, Node.js, etc.) upload limits for consistency.
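For example, if client_max_body_size is 512m in Nginx and the backend is PHP-FPM, the matching php.ini limits would look like this sketch (the values are illustrative assumptions mirroring the Nginx limit):

```ini
; php.ini - assumed values matching client_max_body_size 512m in Nginx
upload_max_filesize = 512M
post_max_size = 512M
```

post_max_size should be at least as large as upload_max_filesize, since it covers the whole request body including form fields.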

4. large_client_header_buffers

Purpose:
Defines the number and size of buffers used for very large request headers, such as when clients send many cookies or long URLs.

Explanation:
If a request’s headers exceed client_header_buffer_size, Nginx uses these buffers instead.
Each buffer can store one header, and the total number of buffers determines how many large headers can be processed simultaneously.

Recommended value:

  • 4 16k (four 16 KB buffers) for small workloads
  • 16 128k (sixteen 128 KB buffers) for applications with heavy cookies or complex requests

Tip:
Improper configuration here can lead to errors like 400 Bad Request - Request Header or Cookie Too Large.

Set these values in nginx.conf as follows,

http {
    client_max_body_size 512m;
    client_body_buffer_size 256k;
    client_header_buffer_size 16k;
    large_client_header_buffers 16 128k;
}

Enable Multi-Accept

What is Multi-Accept?

The multi_accept directive in Nginx controls how a worker process handles new incoming connections from clients.

By default, each Nginx worker process accepts only one new connection at a time before returning to handle existing requests. When multi_accept is enabled, the worker process will accept all available pending connections from the queue in a single event loop cycle.

This behavior affects how Nginx reacts under heavy load and how efficiently it handles bursts of incoming traffic.

How It Works

Nginx uses an event-driven architecture where worker processes wait for events (like new connections or data availability).

  • When multi_accept off (default):
    Each worker accepts only one new connection per event notification.
  • When multi_accept on:
    A worker will accept all pending connections in the kernel’s listen queue immediately after being notified that at least one is ready.

This means fewer idle connections waiting in the queue and faster acceptance of new clients.

When to Enable Multi-Accept

Recommended for:

  • High-traffic or bursty websites
  • Servers handling thousands of concurrent connections
  • Systems with optimized kernel network settings (e.g., high net.core.somaxconn)

Not recommended for:

  • Low-traffic or resource-limited servers
  • Cases where CPU usage spikes cause performance drops

When enabled, a worker may spend more CPU cycles accepting connections instead of processing them, which can slightly increase CPU load on lightly loaded systems.

Pro Tip:

For busy servers with multiple CPU cores and optimized kernel parameters (like net.core.somaxconn=65536), enabling multi_accept on can significantly improve Nginx’s ability to handle spikes in concurrent connections — especially for high-performance setups like Cpnginx on cPanel.

Allow worker processes to accept multiple new connections at once:

To enable this, add the following in the Nginx configuration:

events {
    worker_connections 500000;
    multi_accept on;
    use epoll;
}

Configure Nginx Timeouts

Timeout directives in Nginx define how long the server should wait for different stages of client interaction before closing the connection.
Properly tuning these values ensures responsiveness, prevents resource waste, and protects the server from slow clients or DoS attacks.

1. client_header_timeout

Purpose:
Defines how long Nginx waits to receive the client request headers (like GET /index.html, User-Agent, etc.).

Explanation:
When a client connects but takes too long to send the request headers, Nginx will close the connection after this timeout.
This prevents “slow header” attacks, where a client keeps a connection open indefinitely by sending headers very slowly.

Syntax:

client_header_timeout 10s;

Default:
60s

Recommended value:

  • 10s for general web traffic
  • 30–60s for APIs or mobile apps with slower connections

2. client_body_timeout

Purpose:
Defines how long Nginx waits for the client to send the request body, such as POST data or file uploads.

Explanation:
If the client starts sending data but pauses for longer than this timeout, the connection is closed.
This helps prevent “slow POST” attacks (a variant of DoS) where the attacker sends data extremely slowly to exhaust server resources.

Default:
60s

Recommended value:

  • 15s–30s for typical web apps
  • 60s–120s for upload-heavy systems

http {
    client_body_timeout 20s;
}

3. keepalive_timeout

Purpose:
Controls how long an idle keep-alive connection is kept open between the client and server.

Explanation:
Keep-alive connections allow multiple HTTP requests over a single TCP connection (reducing connection overhead).
After sending a response, Nginx keeps the connection open for this duration — if the client makes another request, it reuses the same connection.
If no new request is received within this time, the connection is closed.

Default:
75s

You can also specify two values:

keepalive_timeout 15s 15s;

  • The first value is the server-side timeout.
  • The second is the value sent to the client in the Keep-Alive: timeout=15 response header.

Recommended value:

  • 15s–30s for high-traffic websites (reduces open connections)
  • 60s–120s for applications requiring frequent client requests
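Keep-alive tuning often pairs keepalive_timeout with keepalive_requests, which caps how many requests a single connection may carry before it is recycled. A sketch (the 1000 shown matches the Nginx default in recent versions):

```nginx
http {
    keepalive_timeout 30s 30s;   # close idle keep-alive connections after 30 seconds
    keepalive_requests 1000;     # recycle a connection after this many requests
}
```

Recycling connections periodically frees per-connection memory without noticeably affecting clients.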

4. send_timeout

Purpose:
Specifies the maximum time Nginx will wait for the client to acknowledge or receive data when sending a response.

Explanation:
If a client stops reading the response (for example, due to a slow network or intentional delay), Nginx will close the connection after this timeout.

Default:
60s

Important:
This is not the total time to send the response, but the timeout between two successive write operations.
If no data is sent to the client within this time, the connection closes.

Recommended value:

  • 10s–30s for normal websites
  • 60s–120s for large file downloads

http {
    send_timeout 15s;
}

Example Optimized Configuration

http {
    client_header_timeout 15s;
    client_body_timeout 20s;
    keepalive_timeout 30s 30s;
    send_timeout 15s;
}

Pro Tips

  • Lower values improve server security (reducing DoS risk).
  • Higher values improve user experience on slow connections (e.g., mobile networks).
  • Always balance based on your traffic pattern — static site vs. upload-heavy API vs. streaming server.

Enable Gzip Compression in Nginx for Optimization

Gzip is a data compression method that reduces the size of files sent from the server to the client (browser).
When enabled in Nginx, it compresses text-based files such as HTML, CSS, JavaScript, XML, JSON, and others before sending them over the network.

The browser then decompresses the response automatically and displays the page normally, but the transfer happens much faster.

How Gzip Works

  • The client (browser) sends an HTTP request to the server with this header:
    Accept-Encoding: gzip, deflate, br
  • Nginx sees that the browser supports gzip compression.
  • Nginx compresses the response content (e.g., index.html) using the Gzip algorithm.
  • The server sends the compressed data with a header:
    Content-Encoding: gzip
  • The browser decompresses it instantly and renders the page.

So instead of transferring large text files, only the compressed version is transmitted.

Why Gzip is Important

1. Faster Page Load Speed

Compressed files are smaller, so they download faster, reducing page load time significantly.


2. Reduced Bandwidth Usage

Since compressed responses use less data, both the server and clients consume less bandwidth.
This helps reduce hosting and network costs, especially for high-traffic websites.

3. Better User Experience

Faster page loads directly improve:

  • Bounce rate (users stay longer)
  • Conversion rates (faster checkout, better engagement)
  • Mobile performance (less data, faster rendering)

4. Improved SEO Rankings

Google and other search engines consider page speed a ranking factor.
Gzip compression helps your website achieve higher Core Web Vitals scores — improving SEO visibility.


Add the following to nginx.conf:

http {
    gzip on;
    gzip_min_length 1100;
    gzip_buffers 4 32k;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript application/rss+xml application/atom+xml image/svg+xml;
}

Best Practice

  • Keep gzip_comp_level between 4 and 6 for best CPU-to-speed balance.
  • Use Brotli (brotli on;) if available — it’s newer and offers better compression than Gzip.
  • Always check compression using tools like:
    • GTmetrix
    • WebPageTest
    • Chrome DevTools → Network tab → Content-Encoding
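Putting the practices above together, a fuller gzip block might look like the following sketch (gzip_comp_level 5 is an assumed mid-range value; gzip_vary helps intermediary caches serve the right variant):

```nginx
http {
    gzip on;
    gzip_comp_level 5;     # mid-range: good CPU-to-compression balance
    gzip_min_length 1100;  # skip tiny responses where compression gains nothing
    gzip_vary on;          # emit "Vary: Accept-Encoding" for proxies and CDNs
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml image/svg+xml;
}
```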

Access Log Buffering

Nginx logs every request it processes in an access log.
By default, each request is written directly to disk immediately.

Access log buffering changes this behavior:

  • Logs are first stored in memory (buffer).
  • They are written to disk periodically or when the buffer fills.

This reduces disk I/O and improves server performance, especially on high-traffic websites.

Why Enable Access Log Buffering?

  • Reduce Disk I/O
    • Writing every request to disk individually can generate a huge number of write operations.
    • Buffering batches of these writes reduces the load on the disk.
  • Improve Performance
    • Lower disk activity means Nginx workers spend more time handling requests instead of waiting for logs to be written.
    • Critical for high-concurrency websites.
  • Prevent Log Write Bottlenecks
    • On servers with thousands of requests per second, immediate logging can slow down response times.
    • Buffering prevents log writes from becoming a bottleneck.
  • Efficient for High Traffic Sites
    • Sites with heavy traffic, frequent static file requests, or APIs benefit the most.

Example Configuration

access_log /var/log/domlogs/example.com main buffer=32k flush=5m;

Explanation:

  • buffer=32k: Nginx stores 32 KB of log entries in memory before flushing.
  • flush=5m: Nginx writes the buffered logs to disk every 5 minutes, even if the buffer is not full.

This setup ensures minimal disk writes and stable performance on high-traffic domains.

Best Practices

  • Buffer Size
    • Typical range: 8k–64k depending on traffic.
    • Larger buffers reduce disk writes but consume more memory.
  • Flush Interval
    • Typical range: 1m–10m.
    • Shorter flush intervals ensure logs are updated more frequently, which is important for real-time monitoring.
  • Multiple Virtual Hosts
    • Configure access log buffering individually for each vhost if traffic varies significantly.
  • Memory vs. Disk Tradeoff
    • More memory for buffering → fewer writes → better performance.
    • Very large buffers may risk losing logs in case of a server crash (so balance carefully).

Access log buffering is a simple but powerful Nginx optimization technique.
It improves server responsiveness, reduces disk load, and is essential for high-traffic websites. Combined with other optimizations like Gzip, kernel tuning, and caching, it helps Nginx achieve maximum performance.

Enable Browser Caching for Static Files

Browser caching is a mechanism where the browser stores static resources (like images, CSS, JavaScript) locally after the first visit.
When a user revisits the website, the browser loads these resources from the local cache instead of requesting them from the server, reducing page load time and server load.

Why Browser Caching is Important

  • Faster Page Loads
    • Loading files from the browser cache is faster than downloading them again.
    • Improves user experience, especially for repeat visitors.
  • Reduced Server Load
    • Fewer HTTP requests to the server → less CPU, memory, and network usage.
  • Bandwidth Savings
    • Prevents unnecessary data transfer for static content (images, JS, CSS, fonts).
  • Better SEO & Core Web Vitals
    • Google considers site speed as a ranking factor.
    • Cached resources improve Largest Contentful Paint (LCP) and overall page performance.

How It Works in Nginx

Nginx can instruct browsers to cache static files using HTTP headers:

  • expires: Sets the cache duration for a file.
  • Cache-Control: Specifies caching policies like public/private, must-revalidate, max-age, etc.

Example configuration:

location ~* \.(jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|iso|doc|xls|exe|pdf|ppt|txt|tar|mid|midi|wav|bmp|rtf|mp3|ogv|ogg|flv|swf|mpeg|mpg|mpeg4|mp4|avi|wmv|js|css|3gp|sis|sisx|nth)$ {
    expires 30d;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

Best Practices

  • Version Static Files
    • Use versioning in filenames: app.v1.js, style.v2.css
    • Ensures browsers load updated files when content changes.
  • Combine with Gzip
    • Compress static files for faster transfer before caching.
  • Separate Long- and Short-lived Resources
    • Long-lived: images, fonts, icons
    • Short-lived: dynamic HTML or JSON
  • Use Cache Validation
    • must-revalidate ensures outdated files are refreshed when needed.
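The versioning practice above can be combined with aggressive caching. This sketch assumes filenames carry a version token such as app.v1.js (the regex and lifetimes are illustrative):

```nginx
# Versioned assets never change in place, so they can be cached for a long time
location ~* \.v\d+\.(js|css)$ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```

When content changes, you bump the version in the filename (app.v2.js), so browsers fetch the new file while still caching the old one indefinitely.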

Browser caching is a simple but highly effective Nginx optimization:

  • Reduces load on the server
  • Speeds up page loads for returning visitors
  • Saves bandwidth
  • Improves SEO and user experience

Nginx Core File Cache

open_file_cache is an Nginx feature that caches information about frequently accessed files.
It stores file descriptors, metadata, and directory listings in memory so Nginx doesn’t need to check the file system repeatedly for every request.

This reduces disk I/O and file system lookups, which improves performance on high-traffic websites.

Why It’s Important

  • Reduce Disk I/O
    • Without caching, Nginx checks the disk for every file request (existence, permissions, timestamps).
    • open_file_cache keeps this information in memory → fewer disk reads.
  • Faster Response Times
    • Frequently accessed files (like HTML, CSS, JS, images) are served faster because metadata is cached.
  • Better Performance Under High Traffic
    • On busy servers, repeated filesystem checks can become a bottleneck.
    • open_file_cache minimizes this overhead.

Example Configuration

open_file_cache max=2000 inactive=20s;
open_file_cache_valid 60s;
open_file_cache_min_uses 5;
open_file_cache_errors off;

Explanation:

  • max=2000 → Cache up to 2000 files/directories
  • inactive=20s → Remove files from cache if not accessed for 20 seconds
  • open_file_cache_valid 60s → Periodically validate cached metadata every 60 seconds
  • open_file_cache_min_uses 5 → Only cache files accessed at least 5 times
  • open_file_cache_errors off → Do not cache failed file lookups

Best Practices

  • Set max according to expected traffic
    • High-traffic sites → larger max (e.g., 5000–10000)
    • Low-traffic sites → smaller max (e.g., 500–2000)
  • Tune inactive
    • Short-lived cache for frequently updated files
    • Longer inactive period for mostly static files
  • Use open_file_cache_min_uses wisely
    • Prevents rarely accessed files from taking cache space
  • Combine with sendfile and aio
    • Works best when sendfile on; and aio threads=default; are enabled for serving static content.

open_file_cache is a highly effective Nginx optimization for serving static content.

  • Caches file descriptors and metadata in memory
  • Reduces disk access and filesystem checks
  • Improves speed and efficiency, especially under heavy load

Thread Pooling in Nginx

Thread pooling is a technique where Nginx offloads potentially blocking or slow operations (like disk I/O) to a pool of worker threads, instead of handling them directly in the main worker process.

In Nginx, this is particularly useful for file operations, such as reading large static files, which can block a worker process and reduce the server’s ability to handle many concurrent connections.

How It Works

  • Normally, Nginx workers handle requests asynchronously using event-driven I/O (epoll or kqueue).
  • However, some operations, like reading large files from disk, are blocking, meaning the worker process has to wait for the operation to complete.
  • Thread pools allow Nginx to:
    • Place blocking tasks into a queue
    • Let idle threads in the pool process them asynchronously
    • Keep the main worker process free to handle new requests

This ensures high concurrency and better performance under heavy load.

Example Configuration

# Define thread pool
thread_pool default threads=32 max_queue=65536;

http {
    # Enable asynchronous file operations using the thread pool
    aio threads=default;

    server {
        listen 80;
        server_name example.com;
        location / {
            root /var/www/html;
            # Use thread pool for file operations
            sendfile on;
        }
    }
}

Explanation:

  • threads=32 → Number of threads in the pool
  • max_queue=65536 → Maximum number of tasks waiting in the queue
  • aio threads=default → Assign the thread pool named default for asynchronous I/O
  • sendfile on → Use optimized kernel-level file transfer along with threads

When to Use Thread Pools

Recommended for:

  • High-traffic servers
  • Serving large static files (videos, images, downloads)
  • Servers with fast disks and many concurrent requests

Not necessary for:

  • Low-traffic sites
  • Small files and lightweight applications
  • Systems where file I/O is not a bottleneck

Enable HTTP/3 (QUIC) Along with HTTP/2

What is HTTP/2?

HTTP/2 is the second major version of the HTTP protocol, designed to improve the performance of web communications.

Key features:

  • Multiplexing – Multiple requests and responses can be sent over a single TCP connection simultaneously.
  • Header Compression – Reduces overhead by compressing HTTP headers (HPACK).
  • Server Push – Allows the server to proactively send resources to the client before they’re requested.
  • Stream Prioritization – Allows important resources to load faster.

Benefits: Faster page loads, reduced latency, better utilization of single TCP connections.

What is HTTP/3 (QUIC)?

HTTP/3 is the latest version of HTTP, using QUIC instead of TCP as the transport protocol.

QUIC is a transport layer protocol built on UDP, designed by Google to address limitations of TCP and improve modern web performance.

Key Features:

  • Faster Handshakes
    • QUIC reduces connection setup time, especially for HTTPS.
    • Supports 0-RTT, allowing clients to send data immediately if reconnecting.
  • Multiplexing Without Head-of-Line Blocking
    • Unlike HTTP/2 over TCP, if one packet is lost, other streams aren’t blocked.
    • Reduces latency for high-loss networks (e.g., mobile or wireless).
  • Improved Security
    • QUIC integrates TLS 1.3, ensuring encrypted connections by default.
  • Better Performance for Mobile and High-Latency Networks
    • Optimized for users on unreliable networks.

Why Enable Both HTTP/2 and HTTP/3

  • HTTP/2: Still widely supported by almost all browsers and older clients.
  • HTTP/3 / QUIC: Future-proof, faster on modern browsers, better handling of packet loss and mobile networks.

By enabling both, your server can serve:

  • HTTP/3-capable clients → over QUIC
  • Older clients → fallback to HTTP/2 or HTTP/1.1

How to Configure in Nginx

Example Nginx vhost configuration:

server {
    listen 443 quic reuseport;
    listen [::]:443 quic reuseport;
    listen 443 ssl;
    listen [::]:443 ssl;

    # HTTP/2 and HTTP/3 support (the http2/http3 directives require Nginx 1.25.1+;
    # on older versions use "listen 443 ssl http2;" instead)
    http2 on;
    http3 on;

    # Add the Alt-Svc header to advertise HTTP/3 support to clients
    add_header Alt-Svc 'h3=":$server_port"; ma=86400';
    
    server_name example.com;

    # Add your SSL/TLS certificate and key paths
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;

    # QUIC requires TLS 1.3
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_early_data on; # 0-RTT for faster handshakes (0-RTT data can be replayed; avoid for non-idempotent requests)

    # ... other server configurations ...
}

Performance Impact

  • Faster page loads for modern clients (HTTP/3)
  • Reduced latency on high-loss or mobile networks
  • Fallback ensures compatibility with older clients
  • Better security with TLS 1.3 integration

Best Practices

  1. Enable both HTTP/2 and HTTP/3 to serve all clients.
  2. Ensure TLS 1.3 is enabled for QUIC support.
  3. Monitor server logs and performance metrics after enabling HTTP/3 to check compatibility.
  4. Use reuseport and a sufficient UDP buffer size for QUIC traffic.
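
Since QUIC runs over UDP, the kernel’s default UDP buffer limits can become a bottleneck under load. A starting point for /etc/sysctl.conf (the values below are illustrative assumptions; tune them to your traffic):

```
# /etc/sysctl.conf -- raise UDP buffer ceilings for QUIC traffic
# (example values; adjust for your workload)
net.core.rmem_max = 8388608
net.core.wmem_max = 8388608
```

Apply the changes with sysctl -p and re-check QUIC performance under load.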

Configure Proxy Cache and FCGI Cache in Nginx

1. What is Proxy Cache?

Proxy caching in Nginx is used when Nginx acts as a reverse proxy for backend servers (like Apache, PHP-FPM, or any application server).

  • Nginx stores backend responses in a cache on disk or in memory.
  • For repeated requests, Nginx serves the cached response instead of forwarding the request to the backend.

Benefits:

  1. Reduces load on backend servers.
  2. Speeds up response times for repeated requests.
  3. Improves scalability under high traffic.

Example Proxy Cache Configuration

http {
    # Define a cache zone
    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m inactive=60m max_size=1g;

    server {
        location / {
            proxy_pass http://backend_server;
            proxy_cache my_cache;
            proxy_cache_valid 200 302 10m;  # Cache successful responses for 10 minutes
            proxy_cache_valid 404 1m;       # Cache 404 responses for 1 minute
            proxy_cache_use_stale error timeout updating;  # Serve stale cache on backend errors
            add_header X-Proxy-Cache $upstream_cache_status;
        }
    }
}
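
Caching every request indiscriminately can leak per-user content. A common pattern, shown here as a sketch (the sessionid cookie and nocache query argument are assumed names; adapt them to your application), is to bypass the cache for personalized requests:

```nginx
location / {
    proxy_pass http://backend_server;
    proxy_cache my_cache;
    # Skip the cache for logged-in users or explicit ?nocache=1 requests
    proxy_cache_bypass $cookie_sessionid $arg_nocache;
    proxy_no_cache     $cookie_sessionid $arg_nocache;
}
```

proxy_cache_bypass skips the cache lookup for matching requests, while proxy_no_cache prevents their responses from being stored.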

2. What is FCGI Cache?

FCGI caching is used when Nginx is serving dynamic content via FastCGI, typically with PHP-FPM.

  • Nginx caches the rendered PHP pages or other dynamic content.
  • Subsequent requests are served directly from cache, bypassing PHP-FPM.

Benefits:

  1. Reduces CPU and memory load on PHP-FPM.
  2. Speeds up dynamic page delivery.
  3. Handles more concurrent users efficiently.

Example FCGI Cache Configuration

http {
    # Define FCGI cache
    fastcgi_cache_path /var/cache/nginx/fcgi levels=1:2 keys_zone=php_cache:100m inactive=60m max_size=1g;

    server {
        location ~ \.php$ {
            include fastcgi_params;
            fastcgi_pass unix:/run/php/php8.1-fpm.sock;
            fastcgi_index index.php;

            # Enable FCGI caching
            fastcgi_cache php_cache;
            fastcgi_cache_valid 200 302 10m;
            fastcgi_cache_valid 404 1m;
            fastcgi_cache_use_stale error timeout invalid_header updating;
            add_header X-FastCGI-Cache $upstream_cache_status;
        }
    }
}
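
Dynamic sites usually need to exclude logged-in users and admin pages from the FCGI cache. One common sketch (the cookie and URI patterns below assume a WordPress-like application; adjust them to yours):

```nginx
http {
    # Set $skip_cache to 1 when a login cookie or admin URI is present
    map "$request_uri$http_cookie" $skip_cache {
        default                0;
        ~wp-admin              1;
        ~wordpress_logged_in   1;
    }

    server {
        location ~ \.php$ {
            # ... fastcgi_pass and cache settings as above ...
            fastcgi_cache_bypass $skip_cache;
            fastcgi_no_cache     $skip_cache;
        }
    }
}
```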

Best Practices

  • Separate cache zones for different types of content.
  • Set appropriate cache durations depending on content change frequency.
  • Use the *_use_stale options (proxy_cache_use_stale / fastcgi_cache_use_stale) to improve reliability during backend failures.
  • Monitor the cache hit ratio by exposing the $upstream_cache_status variable in a response header or in your access logs.
  • Avoid caching sensitive data like user-specific pages or checkout pages.
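
To quantify the hit ratio, log $upstream_cache_status with each request. A minimal sketch (the log format name and path are illustrative):

```nginx
http {
    # Record the cache result (HIT, MISS, BYPASS, EXPIRED, ...) per request
    log_format cache_status '$remote_addr "$request" $status '
                            'cache:$upstream_cache_status';
    access_log /var/log/nginx/cache.log cache_status;
}
```

Counting HIT versus MISS entries in this log gives the cache hit ratio directly.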

Performance Impact

  • Without cache: Every request hits the backend → slower, higher CPU/memory.
  • With cache: Repeated requests served from cache → much faster, lower load.
  • Reduces database queries and PHP processing significantly.

Conclusion

Optimizing Nginx is not a one-time task—it’s an ongoing process. With the right kernel tuning, buffer settings, and caching, you can significantly enhance server responsiveness.

Using Cpnginx, these optimizations become effortless through template-based configurations—delivering maximum performance with minimal manual tuning.

FAQs

1. What is the best way to optimize Nginx?

By tuning kernel parameters, enabling caching, adjusting worker limits, and enabling compression.

2. How can I optimize Nginx on cPanel?

Use Cpnginx to apply template-based optimizations automatically.

3. What kernel settings improve Nginx performance?

Parameters like net.core.somaxconn, tcp_tw_reuse, and fs.file-max improve TCP handling and connection throughput.

4. How does Gzip compression improve speed?

It reduces response size, saving bandwidth and speeding up content delivery.
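
In Nginx, compression is typically enabled in the http block. A minimal sketch (compression level and MIME types are illustrative assumptions; text/html is always compressed when gzip is on):

```nginx
http {
    gzip on;
    gzip_comp_level 5;      # balance CPU cost against compression ratio
    gzip_min_length 256;    # skip responses too small to benefit
    gzip_types text/css application/javascript application/json image/svg+xml;
    gzip_vary on;           # send "Vary: Accept-Encoding" for caches
}
```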

5. Should I enable HTTP/3 in Nginx?

Yes, HTTP/3 improves connection latency and performance on modern browsers.
