In production, we typically replace the default Django development server, which is neither secure nor optimized for real traffic, with Gunicorn. Gunicorn is a production-ready WSGI application server for Python that runs Django applications by accepting incoming HTTP requests and passing them to Django for processing across multiple worker processes. It:
- Manages multiple worker processes
- Handles concurrent requests
- Interfaces Django with web servers (like Nginx)
- Is designed for stability and performance
Gunicorn, however, does not serve static files (CSS, JavaScript, images). For this reason, it is commonly combined with Nginx, a high-performance web server and reverse proxy. Nginx forwards dynamic requests to Gunicorn while serving static files directly, which significantly improves performance. Nginx's key features include:
- Serves static and media files efficiently
- Acts as a reverse proxy to Gunicorn
- Handles SSL/TLS (HTTPS)
- Provides load balancing and caching
- Extremely fast and memory-efficient
This post concentrates on providing a minimal working Nginx configuration for serving static files, applying basic rate limiting, and running both Gunicorn and Nginx inside Docker containers.
Collecting Static Files
To ensure static files are properly served, we first need to collect them into a directory that Nginx can use.
In the Django settings file, we add:
STATIC_ROOT = BASE_DIR / "staticfiles"
This setting tells Django where to place all static files (CSS, JS, images) when running collectstatic. After this, Django knows the final destination for static assets.
To gather all static files, run:
python manage.py collectstatic
This command gathers static files from:
- each Django app
- STATICFILES_DIRS
- the Django admin
and copies them all into STATIC_ROOT (src/staticfiles/).
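Conceptually, collectstatic walks a list of source directories and copies each file into STATIC_ROOT, with earlier finders taking precedence for duplicate paths. A simplified Python sketch of that behavior (not Django's actual implementation):

```python
import shutil
from pathlib import Path

def collect_static(source_dirs, static_root):
    """Simplified model of collectstatic: copy every file from each
    source directory into static_root, preserving relative paths.
    The first source that provides a given path wins, mirroring
    Django's finder order."""
    static_root = Path(static_root)
    collected = []
    for source in source_dirs:
        source = Path(source)
        for path in sorted(source.rglob("*")):
            if not path.is_file():
                continue
            dest = static_root / path.relative_to(source)
            if dest.exists():
                continue  # an earlier source already provided this file
            dest.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(path, dest)
            collected.append(str(path.relative_to(source)))
    return collected
```

The real command also handles storage backends, hashed filenames (with ManifestStaticFilesStorage), and flags like --clear; this sketch captures only the copy step.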
Nginx Container
We create an nginx directory and, inside it, a conf.d directory.
The staticfiles directory is mapped to /app/static inside the container so that Nginx can serve static content directly.
nginx:
  image: nginx:latest
  container_name: nginx
  ports:
    - "80:80"
  depends_on:
    - web
  volumes:
    - ./src/nginx/conf.d:/etc/nginx/conf.d
    - ./src/nginx/logs:/var/log/nginx
    - ./src/staticfiles:/app/static
  restart: always
  networks:
    - myproject-net
Nginx Configuration
Next, we configure Nginx. The conf.d directory holds a default.conf file with the Nginx configuration. Static files are served directly from /app/static/, while all other requests are forwarded to port 8000, where Gunicorn serves the Django application.
It is also a good idea to apply rate limiting to your application (more details can be found in Rate Limiting with NGINX – NGINX Community Blog).
limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;

    location /static/ {
        alias /app/static/;
    }

    location / {
        limit_req zone=mylimit burst=20 nodelay;
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
The configuration does the following:
- limit_req_zone: Defines a shared memory zone used for request rate limiting.
- $binary_remote_addr: Uses the client’s IP address (in binary form) as the key, meaning rate limiting is applied per client IP.
- zone=mylimit:10m: Names the zone mylimit and allocates 10 MB of shared memory.
- rate=10r/s: Allows 10 requests per second per IP.
- listen 80: Nginx listens for incoming HTTP traffic on port 80.
- location /static/: Matches all requests starting with /static/. Serves files directly from /app/static/ inside the container and bypasses the backend app (Gunicorn).
- location /: matches all other requests.
- limit_req zone=mylimit: Applies the previously defined rate-limit zone.
- burst=20 nodelay: Allows up to 20 requests to exceed the rate temporarily. Because of nodelay, requests within the burst are processed immediately rather than queued. If the burst limit is exceeded, requests are rejected immediately (HTTP 503 by default).
- proxy_pass http://web:8000: Forwards requests to the backend service. web is the Docker service name, port 8000 is where Gunicorn is listening.
- proxy_set_header Host $host: Passes the original Host header to the backend.
- proxy_set_header X-Real-IP $remote_addr: Sends the client’s real IP address to the backend.
- proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for: Appends the client IP to the X-Forwarded-For chain (a list of IP addresses representing the full proxy chain; there may be several entries depending on the number of proxies).
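The rate-plus-burst semantics can be modeled as a leaky-bucket counter. The following Python sketch is a simplified model of how limit_req with nodelay behaves, not nginx's actual implementation:

```python
class LeakyBucket:
    """Simplified model of nginx limit_req with nodelay: requests
    drain from the bucket at `rate` per second, and up to `burst`
    requests above the rate are accepted immediately."""

    def __init__(self, rate, burst):
        self.rate = rate      # allowed requests per second
        self.burst = burst    # extra allowance above the rate
        self.excess = 0.0     # how far ahead of the rate we are
        self.last = None      # timestamp of the previous request

    def allow(self, now):
        if self.last is not None:
            # drain the bucket for the time elapsed since the last request
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        if self.excess > self.burst:
            return False      # nginx would answer 503 here
        self.excess += 1
        return True
```

With rate=10 and burst=20, a sudden volley of simultaneous requests gets 21 through (one at the base rate plus the 20-request burst) before rejections begin, and the allowance then refills at 10 requests per second.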
Web (Gunicorn) and Nginx Containers
The final configuration for web and nginx containers looks like the following. Port 8000 for web service is exposed only inside the Docker network.
services:
  web:
    container_name: web
    command: sh -c "gunicorn project.wsgi:application --workers 4 --threads 10 --bind 0.0.0.0:8000"
    image: project:latest
    expose:
      - "8000"
    volumes:
      - ./src:/src
    networks:
      - myproject-net

  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    depends_on:
      - web
    volumes:
      - ./src/nginx/conf.d:/etc/nginx/conf.d
      - ./src/nginx/logs:/var/log/nginx
      - ./src/staticfiles:/app/static
    restart: always
    networks:
      - myproject-net

networks:
  myproject-net:
    external: true
This Docker Compose setup defines a two-container architecture using Nginx as a reverse proxy and Gunicorn to run the Django application.
The web service runs the Django application using Gunicorn. It starts Gunicorn with four worker processes and ten threads per worker, listening on port 8000 inside the container. The application code is mounted from the host into the container, allowing changes to the source code without rebuilding the image.
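The 4 workers × 10 threads choice is a tuning decision, not a fixed rule. Gunicorn's documentation suggests roughly (2 × CPU cores) + 1 workers as a starting point; a small sketch of that heuristic (the function names are ours, not Gunicorn's):

```python
import os

def suggested_workers(cpu_count=None):
    """Gunicorn's commonly cited starting point: (2 x CPU cores) + 1."""
    if cpu_count is None:
        cpu_count = os.cpu_count() or 1
    return 2 * cpu_count + 1

def total_concurrency(workers, threads):
    """Rough upper bound on simultaneous requests one container handles."""
    return workers * threads
```

For example, on a 4-core machine the heuristic suggests 9 workers, and the compose file above allows roughly 4 × 10 = 40 concurrent requests per container.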
The nginx service acts as the public entry point. It listens on port 80 of the host machine and forwards incoming HTTP requests to the web service. Nginx loads its configuration from a mounted directory on the host, stores its logs on the host for easier access, and directly serves static files from a mounted static directory instead of passing those requests to the Django application. The container is configured to restart automatically if it stops unexpectedly.
Both services are connected to an externally managed Docker network.
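One practical consequence of this setup: inside Django, REMOTE_ADDR is the Nginx container's address, so the original client IP must be recovered from the X-Forwarded-For header that Nginx sets. A minimal illustrative sketch (the helper name is ours; in production, a maintained package such as django-ipware is a safer choice):

```python
def client_ip(meta):
    """Extract the original client IP from a request.META-style dict.

    X-Forwarded-For may contain a comma-separated chain of addresses;
    the leftmost entry is the original client. Only trust it when every
    proxy in front of the application is under your control."""
    forwarded_for = meta.get("HTTP_X_FORWARDED_FOR")
    if forwarded_for:
        return forwarded_for.split(",")[0].strip()
    return meta.get("REMOTE_ADDR")
```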