Author: Karen
Categories: TECH
Tags: Django Docker Nginx

In production environments, instead of the default Django development server, which is neither secure nor optimized for real traffic, we typically use Gunicorn for better security and scalability. Gunicorn is a production-ready WSGI application server for Python that runs Django applications by handling incoming HTTP requests and passing them to Django for processing using multiple worker processes. Gunicorn:

  • Manages multiple worker processes
  • Handles concurrent requests
  • Interfaces Django with web servers (like Nginx)
  • Is designed for stability and performance

Gunicorn, however, does not serve static files (such as CSS and JavaScript). For this reason, it is commonly combined with Nginx, a high-performance web server and reverse proxy. Nginx forwards dynamic requests to Gunicorn while serving static files directly, which significantly improves performance. Some of Nginx's key features:

  • Serves static and media files efficiently
  • Acts as a reverse proxy to Gunicorn
  • Handles SSL/TLS (HTTPS)
  • Provides load balancing and caching
  • Is extremely fast and memory-efficient

This post concentrates on providing a minimal working Nginx configuration for serving static files, applying basic rate limiting, and running both Gunicorn and Nginx inside Docker containers.

Collecting Static Files

To ensure static files are properly served, we first need to collect them into a directory that Nginx can use.

In the Django settings file, we add:

STATIC_ROOT = BASE_DIR / "staticfiles"

This setting tells Django where to place all static files (CSS, JS, images) when running collectstatic. After this, Django knows the final destination for static assets.

To gather all static files, run:

python manage.py collectstatic

This command gathers static files from:

  • each Django app
  • STATICFILES_DIRS
  • Django admin

and copies them all into STATIC_ROOT (src/staticfiles/).  

Nginx Container 

We create an nginx directory, with a conf.d directory inside it.

The staticfiles directory is mapped to /app/static inside the container so that Nginx can serve static content directly.

nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    depends_on:
      - web
    volumes:
      - ./src/nginx/conf.d:/etc/nginx/conf.d
      - ./src/nginx/logs:/var/log/nginx
      - ./src/staticfiles:/app/static
    restart: always
    networks:
      - myproject-net

Nginx Configuration 

Next, we configure Nginx. The conf.d directory contains a default.conf file holding the Nginx configuration. Static files are served directly from /app/static/, while all other requests are forwarded to port 8000, where Gunicorn serves the Django application.

It is also a good idea to apply rate limiting to your application (more details can be found in Rate Limiting with NGINX – NGINX Community Blog).

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;

    location /static/ {
        alias /app/static/;
    }

    location / {
        limit_req zone=mylimit burst=20 nodelay;
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

The configuration does the following:

  • limit_req_zone: Defines a shared memory zone used for request rate limiting.
  • $binary_remote_addr: Uses the client’s IP address (in binary form) as the key, meaning rate limiting is applied per client IP.
  • zone=mylimit:10m: Names the zone mylimit and allocates 10 MB of shared memory.
  • rate=10r/s: Allows 10 requests per second per IP.
  • listen 80: Nginx listens for incoming HTTP traffic on port 80.
  • location /static/: Matches all requests starting with /static/. Serves files directly from /app/static/ inside the container and bypasses the backend app (Gunicorn).
  • location /: matches all other requests.
  • limit_req zone=mylimit: Applies the previously defined rate-limit zone.
  • burst=20: Allows up to 20 requests to exceed the rate temporarily. With nodelay, requests in the burst are not delayed. If the burst limit is exceeded, requests are immediately rejected (HTTP 503).
  • proxy_pass http://web:8000: Forwards requests to the backend service. web is the Docker service name, port 8000 is where Gunicorn is listening.
  • proxy_set_header Host $host: Passes the original Host header to the backend.
  • proxy_set_header X-Real-IP $remote_addr: Sends the client’s real IP address to the backend.
  • proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for: Appends the client IP to the X-Forwarded-For chain (a list of IP addresses representing the full proxy chain; there may be multiple entries depending on the number of proxies).
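To make the rate-limiting semantics concrete, here is a simplified Python model of the per-IP accounting limit_req performs with nodelay (an illustration of the behavior, not nginx's actual implementation):

```python
import time

class LeakyBucket:
    """Simplified model of nginx limit_req with nodelay (rate in req/s, burst)."""

    def __init__(self, rate=10, burst=20):
        self.rate = rate     # requests per second that drain the excess
        self.burst = burst   # excess requests tolerated before rejecting
        self.excess = 0.0    # requests currently above the configured rate
        self.last = None     # timestamp of the previous request

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        if self.last is not None:
            # excess drains continuously at `rate` requests per second
            self.excess = max(0.0, self.excess - (now - self.last) * self.rate)
        self.last = now
        self.excess += 1
        if self.excess > self.burst:
            self.excess -= 1   # rejected; nginx would answer 503 here
            return False
        return True
```

With rate=10 and burst=20, an instantaneous burst from one IP admits roughly 20 requests; afterwards capacity refills at 10 requests per second.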

Web (Gunicorn) and Nginx Containers

The final configuration for the web and nginx containers looks like the following. Port 8000 of the web service is exposed only inside the Docker network.

services:
  web:
    container_name: web
    command: sh -c "gunicorn project.wsgi:application --workers 4 --threads 10 --bind 0.0.0.0:8000"
    image: project:latest
    expose:
      - "8000"
    volumes:
      - ./src:/src
    networks:
      - myproject-net

  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
    depends_on:
      - web
    volumes:
      - ./src/nginx/conf.d:/etc/nginx/conf.d
      - ./src/nginx/logs:/var/log/nginx
      - ./src/staticfiles:/app/static
    restart: always
    networks:
      - myproject-net


networks:
  myproject-net:
    external: true

This Docker Compose setup defines a two-container architecture using Nginx as a reverse proxy and Gunicorn to run the Django application.

The web service runs the Django application using Gunicorn. It starts Gunicorn with four worker processes and ten threads per worker, listening on port 8000 inside the container. The application code is mounted from the host into the container, allowing changes to the source code without rebuilding the image. 

The nginx service acts as the public entry point. It listens on port 80 of the host machine and forwards incoming HTTP requests to the web service. Nginx loads its configuration from a mounted directory on the host, stores its logs on the host for easier access, and directly serves static files from a mounted static directory instead of passing those requests to the Django application. The container is configured to restart automatically if it stops unexpectedly.

Both services are connected to an externally managed Docker network.
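Because myproject-net is declared with external: true, Compose does not create it automatically; it must exist before docker compose up, which can be done once with:

```shell
docker network create myproject-net
```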

Author: Karen
Categories: TECH
Tags: Django Python Docker

Event streaming is a way of processing data as a continuous flow of events rather than as one-time requests. Each event represents something that happened (e.g., “user signed up”, “order placed”), and systems can react to these events in real time or near real time. Event streaming enables systems to publish, consume, store, and process events continuously and asynchronously, making it well suited for real-time features such as notifications, dashboards, and live updates.

Pub/Sub Model

The pub/sub model is a messaging pattern where publishers send messages and subscribers (clients) receive them. Redis provides such a mechanism using channels, where messages published to a channel are delivered to all active subscribers.

Messages in Redis pub/sub are not persisted, meaning that if a subscriber is offline, it will miss any messages published during that time. There is also no built-in replay support. Despite these limitations, Redis pub/sub is extremely efficient and well suited for real-time notifications, such as chat systems or live status updates.

Streaming API (View)

In this post we will implement a Django Server-Sent Events (SSE) endpoint that streams messages from a Redis pub/sub channel to connected clients in real time.

The stream_events view subscribes to the demo_stream Redis channel and continuously listens for new messages. Whenever a message arrives, it is yielded as an SSE-formatted response using Django’s StreamingHttpResponse.

import redis
from django.http import StreamingHttpResponse
import time

def stream_events(request):
    r = redis.Redis(host="redis", port=6379, db=0)
    pubsub = r.pubsub()
    pubsub.subscribe("demo_stream")

    def event_stream():
        try:
            print("connected")
            while True:
                message = pubsub.get_message(timeout=1)
                if message and message["type"] == "message":
                    data = message["data"].decode("utf-8")
                    print(data)
                    yield f"data: {data}\n\n"
                time.sleep(0.01)
        except GeneratorExit:
            # Client disconnected
            pass
        finally:
            pubsub.close()
    response = StreamingHttpResponse(
        event_stream(), content_type="text/event-stream")
    response['Cache-Control'] = 'no-cache'
    return response

This implementation keeps the HTTP connection open and continuously pushes updates to the client whenever new messages are published to Redis.
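The data: ...\n\n framing used in event_stream is the core of the SSE wire format. As an illustration, a small helper that also covers the multi-line and named-event cases the view above does not need:

```python
def sse_frame(data, event=None):
    """Format a message as a text/event-stream frame."""
    lines = []
    if event is not None:
        lines.append(f"event: {event}")      # optional event name
    # each line of the payload gets its own "data:" field
    for chunk in (data.splitlines() or [""]):
        lines.append(f"data: {chunk}")
    return "\n".join(lines) + "\n\n"         # blank line terminates the frame
```

For a single-line payload, sse_frame("hello") produces exactly the string the view yields: "data: hello\n\n".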

Publishing Messages

Messages can be published to Redis using redis-cli or a Python Redis client, and any client visiting /stream/ will receive them live.

From the Redis CLI running inside Docker, messages can be published with 

docker exec -it redis redis-cli publish demo_stream "Test Event"

while from the Python Redis client, an example looks like this:

import redis, time

r = redis.Redis(host="127.0.0.1", port=6379, db=0)
for i in range(5):
   msg = f"Message #{i}"        
   r.publish("demo_stream", msg)
   print("Published:", msg)     
   time.sleep(1)

Each published message is immediately pushed to all connected SSE clients.

Accessing the Stream

The streaming URL is 

path('stream/', stream_events, name="stream")

When visiting /stream/ in the browser (or via a compatible client), any published messages will appear in real time, as long as the client remains connected.

Note on Sync vs Async

This view runs in synchronous mode. If the Django application is running under an async server (like Daphne or Uvicorn), then a synchronous streaming view that uses blocking operations may block the event loop. This can cause the entire application to become unresponsive. To avoid this, make sure that either:

  • the whole application runs in synchronous mode, or

  • the view is rewritten to be fully async and non-blocking.
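The fully async variant would use the same streaming pattern written as an async generator. The sketch below uses an asyncio.Queue as a stand-in for the Redis subscription (with redis.asyncio, awaiting the next pub/sub message would take the place of queue.get()); recent Django versions accept async iterators in StreamingHttpResponse:

```python
import asyncio

async def event_stream(queue):
    """Async generator yielding SSE frames from a message source.

    `queue` stands in for a Redis pub/sub subscription; with redis.asyncio
    you would await the next pub/sub message here instead of queue.get().
    """
    while True:
        data = await queue.get()   # non-blocking wait for the next message
        if data is None:           # sentinel: stream closed
            break
        yield f"data: {data}\n\n"

async def demo():
    # feed a few messages, then the close sentinel
    queue = asyncio.Queue()
    for msg in ("one", "two", None):
        queue.put_nowait(msg)
    return [frame async for frame in event_stream(queue)]

frames = asyncio.run(demo())
```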

Author: Karen
Categories: TECH
Tags: Terraform AWS

A good starting point for creating EC2 instances is to create them in the default VPC and subnet provided by AWS.

Deploying an Amazon EC2 instance within the default VPC subnet significantly reduces setup complexity while still providing a secure, scalable, and reliable infrastructure. The default VPC is automatically configured with essential components such as subnets in each Availability Zone, route tables, security groups, network ACLs, and an internet gateway. This allows you to launch EC2 instances quickly without needing deep networking expertise, making it ideal for rapid production deployments.

Additionally, using the default VPC subnet ensures built-in connectivity and high availability while following AWS best practices. Instances launched in a default subnet can access the internet immediately (when assigned a public IP). As workloads grow, these deployments in the default VPC can seamlessly integrate with load balancers, auto scaling groups, and managed services.

In this article we will go over the steps of setting up a minimal production-ready EC2 environment with Terraform which can be used for deploying your application. Before diving into the actual implementation, it helps to understand a few core AWS networking concepts.

Virtual Private Cloud (VPC)

A VPC is your own private network inside AWS. You can think of it as your company’s private “internet neighborhood,” isolated from others. Inside the VPC you define IP ranges, create subnets, and control all networking.

Subnet

A subnet is a smaller network segment inside your VPC.
Subnets can be:

  • Public: can reach the internet (through an Internet Gateway)

  • Private: cannot directly reach the internet

In the default VPC, all subnets are public because they have routes to the Internet Gateway.

Route Table

A route table contains rules that determine how network traffic is directed:

  • Internet-bound traffic is routed through the Internet Gateway

  • Internal traffic stays within the VPC

The default VPC includes a default route table that already has a 0.0.0.0/0 route to the internet gateway, making all its subnets public.

Internet Gateway (IGW)

An IGW allows resources inside your VPC to connect to the internet. Default VPC has an IGW attached.

Security Groups

Security groups are virtual firewalls attached to EC2 instances or other resources, defining what inbound and outbound traffic is allowed.

Terraform template

We select AWS as the provider, then fetch the default VPC and subnets. Existing resources are referenced using the data keyword, while new resources are created using the resource keyword.

For the security group, we open ports 80 and 22 for web traffic and SSH access respectively, and outbound traffic is allowed to anywhere.

We then create a t2.micro EC2 instance, referencing the correct subnet, security group, and key pair. The key pair is used for SSH access from your local machine.

To generate a key pair if one does not already exist, run:

ssh-keygen -t rsa -b 4096 -f ~/.ssh/my-key

This command generates the my-key and my-key.pub files in the ~/.ssh directory (you can give the key any other name, and the passphrase may be skipped).

The public key is uploaded to EC2. You can verify this in the AWS console under:

Instance -> Details -> Key pair assigned at launch, which should reference my-key,

or by connecting to the instance and checking:

cat ~/.ssh/authorized_keys.

To SSH into your instance:

ssh -i ~/.ssh/my-key [user]@[public_ip_of_instance] (Bash)

or 

ssh -i $env:USERPROFILE\.ssh\my-key [user]@[public_ip_of_instance] (PowerShell)

The corresponding Terraform template is the following:

provider "aws" {
  region = "eu-west-2"  # Choose the AWS region to deploy into
}

# Fetch the region's default VPC
data "aws_vpc" "default" {
  default = true  # Tells AWS: use the default VPC
}

# Get all default subnets inside the default VPC
data "aws_subnets" "default" {
  filter {
    name   = "vpc-id"               # Filter subnets by VPC ID
    values = [data.aws_vpc.default.id]  # Use the default VPC's ID
  }
}

# Security group for the web instance
resource "aws_security_group" "web" {
  name        = "web-sg"        # Security group name
  description = "Allow HTTP and SSH"  # What this SG is for
  vpc_id      = data.aws_vpc.default.id  # Attach SG to default VPC

  ingress {
    description = "HTTP"           # Allow web traffic
    from_port   = 80
    to_port     = 80
    protocol    = "tcp"            # HTTP uses TCP
    cidr_blocks = ["0.0.0.0/0"]    # Allow from anywhere
  }

  ingress {
    description = "SSH"            # Allow SSH access
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"            # SSH uses TCP
    cidr_blocks = ["0.0.0.0/0"]    # Allow SSH from anywhere (not ideal for prod)
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"             # -1 = all protocols
    cidr_blocks = ["0.0.0.0/0"]    # Allow all outbound traffic
  }
}

resource "aws_key_pair" "host_key" {
  key_name   = "my-key"
  public_key = file("~/.ssh/my-key.pub")
}


# EC2 instance running the web app
resource "aws_instance" "web" {
  ami                    = "ami-03a725ae7d906005d" # OS image (update for your region)
  instance_type          = "t2.micro"               # Instance size
  subnet_id              = data.aws_subnets.default.ids[0]  # Put instance in a default subnet
  vpc_security_group_ids = [aws_security_group.web.id] # Attach security group
  associate_public_ip_address = true  # Give the instance a public IP
  key_name = aws_key_pair.host_key.key_name

  tags = {
    Name = "web"  # Tag for identifying the instance
  }
}
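As an optional addition, the instance's public IP can be surfaced after terraform apply with an output block (a small sketch; aws_instance.web refers to the resource defined above):

```hcl
output "web_public_ip" {
  description = "Public IP of the web instance"
  value       = aws_instance.web.public_ip
}
```

Running terraform output web_public_ip then prints the address to use in the ssh commands above.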

IP Restriction

Opening SSH to all IPs (0.0.0.0/0) is not recommended, as it exposes the instance to brute-force attempts, and even with key-based authentication there are risks involved. A safer approach is to restrict SSH access to your own IP address. Let's replace cidr_blocks = ["0.0.0.0/0"] in

  ingress {
    description = "SSH"            # Allow SSH access
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"            # SSH uses TCP
    cidr_blocks = ["0.0.0.0/0"]    # Allow SSH from anywhere (not ideal for prod)
  }

with a variable-based configuration

variable "ssh_allowed_ips" {
  type = list(string)
}

  ingress {
    description = "SSH"            # Allow SSH access
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"            # SSH uses TCP
    cidr_blocks = var.ssh_allowed_ips
  }

Then define the ssh_allowed_ips variable in terraform.tfvars, which is located in the same directory as your main.tf.

ssh_allowed_ips = [
  "203.0.113.42/32"
]

Check your computer's public IP address (it can be found using a service such as What Is My IP Address) and adjust ssh_allowed_ips to match your own public address.
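The CIDR entry can be sanity-checked with Python's ipaddress module before applying (a quick illustration; 203.0.113.0/24 stands in as a hypothetical broader range):

```python
import ipaddress

# a /32 admits exactly one host
allowed = ipaddress.ip_network("203.0.113.42/32")
print(ipaddress.ip_address("203.0.113.42") in allowed)  # True
print(ipaddress.ip_address("203.0.113.43") in allowed)  # False

# a broader prefix admits a whole range
office = ipaddress.ip_network("203.0.113.0/24")
print(office.num_addresses)  # 256
```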

This method, however, may not be ideal if your PC’s IP address changes frequently. In that case, configuring an IP range instead of a single IP, using a VPN, or leveraging AWS Systems Manager may be better options.

If everything was done correctly, you should have an up-and-running EC2 instance that accepts SSH connections from your PC only.

Notes

  • Route tables belong to a VPC but are associated with subnets
  • Internet Gateways are VPC-level resources
  • A public EC2 instance cannot reside in a private subnet

Author: Karen
Categories: TECH
Tags: Django Python

Asynchronous APIs are designed to handle many concurrent requests efficiently by not blocking threads on I/O operations (such as database queries, network calls, or file access). Instead of assigning one worker per request, async APIs use an event loop to switch between tasks, allowing better resource utilization under high concurrency.

In this article we will compare the performance of a sync Django view running with a Gunicorn server and an async Django view running with an async Uvicorn server. In both cases we call a test API, and although the async view, which runs in an event loop and is not constrained by CPU threads, might be expected to be significantly faster, the results show the opposite.

To test the performance, we use hey load testing tool, which can be installed on Linux systems using:

sudo apt install hey

Defining Views

We define two views, sync_view and async_view, both calling the same external API endpoint.

import requests
from django.http import JsonResponse
import httpx


def sync_view(request):
    r = requests.get("https://jsonplaceholder.typicode.com/todos/1")
    return JsonResponse(r.json())


async def async_view(request):
    async with httpx.AsyncClient() as client:
        r = await client.get("https://jsonplaceholder.typicode.com/todos/1")
    return JsonResponse(r.json())

The routes are:

/sync/test → sync view
/async/test → async view

and the corresponding URL mappings are:

path('async/test', async_view, name="async-test"),
path('sync/test', sync_view, name="sync-test"),

Load Testing Setup

The tests were run inside Docker containers using:

  • Gunicorn for the sync view
  • Uvicorn for the async view

Sync View Testing

To test the sync view performance (sync/test), we first run the Django server with Gunicorn with 4 workers and 10 threads per worker. This means 40 requests can be served simultaneously.

gunicorn project.wsgi:application --workers 4 --threads 10 --bind 0.0.0.0:8000

We send 150 requests with 30 of them concurrent, 300 requests with 50 concurrency, and 1000 requests with 100 concurrency, and log the results.
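In hey, the -n flag sets the total number of requests and -c the number of concurrent workers, so these load profiles correspond to commands like the following (assuming the app is reachable at localhost:8000; swap in /async/test for the async runs):

```shell
hey -n 150 -c 30 http://localhost:8000/sync/test
hey -n 300 -c 50 http://localhost:8000/sync/test
hey -n 1000 -c 100 http://localhost:8000/sync/test
```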

Async View Testing 

To test the async view performance (async/test), we run the Django server with Uvicorn.

uvicorn project.asgi:application --workers 4 --host 0.0.0.0 --port 8000

Terminology

  • Requests Per Second (RPS) measures the number of requests a server is able to handle and complete per second under a given load.

  • Average latency shows how long a request takes on average.

  • P95 latency means that 95% of requests completed faster than a given value (for example, 1.22s).
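These metrics can be recomputed from raw per-request latencies. A small sketch using the nearest-rank method (load-testing tools may use slightly different percentile definitions):

```python
import math

def percentile(latencies, pct):
    """Nearest-rank percentile: the value below which pct% of samples fall."""
    ordered = sorted(latencies)
    rank = math.ceil(pct / 100 * len(ordered))   # 1-based nearest rank
    return ordered[rank - 1]

def summarize(latencies, duration_s):
    """RPS, average and P95 latency for one load-test run."""
    return {
        "rps": len(latencies) / duration_s,
        "avg": sum(latencies) / len(latencies),
        "p95": percentile(latencies, 95),
    }
```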

Results

Testing results are presented in the table below.

Test         Mode    RPS    Avg latency    P95 latency
150 / 30     Sync    41.3   0.45s          1.22s
             Async   35.0   0.69s          1.52s
300 / 50     Sync    86.2   0.46s          0.93s
             Async   38.1   0.94s          1.69s
1000 / 100   Sync    81.4   0.92s          1.88s
             Async   42.0   1.75s          3.04s

The results show that the sync view consistently outperforms the async view: the sync view has higher RPS, lower average latency, and better tail latency.

Takeaway

Async views do not automatically imply faster performance; for single outbound API calls at moderate load, sync views can often be faster, simpler, and more predictable.

In this case, the sync setup performs well because Gunicorn provides enough threads to efficiently wait for I/O without overwhelming the system.

Author: Karen
Categories: TECH
Tags: Django Python Django Rest Framework

Django Rest Framework (DRF) is a framework for building web APIs using Django, making the process of building reusable APIs very easy and efficient. It exposes Django models and business logic as RESTful services, handling common API tasks like serialization, authentication, permissions, and request/response formatting. Some of the key functionalities of DRF include:

  • Serialization: Convert complex data types (e.g., Django models, querysets) to and from JSON, XML, and other formats
  • Request & Response handling
  • Viewsets & Generic Views: Rapid API development with reusable, class-based views
  • Authentication: Built-in support for session, token, JWT (via extensions), OAuth, and custom authentication 
  • Permissions & Authorization: Fine-grained access control at global, view, or object level
  • Pagination: Easy handling of large datasets with customizable pagination styles
  • Filtering & Searching: Built-in filtering, search, and ordering support

This post focuses on building APIs and performing serialization using DRF’s built-in classes, gradually moving from low-level views to higher-level abstractions.

The most basic class implemented by DRF is the APIView class. It gives you full control over request handling while still providing DRF features such as authentication and permissions. Let's consider the following sample Django model:

class Book(models.Model):
    name = models.CharField(max_length=100)
    category = models.ForeignKey(Category, on_delete=models.CASCADE, null=True, blank=True)

Basic GET APIs with APIView

To create APIs for retrieving all books or a single book by ID, we can implement a GET handler like this:

from rest_framework import status
from rest_framework.views import APIView
from rest_framework.response import Response
from django.shortcuts import get_object_or_404
from events.models import Book


class BookView(APIView):

    def get(self, request, book_id=None):
        if book_id is not None:
            book = get_object_or_404(Book, id=book_id)
            return Response(self.serialize_response(book))

        books = Book.objects.all()
        return Response([self.serialize_response(b) for b in books])

    def serialize_response(self, book):
        return {
            "id": book.id,
            "name": book.name,
            "category": book.category_id,
        }

with URL configuration

path("books/<int:book_id>/", BookView.as_view(), name='book-detail'),
path("books/", BookView.as_view(), name='book-list')

If book_id is provided, the API returns a single serialized book. Otherwise, it returns a list of all books.
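For instance, a GET request to /books/1/ would return a response shaped like this (the field values here are hypothetical):

```
{
    "id": 1,
    "name": "Dune",
    "category": 3
}
```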

Creating, Updating, and Deleting with APIView

To support creating, updating, and deleting books, we add POST, PUT, and DELETE methods:

from rest_framework import status
from rest_framework.views import APIView
from rest_framework.response import Response
from django.shortcuts import get_object_or_404
from events.models import Book


class BookView(APIView):

    def post(self, request):
        data = request.data

        book = Book.objects.create(
            name=data.get("name"),
            category_id=data.get("category"),
        )

        return Response(
            self.serialize_response(book),
            status=status.HTTP_201_CREATED
        )

    def put(self, request, book_id):
        book = get_object_or_404(Book, id=book_id)
        data = request.data

        book.name = data.get("name", book.name)
        book.category_id = data.get("category", book.category_id)
        book.save()

        return Response(
            self.serialize_response(book),
            status=status.HTTP_200_OK
        )

    def delete(self, request, book_id):
        book = get_object_or_404(Book, id=book_id)
        book.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

    def serialize_response(self, book):
        return {
            "id": book.id,
            "name": book.name,
            "category": book.category_id,
        }

DRF’s browsable API allows you to navigate to endpoints such as /books/ or /books/1/ and interact with these APIs directly via HTML forms.

Defining a ModelSerializer

Manually serializing data and handling object creation can become repetitive and error-prone. DRF Serializers solve this by converting complex data into JSON-friendly formats and validating incoming request data before saving it.

They act as the bridge between HTTP data and Django models.

from rest_framework import serializers
from events.models import Book

class BookSerializer(serializers.ModelSerializer):
    class Meta:
        model = Book
        fields = ["id", "name", "category"]

Refactored View Using Serializer

class BookSerializerView(APIView):
    # GET /books/ or /books/<id>/
    def get(self, request, book_id=None):
        if book_id is not None:
            book = get_object_or_404(Book, id=book_id)
            serializer = BookSerializer(book)
            return Response(serializer.data)

        books = Book.objects.all()
        serializer = BookSerializer(books, many=True)
        return Response(serializer.data)

    # POST /books/
    def post(self, request):
        serializer = BookSerializer(data=request.data)

        if serializer.is_valid():
            serializer.save()
            return Response(
                serializer.data,
                status=status.HTTP_201_CREATED
            )

        return Response(
            serializer.errors,
            status=status.HTTP_400_BAD_REQUEST
        )

    # PUT /books/<id>/
    def put(self, request, book_id):
        book = get_object_or_404(Book, id=book_id)
        serializer = BookSerializer(book, data=request.data)

        if serializer.is_valid():
            serializer.save()
            return Response(serializer.data)

        return Response(
            serializer.errors,
            status=status.HTTP_400_BAD_REQUEST
        )

    # DELETE /books/<id>/
    def delete(self, request, book_id):
        book = get_object_or_404(Book, id=book_id)
        book.delete()
        return Response(status=status.HTTP_204_NO_CONTENT)

You will notice that the serialize_response() method is no longer needed; serialization is now handled by BookSerializer.

The URLs stay the same; we just change BookView to BookSerializerView.

path("books/<int:book_id>/", BookSerializerView.as_view(), name='book-detail'),
path("books/", BookSerializerView.as_view(), name='book-list')

GenericAPIView and Mixins

To reduce boilerplate further, DRF provides GenericAPIView along with mixins that implement common CRUD behavior. GenericAPIView adds support for:

  • queryset
  • serializer_class
  • pagination_class
  • filter_backends

When combined with mixins, it provides reusable, well-tested implementations of CRUD operations, resulting in cleaner and more maintainable code.

List & Create View

class BookListCreateView(
    mixins.ListModelMixin,
    mixins.CreateModelMixin,
    generics.GenericAPIView
):
    queryset = Book.objects.all()
    serializer_class = BookSerializer

    def get(self, request, *args, **kwargs):
        return self.list(request, *args, **kwargs)

    def post(self, request, *args, **kwargs):
        return self.create(request, *args, **kwargs)

Retrieve, Update & Delete View

class BookDetailView(
    mixins.RetrieveModelMixin,
    mixins.UpdateModelMixin,
    mixins.DestroyModelMixin,
    generics.GenericAPIView
):
    queryset = Book.objects.all()
    serializer_class = BookSerializer
    lookup_url_kwarg = "book_id"

    def get(self, request, *args, **kwargs):
        return self.retrieve(request, *args, **kwargs)

    def put(self, request, *args, **kwargs):
        return self.update(request, *args, **kwargs)

    def delete(self, request, *args, **kwargs):
        return self.destroy(request, *args, **kwargs)

DRF requires explicit HTTP method definitions (get, post, etc.) so it knows how to dispatch requests. These methods act as a bridge between HTTP verbs and mixin-provided actions. The URL configuration is updated as follows:

path("books/<int:book_id>/", BookDetailView.as_view(), name='book-detail'),
path("books/", BookListCreateView.as_view(), name='book-list')

Author: Karen
Categories: TECH
Tags: Terraform Docker AWS

In previous posts we explored how to create an EC2 instance using Terraform. However, to make the application production-ready, a few more important steps are still needed. We need to:

  • Create an Elastic IP and attach it to the instance
  • Register a domain (preferably via Route 53) and point it to the Elastic IP
  • Create Route 53 records, such as yourdomain.com and www.yourdomain.com
  • Launch the instance with a predefined script for installations (optional)
  • Configure HTTPS
  • Update the nginx container and default.conf to handle HTTPS traffic

Elastic IP

An Elastic IP can be thought of as a static IP, and it is better to point your domain to the Elastic IP rather than to the public IP, because the public IP can change when the instance restarts. Using an Elastic IP ensures that your application remains accessible at the same address even if the EC2 instance is stopped and started again.

In Terraform you can create and attach an elastic IP to your instance like this:

resource "aws_eip" "app_eip" {
  domain = "vpc"

  tags = {
    Name = "app-eip"
  }
}

resource "aws_eip_association" "app_eip_assoc" {
  instance_id   = aws_instance.web.id
  allocation_id = aws_eip.app_eip.id
}

Now that the Elastic IP is created and attached to the instance, it is time to get a domain. Assuming the domain is purchased using AWS Route 53, you can reference it in Terraform and point it to your Elastic IP. Managing DNS records through Terraform also allows your infrastructure and configuration to remain fully reproducible.

Domain Handling

variable "domain_name" {
  description = "Root domain name"
  type        = string
}

data "aws_route53_zone" "zone" {
  name = var.domain_name
  private_zone = false
}

resource "aws_route53_record" "root" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = var.domain_name
  type    = "A"
  ttl     = 300
  records = [aws_eip.app_eip.public_ip]
}

resource "aws_route53_record" "www_root" {
  zone_id = data.aws_route53_zone.zone.zone_id
  name    = "www.${var.domain_name}"
  type    = "A"
  ttl     = 300
  records = [aws_eip.app_eip.public_ip]
}

var.domain_name is simply a variable defined in terraform.tfvars, for example:

domain_name = "yourdomain.com"

This approach keeps your configuration flexible across environments. If your domain is unlikely to change, you can also hardcode it directly inside the Terraform file instead of using terraform.tfvars.
 
HTTPS Configuration
 
Once the EC2 instance is up and running (you can refer to Creating an EC2 Instance in the Default VPC Using Terraform for the complete Terraform configuration) it is time to configure HTTPS. Securing your application with HTTPS is critical, as it encrypts traffic between the client and the server and is expected by modern browsers.
On the instance, run the following commands:
  • sudo yum install certbot python3-certbot-nginx
  • sudo certbot certonly --standalone -d yourdomain.com -d www.yourdomain.com

After the certificates are generated, verify that /etc/letsencrypt/live/ exists on the host and contains the expected certificate files. These certificates will later be mounted into the nginx container.
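Note that Let's Encrypt certificates are valid for 90 days, so renewal should be automated. A sketch of a root crontab entry, assuming certbot is on the PATH (the restart step uses the nginx container name from this setup; certbot renew only replaces certificates that are close to expiry, so running it daily is safe):

```
0 3 * * * certbot renew --quiet && docker restart nginx
```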

Now we need to modify nginx's default.conf and later the container to properly handle HTTPS traffic.

In default.conf we define two server blocks. Port 80 now redirects traffic to port 443, ensuring that all requests use HTTPS. Port 443 is configured to handle secure traffic using the certificates generated by Certbot.

The previously generated certificate files are referenced via the ssl_certificate directives.

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=10r/s;

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    client_max_body_size 10M;
    server_name yourdomain.com www.yourdomain.com;

     # SSL certificate files
     ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
     ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;

    location /static/ {
        alias /app/static/;
        expires 1d;
        add_header Cache-Control "public";
    }

    location / {
        limit_req zone=mylimit burst=10 nodelay;
        proxy_pass http://web:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

This setup is similar to the one described in Gunicorn and Nginx Setup for Serving the Web App, with the key difference being the addition of redirecting, HTTPS support and certificate handling.

Updating the Nginx Container

Now it is time to modify the nginx container. Since the SSL certificates are generated on the host, they must be mounted into the container so Nginx can access them.

In Gunicorn and Nginx Setup for Serving the Web App post's setup, the container only exposed port 80. We now expose port 443 as well and mount the /etc/letsencrypt directory as read-only. The updated Docker Compose configuration looks like this:

nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - "80:80"
      - "443:443"
    depends_on:
      - web
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt:ro
      - ./src/nginx/conf.d:/etc/nginx/conf.d
      - ./src/nginx/logs:/var/log/nginx
      - ./src/staticfiles:/app/static
    restart: always
    networks:
      - myproject-net

Assuming your web container is running on port 8000 and the nginx container is also successfully running, opening yourdomain.com or www.yourdomain.com in the browser should now securely serve the web app over HTTPS.