Introduction

I’ve recently started learning Django, the popular web development framework for Python. Until now, as a PHP developer, I’ve been rather spoiled with the ease of deploying projects thanks to software such as cPanel or Plesk. Having reached the stage of wanting to deploy my first Django project, I realised that Docker would be essential to work with my existing VPS. This is a guide for how I’ve containerised a Django project that uses Pipenv for package management and includes an NPM build stage for Tailwind CSS.

This approach will work with any Node-based tooling that requires packages to be installed and assets compiled with a build command.
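In a Tailwind setup like mine, npm run build typically maps to a Tailwind CLI command along these lines; the input and output paths below are placeholders, not the project’s real paths.

# Hypothetical "build" script from package.json; adjust the paths to your project
npx tailwindcss -i ./assets/css/input.css -o ./static/css/output.css --minify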

My Considerations

For this to be successful, I need to have a production-ready project. That means I have a few priorities:

  • Keep the final Docker image size to a minimum.
  • Include no non-essential software in the final image.

These requirements greatly influenced how I structured the Dockerfile. You’ll see this come into play through my use of multi-stage builds and a few other tricks for reducing bloat.

Without further ado, let’s jump in.

Dockerfile for Django with NPM

We’re using a Docker feature called multi-stage builds to create an environment in which an NPM command can run to build the static assets. Once the assets are ready for production, we start a new build stage using Python as a base and copy the built files into it, leaving behind all of the Node/NPM bloat that was needed to compile them.

# Stage 1: Use NPM to run build command
FROM node:21 AS node-build

WORKDIR /app

COPY package*.json ./
RUN npm install
COPY . .

RUN npm run build

# Stage 2: Use Python for running the app
FROM python:3.9-slim

WORKDIR /usr/src/django-docker

COPY Pipfile Pipfile.lock ./

RUN apt-get update && \
    # Installing packages for MySQL connections & CRON scheduler
    # This is for my specific Django app, but leaving here for completeness
    apt-get install -y pkg-config python3-dev default-libmysqlclient-dev build-essential cron && \
    rm -rf /var/lib/apt/lists/*

RUN pip install -U pipenv && \
    pipenv install --system

# This can also be ignored, this relates to CRON jobs in this project
COPY jobs/crontab.txt /etc/cron.d/django_jobs
RUN chmod 0644 /etc/cron.d/django_jobs && \
    crontab /etc/cron.d/django_jobs

# Take output files from Stage 1
COPY --from=node-build /app .

EXPOSE 8000
ENTRYPOINT [ "sh", "./entrypoint.sh" ]

The key lines to note in the above example are:

  • Line 2: Use node:21 as the base image and then name the stage node-build.
  • Line 10: Run npm run build to compile the assets for this project.
  • Line 13: Use python:3.9-slim to start a new stage (the -slim variant is used to keep the production image small).
  • Line 34: Copy the output files from the node-build stage into the new stage.
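One file not shown above is entrypoint.sh, which the ENTRYPOINT hands control to. Its contents depend entirely on the project; the sketch below assumes a fairly typical setup where migrations run at start-up, the cron daemon is started for the scheduled jobs, and the app is served by Gunicorn (the project name and the use of Gunicorn are assumptions, not taken from the real project).

#!/bin/sh
# entrypoint.sh - a rough sketch; adapt to your own project

# Apply migrations and collect static files (including the assets from the NPM build)
python manage.py migrate --noinput
python manage.py collectstatic --noinput

# Start the cron daemon so the scheduled jobs installed earlier can run
cron

# Serve the app; Gunicorn is assumed here and would need to be in the Pipfile
exec gunicorn myproject.wsgi:application --bind 0.0.0.0:8000

From there, building and running the image is the usual routine (the tag name is up to you):

docker build -t django-docker .
docker run --rm -p 8000:8000 django-docker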

Why Use This Approach

Multi-Stage Builds are More Reliable

When you use a base image such as python:3.9-slim, as I have above, and you want to install Node and NPM into that container, you have to run a series of commands to update and install packages and dependencies.

While the base image shouldn’t change between builds, the repos from which these packages are installed are subject to daily changes, so the same Dockerfile can produce a different result from one day to the next. By relying on these third-party repos, we’re working against Docker rather than with it: pulling an official, prebuilt Node image gives us a known-good toolchain in a single step. If you’re using Docker, use it to its full potential.
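For comparison, the single-stage alternative would mean something like the following inside the Python image. This is a sketch of what we’re avoiding; note that Debian’s packaged Node is often several versions behind, so in practice you’d usually have to add a third-party repository such as NodeSource on top of this.

# Single-stage alternative: install Node and NPM into the Python image
FROM python:3.9-slim

RUN apt-get update && \
    # Installs whatever Node version Debian currently packages, which changes over time
    apt-get install -y nodejs npm && \
    rm -rf /var/lib/apt/lists/*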

Multi-Stage Builds can Minimise Vulnerabilities

When building Docker containers for production environments, you want them to be as minimal as possible. The more packages and third-party software you have installed, the more potential vulnerabilities you introduce into that isolated environment.

In the case of a full-stack Django application, NPM is needed only to build the assets; once they’re built, it serves no purpose in the running container. Doing it this way, you create the output files you need for Django to serve, and you leave all of the build dependencies and their vulnerabilities behind.

Multi-Stage Builds Create Smaller Images

As I mentioned above, once we’ve built the files, we discard Node and NPM as they’re no longer required. Because of this, their files do not exist in the final image. The result: a smaller image.

Why should you care about your image size? This is more of a bonus than a key selling point of the approach. However, a smaller image makes your containers more portable: smaller transfers improve deployment speeds and save time when pushing or pulling Docker images.
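If you want to see the difference for yourself, compare the sizes locally once the image is built (the tag is whatever you passed to docker build):

docker image ls django-docker
# per-layer size breakdown
docker history django-docker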
