I wanted to package my Python application into a Docker container. By doing so, I’d be able to run my application smoothly wherever I wanted to – either on my home server or in a cloud environment.
To ‘containerize’ an application, one needs to write a cookbook – a Dockerfile – describing how the application’s image is built. An image is a single file containing all the dependencies and configuration required to run the application. A container is created when Docker receives a command to run the image – a container is effectively an instance of its image.
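The image/container relationship boils down to two commands; a minimal sketch, assuming the Dockerfile sits in the current directory (the `myapp` tag is my illustrative choice, not from the original setup):

```shell
# Build an image from the Dockerfile in the current directory
docker build -t myapp .

# Running the image creates a container -- an instance of the image;
# --rm removes the container again once it exits
docker run --rm myapp

# Each further "docker run myapp" starts a fresh container from the same image
```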
The first thing to consider is the base operating system. There are various images available on Docker Hub. Alpine Linux is a minimalistic Linux image well suited to Python applications, among many others.
Any missing packages can be installed using the `apk` command.
My example installs the packages required for building the Pillow library, which is used for image manipulation.
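The `pip` step below reads the dependencies from `requirements.txt`; a minimal sketch of that file for this setup (the exact contents are my assumption, not from the original project):

```
# Image manipulation; building it is why gcc, musl-dev, jpeg-dev and zlib-dev
# are installed in the Alpine image
Pillow
```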
The `COPY` command copies files and scripts from the local directory into the image.
The Python environment is set up by running `pip` with the supplied `requirements.txt` file.
When a container is started from the image, the `CMD` instruction launches the Python interpreter with `app.py` as its argument.
```dockerfile
FROM python:3.7-alpine

RUN apk --update add gcc libgcc musl-dev jpeg-dev zlib-dev

COPY . /app
WORKDIR /app

RUN pip install -r requirements.txt

CMD ["python", "app.py"]
```
The application’s file structure, with the Dockerfile and requirements file at the root level and a `src` directory holding the Python files:
```
├── Dockerfile
├── requirements.txt
└── src
```
Because I needed to integrate TensorFlow, which is not available for Alpine Linux, I had to replatform to Ubuntu 18.04.
```dockerfile
FROM ubuntu:18.04

RUN apt-get update
RUN apt-get install -y python3-dev python3-pip gcc

COPY . /app
WORKDIR /app

RUN pip3 install -r requirements.txt

# Download TensorFlow model
WORKDIR /app/src
RUN python3 classify_image.py

CMD ["python3", "tweet.py"]
```