What is Docker?

Docker is a platform that allows you to package your application and all its dependencies into a standardized unit called a container. This container can run consistently on any machine that has Docker installed.

Think of Docker containers like shipping containers in the real world. Just as a shipping container can hold any goods and be moved by any ship, truck, or train, a Docker container holds your application and can run on any computer with Docker installed.

The Problem Docker Solves

Have you ever heard "It works on my machine"? This common problem occurs because environments differ:

  • Different operating systems (Windows, macOS, Linux)
  • Different programming language versions
  • Different library versions
  • Different system configurations

Docker solves this by packaging everything your application needs into a container. The container runs the same way everywhere - on your laptop, your colleague's computer, or a production server.
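As a rough sketch, here is how each of those variables gets pinned in one file (the file names and versions below are illustrative, not a specific project):

```dockerfile
# Every source of "works on my machine" gets pinned in one place:
FROM python:3.11-slim                  # exact OS base and Python version
COPY requirements.txt .                # exact library versions listed here
RUN pip install --no-cache-dir -r requirements.txt
COPY . .                               # your application code
CMD ["python", "app.py"]               # one documented way to start the app
```

Anyone who builds this file gets the same operating system, language version, and libraries, regardless of what is installed on their machine.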

Docker vs Virtual Machines

Virtual Machine:                    Docker Container:
┌─────────────────────┐            ┌─────────────────────┐
│      Your App       │            │      Your App       │
├─────────────────────┤            ├─────────────────────┤
│   Guest OS (Full)   │            │  Container Runtime  │
├─────────────────────┤            │    (Lightweight)    │
│     Hypervisor      │            └──────────┬──────────┘
├─────────────────────┤                       │
│      Host OS        │            ┌──────────┴──────────┐
├─────────────────────┤            │      Host OS        │
│     Hardware        │            ├─────────────────────┤
└─────────────────────┘            │     Hardware        │
                                   └─────────────────────┘

VMs: Heavy, slow to start         Containers: Light, start in seconds
Each VM has full OS               Containers share the host OS kernel

Containers are much lighter than VMs because they share the host operating system's kernel. This makes them faster to start and more efficient with resources.

Key Docker Concepts

Image

A read-only template containing instructions for creating a container. Think of it as a recipe or blueprint.

Container

A running instance of an image. You can create multiple containers from the same image.

Dockerfile

A text file with instructions to build an image. It defines what goes into your container.

Docker Hub

A registry where Docker images are stored and shared. Like GitHub but for Docker images.

Volume

A way to persist data outside of containers. Data in volumes survives container restarts.

Essential Docker Commands

# Check Docker version
docker --version

# Pull an image from Docker Hub
docker pull python:3.11

# List downloaded images
docker images

# Run a container
docker run python:3.11 python --version

# Run a container interactively
docker run -it python:3.11 bash

# Run a container in the background (detached); -it keeps the Python
# REPL's stdin open so the container doesn't exit immediately
docker run -dit --name myapp python:3.11

# List running containers
docker ps

# List all containers (including stopped)
docker ps -a

# Stop a container
docker stop myapp

# Start a stopped container
docker start myapp

# Remove a container
docker rm myapp

# Remove an image
docker rmi python:3.11

# View container logs
docker logs myapp

# Execute a command in a running container
docker exec -it myapp bash

Your First Dockerfile

Let's create a Docker image for a simple Python application:

Project Structure

my-python-app/
├── app.py
├── requirements.txt
└── Dockerfile

app.py

# app.py
from flask import Flask

app = Flask(__name__)

@app.route('/')
def hello():
    return 'Hello from Docker!'

if __name__ == '__main__':
    app.run(host='0.0.0.0', port=5000)

requirements.txt

flask==3.0.0

Dockerfile

# Use an official Python runtime as the base image
FROM python:3.11-slim

# Set the working directory inside the container
WORKDIR /app

# Copy requirements first (for better caching)
COPY requirements.txt .

# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the rest of the application
COPY . .

# Expose the port the app runs on
EXPOSE 5000

# Command to run when the container starts
CMD ["python", "app.py"]

Build and Run

# Build the image
docker build -t my-python-app .

# Run the container
docker run -p 5000:5000 my-python-app

# Visit http://localhost:5000 in your browser!

Understanding Dockerfile Instructions

# FROM - Base image to start from
FROM python:3.11-slim

# WORKDIR - Set working directory (created if doesn't exist)
WORKDIR /app

# COPY - Copy files from host to container
COPY . .

# RUN - Execute commands during build (creates a new layer)
RUN pip install -r requirements.txt
RUN apt-get update && apt-get install -y curl

# ENV - Set environment variables
ENV FLASK_DEBUG=0
ENV DATABASE_URL=postgres://localhost/db

# EXPOSE - Document which port the container listens on
EXPOSE 5000

# CMD - Default command to run (can be overridden)
CMD ["python", "app.py"]

# ENTRYPOINT - Command that always runs (harder to override)
ENTRYPOINT ["python"]
CMD ["app.py"]  # Default argument to ENTRYPOINT
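The difference is easiest to see at `docker run` time. Assuming a hypothetical image named `myimage` built with the ENTRYPOINT/CMD pair above, arguments after the image name replace CMD but are still passed to ENTRYPOINT:

```dockerfile
ENTRYPOINT ["python"]
CMD ["app.py"]

# docker run myimage                    -> runs: python app.py
# docker run myimage other.py           -> runs: python other.py  (CMD replaced)
# docker run --entrypoint bash myimage  -> runs: bash             (ENTRYPOINT overridden)
```

This is why ENTRYPOINT suits the fixed part of the command and CMD suits its default arguments.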

Docker Compose

Docker Compose lets you define and run multi-container applications. Perfect for apps that need a database, cache, or other services.

docker-compose.yml

# docker-compose.yml

services:
  web:
    build: .
    ports:
      - "5000:5000"
    environment:
      - DATABASE_URL=postgresql://user:pass@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    depends_on:
      - db
      - redis
    volumes:
      - .:/app  # Mount current directory for development

  db:
    image: postgres:15
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"

volumes:
  postgres_data:

Docker Compose Commands

# Start all services (Docker Compose V2 uses `docker compose`, without the hyphen)
docker-compose up

# Start in background
docker-compose up -d

# Stop all services
docker-compose down

# View logs
docker-compose logs

# Rebuild images
docker-compose build

# Run a command in a service
docker-compose exec web bash

Docker for Python Development

A production-ready Dockerfile for a Python web application:

# Dockerfile for Production
FROM python:3.11-slim AS base

# Prevents Python from writing .pyc files
ENV PYTHONDONTWRITEBYTECODE=1
# Keeps Python from buffering stdout/stderr
ENV PYTHONUNBUFFERED=1

WORKDIR /app

# Install system dependencies
RUN apt-get update && apt-get install -y \
    gcc \
    libpq-dev \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user for security
RUN adduser --disabled-password --gecos '' appuser

# Install Python dependencies
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY --chown=appuser:appuser . .

# Switch to non-root user
USER appuser

EXPOSE 8000

# Use gunicorn for production
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "--workers", "4", "app:app"]

Docker Volumes: Persisting Data

# Named volume (Docker manages storage)
docker run -v mydata:/app/data myapp

# Bind mount (use host directory)
docker run -v /path/on/host:/app/data myapp

# In docker-compose.yml
services:
  db:
    image: postgres:15
    volumes:
      - postgres_data:/var/lib/postgresql/data  # Named volume
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql  # Bind mount

volumes:
  postgres_data:  # Define named volume

Docker Networking

# List networks
docker network ls

# Create a network
docker network create mynetwork

# Run container on a network
docker run --network mynetwork --name web myapp
# the postgres image requires a password to start
docker run --network mynetwork --name db -e POSTGRES_PASSWORD=pass postgres

# Containers on the same network can communicate by name
# In web container: postgres://db:5432/myapp

# In docker-compose, services are automatically on the same network
services:
  web:
    ...
    # Can connect to 'db' by name
  db:
    ...

Docker Best Practices

  • Use specific image tags: Use python:3.11-slim not python:latest
  • Use slim/alpine images: Smaller images = faster builds and deploys
  • Order Dockerfile for caching: Put rarely changing instructions first
  • Don't run as root: Create a non-root user for security
  • Use .dockerignore: Exclude unnecessary files from the build
  • One process per container: Each container should do one thing
  • Use multi-stage builds: Keep production images small
  • Don't store secrets in images: Use environment variables or secrets management

.dockerignore Example

# .dockerignore
.git
.gitignore
__pycache__
*.pyc
.env
.venv
venv
*.md
.pytest_cache
.coverage
htmlcov

Multi-Stage Builds

Use multi-stage builds to create smaller production images:

# Multi-stage Dockerfile
# Stage 1: Build
FROM python:3.11 AS builder

WORKDIR /app
COPY requirements.txt .
RUN pip wheel --no-cache-dir --no-deps --wheel-dir /app/wheels -r requirements.txt

# Stage 2: Production
FROM python:3.11-slim

WORKDIR /app

# Copy only the wheels from builder stage
COPY --from=builder /app/wheels /wheels
RUN pip install --no-cache-dir /wheels/*

COPY . .

CMD ["python", "app.py"]

# Result: Much smaller production image!

Master Docker with Expert Mentorship

Our Full Stack Python program includes comprehensive Docker training. Learn to containerize Python applications, use Docker Compose, and deploy to production with personalized guidance.

Explore Full Stack Python Program
