Episode code: 001-02
Series: Building a Development Environment for Web Development
Episode 2: Adapting Docker
Description¶
In this video we construct a Dockerfile for our web application.
Our web application is a simple guest book that allows us to post a message that will be stored in the database along with a timestamp and the name of the web server that handled the request; this will be interesting later when we start making incremental improvements.
The web application only deals with plain text to begin with, just to make things as simple as possible.
Video¶
Instructions¶
You can watch the video above on YouTube (click the thumbnail to open), and follow along below.
You will need to install the following tools if you have not installed them already: Docker,
uv, and curl. If you want to avoid installing these tools, you can use this guide for installing a virtual private server on Linode: VPS Installation
Project Files¶
Let us first take a look at the project files:
The .gitignore file is pretty simple, just making sure we ignore the usual temporary files, etc.
Note
In the example above I included *.swp and *.swo files; these are temporary files that the vim editor stores while working on files.
If you are using a different editor or Integrated Development Environment (IDE) such as VS Code, check your directory for any temporary files you might want to add to your .gitignore file.
We store a provisioning script in dbfiles/provisioning.sql to prepare the database with the single table we need.
CREATE TABLE IF NOT EXISTS book (
    id serial primary key,
    ts timestamp default Now(),
    tx text,
    host text
);
The web application itself is stored in guestbook.py.
import os

import psycopg
from psycopg_pool import ConnectionPool
from flask import Flask, request, make_response

app = Flask(__name__)
dbpool = ConnectionPool()

def get_conn_cm():
    return dbpool.connection()

@app.get("/")
def list_view():
    response_text = ''
    with get_conn_cm() as conn, conn.cursor(row_factory=psycopg.rows.dict_row) as cur:
        cur.execute("SELECT * FROM book;")
        for row in cur:
            response_text += f'{row["id"]}, {row["tx"]}, {row["ts"]}, {row["host"]}\n'
    response = make_response(response_text, 200)
    response.headers["Content-Type"] = "text/plain; charset=utf-8"
    return response

@app.post("/")
def create_view():
    request_data = request.form.to_dict()
    response_text = ''
    with get_conn_cm() as conn, conn.cursor(row_factory=psycopg.rows.dict_row) as cur:
        cur.execute("INSERT INTO book (tx, host) values (%s, %s) RETURNING *;", (request_data['tx'], request_data['host']))
        row = cur.fetchone()
        response_text += f'{row["id"]}, {row["tx"]}, {row["ts"]}, {row["host"]}\n'
    response = make_response(response_text, 201)
    response.headers["Content-Type"] = "text/plain; charset=utf-8"
    return response

@app.delete("/<int:item_id>")
def delete_item(item_id):
    with get_conn_cm() as conn, conn.cursor(row_factory=psycopg.rows.dict_row) as cur:
        cur.execute("DELETE FROM book WHERE id=%s RETURNING *;", (item_id,))
    response = make_response('', 204)
    return response

if __name__ == "__main__":
    web_host = os.getenv('WEB_HOST', '127.0.0.1')
    web_port = os.getenv('WEB_PORT', '3000')
    print(f'Hosting params: {web_host}:{web_port}')
    app.run(host=f'{web_host}', port=f'{web_port}')
We use Flask because it is simple, and it lets us contain the whole application in a single, easy-to-understand file.
We define two endpoints: / that accepts GET requests for listing the guest book entries, and POST requests for adding new messages.
The second endpoint /<id> only accepts DELETE requests.
There are many things that could be improved with this implementation, which will become clear when we start testing the application. But for now, this will serve our purpose.
The application makes use of a connection pool for connecting to the PostgreSQL database, provided by the psycopg-pool package.
This essentially means that the application will hold a set of connections to the database that will be reused for future requests, rather than making new connections for each request.
This is both faster and more efficient.
A very nice detail of the psycopg-pool package is that it can be configured via environment variables, so we do not need messy, hard-coded configuration in the source file, and we do not have to read settings from a file or from environment variables ourselves.
We use environment variables to configure the address and port for the web server, while also providing some sane defaults.
The environment variables are defined in the env-local file shown below.
RTE=dev
POSTGRES_HOST=127.0.0.1
POSTGRES_DB=pgdb
POSTGRES_USER=pguser
POSTGRES_PASSWORD=pgpassword
PGHOST=127.0.0.1
PGPORT=5432
PGDATABASE=pgdb
PGUSER=pguser
PGPASSWORD=pgpassword
The variables beginning with POSTGRES_ are the environment variables expected by the PostgreSQL Docker container we will run for the project.
The variables beginning with PG are the environment variables expected by the psycopg-pool connection pool.
Installing Dependencies¶
Before we can run the project, we need to install the project dependencies.
I will be using the uv tool in this series, but you can also install the packages using pip.
If you do not have uv installed already, you can follow the instructions here: Installing uv
To install the packages using uv we must first create an environment:
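The exact command is not shown in the text, but with uv a new project environment can be created like this (a sketch; the video may use a slightly different invocation):

```shell
# Create a new uv project in the current directory; this writes the
# pyproject.toml file that uv uses to track the project's dependencies.
uv init
```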
Then we can install the packages; this should only take a few seconds, depending on your internet connection.
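The package names below are an assumption based on the imports in guestbook.py:

```shell
# Add the three dependencies the application imports; this creates the
# .venv directory and the uv.lock file.
uv add flask psycopg psycopg-pool
```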
You should now see a .venv directory in your project directory along with two files: pyproject.toml and uv.lock.
The .venv directory should not be checked into git; just check in pyproject.toml and uv.lock, then you can always recreate the environment using the command uv sync.
Starting PostgreSQL on Docker¶
Our project needs a PostgreSQL server for running the database that will store our guest book entries. We will be using Docker to run the database, so if you do not have Docker installed already, please see: Install Docker
Note
If you are on macOS, you can install OrbStack instead of Docker Desktop; OrbStack uses fewer system resources and also lets you run lightweight Linux virtual machines.
Using OrbStack, you can run the same docker commands in the terminal.
Using Homebrew you can install OrbStack with the command brew install orbstack.
If you are using Linux, you can most likely install Docker via your package manager.
To check if Docker is already installed, you can run the command below.
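One way to check, which also works if the docker CLI is missing entirely:

```shell
# Print the Docker version if the docker CLI is on your PATH,
# otherwise report that it is not installed.
if command -v docker >/dev/null 2>&1; then
    docker --version
else
    echo "Docker is not installed"
fi
```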
To start the database using Docker, run this command from the project directory:
docker run --rm --name postgres --env-file ./env-local -p 127.0.0.1:5432:5432 -v ./dbfiles:/dbfiles postgres
On first run, this command might take a little while, as Docker downloads the image for the PostgreSQL database. This happens automatically.
Your terminal will now be occupied running the database server, so you need to start another terminal.
In the video I use tmux to run several terminals, but you can also just open another terminal window.
Before starting the web server, we need to run the provisioning script we introduced earlier. First, we need to open a shell inside the database server.
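The container was started with --name postgres above, so we can open a shell inside it with docker exec (a sketch of the command used in the video):

```shell
# Open an interactive shell inside the running postgres container.
docker exec -it postgres sh
```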
Your shell prompt should now be just #
Now we need to run the command line database client.
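Inside the container, the psql client can connect using the credentials we set via the POSTGRES_ variables in env-local (pguser and pgdb are assumptions taken from that file):

```shell
# Connect to the pgdb database as pguser using the psql client.
psql -U pguser pgdb
```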
You should now see the psql prompt.
From here we can run the provisioning script we listed earlier in the tutorial.
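Since the project's dbfiles directory was mounted at /dbfiles when we started the container, the script can be executed from the psql prompt with the \i meta-command (an assumption based on that volume mount):

```
\i /dbfiles/provisioning.sql
```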
To see if everything went well you can run the \dt command, which lists the tables in the database.
You can also issue SQL commands, e.g.:
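For example, selecting everything from the new table:

```sql
SELECT * FROM book;
```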
The table is obviously empty, but present.
To exit the database command line client, run the command \q
Next, exit the Docker container using the exit command.
Run The Web Application Locally¶
We can now run our web application. For now, we will run the application locally on your machine.
Again, this will occupy your shell, so you will need to open yet another terminal or tmux window.
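One way to start the application with uv, loading the settings from env-local via uv's --env-file flag (a sketch; the video may use a slightly different invocation):

```shell
# Run the application with the environment variables from env-local.
uv run --env-file env-local guestbook.py
```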
We can now make HTTP requests to our web application, and for this we will use curl.
curl -X POST -d "tx=hello%20there&host=somehost" 127.0.0.1:3000
You should see the information returned, and if you run the command below you can list the guest book entries.
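Listing the entries is a plain GET request:

```shell
# List all guest book entries as plain text.
curl 127.0.0.1:3000
```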
You can now shut down the web server by pressing Ctrl+C in its terminal window. Leave the PostgreSQL database running for now, as we will need it later.
Preparing A Dockerfile¶
Finally, we can begin on our Dockerfile.
Note
In Docker terminology, you write a Dockerfile that you build into a Docker Image. You can then run a Docker container based on that Docker image.
We will keep things simple and improve on it in a later video.
FROM python:3.14.3-alpine3.22
ENV PYTHONUNBUFFERED=1
RUN apk add uv
WORKDIR /app
COPY pyproject.toml uv.lock /app/.
RUN uv sync
COPY . /app/.
ENTRYPOINT ["sh", "entrypoint.sh"]
Let us go through the file step by step.
The FROM directive specifies the base image to use.
We will be using a Docker image that already contains an Alpine Linux 3.22 system, and Python 3.14.3.
While we could use the tag :latest to always get the latest version, you probably want to control the versions manually, so changes in new versions will not break your application.
Next, the ENV directive sets the environment variable PYTHONUNBUFFERED to 1.
This ensures that Python will flush the output buffer so we can see the output immediately in the log.
This is important, because the log will be our primary source of information if we need to troubleshoot the system.
The RUN directive then installs the uv package manager using Alpine Linux's apk package manager.
The WORKDIR directive sets the working directory; we will copy our files to this directory, and it will also be the current working directory for our web application.
We then use COPY to add our dependency files pyproject.toml and uv.lock to the working directory /app and install the environment using RUN uv sync.
We then copy the project directory into the Docker image. This will contain some files we do not need, but we will address this issue in a moment.
Note
You might wonder why we copy files in two steps; this is because Docker builds images in layers, and since our dependencies change less frequently than our source files, we can take advantage of these layers for caching by keeping the things that change infrequently at the top of the Dockerfile.
Finally, the ENTRYPOINT is set to execute a shell script when the container is started.
We need to write this entrypoint script, but it will be very simple for now, as shown below.
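A minimal entrypoint.sh, assuming uv is used to launch the application inside the container:

```shell
#!/bin/sh
# entrypoint.sh - start the guest book application via uv.
uv run guestbook.py
```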
For now, the entrypoint script simply starts the guest book app.
This time, we do not need to provide uv with the environment file, as we will be doing this in a different way.
To avoid copying unnecessary files into the Docker image, we will also make a .dockerignore file.
The .dockerignore file is very similar to our .gitignore file, but we are also adding any env- files and the Dockerfile.
This is because while we want these files under version control, we do not need them in the Docker image.
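A sketch of the .dockerignore file under these assumptions (the same patterns as .gitignore, plus the env files and the Dockerfile):

```
.venv
.git
*.swp
*.swo
env-*
Dockerfile
```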
Note
A brief note on env files: these files contain passwords and other sensitive information, and usually you would not want to store them under version control.
However, we will take a different approach when we get to production environments, and there is no problem storing dummy passwords for development and test.
Building the Docker Image¶
We can now build the Docker image.
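From the project directory, the build command looks like this:

```shell
# Build the image from the Dockerfile in the current directory
# and tag it guestbook:latest.
docker build -t guestbook:latest .
```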
This will build the Docker image, tagging it with the name guestbook:latest
The . at the end specifies that Docker should find the Dockerfile in the current (project) directory.
If everything goes well, you should be able to see your Docker image by listing the images.
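Images can be listed with:

```shell
# Show local Docker images; guestbook should appear in the list.
docker image ls
```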
We are now able to use our Docker image.
Running The Docker Image¶
We can now run the guest book application via Docker.
docker run --rm --name guestbook --network=host --env-file ./env-docker -p 127.0.0.1:3000:3000 guestbook:latest
To get the list of guest book entries, run the command curl 127.0.0.1:3000, just like we did when running the guest book application without Docker.
Summary¶
We have now run our guest book application locally and via a Docker container.
While this works, it is not ideal to have to issue all these configuration details each and every time we run the application.
This is where docker-compose enters the picture, as we shall see in the next video in the series.
