A Django project template for a RESTful Application using Docker – PyCon IE slides

I'm just posting here the slides for my presentation at PyCon Ireland, which is a follow-up to this blog post. I'll try to include the video if/when it becomes available.

I hand-drew all the slides myself, which was a lot of fun work!


A Django project template for a RESTful Application using Docker

I used what I learned, and some decisions made along the way, to create a template for new projects. Part of software development is mainly plumbing: laying bricks together and connecting parts so the important bits of software can be accessed. That's a pretty important part of the work, but it can be quite tedious and frustrating.

This is in some ways a very personal work. I am using my own opinionated ideas for it, but I'll explain the thought process behind them. Part of the idea is to add to the discussion of how a containerised Django application should work and what basic functionality is expected from a fully production-ready service.

The code is available here. This blog post covers more of the whys, while the README in the code covers more of the hows.




  • a Django RESTful project template
  • sets up a cluster of Docker containers using docker-compose
  • includes niceties like logging, system tests and metrics
  • extensively commented code
  • a related blog post explaining the whys and opening the discussion on how to work with Docker

Docker containers

Docker has received a lot of attention in the last few years, and it promises to revolutionise everything. Part of that is just hype, but it is nevertheless a very interesting tool. And as with every new tool, we're still figuring out how it should be used.

The first impulse when dealing with Docker is to treat it as a new kind of virtual machine. That was certainly my first approach. But I think it is better to think of it as a process (a service) wrapped in a filesystem.

The whole Dockerfile and image-building process is there to ensure that the filesystem contains the proper code and configuration files to then start the process. This goes against daemonising and other common practices of previous environments, where the same server handled lots of processes and services working in unison. In the Docker world, we replace this with several containers working together, without sharing the same filesystem.

The familiar Linux way of working includes a lot of external services and conveniences that are no longer required. For example, starting multiple services automatically on boot makes no sense if we only care to run one process.

This rule has some exceptions, as we'll see later, but I think it is the best approach. In some cases, a single service requires multiple processes, but simplicity is key.


The Docker filesystem works by stacking one layer on top of another. That turns the build into a series of steps, each of which executes a command that changes the filesystem and then exits. The next step builds on top of the previous one.

This has very interesting properties, like the inherent caching system, which is very useful for the build process. The way to write a Dockerfile is to put the parts that are least likely to change (e.g. dependencies) at the top, and the ones that are most likely to need updating at the end. That way, build times are shorter, as only the latest steps need to be repeated.
Another interesting trick is to change the order of the steps in the Dockerfile while actively developing, and move them to their final place once the initial setup (packages installed, requirements, etc) is stable.

Another property is that each of these layers can only add to the filesystem, never subtract. Once a build step has added something, removing it in a later step won't free space in the image. As we want to keep our containers as minimal as possible, each step should be taken with care. In some cases, this means adding something, using it for an operation, and then removing it in a single step. A good example is compilation libraries: add the libraries, compile, and then remove them in the same step, as only the generated binaries will be used.
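As a sketch of that pattern (package names are illustrative, not the template's actual Dockerfile), the build dependencies are installed, used and removed in a single RUN step, so they never end up in a layer:

```dockerfile
FROM python:3.6-alpine

# Runtime library stays; build tools live and die within one layer
RUN apk add --no-cache postgresql-libs && \
    apk add --no-cache --virtual .build-deps gcc musl-dev postgresql-dev && \
    pip install psycopg2 && \
    apk del .build-deps
```

Splitting the `apk add` and `apk del` into separate RUN lines would keep the compilers in an intermediate layer and inflate the image.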

Alpine Linux

Given this minimalistic approach, it's better to start as small as possible. Alpine is a Linux distribution focused on minimalistic containers and security. Its base image is just around 4MB. This is a little misleading, as installing Python brings it to around 70MB, but it's still much better than something like an Ubuntu base image, which starts at around 120MB and can easily reach 1GB if the image is built in a traditional way, installing different services and calling apt-get with abandon.

This template creates an image of around 120MB.


Running a cluster

A single container is not that interesting. After all, it is not much more than a single service, probably a single process. A critical capability when working with containers is being able to set several of them to work in unison.

The chosen tool in this case is docker-compose, which is great for setting up a development cluster. The base of it is the docker-compose.yaml file, which defines several “services” (containers) and links them all together. The docker-compose.yaml file contains the names, build instructions and dependencies, and describes the cluster.

Note there are two kinds of services. One is a container that runs and ends, producing some result: an operation. For example, running the tests: it starts, runs the tests, and then ends.
The other is a long-running service. For example, a web server: the server starts and doesn't stop on its own.
The docker-compose file contains both kinds. server and db are long-running services, while test and system-test are operations, though most of the entries are services.

It would be possible to differentiate them by grouping them in different files, but dealing with multiple docker-compose.yaml files is cumbersome.
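A minimal docker-compose.yaml sketch of the two kinds of entries (the service names match the post; ports, paths and commands are assumptions, not the template's exact file):

```yaml
version: '3'
services:
  db:                  # long-running service: the PostgreSQL container
    build: docker/db
  server:              # long-running service: nginx + uWSGI + Django
    build: .
    ports:
      - "8000:80"
    depends_on:
      - db
  test:                # operation: runs the unit tests and exits
    build: .
    command: pytest
```

Running `docker-compose up server` keeps the cluster alive, while `docker-compose run test` executes once and returns an exit code.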

The different services defined, and their relationships, are described in this diagram. All the services are described individually later.


As is obvious from the diagram, the main one is server. The ones in yellow are operations, while the ones in blue are services.
Note that all services expose their functionality on different ports.

Codebase structure

All the files that relate to building the containers of the cluster are in the ./docker subdirectory, with the exception of the main Dockerfile and docker-compose.yaml, which are in the root directory.

Inside the ./docker directory, there's a subdirectory for each service. Note that, because the image is the same, some services like dev-server or test inherit the files under ./docker/server.

The template Django app

The main app is in the ./src directory and is a simple RESTful application that exposes a collection of tweets: elements that have a text, a timestamp and an id. Individual tweets can be retrieved and new ones created. A basic CRUD interface.

It makes use of the Django REST framework and it connects to a PostgreSQL database to store the data.

On top of that, there are unit tests stored in the usual Django way (inside ./src/tweet/tests.py). To run them, it makes use of pytest and pytest-django. pytest is a very powerful framework for running tests, and it's worth spending some time learning how to use it for maximum efficiency.

All of this is the core of the application and the part that should be replaced to do interesting stuff. The rest is plumbing to make this code run and have it properly monitored. There are also system tests, but I'll talk about them later.

The application is labelled as templatesite. Feel free to change the name to whatever makes sense for you.

The services

The main server

The server service is the core of the system. Though the same image is used for multiple purposes, the main idea is to set up the Django code and run it in a web server.

The way this is achieved is through uWSGI and nginx. And yes, this means this container is an exception to running a single process.


As shown, the nginx process serves the static files, as generated by Django's collectstatic command, and redirects everything else to a uWSGI process that runs the Django code. They are connected by a UNIX socket.
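The nginx side of that arrangement looks roughly like this (paths and the socket location are assumptions, not the template's exact configuration):

```nginx
server {
    listen 80;

    location /static/ {
        # files generated by `manage.py collectstatic`
        alias /opt/server/static/;
    }

    location / {
        # everything else goes to the Django code over a UNIX socket
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }
}
```

The matching uWSGI configuration just declares the same socket path and points at the Django WSGI module.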

Another decision has been to create a single worker in the container. This follows the minimalistic approach. Also, Prometheus (see below) doesn't cope well with round-robin behind a load balancer on the same server, as the reported metrics become inconsistent.

It is also entirely possible to run just uWSGI and create another container that runs nginx and handles the static files. I chose not to because that creates a single HTTP entry node. Exposing HTTP directly from uWSGI is not as good as doing it with nginx, and the static files would need to be handled externally. Exposing the uWSGI protocol externally is complicated and would require some weird configuration in the nginx frontend. Keeping both together makes a totally self-contained, stateless web container with the whole functionality.

The Database

The database container is mainly a PostgreSQL database, but a couple of details have been added to its Dockerfile.


After installing the database, we add our Django code, install it, and then run the migrations and load the configured fixtures, all of this at build time. This makes the base image contain an empty test database and a pre-generated general database, which helps with quick test setup. To get a fresh database, just bring the db container down and restart it. No rebuild is needed unless there are new migrations or new fixtures.

In my experience with Django, as a project grows and migrations accumulate, running the tests slowly takes more and more time if the database needs to be regenerated and the fixtures loaded again. Even if the --keepdb option is used for tests, sometimes a fresh start is required.

Another important detail is that this database doesn't store data in a persistent volume, just inside the container. It is aimed not at working as a persistent database, but at starting quickly and being regenerated into a known state with ease. If you need to change the starting data, change the loaded fixtures. Only put inside data you are OK with losing.

As part of the setup, notice what happens: the database is started, then another process, Django's manage.py, loads the fixtures. Then the database is shut down and the container exits. This is one of the cases where multiple processes need to run in a container. The clean shutdown is important, as ending the PostgreSQL process abruptly can lead to data corruption. Normally it would be corrected on the next startup of the database, but that takes a little time. It's better to end the step cleanly.
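That build step can be sketched as a short script (the exact commands and the fixture name are assumptions; the pg_ctl flags are standard PostgreSQL):

```shell
pg_ctl start -D "$PGDATA" -w             # start PostgreSQL, -w waits until it accepts connections
python manage.py migrate                 # create the schema
python manage.py loaddata initial_data   # 'initial_data' fixture name is an assumption
pg_ctl stop -D "$PGDATA" -m smart        # clean shutdown, avoiding corruption
```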


Logging

Logging events is critical for successful operation in a production environment, and it is typically done very late in the development process. I try to introduce logging as I run my unit tests, adding different information while developing, which helps a lot in figuring out what's going on in production servers.

A big part of the pain of logging is setting up the routing of the logs properly. In this case, there's a dedicated log container running syslog, to which all the INFO-and-up logs from the server are directed. The collected logs are stored in a file in the container that can be easily checked.

All the requests are also labelled with a unique id, using the Django log request id middleware. The id can also be forwarded through the X-REQUEST-ID HTTP header, and it will be returned in the responses. All the logs from a request include this id, making it easy to follow what the request has done.
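The mechanism behind this is simple enough to sketch without Django: a logging filter that attaches a per-request id to every record. This is an illustration of the idea, not the middleware's actual code:

```python
import logging
import uuid

class RequestIDFilter(logging.Filter):
    """Attach a request id to every log record passing through."""

    def __init__(self, request_id=None):
        super(RequestIDFilter, self).__init__()
        # Use the forwarded X-REQUEST-ID value if given, else generate one
        self.request_id = request_id or uuid.uuid4().hex

    def filter(self, record):
        record.request_id = self.request_id
        return True

handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)s %(message)s"))
logger = logging.getLogger("templatesite")
logger.addHandler(handler)
logger.addFilter(RequestIDFilter("abc123"))
logger.setLevel(logging.INFO)
logger.info("tweet created")   # logged with the "abc123" prefix
```

Grepping the collected logs for one id then reconstructs everything a single request did.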

When running the unit tests, the DEBUG logs are also directed to the standard output, so they show up as part of the pytest run. Instead of using print in your unit tests while debugging a problem, try to use logging and keep what makes sense. This keeps a lot of useful information around for when a problem arises in production.


Metrics

Metrics are another important part of a successful production service. In this case, they are exposed to a Prometheus container.
It uses the prometheus-django module and exposes a dashboard.


There's also a Grafana container included, called metrics-graph. Note that these two containers are pulled from their official images, instead of including a tailored Dockerfile. The metrics container has some minimal configuration. This is because the only requirement here is to expose the metrics in Prometheus format; creating dashboards or doing more detailed work on metrics is out of scope.

The good thing about Prometheus is that you can cascade it. It works by fetching the data from our web service (through the /metrics URL), and at the same time it exposes the data it pulls. This makes it possible to very easily create a hierarchical structure where a container picks up information from a few servers and then exposes it to another one, which groups the data from a few Prometheus containers.
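The first level of that chain is just a scrape configuration pointing Prometheus at the Django service. A minimal prometheus.yml sketch (the target name and port are assumptions matching the compose service names, not the template's exact file):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: django-server
    metrics_path: /metrics
    static_configs:
      - targets: ['server:80']
```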

The Prometheus query language and metrics aggregation are very powerful, but at the same time quite confusing initially. The included console has a few queries for interesting Django data.

Handling dependencies

The codebase includes two subdirectories, ./deps and ./vendor. The first one is for your direct dependencies, meaning your own code that lives in a different repo. This allows you to set up a git submodule and use it as an imported module. There's a README file with some tips on using git submodules, as they are a little tricky.

The idea behind this is to avoid pulling from a private git repo inside a requirements file, which requires setting up authentication inside the container (adding ssh keys, git support, etc). I think it is better to handle that at the same level as your general repo, and then import all the source code directly.

./vendor is a subdirectory containing a cache of Python modules in wheel format. The build-deps service builds a container with all the dependencies stated in the requirements.txt file and precompiles them (along with all sub-dependencies) into convenient wheel files. The wheel files can then be used to speed up the setup of other containers. This is optional, as the dependencies will be installed in any case, but it greatly speeds up rebuilding containers or tweaking requirements.
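In terms of plain pip commands, the mechanism is roughly this (the flags are standard pip; the paths follow the post):

```shell
# build-deps step: precompile every requirement (and sub-dependency) into wheels
pip wheel -r requirements.txt --wheel-dir=./vendor

# other containers: install from the local cache, never hitting the network
pip install --no-index --find-links=./vendor -r requirements.txt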

Testing and interactive development

The container test runs the unit tests. It doesn’t depend on any external service and can be run in isolation.

The dev-server service starts the Django development web server. It reloads the code as it changes, since it mounts the local directory. It also sends the runserver logs to standard output.

The system-test container independently runs tests that generate external HTTP requests against the server. They are under the ./system-test subdirectory and are run using pytest.


Healthcheck

Docker containers can define a healthcheck to determine whether a container is behaving properly, and take action if necessary. The application includes a URL route for the healthcheck that currently checks whether access to the database is correct. More calls to external services can be included in this view.


The healthcheck pings the local nginx URL using curl, so it also tests that the routing of the request is correct.
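In a Dockerfile that looks something like this (the URL path and intervals are assumptions, not the template's exact values):

```dockerfile
# Docker runs this periodically and marks the container unhealthy
# after the configured number of consecutive failures.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost/healthcheck/ || exit 1
```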

Other ideas

  • Though this project aims to provide something production-ready, the discussion on how to do it is not trivial. Expect details to need changing depending on the system used to bring the container to a production environment.
  • Docker has some subtleties that are worth paying attention to, like the difference between up and run. It pays off to read the documentation with care and understand the different options and commands.
  • Secrets management has been done through environment variables. The only real secrets in the project are the database password and the Django secret key. Secret management may depend on what system you use in production. Using Docker's native secret support for Docker Swarm creates files inside the container that can be poured into the environment with the start scripts. Something like adding this to docker/server/start_server.sh:

export SECRET=`cat /run/secret/mysecret`

The Django secret key is injected as part of the build as an ARG. Check that it is consistent across all builds in docker-compose.yaml. The database password is stored in an environment variable.

  • The ideal usage of containers in CI is to work with them locally and, when pushing to the repo, trigger a chain that builds the container, runs the tests and promotes the build until deployment, probably in several stages. One of the best uses of containers is being able to set them up and not change them along the whole process, which was never as easy as it sounds with traditional build agents and production servers.
  • All ideas welcome. Feel free to comment here or in the GitHub repo.

UPDATE: I presented this template in the PyCon Ireland 2017. The slides are available in this blog post.

Django and Rails and Grails, Oh my!

At PyCon Ireland I gave a talk comparing the Django, Ruby on Rails and Grails frameworks… I just forgot to put a link on this blog!

The presentation can be found on Prezi, and there is even a video, if someone wants to make funny comments about my exotic accent 😛 A problem with the projector didn't allow me to display the slides, so I felt a little weird holding the laptop and pointing at the screen, but the people making the video did their homework and show the proper slides in place. Nice!



The original idea was to show the same simple application (a simple posting service) made with the three frameworks, but not being able to use the projector really ruined it. Anyway, the code can be downloaded here, if you want to take a look.

Let me know what you think!

Migrating data to a new db schema with Django 1.2

This week I had to migrate data in a database from an old schema to a new one. The database is part of a Ruby on Rails application, so the changes are part of new features, and we have also taken the opportunity to clean up the database a little and make some changes to be more flexible in the future. As we want to be consistent, we need to migrate all the old data to the new database.

After talking with people with more experience than me in Rails (I have only used it for this project) about how to perform the migration, and as the brand new Django version supporting multiple DBs was released this week, I decided to use the Django ORM to perform it.


My initial idea about multiple database support in Django was that each of the models would have some kind of meta information determining which database it is going to use. So the idea would be to create models for the old database and models for the new database, each with its own meta information.

Well, Django doesn't work exactly this way… You can use that approach if each of the tables in the databases is named differently, because Django is smart enough to know that a table is only in one database; but the problem was that some of the tables we are using keep the same name while changing the way the information is stored.

In fact, the Django approach is more powerful than that, and allows a lot of different techniques, but you have to write some code. The key point is using a ‘router’. A router is a class that, through standardized methods, returns the appropriate database to use when you're going to read, write, make a relationship or sync the db, according to the model performing the operation. As you write those methods yourself, you can basically do whatever you can imagine with the databases. For example, always write to the master database and read from a random (or consecutive) slave database. Or write the models of one application to one database and those of the other three applications to another.

The router class is then added to the settings.py file. You can even define several routers and apply them in order.
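The relevant settings.py fragment looks something like this (database names, engines and the router module path are placeholders, not the project's actual values):

```python
# Two connections: 'default' points at the old schema (read-only user),
# 'new' at the new one.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'old_db',
        'USER': 'readonly',
        'PASSWORD': 'secret',
    },
    'new': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'new_db',
        'USER': 'migrator',
        'PASSWORD': 'secret',
    },
}

# Routers are consulted in order for every database operation
DATABASE_ROUTERS = ['routers.Router']  # 'routers' module path is an assumption
```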

Getting the models

As the database model wasn't designed using Django but Ruby on Rails, I had to connect to the databases (old and new) and let Django discover the models for me. The easy part is generating the first models, just using

python manage.py inspectdb --database=DATABASE

specifying the correct database and storing the results in two different files, one for the old models and another for the new ones (I have called them, in a moment of inspired originality, new_models.py and old_models.py). Then, rename each model to begin with Old or New, so each model name is unique. Then I created a models.py file that imports both, to follow Django conventions. I could also have combined both, but keeping the models in different files makes more sense to me.

Then, as you can imagine, the problems began.

First, there is one table that has a composite primary key. Django doesn't like that, but as that table is new and doesn't need the old data, I have just ignored it and deleted the model.

Another problem is that Rails doesn't declare relationships as, well, relationships. It doesn't create the fields as foreign keys in the database, but just as plain integers, and the code then determines that there are relationships. So, when Django analyzes the database, it determines that those columns are not foreign keys but plain integers. You have to manually change all those integers into foreign keys to the correct table if you want to use the ORM properly. Apparently there are some plugins for Rails to define the relationships as foreign keys in the database.

To add a little confusion, Rails declares the names of the tables as plurals (for example, for a model called ‘Country’, the table will be called ‘Countries’), so the names of the generated models are plural. I'm used to dealing with singular model names in Django, so I tend to use the singular name instead of the plural when using the models, which raises an error, of course. Anyway, you can avoid it by changing the name in the models.

Routing the models

The router is easy: it just routes a model depending on the first letters of the model name. Models beginning with ‘New’ go to the new database and every other model goes to the old (default) database, both for writes and for reads. I have started old models with ‘Old’. So the code is like this:

class Router(object):
    """A router to control all database operations on models in
    the migration. It will direct any model beginning with 'New' to the
    new database and any beginning with 'Old' to the default database."""

    def db_for_read(self, model, **hints):
        if model.__name__.startswith('New'):
            return 'new'
        return 'default'

    def db_for_write(self, model, **hints):
        if model.__name__.startswith('New'):
            return 'new'
        return 'default'

To avoid problems, the access to the old database is done with a read-only database user. That avoids accidentally deleting any data.

Migration script

The migration script imports the Django settings, then reads and combines all the data from the old database and generates the new data for the new database. Using the Django ORM is easy, but there are some problems.

The Django ORM is slooooooooow. Really, really slow, and that's a bad thing for Migrations™ as they usually involve lots of stored data. So there are some ideas to keep in mind:

  • Raw copies of tables can be performed using raw SQL, so try to avoid copying from one table in the old database to the same table in the new database using Django, as it can take lots of time, and I mean LOTS. I began copying a table with about 250 thousand records. Time with Django: over 2 hours. Time dumping in SQL: about 20 seconds.
  • Use manual commits, if the database allows it. It's not a “magical option”, it's still slow, but it can help.
  • As the migration will usually be performed only once, try to work in development with a small subset of the information, or at least import one table at a time and don't recreate it once it's in the new database. When you're happy with your code, you can run it again from the beginning, but it's awful to wait 5 minutes to realize you have a typo on one line, and another 5 minutes to discover the next typo two lines below.

Another thing to keep in mind is that relationships are not shared across databases, so you need to recreate them. For example, imagine we have these two models, where we store comic book characters. The Publisher table is going to keep the same shape, but the character table will now include the secret identity name.

class NewPublisher(models.Model):
    id = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=50)
    class Meta:
         db_table = u'publisher'

class OldPublisher(models.Model):
    id = models.IntegerField(primary_key=True)
    name = models.CharField(max_length=50)
    class Meta:
         db_table = u'publisher'

class NewCharacter(models.Model):
    id = models.IntegerField(primary_key=True)
    publisher = models.ForeignKey('NewPublisher')
    nickname = models.CharField(max_length=50)
    secret_identity = models.CharField(max_length=50)
    class Meta:
         db_table = u'character'

class OldCharacter(models.Model):
    id = models.IntegerField(primary_key=True)
    publisher = models.ForeignKey('OldPublisher')
    name = models.CharField(max_length=50)
    class Meta:
         db_table = u'character'

The publisher table is identical, so the primary keys are the same. Let's say that all the secret identities are going to be “Clark Kent”, so the code to migrate the data will be something like:

for old_character in OldCharacter.objects.all():
    new_character = NewCharacter(nickname=old_character.name,
          secret_identity='Clark Kent',
          publisher=NewPublisher.objects.get(pk=old_character.publisher_id))
    new_character.save()

You cannot use the relationship directly and say publisher = old_character.publisher, because that would try to assign an OldPublisher to a field that expects a NewPublisher. Django will raise an exception, but it's good to keep that in mind. All those checks help in the end to have better control over the data in the new database and ensure that all the data is consistent.


Migrating data from one database to another is always painful. One can argue that it SHOULD be painful, as you're dealing with lots of information that should remain in good shape and should always be handled with respect and caution.

With that in mind, I must say that Django has made it a little less painful. I also think that the multi-db support is quite powerful and can be adapted to several uses. One thing that has always been more complicated than it should be, and now can be quite easy, is migrating from one kind of database to another (from MySQL to Postgres, for example), keeping the data intact.

Anyway, I still think that including some kind of meta information to specify the database (or database behavior) per model could be a good idea.

But, by far, the worst problem is how slow Django is when working with large sets of information. Adding some kind of bulk insert would be a huge improvement to the framework. Yes, you can always read the data from the database using the Django ORM and compose the INSERT statements by hand into a file to then load them, which is several orders of magnitude faster, but the key point of using the ORM should be not having to use SQL.

Deployment of Django project using CherryPy and Cherokee

Recently, we deployed our latest Django project into test production. Choosing which deployment to use isn't easy, as there are a lot of different tools for the job, but as we expect some load on the system, we have spent some time getting a good deployment that will let us be confident as the system grows.

After some research, we decided to use CherryPy and Cherokee.

Why CherryPy?

  • It's pure Python, and easily integrated with Django. You can do it yourself (it's not very difficult), or you can take the easy way and use the django-cpserver tool. We will probably end up customizing django-cpserver to suit our needs, as there are some things that seem to be lacking (like logging support).
  • It’s fast! Also it’s mature and stable. All three are must-have characteristics for any web server.

Why Cherokee?

  • Its admin system. It's really easy to use (instead of the usual .conf file, which can be a headache) and also quite safe, as it is only active when a command is called, with a one-time-only password. It's a great feature!
  • It allows us to build our deployment, described below. Basically, we are using it as an HTTP reverse proxy that balances the load between several CherryPy instances. It also serves our static files.
  • Low memory footprint.
  • Cherokee is able to manage all the CherryPy processes, so you don't have to worry about launching and controlling them. You don't have to daemonize the CherryPy processes either, making the whole process much easier.
  • It's blazingly fast! It's especially good at serving static files. Its speed is in the same league as nginx.

Not everything is great. We have also had some problems, already solved or to be solved in the future.

  • As I said, django-cpserver doesn't allow you to specify logging parameters for CherryPy. We will have to tweak it, although this is not difficult.
  • We had a problem with this configuration with CherryPy 3.1. Occasionally, when you asked for a page in Chrome or IE (but not in Firefox), you would get a blank page and a 408 error (request timeout). This indicates that the connection between the client and the server has been lost, and it happened only when the Keep-Alive option in the Cherokee server was activated. Asking for the page again reloaded the data, as a new connection was opened. Apparently, this was due to a bug in CherryPy that has been corrected in version 3.2 (which is now in Release Candidate state). Version 3.2.0RC1 seems to work perfectly fine.

Our deployment configuration is described in the following diagram.


The static data is served directly by Cherokee, while the Django application is supported by CherryPy. There are several Django-CherryPy processes working, one for each core the server has, to improve response times, avoiding problems with the GIL working over multiple cores, as well as making the architecture much more scalable. Cherokee connects to the CherryPy instances over TCP/IP, which gives quite a lot of flexibility. It's also possible to use a Unix domain socket path, which could be even (a little) faster. As Cherokee allows defining an Information Source as a local interpreter command, you just write a script creating a new CherryPy instance, and Cherokee will manage to set up the processes and kill everything when you stop the server. Cool.

We have also used memcached to improve Django's response times. It's quite easy to use: just install memcached on the machine and python-memcached to let Python access it, and with a few configuration touches in your settings.py file, you're ready. At the moment, as our views won't change often, our approach is to use the @cache_page decorator to cache the view completely. We're not caching the complete site, as that would not cache anything that gets URL parameters, which we've been using for generic views.
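The idea behind @cache_page can be illustrated without Django or memcached: memoise a view's response for a time-to-live, keyed by the request path. This toy sketch is NOT Django's implementation, just the mechanism it provides:

```python
import time
from functools import wraps

def cache_view(ttl_seconds):
    """Toy per-view cache: return a stored response while it is fresh."""
    def decorator(view):
        cache = {}
        @wraps(view)
        def wrapper(path):
            hit = cache.get(path)
            if hit is not None and time.time() - hit[1] < ttl_seconds:
                return hit[0]              # fresh cached response: skip the view
            response = view(path)
            cache[path] = (response, time.time())
            return response
        return wrapper
    return decorator

calls = {"count": 0}

@cache_view(ttl_seconds=900)
def character_list(path):
    calls["count"] += 1                    # counts real renders, not cache hits
    return "rendered page for %s" % path
```

With Django, the real decorator stores the response in the configured memcached backend instead of a local dict, and the cache key includes headers, not just the path.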

The whole architecture is highly scalable. Right now everything is on the same machine, but it can be spread over different machines. Each process (Cherokee, the CherryPy instances and the database) can be separated without much effort. Also, more Django+CherryPy instances can be created. The only part that can be difficult to duplicate is the Cherokee server, but it's unlikely to become the bottleneck of the system, unless the system grows REALLY big.

The system seems to be really fast and we're getting great testing results for the moment. In a couple of weeks the system is supposed to go public, and we expect quite a lot of hits, so performance is quite important; for the moment, we are quite satisfied with our deployment.

PS: For a tutorial on configuring Cherokee and CherryPy, you can check this great post.