A Django project template for a RESTful Application using Docker

I used what I've learned, plus a few decisions of my own, to create a template for new projects. A big part of software development is plumbing: laying bricks together and connecting parts so the important bits of software can be accessed. That's a pretty important part of the work, but it can be quite tedious and frustrating.

This is, in some ways, a very personal work. I am applying my own opinionated ideas to it, but I'll explain the thought process behind them. Part of the idea is to add to the discussion on how a containerised Django application should work and what basic functionality is expected of a fully production-ready service.

The code is available here. This blog post covers more the why’s, while the README in the code covers more the how’s.

GO CHECK THE CODE IN GITHUB!


Summary

  • a Django RESTful project template
  • sets up a cluster of Docker containers using docker-compose
  • includes niceties like logging, system tests and metrics
  • extensively commented code
  • related blog post explaining the whys and opening the discussion on how to work with Docker

Docker containers

Docker has been getting a lot of attention in the last few years, and it promises to revolutionise everything. Part of that is just hype, but it is nevertheless a very interesting tool. And as with every new tool, we're still figuring out how it should be used.

The first impression when dealing with Docker is to treat it as a new kind of virtual machine. It was certainly my first approach. But I think it's better to think of it as a process (a service) wrapped in a filesystem.

The whole Dockerfile and image-building process is there to ensure that the filesystem contains the proper code and configuration files to then start the process. This goes against daemonising and other common practices of previous environments, where the same server handled lots of processes and services working in unison. In the Docker world, we replace this with several containers working together, without sharing the same filesystem.

The familiar Linux way of working includes a lot of external services and conveniences that are no longer required. For example, starting multiple services automatically on boot makes no sense if we only care to run one process.

This rule has some exceptions, as we'll see later, but I think it's the best approach. In some cases, a single service requires multiple processes, but simplicity is key.

Filesystem

The Docker filesystem works by stacking one layer on top of another. That turns the build into a series of steps, each of which executes a command that changes the filesystem and then exits. The next step builds on top of the previous one.

This has very interesting properties, like the inherent caching system, which is very useful for the build process. The way to structure a Dockerfile is to put the parts least likely to change (e.g. dependencies) at the top, and the ones most likely to need updating at the end. That way, build times are shorter, as only the later steps need to be repeated.
Another interesting trick is to change the order of the steps in the Dockerfile while actively developing, and move them to their final place once the initial setup (packages installed, requirements, etc.) is stable.

Another property is that each of these layers can only add to the filesystem, never subtract. Once a build step has added something, removing it in a later step won't free space in the image. As we want to keep our containers as minimal as possible, each step should be taken with care. In some cases, this means adding something, using it for an operation, and then removing it in a single step. A good example is compilation libraries: add the libraries, compile, and then remove them, as they won't be used again (only the generated binaries will).
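As a sketch (the package names here are illustrative, not taken from the template), this is what the add-use-remove pattern looks like in a single Alpine RUN step:

```dockerfile
FROM alpine:3.6

COPY requirements.txt /
# Install the compilers, build the Python dependencies, and remove the
# compilers again -- all in ONE step, so no layer ever contains them.
RUN apk add --no-cache python3 && \
    apk add --no-cache --virtual build-deps python3-dev gcc musl-dev && \
    pip3 install -r /requirements.txt && \
    apk del build-deps
```

Splitting the `apk add` and `apk del` into separate RUN steps would leave the compilers baked into an intermediate layer, and the image would stay big no matter what later steps remove.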

Alpine Linux

Given this minimalistic approach, it's better to start as small as possible. Alpine is a Linux distribution focused on minimal containers and security. Their base image is just around 4MB. This is a little misleading, as installing Python will bring it to around 70MB, but it's still much better than something like an Ubuntu base image, which starts at around 120MB and very easily gets to 1GB if the image is built in a traditional way, installing different services and calling apt-get with abandon.

This template creates an image around 120MB size.

Running a cluster

A single container is not that interesting. After all, it's not much more than a single service, probably a single process. A critical tool for working with containers is being able to set several of them to work in unison.

The chosen tool in this case is docker-compose, which is great for setting up a development cluster. The base of it is the docker-compose.yaml file, which defines several "services" (containers) and links them all together. The docker-compose.yaml file contains the names, build instructions and dependencies, and describes the cluster.

Note there are two kinds of services. One is a container that runs and ends, producing some result: an operation. For example, running the tests: it starts, runs the tests, and then ends.
The other is a long-running service. For example, a web server: it starts and doesn't stop on its own.
The docker-compose.yaml file contains both kinds. server and db are long-running services, while test and system-test are operations; most of the entries are services.

It would be possible to differentiate them by grouping them into different files, but dealing with multiple docker-compose.yaml files is cumbersome.
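An abridged docker-compose.yaml sketch of the two kinds (the real file in the repo has more services and options) could look like this:

```yaml
version: '2'
services:
  db:                # long-running service
    build:
      context: .
      dockerfile: docker/db/Dockerfile
  server:            # long-running service, exposed on its own port
    build: .
    ports:
      - "8000:80"
    depends_on:
      - db
  test:              # operation: runs the unit tests and exits
    build: .
    command: pytest
```

Nothing in the file itself marks a service as an operation; the difference is simply whether its command terminates.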

The different services defined, and their relationships, are described in this diagram. All the services are described individually later.

docker-template-diagram.png

As is obvious from the diagram, the main one is server. The ones in yellow are operations, while the ones in blue are services.
Note that each service exposes itself on a different port.

Codebase structure

All the files related to building the containers of the cluster are in the ./docker subdirectory, with the exception of the main Dockerfile and docker-compose.yaml, which are in the root directory.

Inside the ./docker directory, there's a subdirectory for each service. Note that, because the image is the same, some services like dev-server or test inherit the files under ./docker/server.

The template Django app

The main app is in the directory ./src and it’s a simple RESTful application that exposes a collection of tweets, elements that have a text, a timestamp and an id. Individual tweets can be retrieved and new ones can be created. A basic CRUD interface.

It makes use of the Django REST framework and it connects to a PostgreSQL database to store the data.

On top of that, there are unit tests stored in the common Django way (inside ./src/tweet/tests.py). To run the tests, it uses pytest and pytest-django. pytest is a very powerful framework for running tests, and it's worth spending some time learning how to use it for maximum efficiency.

All of this is the core of the application and the part that should be replaced to do interesting stuff. The rest is plumbing to make this code run and have it properly monitored. There are also system tests, but I'll talk about them later.

The application is labelled as templatesite. Feel free to change the name to whatever makes sense for you.

The services

The main server

The server service is the core of the system. Though the same image is used for multiple purposes, the main idea is to set up the Django code and make it run in a web server.

This is achieved through uWSGI and nginx. And yes, this means this container is an exception to the single-process rule.

django-docker-server.png

As shown, the nginx process serves the static files, as generated by Django's collectstatic command, and redirects everything else towards a uWSGI process that runs the Django code. They are connected by a UNIX socket.
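A minimal nginx configuration for this split (the paths and socket location here are illustrative) looks something like:

```nginx
server {
    listen 80;

    # Serve the collectstatic output directly from disk.
    location /static {
        alias /opt/server/static;
    }

    # Everything else goes to the uWSGI process over a UNIX socket.
    location / {
        include uwsgi_params;
        uwsgi_pass unix:///tmp/uwsgi.sock;
    }
}
```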

Another decision has been to run a single worker in the container. This follows the minimalistic approach. Also, Prometheus (see below) doesn't play well with round-robin balancing across workers on the same server, as the metrics reported become inconsistent.

It is also entirely possible to run just uWSGI and create another container that runs nginx and handles the static files. I chose not to, because nginx is a better HTTP frontend than uWSGI, the static files would need to be handled externally, and exposing the uWSGI protocol between containers requires some awkward configuration on the nginx side. Keeping both together makes a totally self-contained, stateless web container that has the whole functionality.

The Database

The database container is mainly a PostgreSQL database, but a couple of details have been added to its Dockerfile.

After installing the database, we add our Django code, install it, and then run the migrations and load the configured fixtures, all at build time. This makes the base image contain an empty test database and a pre-generated general database, helping with quick test setup. To get a fresh database, just bring the db container down and restart it. No rebuild is needed unless there are new migrations or new fixtures.

In my experience with Django, as a project grows and migrations are added, running the tests slowly takes more and more time if the database needs to be regenerated and the fixtures loaded again. Even if the --keepdb option is used for tests, sometimes a fresh start is required.

Another important detail is that this database doesn't store data in a persistent volume, only inside the container. It is not meant to work as a persistent database, but to run quickly and be regenerated into a known state with ease. If you need to change the starting data, change the fixtures that get loaded. Only put inside things you are ok with losing.

As part of the setup, notice what happens: the database is started, and then another process, Django's manage.py, loads the fixtures. Then the database is shut down and the container exits. This is one of the cases where multiple processes need to run in a container. The clean shutdown is important, as ending the PostgreSQL process abruptly can lead to data corruption. Normally it would be corrected on the next startup of the database, but that takes a little time. It's better to end the step cleanly.

Logs

Logging events is critical for successful operation in a production environment, yet it's typically done very late in the development process. I try to introduce logging as I run my unit tests and add different information while developing, which helps quite a lot in figuring out what's going on in production servers.

A big part of the pain of logging is properly setting up the routing of the logs. In this case, there's a dedicated log container running syslog, where all the INFO-and-up logs from the server are directed. The collected logs are stored in a file on the container that can be easily checked.

All the requests are also labelled with a unique id, using the Django log request id middleware. The id can also be forwarded through the X-REQUEST-ID HTTP header, and it is returned in the responses. All the logs from a request will include this id, making it easy to follow what the request has done.
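The effect of the middleware can be sketched with a plain logging filter (the class and names here are a simplified stand-in, not the actual implementation of the django-log-request-id package):

```python
import logging
import uuid


class RequestIDFilter(logging.Filter):
    """Attach a per-request id to every log record, imitating what the
    request-id middleware does for real Django requests."""

    def __init__(self, request_id=None):
        super().__init__()
        # Reuse an incoming X-REQUEST-ID header value if given, else generate one.
        self.request_id = request_id or uuid.uuid4().hex

    def filter(self, record):
        record.request_id = self.request_id
        return True


logger = logging.getLogger("templatesite")
handler = logging.StreamHandler()
handler.setFormatter(logging.Formatter("%(request_id)s %(levelname)s %(message)s"))
logger.addHandler(handler)
logger.addFilter(RequestIDFilter(request_id="abc123"))
logger.warning("tweet created")  # logged as: abc123 WARNING tweet created
```

Every line a request emits then carries the same id, so grepping the collected syslog file for one id reconstructs the whole request.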

When running the unit tests, the DEBUG logs are also directed to the standard output, so they'll show up as part of the pytest run. Instead of using print in your unit tests while debugging a problem, try using logging and keep what makes sense. This leaves a lot of useful information around for when a problem arises in production.

Metrics

Metrics are another important part of a successful production service. In this case, they are exposed to a Prometheus container.
It uses the django-prometheus module and exposes a dashboard.

There's also a Grafana container included, called metrics-graph. Note that these two containers are pulled from their official images, instead of using a tailored Dockerfile. The metrics container has some minimal configuration. This is because the only requirement here is to expose the metrics in a Prometheus format; creating dashboards or doing more detailed work on metrics is out of scope.

The good thing about Prometheus is that you can cascade it. It works by fetching the data from our web service (through the /metrics URL), and at the same time it exposes the same URL with all the data it polls. This makes it very easy to create a hierarchical structure where a container picks up information from a few servers and then exposes it to another one that groups the data from several Prometheus containers.

Prometheus' query language and metrics aggregation are very powerful, but at the same time quite confusing initially. The included console has a few queries for interesting Django data.

Handling dependencies

The codebase includes two subdirectories, ./deps and ./vendor. The first one is for your direct dependencies: your own code that lives in a different repo. This allows you to set up a git submodule and use it as an imported module. There's a README file with some tips on using git submodules, as they are a little tricky.

The idea behind this is to avoid pip-installing from a private git repo inside a requirements file, which requires setting up authentication inside the container (adding ssh keys, git support, etc.). I think it's better to handle that at the same level as your general repo, and then import all the source code directly.

./vendor is a subdirectory containing a cache of Python modules in wheel format. The build-deps service builds a container with all the dependencies stated in the requirements.txt file and precompiles them (along with all sub-dependencies) into convenient wheel files. The wheel files can then be used to speed up the setup of other containers. This is optional, as the dependencies will be installed in any case, but it greatly speeds up rebuilding containers or tweaking requirements.
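The consuming side of the wheel cache is a standard pip option; a Dockerfile fragment along these lines (paths illustrative) installs from the local wheels where possible, falling back to PyPI for anything missing:

```dockerfile
# Use the pre-built wheels in ./vendor as a local package source,
# so most dependencies install without downloading or compiling.
COPY vendor /vendor
COPY requirements.txt /
RUN pip install --find-links=/vendor -r /requirements.txt
```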

Testing and interactive development

The container test runs the unit tests. It doesn’t depend on any external service and can be run in isolation.

The dev-server service starts the Django development web server. This reloads the code as it changes, as it mounts the local directory. It also sends the runserver logs to standard output.

The system-test container independently runs tests that generate external HTTP requests against the server. They live under the ./system-test subdirectory and are run using pytest.

Healthcheck

Docker containers can define a healthcheck to determine whether a container is behaving properly, and to take action if necessary. The application includes a URL route for the healthcheck that currently checks whether the database can be accessed. More calls to external services can be added to this view.

The healthcheck pings the local nginx URL using curl, so it also verifies that the routing of the request is correct.
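In Dockerfile terms, such a check is declared along these lines (the route name and timings here are illustrative):

```dockerfile
# Mark the container unhealthy if the healthcheck view stops answering.
HEALTHCHECK --interval=30s --timeout=5s --retries=3 \
    CMD curl -f http://localhost/health/ || exit 1
```

Because the request goes through nginx and uWSGI before reaching the view, a passing check exercises the whole request path, not just the Django process.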

Other ideas

  • Though this project aims at something production-ready, the discussion on how to do it is not trivial. Expect details to need changing depending on the system used to bring the container to a production environment.
  • Docker has some subtleties that are worth paying attention to, like the difference between up and run. It pays to read the documentation carefully and understand the different options and commands.
  • Secrets management has been done through environment variables. The only real secrets in the project are the database password and the Django secret key. Secret management may depend on what system you use in production. Using Docker's native secret support for Docker Swarm creates files inside the container that can be poured into the environment from the start scripts. Something like adding this to docker/server/start_server.sh:

export SECRET=`cat /run/secret/mysecret`

The Django secret key is injected as part of the build as an ARG. Check that it's consistent across all builds in docker-compose.yaml. The database password is stored in an environment variable.

  • The ideal usage of containers in CI would be to work with them locally and, on pushing to the repo, trigger a chain that builds the container, runs the tests, and promotes the build until deployment, probably in several stages. One of the best uses of containers is being able to set them up and not change them at all along the process, something that was never easy with traditional build agents and production servers.
  • All ideas welcome. Feel free to comment here or in the GitHub repo.

Compendium of Wondrous Links vol XI


It has been a while since the last time. More food for thought!

Python

 

The craft of development and tools

 

The environment of development

  • Computers are fast. Even if you see a lot of delays from time to time, they do a lot of stuff each second. Try to guess with this questionnaire.

 

Miscellanea

  • The perfect man. I find the lives of people so driven by competitiveness quite fascinating.
  • Dealing with incentives is always more complicated than it looks. The Cobra effect.

Do not spawn processes on users requests

I've been playing an online game that launched recently, which uses the following idea.

When a user starts a match, it spawns a process in the server that acts as the opponent, generating the actions against the user.

The game had a rough launch, with a lot of problems due to it being played by a lot of people. And, IMHO, a lot of the problems can be traced to that idea.

I can see it's a seductive one. If a user generates an interaction with the service that takes time (for example, a match in this game), spawn a process/thread in the server that generates the responses in "real time". The user is then notified through polling or pushing the information, and can react to it. The process receives the new information from the user and adjusts the responses.

I know it's seductive because I had the same idea once, and I was very lucky to have someone around with more experience who showed me how it would break under pressure. It's not a sane architecture to scale.

Some bad ideas:

  • No limit on processes, meaning the servers can be overwhelmed by context switching. Once you have several thousand processes running on a server, you are in a bad place.
Replication out of control

  • The very definition of state on the server. You need to keep track of processes started on different servers (so no two servers perform the same job). High availability is impossible, as losing one server means destroying the state of all its processes. For scalability, always aim for stateless servers: read all the data, store the resulting data.
  • Start-up times. Each time a process starts, there's some time to boot. This can be a problem if processes are constantly being started and stopped, adding overhead to the system. Even starting a thread is not free (and will probably require starting internal work like connecting to the DB, reading from the cache, etc.).
  • Connection explosion. If each process needs to connect to other parts of the infrastructure (DB, logging, cache, etc.) you can have a problem with the number of connections.
  • Process monitoring. What if a process gets stuck? A request can be cancelled easily by a web server (if a request takes more than X, kill it), but an individual process or thread can be more complicated and require specific tooling.

Alternative: Pool of workers

Generate a defined number of processes that can perform the individual actions that make up a match. Each process gets an action from a queue, executes it, and stores the resulting state. Any process can execute an action for any user.

A group of workers can be very efficient

For example, if a match is a set of 20 actions, each one happening every minute, the start-match request will introduce 20 actions into a queue, to be extracted at the proper time, introducing the proper delay for each action. Note that the queue needs a way of delivering delayed messages, and not every messaging queue can (in particular, RabbitMQ doesn't have good support for it). Beanstalkd or Amazon SQS support it.

Or, alternatively, a single action that ends by inserting the next step into the queue with the adequate delay. The action can be as simple as checking whether it should change something and, if not, ending.

The processes extract the next action from the queue and execute it. Note that this minimises the time a worker waits for a new task. Each worker is active as much as possible whenever any user has a pending task ready to be executed.
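A minimal sketch of the idea, using stdlib threads as the worker pool and a heap as a stand-in for a delayed-delivery queue (a real system would use something like Beanstalkd or SQS; all names here are illustrative):

```python
import heapq
import itertools
import threading
import time


class DelayedQueue:
    """Minimal delayed-delivery queue: actions become visible only
    after their delay has elapsed."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()  # tie-breaker for equal due times
        self._lock = threading.Lock()

    def put(self, action, delay):
        with self._lock:
            heapq.heappush(self._heap,
                           (time.monotonic() + delay, next(self._counter), action))

    def get(self):
        """Return the next due action, or None if nothing is due yet."""
        with self._lock:
            if self._heap and self._heap[0][0] <= time.monotonic():
                return heapq.heappop(self._heap)[-1]
        return None


def worker(queue, results, stop):
    # One of a FIXED pool of workers; this replaces one-process-per-match.
    while not stop.is_set():
        action = queue.get()
        if action is not None:
            results.append(action())  # execute and store the resulting state
        else:
            time.sleep(0.01)  # nothing due yet: idle briefly


queue = DelayedQueue()
results = []
stop = threading.Event()
pool = [threading.Thread(target=worker, args=(queue, results, stop))
        for _ in range(4)]
for t in pool:
    t.start()

# "Start match": enqueue the match's actions with increasing delays.
for step in range(3):
    queue.put(lambda s=step: "step-%d done" % s, delay=0.05 * step)

time.sleep(0.5)  # let the due actions be picked up
stop.set()
for t in pool:
    t.join()
```

The pool size is fixed up front, so load shows up as queue latency (which can be measured and acted on) rather than as an unbounded number of processes.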

The number of processes is limited, so you won't have an explosion. You can test the system and get a good idea of the limit, the point where your throughput is no longer good enough to execute the actions within a reasonable delay, so you can stop users from starting a new match. This is a better fallback than allowing everyone to start one and then not giving a good experience.

A priority queue can be put in place, in that case, to inform the user: "You will be able to start your match in ~3 minutes".

Or you can add more processes/servers to increase the throughput in a predictable manner.

Alternative: Whole match pregeneration

Another alternative is to generate the whole set of actions up front and return them in one go, displaying them at the proper times on the client side. If any adjustment is required due to the actions of the user, redo all the results from that time on.

This match is proceeding as I have foreseen it

For example, a match starts, and the server returns the 20 server actions to the client, which shows them to the user one each minute. In the 3rd minute, the user performs an action, which makes the server recalculate the remainder of the match and return another 17 actions. This is a good strategy if generating actions in advance is possible and few interactions from the user are expected.
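A sketch of this approach, with hypothetical match logic (a random move per minute); the only real requirement is that regeneration from any given minute is deterministic and cheap:

```python
import random


def generate_actions(seed, start=0, total=20):
    """Pregenerate the server's actions from minute `start` to the end
    of the match. The match logic here (random moves) is illustrative."""
    rng = random.Random(seed * 1000 + start)  # deterministic per (seed, start)
    return [{"minute": minute, "move": rng.choice(["attack", "defend", "wait"])}
            for minute in range(start, total)]


# Match starts: the client receives all 20 actions in one go.
actions = generate_actions(seed=42)

# Minute 3: the user intervenes, so the server recomputes minutes 3..19
# and sends the remaining 17 actions back to the client.
recomputed = generate_actions(seed=42, start=3)
```

The server holds no running process per match; it only recomputes on the rare occasions the user actually acts.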

The bottom line

The key word here is stateless. It is a basic component of a scalable system, and it's always worth keeping in mind when designing a system to be used by more than a couple of users.

Compendium of Wondrous Links vol IX


Welcome back to this totally non-regular compilation of interesting reads. Enjoy!


 

 

Do you want to see the whole series?

Compendium of Wondrous Links vol VII


Here we go again… This time I'm loosely grouping them; it has been a while and there are so many things!

  • An extremely thought-provoking article about how seemingly positive things can have a negative effect: How Perks Can Divide Us. Corporate culture is something extremely complex.
  • More about culture and biases in Mirrorcracy.
  • I do not like the idea that people have a "level", ignoring all the dynamics (I talked about this here). There are no B players.
  • A set of articles about management skills. It's a great compilation of the different aspects of management. A great read even if you're not "an official manager", as it is essential to understand what a (good) manager should do and the challenges that the role presents.
  • Some ideas about productivity. I like this quote a lot: "There is no one secret to becoming more productive, but there are hundreds of tactics you can use to get more done"
  • We tend to idealise the work involved in stuff we really like, in particular creating video games. It's okay not to follow your dreams, but I think it applies to a lot of other areas too.

Dragon's Lair

 

 

Future as a developer and the ever changing picture

A few weeks ago I came across a couple of articles by Marco Arment sharing the theme that the current pace of accelerated change within the development community is stressful, and that it's difficult to stay up to date. After all, one gets tired of learning a new framework or language every six months. It gets to a point where it's not fun or interesting anymore.

It seems like two different options are presented as available to developers after some time:

  • Keep up, meaning that you adopt rapidly each new technology
  • Move to other areas, typically management

Both are totally valid options, though I've already said in this blog that I don't like it when good developers move to different areas (to me it's like a surgeon deciding she has had enough after a few years and moving on to manage the hospital). Obviously, each person has every right to choose their own career path.

But I think it's mostly based on a biased and incorrect view of the field of technology and the real pace of change.

In recent years, there has been an explosion of technologies, in particular for the web. Ruby on Rails almost feels like it was introduced at the same time as COBOL. NodeJS seemed to be in fashion for a while. The same with MongoDB or jQuery.

We all know that being stressed is not a great way to learn

In the last 6 or 7 years there has been an incredible explosion in terms of open source fragmentation. Probably because of GitHub (and other online repos) and the increase in communication through the Internet, the bar for creating a web framework and offering it to the world has been lowered so much that a lot of projects that previously wouldn't have been exposed have gotten more exposure. The overall effect is positive, but it comes with the negative effect that every year there is a revolution in technologies, which forces everyone to catch up and learn the brand new tool that is best for the current development, increasing the churn of buzzwords.

But all this is nothing but an illusion. We developers tend to laugh at the common "minimum 3+ years of experience in Swift", but we still get the notion that we should be experts in a particular language, DB or framework from day one. Of course, in the one in demand today, or we are just outdated dinosaurs who should retire.

Software development is a young field, full of young people. That's great in a lot of aspects, but we need to appreciate experience, even if it comes from using a different technology. It doesn't look like it, but there are still a lot of projects done in "not-so-fancy" technologies. That includes really old stuff like Fortran or COBOL, but also C++, Java, Perl, PHP or Ruby.

Technologies get established by a combination of features, maturity, community and a little luck. But once they are established, they're quite resilient and don't go away easily. They stay useful for quite a long time. Right now it's not that difficult to pick a tool that is almost guaranteed to be around for the next 10-15 years. Also, most of the really important stuff is totally technology agnostic: things like writing clean code, structure, debuggability, communication, teamwork, transforming abstract ideas into concrete implementations, etc. That simply does not go away.

Think about this: iOS development started in 2008. Smartphones are radically different beasts from the ones available 6 years ago, probably the environment that has changed most. The basics are the same, though. And even though Swift was introduced this year, it's based on the same principles. Every year there have been tweaks, changing APIs, new functionalities. But the basic ideas are still the same. Today a new web development using LAMP is totally viable. Video games still rely on C++ and OpenGL. Java is still heavily used. I use ideas mainly developed in the 70s, like the UNIX command line or Vim, all the time.

Just because every day we get tons of news about new startups building applications on new paradigms, that doesn't mean they don't coexist with "older" technologies.

Of course, there are new tricks to learn, but it's a day-by-day additive effort. Real revolutions and changes of paradigm are rare, and normally not a good sign. Changing from MySQL to PostgreSQL shouldn't be considered a major career change. Seeking a certain stability in the tools you use should be seen as a good move.

We developers love to stress the part about learning something new every day and constantly challenging ourselves, but that should be kept in perspective by allowing time to breathe. We've created a lot of pressure on ourselves in terms of constantly pushing new ideas, investigating side projects and devoting ourselves 100% of the time to software. That's not only unrealistic. It's not good.

You only have to breathe. And just focus on doing good work and enjoying learning.

The amazing forgiveness of software

One of the things I like most about developing software is the fact that you can recover from most mistakes with very little long-term impact.

Bugs are unavoidable, and most people involved in programming deeply understand that they are something we all live with. So there are no hard feelings; once you find a bug, you fix it and immediately move on. Not only does no one think you're a bad developer because you write bugs, but typically the impact of a bug is not that problematic.

Yes, there are some bugs that are just terrible. And there's always the risk of losing data or doing some catastrophic operation on production. But those are comparatively rare, and with proper practices, the risk and damage can be reduced. Most day-to-day operations involve mistakes with a much more limited effect. Software development is more about being bold and moving fast to fix your mess than it is about playing safe (within limits, of course).

This can affect production data! Display warning sign!

Because the greatness of software is that you can break it on purpose, watch it explode, and then fix the problem. In a controlled environment. Without having to worry about permanent effects or high costs. And a good test is one that ambushes the code and tries to viciously stab it with a poisonous dagger. The one that can hurt. So you know that your system is strong enough against that attack. And then you iterate. Quickly. It's like having a permanent second chance to try again.

Not every aspect of life is that forgiving. I guess a lot of doctors would love to be able to do the same.

Do you mind if the jury forgets everything and we start the trial again, your honor?