Docker is now everywhere. Over the past few years, much of modern software has moved to being packaged in Docker containers, and with good reason.
One of the most touted benefits of Docker containers is their speed. But you don't get lightning-fast performance out of the box; it takes some tuning.
Today, we’re going to discuss some of the tips, tricks, and areas to look into to ensure you are utilizing the real speed of containers. We’ll break down the following into two parts.
Part 1 will cover optimizing the speed of containers before we ship (build-time configuration):
- Keeping your Docker images lightweight
- Improving network latencies
Part 2 will discuss optimizing your containers when they’re running in production:
- Understanding the host/container relationship
- Getting performance data from your containers
- Leveraging APMs for easier performance data
Ready to go?
Before we dive into our Docker performance improvements, we should recap some fundamentals. It’s important to understand the nuances of how Docker works so we can ensure we’re leveraging its powerful features.
Simply put, Docker containers are a way of packaging and distributing software with simple instructions to run. Containers will always run predictably—no matter where you choose to execute them—as isolated and protected processes.
Some key points to remember about containers:
Containers (nearly always) have hosts. Containers need machines to run on, so we can’t throw our containers on any machine and expect them to run optimally out of the box. We need to think about what resources the host has and how it’s sharing these with our containers.
Containers are built from layers. Containers use the union file system. This means, among other things, that each step as you build a Docker image (as part of your Dockerfile) is cached for performance. Importantly though, these cached layers are additive, which means you can only add to them (more on this later).
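To make layer caching concrete, here is a sketch of a Dockerfile for a hypothetical Node.js app (the app name and files are placeholders; the same idea applies to any stack). Ordering instructions from least to most frequently changing maximizes cache reuse:

```dockerfile
FROM node:18-slim
WORKDIR /app

# Copy only the dependency manifests first: this layer (and the slow
# install below it) is served from cache unless package.json changes.
COPY package.json package-lock.json ./
RUN npm ci

# Source code changes often, so copy it last; edits here invalidate
# only this layer, not the cached install above.
COPY . .
CMD ["node", "server.js"]
```

If the dependency install had come after `COPY . .`, every source edit would invalidate the install layer and force a full reinstall on each build.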
The Art of Performance Debugging
Before we move on, when it comes to performance improvements, you should try to stick to the following guidelines:
Optimize the bottleneck. With any performance issue, you should identify where your bottleneck is and optimize only at that point. Optimizing upstream or downstream of the bottleneck won't have an effect on the end user or the consumer of your system.
Be data driven. When it comes to measuring performance, we should strive to be data driven. Gathering hard evidence (numbers) about your system’s behavior before and after you run any performance analysis is essential to being effective with your performance improvement effort.
With the intros done, we can now discuss some fun stuff: making containers super fast.
Part 1: Docker Build-Time Performance
When we work with containers, we’re typically packaging the software we’re working on into a container build. As developers, we run these build steps quite often. Every time we change the software, we’ll want to check that our new artifact is working as expected.
Then, when we’re satisfied that our software works, we’re likely to push our code through a deployment pipeline. With each step of the pipeline, we’ll be building, pushing, and pulling images. And because we’re repeating these build, push, and pull steps so many times, we should pay special attention to our container build steps.
In the next few sections, we'll discuss how to improve the speed with which you go from working software on your local machine to a packaged, distributed, and easily runnable container.
Keeping Your Docker Images Lightweight
When we’re dealing with Docker images, the first step we need to take is typically to build ourselves a Dockerfile. A Dockerfile is simply a set of instructions of how to build our image. A Dockerfile specifies details such as the following:
- Files we should include
- Environment variables we need
- Command(s) to use when running our container
- Installation steps to run
- Networking details (such as exposed ports)
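To illustrate, here is a minimal Dockerfile sketch touching each of those elements, again for a hypothetical Node.js service (the image tag, port, and filenames are assumptions, not prescriptions):

```dockerfile
# Base image to build from
FROM node:18-slim

# Environment variables we need
ENV NODE_ENV=production

# Files we should include
WORKDIR /app
COPY . .

# Installation steps to run
RUN npm ci --omit=dev

# Networking details (exposed port)
EXPOSE 3000

# Command to use when running our container
CMD ["node", "server.js"]
```

Every instruction here produces a layer, which is why the ordering and content of this file matter so much for build performance.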
One part of the Dockerfile setup that has big implications for our build-time performance is the build context. You might be wondering, Why are build contexts so relevant? The answer is that container builds require context.
Context is the set of files required to build your container. For instance, when you run a docker build command, you might have seen the following output:
Sending build context to Docker daemon 2.048kB
Importantly, the output above shows us the size of our specified Docker context. The larger this context, the slower our Docker build is going to be.
Okay, so what do we do if we notice we have a particularly large build context for our container? Well, we can start by listing unneeded files in a .dockerignore file (which excludes them from your build). The usual suspects for a slow build are large asset files, or additional library files that aren't required for your build. Once the image is built, you can easily check its size by running the following:
docker images
This will return output similar to the following:
REPOSITORY         TAG     IMAGE ID   CREATED         VIRTUAL SIZE
hubuser/largeapp   latest  450e3123   5 minutes ago   662 MB
debian             jessie  9a61b6b1   4 days ago      125.2 MB
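As a starting point for the .dockerignore file mentioned above, here is a sketch for a typical web project (the exact entries depend on your stack; these are illustrative):

```
# .dockerignore — keep these out of the build context
.git
node_modules
dist/
coverage/
docs/
*.log
```

Excluding directories like .git and node_modules alone can often shrink the context from hundreds of megabytes to a few kilobytes.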
It's important to mention that, because Docker uses base images and a union file system, if you do need to modify a base image, you can push the modified image to a container registry and then use it as the base for subsequent builds.
Improving Network Latencies
Some aspects of the Docker build process involve the network, often the public internet. If we have large images, our performance issues are magnified by the fact that we're often pushing and pulling those images across the internet.
When you build a Docker image on your machine, Docker checks for the base image you specified. If that base image isn't found locally, Docker will, by default, try to fetch it from Docker Hub, which carries a latency cost.
And it's not just performance that suffers when we rely too heavily on a service like Docker Hub. We also need to consider its availability and the risks of taking a hard dependency on it. If Docker Hub were compromised or went down, our images and software could become inaccessible.
To remedy the latency of pushing and pulling Docker images, Docker lets you run your own registry, located within your organization and on your own infrastructure. Doing so will increase the speed of pushing and pulling images, with the added bonus of extra redundancy in the case of a Docker Hub outage.
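Standing up a private registry is a one-liner using the official registry:2 image (port 5000 is its default); the hostname myregistry.internal and the image name myapp below are hypothetical placeholders for your own:

```shell
# Run a private registry on your own infrastructure.
docker run -d -p 5000:5000 --name registry registry:2

# Re-tag a local image so it points at your registry, then push it.
docker tag myapp:latest myregistry.internal:5000/myapp:latest
docker push myregistry.internal:5000/myapp:latest

# Other hosts on the same network can now pull over the LAN
# instead of the public internet.
docker pull myregistry.internal:5000/myapp:latest
```

Note that a production registry would also need TLS and authentication configured; the sketch above shows only the performance-relevant mechanics.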
Part 2: Docker Runtime Performance
That concludes our discussion about improving our container’s speed prior to deployment. By now, you should have a super lightweight image, and you could even have your own registry for fast download and upload of your built images.
We're now at the point where we'll want to consider how we get our images into production and keep them running fast once there. We'll want to ask ourselves the following questions:
- What host do we want to run our container on?
- How many containers do we want to run per host?
- Do we want to use a container orchestration tool (e.g., Kubernetes)?
- What configuration do we want to set our containers with?
- Can we leverage cloud tooling like AWS Fargate to do our heavy lifting?
Is It Your App, Infrastructure, or Docker?
Throughout this article, we've been discussing strictly how to optimize Docker performance. I should mention now that, often, it isn't Docker that needs optimizing, but rather the infrastructure it's running on or the application running inside the container. You can't fix a poorly designed application just by adding Docker to the equation.
The following are good ways of assessing application performance:
- Using visualization tools, such as Flame Graphs. These tools show you how your software is currently executing and, importantly, how often each function is running and what other functions it’s calling.
- Logging. Application logs are metadata emitted by a running application that indicate your application’s performance. Careful instrumentation with application logs alongside good tooling for viewing and visualizing gives great insight into application performance.
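As a sketch of the kind of log instrumentation described above, here is a minimal Python timing decorator (the function names and payload are hypothetical; any language's logging facility works the same way):

```python
import logging
import time
from functools import wraps

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("app")

def log_duration(func):
    """Log how long each call takes, so slow paths show up in the logs."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = func(*args, **kwargs)
        elapsed_ms = (time.perf_counter() - start) * 1000
        logger.info("%s took %.2f ms", func.__name__, elapsed_ms)
        return result
    return wrapper

@log_duration
def handle_request(payload):
    # Placeholder for real work inside your containerized app.
    return {"status": "ok", "size": len(payload)}
```

Shipping these duration logs to a central viewer gives you per-operation latency data without attaching a profiler to the container.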
Configuring Before You Docker Run
As we said at the start, Docker isn’t necessarily super performant out of the box. Like most software, it comes configured with a set of defaults you can override, usually at the point when you execute a Docker run.
Our Docker configuration is important. In the case of memory, for example, if your host machine runs low, it can start killing processes to recover memory. When running in production, you'll want to ensure you have enough system resources, such as memory, to handle your desired workloads. Most cloud providers can also set triggers (often called scaling rules) that launch or alter machines under certain conditions, such as low memory.
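As an illustration of overriding those defaults at launch time, here is a hedged docker run sketch using Docker's standard resource flags (the image name myapp and the specific limits are placeholders you would size for your workload):

```shell
# Cap the container at 512 MB of RAM and one CPU core. The --memory
# limit keeps one container from starving the host; setting
# --memory-swap to the same value disables swap for the container.
docker run -d \
  --memory=512m \
  --memory-swap=512m \
  --cpus=1.0 \
  --restart=on-failure \
  myapp:latest
```

With explicit limits in place, a misbehaving container gets constrained (or OOM-killed) individually rather than dragging down every other workload on the host.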
Gathering Your Container Metrics
When it comes to measuring our containers’ performance, we’ll need metrics to help us understand our current performance. Luckily for us, Docker gives us some tools to extract data on our running containers (for the purposes of performance debugging).
Note that the following methods are CLI based. Using CLI methods would require you to SSH (or similar) onto your machine to run them. After going through the data sources themselves, we’ll talk about how you can access these metrics in a more scalable manner and make better use of your data.
The Docker Stats Command (Part 1)
First up, the docker stats command.
Docker provides us with a simple command called docker stats to get metrics about our currently running containers. Because docker stats is so easy to use, let’s go through what the docker stats command gives us, and then we can see what we’re able to understand through the data. I’ve broken down the output of the command into two parts so it’s easier to digest. Let’s begin with part one:
CONTAINER ID   NAME             CPU %   MEM USAGE / LIMIT     MEM %
c796e97878c2   yourcontainer1   1.17%   35.71MiB / 1.952GiB   1.79%
2c02afe562b8   yourcontainer2   0.04%   9.344MiB / 1.952GiB   0.47%
0133d95251a1   yourcontainer3   0.00%   6.363MiB / 1.952GiB   0.32%
Take a good look at this output and have a guess at what you think each column is measuring (and why). Now that you're more familiar with the data that docker stats exposes, we can go through these metrics one by one:
- Container ID. This is the unique identifier of our container. Container IDs are useful to pass as a reference to other Docker commands for understanding specific information related to that container. We’ll also need to know our container ID if we want to exec into our container to take a look around.
- CPU %. This is the percentage of the host's CPU being used by the container. Note that because this is measured against the host CPU, the more containers you run on one host, the more they compete for that shared CPU, and the lower each container's share can be, depending on how you configure your containers.
- Memory Usage/Limit. Memory usage is the absolute amount of memory the container is currently using, shown alongside the total memory available to it. MEM % expresses that same usage as a percentage of the limit.
The Docker Stats Command (Part 2)
The docker stats command also produces the following additional outputs:
NET I/O            BLOCK I/O       PIDS
1.76MB / 602MB     532kB / 0B      23
308MB / 3.08GB     147kB / 118MB   9
97.6kB / 596kB     28.9MB / 0B     19
- NET I/O. This is the data being sent and received over the network (network traffic).
- Block I/O. This is the amount of data reading/writing from block devices on the host. If we’re writing/reading lots of data, we might want to consider leveraging other cloud solutions, such as an in-memory cache or an object storage service such as S3.
- PIDS. This is the number of processes or threads created by the running container. Depending on the type of work we're doing, we might want to consider offloading some of this processing into other containers as part of a microservice architecture.
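For scripting or quick checks, docker stats also supports a one-shot, filtered view via its standard --no-stream and --format flags, so you can grab just the columns discussed above without the live-refreshing display:

```shell
# Single snapshot of all running containers, selected columns only.
docker stats --no-stream \
  --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.PIDs}}"
```

This form is handy for piping into cron jobs or ad hoc monitoring scripts before you invest in a full metrics pipeline.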
The Docker REST API
When you've exhausted what you can get from the docker stats command, a good way to obtain additional information is the Docker REST API. The Docker daemon that orchestrates the running of your containers provides a neat little API that produces similar, but much more detailed, information than the docker stats command.
To get started with the REST API, you can call GET /containers/(id)/stats. Due to the large volumes of data you'll get from the REST API, you'll want to pipe your data into a visualization or aggregation tool (more on this in the next section).
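The stats endpoint returns JSON that includes both the current (cpu_stats) and previous (precpu_stats) CPU counters, from which the familiar CPU % figure is derived. Here is a small Python sketch of that calculation, following the delta-based formula Docker's documentation describes (the sample payload below is fabricated for illustration):

```python
def cpu_percent(stats):
    """Derive a docker-stats-style CPU %% from one JSON sample
    returned by GET /containers/(id)/stats."""
    cpu = stats["cpu_stats"]
    precpu = stats["precpu_stats"]
    # Deltas between the current and previous sampling intervals.
    cpu_delta = cpu["cpu_usage"]["total_usage"] - precpu["cpu_usage"]["total_usage"]
    system_delta = cpu["system_cpu_usage"] - precpu["system_cpu_usage"]
    online_cpus = cpu.get("online_cpus", 1)
    if system_delta <= 0 or cpu_delta < 0:
        return 0.0
    # Fraction of host CPU time, scaled by core count, as a percentage.
    return (cpu_delta / system_delta) * online_cpus * 100.0
```

You can fetch a single sample to feed this function with, for example, curl --unix-socket /var/run/docker.sock "http://localhost/containers/&lt;id&gt;/stats?stream=false" on the Docker host.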
What We Must Ask Ourselves About Our Containers
With the above information, we start to build a high-level view of the types of behavior our container might be exhibiting, and we can use the data to make tweaks to our application.
You might want to ask yourself the following questions when viewing this type of data:
- Does my application need a specific resource profile (i.e., lots of memory)?
- Would this processing be better off broken down into different services?
- Can I and should I be leveraging some cloud solutions for part of my workload?
- How many containers do I want to run? Do I need scaling rules?
Getting Advanced With Our Data
Using CLI tools will only get you so far. SSH'ing into your Docker machine to inspect the running processes isn't sustainable, so instead we can use automation and visualization to improve the process. Application performance management (APM) tools can help us out with this problem. By installing tooling on our machines, we can pipe data to a single location for viewing and visualization, and tools like Stackify's Retrace allow us to do that.
Getting the Most Out of Your Containers
In part one, we went through how to ensure Docker build times are low and create your own registry. In part two, we discussed leveraging running container data for performance assessments.
Now you have the tools you need to start digging into understanding your containers’ performance in both build and runtime. You should now be able to start realizing the true power and performance of Docker containers!