This is the fourth part of the Docker security article. In this part, we will look at how to control the resource usage of Docker containers to ensure their stable performance and security. Learn about resource monitoring and throttling techniques that will help you optimize the performance of your applications and prevent potential threats.
A component of system security, alongside integrity and confidentiality, is ensuring the availability of systems and data. When it comes to containers, it is very important to control how much of the Docker host's resources each container can use. In Linux, this control is possible through a mechanism called cgroups (control groups). It is a Linux kernel mechanism used to limit, isolate, and monitor the system resources used by processes or groups of processes. Control groups have numerous subsystems responsible for managing various system resources and aspects, including the following (a quick inspection sketch follows the list):
blkio – controls access to block I/O devices, allowing you to monitor and limit I/O bandwidth,
cpu – manages CPU time allocation,
cpuset (cpus) – allows you to assign a group to specific CPUs,
devices – controls access to devices by process groups,
memory – controls memory usage by groups of processes, allowing you to limit and isolate it,
pids – controls the number of processes in a group, which allows you to limit the maximum number of processes.
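Before we get to Docker's flags, it may help to see where the kernel exposes these limits. A minimal inspection sketch, assuming a host using cgroup v2 with the systemd cgroup driver (under cgroup v1 or the cgroupfs driver the paths differ):

# Start a throwaway container with a 64 MB memory limit
CID=$(docker run -d --memory=64m ubuntu:22.04 sleep 300)
# memory.max holds the limit in bytes; "max" would mean unlimited
cat /sys/fs/cgroup/system.slice/docker-${CID}.scope/memory.max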
Two scenarios may be helpful to illustrate this concept. First, if an application running in a container is attacked and attackers install, for example, a cryptocurrency miner, all server resources can be exhausted.
Second, there is a risk of vulnerability to Denial of Service (DoS) attacks (not to be confused with Distributed Denial of Service or DDoS attacks). An example would be a ReDoS attack, which can lead to resource exhaustion and system downtime.
The constraints we choose can be defined when the container is launched. Importantly, resource usage limits can also be changed for containers that are already running. The most useful parameters include:
--memory= (or -m) – sets the upper limit of RAM that the container can allocate (for example, --memory=32m means a limit of 32 MB of memory),
--memory-swap= – sets the swap memory limit,
--cpus= – defines the maximum level of CPU usage. For example, if the Docker Host has 1 core, setting --cpus=0.5 will limit CPU usage to 50%,
--pids-limit= – defines how many processes can run in the context of a particular container (e.g., --pids-limit=5 means no more than 5 processes can run in the container).
A complete list of options is available in the Docker documentation.
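Putting it all together, these limits can be combined in a single docker run invocation. A minimal sketch (the container name and values here are arbitrary, chosen only for illustration):

# Hypothetical example: start a container with memory, CPU, and process limits
docker run -itd --name ubuntu-limits-demo \
  --memory=128m --memory-swap=256m \
  --cpus=0.5 --pids-limit=10 \
  ubuntu:22.04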
It’s time for a practical test. We’ll start a new container (ubuntu-limits) in which we’ll install the stress-ng package (Listing 36, Figure 60). Stress-ng is a tool used to load-test an operating system in a variety of ways, including stressing the CPU, RAM, disks, and other areas where resources may be constrained. It is an improved version of the stress tool that offers more complex and varied testing options. This allows accurate and flexible load generation to evaluate how the system responds under pressure, which is particularly useful for identifying performance issues and investigating system stability.
# Commands to be executed in the context of the Docker Host
docker run -itd --name ubuntu-limits ubuntu:22.04
docker exec -it ubuntu-limits bash

# Commands to be executed in the context of the ubuntu-limits container
apt update && apt install -y stress-ng
Listing 36. Starting a new container and installing the stress-ng package.
Let’s check that the package is installed correctly using the stress-ng --version command. For convenience, we will also open a second terminal window in which we will monitor the level of system resource usage during the stress tests.
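In that second window, the docker stats command is the simplest way to watch usage live:

# Run on the Docker Host; streams CPU, memory, and PID usage for the container
docker stats ubuntu-limits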
Let’s start with a test that exercises the processor. Before we do that, however, we need to know how many CPU cores are available to our container. We can get this information using the nproc command (Fig. 61).
Why is this information important? We’ll pass stress-ng the --cpu-load parameter, which specifies what percentage of load stress-ng should put on the CPU. The second option we will provide is --cpu, which specifies the number of worker processes to use. So, if we set --cpu to 1 and --cpu-load to 100, one CPU core will be fully used, which the docker stats command will display as a value close to 100 in the CPU % column. However, if we change the value of the --cpu option to 2 while keeping --cpu-load unchanged, the docker stats output should show a value close to 200% (meaning two cores are each being used at 100%).
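Inside the container, the first variant of the test might look like this (a sketch; the 60-second timeout is an arbitrary choice):

# One worker process loading one CPU at 100%
stress-ng --cpu 1 --cpu-load 100 --timeout 60s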
In addition, during this test we will try to set the limit to 150% using docker update ubuntu-limits --cpus=1.5 and see whether this is reflected in the measurement results (Video 1).
Video 1. Stress test results.
Perfect! Everything is going according to plan.
Now let’s set the second limitation related to RAM. We can do this by running the command in Listing 37 at the Docker host level.
docker update ubuntu-limits --memory=128m --memory-swap=256m
Listing 37. Setting memory limits.
Running the docker stats command confirms that the 128 MB limit we set has been applied (Figure 62).
Another quick test (with increased limits; Listing 38):
docker update ubuntu-limits --memory=256m --memory-swap=256m
docker stats --no-stream
Listing 38. The next iteration of the test.
The results match our expectations (Figure 63).
We will conduct two series of tests. The first involves an attempt to allocate memory that does not exceed the set limit (for example, 200 MB). The second will involve trying to allocate more than the previously set limit (e.g., 300 MB, or whatever value you choose).
We run the first test using the command in Listing 39.
stress-ng --vm 1 --vm-bytes 200M --timeout 1m --vm-keep
Listing 39. First iteration of memory tests.
After some time, we will notice that the level of allocated memory stabilizes at about 200 MB (Figure 64).
Now for the second test, which tries to exceed the limit (Listing 40):
stress-ng --vm 1 --vm-bytes 300M --timeout 1m --vm-keep
Listing 40. The second iteration of the test.
Video 2. The second iteration of the test.
As you can see, the memory usage fluctuates. The stress-ng --vm 1 --vm-bytes 300M --timeout 1m --vm-keep command tries to allocate 300 MB of memory, which exceeds the available limit. The memory management in the Docker container therefore has to keep handling this situation so that the set limit is not exceeded.
Let’s do another test. This time we will limit the number of processes that can run “inside” the container. By default, two processes are running in our container (Figure 65).
So let’s set a limit of, say, 5 processes: docker update ubuntu-limits --pids-limit=5.
Now we will try to run several processes in the background, for example top & (Figure 67).
We’ve hit the limit, so effectively we can’t even check the list of running processes.
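If you would like to reproduce this state yourself, a loop along these lines, run inside the container, should trip the limit quickly (the exact error message may vary between shells):

# Spawn background sleeps until fork starts failing against the pids limit
for i in 1 2 3 4 5 6; do sleep 300 & done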
You have now learned the basics of setting limits.
Finally, remember one thing. Purely for demonstration purposes, we set the limits only after the containers were already running. This is not necessary – you can apply these settings when starting the container, for example with the command: docker run -it --pids-limit=5 --name ubuntu-test ubuntu:22.04 bash.
The following recommendation only applies if the Docker Daemon you are using is running on a different machine than the Docker client.
The Docker Daemon will not always run on the same machine as the Docker client. There may be situations where we need to connect from a local workstation to a Docker Daemon running on a remote server.
Communication with the remote Docker Daemon host must be secured, as insufficient connection security can pose a serious risk. The Docker Daemon has full control over the Docker Host operating system (in its default configuration), so remote management without proper safeguards can lead to unauthorized access, data loss, privacy violations, and other security issues. Transmitting data between the client and the Docker Daemon in the clear, without encryption, exposes it to interception and manipulation by unauthorized persons.
SSH and TLS are two widely used methods for securing remote communications. SSH is easy to set up and use, offering strong encryption and authentication. TLS likewise provides strong encryption and authentication, but can be more complex to configure. In return, TLS is more flexible and scalable, which makes it especially common in environments that require managing multiple certificates and authenticating many users or services.
As a first step, let’s try to configure the connection using SSH.
As part of the tests, we create a copy of the virtual machine that serves as our Docker host. This process may consist of different steps depending on the hypervisor you are using. What matters is that, in addition to the original Docker host – in my case an Ubuntu VM with the IP address 172.16.169.183 – a second system running Docker must also be available. In my case, this is a clone of the original machine with the address 172.16.169.186. Both machines must be able to communicate at the network level.
ssh-keygen -t ed25519 -f .ssh/remote-dockerd -q -N ""
# make sure you provide the correct IP address and username!
ssh-copy-id -i .ssh/remote-dockerd.pub [email protected]
Listing 41. SSH key generation.
We have successfully set up a secure connection to the second server using SSH key-based authentication (Listing 41, Figure 69). It’s time to create a new Docker context and establish a connection to the remote server (Listing 42, Figure 70).
docker context show
docker context ls
# make sure to enter the correct IP address
docker context create --docker host=ssh://172.16.169.186 --description="remote dockerd" remote-dockerd
docker ps
docker context use remote-dockerd
docker ps
Listing 42. Creating a new context and connecting to a Docker host remotely.
Success! The docker ps command has just been executed in the context of the second server.
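As a side note, for one-off commands you do not need a context at all; the DOCKER_HOST environment variable achieves the same effect (a sketch, assuming the same address):

# Point the client at the remote daemon over SSH for a single command
DOCKER_HOST=ssh://172.16.169.186 docker ps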
In addition, we can also configure TLS-based connections. To do this, we need to run a few commands at the Docker host level (Listing 43).
export HOST=reynardsec.com
openssl genrsa -aes256 -out ca-key.pem 4096
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem
openssl genrsa -out server-key.pem 4096
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr
hostname -I
echo subjectAltName = DNS:$HOST,IP:172.16.169.183,IP:127.0.0.1 >> extfile.cnf
echo extendedKeyUsage = serverAuth >> extfile.cnf
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf
openssl genrsa -out key.pem 4096
echo extendedKeyUsage = clientAuth > extfile-client.cnf
openssl req -new -key key.pem -out client.csr
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf
Listing 43. Generating the CA, server, and client keys and certificates.
The whole process consists of several steps (a short verification sketch follows the list):
export HOST=reynardsec.com – defines the HOST environment variable with the value “reynardsec.com”,
openssl genrsa -aes256 -out ca-key.pem 4096 – generates a 4096-bit RSA private key, protected with AES-256 encryption and stored in the ca-key.pem file,
openssl req -new -x509 -days 365 -key ca-key.pem -sha256 -out ca.pem – creates a new CA certificate from the ca-key.pem private key, valid for 365 days. The resulting certificate is stored in the ca.pem file,
openssl genrsa -out server-key.pem 4096 – generates a 4096-bit RSA private key and stores it in the server-key.pem file,
openssl req -subj "/CN=$HOST" -sha256 -new -key server-key.pem -out server.csr – creates a new Certificate Signing Request (CSR) from the server-key.pem private key. In this request, the Common Name (CN) is set to the value of the HOST environment variable, and the resulting CSR is stored in the server.csr file,
hostname -I – displays all IP addresses configured on the machine’s network interfaces,
echo subjectAltName = DNS:$HOST,IP:172.16.169.183,IP:127.0.0.1 >> extfile.cnf – adds additional domain names and IP addresses to the extfile.cnf file, to be used as Subject Alternative Names for the server certificate,
echo extendedKeyUsage = serverAuth >> extfile.cnf – specifies that the certificate will be used for server authentication and adds this information to the extfile.cnf file,
openssl x509 -req -days 365 -sha256 -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out server-cert.pem -extfile extfile.cnf – signs the server.csr certificate request with the CA key and certificate, creating a server certificate valid for 365 days, stored in the server-cert.pem file,
openssl genrsa -out key.pem 4096 – generates another 4096-bit RSA private key and stores it in the key.pem file,
echo extendedKeyUsage = clientAuth > extfile-client.cnf – creates an extfile-client.cnf configuration file specifying that the certificate will be used for client authentication,
openssl req -new -key key.pem -out client.csr – generates a new Certificate Signing Request (CSR) from the key.pem private key and stores the resulting CSR in the client.csr file,
openssl x509 -req -days 365 -sha256 -in client.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -out cert.pem -extfile extfile-client.cnf – signs the client.csr certificate request with the CA key and certificate, creating a client certificate valid for 365 days, stored in the cert.pem file.
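Before moving on, it is worth sanity-checking that both certificates really chain back to our CA; openssl can verify this directly:

# Both files should print "OK" if the signatures and chain are valid
openssl verify -CAfile ca.pem server-cert.pem cert.pem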
We noticed that the official Docker documentation omits step number 12, i.e. the openssl req -new -key key.pem -out client.csr command. That step is not skipped here 🙂
Now we will try to start Docker Daemon to listen for TLS secured connections (Figure 71).
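The exact invocation is shown in the figure; based on the files generated above, it should look roughly like this:

# Run on the Docker Host; accepts only TLS-authenticated clients on port 2376
dockerd --tlsverify --tlscacert=ca.pem --tlscert=server-cert.pem --tlskey=server-key.pem -H=0.0.0.0:2376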
It looks like we have successfully started the Docker Daemon. Now let’s try to connect to it from another host. First, we need to securely transfer the ca.pem, cert.pem, and key.pem files to the other host – in my case, the server with the address 172.16.169.186 (Listing 44, Figure 72).
# The command to be executed on the server where we generated the certificates
scp ca.pem cert.pem key.pem [email protected]:~
Listing 44. Transferring keys and certificates to a remote server.
Now it’s time to log into our remote server and try to connect to the Docker host (Listing 45, Figure 73).
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem -H=172.16.169.183:2376 ps
Listing 45. Trying to connect using TLS.
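Typing the --tls* options every time quickly gets tedious. The Docker client also honors environment variables for this (a sketch, assuming ca.pem, cert.pem, and key.pem were moved to ~/.docker):

# Tell the client to verify TLS and where to find its certificates
export DOCKER_HOST=tcp://172.16.169.183:2376
export DOCKER_TLS_VERIFY=1
export DOCKER_CERT_PATH=~/.docker
docker ps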
For demonstration purposes, we ran the Docker Daemon (dockerd) by hand, directly from the command line. However, this is neither convenient nor practical, so we can persist the configuration by editing the daemon.json file (Listing 46).
{ "icc": true, "tlsverify": true, "tlscacert": "/home/reynard/ca.pem", "tlscert": "/home/reynard/server-cert.pem", "tlskey": "/home/reynard/server-key.pem", "hosts": ["tcp://0.0.0.0:2376"] }
Listing 46. TLS configuration using the daemon.json file.
Note that on systemd-based systems the hosts entry in daemon.json conflicts with the -H fd:// flag that the default service unit passes to dockerd, so we also have to adjust the ExecStart line in the docker.service file.
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
Listing 47. The original line in the docker.service file.
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock
Listing 48. The modified line in the docker.service file.
After making these changes, we need to reload the systemd configuration and then restart the Docker service (Listing 49, Figure 75).
sudo systemctl daemon-reload
sudo systemctl restart docker
Listing 49. Reloading the configuration and restarting the service.
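A quick way to confirm that the daemon is actually listening on the TLS port (ss ships with most modern distributions):

# Run on the Docker Host; should show dockerd bound to 0.0.0.0:2376
sudo ss -tlnp | grep 2376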
It’s time to go back to our second server and check TLS connectivity again (Figure 76).
How many times already… success 🙂
In summary, SSH and TLS effectively secure connections with the Docker Daemon. However, TLS may be more suitable for larger, more complex, and dynamic environments.
You can continue working with a remote Docker daemon, but before proceeding it is recommended that you undo these changes, that is, return to an environment where the Docker host and Docker client run on the same machine.
By default, Docker uses the json-file logging driver to store logs from containers. Logs are stored in JSON format in a file located on the host. The default settings let you view a container’s logs using the docker logs command.
The json-file driver creates a JSON file for each container that records all the logs coming from that container. These files are stored in the path /var/lib/docker/containers/<container-id>/ where <container-id> is the ID of the given container.
Logs for a specific container can be viewed by running the docker logs command, for example: docker logs 84167e82e8cf (Figure 77).
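If you want to see exactly which file on disk backs those logs, docker inspect can print it:

# Prints the full path of the container's json-file log
docker inspect --format '{{.LogPath}}' 84167e82e8cf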
From a security perspective, storing logs on the same machine as the running containers is not best practice. It is recommended to send logs to a remote server dedicated to collecting them. Fortunately, Docker supports several drivers that handle log collection, including ones that allow you to send data to a remote server. A list of natively supported drivers is available in the documentation (under the Logging Drivers tab).
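For example, switching the default driver to syslog and pointing it at a remote collector could look like this in daemon.json (a sketch: the address below is a placeholder for your own log server, and the driver can also be set per container with --log-driver):

{
  "log-driver": "syslog",
  "log-opts": {
    "syslog-address": "udp://203.0.113.10:514"
  }
}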