This is the third part of the Docker security article. In this part, we will look at rootless mode, which allows you to run Docker containers without administrator rights. This provides an additional level of protection and reduces the risks associated with potential vulnerabilities. Learn how to properly configure and use rootless mode to increase the security of your containers.
Mitigating the risk of exploiting vulnerabilities in the Docker Daemon and running containers is critical. Docker offers a rootless mode that provides an additional layer of security. The main difference between this mode and the isolation methods described earlier is the lack of root privileges for the Docker Daemon on the Docker Host in “rootless” mode. The entire Docker system then runs in the so-called “user space”.
Sounds impressive, right? You might be wondering why we discussed other methods earlier when rootless mode seems better. Yes, this mode provides a higher level of security, but it is not without limitations. The current list of restrictions can be found in the official Docker documentation.
We won’t go too deep into this, as it’s a dynamic topic that will evolve as Docker develops. However, it is worth noting that currently using “rootless” mode excludes the possibility of using AppArmor (I will talk about this tool in more detail in the next part of the article) and may require additional configuration steps if you plan to run containers with unusual settings. Before you decide to implement the “rootless” mode, read the list of restrictions.
WARNING! We are going to run the dockerd-rootless-setuptool.sh script that is distributed with the official Docker packages. If you installed Docker using a package provided by the developers of your distribution (such as docker.io), this script may not be available and you will have to download it from https://get.docker.com/.
Installing the “Rootless” mode consists of two main steps. First, we need to shut down the currently running Docker Daemon and then restart the server (Listing 23, Figure 38).
sudo systemctl disable --now docker.service docker.socket
sudo reboot
Listing 23. Shutting down the Docker daemon and rebooting.
We then proceed to execute the dockerd-rootless-setuptool.sh script. If you installed the official Docker package, you should find this script in the /usr/bin directory.
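If in doubt, you can quickly check whether the script is available in your $PATH:

# check whether the setup script is installed and where
command -v dockerd-rootless-setuptool.sh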
Before starting the installation, we first need to install the required dependencies. In the case of Ubuntu, this is the uidmap package (Listing 24).
sudo apt install -y uidmap
Listing 24. Installing the uidmap package.
However, before proceeding with the installation of the “Rootless” mode, it is worth paying attention to one important aspect. There is a high chance that trying to start the installation process at this stage will result in an error or warning, the content of which is shown in Figure 39.
This would mean that we have to start the daemon manually every time, using the command shown in the last line of Figure 39. This is an inconvenient and suboptimal solution. However, we can solve this problem using a workaround suggested by one of the Docker users (Listing 25).
sudo apt install systemd-container
sudo machinectl shell reynard@
Listing 25. Installing the machinectl tool.
machinectl is a tool for interacting with machines and containers on a systemd-compatible system.
Finally, it’s time to start Rootless mode. We achieve this by entering the commands from Listing 26 (Figure 40):
cd /usr/bin
dockerd-rootless-setuptool.sh install
Listing 26. Beginning the Rootless installation process.
Everything seems to work. We can see that the Docker Daemon is running and it is in Rootless mode! We can confirm this by checking the list of running processes (Figure 41).
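For example, assuming (as in this article) that the rootless daemon runs as the reynard user, the check might look like this:

# all daemon processes should be owned by the unprivileged user
ps -fu reynard | grep -E 'rootless|dockerd|containerd'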
All processes associated with the Docker daemon run as the reynard user. Of course, in practical deployments this user should not additionally belong to the sudo group or other privileged groups; otherwise, this modification does not take full effect.
You will most likely encounter the error shown in Figure 42 when you start the Rootless daemon.
To solve this problem, we need to briefly return to the output of the dockerd-rootless-setuptool.sh script. One of the last messages it returned looked like Figure 43.
As suggested, we need to set the appropriate environment variables. Most likely, the $PATH variable is already set on the system – we can check this with the echo $PATH command. Next, we need to set the DOCKER_HOST variable. If we simply run the command from Listing 27 in the console, it will fix the problem, but only temporarily.
export DOCKER_HOST=unix:///run/user/1001/docker.sock
Listing 27. Environment variables to set.
It is recommended to add this entry to the shell configuration file, for example .bashrc or .zshrc. After you do this, you still need to reload the configuration (source ~/.bashrc).
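A minimal sketch of this step, assuming bash and the UID 1001 from Listing 27 (you can check your own UID with the id -u command):

# append the variable to the shell configuration file and reload it
echo 'export DOCKER_HOST=unix:///run/user/1001/docker.sock' >> ~/.bashrc
source ~/.bashrc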
Additionally, it is worth running the commands from Listing 28.
systemctl --user enable docker
sudo loginctl enable-linger $(whoami)
Listing 28. Enabling the “linger” function.
The first command is responsible for starting the Docker daemon at system startup. The loginctl enable-linger command is used on systemd-based systems; it allows a user to keep their services and applications running in the background after logging out. In the default configuration, when a user logs out, all of their processes are terminated. Enabling the “linger” feature changes this behavior.
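We can verify both settings, for example like this:

# should print "Linger=yes" once lingering is enabled
loginctl show-user $(whoami) --property=Linger

# the user-level Docker service should be active
systemctl --user status docker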
The last thing we need to do is choose the appropriate context in which the Docker client will run. We can do this by issuing the docker context use rootless command (Figure 44).
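For example:

docker context use rootless
# the active context is marked with an asterisk
docker context ls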
Now everything should work!
To sum up the rootless mode section of the article: this is definitely a solution worth considering and implementing. However, keep in mind that it will not solve all problems or eliminate security vulnerabilities in the Docker Daemon, the containers themselves, or the applications running in them. Rootless mode will, however, help you limit the risk of exploiting potential vulnerabilities.
By default, containers running on a given Docker host can communicate with each other over the standard network stack. This is because running containers are attached to the default bridge network. We can easily test this by running a second container, for example named ubuntu2 (using the command docker run -itd --name ubuntu2 ubuntu:22.04), and checking whether we can communicate with the container started earlier (i.e. ubuntu1).
We will test communication using the netcat tool. Netcat will listen on port 4444/tcp in the container called ubuntu1. We will then try to connect to this container from the ubuntu2 container, specifically on port 4444/tcp. To do this, we need to install the netcat package in both containers from the default Ubuntu repository, as container images have a very limited list of packages compared to standard installations. We have to run the command apt update && apt install -y netcat in both containers, i.e. ubuntu1 and ubuntu2.
Now we will open two terminals, using two tabs for this. In the first (top) tab, we will start netcat listening on port 4444/tcp inside the container called ubuntu1. Then we will try to connect to the ubuntu1 container from the second container. Before we do that, however, we need to check the IP addresses assigned to both machines. We would usually do this with the ip addr command or the older ifconfig command, but due to the limited set of packages, these commands are not available. Instead, we can use the less common hostname -I command.
The command docker container exec <container name> <command> runs <command> in the context of the container <container name>. Thanks to this, we learned that the containers were assigned the IP addresses 172.17.0.2 and 172.17.0.3, respectively (Figure 47).
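In our case, the commands looked like this:

# read the IP addresses assigned to both containers from the Docker Host
docker container exec ubuntu1 hostname -I
docker container exec ubuntu2 hostname -I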
It’s time to run netcat in listening mode on the first container. We can do this by running the netcat -nlvp 4444 command. In the second window, from the second container, let’s prepare the command to run: echo test | netcat 172.17.0.2 4444. The echo command sends the text “test” through the pipeline; after a connection is established on port 4444/tcp, this text is delivered to the netcat server at 172.17.0.2 (ubuntu1) (Figure 48).
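Putting it all together, the test session consists of the following commands:

# terminal 1, inside ubuntu1: listen on port 4444/tcp
netcat -nlvp 4444

# terminal 2, inside ubuntu2: send the text "test" to ubuntu1
echo test | netcat 172.17.0.2 4444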
Immediately after executing the command in the second (lower) console, the text “test” was sent to container #1 (Figure 49).
As this exercise demonstrated, we were able to establish a network connection between the two containers without any obstacles. It is difficult to predict all possible cases, but the person responsible for the security of a Docker environment should be aware that the default configuration allows such connections. This configuration is not recommended, so you should consider implementing at least one of the two recommendations described below.
The first, quite “radical” option is to globally disable the ability to communicate between containers using a standard network stack. We can do this by setting the icc parameter (short for inter-container communication) to false. The easiest way to set this option is in the daemon.json file, which we already had the opportunity to edit.
Listing 29 shows an example configuration file with the icc option disabled (cat /etc/docker/daemon.json):
{
    "userns-remap": "default",
    "icc": false
}
Listing 29. Disabling the icc option in the daemon.json file.
After implementing the changes, you need to restart the Docker Daemon and then restart the containers for the new configuration to take effect (Figure 50).
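On our test system, this step might look as follows (container names as used earlier):

# reload the daemon with the new configuration, then restart the containers
sudo systemctl restart docker
docker restart ubuntu1 ubuntu2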
Then we can check whether the implemented changes had the desired effect, this time adding the -w 5 parameter to the netcat command run in the second console. It defines how long netcat should keep trying to connect before giving up – in this case, 5 seconds (Figure 51).
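The modified command run from the ubuntu2 container might look like this (test2 is simply the new payload text):

# give up after 5 seconds if the connection cannot be established
echo test2 | netcat -w 5 172.17.0.2 4444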
As you can see, this time the text “test2” didn’t make it to container #1. The changes we made to the configuration had the desired effect!
Completely blocking the ability to establish connections between containers will not always be possible. Often, by design, our environment should provide communication between containers. For example, a container running an application needs to establish a connection to a database running in another container. So, are there other methods of segmenting the Docker internal network?
Let’s restore the previous configuration of the environment for a moment, that is, remove the entry associated with the icc parameter from the daemon.json file, or change the value of this field to true. We still need to restart the Docker daemon to apply the changes (Listing 30, Figure 52).
sudo systemctl restart docker
Listing 30. Restarting the Docker daemon.
Let’s now start the two containers we used earlier, namely ubuntu1 and ubuntu2. Then we will check what the network configuration of these containers looks like, using the docker inspect and docker network commands (Listing 31).
docker start ubuntu1 ubuntu2
docker inspect ubuntu1 --format '{{ .Name }}: {{ .NetworkSettings.Networks }}'
docker inspect ubuntu1 --format '{{ .Name }}: {{ .NetworkSettings.Networks.bridge.NetworkID }}'
docker inspect ubuntu2 --format '{{ .Name }}: {{ .NetworkSettings.Networks }}'
docker inspect ubuntu2 --format '{{ .Name }}: {{ .NetworkSettings.Networks.bridge.NetworkID }}'
docker network ls
Listing 31. Checking the container network configuration.
What have we learned (Figure 53)? Both containers are attached to the same network, i.e. bridge. Using the docker network ls command, we can see this network in the list of networks managed by the Docker Daemon.
Docker’s bridge is the default network that allows containers on the same host to communicate with each other. Containers can also communicate with the outside world through this interface. The bridge network is created during Docker installation.
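We can examine this network and see which containers are attached to it, for example:

# show the subnet, gateway, and attached containers of the default network
docker network inspect bridge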
When starting a container, we have the option of specifying which network (or networks) it should be attached to. The --network parameter is used for this. Let’s do the following experiment: we will create two networks, network1 and network2 (Listing 32, Figure 54), assign the two existing containers to network1, and attach a newly created container named ubuntu3 to network2 (Listing 33, Figure 55).
docker network create network1
docker network create network2
Listing 32. Creating new networks.
docker network disconnect bridge ubuntu1
docker network disconnect bridge ubuntu2
docker network connect network1 ubuntu1
docker network connect network1 ubuntu2
docker network ls
Listing 33. Assigning the existing containers to network1.
Now it’s time for the final piece of the puzzle: creating a container called ubuntu3. Immediately upon creation, we assign it to network2 (Listing 34, Figure 56).
docker run -itd --network=network2 --name ubuntu3 ubuntu:22.04
docker inspect ubuntu3 --format '{{ .Name }}: {{ .NetworkSettings.Networks }}'
Listing 34. Creating a third container.
So far, everything seems to be in order. The ubuntu1 and ubuntu2 containers run on network1, and ubuntu3 on network2. Let’s now check whether the ubuntu1 container can reach the ubuntu2 and ubuntu3 containers via ICMP (ping) (Listing 35, Figure 57).
# in the context of the Docker Host
docker container exec ubuntu2 hostname -I
docker container exec ubuntu3 hostname -I
docker exec -it ubuntu1 bash

# in the context of the ubuntu1 container
apt install -y iputils-ping
Listing 35. Checking container IP addresses and installing the iputils-ping package.
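The connectivity test itself can then be run from inside ubuntu1 (the addresses below are the ones reported in our environment):

# in the context of the ubuntu1 container
ping -c 3 172.18.0.3   # ubuntu2, same network
ping -c 3 172.19.0.2   # ubuntu3, different network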
Success! Container #1 can connect to container #2 (172.18.0.3), but it cannot connect to container #3 (172.19.0.2) (Figure 58). With this approach, we can isolate different “environments” when running different projects on the same Docker daemon.
Docker allows you to run a container in read-only mode. This mode prevents writing files to disk: no new files can be created and no existing ones modified, even in directories where writing is normally always possible (such as /tmp). To achieve this effect, we need to add the --read-only flag to the docker run command, for example like this (Listing 36, Figure 59):
# Commands to be executed in the context of the Docker Host
docker run -itd --read-only --name ubuntu-ro ubuntu:22.04
docker exec -it ubuntu-ro bash

# Command to be executed in the context of the ubuntu-ro container
touch /tmp/1
Listing 36. Running a container in read-only mode.
Using this mode is an interesting approach that can significantly limit the consequences of exploiting security vulnerabilities in an application running in a container. If an attacker is unable to create a new file on disk, not every potential avenue of exploitation is closed, but their capabilities are greatly limited. Of course, not every container will be able to work in this mode, but where it is possible, it is definitely worth considering.
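If an application legitimately needs a writable scratch directory, read-only mode can be combined with a tmpfs mount. A minimal sketch, assuming /tmp is the only writable path required (the container name ubuntu-ro-tmp is just an example):

# the root filesystem stays read-only, while /tmp is a writable, in-memory tmpfs
docker run -itd --read-only --tmpfs /tmp --name ubuntu-ro-tmp ubuntu:22.04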