A deep dive into TCP: dealing with SYN-flood attacks

8 March 2024 · 15 minutes · Author: Cyber Witcher

We will look at the structure of the TCP protocol, focus on the most widespread attack against it, the SYN-flood, and study methods of countering this threat. We will also analyze in detail the key features of TCP that make it vulnerable to SYN-flood attacks and discuss how they affect overall network security.

  • Disclaimer: This article is intended for educational purposes. It discusses the workings of TCP and the mechanism of SYN-flood attacks to show how they affect networks and help readers understand how to protect against such threats.

Part 1. What is TCP and why is it needed?

TCP stands for Transmission Control Protocol. As the name implies, it is used to control data transmitted over the network: to detect and correct the various problems that can occur while data travels across it.

But what can happen when sending packets over the network?

1. Packets arriving out of order

The data we send is split into packets and routed using dynamic routing protocols, so it may happen that some packets take a faster route while others take a lower-bandwidth route and arrive later. For example, suppose we need to send four packets to a network node in sequence: a, b, c, and d. We expect the node to receive them in the same order in which they were sent, but given the above, the recipient may instead receive the sequence a, c, b, d or, say, d, a, c, b. TCP must therefore somehow restore the original order of the received packets.

2. Packet loss

Due to a routing error or a poorly configured network, our packets can be sent along a very slow path, or end up in a loop where they wander until their hop limit runs out and they are destroyed. In such cases, TCP must be able to detect the loss and request retransmission of the missing or corrupted data.

If we are talking to someone over audio or video, occasional losses of this kind are of little concern, but if we are downloading, say, an ISO image of some OS, such situations must be ruled out completely.

Also, TCP (like other transport-layer protocols) delivers data from one port to another, that is, from one application process to another.

TCP segments

TCP operation with segments

To transmit data over a network, TCP receives data from an application-layer protocol, buffers it, and, when it is ready to transmit the next portion, "cuts off" the required number of bytes regardless of their values or internal structure. Each such piece is called a TCP segment; it is important to note that TCP treats the data as an unstructured stream of bytes. This differs from UDP, which builds datagrams from logically separate blocks of data (application-generated messages). On the receiving side, the TCP module buffers the incoming segments, reassembles the byte stream, and passes it to the application-layer protocol.

Note that the term "segment" refers both to the data unit as a whole (the data field plus the TCP header) and to the data field alone.
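To make the "unstructured byte stream" idea concrete, here is a minimal Python sketch; the 1460-byte segment size is only an illustrative assumption (the real value is negotiated per connection). Two application "messages" are concatenated and then cut into equal-sized segments with no regard for message boundaries, which is exactly how TCP treats data:

```python
# A minimal sketch of how TCP segments a byte stream (illustrative only).
# The 1460-byte segment size is an assumption; the real value is negotiated per connection.
MSS = 1460

def segment_stream(stream: bytes, mss: int = MSS) -> list[bytes]:
    """Cut a byte stream into MSS-sized chunks, ignoring message boundaries."""
    return [stream[i:i + mss] for i in range(0, len(stream), mss)]

# Two application-level "messages" are merged into one stream before segmentation,
# which is why TCP (unlike UDP) does not preserve message boundaries.
stream = b"A" * 2000 + b"B" * 1000
for n, seg in enumerate(segment_stream(stream)):
    print(f"segment {n}: {len(seg)} bytes")   # 1460, 1460, 80
```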

TCP segment header

TCP segment structure

In what follows we will need to know the header fields, so here are brief descriptions that the reader can return to at any time (a small parsing sketch follows the list):

  • Source Port and Destination Port (16 bits each) are the sender and receiver ports, respectively.

  • Sequence number (32 bits) is the number of the first byte in the segment, which indicates its offset relative to the entire byte stream.

  • Acknowledgment number (32 bits) is the number of the last byte received, increased by one, that is, the number of the next byte the receiver expects.

  • Data Offset (4 bits) – the length of the header in 32-bit words (the offset of the data from the beginning of the segment).

  • Reserved (3 bits) – reserved bits that can be used in the future.

  • Flags (9 bits) – flags that indicate what information the segment carries.

  • Window (16 bits) – the size of the window, which shows how many bytes of data the receiver is ready to accept.

  • Checksum (16 bits) – a checksum computed over the header, the data, and a pseudo-header, used to verify the integrity of the segment.

  • Urgent Pointer (16 bits) – an offset from the sequence number pointing to the end of the urgent data (taken into account only if the URG flag is set).

  • Options (variable length) – options that carry additional connection parameters.

  • Padding is a variable-length dummy field used to pad the header to a multiple of 32 bits.

  • Data is a variable-length field that directly contains the data to be transmitted.
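To tie the field list together, here is a hedged Python sketch that unpacks the fixed 20-byte part of a TCP header with the standard struct module; options and the data field are deliberately left out, so this is an illustration of the layout rather than a complete parser:

```python
import struct

def parse_tcp_header(segment: bytes) -> dict:
    """Parse the fixed 20-byte TCP header (options and data are left untouched)."""
    (src_port, dst_port, seq, ack,
     offset_flags, window, checksum, urgent) = struct.unpack("!HHIIHHHH", segment[:20])
    data_offset = (offset_flags >> 12) & 0xF   # header length in 32-bit words
    flags = offset_flags & 0x1FF               # 9 flag bits: NS, CWR, ECE, URG, ACK, PSH, RST, SYN, FIN
    return {
        "src_port": src_port, "dst_port": dst_port,
        "seq": seq, "ack": ack,
        "header_len_bytes": data_offset * 4,
        "syn": bool(flags & 0x002), "ack_flag": bool(flags & 0x010),
        "window": window, "checksum": checksum, "urgent_ptr": urgent,
    }
```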

Establishing a TCP connection

Establishing a TCP connection

Before data transfer can begin, the sender and receiver must establish a connection over which the transfer will take place. While establishing the connection, the two sides exchange connection parameters, specifying the initial sequence numbers, window size, acknowledgment type, and so on (all of these are discussed below). This happens in three stages (a minimal socket example follows the list):

  1. The sender sends the server a TCP segment with its parameters and the SYN (synchronize) flag set, and enters the SYN-SENT state.

  2. The server, having received the SYN, begins to prepare the infrastructure to support the connection, requesting various resources from the OS (counters, timers, buffers, etc.). It then sends back a TCP segment with its own parameters and the SYN and ACK (acknowledgment) flags set, entering the SYN-RECEIVED state.

  3. The sender, having received the SYN-ACK segment from the server, replies with a segment carrying the ACK flag and enters the ESTABLISHED state. The server, on receiving this segment, enters the same state, and data transfer can begin.
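The handshake itself is carried out by the operating system's TCP stack; an application merely requests it. The minimal Python sketch below (the loopback address and port 9000 are arbitrary choices for illustration) shows that calling connect() on one side and accept() on the other is all it takes to trigger the SYN, SYN-ACK, ACK exchange described above:

```python
import socket

# Server side: listen() prepares the queue of pending connections,
# accept() returns a socket once the three-way handshake has completed.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 9000))   # illustrative address and port
server.listen(5)                   # backlog of pending connections

# Client side (normally a separate process): connect() sends SYN,
# waits for SYN-ACK, and answers with ACK before returning.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", 9000))

conn, addr = server.accept()       # the connection is now ESTABLISHED
conn.sendall(b"hello")
print(client.recv(5))              # b'hello'
```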

Acknowledgment methods

To solve the problems described above, TCP uses packet acknowledgment. The sender sends the data, and the receiver confirms its receipt with an acknowledgment. If the sender does not receive the acknowledgment in time, it resends the data. TCP implements two acknowledgment methods, both based on the concept of a sliding window.

The sliding window concept

Sliding window

The sender has a series of numbered packets, over which a window of a certain size is defined to control the transmission process. The first packet inside the window is called the base packet. Everything to the left of it is considered transmitted and acknowledged. Packets within the window are allowed to be transmitted. Packets beyond the right edge of the window cannot be transmitted yet. When an acknowledgment for the base packet arrives, the window slides forward by one packet. When the transmission window is exhausted, the sender stops transmitting and waits for acknowledgments. The receiver may also maintain a window that limits the number of packets it can accept.

The following figure is an idealized example that shows the operation of a sliding window on the sender side.

An idealized example of a “sliding” window

Why do we need a window at all? Why not send all the packets at once?

Suppose you decide to download GTA San Andreas, which weighs 1.4 GB, on your smartphone. Without a sliding window, all 1.4 GB would be sent from Google's servers to the TCP module of your smartphone in one go.

First, it is clear that if everyone followed this approach, the Internet would be flooded with an enormous number of TCP segments, reducing the effective bandwidth of the communication channels many times over.

Second, the smartphone would have to allocate a buffer of unlimited size, since it would not know in advance how much data it was going to receive. Because the buffer allocated by the operating system is in fact limited, segments that did not fit into it would simply be discarded. The server would mistakenly conclude that something had happened to those packets and retransmit them, so the smartphone would be bombarded again and again with segments it has not yet managed to process.

Now that we are familiar with the sliding window, let's move on to the acknowledgment methods based on it.

Go back N packets (Go-Back-N)

First, let’s consider the algorithm for working with segments on the receiving side. When a new packet arrives, the recipient checks two things:

  • Whether the packet is undistorted

  • Whether it is the next in order in the sequence of already received packets

If both checks pass, the receiver sends an acknowledgment carrying the number of the last packet received. It is worth noting that acknowledgments in this method are cumulative: if the sender receives an acknowledgment with the number of the n-th packet, it means all previous packets have been received as well. This works because the receiver accepts packets only in order and discards everything else (it can be said to have a window of size one).

Now let's look at the algorithm from the sender's side. The sender transmits the packets that fall within the window and starts a timer whose timeout equals the maximum time it is willing to wait for the acknowledgment of the base packet. When the timeout expires, the sender assumes that the packet or its acknowledgment was lost and retransmits that packet along with all the other packets already sent from the window.

It may also happen that the base packet and some subsequent packets were successfully received, but the acknowledgment was lost. In this case the sender retransmits those packets. When the receiver sees the duplicates, it understands what happened and re-sends an acknowledgment for the last packet it received (remember that acknowledgments are cumulative). This method is less resource-intensive than the next one, but it has an obvious drawback: since the receiver discards any packet that is merely out of order, the sender is forced to retransmit whole runs of packets after every timeout, filling the channel with redundant traffic.
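The toy Python simulation below models this Go-Back-N behaviour; the window size, packet count and loss rate are all illustrative assumptions, and packets are plain integers rather than real TCP segments:

```python
import random

# A toy Go-Back-N model: packets are just integers, the "channel" randomly drops
# some of them, and a cumulative acknowledgment moves the window base forward.
WINDOW = 4          # illustrative window size
PACKETS = list(range(10))
LOSS_RATE = 0.3     # illustrative probability that the channel drops a packet

def channel_delivers(pkt: int) -> bool:
    """Return True if the packet survives the unreliable channel."""
    return random.random() > LOSS_RATE

base = 0        # number of the base packet (left edge of the sender's window)
expected = 0    # receiver side: the only packet number it is willing to accept next

while base < len(PACKETS):
    # The sender transmits every packet currently inside the window.
    for pkt in PACKETS[base:base + WINDOW]:
        if channel_delivers(pkt) and pkt == expected:
            expected += 1          # in-order packet accepted; anything else is discarded
    if expected > base:
        base = expected            # cumulative ACK received: slide the window forward
    # otherwise: the timeout expired, and the next loop iteration resends from `base`

print("all", len(PACKETS), "packets delivered in order")
```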

Selective acknowledgment

Selective acknowledgment, as the name suggests, allows the receiver to acknowledge a specific packet. In this case the received packets have to be stored somewhere, because they cannot be passed to the higher-level protocol immediately: packets may arrive out of order, and there may be gaps between them. A sliding window is therefore maintained on the receiving side as well. Depending on the losses and acknowledgments that occur during transmission, the receive window may not coincide with the transmit window.

Transmit and receive windows for selective acknowledgment

The receiver no longer discards packets simply because they arrived out of order; only corruption or falling outside the window forces it to drop a packet.

On the sending side, the timer is no longer kept only for the base packet: each transmitted packet gets its own timer, and only the specific packet whose timeout has expired is retransmitted.

A situation is also possible in which a packet was received safely but the acknowledgment for it was lost. The receiver will then get a duplicate packet. It should not ignore it: it must acknowledge the duplicate again.

Although this method is somewhat more resource-intensive than the previous one, it significantly increases the transmission speed and reduces redundant retransmissions. Today most network nodes operating at the transport layer use exactly this method of acknowledgment.
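For contrast, here is a hedged sketch of the receiver-side buffering that selective acknowledgment requires; the window size and packet numbers are made up for illustration. Out-of-order packets inside the window are stored and acknowledged individually, and data is handed to the application only once the gap has been filled:

```python
# A toy model of a selective-repeat receiver: out-of-order packets inside the
# window are buffered and acknowledged individually instead of being discarded.
RECV_WINDOW = 4                  # illustrative receive window size
buffer: dict[int, bytes] = {}    # out-of-order packets waiting for the gap to close
next_expected = 0                # left edge of the receive window
delivered: list[bytes] = []      # data handed to the application, strictly in order

def on_packet(seq: int, data: bytes) -> int | None:
    """Process one incoming packet; return the packet number to acknowledge, if any."""
    global next_expected
    if seq < next_expected:
        return seq                       # duplicate: its earlier ACK was probably lost, re-acknowledge it
    if seq >= next_expected + RECV_WINDOW:
        return None                      # beyond the window: discard
    buffer[seq] = data                   # store even if out of order
    while next_expected in buffer:       # deliver any contiguous run to the application
        delivered.append(buffer.pop(next_expected))
        next_expected += 1
    return seq                           # selective acknowledgment for exactly this packet

# Packets 1 and 2 arrive before packet 0; nothing is delivered until the gap closes.
for seq in (1, 2, 0, 3):
    on_packet(seq, f"data-{seq}".encode())
print(len(delivered), "packets delivered in order")   # 4
```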

Byte numbering

For simplicity, in the examples so far the packets were numbered with natural numbers starting from 1, but in reality things are more involved: when a TCP connection is established, each party announces its initial sequence number (ISN) in the Sequence Number field and numbers the bytes of its stream starting from that value. The number of a segment is then the offset of its first byte relative to the entire byte stream, plus the ISN.

Byte numbering

Let's consider an example. Suppose the sender chooses an ISN of 32600. The number of the first segment is then 32600, because the offset of its first byte from the beginning of the stream is zero. The first byte of the second segment is offset by 1460 bytes from the beginning of the stream, so the number of the second segment is 34060. Note that 32600 + 1460 is not the number of the last byte of the first segment but the number of the first byte of the second one, much as with array indexing.

Segment numbering

It is also worth noting that the acknowledgment uses the Acknowledgment Number field, which carries the number of the last byte received, increased by one.
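The same arithmetic can be written out in a few lines of Python; the ISN of 32600 and the 1460-byte segments repeat the example above, and everything else follows directly from the definitions of the Sequence Number and Acknowledgment Number fields:

```python
# Sequence and acknowledgment numbers for the example above.
ISN = 32600                       # initial sequence number announced during the handshake
SEGMENT_SIZES = [1460, 1460, 1460]

offset = 0
for size in SEGMENT_SIZES:
    seq = ISN + offset            # number of the first byte of this segment
    ack = seq + size              # what the receiver puts in Acknowledgment Number:
                                  # last received byte + 1, i.e. the next byte it expects
    print(f"segment: seq={seq}, length={size}, expected ACK={ack}")
    offset += size

# segment: seq=32600, length=1460, expected ACK=34060
# segment: seq=34060, length=1460, expected ACK=35520
# segment: seq=35520, length=1460, expected ACK=36980
```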

For a better understanding of how the protocol works, the reader is encouraged to sniff some traffic on their own and watch the handshake and data exchange take place.

Part 2. SYN-flood attack

The essence of the attack

A SYN-flood attack is a type of DoS attack. The principle is as follows: the attacker sends the targeted server a huge number of connection-establishment requests. The server, seeing segments with the SYN flag, allocates the resources needed to support each connection and replies with SYN-ACK segments, entering the SYN-RECEIVED state (such a connection is also called half-open). The attacker never sends the corresponding ACK segments but keeps bombarding the server with SYN requests, forcing it to create more and more half-open connections. The server's resources are, of course, limited, so it has a cap on the number of half-open connections. Once that cap is reached, the server starts rejecting new connection attempts, and denial of service is achieved.

The principle of SYN-flood attack

Implementation of the attack

We will use two virtual machines to carry out the attack. LMDE 6 will be the victim, and Kali 6.1 will be the attacker.

First, let's lower the limit on the number of half-open connections on the victim's machine to 5 (the default is 1024). To do this we use the sysctl utility, which allows changes to be made to the running kernel.

Setting the maximum number of connections

The victim’s IP address is 192.168.31.175.

The IP address of the victim

To carry out the attack we will use the hping3 utility, which ships with the Kali distribution. In this case we sent 15 packets with the SYN flag set to port 23 (telnet), using IP spoofing.
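The exact hping3 command is shown in the screenshot; as a hedged alternative, roughly the same experiment can be expressed in Python with Scapy (the victim address, port 23 and the count of 15 repeat the setup described above; sending raw packets requires root, and this should only ever be aimed at machines you own):

```python
from scapy.all import IP, TCP, RandIP, RandShort, send

VICTIM = "192.168.31.175"   # the lab victim's address from the experiment above

# 15 TCP segments with only the SYN flag set, each from a spoofed (random)
# source address and source port, aimed at port 23 (telnet). Requires root.
for _ in range(15):
    syn = IP(src=RandIP(), dst=VICTIM) / TCP(sport=RandShort(), dport=23, flags="S")
    send(syn, verbose=False)
```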

Carrying out an attack

After the attack, let's look at the victim's open sockets using the ss utility: -a shows all sockets, -t only TCP sockets, and -o information about timers.

State of victim sockets

As we can see, our machine keeps only 5 sockets in the half-open state, even though we sent 15 connection requests. If a legitimate user had tried to connect at that moment, they would have been denied service, which is exactly what we were after.

Denial of service on the client side

Let’s examine the traffic that was captured during the attack.

Captured traffic

As we can see, we indeed sent 15 SYN requests, but the victim answered only 5 of them. Also notice the red and black lines: the victim machine retransmits its SYN-ACK responses because it assumes they were lost or corrupted.

Methods of protection against SYN-flood attacks

How can we protect against SYN-flood attacks? There are two approaches, which can be combined.

First, you can increase the allowed number of half-open TCP connections and decrease the time a socket is allowed to stay in the SYN-RECEIVED state.

Second, you can use SYN-cookies.

The idea behind SYN-cookies is very simple: on receiving a SYN request, the server does not create a new connection; instead it sends a SYN-ACK response to the client and encodes the connection data in the Sequence Number field. If an ACK response later arrives from the client, the connection data is recovered from the Acknowledgment Number field. The method is attractive because no resources are allocated immediately after a SYN request arrives, but it has obvious drawbacks: if a packet is lost or corrupted, it cannot be retransmitted, because the connection information is not stored anywhere.
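A heavily simplified Python illustration of the idea follows; it is not the real Linux algorithm, which also packs a timestamp and an MSS encoding into the cookie. The server derives its initial sequence number from a keyed hash of the connection parameters and can later validate the client's ACK without having stored any per-connection state (the addresses and ports are illustrative):

```python
import hashlib
import os

SECRET = os.urandom(16)   # server-side secret; rotated periodically in real implementations

def make_cookie(src_ip: str, src_port: int, dst_ip: str, dst_port: int) -> int:
    """Derive the server ISN (the 'cookie') from the connection 4-tuple and a secret."""
    material = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode() + SECRET
    return int.from_bytes(hashlib.sha256(material).digest()[:4], "big")

def check_ack(src_ip: str, src_port: int, dst_ip: str, dst_port: int, ack_number: int) -> bool:
    """The client's ACK must equal our cookie + 1; no per-connection state was stored."""
    return ack_number == (make_cookie(src_ip, src_port, dst_ip, dst_port) + 1) % 2**32

# The server sends SYN-ACK with seq = cookie and keeps nothing. Later an ACK arrives:
cookie = make_cookie("203.0.113.7", 40000, "192.168.31.175", 23)
print(check_ack("203.0.113.7", 40000, "192.168.31.175", 23, (cookie + 1) % 2**32))  # True
```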

To enable SYN-cookies, use the sysctl utility again and set the net.ipv4.tcp_syncookies parameter to 2 (on this machine it was previously set to zero).
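The sysctl change requires root privileges; the same setting can be read or written from Python through the corresponding /proc/sys file, which is what sysctl does under the hood. A minimal sketch:

```python
# The equivalent of `sysctl net.ipv4.tcp_syncookies` done through /proc
# (reading works for any user; writing requires root).
PARAM = "/proc/sys/net/ipv4/tcp_syncookies"

with open(PARAM) as f:
    print("current value:", f.read().strip())

with open(PARAM, "w") as f:
    f.write("2")   # 2 = always send SYN-cookies, as set in the article
```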

Enabling SYN-cookie

Let’s try the attack again and sniff the traffic.

Traffic after enabling SYN-cookie

As we can see, our machine now responds to all the requests, and there are no retransmissions. If we look at the state of the sockets, we will see that no half-open connections have been created.

Socket state after enabling SYN-cookie

In this article we got acquainted with the TCP protocol, learned what a SYN-flood attack is, how to carry one out, and how to protect against it. We looked at just one attack on TCP, although there are in fact many more.

Here are brief descriptions of a few of them:

  • TCP reset. An attack in which the attacker sends one of the connection participants a TCP segment with the RST flag set, which the TCP module interprets as an abrupt closing of the connection.

  • TCP hijacking. An attack in which the attacker injects themselves into an existing connection, disguising their packets as those of the legitimate user and thereby delivering their own data to the victim.

  • TCP replay. An attack in which the attacker captures the traffic coming from one party and later, after initiating a connection, replays it.
