TCP tuning

TCP tuning techniques adjust parameters of a TCP connection for use over high-bandwidth, high-latency networks. Well-tuned networks can perform up to 1000 times faster in some cases.[1]

Network and system characteristics

Bandwidth-delay product (BDP)

Bandwidth-delay product (BDP) is a term primarily used in conjunction with TCP to refer to the number of bytes necessary to fill a TCP "path", i.e. it is equal to the maximum amount of data that can be in transit between the transmitter and the receiver at any moment. TCP uses windows for congestion control and flow control: the window bounds how much unacknowledged data may be outstanding at once, so a connection can only fill the path if the window is at least as large as the BDP. The segment size, by contrast, is chosen so that packets avoid truncation at the link-layer maximum transmission unit and remain resilient to loss or reordering.

High performance networks have very large BDPs. To give a practical example, in the case of two satellites located 0.5 light-seconds apart, communicating over a radio link with a bandwidth of 10 Gbit/s, there will be at most 0.5×10^10 bits, i.e., 5 Gbit = 625 MB of data in the space between them. Operating systems and protocols designed as recently as a few years ago, when networks were slower, were tuned for BDPs orders of magnitude smaller, with implications for tuning.
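
As a sketch of the arithmetic, a short Python function can compute the BDP; the bandwidth and one-way delay figures below are the ones from the satellite example:

 def bandwidth_delay_product(bandwidth_bps: float, delay_s: float) -> float:
     """Bytes of data needed in flight to keep the path full."""
     return bandwidth_bps * delay_s / 8  # convert bits to bytes

 # Satellite example above: 10 Gbit/s link, 0.5 s one-way delay.
 bdp = bandwidth_delay_product(10e9, 0.5)
 print(f"BDP = {bdp / 1e6:.0f} MB")  # BDP = 625 MB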

Buffers

The original TCP configuration supported buffers of up to 64 KB (the 16-bit window field caps out at 65,535 bytes), which was adequate for slow links or links with small round-trip times (RTTs). Larger buffers are required by the high performance options described below.

Buffering is used throughout high performance network systems to handle delays in the system. In general, buffer size needs to be scaled proportionally to the amount of data "in flight" at any time. For very high performance applications that are not sensitive to network delays, it is possible to interpose large end-to-end buffering delays by putting intermediate data storage points in an end-to-end system, and then to use automated and scheduled non-real-time data transfers to get the data to their final endpoints.
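
To illustrate at the socket level, the sketch below asks the operating system for larger per-connection buffers using the standard Python socket API; the 32 MiB figure is an arbitrary assumption, and the OS may clamp the request to its configured maximum:

 import socket

 BUF_SIZE = 32 * 1024 * 1024  # 32 MiB; assumed value, size it to the path's BDP

 sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
 # Request larger buffers before connecting, so that an adequate window
 # (and the window scaling option) can be negotiated at handshake time.
 sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUF_SIZE)
 sock.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUF_SIZE)

 # Report what was actually granted; Linux, for instance, caps the request
 # at net.core.rmem_max / net.core.wmem_max.
 print("receive buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
 print("send buffer:", sock.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))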

TCP speed limits

Maximum achievable throughput for a single TCP connection is determined by several factors. One trivial limitation is the maximum bandwidth of the slowest link on the path. But there are also other, less obvious limits for TCP throughput: bit errors can limit the connection, as can the round-trip time.

Window size

In computer networking, RWIN (TCP receive window) is the amount of data that a computer can accept without acknowledging the sender. If the sender has not received an acknowledgement for the first packet it sent, it will stop and wait; if this wait exceeds a certain limit, it may even retransmit. This is how TCP achieves reliable data transfer.

Even if there is no packet loss in the network, windowing can limit throughput. Because TCP transmits data only up to the window size before waiting for acknowledgements, the full bandwidth of the network may not always get used. The limitation imposed by window size can be calculated as follows:

 Throughput ≤ RWIN / RTT,

where RWIN is the maximum receive window size and RTT is the round-trip time for the path.
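
For example, with the classic 64 KB maximum window and a 100 ms round-trip time, this bound works out to roughly 5 Mbit/s no matter how fast the link is. A minimal Python check (the window and RTT values are chosen purely for illustration):

 def window_limited_throughput(rwin_bytes: int, rtt_s: float) -> float:
     """Upper bound on throughput, in bit/s, imposed by the receive window."""
     return rwin_bytes * 8 / rtt_s

 # Illustrative values: 65,535-byte window, 100 ms round-trip time.
 bps = window_limited_throughput(65535, 0.100)
 print(f"{bps / 1e6:.2f} Mbit/s")  # ~5.24 Mbit/s, regardless of link speed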

Packet loss

When packet loss occurs in the network, an additional limit is imposed on the connection. The limit can be calculated according to the formula (Mathis et al.):

 Throughput ≤ (0.7 × MSS) / (RTT × √Ploss),

where MSS is the maximum segment size and Ploss is the probability of packet loss.[2]
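
As a worked example, the following Python sketch evaluates this bound; the MSS, RTT, and loss-rate values are illustrative assumptions:

 import math

 def loss_limited_throughput(mss_bytes: int, rtt_s: float, p_loss: float) -> float:
     """Mathis et al. bound on throughput, in bit/s, under random packet loss."""
     return 0.7 * mss_bytes * 8 / (rtt_s * math.sqrt(p_loss))

 # Illustrative values: 1460-byte MSS, 100 ms RTT, 0.01% loss probability.
 bps = loss_limited_throughput(1460, 0.100, 1e-4)
 print(f"{bps / 1e6:.1f} Mbit/s")  # ~8.2 Mbit/s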

TCP Options for High Performance

A number of extensions have been made to TCP over the years to increase its performance over fast high-RTT links ("long fat networks", or LFNs for short).

TCP timestamps (RFC 1323) play a double role: they avoid ambiguities due to the 32-bit sequence number field wrapping around, and they allow more precise RTT estimation in the presence of multiple losses per RTT. With those improvements, it becomes reasonable to increase the TCP window beyond 64 kB, which can be done using the window scaling option (RFC 1323).

The TCP selective acknowledgment option (SACK, RFC 2018) allows a TCP receiver to inform the sender precisely which segments have been lost. This increases performance on high-RTT links, where multiple losses per window are possible.

Path MTU discovery avoids the need for in-network fragmentation, which increases performance in the presence of losses.
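
On Linux, the options above are exposed as sysctls; the short sketch below (assuming a Linux host with read access to /proc) reports whether each one is enabled:

 from pathlib import Path

 # Linux sysctls governing the high-performance options discussed above.
 OPTIONS = {
     "window scaling (RFC 1323)": "/proc/sys/net/ipv4/tcp_window_scaling",
     "timestamps (RFC 1323)": "/proc/sys/net/ipv4/tcp_timestamps",
     "selective ACK (RFC 2018)": "/proc/sys/net/ipv4/tcp_sack",
 }

 for name, path in OPTIONS.items():
     value = Path(path).read_text().strip()
     print(f"{name}: {'enabled' if value != '0' else 'disabled'}")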

References

  1. High Performance Enabled SSH/SCP, Pittsburgh Supercomputing Center (PSC)
  2. RFC 3155, End-to-end Performance Implications of Links with Errors
