Round-trip delay time

In telecommunications, the round-trip delay time (RTD) or round-trip time (RTT) is the length of time it takes for a signal to be sent plus the length of time it takes for an acknowledgment of that signal to be received. This time delay therefore includes the propagation times for the paths between the two communication endpoints.

Network links with both a high bandwidth and a high RTT can have a very large amount of data (the bandwidth-delay product) "in flight" at any given time. Such "long fat pipes" require a special protocol design. One example is the TCP window scale option.
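For a sense of scale, the short Python sketch below computes the bandwidth-delay product for a hypothetical link; the 100 Mbit/s and 100 ms figures are illustrative assumptions, not values from this article.

    def bandwidth_delay_product(bandwidth_bps: float, rtt_s: float) -> float:
        """Number of bits that can be 'in flight' on the link at once."""
        return bandwidth_bps * rtt_s

    # Hypothetical "long fat pipe": 100 Mbit/s with a 100 ms RTT.
    bdp_bits = bandwidth_delay_product(100e6, 0.100)
    print(f"{bdp_bits / 8 / 1024:.0f} KiB in flight")
    # About 1221 KiB, far above the 64 KiB ceiling of the original
    # 16-bit TCP window field, which is what window scaling addresses.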

The RTT was originally estimated in TCP by:

RTT = (α · Old_RTT) + ((1 − α) · New_Round_Trip_Sample)[1]

where α is a constant weighting factor (0 ≤ α < 1). Choosing a value of α close to 1 makes the weighted average immune to changes that last a short time (e.g., a single segment that encounters long delay). Choosing a value of α close to 0 makes the weighted average respond to changes in delay very quickly.
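A minimal Python sketch of this exponentially weighted moving average (the function name and sample values are illustrative; α = 0.875 is simply a value in the commonly cited 0.8–0.9 range):

    def smooth_rtt(old_rtt: float, sample: float, alpha: float = 0.875) -> float:
        """Weighted average per the equation above: alpha weights the old
        estimate, (1 - alpha) weights the new round-trip sample."""
        return alpha * old_rtt + (1 - alpha) * sample

    # Illustrative samples in milliseconds: with alpha close to 1, a single
    # delayed segment pulls the estimate only part way toward the outlier.
    rtt = 100.0
    for sample in (102.0, 98.0, 400.0, 101.0):
        rtt = smooth_rtt(rtt, sample)
        print(f"sample={sample:6.1f} ms -> smoothed RTT={rtt:6.2f} ms")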

This was improved by the Jacobson/Karels algorithm, which takes standard deviation into account as well.

Once a new round-trip sample is measured, it is entered into the equation above to update the average RTT for the connection, and the procedure repeats for every new measurement.
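The Jacobson/Karels refinement mentioned above keeps a second smoothed quantity, the deviation of the samples, and combines the two to set the retransmission timeout. Below is a minimal sketch using the gains later standardized in RFC 6298; note that RFC 6298's α = 1/8 weights the new sample, the opposite convention from the equation above, and that clock-granularity and minimum-timeout rules are omitted for brevity.

    class RttEstimator:
        """Jacobson/Karels-style estimator with RFC 6298 gains."""

        ALPHA = 1 / 8  # gain on the new sample for the smoothed RTT (SRTT)
        BETA = 1 / 4   # gain on the new deviation for RTTVAR
        K = 4          # deviation multiplier in the timeout

        def __init__(self, first_sample: float):
            # RFC 6298 initialization from the first measurement.
            self.srtt = first_sample
            self.rttvar = first_sample / 2

        def update(self, sample: float) -> None:
            # Order matters: RTTVAR is updated from the previous SRTT.
            self.rttvar = (1 - self.BETA) * self.rttvar + self.BETA * abs(self.srtt - sample)
            self.srtt = (1 - self.ALPHA) * self.srtt + self.ALPHA * sample

        def rto(self) -> float:
            # Retransmission timeout: smoothed RTT plus K deviations.
            return self.srtt + self.K * self.rttvar

    # Illustrative samples in milliseconds.
    est = RttEstimator(first_sample=100.0)
    for sample in (110.0, 90.0, 250.0):
        est.update(sample)
    print(f"SRTT = {est.srtt:.1f} ms, RTO = {est.rto():.1f} ms")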

 This article incorporates public domain material from the General Services Administration document "Federal Standard 1037C" (in support of MIL-STD-188).

References

  1. Comer, Douglas (2000). Internetworking with TCP/IP. Upper Saddle River, N.J.: Prentice Hall. p. 226.