End-to-end principle
The end-to-end principle is a central design principle of computer networking, embodied in the Internet's TCP/IP protocol suite. It states that, whenever possible, communications protocol operations should be defined to occur at the end-points of a communications system, or as close as possible to the resource being controlled.
The concept first arose in a 1981 paper entitled End-to-end arguments in system design by Jerome H. Saltzer, David P. Reed, and David D. Clark. They argued that reliable systems tend to require end-to-end processing to operate correctly, in addition to any processing in the intermediate system. They pointed out that most features in the lowest level of a communications system have costs for all higher-layer clients, even if those clients do not need the features, and are redundant if the clients have to reimplement the features on an end-to-end basis.
This leads to the model of a "dumb, minimal network" with smart terminals, in contrast to the earlier paradigm of a smart network with dumb terminals.
For example, in the TCP/IP protocol stack, IP is a dumb, stateless protocol that simply moves datagrams across the network, and TCP is a smart transport protocol providing error detection, retransmission, congestion control, and flow control end-to-end. The network itself (the routers) needs only to support the simple, lightweight IP; the endpoints (computers) run the heavier TCP on top of it when needed.
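To make this division of labour concrete, the following Python sketch builds a crude stop-and-wait reliability scheme at the endpoints on top of UDP, which, like IP, offers only unreliable datagram delivery. It is a toy analogue and not TCP itself; the loopback address, port number, chunk size, and end-of-data marker are arbitrary assumptions for the sketch. The point is that sequence numbers, acknowledgements, timeouts, and retransmissions live entirely in the end hosts, while the network in between merely forwards datagrams.

```python
import socket

CHUNK = 1024  # payload bytes per datagram (arbitrary for this sketch)

def reliable_send(data: bytes, dest=("127.0.0.1", 9000), timeout=0.5):
    """Send data as numbered chunks, retransmitting each until it is ACKed."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)] + [b""]
    for seq, payload in enumerate(chunks):           # empty chunk marks the end
        packet = seq.to_bytes(4, "big") + payload
        while True:
            sock.sendto(packet, dest)                # the network may drop this
            try:
                ack, _ = sock.recvfrom(4)
                if int.from_bytes(ack, "big") == seq:
                    break                            # confirmed end-to-end
            except socket.timeout:
                pass                                 # no ACK in time: retransmit
    sock.close()

def reliable_receive(listen=("127.0.0.1", 9000)) -> bytes:
    """Accept chunks in order, ACK each one, and reassemble the data."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(listen)
    expected, parts = 0, []
    while True:
        packet, sender = sock.recvfrom(4 + CHUNK)
        seq, payload = int.from_bytes(packet[:4], "big"), packet[4:]
        if seq == expected:
            expected += 1
            if not payload:                          # end marker received
                sock.sendto(seq.to_bytes(4, "big"), sender)
                sock.close()                         # (loss of this last ACK
                return b"".join(parts)               #  is ignored in the sketch)
            parts.append(payload)
        sock.sendto(seq.to_bytes(4, "big"), sender)  # ACK (duplicates re-ACKed)
```

A real transport such as TCP adds sliding windows, congestion control, and connection management, but the division of responsibility is the same: the routers forwarding these datagrams know nothing of the reliability machinery running at the two ends.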
A second canonical example is file transfer. Every reliable file transfer protocol and file transfer program should contain a checksum, which is validated only after everything has been successfully stored on disk. Disk errors and software errors make an end-to-end checksum necessary. The key resource in file transfer is the file system. The end-to-end principle allows the software that accesses the file system to control the rate at which the transfer proceeds and to initiate retransmissions with a minimum of delay, because that software sits closest to the file system.
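The end-to-end check itself can be sketched in a few lines of Python. The checksum is computed over the original file and verified only after the copy has been written to, and read back from, the destination file system, so corruption introduced anywhere along the path (network, buffers, disk, or the copying software) is detected. The file paths are hypothetical, and a local copy stands in for the network transfer.

```python
import hashlib
import shutil

def sha256_of(path: str, block: int = 1 << 16) -> str:
    """Hash a file in fixed-size blocks so large files need little memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(block):
            digest.update(chunk)
    return digest.hexdigest()

def transfer_with_end_to_end_check(src: str, dst: str) -> None:
    expected = sha256_of(src)        # checksum computed at the sending end
    shutil.copyfile(src, dst)        # stands in for the actual network transfer
    if sha256_of(dst) != expected:   # re-read from disk at the receiving end
        raise IOError(f"end-to-end check failed for {dst}; retransfer needed")

# Example usage (hypothetical paths):
# transfer_with_end_to_end_check("report.dat", "/backup/report.dat")
```

Checks performed inside the network, such as per-hop link checksums, cannot replace this final comparison, because they do not cover the write to disk at the destination.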
According to the end-to-end principle, protocol features are justified in the lower layers of a system only if they are a performance optimization. From the application's point of view, TCP is such a lower layer: its retransmission for reliability is still justified because it improves performance, but since the application must still perform its own end-to-end check, efforts to improve TCP reliability should stop once peak performance has been reached.
The end-to-end principle has proved to work well for applications that require a high degree of data accuracy combined with a high tolerance for delay, such as file transfer, and much less well for real-time applications such as telephony, where low latency is more important than absolute data accuracy. The end-to-end model is also not appropriate for large multicast and broadcast networks, especially lossy ones such as wireless networks, because the retransmission overhead it imposes is too high for most applications to bear.[citation needed]
References
- Jerome H. Saltzer, David P. Reed, and David D. Clark. End-to-end arguments in system design. ACM Transactions on Computer Systems 2, 4 (November 1984), pp. 277–288. An earlier version appeared in the Second International Conference on Distributed Computing Systems (April 1981), pp. 509–512.
External links
- The Rise of the Middle and the Future of End-to-End: Reflections on the Evolution of the Internet Architecture (lots of references)
- E2E Argument (.pdf) Seminal paper
- E2E Argument (.txt)
- Active Networking and E2E