End-to-end principle


The end-to-end principle is a classic design principle of computer networking,[nb 1] first explicitly articulated in a 1981 conference paper by Saltzer, Reed, and Clark.[Ref 1] [nb 2]

The end-to-end principle states that application-specific functions ought to reside in the end hosts of a network rather than in intermediary nodes, provided they can be implemented "completely and correctly" in the end hosts. The basic intuition behind the original principle, which goes back to Baran's work in the early 1960s on obtaining reliability from unreliable parts, is that the payoffs from adding functions to the network diminish quickly, especially in those cases where the end hosts will have to re-implement those functions for reasons of "completeness and correctness" anyway, regardless of the efforts of the network.[nb 3] Moreover, pushing the application functions of just a few clients into the intermediate nodes of a network imposes an unfair performance penalty on all of its clients.

The canonical example for the end-to-end principle is that of arbitrarily reliable file transfer between two communication end points in a distributed network of nontrivial size.[Ref 2] The only way two end points can obtain perfect reliability for this file transfer is by positive acknowledgment of end-to-end checksums computed over the final file in its destination storage location on the destination machine. In such a system, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only as a performance optimization, useful to the vast majority of clients, but are incapable of anticipating the reliability requirements of the transfer application itself (because those requirements may be arbitrarily high).
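
The logic of this example can be sketched in a few lines of code. The following Python fragment is only an illustration of the argument, not any historical protocol; the function names, the use of SHA-256, and the send_file callable are assumptions introduced for the sketch.

    import hashlib

    def checksum_of(path, chunk_size=65536):
        """Compute an end-to-end checksum over a file as it sits in storage."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def transfer_until_verified(send_file, source_path, destination_path, max_attempts=5):
        """Retransfer until the checksum of the file at the destination matches the
        checksum at the source (a positive end-to-end acknowledgment), regardless of
        whatever hop-by-hop checks the network may already perform."""
        expected = checksum_of(source_path)
        for _ in range(max_attempts):
            send_file(source_path, destination_path)         # may corrupt or drop data
            if checksum_of(destination_path) == expected:     # end-to-end verification
                return True                                   # acknowledge success
        return False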

In debates about network neutrality, a common interpretation of the end-to-end principle is that it implies a neutral or "dumb" network.

Basic content of the principle

The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high reliability requirements of communicating processes separated by networks of nontrivial size is more costly, if attempted inside the network, than obtaining the required degree of reliability by positive end-to-end acknowledgements and retransmissions (referred to as PAR or ARQ).[nb 4] Put differently, it is far easier and more tractable to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network than in its intermediary nodes,[nb 5] especially when the latter are beyond the control of, and not accountable to, the former.[nb 6] An end-to-end PAR protocol with infinite retries can obtain arbitrarily high reliability from any network with a higher-than-zero probability of successfully transmitting data from one end to the other.[nb 7]
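
The effect of retries can be made concrete with a toy model. The Python sketch below is a minimal illustration of a stop-and-wait PAR loop over a simulated lossy channel; the per-attempt success probability and the transmit callable are assumptions invented for the example.

    import random

    def send_with_par(transmit, max_retries):
        """Stop-and-wait positive acknowledgment and retransmission (PAR/ARQ):
        keep retransmitting until the far end acknowledges, or the retry budget
        is exhausted."""
        for _ in range(max_retries + 1):
            if transmit():      # True means the end-to-end acknowledgment came back
                return True
        return False            # residual failure probability: (1 - p) ** (max_retries + 1)

    # Toy channel: each end-to-end attempt independently succeeds with probability p > 0,
    # so the residual failure probability can be driven arbitrarily close to zero by
    # allowing more retries.
    p = 0.7
    ok = send_with_par(transmit=lambda: random.random() < p, max_retries=10)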

The end-to-end principle does not trivially extend to functions beyond end-to-end error control and correction. For example, no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. Based on a personal communication with Saltzer (lead author of the original end-to-end paper[Ref 2]), Blumenthal and Clark note in a 2001 paper:[Ref 15]
[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the end-points; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place. (p. 80)

The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Also, noteworthy formulations of the end-to-end principle can be found prior to the seminal 1981 Saltzer, Reed, and Clark paper.[Ref 2]

The basic notion: reliability from unreliable parts

In the 1960s, Paul Baran and Donald Davies in their pre-Arpanet elaborations of networking made brief comments about reliability that capture the essence of the later end-to-end principle. To quote from a 1964 Baran paper:[Ref 16]
Reliability and raw error rates are secondary. The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist. (p. 5)
Similarly, Davies notes on end-to-end error control:[Ref 17]
It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated. (p. 2.3)

Early trade-offs: experiences in the Arpanet

The Arpanet was the first large-scale general-purpose packet-switching network. It implemented several of the basic notions previously touched on by Baran and Davies, and it demonstrated a number of important aspects of the end-to-end principle:

Packet switching pushes some logical functions toward the communication end points
If the basic premise of a distributed network is packet switching, then functions such as reordering and duplicate detection inevitably have to be implemented at the logical end points of such a network (a schematic sketch of this destination-side work follows this list). Consequently, the Arpanet featured two distinct levels of functionality – (1) a lower level concerned with transporting data packets between neighboring network nodes (called IMPs), and (2) a higher level concerned with various end-to-end aspects of the data transmission.[nb 8] Dave Clark, one of the authors of the end-to-end principle paper, concludes:[Ref 18] "The discovery of packets is not a consequence of the end-to-end argument. It is the success of packets that make the end-to-end argument relevant" (slide 31).
No arbitrarily reliable data transfer without end-to-end acknowledgment and retransmission mechanisms
The Arpanet was designed to provide reliable data transport between any two end points of the network, much like a simple I/O channel between a computer and a nearby peripheral device.[nb 9] In order to remedy any potential failures of packet transmission, normal Arpanet messages were handed from one node to the next with a positive acknowledgment and retransmission scheme; after a successful handover they were then discarded,[nb 10] and no source-to-destination retransmission in case of packet loss was catered for. However, in spite of significant efforts, the perfect reliability envisaged in the initial Arpanet specification turned out to be impossible to provide, a reality that became increasingly obvious once the Arpanet grew well beyond its initial four-node topology.[nb 11] The Arpanet thus provided a strong case for the inherent limits of network-based hop-by-hop reliability mechanisms in the pursuit of true end-to-end reliability.[nb 12]
Trade-off between reliability, latency, and throughput
The pursuit of perfect reliability may hurt other relevant parameters of a data transmission, most importantly latency and throughput. This matters particularly for applications that do not require perfect reliability but instead value predictable throughput and low latency, the classic example being interactive real-time voice applications. This use case was catered for in the Arpanet by providing a raw message service that dispensed with various reliability measures in order to offer a faster, lower-latency data transmission service to the end hosts.[nb 13]
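
As a schematic illustration of the receiving end point's job described in the first item above (reordering and duplicate culling), consider the following Python sketch. The sequence-numbered tuples are an assumption of the example, not the Arpanet's actual message format.

    def reassemble(packets):
        """Destination-side reordering and duplicate culling for a packet-switching
        network: packets may arrive out of order or more than once, so the receiving
        end point sorts by sequence number and keeps one copy of each."""
        by_seq = {}
        for seq, payload in packets:
            by_seq.setdefault(seq, payload)      # ignore duplicate arrivals
        return b"".join(by_seq[seq] for seq in sorted(by_seq))

    # Packets 0..2 arrive as 1, 0, 1 (duplicate), 2 and are still reassembled
    # into the original message.
    assert reassemble([(1, b"lo "), (0, b"hel"), (1, b"lo "), (2, b"world")]) == b"hello world"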

The canonical case: TCP/IP

In the Internet, the Internet Protocol (IP), a connectionless datagram service with no delivery guarantees and effectively no QoS parameters, is used for nearly all communications. Arbitrary protocols may sit on top of IP. Some applications (such as voice, in many cases) do not need reliable retransmission, and so the only reliability in IP is the checksum of the IP header, which is necessary to prevent bit errors from sending packets down wild routing paths. End-to-end acknowledgment and retransmission are relegated to the connection-oriented TCP, which sits on top of IP. The functional split between IP and TCP exemplifies the proper application of the end-to-end principle to transport protocol design. In addition, to function properly a network must have methods for shedding or rejecting loads that would otherwise cause it to thrash and collapse (think of the "busy signal" on a telephone network). The vast majority of applications on the Internet use TCP for communications. Remarkably, it took fully seven years after TCP was standardized for Van Jacobson and Karels to devise end-to-end congestion control algorithms for TCP, which adaptively, and in a distributed fashion, scale back transmission rates to shed load from an overloaded Internet.
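
The header checksum that IP retains is the standard Internet checksum, the 16-bit one's-complement sum described in RFC 1071. A minimal Python sketch of that computation follows; the function name is ours, and a real IP stack performs this in the kernel or in hardware.

    def internet_checksum(header: bytes) -> int:
        """16-bit one's-complement checksum as used for the IPv4 header (RFC 1071)."""
        if len(header) % 2:
            header += b"\x00"                            # pad to whole 16-bit words
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]    # add each 16-bit word
            total = (total & 0xFFFF) + (total >> 16)     # fold any carry back in
        return ~total & 0xFFFF                           # one's complement of the sum

    # A receiver recomputes the sum over the whole header, checksum field included;
    # an undamaged header then yields a result of zero.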

Another canonical case: the Mobile Internet

The mobile Internet is a typical case where end-to-end engineering is mandatory: good network design and optimization cannot be performed without an end-to-end view. This is because, on the end-to-end path (from the content server to the application running on the terminal, and on to the end user), the radio leg introduces very specific behaviour that disturbs performance. The whole transmission chain is affected because end-to-end mechanisms, specifically TCP, create strong coupling between the elements of the chain: server-side TCP and HTTP, congestion in the fixed network, radio quality fluctuations, radio cell changes due to mobility, congestion over the radio link, the terminal TCP stack, the terminal OS, and last but not least the end-user behaviour. In this setting the TCP design logic is not well suited: TCP was designed to overcome congestion issues over the fixed Internet and interprets packet loss as a sign of congestion; in a mobile network, packet loss may instead be due to a cell change or temporary poor coverage, and TCP should not interpret it as congestion and respond by decreasing the sending rate, but should rather keep sending data steadily so that the radio pipe fills again as soon as the radio link recovers.
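
The mismatch can be illustrated with a toy model of TCP's loss response. The sketch below is a deliberately simplified caricature of additive-increase/multiplicative-decrease (AIMD) behaviour, not the algorithm of any particular TCP implementation; the window sizes and loss pattern are invented for the example.

    def aimd_window(loss_events, cwnd=10.0, increase=1.0):
        """Toy AIMD loop: the congestion window grows by `increase` segments per round
        and is halved on every loss event, regardless of whether the loss was caused
        by congestion or by a transient radio problem."""
        trace = []
        for lost in loss_events:
            cwnd = max(1.0, cwnd / 2.0) if lost else cwnd + increase
            trace.append(cwnd)
        return trace

    # A burst of radio-induced losses (with no congestion anywhere on the path) still
    # collapses the sending rate, which is exactly the mismatch described above.
    print(aimd_window([False] * 5 + [True] * 3 + [False] * 5))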

Limitations of the principle

The most important limitation of the end-to-end principle is that its basic conclusion, that of putting functions in the application end points rather than in the intermediary nodes, is not trivial to operationalize. Specifically:

  • it assumes a notion of distinct application end points as opposed to intermediary nodes that makes little sense when considering the structure of distributed applications;
  • it assumes a dichotomy between non-application-specific and application-specific functions (the former to be part of the operations between application end points and the latter to be implemented by the application end points themselves) while arguably no function to be performed in a network is fully orthogonal to all possible application needs;
  • it remains silent on functions that may not be implemented "completely and correctly" in the application end points and places no upper bound on the amount of application-specific functions that may be placed with intermediary nodes on grounds of performance considerations, economic trade-offs, etc.

Notes

  1. See Denning's Great Principles of Computing
  2. The 1981 paper[Ref 1] was published in ACM's TOCS in an updated version in 1984.[Ref 2][Ref 3]
  3. The full quote from the Saltzer, Reed, Clark paper reads:[Ref 2]
    In a system that includes communications, one usually draws a modular boundary around the communication subsystem and defines a firm interface between it and the rest of the system. When doing so, it becomes apparent that there is a list of functions each of which might be implemented in any of several ways: by the communication subsystem, by its client, as a joint venture, or perhaps redundantly, each doing its own version. In reasoning about this choice, the requirements of the application provide the basis for the following class of arguments: The function in question can completely and correctly be implemented only with the knowledge and help of the application standing at the endpoints of the communication system. Therefore, providing that questioned function as a feature of the communication system itself is not possible, and moreover, produces a performance penalty for all clients of the communication system. (Sometimes an incomplete version of the function provided by the communication system may be useful as a performance enhancement.) We call this line of reasoning against low-level function implementation the end-to-end argument. (p. 278)
  4. In fact, even in local area networks there is a non-zero probability of communication failure: "attention to reliability at higher levels is required regardless of the control strategy of the network".[Ref 4]
  5. Put in economics terms, the marginal cost of additional reliability in the network exceeds the marginal cost of obtaining the same additional reliability by measures in the end hosts. The economically efficient level of reliability improvement inside the network depends on the specific circumstances; however, it is certainly nowhere near zero:[Ref 2]
    Clearly, some effort at the lower levels to improve network reliability can have a significant effect on application performance. (p. 281)
  6. The possibility of enforceable contractual remedies notwithstanding, it is impossible for any network in which intermediary resources are shared in a non-deterministic fashion to guarantee perfect reliability. At most, it may quote statistical performance averages.
  7. More precisely:[Ref 5]
    A correctly functioning PAR protocol with infinite retry count never loses or duplicates messages. [Corollary:] A correctly functioning PAR protocol with finite retry count never loses or duplicates messages, and the probability of failing to deliver a message can be made arbitrarily small by the sender. (p. 3)
  8. In accordance with the Arpanet RFQ[Ref 9] (pp. 47 f.) the Arpanet conceptually separated certain functions. As BBN point out in a 1977 paper:[Ref 10]
    [T]he ARPA Network implementation uses the technique of breaking messages into packets to minimize the delay seen for long transmissions over many hops. The ARPA Network implementation also allows several messages to be in transit simultaneously between a given pair of Hosts. However, the several messages and the packets within the messages may arrive at the destination IMP out of order, and in the event of a broken IMP or line, there may be duplicates. The task of the ARPA Network source-to-destination transmission procedure is to reorder packets and messages at their destination, to cull duplicates, and after all the packets of a message have arrived, pass the message on to the destination Host and return an end-to-end acknowledgment. (p. 284)
  9. This requirement was spelled out in the Arpanet RFQ:[Ref 9]
    From the point of view of the ARPA contractors as users of the network, the communication subnet is a self-contained facility whose software and hardware is maintained by the network contractor. In designing Interconnection Software we should only need to use the I/O conventions for moving data into and out of the subnet and not otherwise be involved in the details of subnet operation. Specifically, error checking, fault detection, message switching, fault recovery, line switching, carrier failures and carrier quality assessment, as required to guarantee reliable network performance, are the sole responsibility of the network contractor. (p. 25)
  10. Notes Walden in a 1972 paper:[Ref 11]
    Each IMP holds on to a packet until it gets a positive acknowledgment from the next IMP down the line that the packet has been properly received. If it gets the acknowledgment, all is well; the IMP knows that the next IMP now has responsibility for the packet and the transmitting IMP can discard its copy of the packet. (p. 11)
  11. By 1973, BBN acknowledged that the initial aim of perfect reliability inside the Arpanet was not achievable:[Ref 12]
    Initially, it was thought that the only components in the network design that were prone to errors were the communications circuits, and the modem interfaces in the IMPs are equipped with a CRC checksum to detect "almost all" such errors. The rest of the system, including Host interfaces, IMP processors, memories, and interfaces, were all considered to be error-free. We have had to re-evaluate this position in the light of our experience. (p. 1)
    In fact, as Metcalfe summarizes by 1973,[Ref 13] "there have been enough bits in error in the Arpanet to fill this quota [one undetected transmission bit error per year] for centuries" (p. 7-28). See also BBN Report 2816 (pp. 10 ff.)[Ref 14] for additional elaboration about the experiences gained in the first years of operating the Arpanet.
  12. Incidentally, the Arpanet also provides a good case for the trade-offs between the cost of end-to-end reliability mechanisms and the benefits to be obtained thereby. True end-to-end reliability mechanisms would have been prohibitively costly at the time, given that the specification allowed up to 8 host-level messages to be in flight at the same time between two end points, each having a maximum of more than 8000 bits. The amount of memory that would have been required to keep copies of all those data for possible retransmission, in case no acknowledgment came from the destination IMP, was too expensive to be worthwhile. As for host-based end-to-end reliability mechanisms, those would have added considerable complexity to the common host-level protocol (the Host-Host Protocol). While the desirability of host-host reliability mechanisms was articulated in RFC 1, after some discussion they were dispensed with (although higher-level protocols or applications were, of course, free to implement such mechanisms themselves). For an account of the debate at the time see Bärwolff 2010,[Ref 8] pp. 56-58 and the notes therein, especially notes 151 and 163.
  13. Early experiments with packet voice date back to 1971, and by 1972 more formal ARPA research on the subject commenced. As documented in RFC 660 (p. 2),[Ref 6] in 1974 BBN introduced the raw message service (Raw Message Interface, RMI) to the Arpanet, primarily in order to allow hosts to experiment with packet voice applications, but also acknowledging the use of such a facility in view of possible internetwork communication (cf. BBN Report 2913,[Ref 7] pp. 55 f.). See also Bärwolff 2010,[Ref 8] pp. 80-84 and the copious notes therein.

References
