Communications protocol
In the field of telecommunications, a communications protocol is the set of standard rules for data representation, signaling, authentication and error detection required to send information over a communications channel. An example of a simple communications protocol adapted to voice communication is the case of a radio dispatcher talking to mobile stations. Communication protocols for digital computer networks have many features intended to ensure reliable interchange of data over an imperfect communication channel. In essence, a communications protocol defines the rules that the communicating parties must follow for the system to work properly.
Network protocol design principles
Systems engineering principles have been applied to create a set of common network protocol design principles.[citation needed] These principles include effectiveness, reliability, and resiliency.
Effectiveness
A communications protocol needs to be specified in such a way that engineers, designers, and in some cases software developers can implement and/or use it. In human-machine systems, its design needs to facilitate routine usage by humans. Protocol layering accomplishes these objectives by dividing the protocol design into a number of smaller parts, each of which performs closely related sub-tasks, and interacts with other layers of the protocol only in a small number of well-defined ways.
Protocol layering allows the parts of a protocol to be designed and tested without a combinatorial explosion of cases, keeping each design relatively simple. The implementation of a sub-task on one layer can make assumptions about the behavior and services offered by the layers beneath it. Thus, layering enables a "mix-and-match" of protocols that permit familiar protocols to be adapted to unusual circumstances.
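As a rough illustration of this mix-and-match property, the sketch below models each layer as an object that only adds and removes its own header and treats everything handed down from above as an opaque payload. The layer names and the length-delimited header format are invented for illustration and do not correspond to any real protocol.

    # Hypothetical sketch: each layer only knows its own header format and
    # treats whatever the layer above hands it as an opaque payload.
    class Layer:
        def __init__(self, name: str):
            self.name = name

        def encapsulate(self, payload: bytes) -> bytes:
            # Prefix a simple header; real protocols define their own
            # headers (addresses, flags, checksums, ...).
            return f"{self.name}:{len(payload)}|".encode() + payload

        def decapsulate(self, data: bytes) -> bytes:
            header, _, payload = data.partition(b"|")
            assert header.startswith(self.name.encode())
            return payload

    # A stack is an ordered list of layers; swapping one layer for another
    # (say, a different link layer) leaves the rest of the stack unchanged.
    stack = [Layer("APP"), Layer("TRANSPORT"), Layer("NETWORK"), Layer("LINK")]

    message = b"hello"
    for layer in stack:                    # sender wraps top-down
        message = layer.encapsulate(message)
    for layer in reversed(stack):          # receiver unwraps bottom-up
        message = layer.decapsulate(message)
    assert message == b"hello"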
For an example that involves computing, consider an email protocol like the Simple Mail Transfer Protocol (SMTP). An SMTP client can send messages to any server that conforms to SMTP's specification. Actual applications can be (for example) an aircraft with an SMTP server receiving messages from a ground controller over a radio-based internet link. Any SMTP client can correctly interact with any SMTP server, because they both conform to the same protocol specification, RFC 2821.
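A minimal sketch of that interoperability, using Python's standard smtplib module; the host name and addresses below are placeholders, not real endpoints:

    # Any SMTP client can talk to any conforming SMTP server; the client
    # neither knows nor cares whether the server is reached over fiber or a
    # radio link. Host name and addresses are illustrative placeholders.
    import smtplib
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "controller@ground.example.org"
    msg["To"] = "crew@aircraft.example.org"
    msg["Subject"] = "Clearance update"
    msg.set_content("Climb and maintain FL350.")

    with smtplib.SMTP("mail.example.org", 25) as server:
        server.send_message(msg)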
The following list informally provides some examples of layers, some required functionalities, and some protocols that implement them, all from the realm of computing protocols.
- At the lowest level, bits are encoded in electrical, light or radio signals by the Physical layer. Some examples include RS-232, SONET, and WiFi.
- A somewhat higher Data link layer such as the point-to-point protocol (PPP) may detect errors and configure the transmission system.
- An even higher protocol may perform network functions. One very common protocol is the Internet Protocol (IP), which implements addressing for a large set of protocols. A common associated protocol is the Transmission Control Protocol (TCP), which implements error detection and correction (by retransmission). TCP and IP are often paired, giving rise to the familiar abbreviation TCP/IP.
- A layer in charge of presentation might describe how to encode text (e.g. ASCII or Unicode).
- An application protocol like SMTP may (among other things) describe how to transmit electronic mail messages.
These different tasks show why there's a need for a software architecture or reference model that systematically places each task into context.
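To make this division of labour concrete, the hedged sketch below opens an ordinary TCP connection from Python. The physical, data link, network (IP) and transport (TCP) layers are handled by the operating system and the network hardware; the application only chooses the text encoding and the application-level request (the host example.com and the HTTP request are illustrative choices, not taken from the article).

    import socket

    # The application talks to the transport layer through the socket API;
    # IP routing, link framing and physical signalling all happen below it.
    with socket.create_connection(("example.com", 80)) as sock:
        # Presentation concern: encode the text of the request as ASCII bytes.
        request = "HEAD / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"
        sock.sendall(request.encode("ascii"))
        # TCP delivers the reply bytes in order; the application protocol
        # (here HTTP) gives those bytes their meaning.
        reply = sock.recv(4096)
        print(reply.decode("ascii", errors="replace").splitlines()[0])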
The reference model usually used for protocol layering is the OSI seven layer model, which can be applied to any protocol, not just the OSI protocols of the International Organization for Standardization (ISO). In particular, the Internet Protocol can be analysed using the OSI model.
Reliability
Assuring reliability of data transmission involves error detection and correction, or some means of requesting retransmission. It is a truism that communication media are always faulty. The conventional measure of quality is the number of failed bits per bits transmitted. This has the useful feature of being a dimensionless figure of merit that can be compared across any speed or type of communication media.
In telephony, links with bit error rates (BER) of 10⁻⁴ or more are regarded as faulty (they interfere with telephone conversations), while links with a BER of 10⁻⁵ or more should be dealt with by routine maintenance (they can be heard).
Data transmission often requires bit error rates below 10⁻¹². Computer data transmissions are so frequent that larger error rates would affect operations of customers like banks and stock exchanges. Since most transmissions use networks with telephonic error rates, the errors caused by these networks must be detected and then corrected.
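For a rough sense of scale, the arithmetic below compares the expected number of errored bits per day at telephonic and data-grade error rates; the 1 Gbit/s link speed is an assumption chosen for the example, not a figure from the article.

    # Illustrative arithmetic: expected errored bits per day at various BERs.
    # The 1 Gbit/s link speed is an assumed figure for the example.
    link_bps = 1_000_000_000          # 1 Gbit/s
    seconds_per_day = 86_400

    for ber in (1e-4, 1e-5, 1e-12):
        errored_bits_per_day = link_bps * ber * seconds_per_day
        print(f"BER {ber:g}: about {errored_bits_per_day:,.0f} errored bits per day")

    # BER 1e-4  -> about 8,640,000,000 errored bits per day
    # BER 1e-12 -> about 86 errored bits per day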
Communications systems detect errors by transmitting a summary of the data with the data. In TCP (the Internet's Transmission Control Protocol), a checksum of the packet's data bytes is sent in each packet's header. Simple arithmetic sums do not detect out-of-order data or cancelling errors. A bit-wise binary polynomial, a cyclic redundancy check (CRC), can detect these errors and more, but is slightly more expensive to calculate.
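The difference can be illustrated with a small example: a plain byte sum cannot distinguish reordered data, while the CRC-32 available in Python's zlib module can. (The checksum TCP actually uses is a 16-bit ones'-complement sum and is slightly more involved than the plain sum shown here.)

    import zlib

    original = b"\x01\x02\x03\x04"
    reordered = b"\x04\x03\x02\x01"   # same bytes, different order

    # A simple arithmetic sum cannot tell the two apart...
    assert sum(original) == sum(reordered)

    # ...but a cyclic redundancy check (a bit-wise binary polynomial) can.
    assert zlib.crc32(original) != zlib.crc32(reordered)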
Communication systems correct errors by selectively resending bad parts of a message. For example, in TCP when a checksum is bad, the packet is discarded. When a packet is lost, the receiver acknowledges all of the packets up to, but not including, the failed packet. Eventually, the sender sees that too much time has elapsed without an acknowledgement, so it resends all of the packets that have not been acknowledged. At the same time, the sender backs off its rate of sending, in case the packet loss was caused by saturation of the path between sender and receiver. (Note: this is an over-simplification; see TCP and congestion collapse for more detail.)
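The sketch below restates that logic in Python at the same level of over-simplification; it is not the real TCP state machine, and send_packet() and highest_cumulative_ack() are hypothetical helpers standing in for the network and the receiver.

    import time

    # Over-simplified sender loop: resend everything past the highest
    # cumulative acknowledgement, and back off when no progress is made.
    def send_reliably(packets, send_packet, highest_cumulative_ack,
                      timeout=1.0, rate=10.0):
        base = 0                                  # first unacknowledged packet
        while base < len(packets):
            for seq in range(base, len(packets)): # (re)send unacknowledged data
                send_packet(seq, packets[seq])
                time.sleep(1.0 / rate)            # pace transmissions
            time.sleep(timeout)                   # wait for acknowledgements
            acked = highest_cumulative_ack()      # receiver acks up to, but not
            if acked > base:                      # including, the first gap
                base = acked
            else:
                # No progress: the loss may be due to a saturated path,
                # so halve the sending rate before retransmitting.
                rate = max(1.0, rate / 2)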
In general, the performance of TCP is severely degraded in conditions of high packet loss (more than 0.1%), due to the need to resend packets repeatedly. For this reason, TCP/IP connections are typically either run on highly reliable fiber networks, or over a lower-level protocol with added error-detection and correction features (such as modem links with ARQ). These connections typically have uncorrected bit error rates of 10⁻⁹ to 10⁻¹², ensuring high TCP/IP performance.
Resiliency
Resiliency addresses a form of network failure known as topological failure, in which a communications link is cut or degrades below usable quality. Most modern communication protocols periodically send messages to test a link. On T1 telephone lines, for example, a framing bit is sent with every 193-bit frame (one framing bit for every 24 channels of 8 bits). In phone systems, when "sync is lost", fail-safe mechanisms reroute the signals around the failing equipment.
In packet switched networks, the equivalent functions are performed using router update messages to detect loss of connectivity.
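A hedged sketch of the general idea follows: a monitor sends periodic probe messages and fails over to a backup route after several consecutive probes go unanswered. probe_link() and switch_to_backup_route() are hypothetical placeholders, not calls from any real routing software.

    import time

    # Illustrative keepalive loop: declare the link down after several
    # consecutive missed probes and reroute around it.
    def monitor_link(probe_link, switch_to_backup_route,
                     interval=1.0, max_missed=3):
        missed = 0
        while True:
            if probe_link():                   # e.g. a hello/echo message answered
                missed = 0
            else:
                missed += 1
                if missed >= max_missed:
                    switch_to_backup_route()   # topological failure: reroute
                    missed = 0
            time.sleep(interval)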
Standards organizations
Most recent protocols are assigned by the IETF for Internet communications, and by the IEEE or the ISO for other types. The ITU-T handles telecommunications protocols and formats for the public switched telephone network (PSTN). The ITU-R handles protocols and formats for radio communications. As the PSTN, radio systems, and the Internet converge, the different sets of standards are also being driven towards technological convergence.
Protocol families
A number of major protocol stacks or families exist, including the following:
Open standards:
- Internet protocol suite (TCP/IP)
- Open Systems Interconnection (OSI)
- FTP
- UPnP (Universal Plug and Play)
- iSCSI
- Network File System (NFS)
Proprietary standards:
- AppleTalk
- DECnet
- IPX/SPX
- Server Message Block (SMB) and CIFS
- Systems Network Architecture (SNA)
- Distributed Systems Architecture (DSA)
- Apple Filing Protocol (AFP)
- RSYNC
- Unison
See also
- Protocol (computing)
- Connection-oriented protocol
- Connectionless protocol
- List of network protocols
- Network architecture
- Congestion collapse
- Tunneling protocol
- HTTP
- FTP