Ethernet flow control

[Figure: Wireshark screenshot of an Ethernet PAUSE frame]

Ethernet flow control is a mechanism for temporarily stopping the transmission of data on Ethernet family computer networks. The first flow control mechanism, the PAUSE frame, was defined by the IEEE 802.3x standard.

The follow-on priority-based flow control, as defined in the IEEE 802.1Qbb standard, provides a link-level flow control mechanism that can be controlled independently for each Class of Service (CoS), as defined by IEEE P802.1p. The goal of this mechanism is to ensure zero loss under congestion in data center bridging (DCB) networks.

Description

Ethernet is a popular family of computer network protocols. Flow control can be implemented at the data link layer. A sending station (computer or network switch) may be transmitting data faster than the other end of the link can accept it. The first flow control mechanism, the PAUSE frame, was defined by the Institute of Electrical and Electronics Engineers (IEEE) task force that defined full duplex Ethernet link segments. The IEEE standard 802.3x was issued in 1997.[1]

Pause frame

An overwhelmed network node can send a PAUSE frame, which halts the transmission of the sender for a specified period of time. A media access control (MAC) frame is used to carry the PAUSE command, with the Control opcode set to 0x0001 (hexadecimal).[2] Only stations configured for full-duplex operation may send PAUSE frames. When a station wishes to pause the other end of a link, it sends a PAUSE frame either to the unique 48-bit address of the station at the other end of the link or to the 48-bit reserved multicast address 01-80-C2-00-00-01.[3]:Annex 31B.3.3 The use of a well-known address makes it unnecessary for a station to discover and store the address of the station at the other end of the link.

Another advantage of using this multicast address arises from the use of flow control between network switches. The particular multicast address used is selected from a range of addresses reserved by the IEEE 802.1D standard, which specifies the operation of switches used for bridging. Normally, a frame with a multicast destination sent to a switch will be forwarded out all other ports of the switch. However, this range of multicast addresses is special and will not be forwarded by an 802.1D-compliant switch. Instead, frames sent to an address in this range are understood to be frames meant to be acted upon only within the switch.
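
The forwarding rule can be summarized in a few lines. The following sketch is illustrative only; the constant name and helper function are the author's, not taken from the standard. It shows the check an 802.1D-compliant bridge effectively applies: destinations in the reserved block 01-80-C2-00-00-00 through 01-80-C2-00-00-0F, which includes the PAUSE address, are consumed by the bridge rather than forwarded.

    # Illustrative sketch (not from the cited standards): the 802.1D reserved
    # group-address block is consumed by a bridge instead of being forwarded.
    RESERVED_PREFIX = bytes.fromhex("0180c20000")

    def bridge_should_forward(dest_mac: bytes) -> bool:
        """Return False for the reserved 802.1D block (which includes the PAUSE
        address 01-80-C2-00-00-01), True for ordinary destinations."""
        return not (dest_mac[:5] == RESERVED_PREFIX and dest_mac[5] <= 0x0F)

    assert bridge_should_forward(bytes.fromhex("0180c2000001")) is False  # PAUSE address
    assert bridge_should_forward(bytes.fromhex("01005e000001")) is True   # ordinary multicast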

A PAUSE frame includes the period of pause time being requested, in the form of a two-byte unsigned integer (0 through 65535). The pause time is measured in units of pause "quanta", where each quantum is equal to 512 bit times.
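
To make the frame layout concrete, the sketch below packs the fields described above: the reserved multicast destination, the MAC Control EtherType (0x8808), the PAUSE opcode 0x0001, and the two-byte pause time in quanta, padded to the minimum payload size. It is a hedged illustration rather than a reference implementation; the function names and example MAC address are hypothetical. It also converts quanta to seconds, since one quantum is 512 bit times and therefore depends on the link speed.

    # Minimal sketch of the PAUSE frame layout (illustrative only). The FCS is
    # omitted because the NIC normally appends it.
    import struct

    PAUSE_MCAST = bytes.fromhex("0180c2000001")   # reserved multicast destination
    MAC_CONTROL_ETHERTYPE = 0x8808                # MAC Control frames
    PAUSE_OPCODE = 0x0001

    def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
        """Build a PAUSE frame requesting pause_quanta quanta (0..65535)."""
        payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
        payload += bytes(46 - len(payload))        # pad payload to the 46-byte minimum
        return PAUSE_MCAST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

    def pause_duration_seconds(pause_quanta: int, link_speed_bps: int) -> float:
        """One quantum is 512 bit times, so its duration depends on link speed."""
        return pause_quanta * 512 / link_speed_bps

    frame = build_pause_frame(bytes.fromhex("020000000001"), pause_quanta=65535)
    print(len(frame))                                    # 60 bytes before the FCS
    print(pause_duration_seconds(65535, 1_000_000_000))  # ~33.6 ms at 1 Gbit/s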

By 1999, several vendors supported receiving pause frames, but fewer implemented sending them.[4][5] Pause frames have several disadvantages.

Issues

One original motivation for the pause frame was to handle network interface controllers (NICs) that did not have enough buffering to handle full-speed reception. This problem is less common with advances in bus speeds and memory sizes. A more likely scenario is network congestion within a switch. For example, a flow can come into a switch on a higher-speed link than the one it goes out on, or several flows can come in over two or more links whose combined rate exceeds an output link's bandwidth. These will eventually exhaust any amount of buffering in the switch. However, blocking the sending link will cause all flows over that link to be delayed, even those that are not causing any congestion. This situation is a case of head-of-line (HOL) blocking, and can happen more often in core network switches due to the large numbers of flows generally being aggregated. Many switches use a technique called virtual output queues to eliminate HOL blocking internally, so they never send pause frames.[5]
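
As a rough illustration of why virtual output queues avoid the problem, the sketch below keeps one queue per output port at each input port, so a frame waiting for a congested output does not hold up frames destined for an idle output. It is a simplified, hypothetical model; real switch schedulers are considerably more involved.

    # Simplified, hypothetical sketch contrasting virtual output queues (VOQ)
    # with a single shared input FIFO that would suffer head-of-line blocking.
    from collections import deque, defaultdict

    class VOQInputPort:
        """Input port that keeps a separate queue for every output port."""
        def __init__(self):
            self.queues = defaultdict(deque)   # output port -> frames waiting for it

        def enqueue(self, frame, out_port):
            self.queues[out_port].append(frame)

        def dequeue_for(self, out_port):
            """Serve a frame destined for out_port; frames for congested outputs
            sit in their own queue and do not block this one."""
            q = self.queues[out_port]
            return q.popleft() if q else None

    # With a single FIFO, a head frame destined for a congested output would block
    # frames behind it; with VOQ the scheduler simply serves another queue.
    port = VOQInputPort()
    port.enqueue("frame-A", out_port="congested")
    port.enqueue("frame-B", out_port="idle")
    print(port.dequeue_for("idle"))   # "frame-B" is delivered despite frame-A waiting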

Subsequent efforts

Congestion management

Another effort began in March 2004, and in May 2004 it became the IEEE P802.3ar Congestion Management Task Force. In May 2006 the objectives of the task force were revised to specify a mechanism to limit the transmitted data rate at about 1% granularity. The request was withdrawn and the task force was disbanded in 2008.[6]

Priority flow control

Ethernet flow control interferes with the Ethernet class of service (defined in IEEE 802.1p), because traffic of all priorities is stopped in order to drain the existing buffers, which may largely hold low-priority data. As a remedy to this problem, Cisco Systems came up with its own priority flow control extension of the standard protocol. This mechanism uses 14 bytes of the 42-byte padding in a regular pause frame. The MAC Control opcode for a priority pause frame is 0x0101. Unlike the original PAUSE, priority pause indicates the pause time in quanta for each of eight priority classes separately.[7] The Priority-based Flow Control (PFC) project was authorized on March 27, 2008 as IEEE 802.1Qbb. Draft 2.3 was proposed on June 7, 2010, with Claudio DeSanti of Cisco as editor.[8] The effort was part of the Data Center Bridging Task Group, which also developed Fibre Channel over Ethernet.[9]
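
The sketch below illustrates the per-priority layout described above: the 0x0101 opcode, a priority-enable bit vector, and eight separate pause-time fields, one per priority class. Field packing details and names here are the author's illustration based on this description, not copied from the standard text.

    # Illustrative sketch of a priority (per-class) pause frame; assumptions are
    # noted in the lead-in above. Reuses the MAC Control EtherType 0x8808 and the
    # reserved multicast destination used for ordinary PAUSE frames.
    import struct

    PAUSE_MCAST = bytes.fromhex("0180c2000001")
    MAC_CONTROL_ETHERTYPE = 0x8808
    PFC_OPCODE = 0x0101

    def build_pfc_frame(src_mac: bytes, per_priority_quanta: list) -> bytes:
        """per_priority_quanta: eight pause times in quanta; 0 means 'do not pause'."""
        assert len(per_priority_quanta) == 8
        enable_vector = 0
        for prio, quanta in enumerate(per_priority_quanta):
            if quanta:
                enable_vector |= 1 << prio                   # mark this class as paused
        payload = struct.pack("!HH", PFC_OPCODE, enable_vector)
        payload += struct.pack("!8H", *per_priority_quanta)  # time[0]..time[7]
        payload += bytes(46 - len(payload))                  # pad to the minimum payload
        return PAUSE_MCAST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

    # Pause only priority 3 (e.g. a lossless storage class) for the maximum time,
    # leaving the other seven classes unaffected.
    frame = build_pfc_frame(bytes.fromhex("020000000002"), [0, 0, 0, 65535, 0, 0, 0, 0])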

References

  1. IEEE Standards for Local and Metropolitan Area Networks: Supplements to Carrier Sense Multiple Access With Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications - Specification for 802.3 Full Duplex Operation and Physical Layer Specification for 100 Mb/s Operation on Two Pairs of Category 3 Or Better Balanced Twisted Pair Cable (100BASE-T2). Institute of Electrical and Electronics Engineers. 1997. doi:10.1109/IEEESTD.1997.95611. ISBN 1-55937-905-7.
  2. IEEE Standards for Local and Metropolitan Area Networks: Supplements to Carrier Sense Multiple Access With Collision Detection (CSMA/CD) Access Method and Physical Layer Specifications - Specification for 802.3 Full Duplex Operation and Physical Layer Specification for 100 Mb/s Operation on Two Pairs of Category 3 Or Better Balanced Twisted Pair Cable (100BASE-T2). Institute of Electrical and Electronics Engineers. 1997. doi:10.1109/IEEESTD.1997.95611. ISBN 1-55937-905-7.
  3. "802.3-2012  IEEE Standard for Ethernet" (PDF). ieee.org. IEEE Standards Association. 2012-12-28. Retrieved 2014-02-09.
  4. Ann Sullivan; Greg Kilmartin; Scott Hamilton (September 13, 1999). "Switch Vendors pass interoperability tests". Network World. pp. 81–82. Retrieved May 10, 2011.
  5. "Vendors on flow control". Network World Fusion. September 13, 1999. Archived from the original on 2012-02-07. Vendor comments on flow control in the 1999 test.
  6. "IEEE P802.3ar Congestion Management Task Force". December 18, 2008. Retrieved May 10, 2011.
  7. "Priority Flow Control: Build Reliable Layer 2 Infrastructure" (PDF). White Paper. Cisco Systems. June 2009. Retrieved May 10, 2011.
  8. "IEEE 802.1Q Priority-based Flow Control". Institute of Electrical and Electronics Engineers. June 7, 2010. Retrieved May 10, 2011.
  9. "Data Center Bridging Task Group". Institute of Electrical and Electronics Engineers. June 7, 2010. Retrieved May 10, 2011.
