Link aggregation
From Wikipedia, the free encyclopedia
Link aggregation, or IEEE 802.3ad, is a computer networking technique that uses multiple Ethernet network cables/ports in parallel to increase the link speed beyond the limits of any single cable or port, and to increase redundancy for higher availability. Other terms for this include "Ethernet trunk", "NIC teaming", "port teaming", "port trunking", "EtherChannel", "Multi-Link Trunking (MLT)", "DMLT", "SMLT", "DSMLT", "R-SMLT", "NIC bonding", "Network Fault Tolerance (NFT)" and "link aggregation group" (LAG). Most implementations now conform to clause 43 of the IEEE 802.3 standard, informally referred to as "802.3ad".
A limitation of link aggregation is that all the physical ports in the link aggregation group must reside on the same switch. The SMLT, DSMLT and R-SMLT technologies remove this limitation by allowing the physical ports to be split between two switches.
Link aggregation and the network backbone
Link aggregation is an inexpensive way to set up a high-speed backbone network that transfers much more data than any single port or device can deliver. Although in the past various vendors used proprietary techniques, the preference today is to use the IEEE standard, which can be set up either statically or dynamically using the Link Aggregation Control Protocol (LACP). This allows several devices to communicate simultaneously at their full single-port speed while not allowing any one device to monopolize all available backbone capacity.
This has limitations: link aggregation was originally developed to provide redundancy, not bandwidth benefits. The actual benefit varies with the load-balancing method used on each device (different balancing algorithms can be configured at each end, and this is actually encouraged, to avoid path polarization).
The most common way to balance the traffic is to use L3 hashes. These hashes are calculated when the first connection is established and then kept in the device's memory for future use. This effectively limits each session to the maximum bandwidth of a single member link of the aggregate. This is the main reason why a 50/50 load balance is almost never reached in real-life implementations; 70/30 is more typical. More advanced distribution-layer switches can employ an L4 hash, which brings the balance closer to 50/50.
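The hashing scheme described above can be sketched as follows. This is an illustrative model only, not any vendor's actual algorithm: the function name, the use of MD5, and the key layout are all assumptions made for the sketch.

```python
# Illustrative sketch of hash-based member-link selection in an aggregate.
# Not a real switch implementation; the hash function and key layout are
# assumptions for demonstration purposes.
import hashlib

def choose_link(src_ip, dst_ip, num_links, src_port=None, dst_port=None):
    """Pick an egress member-link index from flow identifiers.

    With only IPs (an "L3 hash"), every session between the same two
    hosts lands on the same member link, capping that host pair at one
    member's bandwidth. Adding transport ports (an "L4 hash") lets
    different sessions between the same host pair use different links.
    """
    key = f"{src_ip}-{dst_ip}"
    if src_port is not None and dst_port is not None:
        key += f"-{src_port}-{dst_port}"
    digest = hashlib.md5(key.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_links

# L3 hash: repeated lookups for the same host pair always give one link.
l3_links = {choose_link("10.0.0.1", "10.0.0.2", 2) for _ in range(5)}

# L4 hash: 50 sessions (different source ports) between the same hosts
# spread across both member links of a 2-link aggregate.
l4_links = {choose_link("10.0.0.1", "10.0.0.2", 2, 40000 + p, 80)
            for p in range(50)}
```

Because the L3 hash is deterministic per host pair, frames within a session stay in order, but a single pair of hosts can never exceed one member link's speed.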
Link aggregation also allows the network's backbone speed to grow incrementally as demand on the network increases, without having to replace everything and buy new hardware.
For most backbone installations it is common to install more cabling or fiber optic pairs than are initially necessary, even if there is no immediate need for the additional cabling. This is done because labor costs are higher than the cost of the cable and running extra cable reduces future labor costs if networking needs change. Link aggregation can allow the use of these extra cables to increase backbone speeds for little or no extra cost if ports are available.
Link aggregation size and using ports efficiently
Trunking becomes inefficient beyond a certain bandwidth depending on the total number of ports on the switch equipment. A 24-port gigabit switch with two 8-gigabit trunks is using sixteen of its available ports just for the two trunks, and leaves only eight of its 1-gigabit ports for other devices. This same configuration on a 48-port gigabit switch leaves 32 1-gigabit ports available, and so it is much more efficient (assuming of course that those ports are actually needed at the switch location).
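The port-budget arithmetic above can be made explicit with a tiny helper (illustrative only, not part of any switch API):

```python
# Worked version of the port-budget arithmetic: how many ports remain
# for end devices after dedicating some to backbone trunks.
def free_ports(total_ports, num_trunks, links_per_trunk):
    """Ports left over after reserving ports for trunk members."""
    return total_ports - num_trunks * links_per_trunk

# 24-port switch, two 8-link trunks: only 8 ports remain for devices.
small_switch = free_ports(24, 2, 8)

# The same two trunks on a 48-port switch leave 32 ports free.
large_switch = free_ports(48, 2, 8)
```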
When 40-50% of the switch ports are being utilized for backbone trunking, upgrading to a switch with either more ports or a higher base-operating speed may be a better option than simply adding more switches, especially if the old switch can be re-used elsewhere on a less performance-critical part of the network.
Link aggregation of network interface cards
Trunking is not just for the core switching equipment. Network Interface Cards (NICs) can also sometimes be trunked together to form network links beyond the speed of any one single NIC. For example, this allows a central file server to establish a 2-gigabit connection using two 1-gigabit NICs trunked together.
Note that Microsoft Windows does not natively support link aggregation (at least up to Windows Server 2003)[1]; however, some manufacturers provide software for aggregation on their multiport NICs at the device-driver layer.
In Linux, FreeBSD, NetBSD, OpenBSD, OpenSolaris, VMware ESX Server, and commercial Unixes such as AIX, Ethernet bonding (trunking) is implemented at a higher level and can hence deal with NICs from different manufacturers or drivers, as long as the NIC is supported by the kernel.
Link aggregation of different types of cabling and speeds
Typically the ports used in a trunk should all be of the same type, such as all copper ports (CAT-5E/CAT-6), all multi-mode fiber ports (SX), or all single-mode fiber ports (LX).[citation needed]
The ports also need to operate at the same speed. It is possible to trunk 100-megabit ports together, but trunking a 100-megabit port with a gigabit port is a bad idea: traffic is distributed among member ports with no regard to individual port speeds, so the effective throughput of a 100-megabit + gigabit aggregate will be only about 200 megabit. Mixing port speeds within a trunk is nonetheless technically supported by the 802.3ad standard. Ports operating in different duplex modes will not aggregate; a half-duplex port cannot be aggregated with a full-duplex port.
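The mixed-speed penalty described above follows from even distribution: if each member carries an equal share of the frames, the slowest member sets the pace. A minimal sketch of that reasoning (an illustrative model, not a measured result):

```python
# Sketch of why mixing member speeds wastes capacity in an aggregate.
# Assumes traffic is spread evenly across members regardless of speed,
# as described in the text, so the slowest link caps every member's
# usable share.
def effective_throughput_mbit(member_speeds_mbit):
    """Approximate aggregate throughput under even frame distribution."""
    return min(member_speeds_mbit) * len(member_speeds_mbit)

# 100-megabit + gigabit aggregate: only ~200 megabit usable.
mixed = effective_throughput_mbit([100, 1000])

# Two matched gigabit ports: the full 2 gigabit is available.
matched = effective_throughput_mbit([1000, 1000])
```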
Link aggregation support and cross-brand compatibility
A limitation of link aggregation is the need to avoid reordering Ethernet frames. That goal is approximated by sending all frames associated with a particular session across the same link[2]. Depending on the traffic, this may not provide an even distribution across the links in the trunk.
Most gigabit trunking is now based on clause 43 of the IEEE 802.3 standard, added in March 2000 by the IEEE 802.3ad task force.[3] Other proprietary trunking protocols existed before this standard was established; examples include Cisco's Port Aggregation Protocol (PAgP), Adaptec's Duralink trunking, and Nortel's Multi-Link Trunking (MLT). These custom trunking protocols typically only work for interconnecting equipment from the same manufacturer or product line.
Even though many manufacturers now implement the standard, issues may occur (for example Ethernet auto-negotiation[4]). Testing before production implementation is prudent.
Intel has released a package for Linux called Advanced Networking Services (ANS) to bind Intel Fast Ethernet and Gigabit cards.[5] Also, newer Linux kernels support bonding between NICs of the same type.
See also
- Multi-Link Trunking (MLT): Nortel's link aggregation technology
- Split Multi-Link Trunking (SMLT): Nortel's proprietary enhancement allowing a trunk's ports to be split between two switches
- Routed SMLT (R-SMLT): Nortel's proprietary enhancement extending SMLT to routed environments
References
- ^ LACP (802.3ad) on Windows 2003
- ^ http://grouper.ieee.org/groups/802/3/hssg/public/apr07/frazier_01_0407.pdf
- ^ IEEE 802.3ad Link Aggregation Task Force
- ^ Carrier Ethernet World Congress Interoperability Report p11
- ^ Intel Advanced Networking Services