RDMA over Converged Ethernet
RDMA over Converged Ethernet (RoCE) is a network protocol that allows remote direct memory access (RDMA) over an Ethernet network. There are two RoCE versions, RoCE v1 and RoCE v2. RoCE v1 is a link layer protocol and hence allows communication between any two hosts in the same Ethernet broadcast domain. RoCE v2 is an internet layer protocol, which means that RoCE v2 packets can be routed. Although the RoCE protocol benefits from the characteristics of a converged Ethernet network, the protocol can also be used on a traditional or non-converged Ethernet network.[1][2]
Background
Network-intensive applications like networked storage or cluster computing need a network infrastructure with high bandwidth and low latency. The advantages of RDMA over other network application programming interfaces such as Berkeley sockets are lower latency, lower CPU load and higher bandwidth.[3] The RoCE protocol allows lower latencies than its predecessor, the iWARP protocol.[4] RoCE host channel adapters (HCAs) exist with a latency as low as 1.3 microseconds,[5][6] while the lowest known iWARP HCA latency in 2011 was 3 microseconds.[7]
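These advantages follow from the verbs programming model used by RDMA transports, in which an application registers its buffers with the adapter once and the adapter then reads and writes them directly, avoiding per-message copies through the kernel. The following minimal C sketch, based on the libibverbs API, shows the registration steps that take the place of a socket's send and receive path; queue pair creation and the exchange of keys with the remote peer are omitted, and the 4 KiB buffer size is an arbitrary choice.

```c
/* Minimal libibverbs setup sketch: open an RDMA-capable device and
 * register a buffer so the adapter can access it directly (zero copy).
 * Compile with: gcc roce_reg.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs || num == 0) {
        fprintf(stderr, "no RDMA devices found\n");
        return 1;
    }

    struct ibv_context *ctx = ibv_open_device(devs[0]);
    if (!ctx)
        return 1;
    struct ibv_pd *pd = ibv_alloc_pd(ctx);

    /* Register 4 KiB of application memory; the adapter may now read and
     * write this buffer directly, which removes the per-message copies
     * that a Berkeley sockets implementation performs. */
    void *buf = malloc(4096);
    struct ibv_mr *mr = ibv_reg_mr(pd, buf, 4096,
                                   IBV_ACCESS_LOCAL_WRITE |
                                   IBV_ACCESS_REMOTE_WRITE);

    printf("registered buffer %p, lkey=0x%x rkey=0x%x\n",
           buf, mr->lkey, mr->rkey);

    /* A real application would next create a queue pair, exchange the
     * rkey and buffer address with the peer, and post RDMA work requests. */
    ibv_dereg_mr(mr);
    free(buf);
    ibv_dealloc_pd(pd);
    ibv_close_device(ctx);
    ibv_free_device_list(devs);
    return 0;
}
```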
RoCE v1
The RoCE v1 protocol has been defined on top of the Ethernet protocol and uses Ethertype 0x8915.[1] This means that the frame length limits of the Ethernet protocol apply: 1500 bytes for a regular Ethernet frame and 9000 bytes for a jumbo frame.
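Because RoCE v1 frames are ordinary Ethernet frames identified only by their Ethertype, they can be observed on Linux with a packet socket bound to Ethertype 0x8915. The sketch below is illustrative only; it needs root privileges, and the buffer size is simply chosen to fit a jumbo frame.

```c
/* Capture RoCE v1 frames on Linux with a packet socket bound to
 * Ethertype 0x8915 (requires root). Illustrative sketch only.
 * Compile with: gcc roce_v1_sniff.c -o roce_v1_sniff
 */
#include <arpa/inet.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

#define ETH_P_ROCE 0x8915   /* Ethertype assigned to RoCE v1 */

int main(void)
{
    /* The protocol argument filters received frames by Ethertype. */
    int fd = socket(AF_PACKET, SOCK_RAW, htons(ETH_P_ROCE));
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    /* Large enough for a jumbo frame (up to a 9000-byte payload). */
    unsigned char frame[9100];
    for (;;) {
        ssize_t len = recv(fd, frame, sizeof(frame), 0);
        if (len < 0)
            break;
        /* frame[0..5] = destination MAC, frame[6..11] = source MAC,
         * frame[12..13] = Ethertype (0x8915); the InfiniBand Global
         * Route Header and transport headers follow. */
        printf("RoCE v1 frame, %zd bytes\n", len);
    }
    close(fd);
    return 0;
}
```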
RoCE v2
The RoCE v2 protocol, sometimes called Routable RoCE[8] or RRoCE, has been defined on top of UDP and supports both IPv4 and IPv6.[2] The UDP destination port number 4791 has been reserved for RoCE v2.[9] Packets with the same UDP source port and the same destination address must not be reordered; packets with different UDP source port numbers and the same destination address may be sent over different links to that destination address.
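The source-port rule allows switches to spread RoCE v2 traffic across equal-cost paths without reordering any single flow: a sender derives the UDP source port deterministically from a flow identifier, so all packets of one flow follow the same path while different flows may take different ones. The sketch below illustrates the idea; the hash function, the use of queue pair numbers as the flow identifier and the chosen port range are illustrative assumptions, not requirements of the specification.

```c
/* Illustrative derivation of a RoCE v2 UDP source port from a flow
 * identifier. All packets of a given flow get the same source port
 * (so switches never reorder them), while different flows spread
 * across the port range and therefore across ECMP paths.
 * The FNV-1a hash and the 49152-65535 range are arbitrary choices.
 */
#include <stdint.h>
#include <stdio.h>

#define ROCE_V2_DST_PORT 4791   /* IANA-assigned destination port */

static uint16_t flow_src_port(uint32_t local_qpn, uint32_t remote_qpn)
{
    uint32_t h = 2166136261u;            /* FNV-1a 32-bit hash */
    uint32_t key[2] = { local_qpn, remote_qpn };
    const uint8_t *p = (const uint8_t *)key;
    for (unsigned i = 0; i < sizeof(key); i++) {
        h ^= p[i];
        h *= 16777619u;
    }
    /* Map the hash into the ephemeral port range 49152-65535. */
    return (uint16_t)(49152 + (h % 16384));
}

int main(void)
{
    /* Two different flows usually map to different source ports, so an
     * ECMP hash over ports and addresses can place them on different
     * links; each flow itself stays on one link. */
    printf("flow A: src port %u -> dst port %d\n",
           flow_src_port(0x11, 0x2a), ROCE_V2_DST_PORT);
    printf("flow B: src port %u -> dst port %d\n",
           flow_src_port(0x12, 0x3b), ROCE_V2_DST_PORT);
    return 0;
}
```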
RoCE versus InfiniBand
RoCE defines how to perform RDMA over Ethernet, while the InfiniBand architecture specification defines how to perform RDMA over an InfiniBand network. RoCE was expected to bring InfiniBand applications, which are predominantly based on clusters, onto a common Ethernet converged fabric.[10] Others expected that InfiniBand would keep offering higher bandwidth and lower latency than is possible over Ethernet.[11] While Ethernet is a more familiar technology than InfiniBand to most, the cost of InfiniBand equipment, especially switches, was predicted in 2009 to be lower than that of 40 Gigabit Ethernet.[12]
The technical differences between the RoCE and InfiniBand protocols are as follows:
- RoCE v1 is a link layer protocol and hence not routable. RoCE v2 and InfiniBand are routable.
- RoCE uses priority-based flow control (PFC) while InfiniBand uses a credit-based algorithm to guarantee lossless HCA-to-HCA communication. PFC limits cable length and increases switch cost.[13][14] PFC works well for a small number of hops (one or two), but true congestion control is likely to be needed at larger scale, because PFC runs into problems as the number of hops increases.[15]
- InfiniBand switches have consistently had lower latency than Ethernet switches. Port-to-port latency for one particular type of Ethernet switch is 230 ns,[16] versus 100 ns[17] for an InfiniBand switch with the same number of ports.
- InfiniBand offers higher bandwidth towards clients. Typical setups are based on 40 or 56 gigabit per second host adapters, speeds which in Ethernet environments are normally used only in the backbone. However, some newer host adapters can run in either 56 gigabit InfiniBand or 56 gigabit Ethernet mode.[18]
- Configuring a DCB Ethernet network is significantly more complex than configuring an InfiniBand network.[19]
RoCE versus iWARP
While the RoCE protocols define how to perform RDMA using Ethernet frames, the iWARP protocol defines how to perform RDMA over a connection-oriented transport like the Transmission Control Protocol (TCP). RoCE v1 is limited to a single Ethernet broadcast domain; RoCE v2 and iWARP packets are routable.[20] RoCE is bound to Ethernet, but iWARP is not. The memory requirements of a large number of connections, together with TCP's flow and reliability controls, lead to scalability and performance issues when iWARP is used in large-scale datacenters and for large-scale applications such as large-scale enterprises, cloud computing and Web 2.0 applications.[21] Also, multicast is defined in the RoCE specification, while the current iWARP specification does not define how to perform multicast RDMA.[22][23][24]
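All three technologies are used through the same verbs API, and an application can tell them apart at run time: a device reports either the iWARP or the InfiniBand transport type, and for devices with the InfiniBand transport the port's link layer distinguishes native InfiniBand from RoCE. The following libibverbs sketch prints this information for each installed device; it assumes only that libibverbs is available and inspects port 1 of each device.

```c
/* Distinguish InfiniBand, RoCE and iWARP devices with libibverbs.
 * Compile with: gcc rdma_transports.c -libverbs
 */
#include <infiniband/verbs.h>
#include <stdio.h>

int main(void)
{
    int num;
    struct ibv_device **devs = ibv_get_device_list(&num);
    if (!devs)
        return 1;

    for (int i = 0; i < num; i++) {
        struct ibv_context *ctx = ibv_open_device(devs[i]);
        if (!ctx)
            continue;

        /* iWARP devices report their own transport type; both native
         * InfiniBand and RoCE devices report the InfiniBand transport. */
        const char *transport =
            devs[i]->transport_type == IBV_TRANSPORT_IWARP ? "iWARP" :
            devs[i]->transport_type == IBV_TRANSPORT_IB    ? "InfiniBand transport" :
                                                             "unknown";

        struct ibv_port_attr port;
        ibv_query_port(ctx, 1, &port);

        /* For the InfiniBand transport, an Ethernet link layer means the
         * device is running RoCE rather than native InfiniBand. */
        const char *link =
            port.link_layer == IBV_LINK_LAYER_ETHERNET   ? "Ethernet (RoCE)" :
            port.link_layer == IBV_LINK_LAYER_INFINIBAND ? "InfiniBand" :
                                                           "unspecified";

        printf("%s: %s, port 1 link layer: %s\n",
               ibv_get_device_name(devs[i]), transport, link);
        ibv_close_device(ctx);
    }
    ibv_free_device_list(devs);
    return 0;
}
```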
Criticism
Some aspects that could have been defined in the RoCE specification have been left out. These are:
- How to translate between primary RoCE v1 GIDs and Ethernet MAC addresses (a mapping convention commonly adopted by implementations is sketched after this list).[25]
- How to translate between secondary RoCE v1 GIDs and Ethernet MAC addresses. It is not clear whether it is possible to implement secondary GIDs in the RoCE v1 protocol without adding a RoCE-specific address resolution protocol.
- How to implement VLANs for the RoCE v1 protocol. Current RoCE v1 implementations store the VLAN ID in the twelfth and thirteenth byte of the sixteen-byte GID, although the RoCE v1 specification does not mention VLANs at all.[26]
- How to translate between RoCE v1 multicast GIDs and Ethernet MAC addresses. Implementations in 2010 used the same address mapping that has been specified for mapping IPv6 multicast addresses to Ethernet MAC addresses.[27][28]
- How to restrict RoCE v1 multicast traffic to a subset of the ports of an Ethernet switch. As of September 2013, an equivalent of the Multicast Listener Discovery protocol has not yet been defined for RoCE v1.
- Software support for RoCE v2 is still emerging. Mellanox OFED 2.3 has RoCE v2 support but neither OpenFabrics OFED 3.12 nor Linux kernel 3.17 supports RoCE v2.[29] The RoCE v2 port number used in Mellanox OFED v2.3-1.0.1 (1021[30]) does not match the port number assigned by IANA (4791).
- At least one vendor that offers an RDMA over Ethernet solution has chosen another wire protocol than RoCE.[31]
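For illustration, the sketch below shows the two mapping conventions that early implementations adopted in the absence of guidance from the specification: deriving a unicast RoCE v1 GID from the adapter's MAC address via the Modified EUI-64 convention used for IPv6 link-local addresses, and mapping a multicast GID to an Ethernet MAC address in the way RFC 2464 maps IPv6 multicast addresses (33:33 followed by the last four bytes). Both are conventions observed in implementations rather than requirements of the RoCE v1 specification, and the addresses in the example are arbitrary.

```c
/* Address-mapping conventions used by early RoCE v1 implementations.
 * Neither mapping is mandated by the RoCE v1 specification.
 */
#include <stdint.h>
#include <stdio.h>

/* Derive a link-local unicast GID from a MAC address using the Modified
 * EUI-64 convention: prefix fe80::/64, invert the universal/local bit of
 * the first MAC octet and insert ff:fe in the middle. */
static void mac_to_gid(const uint8_t mac[6], uint8_t gid[16])
{
    static const uint8_t prefix[8] = { 0xfe, 0x80, 0, 0, 0, 0, 0, 0 };
    for (int i = 0; i < 8; i++)
        gid[i] = prefix[i];
    gid[8]  = mac[0] ^ 0x02;   /* flip the universal/local bit */
    gid[9]  = mac[1];
    gid[10] = mac[2];
    gid[11] = 0xff;
    gid[12] = 0xfe;
    gid[13] = mac[3];
    gid[14] = mac[4];
    gid[15] = mac[5];
}

/* Map a multicast GID to an Ethernet MAC address the way RFC 2464 maps
 * IPv6 multicast addresses: 33:33 followed by the last four GID bytes. */
static void mcast_gid_to_mac(const uint8_t gid[16], uint8_t mac[6])
{
    mac[0] = 0x33;
    mac[1] = 0x33;
    for (int i = 0; i < 4; i++)
        mac[2 + i] = gid[12 + i];
}

int main(void)
{
    uint8_t mac[6] = { 0x00, 0x02, 0xc9, 0x12, 0x34, 0x56 };  /* example MAC */
    uint8_t gid[16], mmac[6];

    mac_to_gid(mac, gid);
    printf("unicast GID: ");
    for (int i = 0; i < 16; i++)
        printf("%02x%s", gid[i], i % 2 ? (i < 15 ? ":" : "\n") : "");

    uint8_t mgid[16] = { 0xff, 0x12, 0x60, 0x1b, 0, 0, 0, 0,
                         0, 0, 0, 0, 0xde, 0xad, 0xbe, 0xef };  /* example MGID */
    mcast_gid_to_mac(mgid, mmac);
    printf("multicast MAC: ");
    for (int i = 0; i < 6; i++)
        printf("%02x%s", mmac[i], i < 5 ? ":" : "\n");
    return 0;
}
```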
References
- ↑ 1.0 1.1 "InfiniBand™ Architecture Specification Release 1.2.1 Annex A16: RoCE". InfiniBand Trade Association. 13 April 2010.
- ↑ 2.0 2.1 "InfiniBand™ Architecture Specification Release 1.2.1 Annex A17: RoCEv2". InfiniBand Trade Association. 2 September 2014.
- ↑ Cameron, Don; Regnier, Greg (2002). Virtual Interface Architecture. Intel Press. ISBN 978-0-9712887-0-6.
- ↑ Feldman, Michael (22 April 2010). "RoCE: An Ethernet-InfiniBand Love Story". HPC wire.
- ↑ "End-to-End Lowest Latency Ethernet Solution for Financial Services" (PDF). Mellanox. March 2011.
- ↑ "RoCE vs. iWARP Competitive Analysis Brief" (PDF). Mellanox. 9 November 2010.
- ↑ "Low Latency Server Connectivity With New Terminator 4 (T4) Adapter". Chelsio. 25 May 2011.
- ↑ InfiniBand Trade Association (November 2013). "RoCE Status and Plans" (PDF). IETF.
- ↑ Diego Crupnicoff (17 October 2014). "Service Name and Transport Protocol Port Number Registry". IANA.
- ↑ Merritt, Rick (19 April 2010). "New converged network blends Ethernet, InfiniBand". EE Times.
- ↑ Kerner, Sean Michael (2 April 2010). "InfiniBand Moving to Ethernet ?". Enterprise Networking Planet.
- ↑ Gross, David (16 January 2009). "Will New QDR InfiniBand Leap Ahead of 40 Gigabit Ethernet?". Seeking Alpha.
- ↑ "A Rocky Road for ROCE" (PDF). Chelsio. 1 May 2011.
- ↑ Kamble, Keshav (17 March 2014). "Credit based Link Level Flow Control and Capability Exchange Using DCBX for CEE ports" (PDF). IEEE.
- ↑ "IETF 88 Proceedings - RDMA/IP Mini-BOF - minutes". IETF. 7 November 2013.
- ↑ "SX1036 - 36-Port 40/56GbE Switch System". Mellanox. Retrieved April 21, 2014.
- ↑ "IS5024 - 36-Port Non-blocking Unmanaged 40Gb/s InfiniBand Switch System". Mellanox. Retrieved April 21, 2014.
- ↑ Mellanox (7 May 2013). "Mellanox Announces 56 Gigabit Ethernet Interconnect Solution Family for Data Center Compute and Storage". Mellanox.
- ↑ Mellanox (2 June 2014). "Mellanox Releases New Automation Software to Reduce Ethernet Fabric Installation Time from Hours to Minutes". Mellanox.
- ↑ "RoCE: Frequently Asked Questions" (PDF). Chelsio. 1 May 2011.
- ↑ Rashti, Mohammad (2010). "iWARP Redefined: Scalable Connectionless Communication over High-Speed Ethernet" (PDF). International Conference on High Performance Computing (HiPC).
- ↑ H. Shah et al. (October 2007). "Direct Data Placement over Reliable Transports". RFC 5041. Retrieved May 4, 2011.
- ↑ C. Bestler et al. (October 2007). "Stream Control Transmission Protocol (SCTP) Direct Data Placement (DDP) Adaptation". RFC 5043. Retrieved May 4, 2011.
- ↑ P. Culley et al. (October 2007). "Marker PDU Aligned Framing for TCP Specification". RFC 5044. Retrieved May 4, 2011.
- ↑ Dreier, Roland (6 December 2010). "Two notes on IBoE". Roland Dreier's blog.
- ↑ Cohen, Eli (26 August 2010). "IB/core: Add VLAN support for IBoE". kernel.org.
- ↑ Cohen, Eli (13 October 2010). "RDMA/cm: Add RDMA CM support for IBoE devices". kernel.org.
- ↑ Crawford, M. (1998). "RFC 2464 - Transmission of IPv6 Packets over Ethernet Networks". IETF.
- ↑ Mellanox (28 September 2014). "Mellanox OFED for Linux Release Notes Rev 2.3-1.0.1" (PDF). Mellanox.
- ↑ ophirmaor (29 April 2014). "RoCE v2 Considerations". Mellanox.
- ↑ Malhi, Upinder (4 September 2013). "PATCH Cisco VIC RDMA Node and Transport". linux-rdma mailing list.