HyperTransport
HyperTransport (HT), formerly known as Lightning Data Transport (LDT), is a bidirectional serial/parallel high-bandwidth, low-latency point-to-point link that was introduced on April 2, 2001.[1] The HyperTransport Consortium is in charge of promoting and developing HyperTransport technology. The technology is used by AMD and Transmeta in x86 processors; PMC-Sierra, Broadcom, and Raza Microelectronics in MIPS microprocessors; AMD, NVIDIA, VIA, and SiS in PC chipsets; HP, Sun Microsystems, IBM, and Flextronics in servers; Cray, Newisys, QLogic, and XtremeData, Inc. in high-performance computing; and Cisco Systems in routers.
Overview
HyperTransport comes in three major versions — 1.0, 2.0, and 3.0 — which run from 200 MHz to 2.6 GHz (compared to PCI at either 33 or 66 MHz). It is also a DDR or "Double Data Rate" connection, meaning it sends data on both the rising and falling edges of the clock signal. This allows for a maximum data rate of 5200 MT/s when running at 2.6 GHz; this frequency is auto-negotiated.
HyperTransport supports an auto-negotiated bit width, ranging from 2-bit to 32-bit links. The full-sized, full-speed, 32-bit interconnect has a transfer rate of 20.8 GB/s per direction (2.6 GHz clock × 2 transfers per cycle × 32 bits ÷ 8 bits per byte), or 41.6 GB/s of aggregate bandwidth per link, making it much faster than many existing standards. Links of various widths can be mixed together in a single application (for example, 2x8 instead of 1x16), which allows for higher-speed interconnects between main memory and the CPU and lower-speed interconnects to peripherals as appropriate. The technology also has much lower latency than other solutions.
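As a sanity check on those figures, the arithmetic can be sketched in a few lines of Python (the `ht_bandwidth_gbps` helper is a name invented for this illustration, not part of any HyperTransport API):

```python
# Minimal sketch of HyperTransport per-link bandwidth arithmetic.
# Assumption: peak data rate only; real links also carry command overhead.

def ht_bandwidth_gbps(clock_ghz: float, width_bits: int) -> float:
    """Peak data bandwidth per direction, in GB/s, for one HT link."""
    transfers_per_ns = clock_ghz * 2     # DDR: data on both clock edges
    bytes_per_transfer = width_bits / 8  # link width in bytes
    return transfers_per_ns * bytes_per_transfer

per_direction = ht_bandwidth_gbps(2.6, 32)  # 20.8 GB/s
aggregate = 2 * per_direction               # 41.6 GB/s, both directions
print(per_direction, aggregate)
```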
HyperTransport is packet-based, with each packet always consisting of a set of 32-bit words, regardless of the physical width of the link. The first word in a packet is always a command word. If a packet contains an address, the last 8 bits of the command word are chained with the next 32-bit word to form a 40-bit address. When 64-bit addressing is required, an additional 32-bit control packet can be prepended. The remaining 32-bit words in a packet are the data payload. Transfers are always padded to a multiple of 32 bits, regardless of their actual length.
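The 40-bit address chaining can be illustrated with a toy sketch (the exact command-word field layout is defined by the HyperTransport specification; this sketch simply assumes the upper 8 address bits occupy the command word's last byte):

```python
# Toy illustration of the 40-bit address split described above.
# Assumption: address bits 39..32 ride in the command word's low byte;
# the real field layout comes from the HyperTransport specification.

def split_address(addr: int) -> tuple[int, int]:
    """Split a 40-bit address into command-word bits and the next word."""
    assert 0 <= addr < (1 << 40), "base HyperTransport addressing is 40-bit"
    cmd_bits = (addr >> 32) & 0xFF  # 8 bits carried in the command word
    ext_word = addr & 0xFFFFFFFF    # following 32-bit word of the packet
    return cmd_bits, ext_word

def join_address(cmd_bits: int, ext_word: int) -> int:
    """Recombine both fields into the full 40-bit address."""
    return (cmd_bits << 32) | ext_word

hi_bits, lo_word = split_address(0x7FDEADBEEF)
assert join_address(hi_bits, lo_word) == 0x7FDEADBEEF
```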
HyperTransport packets enter the interconnect in segments known as bit times; the number of bit times required depends on the link width. HyperTransport can be used for generating system management messages, signaling interrupts, issuing probes to adjacent devices or processors, and general I/O and data transactions. There are two kinds of write commands: posted and non-posted. Posted writes do not require a response from the target; they are typically used for high-bandwidth traffic such as Uniform Memory Access or Direct Memory Access transfers. Non-posted writes require a response from the receiver in the form of a "target done" message. Reads likewise cause the receiver to generate a read response.
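Since packets are always whole 32-bit words, the bit-time count follows directly from the link width; a minimal sketch:

```python
# Sketch: bit times needed to move a packet across links of various widths.
# A packet is always a whole number of 32-bit words.

def bit_times(packet_words: int, link_width_bits: int) -> int:
    return (packet_words * 32) // link_width_bits

# A 3-word packet (command word, address word, one data word):
print(bit_times(3, 32))  # 3 bit times on a full 32-bit link
print(bit_times(3, 8))   # 12 bit times on an 8-bit link
print(bit_times(3, 2))   # 48 bit times on a minimal 2-bit link
```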
HyperTransport also facilitates power management as it is compliant with the Advanced Configuration and Power Interface specification. This means that changes in processor sleep states (C states) can signal changes in device states (D states), e.g. powering off disks when the CPU goes to sleep.
Electrically, HyperTransport is similar to Low-Voltage Differential Signaling (LVDS), operating at 1.2 V.
There has been marketing confusion between the use of HT to refer to HyperTransport and the use of HT to refer to Hyper-Threading, a feature of some Pentium 4-based Intel microprocessors. Hyper-Threading is officially known as Hyper-Threading Technology (HTT) or HT Technology. Because of this potential for confusion, the HyperTransport Consortium always uses the written-out form: "HyperTransport".
Applications for HyperTransport
Front-Side Bus Replacement
The primary use for HyperTransport is to replace the front-side bus, which is currently different for every type of machine. For instance, a Pentium cannot be plugged into a PCI bus. In order to expand the system, the front-side bus must connect through adaptors for the various standard buses, like AGP or PCI. These are typically included in the respective controller functions, namely the northbridge and southbridge.
In theory, a similar computer implemented with HyperTransport is faster and more flexible. A single PCI↔HyperTransport adaptor chip will work with any HyperTransport enabled microprocessor and allow the use of PCI cards with these processors. For example, the NVIDIA nForce chipset uses HyperTransport to connect its north and south bridges.
Multiprocessor interconnect
Another use for HyperTransport is as an interconnect for NUMA multiprocessor computers. AMD uses HyperTransport with a proprietary cache coherency extension as part of their Direct Connect Architecture in their Opteron and Athlon 64 FX (Dual Socket Direct Connect (DSDC) Architecture) line of processors. The HORUS interconnect from Newisys extends this concept to larger clusters.
Router or Switch Bus Replacement
HyperTransport can also be used as a bus in routers and switches. Routers and switches have multiple network interfaces, and data must be forwarded between these ports as fast as possible. For example, a four-port 100 Mbit/s Ethernet router needs a maximum of 800 Mbit/s of internal bandwidth (100 Mbit/s × 4 ports × 2 directions). HyperTransport greatly exceeds the bandwidth needed for this application. However, HyperTransport has largely fallen out of favour with the networking community, which has moved toward SPI 4.2 and PCI Express.
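The figure in parentheses is simple arithmetic, made explicit below with the example's values:

```python
# Internal bandwidth needed by the four-port router example above.
ports = 4
port_rate_mbps = 100  # 100 Mbit/s Ethernet
directions = 2        # full duplex: traffic in and out simultaneously
print(ports * port_rate_mbps * directions)  # 800 Mbit/s
```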
HTX and Co-processor interconnect
The issue of bandwidth between CPUs and co-processors has usually been the major stumbling block to their practical implementation. After years without an officially recognized connector for such expansion, one using a HyperTransport interface was introduced: HyperTransport eXpansion (HTX). Using the same mechanical connector as a 16-lane PCI-Express slot (plus an x1 connector for power pins), HTX allows plug-in cards to be developed that support direct access to a CPU and DMA access to the system RAM. The initial card for this slot was the QLogic InfiniPath InfiniBand HCA. More recently, co-processors such as FPGAs have appeared which can access the HyperTransport bus and become first-class citizens on the motherboard. Current-generation FPGAs from both of the main manufacturers (Altera and Xilinx) directly support the HyperTransport interface and have IP cores available. Companies such as XtremeData, Inc. take these FPGAs (Altera, in this case) and create a module that allows FPGAs to be plugged directly into the Opteron socket.
As of January 2008, the HTX standard is limited to 16 bits and 800 MHz, making it slower than the PCI-E standard from which it borrows its connector.[2] An earlier Samtec test connector,[3] however, achieved full 32-bit, 2.8 GHz operation.
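Plugging those limits into the bandwidth sketch from the Overview (assuming the illustrative `ht_bandwidth_gbps` helper defined there is in scope) makes the comparison concrete:

```python
# Per-direction bandwidth at the HTX limit vs. the Samtec test connector,
# reusing the illustrative helper sketched in the Overview section.
print(ht_bandwidth_gbps(0.8, 16))  # 3.2 GB/s (HTX: 16 bits at 800 MHz)
print(ht_bandwidth_gbps(2.8, 32))  # 22.4 GB/s (test: 32 bits at 2.8 GHz)
```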
On September 21, 2006, AMD announced an initiative named Torrenza to further promote the use of HyperTransport for plug-in cards and coprocessors.
Implementations
- AMD AMD64 and Direct Connect Architecture-based CPUs
- SiByte MIPS CPUs from Broadcom
- PMC-Sierra RM9000X2 MIPS CPU
- ht_tunnel from OpenCores project (MPL licence)
- ATI Radeon Xpress 200 for AMD Processor
- NVIDIA nForce chipsets
  - nForce Professional MCPs (Media and Communication Processor)
  - nForce 4 series
  - nForce 500 series
  - nForce 600 series
  - nForce 700 series
- ServerWorks (now Broadcom) HT-2000 HyperTransport System I/O Controller
- The IBM CPC925 and CPC945 PowerPC 970 northbridges
- Raza Thread Processors
HyperTransport frequency specifications
HyperTransport Version | Year | Max. HT Frequency | Max. Link Width | Max. Aggregate Bandwidth (bi-directional)
---|---|---|---|---
1.0 | 2001 | 800 MHz | 32 bits | 12.8 GB/s
1.1 | 2002 | 800 MHz | 32 bits | 12.8 GB/s
2.0 | 2004 | 1.4 GHz | 32 bits | 22.4 GB/s
3.0 | 2006 | 2.6 GHz | 32 bits | 41.6 GB/s
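The aggregate column follows from the same double-data-rate arithmetic; assuming the illustrative `ht_bandwidth_gbps` helper sketched in the Overview, the table's figures can be reproduced:

```python
# Reproducing the table's aggregate bandwidth column (both directions).
for version, year, clock_ghz in [("1.0", 2001, 0.8), ("1.1", 2002, 0.8),
                                 ("2.0", 2004, 1.4), ("3.0", 2006, 2.6)]:
    aggregate = 2 * ht_bandwidth_gbps(clock_ghz, 32)
    print(version, year, f"{aggregate:.1f} GB/s")  # 12.8, 12.8, 22.4, 41.6
```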
See also
- Front side bus
- Intel QuickPath Interconnect
- PCI Express
- RapidIO
- Fibre Channel
- List of device bandwidths
References
- ^ HyperTransport Consortium (2001-04-02). "API NetWorks Accelerates Use of HyperTransport™ Technology With Launch of Industry's First HyperTransport Technology-to-PCI Bridge Chip". Press release.
- ^ Emberson, David; Brian Holden (2007-12-12). "HTX specification". Retrieved on 2008-01-30.
- ^ Holden, Brian; Mike Meschke; Ziad Abu-Lebdeh; Renato D’Orfani. "DUT Connector and Test Environment for HyperTransport".