Bus (computing)
In computer architecture, a bus is a subsystem that transfers data between computer components inside a computer or between computers. Unlike a point-to-point connection, a bus can logically connect several peripherals over the same set of wires. Each bus defines its set of connectors to physically plug devices, cards or cables together.
Early computer buses were literally parallel electrical buses with multiple connections, but the term is now used for any physical arrangement that provides the same logical functionality as a parallel electrical bus. Modern computer buses can use both parallel and bit-serial connections, and can be wired in either a multidrop (electrical parallel) or daisy chain topology, or connected by switched hubs, as in the case of USB.
History
First generation
Early computer buses were bundles of wire that attached memory and peripherals. They were named after electrical buses, or busbars. Almost always, there was one bus for memory, and another for peripherals,[citation needed] and these were accessed by separate instructions, with completely different timings and protocols.
One of the first complications was the use of interrupts. Early computers performed I/O by waiting in a loop for the peripheral to become ready. This was a waste of time for programs that had other tasks to do. Also, if the program attempted to perform those other tasks, it might take too long for the program to check again, resulting in loss of data. Engineers thus arranged for the peripherals to interrupt the CPU. The interrupts had to be prioritized, because the CPU can only execute code for one peripheral at a time, and some devices are more time-critical than others.
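The contrast between the two approaches can be sketched in a few lines of C. The register addresses, status bit, and handler name below are hypothetical, standing in for whatever a real peripheral and platform would define:

```c
#include <stdint.h>

/* Hypothetical device registers, for illustration only; a real
 * peripheral and platform define their own addresses and status bits. */
#define STATUS_REG  (*(volatile uint8_t *)0xF000)
#define DATA_REG    (*(volatile uint8_t *)0xF001)
#define READY_BIT   0x01

volatile uint8_t last_byte;
volatile int     byte_available = 0;

/* Polled I/O: the CPU spins, doing no useful work, until the
 * peripheral reports that a byte is ready. */
void read_polled(void)
{
    while ((STATUS_REG & READY_BIT) == 0)
        ;                            /* busy-wait */
    last_byte = DATA_REG;
}

/* Interrupt-driven I/O: the peripheral asserts an interrupt line and the
 * CPU transfers control here, so the main program never has to spin. */
void device_interrupt_handler(void)
{
    last_byte      = DATA_REG;       /* service the device promptly       */
    byte_available = 1;              /* the main loop picks this up later */
}
```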
Some time after this, some computers began to share memory among several CPUs. On these computers, access to the bus had to be prioritized, as well.
The classic, simple way to prioritize interrupts or bus access was with a daisy chain.
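The effect of a daisy chain can be illustrated with a small arbitration sketch: a single grant propagates from the highest-priority device down the chain, and the first device that is requesting service absorbs it. The device count and request pattern below are invented for illustration:

```c
#include <stdio.h>

#define NUM_DEVICES 4

/* Return the index of the device that receives the grant, or -1 if no
 * device is requesting the bus.  Lower indices sit earlier in the chain
 * and therefore have higher priority. */
int arbitrate_daisy_chain(const int requesting[NUM_DEVICES])
{
    for (int dev = 0; dev < NUM_DEVICES; dev++) {
        if (requesting[dev])
            return dev;          /* this device absorbs the grant           */
        /* otherwise the grant propagates to the next device in the chain */
    }
    return -1;
}

int main(void)
{
    int requests[NUM_DEVICES] = {0, 1, 0, 1};   /* devices 1 and 3 request */
    printf("bus granted to device %d\n", arbitrate_daisy_chain(requests));
    return 0;
}
```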
DEC noted that having two buses seemed wasteful and expensive for mass-produced minicomputers, and mapped peripherals into the memory bus, so that the devices appeared to be memory locations.
Early microcomputer bus systems were essentially a passive backplane connected directly or through buffer amplifiers to the pins of the CPU. Memory and other devices would be added to the bus using the same address and data pins as the CPU itself used, connected in parallel. Communication was controlled by the CPU, which read and wrote data from the devices as if they were blocks of memory, using the same instructions, all timed by a central clock controlling the speed of the CPU. Devices still interrupted the CPU by signaling on separate CPU pins. For instance, a disk drive controller would signal the CPU that new data was ready to be read, at which point the CPU would move the data by reading the "memory location" that corresponded to the disk drive. Almost all early microcomputers were built in this fashion, starting with the S-100 bus in the Altair.
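This memory-mapped style of access can be sketched as follows. The addresses, register layout, and status bit are hypothetical and serve only to show that device registers occupy ordinary addresses on the same bus as memory, so the CPU moves data with the same load and store instructions it uses for RAM:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical disk-controller registers mapped into the address space. */
#define DISK_STATUS  (*(volatile uint8_t *)0xD000)  /* controller status */
#define DISK_DATA    (*(volatile uint8_t *)0xD001)  /* one byte of data  */
#define DATA_READY   0x01

/* Copy one sector from the controller into an ordinary RAM buffer. */
void read_sector(uint8_t *dest, size_t sector_size)
{
    for (size_t i = 0; i < sector_size; i++) {
        while ((DISK_STATUS & DATA_READY) == 0)
            ;                     /* wait for the next byte              */
        dest[i] = DISK_DATA;      /* an ordinary memory read on the bus  */
    }
}
```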
In some instances, most notably in the IBM PC, although a similar physical architecture is employed, the instructions that access peripherals (in and out) and memory (mov and others) were never made uniform: they still generate distinct CPU signals, which could be used to implement a separate I/O bus.
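On such a machine the same kind of byte transfer goes through the separate I/O address space rather than through a memory load or store. The sketch below uses the Linux/glibc <sys/io.h> wrappers for the x86 in and out instructions; the port numbers are the conventional addresses of the first serial port (COM1) and are shown purely as an illustration, and the program needs root privileges to run:

```c
#include <stdio.h>
#include <sys/io.h>

#define COM1_DATA 0x3F8          /* conventional base I/O port of the first UART */

int main(void)
{
    if (ioperm(COM1_DATA, 8, 1) != 0) {       /* request access to the ports */
        perror("ioperm");
        return 1;
    }
    outb('A', COM1_DATA);                     /* "out": write a byte to the port   */
    unsigned char status = inb(COM1_DATA + 5);/* "in": read the line status register */
    printf("line status register: 0x%02x\n", status);
    return 0;
}
```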
These simple bus systems had a serious drawback when used for general-purpose computers. All the equipment on the bus has to talk at the same speed, as it shares a single clock.
Increasing the speed of the CPU becomes harder, because the speed of all the devices must increase as well. This often led to the odd situation where very fast CPUs had to "slow down"[citation needed] in order to talk to other devices in the computer. While acceptable in embedded systems, this problem was not tolerated for long in general-purpose, user-expandable computers.
Such bus systems are also difficult to configure when constructed from common off-the-shelf equipment. Typically each added expansion card requires many jumpers in order to set memory addresses, I/O addresses, interrupt priorities, and interrupt numbers.
Second generation
"Second generation" bus systems like NuBus addressed some of these problems. They typically separated the computer into two "worlds", the CPU and memory on one side, and the various devices on the other, with a bus controller in between. This allowed the CPU to increase in speed without affecting the bus. This also moved much of the burden for moving the data out of the CPU and into the cards and controller, so devices on the bus could talk to each other with no CPU intervention. This led to much better "real world" performance, but also required the cards to be much more complex. These buses also often addressed speed issues by being "bigger" in terms of the size of the data path, moving from 8-bit parallel buses in the first generation, to 16 or 32-bit in the second, as well as adding software setup (now standardised as Plug-n-play) to supplant or replace the jumpers.
However, these newer systems shared one quality with their earlier cousins: every device on the bus had to talk at the same speed. While the CPU was now isolated and could increase speed without fear, CPUs and memory continued to increase in speed much faster than the buses they talked to. The result was that bus speeds were now very much slower than what a modern system needed, and the machines were left starved for data. A particularly common example of this problem was that video cards quickly outran even newer bus systems like PCI, so computers began to include AGP just to drive the video card. By 2004, AGP had in turn been outgrown by high-end video cards and was being replaced by the newer PCI Express bus.
An increasing number of external devices started employing their own bus systems as well. When disk drives were first introduced, they would be added to the machine with a card plugged into the bus, which is why computers have so many slots on the bus. But through the 1980s and 1990s, new systems like SCSI and IDE were introduced to serve this need, leaving most slots in modern systems empty. Today there are likely to be about five different buses in the typical machine, supporting various devices.
Third generation
"Third generation" buses are now[when?] in the process of coming to market, including HyperTransport and InfiniBand. They also tend to be very flexible in terms of their physical connections, allowing them to be used both as internal buses, as well as connecting different machines together. This can lead to complex problems when trying to service different requests, so much of the work on these systems concerns software design, as opposed to the hardware itself. In general, these third generation buses tend to look more like a network than the original concept of a bus, with a higher protocol overhead needed than early systems, while also allowing multiple devices to use the bus at once.
Buses such as Wishbone have been developed by the open source hardware movement in an attempt to further remove legal/patenting constraints from computer design.
Description of a bus
At one time, "bus" meant an electrically parallel system, with electrical conductors similar or identical to the pins on the CPU. This is no longer the case, and modern systems are blurring the lines between buses and networks.
Buses can be parallel buses, which carry data words in parallel on multiple wires, or serial buses, which carry data in bit-serial form. The addition of extra power and control connections, differential drivers, and data connections in each direction usually means that most serial buses have more conductors than the minimum of one used in the 1-Wire serial bus. As data rates increase, the problems of timing skew, power consumption, electromagnetic interference and crosstalk across parallel buses become more and more difficult to circumvent. One partial solution to this problem has been to double pump the bus. Often, a serial bus can actually be operated at higher overall data rates than a parallel bus, despite having fewer electrical connections, because a serial bus inherently has no timing skew or crosstalk. USB, FireWire, and Serial ATA are examples of this. Multidrop connections do not work well for fast serial buses, so most modern serial buses use daisy-chain or hub designs.
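These trade-offs can be made concrete with a rough throughput calculation: peak bandwidth is approximately the width of the bus in bytes, multiplied by the clock rate, multiplied by the number of transfers per clock cycle (two for a double-pumped bus). The figures in the sketch below are round numbers chosen only for illustration, not the specification of any particular bus:

```c
#include <stdio.h>

/* Peak bytes per second = (bits per transfer / 8) * clock rate
 *                         * transfers per clock cycle. */
static double peak_bytes_per_sec(double width_bits, double clock_hz,
                                 double transfers_per_clock)
{
    return (width_bits / 8.0) * clock_hz * transfers_per_clock;
}

int main(void)
{
    /* A 32-bit parallel bus at 33 MHz, one transfer per clock. */
    printf("single pumped parallel: %.0f MB/s\n",
           peak_bytes_per_sec(32, 33e6, 1) / 1e6);

    /* The same bus, double pumped (data on both clock edges). */
    printf("double pumped parallel: %.0f MB/s\n",
           peak_bytes_per_sec(32, 33e6, 2) / 1e6);

    /* A single serial lane clocked far faster, one bit per transfer. */
    printf("fast serial lane:       %.0f MB/s\n",
           peak_bytes_per_sec(1, 2.5e9, 1) / 1e6);
    return 0;
}
```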
Most computers have both internal and external buses. An internal bus connects all the internal components of a computer to the motherboard (and thus, the CPU and internal memory). This type of bus is also referred to as a local bus, because it is intended to connect to local devices, not to those in other machines or external to the computer. An external bus connects external peripherals to the motherboard.
Network connections such as Ethernet are not generally regarded as buses, although the difference is largely conceptual rather than practical. The arrival of technologies such as InfiniBand and HyperTransport is further blurring the boundaries between networks and buses. Even the lines between internal and external buses are sometimes fuzzy: I²C can be used as either an internal bus or an external bus (where it is known as ACCESS.bus), and InfiniBand is intended to replace both internal buses like PCI and external ones like Fibre Channel.
Bus topology
In a network, the master scheduler controls the data traffic. If data is to be transferred, the requesting computer sends a message to the scheduler, which puts the request into a queue. The message contains an identification code which is broadcast to all nodes of the network. The scheduler works out priorities and notifies the receiver as soon as the bus is available.
The identified node takes the message and performs the data transfer between the two computers. Once the data transfer is complete, the bus becomes free for the next request in the scheduler's queue.
The benefit of a bus is that any computer can be accessed directly and messages can be sent in a relatively simple and fast way; the disadvantage is that a scheduler is needed to assign frequencies and priorities to organize the traffic.
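A toy sketch of this central-scheduler idea follows: requests are queued, and when the bus becomes free the scheduler grants it to the highest-priority pending request. The node identifiers, priorities, and queue size are invented for illustration:

```c
#include <stdio.h>

#define MAX_REQUESTS 8

struct request {
    int node_id;    /* which node wants the bus    */
    int priority;   /* larger number = more urgent */
};

static struct request queue[MAX_REQUESTS];
static int queue_len = 0;

/* A node asks the scheduler for the bus; the request is queued. */
void submit_request(int node_id, int priority)
{
    if (queue_len < MAX_REQUESTS)
        queue[queue_len++] = (struct request){node_id, priority};
}

/* Called when the current transfer finishes and the bus is free:
 * grant the bus to the highest-priority pending request. */
int grant_next(void)
{
    if (queue_len == 0)
        return -1;                        /* bus stays idle            */
    int best = 0;
    for (int i = 1; i < queue_len; i++)
        if (queue[i].priority > queue[best].priority)
            best = i;
    int node = queue[best].node_id;
    queue[best] = queue[--queue_len];     /* remove it from the queue  */
    return node;
}

int main(void)
{
    submit_request(3, 1);
    submit_request(7, 5);
    printf("bus granted to node %d\n", grant_next());  /* node 7 wins */
    return 0;
}
```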
See also: Bus network.
Examples of internal computer buses
Parallel
- ASUS Media Bus proprietary, used on some ASUS Socket 7 motherboards
- CAMAC for instrumentation systems
- Extended ISA or EISA
- Industry Standard Architecture or ISA
- Low Pin Count or LPC
- MicroChannel or MCA
- MBus
- Multibus for industrial systems
- NuBus or IEEE 1196
- OPTi local bus used on early Intel 80486 motherboards.
- Peripheral Component Interconnect or PCI
- S-100 bus or IEEE 696, used in the Altair and similar microcomputers
- SBus or IEEE 1496
- VESA Local Bus or VLB or VL-bus
- VMEbus, the VERSAmodule Eurocard bus
- STD Bus for 8- and 16-bit microprocessor systems
- Unibus
- Q-Bus
Serial
- 1-Wire
- HyperTransport
- I²C
- PCI Express or PCIe
- Serial Peripheral Interface Bus or SPI bus
- FireWire, i.Link or IEEE 1394
Examples of external computer buses
Parallel
- Advanced Technology Attachment or ATA (aka PATA, IDE, EIDE, ATAPI, etc.), disk/tape peripheral attachment bus (the original ATA is parallel, but see also the more recent Serial ATA)
- HIPPI, the High Performance Parallel Interface
- IEEE-488 (aka GPIB, General Purpose Interface Bus, and HP-IB, Hewlett-Packard Interface Bus)
- PC card, previously known as PCMCIA, much used in laptop computers and other portables, but fading with the introduction of USB and built-in network and modem connections
- SCSI Small Computer System Interface, disk/tape peripheral attachment bus
Serial
- USB Universal Serial Bus, used for a variety of external devices
- Serial Attached SCSI and other serial SCSI buses
- Serial ATA
- Controller Area Network ("CAN bus")
- EIA-485
- FireWire