Talk:Peripheral Component Interconnect

From Wikipedia, the free encyclopedia

Some More Specifications

Since the PCI-SIG offers the specs only to its (paying?) members, it would be nice if some more information could be included here. Of course such an article cannot substitute for the lack of documentation, and people serious about PCI will be PCI-SIG members anyway. But for hobbyists and artists, that is, people who do not need full technical coverage but would like a somewhat more factual overview of the topic, it would be nice to have more extended information about the technology. Personally I would use the physical size specifications (I am designing a 3D model of a motherboard in a CAD program), and I guess that some electronics hackers might be interested in a link to (or publication of) the signal paths on the bus.

PCI-X

Is it possible to put a "64 bit PCI-X" card in a "normal" 32 bit PCI slot? Will it work properly?

For the most part, yes. Snickerdo 01:46, 8 October 2005 (UTC)
If it will fit, it should work; the main issue is likely to be other components getting in the way of the overhanging part of the card. Plugwash 19:55, 11 January 2007 (UTC)
The 32-bit part of the slot has a REQ64# signal line, which is used by the board to tell the card whether it's in a 32-bit or 64-bit slot. Ranma 16:09, 9 February 2007 (UTC)

PCI 64

Does anyone know anything about PCI 64? Is it a homonym for some other spec? I can't find any mention of it on the PCI SIG site, but Adaptec (amongst others) label some of their products as PCI 64. Mr. Jones 12:02, 17 Mar 2005 (UTC)

I'm guessing that means 64-bit PCI. 64-bit PCI slots are longer than 32-bit PCI slots, to hold 32 extra pins. --DavidCary 04:06, 21 Jun 2005 (UTC)

Forth, PCI and OpenFirmware

Perhaps someone could elaborate on the Forth boot code that was originally specified in the PCI standard?

I didn't think OpenFirmware, which is surely what you mean, was originally, or is, part of the PCI standard. It was common on Sun workstations and Apple Macs (and still is) and those machines also used PCI busses. But I don't think that either standard was included in the other. --drj

Well, according to 'Chapter 18: Expansion ROMs' of 'PCI System Architecture' (Third Edition, covering PCI 2.1, 1997), the PCI standard defines Expansion ROMs to some extent, including a 'Code Type' field (should be byte 16 after the "PCIR" signature): "This one-byte field identifies the type of code contained in this image as either executable machine language for a particular processor/architecture or as interpretive code. A value of 00h indicates Intel ix86 (IBM PC-AT compatible) executable code, while 01h indicates interpretive code. The Open Firmware standard (reference IEEE standard 1275-1994) is used for interpretive code. The values from 02h through FFh are reserved. A basic description of the Open Firmware standard can be found at the end of this chapter". It is also possible to have "Multiple Code Images Contained In One Device ROM", which allows a device to contain _both_ x86 and Open Firmware versions in one ROM. Ranma 16:06, 9 February 2007 (UTC)
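The layout described above can be sketched as a small parser. This is an illustration, not an authoritative implementation: the pointer to the PCI data structure at ROM offset 0x18 and the Code Type byte 16 bytes after the four-byte "PCIR" signature follow the description quoted from the book, and the fake ROM image is fabricated purely for demonstration.

```python
# Sketch of locating the Code Type byte in a PCI expansion ROM image.
# Offsets follow the book description quoted above; illustrative only.
CODE_TYPES = {
    0x00: "Intel x86 (PC-AT compatible)",
    0x01: "Open Firmware (IEEE 1275-1994)",
}

def rom_code_type(rom: bytes) -> str:
    assert rom[0:2] == b"\x55\xaa", "not an expansion ROM image"
    # The ROM header holds a little-endian pointer to the PCI data structure.
    pcir = rom[0x18] | (rom[0x19] << 8)
    assert rom[pcir:pcir + 4] == b"PCIR", "PCI data structure signature missing"
    # Code Type is byte 16 after the 4-byte "PCIR" signature (offset 0x14).
    code_type = rom[pcir + 0x14]
    return CODE_TYPES.get(code_type, "reserved (%#04x)" % code_type)

# Build a minimal fake 64-byte image with an x86 code type for demonstration.
image = bytearray(64)
image[0:2] = b"\x55\xaa"                          # expansion ROM signature
image[0x18:0x1a] = (0x20).to_bytes(2, "little")   # PCIR structure at offset 0x20
image[0x20:0x24] = b"PCIR"
image[0x20 + 0x14] = 0x00                         # x86 code image
print(rom_code_type(bytes(image)))                # Intel x86 (PC-AT compatible)
```

A real multi-image ROM would chain such structures, with an indicator byte marking the last image.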

Any thoughts on PCI 2.3?

Somebody should also find out whether PCI 2.3 motherboards DO support 5V cards, and then write about it. There seems to be a big confusion about the correct answer to that question. Not even PCI-SIG themselves write clearly about this on their site. Or do they? At least the reality among PCI 2.3 motherboards, supporting or not supporting 5V cards, does not seem to be clear.

AFAICT there is no such thing as a universal PCI slot (unlike with AGP), because all the cards must sit on one bus. 3.3V slots only seem to be seen on high-end motherboards that support 66 MHz 64-bit PCI, and all cards I've seen are keyed for either 5V-only or universal. 130.88.116.241 15:57, 5 February 2007 (UTC) Plugwash 16:45, 5 February 2007 (UTC)

Dell - 64 bit?

I've just opened up a Dell server, and it has extended PCI slots. Are these 64-bit PCI slots, or something proprietary? If so, a photo here would be nice. Rich Farmbrough 17:46, 8 Feb 2005 (UTC)

Almost certainly 64-bit slots. Plugwash 15:46, 7 March 2007 (UTC)

Speed

So PCI 2.2 allows for a "peak transfer" of "533 MB/s"? [1] says that you will be limited to 30 MB/s to 50 MB/s on a non-X/Express PCI. --Jerryseinfeld 14:24, 18 Mar 2005 (UTC)

They're talking about 32-bit 33 MHz slots. 64-bit 66 MHz slots, even if not PCI-X standard, will still get considerably more bandwidth than the standard 32-bit 33 MHz slot found in most desktop PCs. Snickerdo 01:48, 8 October 2005 (UTC)

It is incorrect to say speed here; it's capacity, something totally different. The whole template in the article is wrong. UPDATE: The template is now fixed.

Universal PCI

The page should definitely have some info on Universal PCI, i.e. PCI cards that accept both 5 V and 3.3 V. (June 22, 2005)

PCI electrical power ratings

Can someone add this info? How much current can a PCI card draw?

Aha, I found these two mail messages: http://www.pcisig.com/reflector/msg05240.html http://www.pcisig.com/reflector/msg05243.html

Can someone add this to the article?

--195.250.201.212 13:07, 8 October 2005 (UTC)

Low Profile & Half-height PCI

- There is no mention of 'Low Profile' PCI cards. (anon)
- Could the half-height dimensions be added to 'Size of PCI extension card'? -aug 25 '06. (anon)

I have attempted to cover Low Profile PCI specs, which I believe is the same thing as "half height". (It's not actually half; more like 2/3.) I added links to the PCI specs I got this from, which are difficult to interpret. Had to figure this out for our own PCI card... Aaron Lawrence 04:55, 12 July 2007 (UTC)

Rates

This site gives a rate of 528 decimal MB/s, not 533. Someone who knows these well, please double-check. — Omegatron 15:46, 29 December 2005 (UTC)

I'm a bit confused here too. AFAIK, 1 KB = 1024 bytes, 1 MB = 1024 KB. In the paragraph "Conventional PCI bus specifications", the calculation is "33.33 MHz × 32 bits ÷ 8 bits/byte = 133 MB/s", but that's really just 133.32 million bytes per second. So, if you convert this to MB according to the rates above, shouldn't you get 127 MB/s?
ADude, 11 May 2006
Yeah, your math is correct. It's the same conspiracy again: hard drive manufacturers saying 1.0 GB when they really mean 1,000,000,000 bytes. Rmcii 04:47, 12 May 2006 (UTC)
It's not a conspiracy. The only data capacity where it makes sense to measure in base-2 multiples is RAM, when you're talking about row/column based binary addressing. From an engineering point of view it does NOT make sense to measure a hard disk in base-2 multiples, because an engineer will want to know the true number of sectors × bytes! This is because the natural, intrinsically sequential nature of that storage medium is NOTHING LIKE addressing RAM - it literally is just a big stream of bits. Likewise with serial communications specifications - it makes NO LESS SENSE to count the number of bits coming out in base-10 than it does to do so in the base-2 multiples that software types seem to cling to so much... in fact, from an engineering point of view, base-2 multiples would be very annoying (there are many interesting calculations that can be done with base-10 multiple specs, but base-2 multiples distort the true number of "symbols" we're trying to specify and have to be converted back to base-10 multiples every time!). 59.167.116.194 15:49, 2 September 2006 (UTC)
The 133 MB/sec is a reflection of the nature of the medium for the engineer's benefit, not the consumer's. It's not so clear-cut in this case: should they have expressed the bandwidth in Mbit instead of Mbyte, as most comms specs are? The answer is no, that wouldn't make sense; this is a parallel data bus 32 bits wide, not some 2-wire serial interface or radio transmission... so unfortunately, it's one of those grey areas where you want to talk about ordinary communications bandwidth, but the parallel nature of the data bus is designed from the point of view of getting whole words at a time from point A to point B. In this case, my engineering training would say that the solution is to express the bandwidth in baud, aka "symbols per second" - each "symbol" being one 32-bit word, in which case you're left with 33.33 million baud. But then this doesn't help understand the data transfer problem any better - anyway, my point is there is no conspiracy ;-) 59.167.116.194 15:49, 2 September 2006 (UTC)
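The arithmetic the thread keeps circling can be made explicit in a few lines; both figures come from the same byte rate, just divided by decimal versus binary megabytes:

```python
# PCI 32-bit / 33.33 MHz peak rate, expressed in decimal MB and binary MiB.
clock_hz = 33_333_333   # 33.33 MHz bus clock
bus_bits = 32           # one 32-bit word transferred per clock

bytes_per_sec = clock_hz * bus_bits // 8
decimal_mb = bytes_per_sec / 1_000_000      # MB as used in bus specs
binary_mib = bytes_per_sec / (1024 * 1024)  # MiB as used for RAM sizes

print(f"{decimal_mb:.1f} decimal MB/s")   # 133.3 decimal MB/s
print(f"{binary_mib:.1f} binary MiB/s")   # 127.2 binary MiB/s
```

So the article's 133 MB/s and the questioner's 127 MB/s are the same number under two different definitions of "MB".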

PCI Add-On

There should be a part of this page telling people what can be put in the PCI slots. That would make life easier for people wanting to know more about PCI. Alkady 16:44, 5 February 2006 (UTC)

I'll second that. I can think of sound cards (for audiophiles, or back when AC'97 wasn't a standard MoBo feature), Ethernet cards (again, before it was a standard), wireless cards, RAID controllers, SCSI cards... But that's it. There's more, right? 64.238.49.65 17:26, 25 September 2007 (UTC)
Ethernet and sound cards are still made and, while not as common as they once were due to motherboard integration, are still pretty easy to get and come in a wide range of specifications. There is a wide variety of disk controller cards, ranging from cheap and cheerful IDE/SATA controllers (usually with fakeraid) to high-end SCSI and Fibre Channel controllers, some of which have hardware RAID. 56K modem PCI cards are extremely common (makers of desktop motherboards didn't want to bother with the bureaucracy of certifying their boards for connection to telephone lines; attempts to work around this with special risers have been tried but have not had much success). Some graphics cards are still PCI (and at one time, in the Pentium era, almost all were). Then there are cards for most common interfaces (serial, parallel, USB, FireWire), video capture cards, data acquisition cards and many, many other more specialist cards. Plugwash 22:59, 26 September 2007 (UTC)

BDF/Bus enumeration

I've added some info pertaining to the bus/device/function concept and bus enumeration process to PCI Configuration Space, but I question if the information should remain there or be merged into this article. Does anyone have an opinion or preference? Rmcii 04:51, 12 May 2006 (UTC)
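For readers landing here, the bus/device/function (BDF) triple mentioned above is conventionally packed into a single 32-bit configuration address, as used by the legacy x86 0xCF8/0xCFC access mechanism. The sketch below shows the commonly documented field layout; verify the bit positions against the PCI specification before relying on them:

```python
# Sketch: packing a bus/device/function (BDF) triple plus a register offset
# into the configuration-address word written to I/O port 0xCF8 on x86.
# Field layout (enable bit 31, bus 23:16, device 15:11, function 10:8,
# register 7:2) follows the conventional description of mechanism #1.

def config_address(bus: int, device: int, function: int, register: int) -> int:
    assert 0 <= bus < 256 and 0 <= device < 32 and 0 <= function < 8
    assert register % 4 == 0 and register < 256   # dword-aligned config offset
    return (1 << 31) | (bus << 16) | (device << 11) | (function << 8) | register

# Device 3, function 0 on bus 0, addressing the vendor/device ID dword at 0.
addr = config_address(bus=0, device=3, function=0, register=0)
print(hex(addr))  # 0x80001800
```

Enumeration then amounts to probing every BDF combination and treating a vendor ID of 0xFFFF as "no device present".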

Versions compatibility?

Do PCI v2.2 cards work in PCI v2.0 slots? And vice versa?

xerces8 --213.253.102.145 13:03, 29 May 2006 (UTC)

AFAICT the important thing is the voltage: if a card supports the correct voltage (as indicated by its keying) for a slot, then it should work in that slot. I'm pretty sure, though, that putting a slower card in a faster slot will bring the whole bus down to the speed of the slowest card (this is one reason why high-end motherboards tend to have multiple PCI buses). Plugwash 20:30, 2 February 2007 (UTC)

Specification table

I think a table of specifications of the different PCI versions would be nice.

Why are PCI slots 'backwards'?

In 24 years of working with computers, mostly PCs and clones, I've often wondered why PCI, MCA, AGP and other slots are 'backwards' when compared to the ISA slot. That is, when you compare an ISA board to the others, the components are on the opposite side of the board. The 'tower' PC case style predates the introduction of PCI and the later bus slots, so it was logical to tip the case up on its left end, with the motherboard on the right side. That would let convection easily remove heat from ISA peripheral board components. But with any non-ISA cards the components are on the bottom side in a tower case, trapping heat under the cards where it can't be quickly removed by natural convection.

In all these years I've seen exactly ONE sanely designed tower PC case for a board having no ISA slots. It had the motherboard on the left side so the components on the PCI and AGP cards were on the top side, the drives were close to the bottom of the case- and thus close to their interface connectors, and the power supply was located at the bottom where its cable could connect via as short and direct a path as possible without the thick bundle of wires being in the way of anything, since the ATX power connector was where it should be, right up close to the onboard port connectors. Some people commented that it 'looked weird' with the CD-ROM drive down so low, but I liked its logical design.

I think the reason was to allow for "shared slots", where a PCI and an ISA connector would be placed hard up against each other, allowing the user to choose either one for the plate position, but I don't know for sure. Plugwash 01:43, 1 January 2007 (UTC)

Are 5V-only slots and cards still the norm?

Initially I thought that the 5V comment in the infobox was wrong, but having looked at a box (no, it's not mine; it just happened to be being fixed at the time I dropped by), I noticed that the 32-bit section of the 64-bit slots was notched the opposite way to the 32-bit slots (and every normal PCI card I've seen). This has led me to believe that normal 32-bit PCI slots are still 5V (despite what the latest versions of the PCI standard say). Can anyone clarify, and should this be mentioned in the article? —The preceding unsigned comment was added by Plugwash (talkcontribs) 00:52, 13 January 2007 (UTC).

Photo request

IMO what would be really good is a picture showing multiple slot types (ideally all four, but even three would be nice) in context with the case and/or the whole motherboard (because 3.3V and 5V slots are keyed the reverse of each other). Plugwash 02:12, 17 February 2007 (UTC)

Add-in card dimensions from PCI Express Spec

From PCI Express Electromechanical Specification, Rev 1.0a

Full size: 111.15 x 312.00*
Half-length: 111.15 x 167.65
Low profile: 68.90 x 167.65
*It is strongly recommended that standard height add-in cards be designed with a 241.30 mm maximum length.

From table 6-1 on page 67. Dimensions are height x length in mm. Height is from the top of the card to the bottom of the fingers. Dimensions do not include the bracket. —Ryan 05:00, 8 June 2007 (UTC)

PCI Hotplugging

If PCI doesn't support hotplugging (as per this article), why does the Linux kernel include support for it? --CCFreak2K 00:57, 4 July 2007 (UTC)

The option is there in the spec, though rarely implemented (I think some high-end server hardware has it), and a very common derivative of PCI (CardBus) also has hotplugging support. Plugwash 22:48, 28 July 2007 (UTC)

Edge v. level-triggered

The article says, "level-triggered ... was chosen over edge-triggering... However, this efficiency gain comes at the cost of flexibility, and one interrupting device can block all other devices on the same interrupt line."

However, that's not correct. There is no loss of flexibility from using level-triggering. Using edge-triggering for a shared interrupt line is simply an unambiguous design blunder. There's no trade-off.

Also, the statement that with level triggering one interrupting device can block all other devices on the same interrupt line is untrue. That's the case for edge-triggering, not level-triggering. With level-triggering, if ANY device requests service, the CPU will be interrupted. With edge triggering, if one badly broken device asserts the interrupt line continually, it will block all other devices on the same interrupt line, and NO interrupts will be seen. (Either way, the system is badly broken, of course, but it is wrong to call this a "cost" for level-triggering.)

In the case of a device that is so thoroughly broken that the interrupt request line is simply "stuck" in the active state, the effect depends on the interrupt triggering mode: With level triggering the result is continual interrupts, and with edge triggering the result is no interrupts at all, but either way the result is a broken system.

If two or more devices share an interrupt line, then the bus must be using "wired-OR" logic (e.g., active-low, open-collector TTL). That's simply a result of the fact that the interrupt line is shared, regardless of whether it is edge or level-triggered. So if one device asserts the interrupt line continually, there will be no "edges," and if it is an edge-triggered design there will be no interrupts.

The enormous advantage of level-triggered interrupts is that (unless some device is very severely malfunctioning) one device cannot interfere with another. Nothing that one properly working device does can cause another device's interrupt to be lost.

With edge-triggered interrupts, race conditions WILL cause interrupts to be missed, even if everything is functioning properly. If a second device should request service before the first device's interrupt pulse has ended, there will be only one edge, and only one interrupt will occur. The second device's interrupt is simply missed. So even after determining which device needs service, subsequent to servicing that request the OS must check for ALL other devices which could have requested service at (approximately) the same time, because their interrupt(s) might have been missed. This adds software complexity, and incurs considerable additional processing overhead, and lengthens interrupt processing latency.

But with level-triggered interrupts, interrupts can never be missed. After servicing a device, the ISR can simply exit, confident that if another device is also already requesting service, or is about to do so, then another interrupt will promptly occur.

There is no trade-off. There is no advantage whatsoever to using edge-triggered interrupts. Level-triggering is the ONLY rational design for a shared interrupt line. That's why edge triggering isn't used for shared interrupt lines. NCdave (talk) 11:09, 22 December 2007 (UTC)
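The race described above can be demonstrated with a toy model of a shared, wired-OR interrupt line. The timing values are fabricated for illustration and are not PCI-accurate; the point is only that an overlapping second request produces no new edge:

```python
# Toy model: two devices share one wired-OR interrupt line. Device A asserts
# during ticks 0..3 and device B during ticks 2..5, so B's request begins
# while the line is already high.
a = [1, 1, 1, 1, 0, 0, 0]
b = [0, 0, 1, 1, 1, 1, 0]
line = [x | y for x, y in zip(a, b)]   # wired-OR: high if any device asserts

# Edge triggering fires only on a 0 -> 1 transition of the shared line:
edges = sum(1 for prev, cur in zip([0] + line, line) if cur and not prev)
print(edges)  # 1 interrupt for 2 requests: device B's request made no new edge

# Level triggering: after the ISR services A and exits at tick 4, the line
# is still high because B is asserting, so the CPU is interrupted again.
still_pending_after_a = any(b[4:])
print(still_pending_after_a)  # True
```

This matches the argument above: with edge triggering the second request is silently lost, while with level triggering the still-asserted line guarantees another interrupt.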

PCI bus speed

According to the article the PCI bus speed is (33.3 MHz × 32 bits) = 133 MB/sec. Actually 33.3 MHz × 32 bits yields ~127 MB/sec. —Preceding unsigned comment added by Arnob1 (talkcontribs) 01:44, 31 January 2008 (UTC)