Talk:Central processing unit/Archive 2
From Wikipedia, the free encyclopedia
Visuals
I'd like to add some more visuals to the new sections I'm writing for modern CPU design considerations. However, I'd like to do so without resorting to block diagrams. No offense to block diagrams, but they just aren't visually arresting nor something that the non-engineer/non-programmer would likely be interested in looking over. Any suggestions on interesting visuals to provide for ILP and TLP? I'm leaning towards just inserting an image of a monster like POWER5 or Sun's UltraSPARC T1 into the article for the latter :) -- uberpenguin 05:33, 11 December 2005 (UTC)
- Tried adapting the style of diagram Jon Stokes (Hannibal) uses for this purpose? -- Wayne Hardman 16:10, 23 January 2006 (UTC)
To add, or not?
I've been toying with the idea of including some sort of discussion of CPU cache design and methodology as well as some blurb about RISC vs CISC. However, I keep coming back to a couple of major mental blocks. First, the article is already fairly lengthy, and I'm afraid to add any more major sections for risk of making it too all-inclusive. Second, I want to keep the article as close as possible to a discussion of STRICTLY CPUs (not computers, not memory, not peripherals, not software), and I somewhat feel that RISC vs CISC is an ISA topic first and foremost and should be covered in discussions of ISA design, not CPU design. Finally, the section discussing the motivations for and function of superscalar architecture does very briefly touch on why CPU caches are necessary for very deep pipelines, so I'm tending to believe that a lengthy diversion into cache methodology would be overspecific and largely detract from the flow of the article. Input is appreciated on any or all of these points! -- uberpenguin 06:16, 11 December 2005 (UTC)
- The cache subject really belongs more in the memory arena than in the CPU area; cache is a memory function despite the fact it is coupled to the CPU design and often is on the same chip as the CPU core. Thus it doesn't seem like it should have more than a mention in passing in this article. RISC/CISC is really an ISA issue. While it certainly affects the details of a CPU design (and vice versa), it is a more detailed and somewhat separate and divorced subject. As you say, the article is already large, and thus shouldn't be expanded with these topics IMHO. Also, note Wikipedia has other articles which can and do treat these. -R. S. Shaw 01:08, 13 December 2005 (UTC)
- Yeah, I have long since dropped the idea of adding much of a discussion of cache here. -- uberpenguin 16:00, 5 March 2006 (UTC)
-
CPU Clustering
Hey,
I'm thinking that something on clustering should be added to this article. What do you guys think? --ZeWrestler Talk 16:56, 12 December 2005 (UTC)
- The article already touches on SMP and NUMA as TLP strategies. Cluster computing is a systems-level design approach and really has nothing to do with CPU design at all. -- uberpenguin 20:02, 12 December 2005 (UTC)
- Ok, thanks. --ZeWrestler Talk 16:45, 13 December 2005 (UTC)
Nitpicking
A possibly overly pedantic remark: The article says "The first step, fetch, involves retrieving an instruction (which is a number or sequence of numbers) from program memory.". I would actually say that the instruction is represented by a number rather than that it is a number. That is, the instruction is a conceptual entity and not a concrete one. When for example a processor manual refers to the ADD instruction we think of it as something other than a number. Or would it be confusing to make this distinction? -- Grahn 03:39, 15 December 2005 (UTC)
- I can't see how changing the phrasing from 'which is a...' to 'which is represented by...' could cause more confusion. However, while I do agree with you that an instruction is more conceptual than concrete, I think that for all intents and purposes an instruction IS a number in almost every major device that could be called a CPU. How does one gauge binary compatibility of two CPUs? Is it not whether the instructions (numbers) used in software can be interpreted successfully for the same end result by both? One could hardly argue that two CPUs are binary/ISA compatible if they used the exact same (conceptual) instructions but used different numbered opcodes, a different signed representation, a different FP system, etc.
- On the other hand, as I said earlier, your suggested phrasing really doesn't make the text more confusing, so I'll modify it per your suggestion. -- uberpenguin 04:25, 15 December 2005 (UTC)
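To make the "an instruction is a number" point concrete, here is a minimal sketch in C using a purely hypothetical 16-bit encoding (4-bit opcode, two 4-bit register fields; the opcode value and field layout are invented for illustration, not taken from any real ISA) of how a fetched word is just a number that the decode step picks apart:
 #include <stdio.h>
 #include <stdint.h>

 /* Hypothetical 16-bit instruction word: bits 15-12 opcode, 11-8 dest reg, 7-4 src reg, 3-0 unused. */
 #define OP_ADD 0x3   /* made-up opcode number for ADD - another ISA could pick a different one */

 static void decode(uint16_t insn) {
     unsigned opcode = (insn >> 12) & 0xF;
     unsigned rd     = (insn >> 8)  & 0xF;
     unsigned rs     = (insn >> 4)  & 0xF;
     if (opcode == OP_ADD)
         printf("ADD r%u, r%u  (encoded as the number 0x%04X)\n", rd, rs, (unsigned)insn);
 }

 int main(void) {
     uint16_t insn = (OP_ADD << 12) | (1 << 8) | (2 << 4);  /* "ADD r1, r2" is simply 0x3120 */
     decode(insn);
     return 0;
 }
Two CPUs could implement the same conceptual ADD and yet assign it different opcode numbers, which is exactly why binary compatibility hinges on the numbers and not just on the concepts.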
I think it would be good if there were examples of where elements of the Harvard architecture are commonly seen as well. I am not quite sure where this is the case, except maybe in the segments in the Intel 80x86 processors, and in embedded systems where the code is in ROM and the data is in RAM (but I am not sure that this counts, since the RAM and ROM share the same address bus). Gam3 05:00, 27 December 2005 (UTC)
- Superscalar cache organization and some SIMD designs, mostly. This article isn't about cache (as discussed above), so the former is out. I thought the latter was a bit too specific to cover in much detail, but I didn't want to altogether ignore it; thus the single brief sentence hinting that Harvard still pops its head up from time to time. -- uberpenguin 22:51, 30 December 2005 (UTC)
Integrated circuit CPUs between discrete transistor CPUs and microprocessors
There wasn't a transition directly from discrete transistor CPUs to microprocessors; CPUs were built from SSI, MSI, and LSI integrated circuits before the first microprocessors. I'd rename "discrete transistor CPUs" to "discrete transistor and integrated circuit CPUs", or something such as that, and mention that the discrete transistors were replaced by ICs; I'd argue that transition wasn't as significant as the transition to microprocessors, so I'm not sure I'd give it a section of its own. Guy Harris 09:23, 25 December 2005 (UTC)
- True true... I got kinda lazy and omitted this for brevity. I'll try to work it in somehow. -- uberpenguin
-
- Okay I reworked that section, and I think it's a lot more accurate and understandable now. Let me know what you think. -- uberpenguin 23:48, 25 December 2005 (UTC)
You also skipped the bitslice technology that the late-1970s and later minicomputers used. While the chip fabrication technology of the time could support only very simple complete CPUs, similar technology was used to create multi-chip CPUs of much higher power. Cf. the AMD 2900-series devices. --Shoka 17:53, 4 March 2006 (UTC)
- I'm pretty sure I mentioned multi-IC CPUs very briefly. I'm not sure that there is a whole lot to be said about them from an architectural standpoint, though. -- uberpenguin 16:04, 5 March 2006 (UTC)
new inline citation
Check this out; it might make our lives slightly easier, in particular with citing sources. --ZeWrestler Talk 17:28, 31 December 2005 (UTC)
64-bit arithmetic on 32-bit processors, etc.
"Arbitrary-precision arithmetic" generally refers to bignums, and the arbitrary-precision arithmetic page does so as well; that's not, for example, how 32-bit C "long"s were done on 16-bit processors, and it's not how 64-bit "long longs" are done on 32-bit processors, so that really doesn't fit in "arbitrary-precision arithmetic". For example, it only takes two instructions on PPC32 to add 2 64-bit values, each stored in two registers (addc followed by adde on PPC32), and it only takes two instructions to load the two halves into registers - the equivalent ops are equally cheap on x86 - and it's typically done in-line, so it's arguably misleading to refer only to bignums when discussing that case. Guy Harris 22:51, 12 January 2006 (UTC)
- True, I just didn't want either the footnote or the text to get too bulky on a side point... Several folks who prowl FAC are pretty ardently against great elaborations that could conceivably be moved to another article. I'll change the text a bit to mention hardware big int support. --uberpenguin 23:12, 12 January 2006 (UTC)
Square wave pic
I find the picture labeled "Approximation of the square wave..." slightly misplaced; the subject of the article isn't at all related to Fourier series, and CPU clock signals are generated by other means. I suggest that it be removed or replaced, and it would be great if someone would bother to draw one that shows rising and falling edges (would aid understanding of DDR memory, for instance).
Daniel Böling 14:07, 25 January 2006 (UTC)
- Agree totally. It was always a bit out of place; I was grasping at straws to find a picture for that section, had that extra one lying around and stuck it in "temporarily." What would really be neat/appropriate there is a picture of a logic analyzer hooked up to some small microprocessor like a Z80... If nobody else has something like this I'll see if I can set it up sometime soon. -- uberpenguin 22:52, 29 January 2006 (UTC)
-
- Ehh... Well the logic analyzer with the counter is an improvement for now. I'll see if I can find or make something better in the future. -- uberpenguin 02:48, 3 March 2006 (UTC)
fundamental concepts
lead sentence
shouldn't it be "the component in a digital computer that interprets instructions contained in software and processes data."? Doldrums 07:57, 4 March 2006 (UTC)
- Well, the current sentence isn't incorrect since software contains both instructions and data. And when other data is loaded from disk into RAM it can be seen as becoming part of the software. Redquark 11:00, 4 March 2006 (UTC)
- Thanks to the VN architecture, there's usually little differentiation between data storage and instruction storage areas (except in some limited cases like superscalar cache). Therefore you see whole classes of instructions (like the "immediate" instructions) that contain both an operational message as well as data, which are read simultaneously upon execution. I think the original phrasing is very much correct from a literal standpoint. However, conceptually and perhaps theoretically, it's better to think in the terms you mention. Plus the extra word really doesn't harm anything, so it might as well stay... -- uberpenguin @ 2006-05-19 14:51Z
Comments on changes
I have largely reverted the changes to bolding and image spacing. It baffles me why someone would feel that stacking images on top of each other (thereby removing their correct contextual position in the text) on the right side is more pleasing than having the thumbnails in the correct context and balanced on both sides. I've therefore reverted back to my original image layout; if you have problems with that please discuss them here before changing anything.
On the subject of bolding, which has been discussed before in this article's FA nomination: the reason certain terms are bolded is because they are key to understanding subsequent text and are often unique or have unique meaning to the computer architecture field. I concede that some terms were unnecessarily bolded, so I've cut down the bolding to what I believe to be fundamental terms in the article.
I also removed the out of place blurb about Mauchly and the Atanasoff-Berry Computer that was added by an anonymous user. The text had absolutely no relevance to this article and I'm surprised it wasn't removed by somebody else. -- uberpenguin 14:38, 4 March 2006 (UTC)
- I disagree heartily with the image reversions. Whether some people (like me) prefer to right-justify and others prefer to scatter is up to argument. However, no image should be 350px. The usual on articles now, particularly featured articles, is 250px. Páll (Die pienk olifant) 15:08, 4 March 2006 (UTC)
-
- The norm for pictures is |thumb|, which sets the image size as per user preferences (but defaults to the rather small 180px wide), and the norm for diagrams is whatever size they're readable at. However, as long as the page works at 800x600 (which this does currently), it's whatever is appropriate for the page. This page has a much higher text-to-image ratio than Zion National Park, so it makes sense to have them a bit larger. Wouldn't hurt setting most of them to user-defined size though; the assumption is always that people will click on the pic if they want to see the detail. --zippedmartin 17:06, 4 March 2006 (UTC)
-
- While I do respect your position, per zippedmartin's comments, could we possibly defer to my preference here? Where I'm not going against current WP recommended styling I'd just rather have it "my way" in an article I wrote. I know it's a selfish motive, but it's just more pleasing for me to see it this way and the issue is little more than editor preference. -- uberpenguin 04:43, 5 March 2006 (UTC)
heat sinks
The link to heat sink was removed from See also, with the comment "heat sinks have nothing to do with a discussion of CPUs directly." This statement is contrary to the following: "Heat sinks are widely used in electronics, and have become almost essential to modern central processing units. ... Due to recent technological developments and public interest, the market for commercial heat sink cooling for CPUs has reached an all time high" (from heat sink) Shawnc 21:43, 4 March 2006 (UTC)
- I stand by my statement. You can make the argument that a discussion of thermal dissipation is key to nearly any significant engineering (and especially electrical engineering) scheme, but as it stands a CPU is a functional device, not necessarily a physical implementation. Thermal issues could justifiably be discussed in an article about, say, integrated circuits, but not here in an article about a functional device with many forms of implementation. "See also" could easily get out of hand if we included every article that could be logically connected to CPUs and their various incarnations and implementations. As it is, it should keep to topics that very directly relate to CPUs in the architectural sense. -- uberpenguin 02:44, 5 March 2006 (UTC)
-
- Also, upon reading the heat sink article, I think its deplorable state makes it a bad example to use here. If a layman read that, they might think that microelectronics are the singular application of thermal management devices. -- uberpenguin 04:47, 5 March 2006 (UTC)
-
- In other words, "already covered in CPU cooling." Alright. Shawnc 03:24, 6 March 2006 (UTC)
- That too! -- uberpenguin 05:26, 6 March 2006 (UTC)
-
- I did some substantial work on the thermal grease article the other week. I have a few ideas and will see what I can do to further these secondary articles along. Thermal dissipation is a way of life here in the Phoenix Valley! ;-) -- Charles Gaudette 19:35, 4 June 2006 (UTC)
digital is not base-2
Digital does not mean base-2; that is what binary means. A bit is a binary digit, which is not a pleonasm. MarSch 13:54, 5 March 2006 (UTC)
- Yes, I'm fully aware of that. Unfortunately there is an anonymous editor that is hellbent on including information about the Atanasoff-Berry Computer in this and other computer related articles. He seems to believe that my removing the text as irrelevant is an attempt to obscure history, but ignores the fact that this article is about CPUs, not early non-stored program computers. I've removed the poorly written and off topic text that he added (again). If/when you see him re-add the text, feel free to remove it as nonsense. -- uberpenguin 15:53, 5 March 2006 (UTC)
-
- Just in case I didn't make it clear; the text that you took issue with was added by the anon editor, not myself. Where I mention bits in the "integer precision" section I was careful to indicate that they are related to binary CPUs only. -- uberpenguin 16:08, 5 March 2006 (UTC)
- Thanks for removing it. I wasn't looking for an addition of a whole section with a picture, so I didn't fix it myself. -MarSch 18:12, 5 March 2006 (UTC)
- As we all know, ALL non-binary computers were failures; as we also know, ALL computers nowadays use the binary system, like the ABC did first!!! As many of you don't know, ENIAC/EDVAC were 10-bit systems, and an industrialized version (derivative) of the ABC, as the US court concluded. That Anonymous user, 5 March 2006. —Preceding unsigned comment added by 71.99.137.20 (talk • contribs)
- Actually, ENIAC used a word length of 10 decimal digits, not 10 bits (binary digits). Its total storage was twenty of them.[1] -- Gnetwerker 17:45, 6 March 2006 (UTC)
- Did you just say "decimal"??? 71.99.137.20 17:54, 6 March 2006 (UTC)
- Yes he certainly did. ENIAC was a digital, decimal computer. Digital does NOT imply binary as you claim; that's one of the reasons your added text keeps getting reverted. Digital simply means finite state (as opposed to infinite state - analog), while binary is a base-2 numeral system. Claiming they are one and the same is a fairly severe confusion of terms. -- uberpenguin 18:18, 6 March 2006 (UTC)
- Additionally, the claim that "ALL non binary computers were failures" is absolutely ridiculous. ENIAC provided reliable service for BRL for over ten years, being upgraded significantly a few times (including one upgrade that made it a stored program computer). IBM, Burroughs, and UNIVAC all built several commercially successful computers that used digital decimal arithmetic. UNIVAC I, which is nearly universally considered the first highly commercially successful computer, used BCD arithmetic. Unless you, counter to nearly all computer historians, consider UNIVAC I a failure, you shouldn't wonder why UNIVAC II and UNIVAC III also supported BCD.
- I'm really failing to see the point you are trying to make here. Nobody is arguing that the ABC wasn't influential or wasn't the first digital electronic computer. However, it wasn't stored program and is thus irrelevant to this article. CPUs are Von Neumann machines, so the American history of CPUs really starts somewhere between ENIAC (which was converted to stored-program) and EDVAC (which was stored program by design). -- uberpenguin 18:31, 6 March 2006 (UTC)
- Another example of a successful non-binary computer was the IBM 1620, which was (in its time) a minicomputer specifically intended for scientific calculation. This did all its arithmetic in decimal using lookup tables for multiplication and addition. The default precision for REAL numbers was 8 significant figures, but this could be varied up to 30 figures. Murray Langton 22:02, 7 March 2006 (UTC)
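As an aside, the table-driven idea is easy to sketch. The following C fragment is only an illustration of the principle (not the 1620's actual mechanism; the table layout and function names are invented): decimal digits are added purely by table lookup, with no binary adder in sight:
 #include <stdio.h>

 /* sum_table[c][a][b] = (a + b + c) mod 10, carry_table[c][a][b] = (a + b + c) / 10.
    Built here by a loop; a table-lookup machine would keep such tables in memory. */
 static int sum_table[2][10][10], carry_table[2][10][10];

 static void build_tables(void) {
     for (int c = 0; c < 2; c++)
         for (int a = 0; a < 10; a++)
             for (int b = 0; b < 10; b++) {
                 sum_table[c][a][b]   = (a + b + c) % 10;
                 carry_table[c][a][b] = (a + b + c) / 10;
             }
 }

 /* Add two n-digit decimal numbers (least significant digit first) by lookup alone. */
 static void decimal_add(const int *a, const int *b, int *out, int n) {
     int carry = 0;
     for (int i = 0; i < n; i++) {
         out[i] = sum_table[carry][a[i]][b[i]];
         carry  = carry_table[carry][a[i]][b[i]];
     }
 }

 int main(void) {
     build_tables();
     int a[4] = {9, 9, 4, 1}, b[4] = {3, 2, 0, 0}, r[4];  /* 1499 + 23, digits LSD-first */
     decimal_add(a, b, r, 4);
     printf("%d%d%d%d\n", r[3], r[2], r[1], r[0]);         /* prints 1522 */
     return 0;
 }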
-
stored-program computer - mumbo jumbo
After John Atanasoff was called by the Navy and started the NOL computer project (on the advice of John von Neumann), ENIAC was secretly being built while John Mauchly was somehow participating in both projects without the knowledge of John Atanasoff. And somehow the NOL project was shut down, again on the advice of John von Neumann. And finally John Mauchly admitted in court that he, for instance, had to take a "crash course in electronics" shortly after being introduced to the ABC basics. After the trial, John von Neumann used the "stored-program computer" nonsense like it's the very key to computers. The truth is it's not even close to any of the Atanasoff findings, which he made just in making one version of his computer. Those findings are: the use of the binary base, logical operators instead of counters by using vacuum tubes (transistors), refreshed memory using capacitors, separation of memory and computing functions, parallel processing, and a system clock. 71.99.137.20 17:47, 6 March 2006 (UTC)
- And I put the question to you again: what the heck does this have to do with stored-program CPUs? All devices ever called CPUs were Von Neumann/Harvard machines. I'll let you deal with your weird idea that the stored program concept wasn't a huge milestone in computer development, but the fact is that CPUs are stored program machines. -- uberpenguin 18:39, 6 March 2006 (UTC)
- It's very possible that Mauchly got his ideas on how to do arithmetic from Atanasoff - and that seems to be pretty much what the judge said in the court case - it's not good that he stole ideas and infringed patents. However, no amount of court rulings change the fact that the ABC wasn't a stored program device and ENIAC (eventually) was. Von Neumann certainly did say that the stored program thing is the very key to computers - and he was absolutely 100% correct. If you have to step the machine through the algorithm you want it to perform by hand (as was the case with the ABC) then it's completely analogous to a non-programmable pocket calculator. If you had enough memory and enough time and the right peripherals, the ENIAC could have run Windows 2000, balanced your checkbook and played chess (and any of the other amazing things computers can do). The ABC could no more have done those things than could an abacus or a pile of rocks. That ability to run a program is what makes a computer different from a calculator. If User:71.99.137.20 doesn't/can't/won't understand that then he/she doesn't understand computers AT ALL. You can imagine a computer with just 'nand', 'shift-right', 'store', 'literal' and 'jump if not carry' as its underlying instruction set and 'numbers' that are only 1 bit wide! Amazingly, such a machine is Turing-complete and can therefore (according to the Church-Turing thesis) be used to implement any arithmetic or logic function in software. So here we see something that you HAVE to call a computer because it can (in principle) run Windows, balance your checkbook and play chess - whose hardware can't increment or decrement - let alone add or subtract - and which can only represent numbers as high as 1 and as low as 0! This whole programmability thing isn't just a part of what makes up a computer - it's the ENTIRE thing that is a computer. Computers with things like adders and multipliers and 32 bit floating point only need them to get higher performance. SteveBaker 05:54, 10 March 2006 (UTC)
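To make the "hardware that can't add" point concrete, here is a minimal C sketch (an illustration of the principle only, not of any particular machine) of a one-bit full adder built from nothing but NAND - the kind of routine such a minimal machine would have to carry out in software, bit by bit:
 #include <stdio.h>

 /* Bits are represented as 0 or 1. NAND is the only "hardware" operation used below. */
 static unsigned nand(unsigned a, unsigned b) { return !(a & b); }

 /* One-bit full adder: XOR and the carry logic are built entirely from NAND. */
 static void full_add(unsigned a, unsigned b, unsigned cin,
                      unsigned *sum, unsigned *cout) {
     unsigned t  = nand(a, b);
     unsigned ax = nand(nand(a, t), nand(b, t));     /* ax = a XOR b               */
     unsigned u  = nand(ax, cin);
     *sum  = nand(nand(ax, u), nand(cin, u));        /* sum = a XOR b XOR cin      */
     *cout = nand(t, nand(cin, ax));                 /* carry = ab OR cin(a XOR b) */
 }

 int main(void) {
     unsigned s, c;
     full_add(1, 1, 0, &s, &c);
     printf("1 + 1 + 0 -> sum %u, carry %u\n", s, c);  /* sum 0, carry 1 */
     return 0;
 }
Everything else - shifting, storing and looping over the bits of wider numbers - is exactly the glue that the 'shift-right', 'store', 'literal' and 'jump if not carry' instructions would provide.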
Proposed Additions
To make a good article better, maybe the following additions will be useful: Connection 12:11, 7 March 2006 (UTC)
- In a Conceptual Background section, a word about the Von Neumann architecture, and a link thereof. This Conceptual Background may also solve the Atanasoff-Berry Computer issue. :) Connection 12:11, 7 March 2006 (UTC)
- A State-of-the-Market section. It discusses processor families: especially the 8088 and its developments, and design solutions (more circuits per space vs. architecture changes). Or a link to Notable CPU architectures (and adding this section there). Connection 12:11, 7 March 2006 (UTC)
- Erm... Well, the history section already does briefly explain the significance of the stored program computer and Von Neumann's ideas and designs. I'm not sure what else you are suggesting we add. I don't really think a state of the market section is necessary for several reasons:
- It's very difficult to create such a thing and make it sufficiently terse without someone screaming POV.
- The Notable CPU architectures page was largely created to avoid this problem, and is currently linked in See also.
- I strongly believe this article should stick to the history, evolution, and fundamental operation of CPUs as functional devices. It should avoid elaborating on specific architectures unless their mention is useful to illustrate a certain concept. Architecture history is sufficiently covered in the specific articles on those architectures.
- Anyway, let me know what you think, I'd like some clarification on your first point. -- uberpenguin 15:32, 7 March 2006 (UTC)
- --
- CPU architecture is a unique development to which many people have contributed, expressly or otherwise! This aspect I want to be stressed as a design "lesson". In the History section, the focus has been on implementation. I want a stress on architecture, and how it came along - what each person has contributed. Von Neumann is not mentioned at all. All this needs to be presented in context. Also, what contributions didn't "show" in the main development path; here come other non-Von Neumann architectures. All this can only be links to other wiki articles.
- On the other point, what prompted my suggestion is that I didn't see the 8088, 386, 486, etc., in Notable CPU architectures. However, you are right; they should be covered there. My other point should be placed there. ;) --Connection 09:32, 8 March 2006 (UTC)
-
- The 8088, 80386, 80486 aren't instruction set architectures, they're microprocessors that implement instruction set architectures. The instruction set architecture they implement could be called x86 or, in the case of the 80386 and later processors, IA-32; they are mentioned on the Notable CPU architectures page. Guy Harris 09:51, 8 March 2006 (UTC)
- Mea culpa. I didn't see the IA-32 or x86 links, as I was searching for 8088, 80386, 80486! ... Who said they are instruction set architectures? --Connection 11:28, 8 March 2006 (UTC)
- The line between CPU architecture and implementation did not really exist until the S/360 (the article notes this). Up until that point, most significant traits you could enumerate about CPU "architecture" were merely implementation details. Therefore, a discussion of early computers is necessarily mostly about implementation. The history section is fairly short because this article cannot cover a lot of the ground covered by history of computing hardware; we're only really concerned with the development of stored program computers (Von Neumann IS mentioned, re-read the history section... I was merely hesitant to overtly label him "the inventor of the stored program architecture" because that's not wholly true.).
- I'm still not entirely sure what you want to include, but you're welcome to go ahead and write it here or in the article so we can see what you had in mind. Just try not to cover a lot of ground already covered by articles like CPU design (implementation) and Instruction set (ISA). -- uberpenguin 14:21, 8 March 2006 (UTC)
- What I have in mind is a set of minor touches to connect things together. I will add them directly in the future. Over and out. --Connection 21:22, 8 March 2006 (UTC)
-
"integer precision"?
Surely "integer range" is meant. Where would I find any variation in precision of integer units between processors? --ToobMug 08:51, 31 March 2006 (UTC)
- Heheheh! Yeah - you're 100% correct. A lot of people misuse the term. SteveBaker 13:09, 31 March 2006 (UTC)
- All better now! SteveBaker 13:15, 31 March 2006 (UTC)
- Precision is the correct term to use here. Dictionary definitions:
- 'the accuracy (as in binary or decimal places) with which a number can be represented usually expressed in terms of the number of computer words available for representation'.
- 'The number of decimal places to which a number is computed.'
- Precision isn't being used in the strictly scientific sense here, but it is common to see the term used in relation to digital microelectronics. -- uberpenguin @ 2006-03-31 13:54Z
- Hmmm - maybe there is a rift in hacker culture here. I've been in the business 30 years and I wouldn't think of talking about the precision of an integer. The number of 'decimal places' is the number of digits after the decimal point - and that's zero for an integer. The number of 'significant digits' however depends on the size or range of the storage allocated to the integer. If the usage elsewhere is different, I'm surprised - but I guess anything is possible. The trouble with using 'precision' when you mean 'range' is that 'precision' loses its meaning when applied to (for example) fixed point arithmetic. No matter what - I think the article should use terms that are (at worst) less ambiguous and (at best) not incorrect. 'Range' and 'Size' express the meaning perfectly well. Precision is certainly not acceptable to everyone. SteveBaker 17:17, 31 March 2006 (UTC)
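For what it's worth, the distinction is easy to show numerically. A trivial C snippet (nothing article-specific is assumed) illustrating that what grows with word size is the representable range, while an integer has zero fractional digits regardless:
 #include <stdio.h>
 #include <stdint.h>

 int main(void) {
     /* The range of an n-bit two's-complement integer is -2^(n-1) .. 2^(n-1) - 1;      */
     /* only the range changes with word size - the number of decimal places is always 0. */
     printf("16-bit: %d to %d\n", INT16_MIN, INT16_MAX);
     printf("32-bit: %ld to %ld\n", (long)INT32_MIN, (long)INT32_MAX);
     printf("64-bit: %lld to %lld\n", (long long)INT64_MIN, (long long)INT64_MAX);
     return 0;
 }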
- Personally I hate the imprecise (hehe) word 'size', but it's not a really big deal, so I'll leave the changes. -- uberpenguin @ 2006-04-01 00:44Z
-
CCIE
who has done CCIE here ?
--It fayyaz@hotmail.com 17:34, 12 April 2006 (UTC)