Talk:Graphics processing unit

From Wikipedia, the free encyclopedia

This is the talk page for discussing improvements to the Graphics processing unit article.

Introduction

The intro states "In more than 90% of desktop and notebook computers integrated GPUs are usually far less powerful than their add-in counterparts." This almost led me to believe that the other 10% of integrated GPUs are as powerful as, or more powerful than, their add-in counterparts. However, that is not what the external source says.

Quoting: "In fact... more than 90 percent of new desktop and notebook computers use integrated graphics."

Is my view valid?--64.230.4.64 (talk) 22:38, 10 February 2008 (UTC)

Untitled discussion

This page has been directly quoted in PCWorld Magazine in the October 2006 edition, in the article about GPUs. --Tiresais 08:22, 7 September 2006 (UTC)


NVIDIA Corporation coined the term around 1999 to describe its GeForce range of graphics chips, based on the abbreviation "CPU" for a computer's central processor. However, Sony may have used the term in 1994 to describe the graphics hardware inside its PlayStation game console.

Okay, so if Sony or nVidia coined the term "GPU", how does one explain this?
The information in question has been removed until verification. Thanks for pointing to this issue. Optim 15:19, 9 Jan 2004 (UTC)
Why has the comment that nVidia coined the term been restored? GPU was being used in the mid 1980s, and nVidia was founded in 1993 Crusadeonilliteracy 13:20, 21 Feb 2004 (UTC)
I've been wondering that for a while now. I'll try to fix it. -lee 22:53, 5 Jan 2005 (UTC)
I removed the claim once more. Please do not add it again, since (as demonstrated above) the term was in use LONG before NVidia was on the scene. -- uberpenguin 02:59, 18 December 2005 (UTC)
What? If it showed up again between January and December, I didn't put it there. Please get your attributions straight. -lee 22:43, 9 January 2006 (UTC)
I wasn't singling you out, nor was I blaming you for adding it. It's just a general warning to anybody involved not to re-add the text. -- uberpenguin 23:42, 9 January 2006 (UTC)

Untrue Claim

"Several (very expensive) graphics boards for PCs and computer workstations used digital signal processor chips (like TI's TMS340 series) to implement fast drawing functions,..." TMS340 never was a DSP, this was a VDP (from my knowledge term VDP is also missused to other chips than TM340 familly). TMS34010 and TMS34020 from programmer point of view was a normal CPU but this familly has additionaly special instructions for graphic operation and highly integrated graphics subsytem ie vertical and horizontal counters, with these counters and additional hardware this VDP was synchronized with electron beam (this claim is valid for CRT displaying technology). TMS340 can be used to build normal uComputer - they dont need for normal calculations ie not graphics any additional - universal CPU's. Also from my knowledge TI DSP (TMS320) was used as a part of the various graphics subsystems (for acceleration of the calculation).

I went ahead and fixed this (though it's been a while now). -lee 15:09, 27 March 2007 (UTC)

RAM

The article needs to make better sense of the whole RAM thing. It states that:

A GPU will typically have access to a limited amount of high-performance VRAM directly on the card, which offers much greater speed than dynamic RAM, though at much greater cost. For example, most modern cards have 256 MB of VRAM, with some having as much as 512 MB of VRAM, whereas the computer itself may have 1 GB or more of system memory.

But both RAM links redirect to Dynamic random access memory. And when I was looking at video cards, most of them listed their RAM as DDR - the same kind as the standard system memory. So is the video RAM really a different type of RAM, or is it just in a different place - on the video card as opposed to in the motherboard's main DIMM slots?

Well, today most cards use GDDR3, which is definitely not in use on motherboards. But some cheaper cards use DDR2, or even regular DDR SDRAM. And before that, cards used SDRAM, EDO DRAM, and FPM DRAM. So yes, the RAM chips are often exactly the same as those used for the system; the implementation of them is just different. Graphics boards can run their memory faster because it is easier to make the bus faster when it is short and the RAM is soldered directly onto the board instead of sitting in sockets - and, of course, because graphics boards often use faster-specced RAM chips. --Swaaye 04:05, 30 March 2006 (UTC)
Eh... No, the real utility of having onboard RAM is that it is dedicated, not that its implementation is somehow faster than that of primary storage. Graphics card memory (which is more or less used for framebuffer and texture memory) is directly addressable by a GPU and therefore isn't subject to all the delays involved in having to utilize the CPU (or DMA of some sort) for main memory access. The physical parameters of the graphics expansion board have pretty much no bearing on making the RAM somehow "faster" (at least in the context of this discussion). Rather, the onboard memory is dedicated for usage by the GPU and therefore invokes much less latency than using computer primary storage would. I'll improve this article's treatment of on-card memory a bit... -- uberpenguin @ 2006-03-30 04:40Z
This isn't entirely correct. The fact that it is local memory is the key factor in lower-end cards, where memory bandwidth isn't a limiting factor. However, in high-end cards, speed is absolutely the most important feature of video card RAM. Take the X1950 XTX, for example: its memory is clocked at 1 GHz (2 GHz effective DDR), far faster than any system memory to date. In short, video cards use a local batch of fast, dedicated memory to perform memory-intensive functions. The faster the memory, the more performance you get. -- michaelothomas
I basically removed the offending text and cleaned up the paragraph a tad. This article doesn't need to explain the design motives regarding expansion cards, so I didn't really add any more content. -- uberpenguin @ 2006-03-30 04:54Z

It is also true that the width of the memory interface makes a difference in performance. The higher the number, the better. Many mid-range cards these days use a 128-bit interface, while more powerful or specially enhanced editions of cards usually have 256 bits or more. Alexander 04:02, 11 August 2007 (UTC)
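To make the clock-and-bus-width point concrete, here is a minimal sketch of the usual theoretical peak-bandwidth arithmetic (effective clock multiplied by the bus width in bytes). The X1950 XTX figures are the ones quoted in the comments above, not taken from a datasheet, and real-world throughput is always lower than this theoretical peak.

  # Minimal sketch: theoretical peak memory bandwidth from the figures above.
  def peak_bandwidth_gb_per_s(effective_clock_mhz, bus_width_bits):
      """Peak bandwidth = transfers per second * bytes moved per transfer."""
      bytes_per_transfer = bus_width_bits / 8
      return effective_clock_mhz * 1e6 * bytes_per_transfer / 1e9

  # X1950 XTX as described above: 1 GHz memory clock, double data rate
  # (2000 MHz effective), 256-bit interface.
  print(peak_bandwidth_gb_per_s(2000, 256))  # 64.0 GB/s

  # A hypothetical mid-range card at the same clock with a 128-bit interface.
  print(peak_bandwidth_gb_per_s(2000, 128))  # 32.0 GB/s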

Transistors

I think that GPU sizes are twice those of current-generation CPUs. Is this true? Wizzy 08:17, 10 August 2006 (UTC)
Yeah, I got 2 nVidia 8800 GTXs and they're 11" long, 2" deep and 4" wide. These are the first DirectX 10 compatible cards on the market; they have 768 MB of GDDR3. These things are ginormous.

The article should be split or moved

1. No one ever used the term "GPU" until nVidia invented it for the GeForce 256.
2. The correct name has always been "video card", and it continues to be used. --Dmitry (talk • contribs) 09:09, 17 September 2006 (UTC)

Oppose 1: See the first topic of this page.
2: There's no such thing as a correct term. NVIDIA might have coined it, but if you're saying that it is not in common usage, you must still be living under a rock. rohith 19:44, 17 November 2006 (UTC)
Still, though, the scope here does seem pretty narrow. As noted further down, people working in academic and industrial applications had GPUs and frame buffers long before Atari, Apple, IBM and Commodore made them common on PCs, and long before Nvidia started applying the term to the GeForce. I'm wondering if we should indeed merge the PC-specific items into the video card article (or perhaps another article like History of video hardware on personal computers) and leave this one for a general treatment of the subject (covering things GPUs do in general, like BitBLT, Bézier curves, etc), and some of the history I didn't know about when I first rewrote this. -lee 15:22, 27 March 2007 (UTC)

Which GPU has a 3D accelerator on it?

Can someone tell me which one it is exactly? I'm confused. KanuT 03:03, 3 January 2007 (UTC)

Hello! Can anyone tell me which GPU has a 3D accelerator on it? KanuT 21:17, 14 January 2007 (UTC)

These days, they all do. Even really old cards like the ATI Rage family have at least some 3D capability. -lee 15:23, 27 March 2007 (UTC)

History re-ordered

I think the history section needs to be re-arranged. For example, the following bit:

In the late 1980s and early 1990s, high-speed, general-purpose microprocessors became popular for implementing high-end GPUs. Several high-end graphics boards for PCs and computer workstations used TI's TMS340 series (a 32-bit CPU optimized for graphics applications, with a frame buffer controller on-chip) to implement fast drawing functions; these were especially popular for CAD applications. Also, many laser printers from Apple shipped with a PostScript raster image processor (a special case of a GPU) running on a Motorola 68000-series CPU, or a faster RISC CPU like the AMD 29000 or Intel i960. A few very specialised applications used digital signal processors for 3D support, such as Atari Games' Hard Drivin' and Race Drivin' games.

is under the "1970's" heading, despite clearly not being relevant to the 70's. I'd trivially cut&paste it into the 80's section, but I wonder if maybe it should be split up between the 80s and 90s? Emteeoh 20:49, 12 January 2007 (UTC)

Will do. I don't know how that got split up like that. -lee 15:00, 27 March 2007 (UTC)

I have a PC, and I don't know how to check what graphics card I have... could someone tell me how to figure it out, like through Control Panel or something? Thanks.
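One way to check, assuming a reasonably recent Windows PC: open Device Manager (or run dxdiag) and look under "Display adapters". If you prefer a script, the snippet below is only a sketch that queries the stock WMIC command-line tool; it assumes Windows, and on Linux "lspci | grep -i vga" gives comparable output.

  # Sketch: list the installed graphics adapter(s) by querying WMI through
  # the stock wmic command-line tool (Windows only).
  import subprocess

  result = subprocess.run(
      ["wmic", "path", "win32_VideoController", "get", "name"],
      capture_output=True, text=True,
  )
  print(result.stdout.strip())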

Microsoft bias in the history section?

The history section of this article seems to concentrate overmuch on the impact of DirectX/Direct3D and its predecessors, only mentioning the older and far more developed standard OpenGL later on, and neglecting any mention of 3Dfx's Glide API or MiniGL. While DirectX did bring many of the mentioned features into general use among Windows users, they were generally not the first to bring them to market, or even into popular use. GreenReaper 04:22, 14 January 2007 (UTC)

Which do you think would come first and dominate an article on war; WWII or the secret 100-year burger war of 1732? The fact of the matter is, DirectX dominates the market, is far more widely known, and is far more "important" (though not better) than OpenGL. We have always had the policy of not letting personal views get in the way of obvious topic domination. --Jimmi Hugh 03:35, 13 April 2007 (UTC)
DirectX is about as close to winning the war as the US is to winning the war in Iraq; while Glide may have died out, OpenGL is still in strong industry use, and it is more than worth mentioning the hundreds of graphics APIs that came before DirectX. So, while your personal view may be that DirectX has won, direct evidence to the contrary proves otherwise. 76.89.26.83 11:16, 28 October 2007 (UTC)
The wonderful thing about history is that it's large and expansive. You can present just the DirectX side of it, but that's hardly comprehensive. There is no reason to focus only on DirectX, and there is no reason to give non-Microsoft APIs credit where it's not due, but pretending that they don't exist helps no one. —Preceding unsigned comment added by Gordmoo (talkcontribs) 04:20, 11 October 2007 (UTC)

Rename or broaden

The above discussion notwithstanding, this strikes me as a reasonably competent version of what it is. However, it is NOT an article about GPUs in general; it is an article about the history of GPUs in PC applications, with an emphasis on MS Windows.

There is little attention to the basic theory of operation or the general evolution of this type of processor. The word "vector" appears only twice. Scalability issues are not addressed. Pioneers in vector processing and computer imaging with little PC involvement, like Cray, TI, or SGI, are not mentioned at all.

No sense criticizing an apple for not tasting like a pear, but we shouldn't call it a pear either. I suggest that either the scope of the article be broadened, or that the title be changed to better reflect the content and a separate, more general GPU article (linked to this one) be started.

- ef

I have to agree here. I wrote most of the beginning a while back; it was even worse then (apparently someone thought NVIDIA had invented the term!). Unfortunately, most of my knowledge of graphics systems comes from the PC and consumer electronics world; I'm familiar with some of the high-end stuff by name (E&S should be on that list too), but I don't really know enough to flesh this out. In any case, if you know more than I do, feel free to edit the article and add what you know. Thanks. -lee 14:59, 27 March 2007 (UTC)

90% ??

Working in the industry, I have to seriously question the figure that 90% of desktops have integrated video on the motherboard. There's also the issue of referencing Intel-based systems, but not AMD-based systems.

69.95.74.113 20:13, 23 June 2007 (UTC)

It's 90% of desktops and notebooks. Not many notebooks use a separate video card. And the figure is from a cited source. If you know of a better source, please add it. --Harumphy 21:10, 23 June 2007 (UTC)

Actually, anything other than a very low-end, business-only laptop these days has a mobile version of a desktop graphics card (or onboard graphics that are still superior to Intel chipsets, like ATI's onboard graphics or the Nvidia 6100). Also, outside of business- or entry-level-oriented machines, actual graphics cards are featured. There is no way it is as high as 90% unless you count non-PC computers. Alexander 04:07, 11 August 2007 (UTC)

Opinions and anecdotes are of no help. We need facts from cited sources. Harumphy 12:32, 11 August 2007 (UTC)

Weird formulation?

"A GPU can sit on top of a video card, or it can be integrated directly into the motherboard in more than 90% of desktop and notebook computers"

Should not this be something like:

"A GPU can sit on top of a video card, or it can be integrated directly into the motherboard, as it is in more than 90% of desktop and notebook computers" —Preceding unsigned comment added by LarsPM (talk • contribs) 09:31, 4 September 2007 (UTC)

Proposed merge: Graphics hardware and FOSS

  • oppose - not the same topic. --Gronky 11:16, 23 October 2007 (UTC)
  • oppose - two very different topics. --Harumphy 11:34, 23 October 2007 (UTC)
  • Support. Free software drivers are only notable in an encyclopedic sense in the context of discussing either free software or GPUs. The current standalone article reads like an essay because it's basically a floating discussion. As this article currently doesn't discuss graphics card drivers at all, and such drivers are an important aspect of modern GPU considerations, I think this is an appropriate merge target. Chris Cunningham 11:40, 23 October 2007 (UTC)
  • Oppose - This is its own free-standing topic, and I believe a quite notable one. It seems to be gaining in notability, so if this does get merged it will probably separate out as its own article again in a little while; although I believe it's already sufficiently notable for that. --Daniel11 08:00, 26 October 2007 (UTC)
  • Oppose - See above. --Alexc3 (talk) 00:52, 6 December 2007 (UTC)
  • Oppose - FOSS is not an important aspect of GPUs for most people. It's a narrow issue that doesn't belong in a general discussion of graphics cards. Since everyone except one person has opposed this over the past three months, I'm removing the merge proposal. 72.229.28.14 (talk) 20:11, 1 February 2008 (UTC)

Requested move

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was no consensus to support move at this time. JPG-GR (talk) 17:30, 29 May 2008 (UTC)

This is to address the question of "GPU" (Graphics processing unit) being used as a title when it is not the most common name (WP:COMMONNAME), and to settle the charge that it is not a vendor-neutral term. Regardless of the particular etymology of "GPU", the fact remains that, to my knowledge, neither 3dfx, ATI, nor Intel has ever marketed its graphics products as GPUs. ATI calls them VPUs, which is really just an attempt at counter-branding, obviously derived from NVIDIA's use of GPU. 3dfx and Intel have both relied on generic phrases such as graphics "processor", "accelerator", "chip", "chipset", etc. And these terms are still widespread, used both by Intel and in colloquial usage. I don't see any reason why the wiki shouldn't use similar terminology. Google results:

  1. "graphics processor" -wikipedia 3.2m
  2. "video chipset" -wikipedia 1.3m
  3. gpu OR "graphics processing unit" -wikipedia 1.25m
  4. "gpu" -wikipedia 1.2m, but many hits aren't about computer hardware
  5. "video processor" -wikipedia 1.2m
  6. "graphics chip" -wikipedia 1m
  7. "graphics accelerator" -wikipedia 880k
  8. "graphics processing unit" -wikipedia 313k

As you can see, even if we were to be extremely generous by A) allowing the acronym "GPU" to stand in for the spelled-out phrase, B) attributing every ghit to this context, and C) combining hits from both "GPU" and "graphics processing unit" (although many would be redundant), the phrase "graphics processor" is still more common twice over. If we discount the non-relevant hits for GPU, we would likely end up with "graphics chip" still being more common. Thus "GPU" and its lesser equivalent "graphics processing unit" together are riding third in common usage. I will also note that "processor" and "chip" both refer to the same scope of hardware as "GPU", and don't introduce the implication that "video card" does. Given that "graphics processor" is the WP:COMMONNAME and that GPU is brand-specific terminology (in spite of the popularity of the brand), the best title for the article would be Graphics processor, unless someone can find one that is even more common. Ham Pastrami (talk) 04:17, 19 May 2008 (UTC)

For some reason, I get 20,500,000 hits for GPU on Google, and 1,750,000 for "graphics processor" in quotes. I don't think it's a proprietary term at all, just like CPU isn't. --Wirbelwindヴィルヴェルヴィント (talk) 06:33, 19 May 2008 (UTC)
Addendum: Searching google for "gpu AND graphics OR video OR processor OR processing OR unit OR video" yields 696,000 results. --Wirbelwindヴィルヴェルヴィント (talk) 06:39, 19 May 2008 (UTC)
Because you're not including the -wikipedia part. That filters out links that are from, or likely related to, the usage on Wikipedia, so that the incumbent phrase (such as the one currently in use in this and related articles) does not influence the count. It is a bit curious that allowing Wikipedia links would cause the number to shrink, though; maybe someone who knows a little more about how Google runs its searches can answer that. I don't think CPU is a proprietary term either, but that's not what we're discussing (simply "processor" might be more common, but it is ambiguous, and ambiguity is not a problem in this case). CPU is also not a valid comparison, for the reason shown below. Ham Pastrami (talk) 09:15, 19 May 2008 (UTC)
  • Comment: the COMMON NAME is GPU. And VPU was originated by 3DLabs for their WildCat line of professional graphics boards, and later co-opted by both nVidia and ATI, though no one else calls them VPUs. And neither Intel, nVidia, ATI, nor Matrox originated the term GPU; IIRC, NEC was the first to use it, for a 16-bit ISA pro-graphics board that required you to solder it into your motherboard. Aside from that, GPU is regular computer jargon, like signal processing unit or central processing unit. A graphics processor could be a box the size of a PC, or a VAX. 70.55.86.17 (talk) 08:24, 19 May 2008 (UTC)

Is GPU regular computer jargon "just like" CPU and SPU? Let's find out:

  • "cpu" -wikipedia 16m (just dandy)
  • "central processing unit" -wikipedia 2.2m (ok)
  • "signal processor" -wikipedia 2.36m (ok)
  • "signal processing unit" -wikipedia 327k (oops!)
  • "spu" -wikipedia (8 out of top 10 hits unrelated to circuitry)

SPU is ambiguous, and "signal processing unit" receives far fewer hits than "signal processor". Where are these articles located? Central processing unit. Signal processor. OK, that's perfectly in line with the demonstrated common names. So now we have further evidence that "GPU" and other derivative acronyms are not comparable to "CPU" in terms of commonality, and are inconsistent with Wikipedia naming conventions. Ham Pastrami (talk) 09:15, 19 May 2008 (UTC)
Regarding "graphics processor" potentially being some kind of unholy box: that title already redirects to this article. If there were a practical cause for concern about ambiguity, that would not be the case. Can you at least give an example of a "graphics processor" of this size? Also, where does it say that a GPU can never be the size of a box? Video cards are getting to that size already, so it's not completely out of the question for the core or chipset to someday reach this size. In any case this seems like a red herring to me. Ham Pastrami (talk) 09:19, 19 May 2008 (UTC)

(I'm an administrator who came here via WP:RM.) I don't think you've demonstrated sufficiently what the common name for this thing is, and I don't think there's consensus to move. Have you looked at books, journal articles, and other scholarly works to see what they call this thing? enochlau (talk) 09:22, 29 May 2008 (UTC)

The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.