Talk:Comparison of Nvidia graphics processing units


This article may be too technical for a general audience.
Please help improve this article by providing more context and better explanations of technical details to make it more accessible, without removing technical details.

Why is this article considered 'too technical' and yet the ATi equivalent article, Comparison of ATI Graphics Processing Units, is not? Also, the 7900GX2 is of course 2 GPUs on one board; in this light, shouldn't the TOTAL MT/s and pipes x TMUs x VPUs be stated, rather than the specs of half the card?

A quick look at the article and it didn't seem that bad as far as tech speak goes; after all, this is a comparison of GPUs. Either you keep it as it is and maybe add a brief explanation of terms, or you dumb it down to a 'this is faster than that and that is faster than this' article. --AresAndEnyo 21:48, 21 December 2006 (UTC)


512MB GeForce 6800 (AGP8X)

Why is this version of the 6800 not listed here? My card, as listed in nVidia's nTune utility, is a standard GeForce 6800 chip with 512MB of memory, with clock speeds of 370MHz core and 650MHz memory. These were the factory clock speeds I received the card with; it was purchased from ASUS. --IndigoAK200 07:34, 27 November 2006 (UTC)

This seems like a comparison of graphics cards, not of GPU chips... and in that vein, why is there no mention of nVidia's workstation products (Quadros)?--RageX 09:00, 22 March 2006 (UTC)


This article especially needs an explanation of the table headers (e.g. what is fillrate? What is MT/s?) ··gracefool | 23:56, 1 January 2006 (UTC)

While I agree that an explanation would be nice, I have to ask why such a page is needed. It seems to have unexplained inaccuracies, or at the very least questionable info. As cards are released, it will need constant maintenance. Not only that, but 3rd party manufacturers often change specs, so while a certain nVidia card might have these specs, a card you buy might not. I'm certainly willing to clean up this page, but I want some input on how valuable it is to even have it in the first place before I go to the trouble.--Crypticgeek 01:45, 2 January 2006 (UTC)
It's a handy reference. If you can find another one on the 'net (I'm sure there's a good, accurate one somewhere) we could think about replacing this with a link to it. Note that it is a comparison of GPUs, not cards, so 3rd party manufacturers don't matter. New GPUs aren't released that often. ··gracefool | 22:39, 11 January 2006 (UTC)

NVIDIA's website indicates that the 7300GS has a 400 MHz RAMDAC. Is there a reason that everyone is changing that to 550 MHz? Where did you acquire that information? --bourgeoisdude

See RAMDAC for an explanation. RAMDAC frequency determines the maximum possible resolution and/or refresh rate. ONjA 16:52, 24 January 2006 (UTC)

The fabrication process (gate length) should be listed in nm instead of μm; the fractional values are quite cumbersome. Besides, the industry now uses nm more commonly than μm, now that processing units manufactured on a 45 nm process are being announced.


Question:

Why is it that a 6600 graphics card is considered better than a 5900 graphics card? I understand that it has enhanced instruction sets and a lot of new features, but in terms of raw processing power and RAM speed the 6600 seems slower than the 5900 graphics card.

The 6600 can process twice as many pixels per clock as the 5900, thanks to 8 pipelines compared to 4. Also, the FX family is known to be very slow when processing Pixel Shader 2.0, which now makes the greatest impact in modern games. The 6600 does more work per MHz. ONjA 14:53, 4 May 2006 (UTC)
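A rough way to see this comparison is to multiply pixels per clock by core clock. This is only a sketch; the pipeline counts come from the comment above, but the clock speeds below are assumed typical values for these cards, not figures taken from this page.

```python
def pixel_throughput_mp_s(core_clock_mhz: float, pixels_per_clock: int) -> float:
    """Peak pixel throughput in megapixels per second (clock in MHz)."""
    return core_clock_mhz * pixels_per_clock

# Pipeline counts per the comment above; clock speeds are assumptions.
fx5900 = pixel_throughput_mp_s(450, 4)   # assumed ~450 MHz core, 4 pipelines
gf6600 = pixel_throughput_mp_s(300, 8)   # assumed ~300 MHz core, 8 pipelines

# Even at a lower clock, the 6600's extra pipelines give it the higher peak rate.
assert gf6600 > fx5900
```

This also illustrates why raw clock speed alone is a misleading comparison between architectures.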

The bus column does not list PCI for many of the cards in the FX family and for the GeForce 6200. I suspect the PCI bus has been similarly omitted for some of the MX family. I will add PCI as one of the bus options for the 6200 and 5500, as I am sure these two cards support PCI. —The preceding unsigned comment was added by Coolbho3000 (talkcontribs) 22:45, 10 May 2006 (UTC)

I have made the 6200 PCI a separate row because of its differences from the other 6200 versions (it boasts an NV44 core, not NV44a, yet doesn't support TurboCache). I have named this section the 6200 PCI. Please correct me if you think this isn't suitable. —The preceding unsigned comment was added by Coolbho3000 (talkcontribs) 22:52, 10 May 2006 (UTC)


Doom 3

Shouldn't there be a last column called DOOM 3, with values such as:

  • NO - can't run the game
  • NPL - runs in low mode, but not playable
  • PL - playable in low mode (at least some 20 FPS)
  • L - runs low mode just fine (some 50 FPS maybe)
  • M - playable in medium mode
  • H - capable of high quality mode
  • H+ - every feature turned on and no FPS drop yet

I think that at this point, Doom 3 is the most reliable benchmark to create some notion of performance. Another option, free, would be to make a complex Blender scene, but I think that games are the only reason people look at this page.

Picking any independent software title shows favoritism, which is avoided if we stick to internal Nvidia specifications. Shawnc 02:05, 15 June 2006 (UTC)
That's outrageous. In that case shouldn't we have F.E.A.R. and Half-Life 2 and Quake 4 alike? rohith 07:14, 1 November 2006 (UTC)

OpenGL?

Wouldn't it be apropos to include a column for the highest version of OpenGL supported? Not all of us use Windows. :) OmnipotentEntity 21:53, 22 June 2006 (UTC)

Memory bandwidth

Bandwidth is calculated incorrectly. I've changed it to use GB/s, where 1 GB/s = 10^9 bytes/second. To properly calculate bandwidth in GiB/s, it's (bus width in bits * effective memory clock) / 8 (bits/byte) / 1073741824 (bytes/GiB).
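A sketch of the two unit conventions described above, using a 256-bit bus at a 700 MHz effective memory clock as the worked example (figures that also appear in the spec excerpt later on this page):

```python
def memory_bandwidth(bus_width_bits: int, effective_clock_mhz: float):
    """Return peak memory bandwidth as (GB/s, GiB/s).

    GB/s uses decimal gigabytes (10^9 bytes); GiB/s uses binary
    gibibytes (2^30 = 1,073,741,824 bytes).
    """
    bytes_per_second = bus_width_bits / 8 * effective_clock_mhz * 1_000_000
    return bytes_per_second / 10**9, bytes_per_second / 2**30

gb_s, gib_s = memory_bandwidth(256, 700)
# 256 bits / 8 = 32 bytes per transfer, times 700e6 transfers/s = 22.4e9 B/s
assert round(gb_s, 1) == 22.4
assert gib_s < gb_s  # the GiB/s figure is always the smaller number
```

The same peak rate is about 22.4 GB/s but only about 20.9 GiB/s, which is exactly the discrepancy the comment is pointing at.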

Vanta LT

I added some information about this card using lspci and nvclock -i. There's some conflict between the outputs:

lspci: VGA compatible controller: nVidia Corporation NV6 [Vanta/Vanta LT] (Rev 15)

nvclock -i:

Card: nVidia Riva TnT 2 VANTA
Architecture: NV10

The chipset string says this card is a Vanta LT. Can anyone check what the heck this card is?

NV17, NV34, NV31 and NV36

The GeForce4 MX does not have a VPU of any kind. Nvidia's drivers allow certain vertex programs to use the NSR that has been around since the NV11 days, but only if the (very simple) vertex program can be run on the GPU; otherwise it is done by the CPU. http://www.beyond3d.com/forum/showthread.php?t=142

The GeForce FX 5200 is a 4 pixel unit / 1 texture unit design, as stated here http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=11&orderby=release_date&order=Order&cname= and here http://www.techreport.com/etc/2003q1/nv31-34pre/index.x?pg=2

Updated the note to reflect that NV31, NV34 and NV36 all have only 2 FPU32 units, as described here http://www.beyond3d.com/forum/showthread.php?p=512287#post512287

CPUs

What happened to the pages Comparison of Intel Central Processing Units and Comparison of AMD Central Processing Units? I can't believe that I have to use answers.com instead of WP.

See List of Intel microprocessors and List of AMD microprocessors. But you're right that your two links should also work. - Frankie

8xxx series

Should we add the 8 series samples to the list?

Not right now. We can add them after they have been confirmed officially (that is, after they have been released...) rohith 20:24, 13 October 2006 (UTC)

Transistor count

I wish to see transistor counts added to the table. Shawnc 00:25, 9 November 2006 (UTC)

DirectX and NV2x

DirectX 8.0 introduced PS 1.1 and VS 1.1. DirectX 8.1 introduced PS 1.2, 1.3 and 1.4.
source: shaderx,
http://www.beyond3d.com/forum/showthread.php?t=5351
http://www.beyond3d.com/forum/showthread.php?t=12079
http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXG81.ppt

Thus the NV20 was DirectX 8.0, but the NV25 and NV28 supported the added capability of PS 1.2 and 1.3 as introduced in 8.1.
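The mapping described above can be summarized as a small lookup table. This is only a sketch; the chip names and shader versions come from this thread, nothing beyond it:

```python
# Highest pixel-shader version per chip, per the discussion above.
PIXEL_SHADER_BY_CHIP = {
    "NV20": "1.1",  # DirectX 8.0 baseline (PS 1.1 / VS 1.1)
    "NV25": "1.3",  # DirectX 8.1 added PS 1.2 and 1.3
    "NV28": "1.3",
}

def max_pixel_shader(chip: str) -> str:
    """Look up the highest supported pixel-shader version for a chip."""
    return PIXEL_SHADER_BY_CHIP.get(chip, "unknown")

assert max_pixel_shader("NV20") == "1.1"
assert max_pixel_shader("NV28") == "1.3"
```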

VPUs

I've listed any card with a T&L unit as having 0.5 VPUs since it can do vertex processing, but it is not programmable. This also allows better compatibility with Radeon comparisons.

Sheet Change

The sheets are too tall to see the explanation columns and the card specs at the same time; if I want to compare, I need to scroll back and forth. Could someone edit the tables to have the column explanations at both the top and the bottom?

Fillrate max (MT/s) for 8800GTS is incorrect

The fillrate listed for each graphics card on both the Comparison of ATI and Comparison of NVIDIA GPU pages is based on "core speed * number of pixel shaders" for discrete shaders, or "core speed * number of unified shaders / 2" for unified shaders.

The fillrate listed would be correct only if the 8800GTS had 128 unified shaders (500 * 128/2 = 32,000) instead of 96. The correct fillrate should be 24,000 (500 * 96/2 = 24,000).

Should this be changed, or do we need a source explicitly stating 24,000 MT/s as the fillrate?

Nafhan 20:44, 24 January 2007 (UTC)


Found a page on the NVIDIA homepage listing 24000 MT/s as the fillrate for the 8800GTS, and made the update.

Nafhan 21:21, 26 January 2007 (UTC)

It's all wrong. Fillrate is the number of pixels that can be written to memory, i.e. core speed * number of ROPs; the 8800GTS would then have 500 * 20 = 10000 MP/s. To confirm, I ran a benchmark and got "Color Fill : 9716.525 M-Pixel/s".
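The disagreement in this thread comes down to two different formulas. A sketch of both, using only the 8800 GTS figures quoted above (500 MHz core, 96 unified shaders, 20 ROPs):

```python
def texture_fillrate_mt_s(core_mhz: float, unified_shaders: int) -> float:
    """Shader-based convention used by the comparison tables (unified shaders / 2)."""
    return core_mhz * unified_shaders / 2

def pixel_fillrate_mp_s(core_mhz: float, rops: int) -> float:
    """ROP-based convention: pixels actually written to memory per second."""
    return core_mhz * rops

assert texture_fillrate_mt_s(500, 96) == 24000   # matches the figure NVIDIA lists
assert pixel_fillrate_mp_s(500, 20) == 10000     # close to the ~9716 MP/s benchmark above
```

Both numbers are "correct" for their own definition; the dispute is really about which definition the table's Fillrate column should use.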

6700 XL

I propose we add the 6700 XL, as it is listed elsewhere on Wikipedia: http://en.wikipedia.org/wiki/GeForce_6_Series Agreed? - steve —The preceding unsigned comment was added by 86.29.54.61 (talk) 11:04, 10 February 2007 (UTC).

GeForce4 MX4000

The graphics library version for this card is listed as 9 in this entry, which is not true. It is not even fully 8.1; proof = http://translate.google.com/translate?hl=en&sl=zh-TW&u=http://zh.wikipedia.org/wiki/GeForce4&sa=X&oi=translate&resnum=3&ct=result&prev=/search%3Fq%3Dnvidia%2BNV18b%2Bengine%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DG —The preceding unsigned comment was added by Acetylcholine (talkcontribs) 18:22, 24 February 2007 (UTC).

PCX

The PCX 4300, PCX 5300, PCX 5750, PCX 5900, and PCX 5950 need to be added.

New Columns

Hi,

There are at least two very important values missing: the vertex throughput and the power consumption. The fillrate does not say much today; the overwhelming fillrate is mostly used for anti-aliasing, and in my opinion it is no criterion for buying a new GPU.

As for me, I want to compare my current hardware to cards that I might buy. Take this for example:

Model: Year; Code name; Fab (nm); Bus interface; Memory max (MiB); Core clock max (MHz); Memory clock max (MHz); Core config; Fillrate max (MT/s); Vertices max (MV/s); Power consumption est. (W); Bandwidth max (GB/s); Bus type; Bus width (bit); DirectX; OpenGL; Features

  • GeForce FX 5900 XT: Dec 2003; NV35; 130; AGP 8x; 256; 400; 700; 3:4:8:8; 3200; less than 356, more than 68, maybe 316; ?; 22.4; DDR; 256; 9.0b; 1.5/2.0**
  • GeForce 7600 GT: Mar 2006; G73; 90; PCIe x16, AGP 8x; 256; 560; 1400; 5:12:12:8; 6720; 700; ?; 22.4; GDDR3; 128; 9.0c; 2.0; Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, Dual Link DVI
  • GeForce 7900 GS: May 2006 (OEM only), Sept 2006 (Retail); G71; 90; PCIe x16; 256; 450; 1320; 7:20:20:16; 9000; 822.5; ?; 42.2; GDDR3; 256; 9.0c; 2.0; Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
  • GeForce 7900 GT: Mar 2006; G71; 90; PCIe x16; 256; 450; 1320; 8:24:24:16; 10800; 940; ?; 42.2; GDDR3; 256; 9.0c; 2.0; Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
  • GeForce 7950 GT: Sept 2006; G71; 90; PCIe x16; 256, 512; 550; 1400; 8:24:24:16; 13200; 1100; ?; 44.8; GDDR3; 256; 9.0c; 2.0; Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, HDCP, 2x Dual Link DVI

You can find the estimated power consumption at http://geizhals.at/deutschland/?cat=gra16_256, but I believe it is not allowed to take it from there...

Does anyone know where to get real tech specs from nvidia?

JPT 10:02, 2 March 2007 (UTC)

Incorrect 6600 GT ?!?

I have a 6600 GT, and as Google can tell you, this card has 8 pipelines, not 4!

PlayStation 3 GPU

Should this sheet list the PS3's RSX chip somewhere? It is pretty much a slightly modified G70... 68.228.65.16 23:40, 19 March 2007 (UTC)

Different Versions?

There are models that have additional suffixes (e.g. 7600 GS KO); should we add entries for these cards, or explain what the suffixes mean on this page? Otherwise this is a fantastic reference page. Thanks everyone!

        66.194.187.140 18:53, 1 April 2007 (UTC)Scott

Please add 7200GS (G78)

Hello! Please, dudes, add information (a new row in the table?) about the 7200 GS (G78, 80 nm). This is quite important, I think. For example, this one from Sparkle. I have also heard about an 80 nm 7600, but I am not sure. Please excuse me, I'm a noob; I can't edit it myself, it's complicated and I could do something wrong. Thanks in advance. —The preceding unsigned comment was added by 89.189.19.32 (talk) 17:43, 30 April 2007 (UTC).

Layout

I've changed the layout back to how it was a week or so ago, keeping the desktop graphics cards together and the laptop cards together - it is far easier to compare cards this way, as the Go series is not really comparable to the desktop range anyway. Also, what is the difference between the 7950GX2 and the 7900GX2? They use the same core running at the same clock speeds; in fact, the only difference apparent from this article is the date of release, and since the earlier one was OEM-only, it implies that they are the same card! Yazza 18:26, 21 May 2007 (UTC)

The 7900GX2 and 7950GX2 appear to basically be the same thing. As stated in the table, one was only available as part of an OEM system while the other was retail. Here is an article that talks about both of them: [1] VectorD 09:01, 22 May 2007 (UTC)

DirectX

DirectX 8.1 introduced features supported by NV25/NV28 in the form of Pixel Shader 1.3 (plus VS 1.1 from DirectX 8.0). DirectX 9.0 contained support for the extended Shader Model 2 supported by NV3x (the HLSL targets ps_2_a and vs_2_a). The DirectX section and the relevant GPU sections have been modified.

Latest video card?

I would like to inquire about the latest video card. Why is the GeForce 8800 not listed yet? If I am not wrong, this card is already available in the USA. I got the information from the latest edition of PC Gamer, September 2007. --Siva1979Talk to me 08:45, 20 July 2007 (UTC)

Double-check this article; the 8800 series is indeed listed. Coldpower27 12:33, 20 July 2007 (UTC)
Oh yes! My mistake! --Siva1979Talk to me 08:28, 21 July 2007 (UTC)

7500 LE

Could an expert please add the 7500 LE; it's missing. Tempshill 16:41, 19 August 2007 (UTC)

NVIDIA GeForce 7500E is missing, also. I wonder if these two are similar enough to ignore the "LE" or "E". Brian Pearson (talk) 06:00, 11 June 2008 (UTC)

12 pixel per clock claim on Quadro FX

Recent NVIDIA Quadro FX datasheets boast a 12 pixels per clock rendering engine across the entire product range, even though many of these products do not have 12 pixel/vertex shaders or even 12 raster operator engines, and cannot generate 12 pixels per clock. Does anyone know what the statement really means? Jacob Poon 23:08, 20 September 2007 (UTC)

Error in Tesla table?

The Tesla table lists a "Pixel (MP/s)" in the Memory column. I think this is supposed to be "Bandwidth reference". Can anyone confirm and fix if necessary? Anibalmorales 20:24, 11 October 2007 (UTC)

Power

I think it would be good to add the TDP when that's known.-- Roc VallèsTalk|Hist - 17:11, 25 October 2007 (UTC)

Agreed, was just about to suggest the same thing actually!--81.215.13.145 (talk) 10:25, 11 January 2008 (UTC)

8300 GS?

Where is the GeForce 8300 GS? —Preceding unsigned comment added by 201.66.31.220 (talk) 07:05, 21 November 2007 (UTC)

9 series

On the subject of which version of DirectX this video card will use: people keep changing my edit of "10" to "10.1". From http://en.wikipedia.org/wiki/GeForce_9_Series , if you check source #1 of that page, it is an old article from DailyTech stating which version of DirectX the card will use; but if you check source #4, you'll see that the source DailyTech quoted, located at http://www.chilehardware.com/foro/informacion-exclusiva-sobre-t133896.html?p=1638246#post1638246 , actually stated that the card will use DirectX 10.0, not 10.1. Obviously DailyTech made a typo. To reinforce that the chip only supports DirectX 10, please check source #5 of http://en.wikipedia.org/wiki/GeForce_9_Series , which contains a full review of the card. I will change it back to "10" to reflect my findings. If there is any new information regarding the card, please change it to reflect that information, and please cite a source. Baboo (talk) 06:35, 27 January 2008 (UTC)

It seems the person who did the editing also changed OpenGL to version 3, which does not currently exist, with no source supporting the change. I reverted. Baboo (talk) 06:44, 27 January 2008 (UTC)

Isn't there supposed to be a 9800 GTS? —Preceding unsigned comment added by 71.104.60.85 (talk) 19:11, 11 February 2008 (UTC)

The 9600 GT is already launched. The 9800 GX2 will be launched in March followed by the 9800 GTX and the 9800 GT around the end of March and the beginning of April. The 9600 GS will come out in May. The 9500 GT will be launched in June while the 9500 GS will launch in July. I can't confirm the 9800 GTS... (Slyr)Bleach (talk) 01:56, 24 February 2008 (UTC)

9900 GTX

[2] Yeah, I'll modify the TMUs to 128, because of this:

Single chip with "dual G92b like" cores

9-series cards

Can someone tell me why my edits on the 9-series were removed? The 9600GSO has been out for a few days now (check the Nvidia site for specs), but when I added it, as well as details for the 9800GT (early specs are out for this card), they were edited out. I've put them back up now. Sure enough, the specifications on the 9900GTX/GTS are a little speculative, but the specs for the 9600GSO are rock solid; I just need to verify that it has 12 ROPs like the 8800GS. I put the 9800GT specs (early) up too; I don't know why no one added this card sooner. There's been discussion about, and specs for, the 9800GT for a while, though I've yet to see anything concrete about the 9800GTS. —Preceding unsigned comment added by 78.148.132.151 (talk) 09:51, 6 May 2008 (UTC)

Core Config

All ROP numbers where ROPs > pixel units are wrong; a card should not have more ROPs than pixel pipelines, because it can't write more pixels than it processes. Further, IIRC, the FX 5800 and FX 5900 can issue 8 pixel ops if no Z test is done. Finally, there needs to be consistency in differentiating cards with no vertex units at all from those that have a fixed-function vertex unit. Both are 0 right now, but that is a rather significant difference.
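A sanity check like the one suggested above could be automated over the table's core config strings. This sketch assumes the config is ordered vertex:pixel:texture:ROP, which is an assumption about the table's convention, not something stated on this page:

```python
def rop_count_plausible(config: str) -> bool:
    """Flag configs that list more ROPs than pixel units.

    Assumes the (hypothetical) column ordering vertex:pixel:texture:ROP.
    """
    vertex, pixel, texture, rops = (int(x) for x in config.split(":"))
    return rops <= pixel

assert rop_count_plausible("8:24:24:16")   # 16 ROPs vs 24 pixel units: fine
assert not rop_count_plausible("3:4:8:8")  # 8 ROPs vs 4 pixel units: suspect
```

The second example is the kind of entry this comment argues should be corrected.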

GTX series

Does anyone think that the GeForce GTX series should be split into its own section? Nvidia doesn't seem to be using the GeForce 9 series name for these chips and they are based on a different design than the GeForce 9/8 series(es?) are. (I hate trying to figure out the plural of series! :)) -- Imperator3733 (talk) 14:19, 23 May 2008 (UTC)

Even though I have nothing to back me up on this, I think it should be split (which I now see has happened in the last hour or two while I was out). Its title has no 9xxx in it at all. That being said, Tom's Hardware suggests that we shouldn't call this the "GTX series"; GTX, GT, GTS, etc. remain the same, just moved to the front. Perhaps "200 series" is more appropriate? Or, until we get confirmation on what to call it, maybe just stick with GT200 series/chips. BlueBird05 (talk) 02:26, 26 May 2008 (UTC)