Talk:Comparison of NVIDIA Graphics Processing Units


Why is this article considered 'too technical' and yet the ATi equivalent article, Comparison of ATI Graphics Processing Units, is not? Also, the 7900GX2 is of course 2 GPUs on one board; in this light, shouldn't it be the TOTAL MT/s and pipes × TMUs × VPUs that are stated, and not the specs of half the card?

A quick look at the article and it didn't seem that bad as far as tech speak goes; after all, you are talking about a comparison of GPUs. Either you keep it as it is and maybe add a brief explanation of terms, or you dumb it down to a 'this is faster than that and that is faster than this' article. --AresAndEnyo 21:48, 21 December 2006 (UTC)


[edit] 512MB GeForce 6800 (AGP8X)

Why is this version of the 6800 not listed here? My card, as listed in nVidia's nTune utility, is a standard GeForce 6800 chip with 512MB of memory and clock speeds of 370MHz core and 650MHz memory. These were the factory clock speeds I received the card with; it was purchased from ASUS. --IndigoAK200 07:34, 27 November 2006 (UTC)

This seems like a comparison of graphics cards not of GPU chips ... and in that vein, why is there no mention of nVidia's workstation products (Quadros)?--RageX 09:00, 22 March 2006 (UTC)


This article especially needs an explanation of the table headers (e.g. what is fillrate? What is MT/s?) ··gracefool | 23:56, 1 January 2006 (UTC)

While I agree that an explanation would be nice, I have to ask why such a page is needed. It seems to have unexplained inaccuracies, or at the very least questionable info. As cards are released, it will need constant maintenance. Not only that, but 3rd party manufacturers often change specs, so while a certain nVidia card might have these specs, a card you buy might not. I'm certainly willing to clean up this page, but I want some input on how valuable it is to even have it in the first place before I go to the trouble.--Crypticgeek 01:45, 2 January 2006 (UTC)
It's a handy reference. If you can find another one on the 'net (I'm sure there's a good, accurate one somewhere) we could think about replacing this with a link to it. Note that it is a comparison of GPUs, not cards, so 3rd party manufacturers don't matter. New GPUs aren't released that often. ··gracefool | 22:39, 11 January 2006 (UTC)

NVIDIA's website indicates that the 7300GS has a 400MHz RAMDAC. Is there a reason that everyone is changing that to 550MHz? Where did you acquire that information? --bourgeoisdude

See RAMDAC for explanation. RAMDAC frequency determines maximum possible resolution and/or refresh rate. ONjA 16:52, 24 January 2006 (UTC)
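
For anyone checking those numbers, here is a rough sketch of how a RAMDAC frequency bounds the display mode. The 1.32 blanking-overhead factor is an assumed ballpark; the exact figure depends on the mode timings:

    # Estimate the pixel clock a display mode needs and compare it to
    # the RAMDAC frequency. The 1.32 blanking overhead is an assumption;
    # real overhead depends on the exact modeline.
    BLANKING_OVERHEAD = 1.32

    def required_pixel_clock_mhz(width, height, refresh_hz):
        return width * height * refresh_hz * BLANKING_OVERHEAD / 1e6

    # A 400 MHz RAMDAC has headroom for 2048x1536 at 85 Hz:
    print(required_pixel_clock_mhz(2048, 1536, 85))  # ~353 MHz, under 400 MHz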

The process fabrication (gate length) should be listed in nm instead of μm; the fractional values are quite cumbersome. Besides, the industry more commonly uses nm than μm, now that we see processing units manufactured on a 45nm process being announced.


Question:

Why is it that a 6600 graphics card is considered better than a 5900 graphics card? I understand that it has enhanced instruction sets and a lot of new features, but in terms of raw processing power and RAM speed, the 6600 seems slower than the 5900 graphics card.

The 6600 can process twice as many pixels per clock as the 5900 due to 8 pipelines compared to 4. Also, the FX family is known to be very slow when processing pixel shaders v2.0, which now makes the greatest impact in modern games. The 6600 does more work per MHz. ONjA 14:53, 4 May 2006 (UTC)
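
To make the "more work per MHz" point concrete, a quick sketch. The clock speeds below are assumed reference clocks for a plain 6600 and a 5900, not values taken from the article; factory-overclocked boards vary:

    # Theoretical pixel throughput = pixel pipelines x core clock (MHz),
    # giving megapixels per second. Clocks are assumed reference values.
    def pixel_throughput_mp_s(pipelines, core_clock_mhz):
        return pipelines * core_clock_mhz

    print(pixel_throughput_mp_s(8, 300))  # GeForce 6600: 2400 MP/s
    print(pixel_throughput_mp_s(4, 400))  # GeForce FX 5900: 1600 MP/s

Even at a lower clock, the 6600's 8 pipelines out-produce the 5900's 4, and that is before the FX family's PS 2.0 penalty.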

The bus column does not list PCI for many of the cards in the FX family and the GeForce 6200. I suspect the PCI bus has similarly been excluded from the MX family. I will add PCI as one of the bus options for the 6200 and 5500, as I am sure these two cards support PCI. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:45, 10 May 2006 (UTC)

I have made the 6200 PCI a separate row because of its differences from the other 6200 versions (it boasts an NV44 core, not an NV44a, yet doesn't support TurboCache). I have named this section the 6200 PCI. Please correct me if you think this isn't suitable. —The preceding unsigned comment was added by Coolbho3000 (talk • contribs) 22:52, 10 May 2006 (UTC)


[edit] doom 3

Shouldn't there be a last column called DOOM3, with values as:

  • NO - can't run the thing
  • NPL - runs low mode, but not playable
  • PL - playable in low mode (at least some 20FPS)
  • L - run Low mode just fine (some 50FPS maybe)
  • M - play in Medium mode
  • H - capable of high quality mode
  • H+ - every feature turned on and no fps drop yet

I think that at this point, Doom 3 is the most reliable benchmark to create some notion of performance. Another, free option would be to make a complex Blender scene, but I think that games are the only reason people look at this page.
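
If such a column were added, the proposed labels amount to something like the mapping below. This is a sketch: the 20 and 50 FPS cutoffs come from the list above, and the rest is my reading of the proposal:

    # Map a card's best playable quality mode and average FPS to the
    # proposed Doom 3 column labels. Thresholds are the proposer's
    # rough figures, not established benchmarks.
    def doom3_label(runs, mode, avg_fps, no_fps_drop_maxed=False):
        if not runs:
            return "NO"   # can't run the thing
        if mode == "low":
            if avg_fps < 20:
                return "NPL"  # runs low mode, but not playable
            return "PL" if avg_fps < 50 else "L"
        if mode == "medium":
            return "M"
        return "H+" if no_fps_drop_maxed else "H"

    print(doom3_label(True, "low", 25))  # PL: playable in low mode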

Picking any independent software title shows favoritism, which is avoided if we stick to internal Nvidia specifications. Shawnc 02:05, 15 June 2006 (UTC)
That's outrageous. In that case shouldn't we have F.E.A.R. and Half-Life 2 and Quake 4 alike? rohith 07:14, 1 November 2006 (UTC)

[edit] OpenGL?

Wouldn't it be apropos to include a column for the highest version of OpenGL supported? Not all of us use Windows. :) OmnipotentEntity 21:53, 22 June 2006 (UTC)

[edit] memory bandwidth

Bandwidth is calculated incorrectly. I've changed it to use GB/s, where 1 GB/s = 10^9 bytes/second. To properly calculate bandwidth in GiB/s, it's (bus width × effective memory clock) / 8 (bits per byte) / 1073741824 (bytes per GiB).
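
For future editors, both conventions in one sketch; the 256-bit / 700 MHz effective example values are assumed for illustration:

    # Memory bandwidth from bus width (bits) and effective memory clock (MHz).
    def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
        # decimal gigabytes: 10^9 bytes per second
        return bus_width_bits * effective_clock_mhz * 1e6 / 8 / 1e9

    def bandwidth_gib_s(bus_width_bits, effective_clock_mhz):
        # binary gibibytes: 2^30 bytes per second
        return bus_width_bits * effective_clock_mhz * 1e6 / 8 / 1073741824

    print(bandwidth_gb_s(256, 700))   # 22.4 GB/s
    print(bandwidth_gib_s(256, 700))  # ~20.86 GiB/s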

[edit] Vanta LT

I added some information about this card using lspci and nvclock -i. There's some conflict between their outputs:

lspci: VGA compatible controller: nVidia Corporation NV6 [Vanta/Vanta LT] (Rev 15)

nvclock -i:

Card: nVidia Riva TnT 2 VANTA
Architecture: NV10

The chipset says this card is a Vanta LT. Can anyone check what the heck this card is?

[edit] NV17, NV34, NV31 and NV36

The GeForce4 MX does not have a VPU of any kind. NVIDIA's drivers allow certain vertex programs to use the NSR that has been around since the NV11 days, but only if the (very simple) vertex program can be run on the GPU; otherwise it's done by the CPU. http://www.beyond3d.com/forum/showthread.php?t=142

The GeForce FX 5200 is a 4 pixel unit / 1 texture unit design, as stated here http://www.beyond3d.com/misc/chipcomp/?view=chipdetails&id=11&orderby=release_date&order=Order&cname= and here http://www.techreport.com/etc/2003q1/nv31-34pre/index.x?pg=2

Updated the note to reflect that NV31, NV34 and NV36 each have only 2 FP32 units, as described here http://www.beyond3d.com/forum/showthread.php?p=512287#post512287

[edit] CPUs

What happened to the pages Comparison of Intel Central Processing Units and Comparison of AMD Central Processing Units? I can't believe that I have to use answers.com instead of WP.

See List of Intel microprocessors and List of AMD microprocessors. But you're right that your two links should also work. - Frankie

[edit] 8xxx series

Should we add the 8 series samples to the list?

Not right now. We can add them after they have been confirmed officially (that is, after they have been released...) rohith 20:24, 13 October 2006 (UTC)

[edit] Transistor count

I'd like to see transistor counts added to the table. Shawnc 00:25, 9 November 2006 (UTC)

[edit] DirectX and NV2x

DirectX 8.0 introduced PS 1.1 and VS 1.1. DirectX 8.1 introduced PS 1.2, 1.3 and 1.4.
Sources: ShaderX,
http://www.beyond3d.com/forum/showthread.php?t=5351
http://www.beyond3d.com/forum/showthread.php?t=12079
http://www.microsoft.com/mscorp/corpevents/meltdown2001/ppt/DXG81.ppt

Thus NV20 was DirectX 8.0, but NV25 and NV28 supported the added PS 1.2 and 1.3 capabilities introduced in 8.1.

[edit] VPUs

I've listed any card with a T&L unit as having 0.5 VPUs, since it can do vertex processing but is not programmable. This also allows better comparability with the Radeon comparisons.

[edit] Sheet Change

The sheets are too tall to see the explanation columns and card specs at the same time; if I want to compare, I need to scroll back and forth. Could someone edit the tables to show the column explanations at both the top and the bottom?

[edit] Fillrate max (MT/s) for 8800GTS is incorrect

The fillrate listed for each graphics card on both the Comparison of ATI and Comparison of NVIDIA GPU pages is based on "core speed × number of pixel shaders" for discrete shaders, or "core speed × number of unified shaders / 2" for unified shaders.

The fillrate listed would be correct only if the 8800GTS had 128 unified shaders (500 * 128/2 = 32,000) instead of 96. The correct fillrate should be 24,000 (500 * 96/2 = 24,000).
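
In code, the two formulas described above look like this (a sketch using the 8800GTS numbers from this thread):

    # Fillrate as the comparison pages compute it, in MT/s.
    def fillrate_discrete(core_clock_mhz, pixel_shaders):
        return core_clock_mhz * pixel_shaders

    def fillrate_unified(core_clock_mhz, unified_shaders):
        return core_clock_mhz * unified_shaders / 2

    print(fillrate_unified(500, 128))  # 32000 -> the incorrectly listed value
    print(fillrate_unified(500, 96))   # 24000 -> the value for 96 shaders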

Should this be changed, or do we need a source explicitly stating 24,000 MT/s as the fillrate?

Nafhan 20:44, 24 January 2007 (UTC)


Found a page on the NVIDIA homepage listing 24000 MT/s as the fillrate for the 8800GTS, and made the update.

Nafhan 21:21, 26 January 2007 (UTC)

[edit] 6700 XL

I propose we add the 6700 XL, as it is listed elsewhere on Wikipedia: http://en.wikipedia.org/wiki/GeForce_6_Series Agreed? - steve —The preceding unsigned comment was added by 86.29.54.61 (talk) 11:04, 10 February 2007 (UTC).

[edit] GeForce4 MX4000

The graphics library version for this card is listed as 9 in this entry, which is not true. It does not even fully support 8.1; proof = http://translate.google.com/translate?hl=en&sl=zh-TW&u=http://zh.wikipedia.org/wiki/GeForce4&sa=X&oi=translate&resnum=3&ct=result&prev=/search%3Fq%3Dnvidia%2BNV18b%2Bengine%26hl%3Den%26client%3Dfirefox-a%26rls%3Dorg.mozilla:en-US:official%26sa%3DG —The preceding unsigned comment was added by Acetylcholine (talkcontribs) 18:22, 24 February 2007 (UTC).

[edit] PCX

The PCX 4300, PCX 5300, PCX 5750, PCX 5900, and PCX 5950 need to be added.

[edit] New Columns

Hi,

There are at least two very important values missing: vertex throughput and power consumption. Fillrate does not say much today; the overwhelming fillrate is mostly spent on anti-aliasing, which in my opinion is no criterion for buying a new GPU.

As for me, I want to compare my current hardware to cards that I might buy. Take this for example:

Model | Year | Code name | Fab (nm) | Bus interface | Memory max (MiB) | Core clock max (MHz) | Memory clock max (MHz) | Config core | Fillrate max (MT/s) | Vertices max (MV/s) | Power consumption est. (W) | Memory: bandwidth max (GB/s), bus type, bus width (bit) | Graphics library support: DirectX, OpenGL | Features

  • GeForce FX 5900 XT | Dec 2003 | NV35 | 130 | AGP 8x | 256 | 400 | 700 | 3:4:8:8 | 3200 | less than 356, more than 68, maybe 316 | ? | 22.4, DDR, 256 | 9.0b, 1.5/2.0** | (none listed)
  • GeForce 7600 GT | Mar 2006 | G73 | 90 | PCIe x16, AGP 8x | 256 | 560 | 1400 | 5:12:12:8 | 6720 | 700 | ? | 22.4, GDDR3, 128 | 9.0c, 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, Dual Link DVI
  • GeForce 7900 GS | May 2006 (OEM only), Sept 2006 (retail) | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 7:20:20:16 | 9000 | 822.5 | ? | 42.2, GDDR3, 256 | 9.0c, 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
  • GeForce 7900 GT | Mar 2006 | G71 | 90 | PCIe x16 | 256 | 450 | 1320 | 8:24:24:16 | 10800 | 940 | ? | 42.2, GDDR3, 256 | 9.0c, 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, 2x Dual Link DVI
  • GeForce 7950 GT | Sept 2006 | G71 | 90 | PCIe x16 | 256, 512 | 550 | 1400 | 8:24:24:16 | 13200 | 1100 | ? | 44.8, GDDR3, 256 | 9.0c, 2.0 | Scalable Link Interface (SLI), Transparency Anti-Aliasing, OpenEXR HDR, HDCP, 2x Dual Link DVI

(The "?" marks the proposed power consumption column, which is the value asked about below.)
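
The derived columns above can be sanity-checked from the raw specs. A sketch; it assumes the config string is vertex:pixel:texture:ROP units and that MT/s means texture units × core clock, which matches every row above:

    # Recompute fillrate and memory bandwidth from the raw columns.
    def fillrate_mt_s(config, core_clock_mhz):
        vertex, pixel, texture, rop = map(int, config.split(":"))
        return texture * core_clock_mhz  # megatexels per second

    def bandwidth_gb_s(mem_clock_mhz, bus_width_bits):
        return mem_clock_mhz * bus_width_bits / 8 / 1000  # GB/s

    print(fillrate_mt_s("5:12:12:8", 560))  # GeForce 7600 GT: 6720 MT/s
    print(bandwidth_gb_s(1400, 128))        # GeForce 7600 GT: 22.4 GB/s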

You can find the est. power consumption at http://geizhals.at/deutschland/?cat=gra16_256, but I believe it is not allowed to take it from there...

Does anyone know where to get real tech specs from nvidia?

JPT 10:02, 2 March 2007 (UTC)

[edit] Incorrect 6600 GT ?!?

I've got a 6600 GT, and as Google can tell you, this card has 8 pipelines, not 4!

[edit] PlayStation 3 GPU

Should this sheet list the PS3's RSX chip somewhere? It is pretty much a slightly modified G70... 68.228.65.16 23:40, 19 March 2007 (UTC)