As we mentioned in our EVGA e-GeForce 6800 review, NVIDIA and its board partners have quietly reduced board prices on their high-end GeForce 6800 products. Whereas just a few months ago NVIDIA’s best mainstream offering at the crucial $200 price point was the 8-pipeline GeForce 6600 GT, today street prices on NVIDIA’s more potent GeForce 6800 have hit the $200 mark.
This has huge ramifications for enthusiasts looking to get the most bang for their buck because the GeForce 6800 boasts a number of features that make it more powerful than the GeForce 6600 GT.
For starters, the GeForce 6800 sports a 12-pipeline architecture backed by five vertex engines, good for up to 406 million triangles/sec and a 3.9 Gigatexel/sec fill rate. The GeForce 6600 GT features fewer pixel and vertex pipes, but attempts to make up for this by running at much higher clocks: 500MHz on the 6600 GT versus 325MHz on the GeForce 6800.
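Those fill rate numbers fall straight out of pipeline count times core clock. Here’s a quick back-of-the-envelope check (a sketch in Python, assuming one texel per pixel pipeline per clock, which lines up with NVIDIA’s quoted 3.9 Gigatexel figure):

# Rough peak fill rate: pixel pipelines x core clock,
# assuming one texel per pipeline per clock.
def fill_rate_gtexels(pipelines, core_mhz):
    return pipelines * core_mhz / 1000.0  # Mtexels/sec -> Gtexels/sec

print(f"GeForce 6800:    {fill_rate_gtexels(12, 325):.1f} Gtexels/sec")  # 3.9
print(f"GeForce 6600 GT: {fill_rate_gtexels(8, 500):.1f} Gtexels/sec")   # 4.0

Notice that on paper, the 6600 GT’s clock speed advantage nearly erases its pipeline deficit, which is exactly why the memory subsystem ends up being the deciding factor.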
The real advantage the GeForce 6800 enjoys over the GeForce 6600 GT, however, is its memory subsystem.
The GeForce 6800 features a 256-bit memory interface, with four 64-bit memory controllers paired to either 128MB or 256MB of DDR memory running at 350MHz (700MHz effective). This gives the GeForce 6800 up to 22.4GB/sec of peak memory bandwidth, 6.4GB/sec more than the GeForce 6600 GT. As a result, the GeForce 6800 really shines against the GeForce 6600 GT at high screen resolutions with anti-aliasing turned on. You saw this a few weeks ago in our EVGA review: with 4xAA/8xAF at resolutions of 1024x768 and up, the GeForce 6800 was running 20% faster or more in multiple cases.
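The bandwidth math is just as straightforward: bus width in bytes multiplied by the effective memory clock. A minimal sketch, with the GeForce 6600 GT’s 128-bit, 1000MHz-effective figures filled in from its spec sheet:

# Peak memory bandwidth: (bus width / 8) bytes per transfer x effective clock.
def bandwidth_gb_s(bus_bits, effective_mhz):
    return (bus_bits / 8) * effective_mhz / 1000.0

gf6800 = bandwidth_gb_s(256, 700)     # 22.4 GB/sec
gf6600gt = bandwidth_gb_s(128, 1000)  # 16.0 GB/sec
print(f"{gf6800 - gf6600gt:.1f} GB/sec")  # 6.4 GB/sec advantage for the 6800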
But what if you could turn the 12-pipeline GeForce 6800 into an even more powerful 16-pipeline card without spending a dime? That’s exactly what many of you asked us to do a few weeks ago with EVGA’s e-GeForce 6800, so guess what? We did!
In order to understand how this is possible, it’s best to go over the GeForce 6800’s architecture first.
Optimizing NV40’s manufacturing process
As you probably know by now, the GeForce 6800 is based on NVIDIA’s NV40 graphics core. In AGP form, this is the exact same chip NVIDIA uses for the GeForce 6800 Ultra and GeForce 6800 GT, only with four of its pixel pipelines and one of its vertex units disabled. More specifically, NV40 sports sixteen pixel pipelines organized into groups of four known as “quads”. The GeForce 6800 GT and Ultra have all four quads running, for a total of 16 pixel pipelines. In the GeForce 6800, NVIDIA turns off one of the four quads, leaving three quads running for a total of 12 pipelines.
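To make the quad arrangement concrete, here’s a purely illustrative model (our sketch, not NVIDIA’s actual driver logic) that treats the four quads as a 4-bit enable mask, conceptually similar to the mask that software unlocking utilities manipulate:

# Illustrative only: NV40's 16 pixel pipelines are grouped into four
# "quads", each of which is enabled or disabled as a unit.
PIPES_PER_QUAD = 4

def active_pipelines(quad_mask):
    # Count the enabled quads in the low 4 bits, then scale to pipelines.
    return bin(quad_mask & 0b1111).count("1") * PIPES_PER_QUAD

print(active_pipelines(0b1111))  # GeForce 6800 GT/Ultra: 16 pipelines
print(active_pipelines(0b0111))  # GeForce 6800: one quad masked off, 12

In broad strokes, “unlocking” a GeForce 6800 amounts to flipping that masked-off quad back on, which is what we’ll be doing later in this article.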
Why does NVIDIA do this, you ask? To decrease manufacturing costs, NVIDIA (like all semiconductor manufacturers) uses tactics such as tweaking clock speeds and disabling features on its chips in order to serve multiple markets with what is essentially the same product.
In the case of the NV40 family, the chips that are verified and tested to run at the highest clock speeds are put into GeForce 6800 Ultra cards, while those that just miss the cut go into GeForce 6800 GTs. NVIDIA then played it safe and created a third NV40-based product, the GeForce 6800. Typically, whenever a brand-new product is introduced, some chips just don’t cut the mustard and aren’t able to run at the required clock speeds (especially when the product is built on an unproven manufacturing process). With NV40, NVIDIA was concerned that some chips might not be able to hit the clock speeds it had envisioned with all sixteen pipelines running. Rather than being thrown away as waste, these chips instead go into GeForce 6800 cards.
It’s important to keep in mind that the above only applies to NVIDIA’s AGP GeForce 6800 products. For PCI Express cards, NVIDIA uses two distinct graphics cores: one with 16 pipelines for the GeForce 6800 Ultra/GT (NV45), and a second, NV41, which is built from the start with only 12 pipelines and goes into PCI Express GeForce 6800 cards.