Summary: With Intel's integrated graphics offerings continuing to dominate the graphics market, both ATI and NVIDIA have unveiled new DX9 value cards that hit previously unheard-of price points -- as low as $50! In this article Paul takes a look at two different GeForce 6200 TC cards, as well as ATI's equivalent offering, the RADEON X300 SE, pitting them against Intel's latest integrated offering found in the 945G chipset. See how the four graphics solutions stack up in a variety of games and benchmarks in our latest article!
Intel’s market leadership is of course attributable to the integrated graphics solutions in its 845G, 865G, 915G, and now 945G chipsets. The bottom line is that most people buying PCs aren’t willing to pay $400 for a graphics card, especially when brand-new computers can easily be found for under $500.
There are two ways for NVIDIA and ATI to take some of Intel’s market share in the graphics industry. One is by developing new chipsets with integrated graphics, which ATI now has with the RADEON XPRESS 200, and which NVIDIA is developing for this fall with its C51G motherboards.
The other is through new discrete solutions (based on existing architectures) that hit new all-time-low price points.
Concocting a sub-$100 GPU
To hit these new $50 and $60 price points, both NVIDIA and ATI have had to drastically redesign their low-end graphics cards. From a hardware standpoint, the new design is actually quite simple: the cards now ship with less memory, which reduces manufacturing costs for ATI’s and NVIDIA’s board partners.
This is where NVIDIA’s TurboCache and ATI’s HyperMemory technologies come in. Both are designed to efficiently use your system’s main memory as additional frame buffer memory. How do they do it? Both techniques use the PCI Express interface’s ample bandwidth to access system memory, keeping the graphics core from stalling and thus preventing your games from turning into a slideshow.
Since the X300 SE is limited to a 64-bit memory interface, it offers up to 4.8GB/sec of local memory bandwidth. Combine that with 8GB/sec of system memory bandwidth over PCI Express, and the RV370 core has 12.8GB/sec of total bandwidth to work with. Just keep in mind that a latency penalty kicks in whenever the card has to reach out to system memory.
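To make those figures concrete, here's a quick back-of-the-envelope check. This is our own arithmetic from the specs quoted above, not an ATI tool: local bandwidth is simply the bus width in bytes multiplied by the effective (DDR) memory clock, and the 8GB/sec PCI Express figure counts both directions of an x16 link.

```python
# Back-of-the-envelope bandwidth math for the X300 SE (our arithmetic,
# based on the specs quoted above, not vendor tooling).

def local_bandwidth_gbs(bus_bits: int, effective_mhz: int) -> float:
    """Peak local memory bandwidth in GB/sec: bus width in bytes x effective clock."""
    return (bus_bits / 8) * effective_mhz / 1000

PCIE_X16_GBS = 8.0  # 4GB/sec each way on an x16 link, counted bidirectionally

# X300 SE: 64-bit bus, 300MHz DDR memory = 600MHz effective
x300_local = local_bandwidth_gbs(64, 600)
print(f"local: {x300_local:.1f} GB/sec")                 # 4.8
print(f"total: {x300_local + PCIE_X16_GBS:.1f} GB/sec")  # 12.8
```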
As we stated earlier, HyperMemory uses the high-speed, bidirectional data transfer capabilities of PCI Express to store and access graphics data in system memory. HyperMemory works by using intelligent memory allocation algorithms to optimize how system memory is used as graphics memory.
These algorithms decide where data should be placed: the most important data is stored locally on the graphics card; data that can’t fit locally is stored in GART memory, which is non-paged system memory assigned to the graphics card; and once the GART is full, data spills over into pageable system memory. HyperMemory can access local graphics memory and system memory in parallel, which helps to optimize performance.
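Purely as illustration, here is a minimal sketch of how that three-tier placement might look in code. This is our own toy model, not ATI's driver logic, and the tier sizes are hypothetical numbers chosen to resemble the 32MB card: each buffer simply lands in the fastest tier that still has room.

```python
# Toy model of HyperMemory-style tiered placement (our sketch, not ATI's
# driver code). Tiers are ordered fastest to slowest; sizes are hypothetical.

class TieredAllocator:
    def __init__(self):
        # free MB remaining per tier: local VRAM, GART (non-paged), pageable RAM
        self.tiers = {"vram": 32, "gart": 64, "pageable": 32}

    def allocate(self, name: str, size_mb: int) -> str:
        """Place a buffer in the fastest tier that can still hold it."""
        for tier, free_mb in self.tiers.items():
            if size_mb <= free_mb:
                self.tiers[tier] -= size_mb
                return tier
        raise MemoryError(f"no room anywhere for {name} ({size_mb}MB)")

alloc = TieredAllocator()
print(alloc.allocate("front_buffer", 16))  # 'vram' -- hot data stays local
print(alloc.allocate("textures", 40))      # 'gart' -- too big for remaining VRAM
print(alloc.allocate("vertex_data", 30))   # 'pageable' -- GART is nearly full
```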
The ATI RADEON X300 SE ships in two flavors: one with 32MB onboard that supports up to 128MB of total memory, and another with 128MB onboard that supports up to 256MB. Both HyperMemory cards use a 64-bit memory interface and ship at the same 325MHz core/300MHz memory clock speeds; the only difference is the amount of onboard memory, with the 32MB RADEON X300 SE being the more popular of the two. Prices range from $45-$55, depending on the manufacturer.
For our testing today, we’ll be using the ATI-built RADEON X300 SE 128MB card pictured above.
NVIDIA offers the GeForce 6200 TC in several TurboCache configurations, which differ in memory interface width and onboard memory size.
TurboCache works in a similar manner to HyperMemory, using an onboard memory management unit to dynamically determine how much system memory should be used as frame buffer memory. Once that system memory is no longer needed as graphics memory, it reverts back to general system use. Essentially, TurboCache is an “on-demand” memory service, borrowing system memory for the graphics frame buffer as needed.
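Again purely as illustration, here is a toy model of that borrow-and-return behavior, with hypothetical numbers matching the 32MB-local/128MB-supported configuration; the real decisions are made by the GPU's memory management unit and NVIDIA's driver, not code like this.

```python
# Toy model of TurboCache-style on-demand borrowing (our sketch, not
# NVIDIA's MMU logic). System memory is borrowed only when the frame
# buffer outgrows local memory, and handed back as soon as demand drops.

class TurboCacheModel:
    def __init__(self, local_mb: int, supported_mb: int):
        self.local_mb = local_mb          # onboard graphics memory
        self.supported_mb = supported_mb  # the advertised "supporting up to" size
        self.borrowed_mb = 0              # system memory currently on loan

    def demand(self, needed_mb: int) -> None:
        """Grow or shrink borrowed system memory to match current demand."""
        if needed_mb > self.supported_mb:
            raise MemoryError("demand exceeds the supported frame buffer size")
        # shrinking this value models returning memory to the OS
        self.borrowed_mb = max(0, needed_mb - self.local_mb)

gpu = TurboCacheModel(local_mb=32, supported_mb=128)
gpu.demand(90)
print(gpu.borrowed_mb)  # 58 -- a heavy scene borrows 58MB of system memory
gpu.demand(20)
print(gpu.borrowed_mb)  # 0  -- a light scene hands it all back
```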
This is where the fast, bidirectional bandwidth of PCI Express comes in: when the interface’s 8GB/sec is coupled with local memory bandwidth, from 2.8GB/sec on the 32-bit SKUs up to 5.6GB/sec on the 64-bit SKUs, the effective bandwidth of a GeForce 6200TC GPU reaches up to 13.6GB/sec.
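The same back-of-the-envelope helper from our earlier snippet makes those figures concrete (again, this is our arithmetic derived from the quoted specs, not NVIDIA's published tooling):

```python
def local_bandwidth_gbs(bus_bits: int, effective_mhz: int) -> float:
    return (bus_bits / 8) * effective_mhz / 1000  # bus bytes x effective clock

PCIE_X16_GBS = 8.0  # bidirectional x16 link bandwidth

print(local_bandwidth_gbs(32, 700) + PCIE_X16_GBS)  # 10.8 -- 32-bit @ 700MHz
print(local_bandwidth_gbs(64, 700) + PCIE_X16_GBS)  # 13.6 -- 64-bit @ 700MHz
print(local_bandwidth_gbs(64, 550) + PCIE_X16_GBS)  # 12.4 -- 64-bit @ 550MHz
```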
Today we’ll be taking a look at two different versions of the GeForce 6200TC: the XFX GeForce 6200TC 128MB, with a 64-bit memory bus and 32MB of local memory, and the Leadtek GeForce 6200TC 256MB, also featuring a 64-bit memory bus, but with 64MB of local graphics memory.
It’s important to note that unlike the RADEON X300 SE, the GeForce 6200TC has full support for Shader Model 3.0. To reduce manufacturing costs, however, NVIDIA has removed the color and Z-compression hardware, as well as the OpenEXR HDR support found in the GeForce 6800, from the GeForce 6200 TurboCache. This hampers the 6200 TC’s anti-aliasing performance and prevents the card from supporting high-dynamic-range lighting, but the card lacks the bandwidth to run high levels of AA at playable frame rates anyway, and we’ve already seen the performance impact OpenEXR HDR brings in Far Cry.
The GMA 950 is a 400MHz part that, like the 6200TC and RADEON X300 SE, features four pixel pipelines. Like the RADEON X300 SE, it has full support for Shader Model 2.0 and DirectX 9.0, but it of course has no local memory, so it dedicates a portion of your system memory as its frame buffer. Motherboards featuring the 945G chipset are currently selling for as low as $110.
Intel Pentium 4 660 3.8GHz
Intel 945G Reference Motherboard
2 x 512 MB OCZ PC5400 @ 4-4-4-12
ATI RADEON X300 SE HyperMemory 128MB
XFX GeForce 6200TC 128MB
Leadtek GeForce 6200TC 256MB
Driver Version: Catalyst 5.6 and ForceWare 77.30
Intel INF 126.96.36.1999
Intel Graphics Media Accelerator Driver 188.8.131.5208
80GB Western Digital WD80JB
Windows XP Service Pack 2
Far Cry 1.3
3DMark 03 – Direct3D
Unreal Tournament 2004 – Direct3D
Far Cry – Direct3D
Splinter Cell – Direct3D
DOOM 3 – OpenGL
Half-Life 2 – Direct3D
However, Intel’s current GMA 950 results should be taken with a grain of salt, as Intel is still working on a new driver that will add Splinter Cell: Chaos Theory support in addition to improving performance in DirectX 9 games. With that said, we have to go with the performance numbers as they stand today, and those numbers lie in favor of XFX’s GeForce 6200TC 128MB.
For the most part, the XFX GeForce 6200TC 128MB outran ATI’s RADEON X300 SE 128MB, winning the majority of our tests. Aside from performance, we must also look at the features each graphics card brings to the table: the NV44-based GeForce 6200TC features Shader Model 3.0 support, while the RV370-based X300 SE only supports Shader Model 2.0.
It’s important to note that this is truly an apples-to-apples comparison between the XFX and ATI cards, as both have a 64-bit memory interface and 32MB of local memory. With that said, the XFX GeForce 6200TC 128MB is simply the faster card.
In fact, our Leadtek GeForce 6200TC 256MB with 64MB of onboard memory is noticeably slower than the XFX part, due to its slower 550MHz memory, compared to the 700MHz memory of the XFX GeForce 6200TC 128MB.
This brings us to our only real complaint about the GeForce 6200 TC: NVIDIA’s branding and naming conventions for the product. As it stands now, NVIDIA provides four different GeForce 6200 TC SKUs, each with a wildly different memory configuration. One card may ship with a 32-bit memory interface and only 16MB of memory, while another ships with a 64-bit interface and 32MB of memory, yet both are sold as GeForce 6200 TurboCache 128MB cards!
As a result, purchasing a 6200 TC card can be incredibly confusing. Even if you know exactly which SKU you want, it can be difficult to sort through all the marketing and determine whether the card on the shelf has the memory interface and local memory size you’re after. Things are likely even more confusing for the casual consumer who knows nothing about the 6200 TC; he’ll likely purchase a 256MB SKU thinking it’s the fastest 6200 TC available, even though our benchmarks clearly demonstrate otherwise.
So are these two discrete graphics technologies a step up from current integrated offerings? The answer is a resounding yes, as both NVIDIA and ATI have successfully brought their existing graphics architectures down to price points that directly compete with Intel’s integrated graphics. Looking at the performance numbers, and even at the driver support both companies offer for their graphics products, upgrading from integrated graphics really is a no-brainer, and it’s cheap enough for nearly everyone to do. The truth is, Intel’s 945G has been out for almost a month, and the chipset still doesn’t support Splinter Cell: Chaos Theory!
The bottom line is that NVIDIA’s TurboCache outperforms ATI’s HyperMemory technology. The XFX GeForce 6200TC 128MB, featuring a 64-bit memory interface, was noticeably faster than the ATI RADEON X300 SE 128MB in most of our gaming benchmarks. That, coupled with the extra features incorporated in the 6200TC GPU, gives NVIDIA the upper hand in the latest standoff between the two graphics giants.