Prior to summer 2000, the value segment of the graphics market was pretty boring. The formula for graphics manufacturers was simple: release your latest product at the high end and cut prices on your existing lineup, just as CPU manufacturers had been doing for decades. Eventually the blueprint was refined a bit: rather than older or lower-clocked products at the low end, we got graphics cores that were stripped-down variants of their high-end siblings. These crippled products left much to be desired and a bitter taste in the mouths of consumers.
Redefining Value: GeForce2 MX
Everything changed with the debut of NVIDIA’s NV11 graphics core, better known today as GeForce2 MX. Unlike previous graphics architectures, GeForce2 MX was designed from the ground up for the value segment of the 3D graphics market. Fusing DirectX 7 features such as hardware transformation and lighting with a pair of pixel pipelines (each able to process two textures per clock), GeForce2 MX was one of the most significant releases that year. Not only did GeForce2 MX bring DX7 to the masses, it also propelled NVIDIA to #1 in desktop market share, a position it has held tenaciously to this day.
GeForce2 MX wasn’t limited to OEM success; it was also a hit among gamers. For the first time in the history of 3D graphics, the term “value graphics” didn’t equate to poor performance. In situations where memory bandwidth wasn’t a factor, GeForce2 MX was quite capable of keeping up with the fastest graphics accelerators on the market. In fact, we imagine that quite a few of you reading this article have owned a GeForce2 MX card at some point, or are still using one today.
This is quite a testament to the design of GeForce2 MX, but unfortunately, its follow-up products were pretty big disappointments. GeForce2 MX 400 was just more of the same, only with a 25MHz bump in core clock speed. Memory bandwidth had always been one of GeForce2 MX’s weak points, so with GeForce2 MX 400’s memory subsystem left completely untouched, the clock speed bump didn’t have much of an impact. To add insult to injury, NVIDIA also released a crippled GeForce2 MX, the GeForce2 MX 200; let’s just say that we weren’t too impressed with that product. The worst part of all is that this was NVIDIA’s spring refresh for 2001. That’s right, we waited nearly a year for what turned out to be essentially nothing new.
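To put some rough numbers on why the clock bump fell flat, here is a quick back-of-the-envelope sketch of our own (the clocks are the commonly cited 175/200MHz core and 166MHz SDR on a 128-bit bus, and the ~8 bytes of framebuffer traffic per pixel is a simplifying assumption, not an NVIDIA figure):

    #include <cstdio>

    int main()
    {
        // Assumed specs: both cards have 2 pixel pipelines and an
        // identical memory subsystem.
        const double pipelines       = 2.0;
        const double mx_core_mhz     = 175.0;        // GeForce2 MX
        const double mx400_core_mhz  = 200.0;        // GeForce2 MX 400 (+25MHz)
        const double mem_bytes_per_s = 166e6 * 16.0; // 166MHz SDR x 128-bit, ~2.7GB/s

        // Theoretical pixel fillrate scales directly with the core clock...
        std::printf("GeForce2 MX fillrate:     %.0f Mpixels/s\n",
                    pipelines * mx_core_mhz);
        std::printf("GeForce2 MX 400 fillrate: %.0f Mpixels/s\n",
                    pipelines * mx400_core_mhz);

        // ...but at 32-bit color each pixel costs roughly 8 bytes of
        // framebuffer traffic (color write plus Z read/write; a simplifying
        // assumption), so memory imposes the same ceiling on both cards.
        std::printf("Bandwidth-limited ceiling: ~%.0f Mpixels/s on either card\n",
                    mem_bytes_per_s / 8.0 / 1e6);
        return 0;
    }

Under these assumptions both cards hit the same memory ceiling (~332 Mpixels/s) well before their theoretical fillrates of 350 and 400 Mpixels/s, which is why the MX 400’s extra 25MHz was mostly wasted.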
Last year NVIDIA unveiled its GeForce4 MX family. While GeForce4 MX did inherit the Accuview anti-aliasing engine from GeForce4 (which allowed it to perform very competitively with GeForce3 in Quincunx AA mode), the part was essentially nothing more than June 2000’s GeForce2 MX on steroids. The hardware pixel and vertex shaders introduced roughly a year earlier with GeForce3 were notably absent.
As a result, NVIDIA went into 2003 with what was essentially three-year-old DirectX 7 technology in the value segment. ATI’s own value part, the RADEON 9000, boasts DX8 compliance and has reaped the benefits of NVIDIA’s complacency.
Enter GeForce FX 5200. Not only is this product family based on an entirely new architecture; it also possesses full DirectX 9 support. We don’t just get 2.0 pixel and vertex shaders; we get the same 2.0+ pixel and vertex shaders supported by NVIDIA’s flagship GeForce FX 5800 Ultra!
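For those curious what that compliance looks like from a developer’s chair, here is a minimal sketch of our own (not NVIDIA or Microsoft sample code) showing how a game might query a card’s shader support through Direct3D 9’s device capability structure. A DirectX 7 part like GeForce4 MX reports no programmable shaders at all, while a full DirectX 9 part like GeForce FX 5200 should pass both version checks below:

    // query_shader_caps.cpp -- link against d3d9.lib (DirectX 9 SDK)
    #include <d3d9.h>
    #include <cstdio>

    int main()
    {
        // Create the core Direct3D 9 interface.
        IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
        if (!d3d) return 1;

        // Ask the primary display adapter for its hardware capabilities.
        D3DCAPS9 caps;
        if (SUCCEEDED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        {
            // Shader versions are packed DWORDs; the low two bytes hold
            // the major and minor version numbers.
            std::printf("Vertex shaders: %lu.%lu\n",
                        (caps.VertexShaderVersion >> 8) & 0xFF,
                        caps.VertexShaderVersion & 0xFF);
            std::printf("Pixel shaders:  %lu.%lu\n",
                        (caps.PixelShaderVersion >> 8) & 0xFF,
                        caps.PixelShaderVersion & 0xFF);

            // A full DirectX 9 part must report at least the 2.0 versions.
            if (caps.VertexShaderVersion >= D3DVS_VERSION(2, 0) &&
                caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0))
                std::printf("Shader Model 2.0 effects run in hardware.\n");
        }

        d3d->Release();
        return 0;
    }

Note that the version numbers alone don’t capture NVIDIA’s “2.0+” extensions; those surface through the separate VS20Caps and PS20Caps fields of the same structure.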