Brandon: And when it comes to drivers, you’re making it too easy for me. I didn’t feel the need to address your comments on driver maturity, considering all the “optimizations” that have gone into NVIDIA’s drivers recently.
What's up with NVIDIA's own optimization guideline, "An optimization must not contain a pre-computed state"? They turn around and dump shader replacement code into their newer ForceWare drivers. So I take it a hand-optimized replacement shader doesn't count as "pre-computed state" by NVIDIA's reckoning? Looks to me like they're not even following their own stated guidelines.
Also, don't forget NVIDIA's use of partial precision. NVIDIA loves to promote the 32-bit floating-point (FP32) precision of their shader hardware, but their drivers fall back to 16-bit partial precision (FP16) in many cases.
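To see why that fallback matters, here's a minimal sketch using NumPy's float16 and float32 types as stand-ins for the FP16 and FP32 shader formats; the driver-side substitution itself obviously can't be reproduced here, this just illustrates what FP16's 10-bit mantissa gives up versus FP32's 23 bits:

```python
import numpy as np

# FP32 (1 sign / 8 exponent / 23 mantissa bits) vs FP16 (1/5/10).
# The shorter mantissa is where partial precision bites.

# 1) Above 2048, FP16 can no longer represent consecutive integers,
#    so large texture coordinates or depth-like values quantize.
print(np.float16(2048) == np.float16(2049))  # True -- both round to 2048.0
print(np.float32(2048) == np.float32(2049))  # False

# 2) Accumulating many small contributions (multi-pass lighting,
#    long shader chains) drifts in FP16 but holds up in FP32.
acc16, acc32 = np.float16(0.0), np.float32(0.0)
for _ in range(1000):
    acc16 = np.float16(acc16 + np.float16(0.001))
    acc32 = np.float32(acc32 + np.float32(0.001))
print(acc32, acc16)  # acc32 lands at ~1.0; acc16 drifts off target
```

Whether that loss is visible depends entirely on the shader, which is exactly why a silent, per-game fallback is worth calling out.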
And equating NV3x to my car's K20 engine is like comparing Godzilla to a gecko. There's a night-and-day difference between NV3x's 125 to 130 million transistors (125 million for the FX 5800, 130 million for the FX 5900) and ATI's 110 million transistor RADEON 9800. ATI was able to squeeze more performance per transistor out of their high-end offerings than NVIDIA, so ATI's always been the small, efficient one, just as the RSX-S's 100 horsepower per liter is among the highest figures in the industry for a naturally aspirated engine. If anything, that margin has only widened with this latest generation.
ATI's got 16 pipes, just like GeForce 6800 Ultra, only they've squeezed theirs into a smaller 160 million transistor core. On top of that, the X800 XT Platinum Edition runs that core at 525MHz, which makes GeForce 6800 Ultra's 400MHz core look paltry by comparison.
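Run the numbers and the gap is easy to quantify. Here's a back-of-the-envelope sketch; the pipe counts and clocks are the ones quoted above, while the 222 million transistor figure for GeForce 6800 Ultra's NV40 core is an outside, widely cited number I'm adding for the per-transistor comparison:

```python
# Theoretical peak fill rate = pixel pipes x core clock.
# Specs from the text above; NV40's 222M transistor count is assumed
# from contemporary reporting, not from this article.

cards = {
    "RADEON X800 XT PE":  {"pipes": 16, "clock_mhz": 525, "transistors_m": 160},
    "GeForce 6800 Ultra": {"pipes": 16, "clock_mhz": 400, "transistors_m": 222},
}

for name, c in cards.items():
    fill_gpix = c["pipes"] * c["clock_mhz"] / 1000       # Gpixels/sec
    per_xtor = fill_gpix * 1000 / c["transistors_m"]     # Mpix/s per M transistors
    print(f"{name}: {fill_gpix:.1f} Gpix/s theoretical, "
          f"{per_xtor:.1f} Mpix/s per million transistors")

# X800 XT PE: 8.4 Gpix/s (52.5 Mpix/s per million transistors)
# 6800 Ultra: 6.4 Gpix/s (28.8 Mpix/s per million transistors)
# -- roughly a 31% theoretical edge from clock speed alone.
```

Theoretical fill rate says nothing about shader throughput or per-clock efficiency, of course; it just puts a number on the clock and transistor-budget gap.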
Sure, ATI's engine may be a little smaller, but just as in the automotive industry, the size of the engine alone tells you nothing about the overall performance of the finished product. The real question is: what is NVIDIA doing with all those extra transistors?
Besides, it's not like ATI's driver team hasn't delivered performance enhancements in the past. Just read my CATALYST 4.3 report from mid-March to see the DX9 performance increases ATI's driver team just delivered for its users. I saw double-digit performance increases in Halo. Granted, Halo's timedemo is more of a synthetic test than a real-world benchmark, but I also saw some nice improvements in Tomb Raider, which is benched with a demo based on actual gameplay.
The fact of the matter is that performance is only going to improve on both cards, and speculating on which card will improve the most is a little premature. NVIDIA has obviously had a few weeks to polish their ForceWare 60-series drivers since GeForce 6800 Ultra initially launched, and while there were some improvements in Far Cry, I saw the exact opposite in Tomb Raider. The newer driver was also far buggier than 60.72.
You also mention NVIDIA's "The Way It's Meant to Be Played" campaign, but it's not like these titles offer anything special. Okay, Splinter Cell got better shadows, I'll give you that, but NVIDIA's distance fog in Call of Duty was downright silly. It wouldn't make any sense for a game developer to offer proprietary features to one set of users; they'd risk pissing off the rest of their customers. Those days died with Glide, and the gaming world is better off for it.
"The Way It's Meant to Be Played" is marketing-speak aimed at the average consumer who knows nothing about hardware, nothing more, nothing less. Developers have also had their hands on DX9 hardware from ATI longer than they've had NVIDIA's, so if anything, NVIDIA needs those engineers more than ATI does.