How low can Intel go?
While it is difficult to tell when Intel lost the initiative internally, it is not difficult to tell when it began losing it relative to the competition. That occurred in 1998, with AMD’s launch of the K6-2. Until the K6-2, AMD was at best able to offer some extra clock speed over Intel’s chips (remember the 486DX-40?). The K6-2 introduced “3DNow!”, a much-hyped technology that promised to address the K6’s poor floating point performance. With the K6, AMD had simply been caught flat-footed by the gaming market and the move to 3D. The original K6 design was actually superior to the Pentium in integer performance, clock for clock, and it was scaling better to boot.
Then Intel launched the Pentium II, a processor based on the Pentium Pro core, which featured two major improvements. The first was that it was designed from the start around 32-bit applications rather than mixed 16-bit and 32-bit code. The second was a massive gain in floating point capability. Whereas the K6 could manage roughly 70-80% of the Pentium’s floating point calculations per clock, the Pentium II stomped it silly, delivering double what the K6 could do.
Designing a new processor is not easy. It is expensive, time-consuming and a generational investment. The 386, 486, Pentium, and Pentium Pro/II/III lines all saw roughly a four- to five-fold gain in clock speed over their lifetimes. The 386 started at 12.5MHz on the low end and reached 40MHz (in AMD’s Am386). The 486 started as slow as 25MHz and reached 100MHz (120MHz in AMD’s variations). The Pentium ran from 60MHz up to 233MHz. The Pentium Pro/II/III line, technically still alive today in the Pentium M, started at 200MHz and went up past 1GHz. Those last three cores (Pro, II, and III) were so closely related that the distinctions between them are too thin to worry about.
As you can see, then, AMD faced the terrible problem of being significantly behind in a key area of its core with no easy way to update it. Its solution was 3DNow!, a SIMD (single instruction, multiple data) set of operations for floating point calculations. In retrospect, it was a fairly ugly solution and a hack. Even at the time, most knowledgeable computer users weren’t keen on it, because at minimum it required a game developer to recompile existing code to take advantage of it. A better option was to write a game with 3DNow! SIMD instructions in mind from the get-go. And really, who would do that for a company as small and unknown as AMD?
Except, luckily for AMD, the internet was gaining steam at the same time, and slowly but surely the myths built up around non-Intel processors were being wiped away. Just as there had been concern ten years prior about PCs being “100% IBM PC compatible” (meaning not junk like Tandy computers), in the 1990s there was worry that processors weren’t compatible with each other. This was partly FUD, partly caution around such pricey investments, and definitely not helped by the issues surrounding Cyrix processors.

With the early internet there was also a lot of openness and interaction from hot, young development studios like id Software, Epic, 3DRealms and others. Gamers were keen on getting more bang for their buck, so they harassed developers into including at least nominal 3DNow! support. A K6-2 still wasn’t as good as a Pentium II, and Intel quickly retaliated with SSE in the Pentium III, but that’s the point: Intel had to respond. It was not first to market. (MMX had provided SIMD for integer operations a few years earlier, but no one cared about that even when integer performance was key.)