Reaching the limits
133MB/sec isn't enough
If you're a computer user who spends most of your time surfing the web and downloading the latest MP3s, it's likely you haven't felt the bottlenecks of PCI technology. It's only when you start packing a machine with high-speed devices that the limits become a real concern.
While 133MB/sec of bandwidth can easily handle a few hard drives, a network card and an optical drive, things become a problem when many devices share the same I/O bus. Take gigabit network cards, for example. A single card can send and receive roughly 100MB/sec - still within the realm of PCI - but put two gigabit cards in the same system and the PCI bus becomes heavily saturated.
Mainstream technologies like IDE RAID are also contributing to PCI's death. A single hard drive is a pushover for PCI, but stripe four or more drives together and transfer rates can easily climb past 100MB/sec. Add a network card, a sound card and maybe a SCSI card, and you can begin to see the strain.
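To put rough numbers on it, here is a quick back-of-the-envelope sketch in Python that adds up the sustained rates mentioned above against PCI's 133MB/sec ceiling. The per-device figures are the approximate ones quoted in this section, not benchmark results.

```python
# Rough bandwidth budget for a shared 32-bit/33MHz PCI bus. The per-device
# figures are the approximate rates quoted in the article, not measurements.

PCI_PEAK_MB_S = 133  # theoretical peak for 32-bit, 33MHz PCI

devices = {
    "gigabit NIC #1": 100,          # ~100MB/sec sustained
    "gigabit NIC #2": 100,
    "4-drive IDE RAID stripe": 100,
    "sound card": 1,                # small, but it still shares the bus
}

demand = sum(devices.values())
print(f"Total demand: {demand}MB/sec against {PCI_PEAK_MB_S}MB/sec available")
if demand > PCI_PEAK_MB_S:
    print(f"Bus oversubscribed by roughly {demand / PCI_PEAK_MB_S:.1f}x")
```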
It's not just about speed
Speed will always be an ongoing issue, but PCI has other hurdles as well. Because of its parallel nature, PCI requires a large number of traces and routes on the motherboard, which in turn forces the chipset to carry an obscene number of pins. The catch is that PCI needs more electrical lines for every bit of bus width: going from 32-bit to 64-bit PCI requires monumental layout changes to a motherboard simply because there are so many more routes to implement. More traces mean a greater chance of signaling problems and EMI (electromagnetic interference) noise, and all of it increases manufacturing cost throughout the entire build chain - from design to assembly, the more components, the higher the price.
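As a rough illustration of why width hurts, the sketch below estimates per-slot signal counts for 32-bit and 64-bit PCI. The control-line figure is a lumped approximation, not the exact pin-out from the PCI specification.

```python
# Rough per-slot signal estimate for parallel PCI, to show how bus width
# multiplies routing work. The control-line count is a lumped approximation,
# not an exact pin-out from the PCI specification.

def pci_signal_estimate(width_bits: int) -> int:
    ad = width_bits          # multiplexed address/data lines (AD[width-1:0])
    cbe = width_bits // 8    # command/byte-enable lines
    control = 25             # rough lump: FRAME#, IRDY#, TRDY#, clock, reset,
                             # arbitration, parity, interrupt lines, etc.
    return ad + cbe + control

for width in (32, 64):
    print(f"{width}-bit PCI: roughly {pci_signal_estimate(width)} signal traces per slot")
```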
2 for me, none for you
There have always been hurdles to clear when designing computer hardware, but one of them has been with us ever since PCI 1.0.
If you have one PCI device in your system, that's fine. Two, also fine. But what if you have four or five devices? Physically, a standard ATX motherboard can only fit a maximum of six PCI slots, and in fact PCI only supports five. Motherboards that offer six slots use a PCI bridge, which shares the sixth slot with one of the other slots or with one of the board's onboard devices.
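To picture how that bridge workaround hangs together, here is a toy model: the bridge takes up one load on the primary bus and creates a subordinate bus behind it, so everything on the far side shares that single connection. The slot and device names are purely illustrative.

```python
# Toy model of a PCI-to-PCI bridge: the bridge occupies one load on the
# primary bus and creates a subordinate bus behind it. Slot and device
# names are illustrative, not a real board layout.

from dataclasses import dataclass, field

@dataclass
class PciBus:
    number: int
    loads: list = field(default_factory=list)  # slots or onboard devices

primary = PciBus(0, ["slot 1", "slot 2", "slot 3", "slot 4", "PCI-to-PCI bridge"])
secondary = PciBus(1, ["slot 5", "slot 6", "onboard LAN"])

# Traffic from the secondary bus has to cross the bridge, so everything behind
# it effectively shares the single load (and bandwidth) the bridge uses on bus 0.
print(f"Bus {primary.number}: {primary.loads}")
print(f"Bus {secondary.number} (behind the bridge): {secondary.loads}")
```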
While all this sounds fine on paper, designing PCI devices that behave properly when sharing resources with other PCI devices is a challenge in itself.
A computer has a set number of interrupt requests (IRQs) that all of its devices draw from. An IRQ lets a hardware device get the CPU's attention so it can carry out a specific function, and devices are serviced according to fixed IRQ priorities. By design, PCI cards are supposed to be able to share IRQs with one another, but once in a while a peculiar device comes along that refuses to share its IRQ with anything else. That behavior causes severe problems like system crashes and data corruption.
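Here is a simplified sketch of how that plays out. The IRQ numbers and device names are made up, but the logic shows how a card that refuses to share exhausts the pool and ends up in conflict.

```python
# Simplified sketch of IRQ assignment: most PCI cards can share an interrupt
# line, but a card that refuses to share will conflict once no free IRQ is
# left. Device names and IRQ numbers are illustrative only.

free_irqs = [5, 9, 10, 11]
devices = [
    ("network card", True),            # True = willing to share an IRQ
    ("sound card", True),
    ("SCSI card", True),
    ("RAID card", True),
    ("stubborn capture card", False),  # refuses to share
]

assignments = {}  # irq -> list of device names
for name, can_share in devices:
    unused = [irq for irq in free_irqs if irq not in assignments]
    if unused:
        assignments.setdefault(unused[0], []).append(name)
    elif can_share:
        # Piggyback on the least-loaded IRQ already in use.
        irq = min(assignments, key=lambda i: len(assignments[i]))
        assignments[irq].append(name)
    else:
        print(f"CONFLICT: no exclusive IRQ left for the {name}")

for irq, owners in assignments.items():
    print(f"IRQ {irq}: {', '.join(owners)}")
```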
We could go on all day about problems we've had with PCI cards, but that's well beyond the scope of this article. The direction we need to head in is not to widen the PCI bus or patch around it with drivers, but to design and launch a whole new solution altogether - a next-generation I/O architecture.