Texturing those polygons
After all the triangle calculations have been made, the graphics chip has to actually draw the triangles and paint on the textures. To accomplish this, the Dreamcast uses an Imagination Technologies/NEC-designed PowerVR Series2 chip with a raw fill-rate of 100 Mpixels/sec. The two PC graphics cards with the highest fill-rates are the GeForce 256 with DDR memory and the ATI Rage Fury Maxx, which runs two chips in parallel; both produce approximately 500 Mpixels/sec. Is this a clear win for the PC? Actually, no. The Dreamcast gets by with a little bit of engineering ingenuity and a little bit of "cheating."
Deferred Rendering Technology
Cover your eyes with your hand for a second. What did you see? Exactly: your hand. Unfortunately, PCs aren't very smart, and if you tried to create the same image in a 3D game, the graphics card would have to draw the whole room first and then draw your hand over it, covering everything up. The processing time spent drawing the room-you-never-saw is wasted. This is called overdraw. The deferred rendering technology used in the PowerVR lets the graphics chip draw ONLY what is visible; in that same situation, the Dreamcast would draw just the hand. That makes its "effective" fill-rate equal to 200 Mpixels/sec, because a traditional card rated at 200 Mpixels/sec spends only half that power drawing the stuff you can actually see. If you had ten layers of images you couldn't see, the Dreamcast would have 1 gigapixel/sec of effective fill-rate. A clear win for the Dreamcast, then? Not quite. For one, I've yet to see a game that involves drawing a room and then covering the scene with a hand. Based on the performance of the PowerVR's counterpart on the PC, the Dreamcast is only about as fast as a 250 Mpixel/sec graphics card, or half the speed of what a PC can currently do. Things still aren't that simple, though…
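The effective fill-rate arithmetic above can be sketched in a few lines. This is just an illustration of the reasoning; the overdraw factors are hypothetical examples, not measured figures from any game.

```python
# Sketch of the "effective fill-rate" idea: a deferred renderer only
# shades the pixels you can see, so its raw rate is effectively
# multiplied by the scene's overdraw factor.

def effective_fillrate(raw_mpixels_per_sec, overdraw_factor):
    """Effective throughput of a deferred renderer, in Mpixels/sec."""
    return raw_mpixels_per_sec * overdraw_factor

# Dreamcast's PowerVR2 has a raw fill-rate of 100 Mpixels/sec.
print(effective_fillrate(100, 2))   # scene with 2x overdraw -> 200
print(effective_fillrate(100, 10))  # ten hidden layers -> 1000 (1 Gpixel/sec)
```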
"Cheating" the performance curve
Although the PowerVR only performs like a 250 Mpixel/sec card in the PC environment, things are different on the Dreamcast. Most Dreamcast games are developed exclusively for the machine, so you tend to find games that take advantage of deferred rendering by building exceptionally complex scenes with more overdraw than you would see in a PC game.
Most PC gamers like to run their games at high resolutions such as 1024x768, where aliasing or "jaggies" is less obvious. Thanks to the NTSC standard used by your TV, the Dreamcast only needs to produce images at 640x480. Pushing roughly 40% of the pixels per layer, the Dreamcast obviously does not have to work as hard as a PC graphics accelerator. By now, you're probably asking why the Dreamcast doesn't have to worry about aliasing in its images the same way the PC does. Again, the answer is the TV.
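The resolution gap is easy to check with back-of-the-envelope arithmetic; these are just the two standard frame sizes mentioned above.

```python
# Per-frame pixel counts at the two resolutions discussed.
pc_pixels = 1024 * 768   # 786,432 pixels per frame
dc_pixels = 640 * 480    # 307,200 pixels per frame

# The Dreamcast draws well under half the pixels per layer (~0.39).
print(dc_pixels / pc_pixels)
```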
The TV automatically does anti-aliasing for you, though not in a way similar to 3dfx's T-Buffer anti-aliasing. You will find it difficult, if not impossible, to identify individual pixels on a TV, because its larger dot pitch produces more diffuse dots, and the interlaced display further smooths away aliasing in the image. Besides all this, you typically sit further away from a TV, too. Finally, whereas a next-gen graphics card can have you running Q3 at 100 fps, the TV is VSYNC-limited to 60 fps, so there's less need for higher fill-rates. Basically, the fuzziness, the persistence of the phosphors, and the general technical limitations of the TV end up helping consoles in a sort of twisted way. Just don't expect to be able to read very small text on a TV screen!
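The 60 fps cap compounds the resolution advantage. As a rough sketch (ignoring overdraw, and taking the 100 fps Q3 figure above purely as an example), here is how many pixels each platform must deliver to its display every second:

```python
# Display throughput demanded of each platform, ignoring overdraw.
def pixels_per_sec(width, height, fps):
    """Pixels that must be delivered to the screen each second."""
    return width * height * fps

tv = pixels_per_sec(640, 480, 60)     # Dreamcast, VSYNC-limited NTSC TV
pc = pixels_per_sec(1024, 768, 100)   # PC monitor running Q3 at 100 fps

print(tv, pc)       # 18,432,000 vs 78,643,200 pixels/sec
print(pc / tv)      # the PC has to push more than 4x the pixels
```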