Displaying the final image
The first step in multi-sampling is to treat the scene as theoretically doubled in both the horizontal and vertical dimensions (making the picture four times larger in total). A triangle edge mask is used to locate the pixels that fall along each triangle edge. Edge pixels take a unique color sample for every sub-pixel, writing four separate color and Z values to their respective buffers. Pixels not located on a triangle edge, on the other hand, share a single color value, writing four identical color values to the color buffer. Every depth (Z) value is calculated independently.
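The color/Z write pattern described above can be sketched in C. This is only an illustration of the idea, not any particular chip's implementation; the structure layout and function names are invented for the example, and a 2x2 sub-pixel grid (four samples) is assumed.

```c
#include <stdint.h>

#define SAMPLES 4  /* assumed 2x2 sub-pixel grid per pixel */

typedef struct {
    uint32_t color[SAMPLES];  /* one color slot per sub-pixel */
    float    z[SAMPLES];      /* depth is always stored per-sample */
} msaa_pixel;

/* Interior pixel: shade once, then replicate that one color to all
   sub-pixels. Depth values are still computed independently. */
void write_interior(msaa_pixel *p, uint32_t color, const float z[SAMPLES])
{
    for (int s = 0; s < SAMPLES; s++) {
        p->color[s] = color;  /* four identical color values */
        p->z[s]     = z[s];   /* four independent Z values */
    }
}

/* Edge pixel: every covered sub-pixel gets its own color sample. */
void write_edge(msaa_pixel *p, const uint32_t color[SAMPLES],
                const float z[SAMPLES])
{
    for (int s = 0; s < SAMPLES; s++) {
        p->color[s] = color[s];
        p->z[s]     = z[s];
    }
}
```

The key saving is visible in `write_interior`: only one color computation feeds all four sub-pixel slots.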
Once the scene is finalized, the image is down-sampled, often using a run-of-the-mill bilinear filter. The four color values are averaged into the single pixel that is displayed. Because each sub-pixel of an edge pixel was sampled uniquely, the filtered value is an average of the four samples that fell along the edge, producing a more accurate, anti-aliased value. With this completed for the entire frame, the buffer is flipped and the image is displayed.
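The down-sampling (resolve) step amounts to a per-channel average of the four sub-pixel colors. A minimal sketch, assuming RGBA8 colors packed into a 32-bit word (the packing and function name are assumptions for illustration):

```c
#include <stdint.h>

/* Average four RGBA8 sub-pixel colors into the one displayed pixel.
   Each 8-bit channel is averaged independently. */
uint32_t resolve_pixel(const uint32_t s[4])
{
    uint32_t out = 0;
    for (int shift = 0; shift < 32; shift += 8) {
        uint32_t sum = 0;
        for (int i = 0; i < 4; i++)
            sum += (s[i] >> shift) & 0xFFu;  /* extract one channel */
        out |= (sum / 4) << shift;           /* box-filter average */
    }
    return out;
}
```

For an interior pixel all four samples are identical, so the average is a no-op; only along edges does the average produce the blended, anti-aliased value.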
This, of course, is just a general overview of one multi-sampling implementation. The actual process is slightly more complex, but for our purposes here, this level of understanding will suffice.
The advantages of multi-sampling over super-sampling are twofold. Multi-sampling requires little additional fill-rate in the implementation we discussed, while in other implementations it can require no extra fill-rate at all. Additionally, since all interior pixels require only a single pixel sampling (i.e. a bilinear filtered pixel requires 4 texture samples, where a super-sampled bilinear filtered pixel requires 16), the result is only a slight increase in texture reads.
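The 4-versus-16 texture read comparison can be made explicit. The helper names below are invented for the example; the arithmetic simply reflects that bilinear filtering takes 4 texture taps per shaded color, and 4x super-sampling shades every pixel four times while multi-sampling shades interior pixels once.

```c
#define BILINEAR_TAPS 4  /* texture reads per bilinear-filtered sample */

/* Multi-sampling: interior pixels are shaded (and textured) once. */
int taps_multisample_interior(void) { return 1 * BILINEAR_TAPS; }

/* 4x super-sampling: every pixel is shaded four times. */
int taps_supersample_4x(void) { return 4 * BILINEAR_TAPS; }
```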
On the negative side, buffer storage requirements are extensive, as is bandwidth consumption. Just as with super-sampling, the color and Z buffers become four times larger than at the selected resolution, and bandwidth consumption grows in step, as larger buffers require greater bandwidth.
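To put a number on the storage cost: a rough sketch, assuming 32-bit color and 32-bit depth per sample (common but not universal formats; the function name is hypothetical):

```c
#include <stddef.h>

/* Combined color + Z buffer size in bytes for a multi-sampled frame,
   assuming 4 bytes of color and 4 bytes of depth per sample. */
size_t msaa_buffer_bytes(size_t width, size_t height, size_t samples)
{
    size_t total_samples = width * height * samples;
    return total_samples * 4 /* color */ + total_samples * 4 /* depth */;
}
```

At 1024x768 with 4x multi-sampling this comes to roughly 24 MB of color and depth storage, versus about 6 MB without anti-aliasing, which is where the four-fold buffer growth and the matching bandwidth increase come from.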
Fragment Level Algorithms
Fragment level algorithms do not work on a sub-pixel level, but rather on a fragment level. A sub-pixel is effectively an entire pixel in its own right, storing a full color and Z value, whereas a fragment stores information about only a segment of a complete pixel.
Buffer sizes do not increase with fragment level algorithms, as the number of pixels dealt with depends exclusively on the selected resolution. The only additional storage is a separate buffer for the fragment data. This requirement is notably smaller than with multi-sampling, as similar anti-aliasing levels can be achieved with relatively few fragments.
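The storage contrast with multi-sampling can be sketched as a data structure. This is a generic illustration of the fragment idea, not any specific hardware scheme; the coverage-mask size and field layout are assumptions.

```c
#include <stdint.h>

/* A fragment records which portion of a pixel a triangle covers,
   rather than storing a full color + Z per sub-pixel. One coverage
   bitmask (here over an assumed 4x4 grid of sample positions) plus
   a single color and a single Z is enough per fragment. */
typedef struct {
    uint16_t coverage;  /* bitmask of covered sample positions */
    uint32_t color;     /* one color for the whole fragment */
    float    z;         /* one depth for the whole fragment */
} fragment;

/* Count how many sample positions a fragment covers. */
int coverage_count(const fragment *f)
{
    int n = 0;
    for (uint16_t m = f->coverage; m; m &= (uint16_t)(m - 1))
        n++;  /* clear the lowest set bit each iteration */
    return n;
}
```

Because each fragment carries one color and one Z rather than four of each, a pixel covered by two or three fragments stores far less data than a 4x multi-sampled pixel, which is the storage advantage the text describes.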