Most gamers know of vertex and pixel shaders – they’re one of the bigger technologies used in PC and Xbox 360 games to make the game look better. But some of my friends occasionally ask why we have them, or what they do for us. Here’s a quick overview for those curious out there, but before we begin, a few terms:

Vertex: Most geometry you see rendered to your screen is made up of a set of vertices (points in space), which are joined into triangles that form the surface of a 3D model.

System memory: your computer has system RAM, which is where running programs live, along with the data they operate on. It is much faster to get data from system memory than from your hard drive.

Video memory: your video card also has RAM, which is where it stores geometry and textures, along with the “front” buffer (the current screen image) and the “back” buffer (the image you are creating to next show on the screen). Video memory is very fast for the video card to work with, but moving data between system memory and video memory is slow. This is why most video cards these days have 256 or 512 MB of memory – the more they can store, the less they have to fetch from system memory while rendering, since the data is already there from the previous frame of the game.

Vertex Shaders
A vertex is made up of one or more of the following: a position (x, y, z), texture coordinates (u, v), a normal (the direction the surface faces at that point, used for lighting), and a color (for tinting the polygon, with or without texture data).
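
As a rough sketch, here is how that per-vertex data might be declared in HLSL, the shader language used by DirectX on the PC and Xbox 360 (the structure layout here is just illustrative – real games pick and choose these fields):

    // One vertex as a vertex shader might receive it (illustrative).
    struct VertexInput
    {
        float3 position : POSITION;   // x, y, z in model space
        float3 normal   : NORMAL;     // direction the surface faces, for lighting
        float2 texCoord : TEXCOORD0;  // where this vertex lands in the texture
        float4 color    : COLOR0;     // tint, applied with or without a texture
    };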

To render a model to your screen, the vertex data is copied from system memory to your video card memory, then processed. A traditional rendering model (called a “fixed function” pipeline) will transform the vertices from model-relative coordinates to world-relative coordinates (put your model into the world), and from there transform again into camera-relative coordinates (put the world in the camera’s view), and once more to transform into screen coordinates (put everything into X/Y coordinates that can be put on screen).
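
Written out as shader code, those transforms are just a chain of matrix multiplications. Here is a minimal HLSL sketch of that fixed-function math (the matrix names are my own; the game would fill them in each frame):

    float4x4 World;       // model space -> world space
    float4x4 View;        // world space -> camera space
    float4x4 Projection;  // camera space -> screen space

    float4 TransformVertex(float3 modelPosition : POSITION) : POSITION
    {
        float4 worldPos  = mul(float4(modelPosition, 1.0), World);  // into the world
        float4 cameraPos = mul(worldPos, View);                     // into the camera's view
        return mul(cameraPos, Projection);                          // onto the screen
    }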

This is where vertex shaders come into play – they can modify the vertex data during this process of getting the vertex on screen, changing its size, shape, color, texture coordinates, and so on. Of course, this was always possible, since your code could modify the system-memory copy of the vertex data and upload it back to the video card. What makes vertex shaders special is that they work on the data already on the video card – copies from system memory to video memory are slow, so vertex shaders let you apply more effects to more geometry while still keeping a good framerate.
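
For example, here is a hedged sketch of a vertex shader that changes a model’s size and color entirely on the video card – the constant names are made up, and the game would set them before drawing:

    float4x4 WorldViewProjection;  // the combined transform from above
    float    Scale;                // overall size multiplier, set by the game

    struct ScaledVertex
    {
        float4 position : POSITION;
        float4 color    : COLOR0;
    };

    // Resize and tint the model without touching system memory.
    ScaledVertex ScaleAndTint(float3 position : POSITION, float4 color : COLOR0)
    {
        ScaledVertex result;
        result.position = mul(float4(position * Scale, 1.0), WorldViewProjection);
        result.color    = color * float4(1.0, 0.6, 0.6, 1.0);  // shift toward red
        return result;
    }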

Pixel Shaders
Pixel shaders are very similar to vertex shaders, except they modify the pixels about to be rendered to your screen. Some immediately obvious effects can be done here, such as sepia-tone tinting (convert the R,G,B color to H,S,V, then use the intensity/V as the brightness of a sepia color) to get an old-movie effect. You can also do more advanced effects, such as bump/environment mapping, parallax mapping, per-pixel lighting, depth-of-field blurring, and bloom/high-dynamic-range lighting.
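
Here is a sketch of that sepia effect as an HLSL pixel shader – for brevity it uses a standard luminance weighting for the brightness rather than a full R,G,B-to-H,S,V conversion, and the sampler and tint values are illustrative:

    sampler2D SceneTexture;  // the image being drawn, bound by the game

    float4 SepiaPixel(float2 texCoord : TEXCOORD0) : COLOR0
    {
        float4 color = tex2D(SceneTexture, texCoord);
        // How bright this pixel is (standing in for the V of H,S,V).
        float brightness = dot(color.rgb, float3(0.299, 0.587, 0.114));
        float3 sepia = float3(1.0, 0.89, 0.71);  // a warm "old photo" tone
        return float4(sepia * brightness, color.a);
    }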

Pixel shaders give you control not normally available in “fixed function” rendering – you don’t normally have access to the pixels of each object as it is about to be rendered to the screen. You can create some of these effects using render targets, however: render the scene to a texture, pull that texture back to system memory, modify it with the CPU, then re-render it to the screen on full-screen polygons. Of course, this suffers from the same slow memory copying between the video card and system memory, so it is much slower than modifying the pixels on the video card as they are about to be rendered.

Different Vertex/Pixel Shader Versions
One major problem with vertex and pixel shaders is compatibility – since the shaders are operations built into your video card’s hardware, they can’t be upgraded. As the standards for shaders are extended, and video cards become more powerful, new shader models are introduced. Thus far, we have shader models 1.0, 1.1, 1.2, 1.3, 1.4, 2.0, 3.0, and so on out on the market. Each of these adds new instructions, raises the limits on instruction count or temporary memory a shader can use, or brings other advancements.
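
In practice, a game declares which shader model each shader is compiled against. A sketch in DirectX effect-file syntax, assuming MainVS and MainPS are shader functions defined elsewhere (the technique and entry-point names here are hypothetical):

    technique OldMovie
    {
        pass P0
        {
            // Compile against shader model 2.0; a card that only supports
            // lower shader models cannot run this pass.
            VertexShader = compile vs_2_0 MainVS();
            PixelShader  = compile ps_2_0 MainPS();
        }
    }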

While some shader effects can be done in any shader model, there are some that absolutely require a certain level of support – similar to how your computer simply can’t output audio if it lacks a sound card. As I said before, all effects possible with shaders can also be done by the CPU, but far too slowly to be used in real-time games. This is how movie rendering studios such as ILM and Pixar were doing awesome effects long before the existence of vertex and pixel shaders. But since their frames can take several minutes to render, while a video game frame normally must be created within 1/30th of a second, they can afford to do things in slower ways than games can.

This lack of features in older shader models is why some new games require certain levels of shaders. True, they could render without using their special effects in some cases, but a) that would significantly degrade the visual appeal of the game, and b) it may actually give the game a totally different appearance. For example – the artists create an octopus model, which is animated using a vertex shader to make its tentacles swirl around. If your video card doesn’t support the level of vertex shaders needed for that animation, the model will sit completely static on your screen.
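
A rough idea of what such an animation might look like – a sine wave driven by a time value sways each vertex, and vertices further from the body swing further. All names and constants here are illustrative:

    float4x4 WorldViewProjection;  // combined transform, set by the game
    float    Time;                 // seconds elapsed, updated every frame

    // Sway each vertex side to side; points further from the body
    // (larger distance from the model origin) swing further.
    float4 SwirlVertex(float3 position : POSITION) : POSITION
    {
        float distanceFromBody = length(position);
        float sway = sin(Time * 3.0 + distanceFromBody * 4.0);
        position.x += sway * 0.2 * distanceFromBody;
        return mul(float4(position, 1.0), WorldViewProjection);
    }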

Of course, some of the level of “requirement” is due simply to lack of time. Some effects can be dumbed down to work in lesser shader models, or even removed from the game, but supporting either option requires a longer development cycle, which costs more money.