Quote:

What are vertex and pixel shaders?

Ok, here goes:

So this makes sense; I have to start at the beginning. A few years ago, all graphics cards implemented a fixed set of vertex and texture instructions, determined by the API(s) they chose to support. The main API has been Direct3D, and that's the one I know something about, so I'll restrict my comments to that.

Microsoft wrote Direct3D to give developers access to hardware functions. So basically, Microsoft decided that, among other things, 3D primitives should be triangular polygons and nothing else. They also decided on a few fixed texturing methods and blending options.

In any case, the cards that supported early versions of Direct3D (up to DirectX 7) all worked essentially the same way, giving graphics developers a limited number of things they could do. For instance, if you wanted to do environment mapping, there was only one or so ways it could be done: in your app you write something like ENVIRONMENTMAP, and D3D takes care of everything else (I'm simplifying this a lot). This worked well because it was easy for developers to use and easy for card manufacturers to write their drivers. The main problem was that because you only had a limited set of functions, all games sort of looked alike, and there was no way to implement your own texture/vertex effects. If you wanted to use a non-standard lighting model, you had to do it all in software, which was too slow.
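To make the contrast concrete, here's a toy sketch in Python. None of these names are real Direct3D; the real thing used enumerated render states. The point is just that fixed-function means picking from a built-in menu of operations, while programmable means supplying your own little program.

```python
# Toy illustration of fixed-function vs. programmable texture blending.
# All names here are invented for illustration, not real Direct3D.

# Fixed-function: the API exposes a fixed menu of blend operations.
FIXED_BLEND_OPS = {
    "MODULATE": lambda tex, diffuse: tex * diffuse,
    "ADD": lambda tex, diffuse: min(tex + diffuse, 1.0),
    "SELECT_TEXTURE": lambda tex, diffuse: tex,
}

def fixed_function_blend(op_name, tex, diffuse):
    # You can only choose from the menu; anything else is impossible.
    return FIXED_BLEND_OPS[op_name](tex, diffuse)

# Programmable: you hand the hardware an arbitrary little program.
def programmable_blend(shader, tex, diffuse):
    return shader(tex, diffuse)

# A custom effect that no fixed-function menu entry provides:
my_shader = lambda tex, diffuse: (tex * diffuse) ** 0.5

print(fixed_function_blend("MODULATE", 0.5, 0.8))
print(programmable_blend(my_shader, 0.5, 0.8))
```

In the fixed-function world, `my_shader` simply couldn't exist; you were stuck with the menu.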

This fixed-function approach was fittingly called the Fixed-Function Pipeline.

With DirectX 8, Microsoft made the biggest leap so far in realtime 3D: they developed the concept of the programmable pipeline, in which vertex and pixel operations could be determined programmatically. This of course required a graphics CPU to do the processing, now called a GPU, and Nvidia had already made such a thing, called the GeForce.

So with the stars aligned, I can now say a few things about what shaders actually are and how they work.

First, the term 'shader' is probably unfortunate, because not all shaders have anything to do with actually shading anything. Vertex shaders, for instance, only determine properties of, what else, vertices. A vertex shader, like a pixel shader, is a little program that is run, in hardware, on every vertex it is assigned to. Doing this in software is of course extremely slow, but it's fast when the hardware does it. So you can do all sorts of neat things to a model on a per-vertex level. The most obvious is moving it around in novel ways.

However, the most powerful application of a vertex shader is not manipulating the position of a vertex, indeed not changing the vertex at all, but passing information to a pixel shader. A pixel shader program is run for every single pixel the model covers on screen. Each pixel has only a few parent vertices, so you can calculate things on the vertices and pass them down to the pixels, where the data is linearly interpolated across the triangle. This means you can speed up calculations. It is this relationship between the vertex shader and the pixel shader which allows us to actually find a pixel's position in the 3D world, something that would be impossible without shaders. And once you know the pixel's position, you can treat it almost as a 3D object. In other words, pixels can be assigned a normal (a normal, as you may know, is the direction vector perpendicular to a surface), so a pixel can be interpreted as a tiny surface in 3D space, which can then be lit according to the world lighting model. Needless to say, this can improve the appearance of objects tremendously.
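That vertex-to-pixel handoff can be sketched in a few lines of Python (a toy model, not real GPU code, and all names are mine): a "vertex shader" computes a value per vertex, and the rasterizer linearly interpolates it across the triangle using barycentric weights before the "pixel shader" ever sees it.

```python
# Toy model of the vertex shader -> interpolator -> pixel shader handoff.
# Illustrative sketch only; a real GPU does this in fixed hardware.

def vertex_shader(vertex):
    # E.g. pass the world-space position straight through to the pixels.
    return vertex["world_pos"]

def interpolate(values, weights):
    # Barycentric interpolation: the weights sum to 1 inside the triangle,
    # so each pixel gets a linear blend of its parent vertices' outputs.
    return tuple(sum(w * v[i] for w, v in zip(weights, values))
                 for i in range(3))

def pixel_shader(world_pos):
    # The pixel now "knows" where it sits in the 3D world.
    return world_pos

triangle = [
    {"world_pos": (0.0, 0.0, 0.0)},
    {"world_pos": (1.0, 0.0, 0.0)},
    {"world_pos": (0.0, 1.0, 0.0)},
]

per_vertex = [vertex_shader(v) for v in triangle]

# A pixel at the centroid of the triangle gets the averaged position:
weights = (1 / 3, 1 / 3, 1 / 3)
print(pixel_shader(interpolate(per_vertex, weights)))
```

Only three vertex-shader runs are needed to feed every pixel in the triangle, which is why doing the expensive math per-vertex and interpolating is such a win.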

This sort of procedure is called per-pixel lighting / normal mapping, and it just scratches the surface of what you can do with shaders. Remember, the science and math here is nothing new; it's all basic stuff developed years ago. The big thing is that you can make it all happen in hardware.
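As an example of that basic math, here's the classic Lambertian diffuse formula (brightness = max(0, N · L)) applied per pixel, in a Python sketch. The normal would come from a normal map in practice; here it's just handed in.

```python
# Per-pixel Lambertian diffuse lighting: the core of normal mapping.
# Toy sketch; a real pixel shader fetches the normal from a texture.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def per_pixel_lighting(normal, light_dir, base_color):
    # Lambert's law: intensity is the cosine of the angle between
    # the surface normal and the direction to the light, clamped at 0.
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))
    return tuple(c * intensity for c in base_color)

# A pixel whose normal faces the light is fully lit:
print(per_pixel_lighting((0, 0, 1), (0, 0, 1), (1.0, 0.5, 0.25)))
# A pixel whose normal faces away is unlit:
print(per_pixel_lighting((0, 0, -1), (0, 0, 1), (1.0, 0.5, 0.25)))
```

Run this for every pixel with a normal perturbed by a normal map, and a flat triangle suddenly looks bumpy.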

The possibilities are practically endless. The developer can do pretty much any effect he can think of in video hardware now, because the hardware is just another processor.

Another important thing pixel shaders allow you to do is graphics post-processing in realtime. You can implement Photoshop-style filters that manipulate the rendered image on the fly.
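A post-process pass is conceptually just a pixel shader run over the finished frame. Here's a minimal Python sketch (names invented) that applies an invert filter to every pixel of a tiny framebuffer:

```python
# Toy post-processing pass: run a "pixel shader" over the finished image.
# Illustrative only; a real GPU does this by drawing a full-screen quad.

def invert(pixel):
    # Photoshop-style negative: flip each color channel.
    return tuple(1.0 - c for c in pixel)

def post_process(framebuffer, shader):
    return [[shader(px) for px in row] for row in framebuffer]

# A 2x2 framebuffer of RGB pixels in the 0..1 range:
framebuffer = [
    [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)],
    [(0.25, 0.5, 0.75), (1.0, 0.0, 0.0)],
]

print(post_process(framebuffer, invert))
```

Swap `invert` for a blur, a bloom, or a color grade and you have the standard realtime post-processing effects.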

Basically, shaders are the most important tool the graphics developer has now, because if you can think of a visual effect, it can probably be done in a shader. Name it, and there is a way: even realtime raytracing and global illumination, hair, clothing, water, etc. Everything you always wanted in 3D graphics is now possible; it's just a matter of incremental improvements in performance.

I hope this helps a bit. Please excuse any typos or conceptual errors.