Even though I'm not an expert, I think I should explain the concept of bumpmapping with 3DGS to you, so that you will also be able to *understand* the examples in this forum.
You essentially have a choice between two kinds of bumpmapping: DOT and Environment. A simple bumpmapping shader just modulates the brightness (or, if you prefer, a fixed specular colour) in response to the slope, i.e. the derivative, of the 'normal height'. Environment mapping is more limited with 3DGS, in that a simple shader won't reflect the real-time environment around an Entity. Instead, you really need to define an "environment cube" in advance, which will always seem to surround your entity, so that your shader can reflect this environment cube. There is a documented function in the manual which converts *your* image into an environment cube, so that your shader will be able to address it correctly. In that case, your bumpmap will modulate the CameraSpaceReflectionVector.
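Just to make the Environment case less abstract, here is a rough sketch in HLSL. This isn't anybody's working material: the cube-map name mtlSkin1 and the two interpolated inputs are my own assumptions, and I've left the bump perturbation out to keep it short (it would nudge the normal n exactly like in the Blue-channel example further down).

texture mtlSkin1;                                   // assumed: the pre-built environment cube, assigned in the material
samplerCUBE EnvCube = sampler_state { Texture = <mtlSkin1>; };

float4 EnvPS(float3 normal   : TEXCOORD0,           // interpolated surface normal
             float3 toCamera : TEXCOORD1            // direction from the pixel towards the camera
            ) : COLOR
{
    // reflect() builds the reflection vector (what the fixed pipeline calls the
    // CameraSpaceReflectionVector), and texCUBE() reads the environment cube in that direction.
    float3 n = normalize(normal);
    float3 r = reflect(-normalize(toCamera), n);
    return texCUBE(EnvCube, r);
}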
The general concept of bumpmapping is that your texture has a bump channel, in addition to RGB and sometimes A, so that small features don't need to be modelled. But with 3DGS, you haven't been provided a bump channel which your shader would simply load from the texture. So you must improvise here, and with 3DGS you have a lot of ways to improvise. One popular method is just to use your Blue colour channel as your Bump channel. But your shader can derive a bump channel in other ways if you like, to map this 'normal height'. Normally, the math in your shader then does the work of producing this effect, although C-Script also features a function which finds the derivatives of your texture's Blue colour channel in direction u and direction v, thus transforming the texture (image).
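Here is a rough sketch of that Blue-channel idea for the DOT case, again hedged: the skin name entSkin1, the tangent-space light vector, the texel size and the strength factor are all my own assumptions.

texture entSkin1;
sampler ColorMap = sampler_state { Texture = <entSkin1>; };

static const float2 texelSize = { 1.0/256.0, 1.0/256.0 };   // assumed 256x256 skin
static const float  bumpStrength = 2.0;                      // how strongly the slopes count

float4 DotBumpPS(float2 uv      : TEXCOORD0,
                 float3 lightTS : TEXCOORD1   // light direction in tangent space, from the vertex shader
                ) : COLOR
{
    float4 base = tex2D(ColorMap, uv);

    // Treat Blue as the 'normal height' and take its derivative in direction u and direction v.
    float hereB  = base.b;
    float rightB = tex2D(ColorMap, uv + float2(texelSize.x, 0)).b;
    float downB  = tex2D(ColorMap, uv + float2(0, texelSize.y)).b;
    float2 slope = float2(hereB - rightB, hereB - downB) * bumpStrength;

    // Turn the two slopes into a tangent-space normal and do the DOT with the light.
    float3 n = normalize(float3(slope, 1.0));
    float  light = saturate(dot(n, normalize(lightTS)));

    return float4(base.rgb * light, base.a);
}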
In 3DGS the coordinates within a texture image are labelled u and v. Your script can even change the corresponding Skills of some entities over TIME, causing the texture to move in the desired direction.
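For example, if I remember right the engine passes entity skills 41 to 44 into the effect as vecSkill41 (please check the manual, that name is my assumption here), so a pixel shader could simply shift its texture address by whatever the script has accumulated:

float4 vecSkill41;    // assumed: skill41 = u-offset, skill42 = v-offset, incremented by the script over TIME

texture entSkin1;
sampler ColorMap = sampler_state { Texture = <entSkin1>; };

float4 ScrollPS(float2 uv : TEXCOORD0) : COLOR
{
    // Same u/v address, just offset, so the texture appears to move.
    return tex2D(ColorMap, uv + vecSkill41.xy);
}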
Within the shader, X, Y and Z coordinates are followed by a constant 1.0 to make a 4-component vector, so that multiplying this vector with a 4x4 matrix can perform any linear transformation of those coords plus a constant offset. Thus, if you knew the equation
Y = MX + A
from Linear Algebra to convert 3-component vector X into vector Y, consider adding the constant vector A as the 4th column of M, so that A will get multiplied by that constant 1.0 and still get added into the 4-component Y. The immense advantage of this is that it reduces the math of transforming coordinates from a multiplication and an addition to just a (matrix) multiplication. For this reason, a whole series of transformations can be represented as a single matrix, and the available ones are named matWorld, matWorldView, matWorldViewProj and so on, to get an object's X, Y and Z into the gameworld coords, then into coords around the camera, and then into screen positions as projections of the space around the camera. The game engine updates these matrices for you. And because 4x4 matrix multiplication in floating-point numbers is a single built-in operation of your graphics card's VPU, you can produce screen coordinates with only very few DirectX assembler ops.
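In HLSL that whole chain looks like one line; only matWorldViewProj is a name the engine provides, the rest of this little sketch is mine. (Depending on whether you write mul(vector, matrix) or mul(matrix, vector), the constant part A ends up in the 4th row or the 4th column of M, but the idea is the same.)

float4x4 matWorldViewProj;    // provided and updated by the engine

float4 TransformVS(float3 objPos : POSITION) : POSITION
{
    // Append the constant 1.0 as the 4th component, then a single matrix
    // multiplication carries the model-space point all the way to the screen.
    return mul(float4(objPos, 1.0), matWorldViewProj);
}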
But because doing this is still complicated, there are software tools recommended here which will do the shader programming for you. Another one, which the other two didn't mention above, is the Sphere plug-in. After all, even a minor mistake in a procedural shader will produce an image which looks grossly wrong. And with a *pixel* shader, you're responsible for writing the final colour directly to the pixel on the screen. Thus, you must then produce *all* the effects yourself, because *your* pipeline circumvents even the post-processing normally built into the game engine. You might find that shadows aren't cast onto your shader, unless you did something about that...
The *vertex* shader just defines the address on your texture image from where the colour later gets read, and can thus modify this address. This logically happens before the pixel shader gets called, and might still allow for engine post-processing if you didn't define your own pixel shader. Somebody please correct me if I'm wrong. Thus a programmed vertex shader typically ends by writing to two output registers: oPos outputs TO which POSition on the screen, and oT0 outputs FROM where within loaded Texture number 0. If you were working with more than one loaded texture (source) image, there would conceivably be an oT1 instruction for the second one...
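Written in HLSL instead of assembler, the same idea would look roughly like this sketch (the names are my own; the POSITION and TEXCOORD0 outputs are what end up in oPos and oT0 after compilation):

float4x4 matWorldViewProj;

struct VSOut
{
    float4 pos : POSITION;     // -> oPos: TO which position on the screen
    float2 uv  : TEXCOORD0;    // -> oT0:  FROM where within loaded Texture number 0
};

VSOut PassThroughVS(float4 inPos : POSITION, float2 inUV : TEXCOORD0)
{
    VSOut o;
    o.pos = mul(inPos, matWorldViewProj);
    o.uv  = inUV;              // here you could also modify the address, as described above
    return o;
}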
In an HLSL effect file you refer to the numbered texture stages with square brackets, for example Texture[0].
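A tiny hedged example of what I mean, assuming the first skin arrives in the effect as entSkin1:

texture entSkin1;

technique plain
{
    pass p0
    {
        Texture[0] = <entSkin1>;    // texture stage 0, addressed with square brackets
    }
}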
Dirk
P.S. If you absolutely need to have an independent Bump channel: A 3DGS Model Entity is allowed to have up to 4 Skins, not just one. Your first Skin could contain only colour information, but Skin2 could be loaded as your second Material Texture. Then, the Blue channel of Skin2 could become your Bump channel, if you programmed accordingly. You could use your own 2D graphics software to paint your second Skin to match your first.
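A hedged sketch of that two-skin setup, assuming the engine hands the second skin to the effect as entSkin2 (check the manual for the exact name):

texture entSkin1;    // Skin 1: colour only
texture entSkin2;    // Skin 2: its Blue channel holds the independent bump information

sampler ColorMap = sampler_state { Texture = <entSkin1>; };
sampler BumpMap  = sampler_state { Texture = <entSkin2>; };

float4 TwoSkinPS(float2 uv : TEXCOORD0) : COLOR
{
    float4 base   = tex2D(ColorMap, uv);
    float  height = tex2D(BumpMap, uv).b;    // the Bump channel now lives in Skin2's Blue
    // From here on, derive the slopes and light it exactly as in the Blue-channel
    // example further up; as a stand-in, just modulate the colour by the height:
    return float4(base.rgb * (0.5 + 0.5 * height), base.a);
}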