I'm writing a shadow-mapping shader; it works like this:

1) Render the depth of the scene, as seen from the light's position, to a 32-bit floating-point red (R32F) texture. (Done, works.)
2) Render the scene from the camera's point of view and project the depth map onto it (like ordinary projective texturing). Then compare the depth stored in the depth texture with the distance from the pixel to the light: if the distance to the light is greater than the value stored in the depth map, the pixel is shadowed, since that point isn't visible from the light's point of view (see the sketch after this list).
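
Here's a minimal sketch of that comparison, written as plain C++ for clarity (in practice it runs in the pixel shader). The bias term is my assumption, added to avoid self-shadowing ("shadow acne"); the exact value is scene-dependent:

    // storedDepth: value sampled from the R32F depth map at the projected UV
    // distToLight: current pixel's distance to the light, in the same
    //              space as the values written in pass 1
    bool IsShadowed(float storedDepth, float distToLight)
    {
        const float bias = 0.001f; // assumed; tweak per scene
        return distToLight - bias > storedDepth;
    }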

To project the depth map, a matrix is used to calculate UV coordinates on the depth map for every vertex. This matrix is the product of the mesh's matWorld, the light source's matView and matProj, and a scale-and-bias matrix (I can calculate the last one).
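For reference, a sketch of how that product could be built with D3DX, assuming D3D9-style row-vector math (so the order is world * view * proj * scaleBias; the function name is made up). The scale-and-bias matrix maps clip space [-1,1] to texture space [0,1], with Y flipped because texture V runs downward; on D3D9 a half-texel offset can also be folded in, omitted here:

    #include <d3dx9math.h>

    D3DXMATRIX BuildShadowTexMatrix(const D3DXMATRIX& matWorld,
                                    const D3DXMATRIX& lightView,
                                    const D3DXMATRIX& lightProj)
    {
        // Scale/bias: x,y from [-1,1] to [0,1], V axis flipped.
        const D3DXMATRIX scaleBias(
            0.5f,  0.0f, 0.0f, 0.0f,
            0.0f, -0.5f, 0.0f, 0.0f,
            0.0f,  0.0f, 1.0f, 0.0f,
            0.5f,  0.5f, 0.0f, 1.0f);
        return matWorld * lightView * lightProj * scaleBias;
    }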

What I need is the matView and matProj of viewA (the view placed at the light) while rendering the scene from viewB (the normal camera).

As stated in my first post, I couldn't find a way to store those matrices at the time the light/depth view is rendered.
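
In case it helps to see it spelled out: the light's matView and matProj don't have to be "stored" by the API at render time. You can build them yourself when rendering the depth pass, keep them in ordinary variables, and reuse them in the camera pass. A hedged sketch (the class, method names, FOV, and near/far values are all assumptions; only the D3DX calls are real):

    #include <d3dx9math.h>

    // Hypothetical renderer class; the point is simply that the light's
    // matrices live in member variables visible to both passes.
    class ShadowRenderer
    {
        D3DXMATRIX m_lightView;
        D3DXMATRIX m_lightProj;

    public:
        void RenderDepthPass(const D3DXVECTOR3& lightPos,
                             const D3DXVECTOR3& lightTarget)
        {
            const D3DXVECTOR3 up(0.0f, 1.0f, 0.0f);
            D3DXMatrixLookAtLH(&m_lightView, &lightPos, &lightTarget, &up);
            D3DXMatrixPerspectiveFovLH(&m_lightProj,
                                       D3DX_PI / 2.0f, // assumed 90-degree FOV
                                       1.0f,           // square depth map
                                       1.0f, 100.0f);  // near/far: scene-dependent
            // ...set m_lightView/m_lightProj on the device or effect and
            // render the scene depth into the R32F texture...
        }

        void RenderCameraPass(const D3DXMATRIX& matWorld)
        {
            // m_lightView and m_lightProj are still available here, so the
            // texture matrix for the projection is just
            // matWorld * m_lightView * m_lightProj * scaleBias.
        }
    };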