Hello everyone,

I am storing the distance to the camera (the length of the view-space position) in a floating-point texture and want to restore the position for each pixel later. To do so, I take the point that projects onto the pixel at a distance of 1, normalize it (so I have the view direction), and scale it by the looked-up depth.
All in all, this works well. If I output the position as color, the grid is aligned to the camera axes and stays in place (so it's camera space, isn't it?)
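
In code, the reconstruction idea looks roughly like this (a minimal sketch with made-up names; tanHalfFovX/tanHalfFovY come from the projection setup, and I flip y because texture coordinates run top-down in Direct3D):
Code:
// turn texture coordinates + stored distance back into a view-space position
float3 reconstructViewPos(float2 uv, float dist, float tanHalfFovX, float tanHalfFovY)
{
    // ray through this pixel on the z = 1 plane
    float3 ray = float3((uv.x * 2 - 1) * tanHalfFovX,
                        (1 - uv.y * 2) * tanHalfFovY,
                        1.f);
    // the point at distance 'dist' along that ray
    return normalize(ray) * dist;
}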


That's how it looks without matViewInv applied.

Now I thought it would be easy: if I have the coordinates in camera space, multiplying them by matViewInv should be enough to get them into world space. But that didn't work out, and I have no idea what could be wrong.
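
What I expect to happen, in a nutshell (a sketch under the assumption that matViewInv is the inverse of the same view matrix the first pass rendered with):
Code:
// viewPos: the reconstructed camera-space position
float3 viewPos = normalize(viewDir) * depth;
// camera space -> world space
float4 worldPos = mul(float4(viewPos, 1.f), matViewInv);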

Here is the relevant code:
- store the depth
Code:
//VS: pass the view-space position down to the pixel shader
OUT.vPos = mul(IN.pos, matWorldView);

//PS: store the distance to the camera in the red channel
// (.xyz so that a w of 1 can't leak into the length)
OUT.depth = float4(length(IN.vPos.xyz), 0.f, 0.f, 1.f);
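
For completeness, the first pass fits together roughly like this (the struct layout, the semantics and matWorldViewProj are filled in by me here, not copied verbatim from my effect file):
Code:
float4x4 matWorldView;      // world * view
float4x4 matWorldViewProj;  // world * view * projection

struct VS_OUT
{
    float4 pos  : POSITION;
    float3 vPos : TEXCOORD0; // view-space position, interpolated per pixel
};

VS_OUT depth_vs(float4 inPos : POSITION)
{
    VS_OUT OUT;
    OUT.pos  = mul(inPos, matWorldViewProj);
    OUT.vPos = mul(inPos, matWorldView).xyz;
    return OUT;
}

float4 depth_ps(VS_OUT IN) : COLOR
{
    // distance to the camera, written to the red channel of the fp target
    return float4(length(IN.vPos), 0.f, 0.f, 1.f);
}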



- try to restore the position
Code:
// stored distance for this pixel
float depth = tex2D(depthMap, IN.tex).x;
// ray through the pixel: xy from the screen position, z = tan(camera.arc*0.5)
float3 viewDir = float3(IN.tex.xy * 2 - 1, 0.5773);
viewDir.y *= 0.75; // aspect correction: 768/1024
// point at the stored distance along the ray...
float4 pos = float4(normalize(viewDir) * depth, 1.f);
// ...transformed from camera space to world space
pos = mul(pos, matViewInv);
return pos;
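
By the way, this is how I check the result (a small debug sketch: frac() gives a repeating 1-unit grid, so a correct world-space position should stick to the geometry when the camera moves):
Code:
// visualize the reconstructed position as a repeating unit grid
return float4(frac(pos.xyz), 1.f);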



I'm pretty desperate right now; I have been trying to find a solution for a few days... I hope you can help me with this. Thank you =)
Scorpion