Hi,
I finally found a way to reconstruct the world-space coordinates of a scene from a pre-rendered depth map stored in view space. It was quite hard for me to find the proper information, so I decided to share it. It is probably not the best way, but it works!

I wanted to render the depth map in view space because of its linear distribution.

The view-space depth is computed in the object shader:
Code:
// vs
outDepth = mul ( inPos, matWorldView ).z;
// ps floating point bitmap
return float4 ( inDepth, 0, 0, 1.0f );
// ps packed for 8 bits bitmap channel
float viewDepth = ( inDepth - camera.clip_near ) / camera.view_frustum_depth; //*


*view_frustum_depth = clip_far - clip_near;
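The packing into a single 8-bit channel is just a linear remap of the view-space depth onto [0, 1]. A minimal CPU-side sketch in Python (the function names are mine):

```python
def pack_view_depth(view_z, clip_near, clip_far):
    """Linearly remap a view-space depth to [0, 1] across the frustum."""
    view_frustum_depth = clip_far - clip_near
    return (view_z - clip_near) / view_frustum_depth

def unpack_view_depth(packed, clip_near, clip_far):
    """Inverse remap: recover the view-space depth from the packed value."""
    return packed * (clip_far - clip_near) + clip_near
```

The unpack function is what the deferred pass would apply after sampling the 8-bit channel.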

The deferred pass needs to convert the screen coordinates into a point in clip space (at any depth), which a multiply by the inverse projection matrix then converts to view space:
Code:
// Build a clip-space point for this pixel at an arbitrary depth fAnyDepth
float4 Proj = float4 ( 2.0f * float2 ( inTex.x, 1.0f - inTex.y ) - 1.0f, fAnyDepth, 1.0f );
// No divide by w is needed: only the ray direction matters, and the
// homogeneous scale cancels when the point is rescaled by depth
float3 View = mul ( Proj, matProjInv ).xyz;
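The texture-coordinate-to-clip-space remap in the first line can be sketched in Python (assuming D3D conventions, where the texture v axis grows downward while clip-space y grows upward, hence the 1 - v flip; the function name is mine):

```python
def screen_uv_to_clip(u, v, any_depth):
    """Map texture coordinates in [0, 1] to a clip-space point.

    D3D convention: the texture v axis grows downward while clip-space
    y grows upward, so v is flipped before the [0,1] -> [-1,1] remap.
    """
    x = 2.0 * u - 1.0
    y = 2.0 * (1.0 - v) - 1.0
    return (x, y, any_depth, 1.0)
```

For example, the center of the screen (0.5, 0.5) maps to clip-space x = y = 0, and the top-left corner (0, 0) maps to (-1, 1).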



This arbitrarily chosen point lies on the camera's projection ray for the current screen pixel, so we can scale its coordinates by the ratio between the sampled depth-map value and the point's own depth, which gives the rendered position in view space:
Code:
// tex2Dlod expects a float4 coordinate (zw are unused for 2D lookup)
float fDepth = tex2Dlod ( DepthSampler, float4 ( inTex, 0.0f, 0.0f ) ).r;
View = float3 ( View.xy * fDepth / View.z, fDepth );
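The rescaling step is pure similar-triangles arithmetic, so it is easy to check in isolation. A Python sketch (the names are mine):

```python
def reconstruct_view_pos(ray_point, sampled_depth):
    """Scale a point on the per-pixel camera ray so that its z equals
    the depth stored in the depth map (similar triangles)."""
    x, y, z = ray_point
    scale = sampled_depth / z
    return (x * scale, y * scale, sampled_depth)
```

For instance, the ray point (2, 1, 4) with a sampled depth of 8 reconstructs to (4, 2, 8): the same ray, twice as far from the camera.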



Finally, we only need to convert the view-space coordinates to world space with the inverse view matrix:
Code:
float3 World = mul ( float4 ( View, 1.0f ), matViewInv ).xyz;
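The view-to-world step is an ordinary homogeneous transform. A Python sketch of the row-vector convention that HLSL's mul(v, M) uses here (the translation-only view_inv matrix is an invented example, not a real camera matrix):

```python
def mul_row_vec(v, m):
    """Row vector times 4x4 matrix, matching HLSL's mul(v, M) convention."""
    return tuple(sum(v[i] * m[i][j] for i in range(4)) for j in range(4))

# Invented example: an inverse view matrix that is a pure translation,
# moving view-space points by +5 on z into world space.
view_inv = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 5.0, 1.0],
]

world = mul_row_vec((1.0, 2.0, 3.0, 1.0), view_inv)
```

With this matrix, the view-space point (1, 2, 3) lands at world position (1, 2, 8).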



Hope it helps.
Salud!