Yes, but he only uses them for gradient-based mip map selection.
I think an interesting part of the source code is the following:
// Transform the view-space reflection to screen space.
// We do this because we want to ray march into the depth buffer in screen space (so the default hardware depth buffer can be used).
// Depth is linear in screen space per screen pixel.
float3 vspPosReflect = Input.ViewPos + vspReflect;
float3 sspPosReflect = mul(float4(vspPosReflect, 1.0), g_mProj).xyz / vspPosReflect.z;
float3 sspReflect = sspPosReflect - Input.ScreenPos;
Two thoughts come to mind: what does his projection matrix look like, and especially why does he divide by vspPosReflect.z (including the z-component of sspPosReflect)?
I had to adapt my code as follows to get somewhat passable results:
// transform from view to screen space
float4 tPos = mul(float4(viewpos+dir,1), matProj);
tPos.xy = tPos.xy/tPos.w;
float3 screenDir = tPos.xyz-projViewPos;