@Hummel:

No, not really. Basically, you take the view-space normal and use it to fetch from the texture (the normal maps into the circular sphere region at its center), which I generated in Cinema4D. I came up with this when ParadoxStudio showed their work cloning the MatCap technique from Pixologic (in ZBrush): they use it to render a complex material onto a sphere in e.g. 3DS Max, with lights, ambient and everything. The rendered hemisphere then "bakes" the material properties, including lighting, specular and so on, at every pixel, and each pixel of course corresponds to a certain view-space normal. In this approach, you use the view-space normal of your object to fetch exactly that pixel and voilà, you get the pre-rendered lighting from 3DS Max.
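
In code the fetch is really just a remap of the normal's x/y into texture space. Here is a minimal C sketch of the idea; the packed-RGB texture layout and the helper names are my own assumptions, not anything from the ParadoxStudio demo:

```c
typedef struct { float x, y, z; } Vec3;
typedef struct { float r, g, b; } Rgb;

/* Nearest-neighbour fetch from a tightly packed RGB float texture;
 * uv is expected in [0,1]^2. Stand-in for your engine's sampler. */
static Rgb sample_rgb(const Rgb *tex, int w, int h, float u, float v)
{
    int px = (int)(u * (float)(w - 1) + 0.5f);
    int py = (int)(v * (float)(h - 1) + 0.5f);
    return tex[py * w + px];
}

/* MatCap lookup: the view-space normal's x/y components span [-1,1];
 * remapping them to [0,1] makes the front-facing normal (0,0,1) land
 * exactly on the texture center, i.e. the center of the pre-rendered
 * sphere. */
Rgb matcap_shade(const Rgb *matcap, int w, int h, Vec3 n_view)
{
    float u = n_view.x * 0.5f + 0.5f;
    float v = n_view.y * 0.5f + 0.5f;
    return sample_rgb(matcap, w, h, u, v);
}
```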

The problem is that this is completely static and the illusion breaks if the camera moves around the object. If the material is uniformly colored you can still use it, though, e.g. for car-paint materials or piano lacquer. You can also apply a texture pattern on top of it, but then it really breaks down, because when you move around the object, the pattern just "stays" where it is.

Interestingly, yesterday I read a paper about the rendering techniques of the Toy Story 3 game. They said they use spherical harmonics for the ambient term and render that into a spherical map, which they then use as a lookup (see the texture in the upper-left corner of the screen). The benefit is that you calculate each term only once and then just do a lookup for each pixel, which also means it can be used directly in a deferred shading context.
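
The nice thing is that the bake step is cheap: you evaluate the SH polynomial once per texel of the sphere map, and at runtime the shader does the same normal-to-UV fetch as above. A sketch of how such a bake could look, using the standard irradiance polynomial from Ramamoorthi & Hanrahan; the coefficient layout and the single-channel texture are assumptions on my part, the Toy Story 3 paper may do it differently:

```c
#include <math.h>

/* Irradiance from 2nd-order SH coefficients, after Ramamoorthi &
 * Hanrahan 2001. Assumed coefficient layout:
 * L[0..8] = {L00, L1-1, L10, L11, L2-2, L2-1, L20, L21, L22}. */
static float sh_irradiance(const float L[9], float x, float y, float z)
{
    const float c1 = 0.429043f, c2 = 0.511664f, c3 = 0.743125f,
                c4 = 0.886227f, c5 = 0.247708f;
    return c1 * L[8] * (x * x - y * y)
         + c3 * L[6] * z * z
         + c4 * L[0]
         - c5 * L[6]
         + 2.0f * c1 * (L[4] * x * y + L[7] * x * z + L[5] * y * z)
         + 2.0f * c2 * (L[3] * x + L[1] * y + L[2] * z);
}

/* Fill a w*h single-channel sphere map: each texel's uv is mapped
 * back to the view-space normal it will later be fetched with. */
void bake_sh_sphere_map(const float L[9], float *tex, int w, int h)
{
    for (int py = 0; py < h; ++py) {
        for (int px = 0; px < w; ++px) {
            float x = 2.0f * (px + 0.5f) / w - 1.0f;
            float y = 2.0f * (py + 0.5f) / h - 1.0f;
            float r2 = x * x + y * y;
            if (r2 > 1.0f) {            /* outside the sphere */
                tex[py * w + px] = 0.0f;
                continue;
            }
            float z = sqrtf(1.0f - r2); /* front-facing hemisphere */
            tex[py * w + px] = sh_irradiance(L, x, y, z);
        }
    }
}
```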



I have already prepared a scene in C4D with a spherical texture map:



Since C4D can bake out the color, the lighting, both combined, and the reflection, I am going to try to combine them:



The goal would be to translate the view-space normal into the corresponding UV coordinate and fetch there.
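
Putting it together with the normal-to-UV remap from the first sketch, one plausible combine step is to modulate the baked color by the baked lighting and add the reflection on top. Whether that matches how C4D composes its passes internally is an assumption I would still have to verify:

```c
/* Combine the three baked C4D passes, reusing Vec3, Rgb and
 * matcap_shade() from the first sketch. Lighting is assumed to be
 * multiplicative and reflection additive (assumption, not verified
 * against C4D's own compositing). */
Rgb shade_combined(const Rgb *color_map, const Rgb *light_map,
                   const Rgb *refl_map, int w, int h, Vec3 n_view)
{
    Rgb c = matcap_shade(color_map, w, h, n_view);
    Rgb l = matcap_shade(light_map, w, h, n_view);
    Rgb r = matcap_shade(refl_map,  w, h, n_view);
    Rgb out = {
        c.r * l.r + r.r,
        c.g * l.g + r.g,
        c.b * l.b + r.b
    };
    return out;
}
```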