Originally Posted By: Slin
So what I want to do in my case is just basic downsampling, dividing the height and width of each successive target by two, going down as many stages as needed.
If I do this, each pixel of the new texture should be the average of four pixels of the previous texture. (..) So I guess I can just sample in the middle of those four pixels and the result will be the average, is that correct?
Yes, this is correct, and that is how bilinear interpolation works. When you do this repeatedly down to a 1x1 pixel image, you create a Gaussian pyramid, which encodes in each pixel the local average of a pixel neighborhood from a lower level of the pyramid (lower levels = bigger images with high-frequency details).
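The idea can be sketched in a few lines of Python (illustrative only; `downsample_half` is my own name, not a gamestudio function): averaging a 2x2 block of the source is exactly what a bilinear sample at the block's center returns, since all four texels get equal weight there.

```python
# One pyramid step: halve width/height by averaging each 2x2 block.
# A bilinear sample placed at the center of the block yields the same value.
def downsample_half(src):
    """src is a list of rows of floats with even dimensions."""
    h, w = len(src), len(src[0])
    return [[(src[2*y][2*x] + src[2*y][2*x+1] +
              src[2*y+1][2*x] + src[2*y+1][2*x+1]) / 4.0
             for x in range(w // 2)]
            for y in range(h // 2)]

img = [[0.0, 1.0],
       [2.0, 3.0]]
print(downsample_half(img))  # [[1.5]] -- the average of all four pixels
```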

Originally Posted By: Slin
Does this in gamestudio only mean offsetting the texture coordinate by half a pixel of the new texture (which should be one pixel of the previous texture) and sample there?
As I understand it, you don't need to do this, because the center of a pixel in the lower texture coincides with the center of the corresponding 2x2 block of source pixels:

[Image: alignment of pixel centers between the source texture and the half-size target]
When your lower image is the destination of your shader, it can happen, though, that the output UV space does not match the input UV space. In my SSAO solution, I feed hi-res textures into the SSAO stage whose output target is half the size, so I had to multiply the incoming texcoords by 2 to project them back into the input UV space. The same applies when you scale up again: when the incoming image is half the size of the outgoing image, I had to divide the coords by 2. I don't know if this is a bug or intended, but you have to do it. After that, the coords are aligned as shown in my picture above.
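A minimal sketch of that remapping, in Python rather than shader code (the function name and parameters are my own illustration, not gamestudio API): the scale factor is simply the ratio of input size to output size.

```python
# Project a texcoord from the output target's UV space into the input
# texture's UV space. When the target is half the input's size, the scale
# is 2.0 (the multiply-by-2 case described above); going back up it is 0.5.
def remap_uv(uv, src_size, dst_size):
    scale = src_size / dst_size
    return (uv[0] * scale, uv[1] * scale)

# Downsampling: 512px input rendered into a 256px target.
print(remap_uv((0.25, 0.25), src_size=512, dst_size=256))  # (0.5, 0.5)
# Upsampling: 256px input rendered into a 512px target.
print(remap_uv((0.5, 0.5), src_size=256, dst_size=512))    # (0.25, 0.25)
```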

Originally Posted By: Slin
Does this just mean doing everything the same as always, but instead of rendering into RGB888 targets, I render into floating-point render targets?
Yes, because when you average pixel values that were originally quantized into buckets between 0 and 255 and write the result back into an 8-bit target, you throw them back into a bucket-like image and lose precision on every pass.
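A tiny illustration of why (the function names are mine): writing the average back into an 8-bit bucket discards the fractional part, and the error compounds with every pyramid step, while a floating-point target keeps it.

```python
# Averaging four 8-bit bucket values and storing into an 8-bit target
# truncates the result back into a bucket on every pyramid step.
def avg4_8bit(a, b, c, d):
    return (a + b + c + d) // 4      # re-quantized: fraction is lost

def avg4_float(a, b, c, d):
    return (a + b + c + d) / 4.0     # full precision preserved

print(avg4_8bit(1, 1, 1, 2))   # 1
print(avg4_float(1, 1, 1, 2))  # 1.25
```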

I don't know much about HDR processing, but since you always lose some luminance during a straightforward Gaussian pyramid on RGB images (even with high-precision floating-point targets), you should normalize the image accordingly when going back up the pyramid - and the bucket procedure you did before with 8888 targets will then lead to some artifacts, I guess (maybe these are the bands you see... I don't know).
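One way to read "normalize accordingly" is to rescale so the total luminance matches the original image again; this is only my interpretation of the hint above, sketched in Python with made-up names:

```python
# Hedged sketch: if the pyramid round trip has attenuated the image,
# rescale it so its total luminance matches the original's total.
def total(img):
    return sum(sum(row) for row in img)

def normalize_to(img, reference_total):
    t = total(img)
    s = reference_total / t if t else 1.0
    return [[p * s for p in row] for row in img]

orig = [[4.0, 4.0], [4.0, 4.0]]    # total luminance 16
dimmed = [[3.0, 3.0], [3.0, 3.0]]  # total dropped to 12 after the round trip
print(total(normalize_to(dimmed, total(orig))))  # ~16.0, total restored
```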

If you want to stay with 8888 targets and keep quality, I suggest you switch to a color space in which the luminance sits on a single channel, while the other channels encode the rest. HSV is a popular color space that is also easy to implement, but it has its flaws. In my experience with image processing, I got the best results with the CIE Lab color space, in which L encodes the luminance and A and B the color. For an implementation, I would copy the corresponding source code from OpenCV, or you can use my version (see my B.Sc. thesis, page 53):

[Image: CIE Lab conversion code from the author's B.Sc. thesis]
Since a whole pyramid is costly, I would try to avoid FP targets if at all possible, but that is just a gut feeling... wink
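For reference, here is a sketch of the standard sRGB to CIE Lab conversion (D65 white point) in Python. This is the textbook math, not the thesis code referenced above; OpenCV's `cvtColor` with `COLOR_RGB2Lab` implements the same conversion.

```python
# Standard sRGB -> CIE Lab (D65 reference white). Inputs in [0, 1],
# output L in [0, 100], a and b roughly in [-128, 127].
def srgb_to_lab(r, g, b):
    def linearize(c):  # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = linearize(r), linearize(g), linearize(b)
    # linear RGB -> XYZ (sRGB primaries, D65)
    x = 0.4124564 * r + 0.3575761 * g + 0.1804375 * b
    y = 0.2126729 * r + 0.7151522 * g + 0.0721750 * b
    z = 0.0193339 * r + 0.1191920 * g + 0.9503041 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 reference white
    def f(t):  # piecewise cube root used by the Lab definition
        return t ** (1.0 / 3.0) if t > 0.008856 else 7.787 * t + 16.0 / 116.0
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)

print(srgb_to_lab(1.0, 1.0, 1.0))  # roughly (100.0, 0.0, 0.0) -- pure white
```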

Last edited by HeelX; 05/27/13 17:51.