For some basic bloom, and later also tone mapping with auto exposure, I need to downsample a render target.
How would I do this correctly? Sure, I could render into a new render target and downsample all the way down in a single shader pass, by having every output pixel fetch and average all the pixels of the texture to downsample. That might even be a good idea on modern hardware, especially since it saves passes through the slow DirectX 9 API, but if I want to go all the way down to 1x1, it is clearly too much work per pixel and seems extremely inefficient in terms of parallelism.
So what I want to do in my case is just basic downsampling: halve the width and height for each new target and go down as many stages as needed.
If I do this, each pixel of the new texture should be the average of four pixels of the previous texture. Again, I could just do four texture lookups on those pixels and average them, and on AMD hardware there is even Fetch4, which might help optimize this (although I am not sure whether it only works with depth maps). However, every "normal" piece of hardware supports bilinear filtering on texture lookups, so I guess I can just sample in the middle of those four pixels and the result will be their average. Is that correct?
In Gamestudio, does this just mean offsetting the texture coordinate by half a pixel of the new texture (which should be one pixel of the previous texture) and sampling there?
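
To make sure I understand my own idea, here is roughly what I have in mind for one stage of the chain (just a sketch; texSrc and vecSrcSize are placeholder names I would set up myself, not predefined engine variables):

```hlsl
// One 2x downsampling stage: the render target is half the size of the
// source texture of the previous stage.
texture texSrc;        // output of the previous (larger) stage
float2  vecSrcSize;    // source size in pixels, e.g. (512, 512)

sampler smpSrc = sampler_state
{
    Texture   = <texSrc>;
    MinFilter = Linear;   // bilinear filtering does the 4-pixel average
    MagFilter = Linear;
    AddressU  = Clamp;
    AddressV  = Clamp;
};

float4 psDownsample(float2 tex : TEXCOORD0) : COLOR0
{
    // DirectX 9 texel alignment: shifting the raw quad texcoord by half
    // a destination pixel (= one source pixel) should center the tap
    // exactly between the four source pixels I want to average.
    float2 uv = tex + 1.0 / vecSrcSize;
    return tex2D(smpSrc, uv);   // one bilinear tap = their average
}
```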

If I want to upsample, do I just sample using the same texcoords as for the pixel I am rendering and let the interpolation do the rest? And is there any difference between doing this in several passes and doing it in just one, i.e. using the small texture as if it already had the full resolution?
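
For the upsampling case, I mean something like this, compositing a quarter-size bloom texture over the full-resolution scene in a single pass (again just a sketch; texScene and texBloom are placeholder names):

```hlsl
// Compositing a small bloom texture over the full-size scene in one pass.
texture texScene;   // full resolution scene
texture texBloom;   // e.g. quarter resolution blurred bloom texture

sampler smpScene = sampler_state { Texture = <texScene>; MinFilter = Point;  MagFilter = Point;  };
sampler smpBloom = sampler_state { Texture = <texBloom>; MinFilter = Linear; MagFilter = Linear; };

float4 psComposite(float2 tex : TEXCOORD0) : COLOR0
{
    float4 scene = tex2D(smpScene, tex);
    // sample the low-res texture with the very same texcoords and let
    // bilinear filtering stretch it up - is this really all there is to it?
    float4 bloom = tex2D(smpBloom, tex);
    return scene + bloom;
}
```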


I want to do high dynamic range rendering. Does this just mean doing everything the same as always, except rendering into floating point render targets instead of RGB888 ones?
And then, if I want, I can apply some kind of tone mapping to such a floating point texture and everything will look the way it is supposed to?
I guess the tone mapping is done in linear space, before converting back to gamma space?
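
If that is right, the tone mapping pass would look roughly like this (a sketch using simple Reinhard mapping as a stand-in for whatever operator I end up with; texHDR is a placeholder name):

```hlsl
// Tone mapping sketch: read the floating point scene texture (e.g. an
// A16B16G16R16F target), map it in linear space, then convert to gamma
// space for the display.
texture texHDR;

sampler smpHDR = sampler_state
{
    Texture   = <texHDR>;
    MinFilter = Point;    // 1:1 blit, no filtering needed
    MagFilter = Point;
};

float4 psToneMap(float2 tex : TEXCOORD0) : COLOR0
{
    float3 hdr = tex2D(smpHDR, tex).rgb;      // linear HDR color
    float3 ldr = hdr / (1.0 + hdr);           // Reinhard: x / (1 + x)
    return float4(pow(ldr, 1.0 / 2.2), 1.0);  // linear -> gamma space
}
```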


My results with gamma correction might look more realistic than my results without it, but they also lost a lot of contrast. Is this normal, or am I doing something wrong? Any ideas on how to get some of that contrast back?
I am currently using an ambient value of 0.05 and it is still brighter than a value of 0.2 without gamma correction. With gamma correction I am also getting a lot of banding artifacts around light sources because of the low precision of the render target; are there any solutions for this other than a higher precision render target?
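
For reference, this is roughly where my gamma conversions sit right now (simplified; texDiffuse and the constant lighting term stand in for my real material setup):

```hlsl
// Simplified version of my current gamma handling.
texture texDiffuse;
sampler smpDiffuse = sampler_state
{
    Texture   = <texDiffuse>;
    MinFilter = Linear;
    MagFilter = Linear;
};

float4 psMaterial(float2 tex : TEXCOORD0) : COLOR0
{
    // stored textures are in gamma space, so decode them to linear first
    float3 albedo = pow(tex2D(smpDiffuse, tex).rgb, 2.2);

    float3 lighting = 0.05;  // the ambient value mentioned above
    // ... plus the actual light contributions, all in linear space ...

    float3 lit = albedo * lighting;

    // encode back to gamma space for display; the banding shows up
    // wherever linear intermediate values have to pass through an
    // 8-bit target before this final pow
    return float4(pow(lit, 1.0 / 2.2), 1.0);
}
```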

Thanks :p