Quote:

http://manual.3dgamestudio.net/material-flags.htm

ENABLE_VIEW is just a simple flag that lets your event run before view rendering. It should not affect your entity materials or make them buggy. But as far as I can see, you've not set a return value in your event function - this can randomly cause some strange behavior in your application, and can be the reason for your problem. Or it's something that you're doing in the event - DX functions often have side effects.
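
For reference, the quoted suggestion amounts to something like the following Lite-C sketch (the material and function names are my own, purely illustrative - only the ENABLE_VIEW flag and the explicit return value matter here):

```c
// Minimal Lite-C sketch: a material whose event runs before view
// rendering, with an explicit return value as suggested above.
MATERIAL* mtl_tint =
{
   event = mtl_tint_event;
   flags = ENABLE_VIEW;    // run the event before view rendering
}

function mtl_tint_event()
{
   // ... per-view setup goes here ...
   return 0;               // explicit return value, as suggested
}
```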


I uploaded a VERY basic test which showcases the problem. I'm not doing anything fancy in that test (no DX stuff, no fancy shaders; actually not much more than loading a level and applying a material to a view). Please have a look at it: http://www.project-havoc.com/upload/enable_view_problem.rar

This is how it looks (note the one marine.mdl not being tinted red; this is the bug that occurs when ENABLE_VIEW is set):

This is how it should look:



Quote:
In your second code you seem to use the wrong effect pointer. The individual effect pointer of a material is not render_d3dxeffect, but mtl->d3deffect.


This isn't supported in A7, which I'm still using since I can't afford the update from A7 Pro to A8 Pro; my teammates from CSiS would also have to get the upgrade, and sadly that is way above our budget (which is almost 0€).
Are there other ways of accessing the current effect (directly via DirectX, maybe)? I can't think of another easy way when skimming through atypes.h/avars.h... If I have to, I would also be willing to create a DLL for this, as long as it's not TOO much of a hack with thousands of lines of code.
The only non-DLL solution I see at the moment is implementing my own render loop, which I absolutely don't want to do. I guess I'd then have to implement all the culling myself as well (among other things)?
Any other hints on how to access the current effect?

Quote:
As to sharing render targets, yes, when the targets have the same size and format, there is no reason why this should not work.


Good to know! I'll give it a go once/if I get the above two (main) problems solved.



While tinkering some more, I ran into another problem which might not be solvable, but I want to make sure:
To further boost performance of the deferred lighting part, I want to use mixed-resolution rendering / inferred lighting. Basically, lighting is computed at a lower resolution and then scaled back to full size using bilateral upsampling.
Now, this was working nicely in my old light pre-pass renderer, but I wasn't making use of the zbuffer then.
The problem is that the zbuffer won't be rescaled if I render to a view at 1/2 screen_size while the zbuffer was generated at full screen_size.
Question: Can one rescale the zbuffer to match the new screen_size of a view.stage while keeping the zbuffer's contents? I don't think this is possible, but I want to make sure, as this would be another nice (optional) performance boost.


Thanks for your answers so far! I sure hope to get this sorted out, as the current architecture seems really promising. It's way faster than the old light pre-pass renderer while looking a lot better, thanks to the different lighting models now possible (Cook-Torrance, Oren-Nayar, Ward, etc.) - all without any shader branching, due to S.T.A.L.K.E.R.-style LUTs.


Shade-C EVO Lite-C Shader Framework