I think it would be good if a view entity were simply rendered after the scene and after the postprocessed fullscreen quad.
That would mean the view entity attached to the view that renders the scene would be fully postprocessed, while the one attached to the view of the last postprocessing stage wouldn't be postprocessed at all. That seems the most logical way to me; I don't think a flag would really work here.
I have no idea how far it's even possible to have view entities and postprocessing at the same time.
Edit:
I missed this part of one of jcl's posts:
They [view entities] are rendered after all the views.
But why are view entities attached to views?
When are panels rendered?
Wouldn't it then make more sense to sort views, view entities, panels, ... all together by layers, and to allow rendering view entities into bmaps the way panels can be rendered into a bmap?
I could then just set the camera's layer to 0, my view entity's layer to 1, and my camera.stage's layer to 2, give the camera and the view entity the same render target, and postprocess the result in camera.stage (roughly like the sketch below).
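Just to illustrate what I mean, here's a rough lite-C sketch of that setup. The render target on the view entity and the layer sorting across views / view entities are exactly the hypothetical parts (they don't work like that right now), and the effect file, the model name and the stage flags are only placeholders from memory:

// hypothetical layer / render-target setup, not working code
BMAP* scene_target;                                     // shared target for scene + view entity
MATERIAL* mtl_post = { effect = "postprocess.fx"; }     // whatever postprocessing shader
VIEW* post_view = { layer = 2; material = mtl_post; }   // postprocessing stage, drawn last

function setup_rendering()
{
	scene_target = bmap_createblack(screen_size.x, screen_size.y, 24);

	camera.layer = 0;                  // scene is rendered first
	camera.bmap = scene_target;        // camera renders into the shared target

	ENTITY* weapon = ent_createlayer("gun.mdl", SHOW, 1); // view entity on layer 1
	// weapon.bmap = scene_target;     // HYPOTHETICAL: render the view entity into the same target

	camera.stage = post_view;          // the stage gets the camera's output...
	set(post_view, SHOW | PROCESS_TARGET); // ...and postprocesses it as a fullscreen quad
	// post_view.bmap stays NULL, so the final result goes to the frame buffer
}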
Probably just some stupid, senseless thinking...