http://www.conitec.net/beta/target_map.htm

target_map
Render target for panels and texts. If this parameter is set to a bitmap, the object renders into the bitmap instead of on the screen.
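
Just to make sure I understand it correctly, here is a minimal lite-C sketch of how I picture using it - assuming target_map can simply be assigned at runtime, and that bmap_for_entity() is a valid way to hand a model skin over as the render target ("monitor.mdl" and the panel are just placeholders):

#include <acknex.h>
#include <default.c>

PANEL* hud_pan =
{
   bmap = "hud.tga";   // placeholder panel background
   flags = SHOW;
}

function main()
{
   level_load(NULL);
   ENTITY* screen_ent = ent_create("monitor.mdl", vector(100, 0, 0), NULL);

   // instead of drawing on the screen, render the panel into the
   // model's first skin (assumption: this is how the two connect)
   hud_pan.target_map = bmap_for_entity(screen_ent, 0);
}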


Honestly, this is a really cool feature! But how useful is it in practice? There are still some questions about it:

  • How does an RTT panel react to the mouse? If I use a panel on a model, will the mouse be evaluated correctly?
  • This would make resolution-independent display programming easier. Currently I run a callback for each panel which grabs specific layout information (the position at a reference resolution) and calculates the scaled position and scale factors for the panel (see the first sketch after this list). If I pass the panel to a bmap of a view entity - given that the mouse gets evaluated - the engine would do most of the scaling by itself. But most interfaces are built from several panels and texts. How could I nest several panels with target_map into one another, so that I finally pass one composition of them to a single view entity skin?
  • Will this apply to TTF Texts as well?
  • If the mouse is evaluated on skins and I want to implement a drag-and-drop operation - how would I do that once the panel is rendered into an entity skin? At the moment I work with the difference vector from panel to mouse and the framewise movement vector of the mouse (see the second sketch after this list). How would that carry over?
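
For reference, this is roughly the kind of per-panel scaling callback I mean in the second bullet (simplified; the reference resolution and names are just placeholders):

#include <acknex.h>

#define REF_WIDTH   1024
#define REF_HEIGHT  768

function scale_panel(PANEL* pan, var ref_x, var ref_y)
{
   var fx, fy;
   while (1)
   {
      fx = screen_size.x / REF_WIDTH;    // horizontal scale factor
      fy = screen_size.y / REF_HEIGHT;   // vertical scale factor

      pan.pos_x   = ref_x * fx;          // scaled position...
      pan.pos_y   = ref_y * fy;
      pan.scale_x = fx;                  // ...and scaled panel graphics
      pan.scale_y = fy;

      wait(1);
   }
}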

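And this is the kind of screen-space drag code from the last bullet that I would have to rethink: remember the offset between panel and cursor when the drag starts, then follow the mouse every frame while the button is held (again simplified):

#include <acknex.h>

function drag_panel(PANEL* pan)
{
   // difference vector from the panel origin to the mouse at drag start
   var off_x = mouse_cursor.x - pan.pos_x;
   var off_y = mouse_cursor.y - pan.pos_y;

   while (mouse_left)   // follow the framewise mouse movement
   {
      pan.pos_x = mouse_cursor.x - off_x;
      pan.pos_y = mouse_cursor.y - off_y;
      wait(1);
   }
}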

I am really excited about this feature, so I apologize for my nagging questions.

[EDIT] Again about the nesting feature: maybe you could add a new engine sort-of-collection datatype which holds several panels and texts (added via engine commands... it's basically a stack or a linked list) and has one target_map pointer? You could e.g. switch the collection's visibility off and all contained panels would switch off, too. Or things like that - a rough sketch of what I mean is below.
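
To illustrate, a purely hypothetical user-side sketch (all names are invented, nothing of this exists in the engine - it is just a linked list of panels sharing one target_map, with a collective show/hide):

#include <acknex.h>
#include <default.c>

typedef struct PANEL_NODE
{
   PANEL* pan;
   struct PANEL_NODE* next;
} PANEL_NODE;

typedef struct
{
   PANEL_NODE* first;
   BMAP* target;                     // shared target_map for all members
} PANEL_COLLECTION;

void collection_add(PANEL_COLLECTION* col, PANEL* pan)
{
   PANEL_NODE* node = sys_malloc(sizeof(PANEL_NODE));
   node.pan  = pan;
   node.next = col.first;
   col.first = node;
   pan.target_map = col.target;      // every member renders into the same bitmap
}

void collection_show(PANEL_COLLECTION* col, var visible)
{
   PANEL_NODE* node = col.first;
   while (node)
   {
      if (visible) { set(node.pan, SHOW); } else { reset(node.pan, SHOW); }
      node = node.next;
   }
}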

Cheers, Christian
