Funnily enough, the last thing I implemented in C++ was a reference-counted object class, just to see how that stuff is put together - without any synchronization though; thread-safe code is something I still have to explore.
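For context, the core idea I played with is roughly this (heavily simplified; the counter is a plain integer, so it's strictly single-threaded):

```cpp
#include <cstddef>

// Minimal intrusive reference-counted base class. NOT thread-safe:
// the counter is a plain size_t, so Retain()/Release() must never
// be called concurrently from multiple threads.
class RefCounted
{
public:
    RefCounted() : _refCount(1) {} // creator holds the first reference

    void Retain() { ++_refCount; }

    void Release()
    {
        // Destroy the object once the last reference is dropped
        if(--_refCount == 0)
            delete this;
    }

protected:
    virtual ~RefCounted() {} // protected: force destruction via Release()

private:
    std::size_t _refCount;
};
```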

I have one question though (very off-topic, but hey, these forums are more or less anarchistic by now anyway), Sid: You have this nice Object class in Rayne, which gives you all these nifty features like reflection, reference counting, etc., but the fact that they're reference counted forces objects to be allocated on the heap - so how do you make that work nicely with cache-friendly processing of objects? I suppose you're using a custom allocator (or multiple) to ensure objects are allocated close to each other, but I still can't imagine that being as optimal as tightly packed object buffers.
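By "custom allocator" I mean something like a per-type pool that keeps same-typed objects in one contiguous buffer - purely my own sketch of the idea, not a guess at Rayne's actual code:

```cpp
#include <cstddef>
#include <new>
#include <utility>
#include <vector>

// Hypothetical fixed-size pool: all objects of one type live in one
// contiguous buffer, so iterating over live objects stays reasonably
// cache-friendly even though each one is individually "heap" allocated.
template<typename T, std::size_t Capacity>
class ObjectPool
{
public:
    ObjectPool()
    {
        _freeList.reserve(Capacity);
        for(std::size_t i = 0; i < Capacity; ++i)
            _freeList.push_back(i); // initially every slot is free
    }

    template<typename... Args>
    T *Allocate(Args &&...args)
    {
        if(_freeList.empty())
            return nullptr; // pool exhausted

        std::size_t slot = _freeList.back();
        _freeList.pop_back();

        // Placement-new into the pre-allocated slot
        return new (&_storage[slot * sizeof(T)]) T(std::forward<Args>(args)...);
    }

    void Free(T *object)
    {
        object->~T();
        std::size_t slot = (reinterpret_cast<char *>(object) - _storage) / sizeof(T);
        _freeList.push_back(slot);
    }

private:
    alignas(T) char _storage[Capacity * sizeof(T)];
    std::vector<std::size_t> _freeList;
};
```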
Or, to rephrase my actual question so you could answer it with a yes or a no: did the comfort of having this base object class outweigh the possible performance gains of a less abstracted, more data-oriented object model for you? Or do you perform some kind of programming magic in the background, so that the nice API of your object model just hides an optimized implementation?
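"Magic" as in, say, a thin object facade over packed arrays - something along these lines (again hypothetical, obviously not your actual code):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical: the comfortable "object" is just a handle; the actual
// data lives tightly packed in an array owned by a system, which can
// iterate over it linearly.
struct Transform { float x, y, z; };

class TransformSystem
{
public:
    std::uint32_t Create()
    {
        _transforms.push_back({});
        return static_cast<std::uint32_t>(_transforms.size() - 1);
    }

    Transform &Get(std::uint32_t handle) { return _transforms[handle]; }

    void Update(float dt)
    {
        // Linear pass over contiguous memory - the cache-friendly part
        for(Transform &t : _transforms)
            t.y += dt;
    }

private:
    std::vector<Transform> _transforms;
};

// Friendly facade: looks like a normal object, but only stores a handle.
class SceneNode
{
public:
    explicit SceneNode(TransformSystem &system) :
        _system(system),
        _handle(system.Create())
    {}

    Transform &GetTransform() { return _system.Get(_handle); }

private:
    TransformSystem &_system;
    std::uint32_t _handle;
};
```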

I hope my question is somewhat clear. I'm doing a lot of research on different object models at the moment, and I'm basically just trying to squeeze every sponge I can get my hands on, if you will, for first-hand experiences. :P (Especially because most opinions you find on public blogs are those of very, very C-affine data-orientation fanatics - which may be a fascinating and important topic, but it's not really helpful to simply be told to scrap all OOP, make everything a system stacked onto another system, and feed them data, without any explanation of how to architect such a structure in a codebase that's still somewhat flexible.)