Wordy, but here we go... <sigh> ....
I'm talking about the math in the 3D engine itself, and not about script or LiteC commands (sorry if I didn't make that clear at first).
Take any cube (8 vertices) - for every frame rendered, those 8 vertices have to be scaled, translated (moved), rotated, etc, etc...
Then the surfaces are constructed, and the textures are mapped onto those surfaces.
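To make that concrete, here's a rough sketch of that per-frame vertex work in plain C (NOT actual engine or Lite-C code - the names and the single-axis rotation are just mine for illustration):

    /* Rough sketch of the per-frame vertex work on one cube - not real
       engine internals, just an illustration of the idea.              */
    #include <math.h>

    typedef struct { float x, y, z; } VEC3;

    /* Scale, rotate (around Y only, to keep it short), then translate
       each of the 8 vertices.  The engine redoes something like this
       for every vertex of every visible object, every frame.           */
    void transform_cube(const VEC3 in[8], VEC3 out[8],
                        float scale, float angle, VEC3 pos)
    {
        float c = cosf(angle), s = sinf(angle);
        for (int i = 0; i < 8; i++)
        {
            float x = in[i].x * scale;              /* scale         */
            float y = in[i].y * scale;
            float z = in[i].z * scale;
            out[i].x =  x * c + z * s + pos.x;      /* rotate + move */
            out[i].y =  y             + pos.y;
            out[i].z = -x * s + z * c + pos.z;
        }
    }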
If you scale an object away from its creation size (the size you designed and built it at - not when you did a Load Object), say by .5 (smaller) or 1.5 (larger), you have now forced additional math, mostly on the texturing side IMHO.
Using the .5/1.5 example above on a 16*16 bmp, the bmp itself has to be resized before mapping - a .5 scale would mean shrinking it to 8*8, a 1.5 scale would mean stretching it to 24*24.
BUT - this is not just straight clipping or multiplication, it's *resampling* the entire bitmap.
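For the .5 case, the simplest kind of resampling is a box filter - average each 2*2 block of pixels down to one. A little sketch (plain C again, my own names, grayscale only to keep it short - real engines and drivers use fancier filters):

    /* Sketch: shrink a 16*16 grayscale bitmap to 8*8 by averaging
       2*2 blocks (a "box filter").  The point: every output pixel
       costs 4 reads plus math, not just a straight copy.            */
    void shrink_half(const unsigned char src[16][16], unsigned char dst[8][8])
    {
        for (int y = 0; y < 8; y++)
            for (int x = 0; x < 8; x++)
            {
                int sum = src[2*y][2*x]     + src[2*y][2*x + 1]
                        + src[2*y + 1][2*x] + src[2*y + 1][2*x + 1];
                dst[y][x] = (unsigned char)(sum / 4);
            }
    }

The 1.5 (24*24) case is uglier still: the new pixels land *between* the old ones, so each one has to be interpolated (blended) from its neighbours instead of just averaged.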
How many of us *really* use a 16*16 bmp (256 pixels), eh..?
More like 64*64 (4096 pixels), 128*128 (16384 pixels), or 256*256 (65536 pixels).
Yes, I know you can use any power of 2 - 8, 16, 32, 64, etc. - and that restriction probably exists in the first place because of the algorithm used for resampling: a power-of-2 size halves cleanly all the way down (256 -> 128 -> 64 -> ... -> 1) with no fractional pixels left over.
Any resampling takes time, and IMHO always clobbers the detail of the bmp unless it's just a "flat solid color".
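Quick example of the clobbering, using the box filter above: take a razor-sharp black/white boundary - a 2*2 block with the left column black (0) and the right column white (255) - and it averages out to (0 + 255 + 0 + 255) / 4 = 127. One resample, and your crisp edge is mid-gray mush.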
So, anything you can do to avoid scaling an object away from its creation size saves that extra math - and the "fuzzing" of the texture.
And I do mean "fuzzing", not an obscenity alternate - although either would be appropriate for what it does to a really "crisp" texture <grin>.

So if I (or you) design an object at its true "run-time" size to begin with, so it never has to be scaled to make it fit or look "right", it should avoid all the crud I babbled about above...

Of course I could be dead wrong about the whole dang thing, proving myself a ditz. ("again" as my wife would add...)

-Neut.


Dreaming ain't Doing..!
<sigh> Darn semicolons - I always manage to miss at least 1..!