The problem is that computers calculate in binary, not decimal, as you seem to assume in your example. A var is 32 bits wide with 10 fixed-point fraction bits, hence the precision: 10 bits give 1024 steps, and 1/1024 ≈ 0.001. Notice that it's 1024, not 1000? That's the binary representation at work, and that's why rounding does not give you exactly 63.000 - you normally get some odd number close to it instead.

Multiplying two 32-bit vars on a PC produces a 64-bit interim result. But since each input was already rounded to a step of 0.001, each carries a quantization error of up to 0.0005, so the interim result can't really be accurate to "1e-6" despite its extra bits. The result is then shifted back to 32 bits, ending up with 10 fixed-point fraction bits and 0.001 precision again. If I remember right, Conitec posted the assembler functions they use for multiplying and dividing vars here some time ago, but I can't find them anymore.

For higher precision than 0.001 you have to use float or double - that's true in any computer language.

OK, this was a little technical, but I hope it helps you understand how variables work.