Firstly, I'd like to thank Tobias and jcl for taking the time to respond.

I had taken the statement that the precision equals 0.001 literally, and made the false assumption that a var represents a number with a "pseudo-decimal point", i.e., that a number z is represented internally as the binary form of the integer 1000*z. In such a system, 0.5 would be stored as (111110100) binary = (500) decimal. The pseudo-decimal-point approach is perfectly reasonable, and is often used for calculations involving money, for obvious reasons.

Back to Lite-C: with 10 bits to the right of the binary point, the closest representation of 1.05 is round(1.05 * 1024) / 1024 = round(1075.2) / 1024 = 1075/1024 ≈ 1.0498. Then 60 * (1075/1024) = 62.988, so everything checks out.

I'm still unclear about the following: consider the expression 60.0 * str_to_num(str). It looks as though Lite-C automatically treats both 60.0 and the result of str_to_num() as vars rather than floats. I can't find anything in the manual stating that var is the default numeric type; it would be good if this could be clarified.