Originally Posted By: Redeemer
I was talking about the doom engine on a game forum and somebody said something stupid about how the game deals with precision in its numbers, so I had to correct them. Later I was asked for a more detailed explanation and I offered the following explanation of fixed-point math, floating-point math, and why you even need "precision" to begin with:

The gist of it is that you need a helluva lot of precision and range in your numbers to generate any kind of recognizable 3D graphics at a reasonable scale. Think about it... at 16 bits, even if you defined 1 "map unit" as being equal to 1 millimeter of real-world length, you'd end up with less than 66 meters (~215 ft) on each axis to make any of your big mazy levels. Tiny? Yes. So clearly you need a ton of range to make anything reasonably large, and Doom fixes this by making 1 map unit equal to something in the range of ~50mm. That gives them plenty of range to make nice big levels.

But why is the precision such a big deal? Well, stop and think about that for a moment too. The most basic trig functions, sine and cosine, return FRACTIONS. That's not an accident. But it poses a problem, because we need to use these functions frequently as coefficients in our 3D transformations, and we'll lose precious information if a large number of the digits those computations produce are thrown away afterwards. So clearly, we don't just need range, we need precision too.

Doom meets this balance of range and precision by using a thing called 16-bit fixed-point arithmetic. It is called 16-bit because each of the numbers it uses has a 16-bit integer part AND a 16-bit fractional part (binary fractions?! well, yeah, did you think fractions only worked if you counted in tens?), making each number 32 bits wide in total. It is called FIXED POINT because the radix point (decimal point in plain English, but binary point more specifically) sits in the middle of the number and never moves (as opposed to a FLOATING POINT, which will put more digits on the left or right side of the number as necessary to increase its range or precision). Lastly, it's called arithmetic because we're adding and stuff. Duh.

So, 16-bit fixed-point arithmetic. That means our biggest number can be just shy of 65536, and our smallest number (other than zero) is 1/65536. That's a lot of precision! Is it enough?

Sort of. But if you've played Doom a lot, and you're an astute person who picks up easily on visual glitches, you'll notice that even with ALL of this range and precision, you still get occasional visual artifacts, such as "straight" walls that distort when you are extremely close to them, or extremely long sidedefs that appear to "jump around" as you move the camera.

Can you imagine how bad those artifacts would look with over FOUR THOUSAND TIMES less precision? Walls would be CONSTANTLY jumping around. The bending and warping would be obscene. It'd be like you jumped into Picassoland. Ridiculous.

So that's PART of the reason it would make no sense for Marathon to use precision that small.

The second part is that using so little precision makes absolutely, positively no sense from an implementation perspective. Marathon used fixed-point math just like Doom. But if it really had FOUR THOUSAND TIMES less precision, that would mean allocating only 4 bits for the fractional parts of its numbers. Were the Marathon devs so desperately short on memory that they squeezed fractional parts into half a byte? Were they just masochistic? Or is somebody confused in their explanation? I'll let you decide.

By the way... the visual artifacts I mentioned in Doom have been fixed in many source ports such as ZDoom, partly by increasing the size (and therefore the resolution) of their variables, and partly by switching to floating-point arithmetic (where the number of bits allocated to range versus precision is adjusted as needed).

But even with this extended size, the visual inaccuracies haven't been completely eliminated so much as marginalized into near-invisibility. Visual inaccuracy, a result of information lost during transformations, is endemic to ALL 3D hardware and software. You just can't see it most of the time anymore thanks to modern variable sizes, which are quite enormous. 64 bits = 64 binary digits, you know... and that is a lot of digits.

It's quite long, but those of you who are not programming wizards may find it useful to read. Mostly, though, I'm just reposting it here because I don't want it to go to waste on that board *grin* (there are not too many technical people there...)


Probably a waste putting this in the fun thread... while still not the 100% perfect place, it's possibly better dumped in the Gameplay/Game Design forum.