Because:

(assume the target camera x position is 0, and that camera.x starts at 20)

faster machine: time_step = 4 (so four frames pass in the second):

camera.x -= .02 * (camera.x - 0) * 4;

wait(1); // camera.x now equals 18.4

camera.x -= .02 * (camera.x - 0) * 4;

wait(1); // camera.x now equals 16.928

camera.x -= .02 * (camera.x - 0) * 4;

wait(1); // camera.x now equals 15.57376

camera.x -= .02 * (camera.x - 0) * 4;

wait(1); // one second has now passed. camera.x now equals 14.3278592

final camera.x after one second on the faster machine is 14.3278592
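
(in closed form: each frame multiplies camera.x by (1 - .02 * 4) = .92, so after four frames camera.x = 20 * .92^4 = 14.3278592)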



slower machine: time_step = 2 (so two frames pass in the second):

camera.x -= .02 * (camera.x - 0) * 2;

wait(1); // camera.x now equals 19.2

camera.x -= .02 * (camera.x - 0) * 2;

wait(1); // the same one second has passed. camera.x now equals 18.432

final camera.x after one second on the slower machine is 18.432
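
(in closed form: each frame multiplies camera.x by (1 - .02 * 2) = .96, so after two frames camera.x = 20 * .96^2 = 18.432. For the two machines to agree you would need .92^4 to equal .96^2, and it doesn't -- multiplying the rate by time_step inside the base can't linearise an exponential decay)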

hence the decrease of camera.x is greater on faster computers if you use

camera.x -= .02 * (camera.x - 0) * time_step;


Well, I was wrong when I said that a faster machine would produce slower camera movement (mostly due to the painfully noobish assumption that time_step reflects the number of frames that occurred in the last second), but the point remains the same: because I'm dealing with exponents, the usual means of implementing time_step doesn't work. I've really got some thinking to do. Perhaps I should start a new thread, as I've made such a mess of this one? Nah, I'll just correct the first post.
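
Something like this might be the right shape -- just a sketch, assuming time_step measures how long the last frame actually took, and assuming pow() is available:

while (1)
{
    // raise the per-tick retention factor (1 - .02) to the time_step
    // power, so the total decay over a real second is the same no
    // matter how many frames it gets chopped into
    camera.x = 0 + (camera.x - 0) * pow(1 - .02, time_step); // 0 = target
    wait(1);
}

With this form both machines end the second having multiplied camera.x by the same total factor, because the exponents (the time_steps) sum to the same elapsed time either way.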


I HEART 3DGS