"the engines fault"
I once posted here for help because of a test I made that gave a very wrong position.
It's for an MMO, by the way. Don't smile like that :}
I used c_move with a speed X times greater than the per-frame speed, for Y frames (which adds up to the same total distance, in theory), and the two entities ended up about 30 quants apart when gliding along a wall. I don't remember which one (the instant or the actually moving entity) got further. Presumably one big c_move and many small ones resolve the wall collision differently.
I discarded that test and code.
Now I use the same principle, multiplying the distance that needs to be covered by a certain number of frames.
But my prediction is (at best) off by 7-8 quants, which is fine. When the error reaches 20 quants (more than half of the time) I move the entity towards the prediction, which produces visible 'artefacts'. If I don't move it, it will be off by more than 20 quants next time and the entity has to snap into place.
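One common alternative to snapping, sketched here in plain C rather than engine code: remove only a fixed fraction of the error each frame, and teleport only when the error is hopeless. The dead zone (1 quant), the snap limit (80 quants) and the frame_fraction parameter are invented numbers for illustration, not anything from the engine.

```c
#include <math.h>

/* Soft error correction sketch (hypothetical values, not engine API):
   pos       - the coordinate currently displayed on this client
   predicted - the extrapolated/authoritative coordinate
   Returns the coordinate to display next frame. */
double smooth_correct(double pos, double predicted, double frame_fraction)
{
    double error = predicted - pos;
    if (fabs(error) < 1.0)  return predicted;  /* close enough: just finish   */
    if (fabs(error) > 80.0) return predicted;  /* way off: snap this one time */
    return pos + error * frame_fraction;       /* e.g. 0.2 of the error/frame */
}
```

With frame_fraction around 0.2 a 20-quant offset shrinks by roughly a fifth every frame, so it melts away over several frames instead of showing up as one visible jump.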
I see we're getting off topic on this one, but I hope that's fine.
Could you give me some clues on how you do it? Keep in mind I update 5 times per second.
Here's my prediction (simple extrapolation, really) code:
var temp_pos[3];
var temp_speed[3];
var temp_trace[3];
vec_set(temp_pos, my.x); // remember the real position

// distance to cover in "latency_time" ms (see the PS for the conversion)
var move_dist = (((16 / time_step) / 1000) * (my.movement_speed * time_step)) * latency_time;
var diag_dist = move_dist * 0.7142857142857143; // diagonal component

// map the 8 movement directions (0 = standing still) to an x/y displacement
if(my.movement_input == 0) { temp_speed[0] = 0;          temp_speed[1] = 0; }
if(my.movement_input == 1) { temp_speed[0] = move_dist;  temp_speed[1] = 0; }
if(my.movement_input == 2) { temp_speed[0] = diag_dist;  temp_speed[1] = move_dist; }
if(my.movement_input == 3) { temp_speed[0] = diag_dist;  temp_speed[1] = -diag_dist; }
if(my.movement_input == 4) { temp_speed[0] = -move_dist; temp_speed[1] = 0; }
if(my.movement_input == 5) { temp_speed[0] = -diag_dist; temp_speed[1] = diag_dist; }
if(my.movement_input == 6) { temp_speed[0] = -diag_dist; temp_speed[1] = -diag_dist; }
if(my.movement_input == 7) { temp_speed[0] = 0;          temp_speed[1] = move_dist; }
if(my.movement_input == 8) { temp_speed[0] = 0;          temp_speed[1] = -move_dist; }
temp_speed[2] = 0;

debug_var[1] = latency_time;
debug_var[2] = temp_speed[0];
debug_var[3] = temp_speed[1];

c_move(me, temp_speed, nullvector, GLIDE); // cover the whole predicted distance in one call

// put the predicted position on the ground
vec_set(temp_speed, nullvector);
vec_set(temp_trace, vector(my.x, my.y, my.z - 150 + my.foot_height));
c_trace(my.x, temp_trace, IGNORE_PASSABLE | IGNORE_ME | USE_BOX);
if(!trace_hit)
{
	vec_set(target, temp_trace); // nothing below: use the end of the trace
}
my.soil_height = target.z - my.foot_height;
if(my.z > my.soil_height + (5 + 20 * my.soil_contact) * time_step)
{
	my.soil_contact = 0;
	temp_speed[2] = maxv(temp_speed[2] - 9 * time_step, -90); // fall
}
else
{
	my.soil_contact = 1;
	temp_speed[2] = 0;
	my.z = my.soil_height;
}
if(temp_speed[2]) { c_move(me, nullvector, vector(0, 0, temp_speed[2] * time_step), IGNORE_PASSABLE); }
my.z = maxv(my.soil_height, my.z);

vec_set(my.predicted_current_x, my.x); // store the prediction
vec_set(my.x, temp_pos);               // restore the real position
The function takes "latency_time" as an argument and uses it as the duration of the movement, e.g. the 200 ms between two updates.
I thought about using the same prediction client-side to move the entity a little bit toward the upcoming position each frame, so that it doesn't snap or get corrected when the position arrives. That's the best I can think of...
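That idea, sketched in plain C (names are illustrative, not engine API): between two 200 ms updates, advance the displayed position every frame using the last received velocity, so the entity drifts toward the upcoming position instead of standing still until the next packet.

```c
/* Dead-reckoning sketch under assumed units: positions in quants,
   velocity in quants per millisecond. */
typedef struct { double x, y; } Vec2;

Vec2 extrapolate(Vec2 last_known, Vec2 quants_per_ms, double ms_since_update)
{
    Vec2 p;
    p.x = last_known.x + quants_per_ms.x * ms_since_update;
    p.y = last_known.y + quants_per_ms.y * ms_since_update;
    return p;
}
```

When the real update arrives, the displayed position is then already close to it, so only a small correction remains.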
PS.: The "(( 16 / time_step ) / 1000 )" is used to get the duration of one frame in ms(i think, cant remember anymore)
It gets multiplied by speed to get distance to be covered for 1 ms, and then multiplied by latency_time to give total distance for the requested time.
Maby thats what gives me errors, but it all added up on theory...
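A quick check of that algebra in plain C (not Lite-C), comparing the posted expression against the simplified form:

```c
/* The posted conversion, with time_step left in. */
double posted_distance(double speed, double time_step, double latency_ms)
{
    return (((16.0 / time_step) / 1000.0) * (speed * time_step)) * latency_ms;
}

/* The same thing after time_step cancels: speed is per tick,
   16 ticks per second = 0.016 ticks per millisecond. */
double simplified_distance(double speed, double latency_ms)
{
    return speed * 0.016 * latency_ms;
}
```

Both give the same number for any time_step, which suggests the error comes from elsewhere (the collision gliding, the gravity step, or the 0.7142857 diagonal factor, which is slightly above 1/sqrt(2)) rather than from this conversion.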