i think it would be best for object-based motion blur though. you couldn't use that example for motion blur of animated models, because polygons are definitely going to intersect with each other a lot, particularly with complex hands, fingers, etc.

on second thought, the blurring would make the intersections a non-issue. i was thinking about it at school: a closed fist has normals in lots of different directions, pointing towards and away from each other. very fast-moving fists would have a lot of intersecting polygons, but it shouldn't look too bad if everything is blurred and smoothed.

it would work with rotating models if the relative velocity was vertex-based, not object-based (which would be required anyway for blurring animations). each entity could remember its own camera-relative vertex positions in an array (may be easier with lite-c if you have global structs that each point to an entity and contain an array of camera-relative vertex positions from the previous frame).

i haven't learned how to pass extra info (such as an array) to a shader yet (i'm very fresh at this), but i will soon, after i do some experiments for a school project.

julz


Formerly known as JulzMighty.
I made KarBOOM!