I believe what you're looking at is that timesteps on the server are quantized, so a movement must complete in a whole number of timesteps. Given that constraint, there are basically two reasonable options to choose from (both sketched after the list below):
- Divide the whole movement evenly among the timesteps it will take; or
- Use an exact movement delta for each step except the last one, which gets whatever remains of the requested movement.
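Here's a minimal sketch of the two options, just to make the difference concrete; the function names and the nominal per-tick speed are purely illustrative, not taken from the actual code:

```python
import math

def plan_even_split(distance: float, speed_per_tick: float) -> list[float]:
    """Option 1: spread the whole movement evenly over the ticks it will take."""
    ticks = max(1, math.ceil(distance / speed_per_tick))
    return [distance / ticks] * ticks

def plan_full_steps_plus_remainder(distance: float, speed_per_tick: float) -> list[float]:
    """Option 2: full-speed deltas for every tick, with whatever is left over on the final one."""
    full_ticks = int(distance // speed_per_tick)
    deltas = [speed_per_tick] * full_ticks
    remainder = distance - full_ticks * speed_per_tick
    if remainder > 0:
        deltas.append(remainder)
    return deltas

# e.g. a 10.3-unit move at a nominal 1 unit per tick:
print(plan_even_split(10.3, 1.0))                # 11 equal deltas of ~0.936
print(plan_full_steps_plus_remainder(10.3, 1.0)) # ten deltas of 1.0, then ~0.3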
I chose the first option, simply because it lets the client-side interpolation model more precisely what actually happens server-side. My assumption was that, for reasonably long movement requests, the difference in speed is small enough not to matter; it amounts to a few percent, though perhaps in some scenarios that matters more than I thought. Short movements do show greater variance, but that strikes me as a somewhat pathological case, and I wonder whether it actually matters in practice.
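To put rough numbers on that, here is the effective speed of the even split relative to nominal, again assuming an illustrative 1 unit per tick:

```python
import math

def even_split_speed_ratio(distance: float, speed_per_tick: float) -> float:
    """Effective speed under the even split, relative to the nominal per-tick speed."""
    ticks = max(1, math.ceil(distance / speed_per_tick))
    return (distance / ticks) / speed_per_tick

print(f"{even_split_speed_ratio(10.3, 1.0):.1%}")  # ~93.6% of nominal: a few percent slower
print(f"{even_split_speed_ratio(1.3, 1.0):.1%}")   # ~65.0% of nominal: short moves vary much more
```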
The behavior isn't so firmly entrenched that I couldn't switch to the second option, but I do wonder whether that would introduce undesirable unevenness in the interpolation or other unforeseen behavior. It might be less of a problem now with the new interpolation protocol format, but I'd have to try it to find out.