When you're converting from microseconds to milliseconds, or from any higher precision to a lower one, or downscaling in any form, you have to be careful not to just do a divide: you also need to adjust the y-intercept. There are better methods for higher-accuracy downscaling, but they're more complex than just an add and a divide (also, since the divide is a division by a constant, it gets implemented as a multiplication and a bunch of logical shifts anyway).
The blue line is the high-precision value we're converting from, the orange line is what we get if we just integer-divide by the scale, and the green line is what we get if we first add half of the divisor and then divide. As you can see, the green line is closer to the ground truth we're converting from. However, it overshoots the ground truth about half of the time, which is something to be aware of.
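
As a minimal sketch in C (assuming a microseconds-to-milliseconds conversion, i.e. a divisor of 1000; the function names are just illustrative), the difference between the two conversions looks something like this:

```c
#include <stdint.h>
#include <stdio.h>

/* "Orange line": plain truncating integer divide, always rounds down. */
static uint64_t us_to_ms_truncate(uint64_t us)
{
    return us / 1000;
}

/* "Green line": add half the divisor first, so the result rounds to nearest. */
static uint64_t us_to_ms_rounded(uint64_t us)
{
    return (us + 500) / 1000;
}

int main(void)
{
    /* A few sample timestamps around millisecond boundaries. */
    uint64_t samples[] = { 1499, 1500, 1501, 2999, 3000 };
    for (size_t i = 0; i < sizeof samples / sizeof samples[0]; i++) {
        uint64_t us = samples[i];
        printf("%llu us -> truncated: %llu ms, rounded: %llu ms\n",
               (unsigned long long)us,
               (unsigned long long)us_to_ms_truncate(us),
               (unsigned long long)us_to_ms_rounded(us));
    }
    return 0;
}
```

Note the trade-off visible even in these few samples: the truncating version never reports a value ahead of the real one, while the rounded version is closer on average but rounds 1500 us up to 2 ms.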
