Subject: Re: Clock monotonic a suggestion
From: john stultz <>
Date: 21 Mar 2003 12:53:27 -0800
On Fri, 2003-03-21 at 00:01, george anzinger wrote:
> Joel Becker wrote:
> > If the system is delayed (udelay() or such) by a driver or
> > something for 10 seconds, then you have this (assume gettimeofday is
> > in seconds for simplicity):
> >
> > 1 gettimeofday = 1000000000
> > 2 driver delays 10s
> > 3 gettimeofday = 1000000000
> > 4 timer notices lag and adjusts
>
> Uh, how is this done? At this time there IS correction for delays up
> to about a second built into the gettimeofday() code. You seem to be
> assuming that we can do better than this with clock monotonic. Given
> the right hardware, this may even be possible, but why not correct
> gettimeofday in the same way?
Because to do it properly is slow. Right now gettimeofday is all done with 32-bit math. However, this bounds us to ~2 seconds of counting time before we overflow the low 32 bits of the TSC on a 2GHz cpu. Rather than slowing down gettimeofday with 64-bit math to be able to handle the crazy cases where timer interrupts are not handled for more than 2 seconds, we propose a new interface (monotonic_clock) that provides increased corner-case accuracy at increased cost.
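[Editor's illustration: a minimal user-space sketch of the arithmetic described above, assuming a 2 GHz TSC. The TSC_HZ/TICKS_PER_NS constants and the delta_ns_32/delta_ns_64 helpers are hypothetical names for this example, not the kernel's actual timer code.]

#include <stdint.h>
#include <stdio.h>

/*
 * Toy illustration: why 32-bit TSC math breaks down past ~2 seconds.
 * At an assumed 2 GHz TSC rate, the low 32 bits wrap after
 * 2^32 / 2e9 ~= 2.15 s, so a longer interval computed with 32-bit
 * math silently loses whole wraps of time.
 */
#define TSC_HZ        2000000000ULL            /* assumed 2 GHz cpu   */
#define NSEC_PER_SEC  1000000000ULL
#define TICKS_PER_NS  (TSC_HZ / NSEC_PER_SEC)  /* 2 ticks per ns      */

/* Cheap path: 32-bit delta, as the 2003-era gettimeofday() math did. */
static uint64_t delta_ns_32(uint32_t last, uint32_t now)
{
	uint32_t delta = now - last;           /* wraps past ~2.15 s */
	return (uint64_t)delta / TICKS_PER_NS;
}

/* Costlier path: full 64-bit delta, as a monotonic_clock could use. */
static uint64_t delta_ns_64(uint64_t last, uint64_t now)
{
	return (now - last) / TICKS_PER_NS;
}

int main(void)
{
	/* Simulate a 10 s stall with no timer interrupts in between. */
	uint64_t last = 0, now = 10 * TSC_HZ;

	printf("32-bit math sees %llu ns (~1.41 s)\n",
	       (unsigned long long)delta_ns_32((uint32_t)last, (uint32_t)now));
	printf("64-bit math sees %llu ns (10 s)\n",
	       (unsigned long long)delta_ns_64(last, now));
	return 0;
}

With a 10-second stall, the 32-bit path reports only ~1.41 s because the delta has wrapped several times; that is the corner case the proposed monotonic_clock pays the price of 64-bit math to cover.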
thanks
-john