When chrony reads the Linux RTC for the first time to trim the system
clock, it reads it only once. As it is possible that the RTC updates
itself during the read operation, the reported RTC time could be wrong.
To prevent this, I've added a loop that reads the RTC twice; if the
seconds of the two reads do not match, both read operations are
retried. If they match, the read operation can be assumed to have been
consistent.
This is based on the way the hwclock implementation from the util-linux
package reads the RTC.
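A minimal sketch of the retry loop described above, using the standard
RTC_RD_TIME ioctl; the function name and the retry limit are
illustrative, not the actual chrony code:

  #include <linux/rtc.h>
  #include <sys/ioctl.h>

  /* Read the RTC twice; a tick between the two reads makes the
     seconds differ, in which case both reads are retried. */
  static int read_rtc_consistent(int fd, struct rtc_time *tm)
  {
    struct rtc_time tm2;
    int tries;

    for (tries = 0; tries < 10; tries++) {
      if (ioctl(fd, RTC_RD_TIME, tm) < 0 ||
          ioctl(fd, RTC_RD_TIME, &tm2) < 0)
        return -1;
      if (tm->tm_sec == tm2.tm_sec)
        return 0;  /* both reads fell within the same second */
    }
    return -1;  /* persistently inconsistent; give up */
  }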
This is a revert of commit 99d18abf, updated for later changes. It
seems that in that commit the calculation was changed to match the
reversed dfreq added in 1a7415a6, which was itself calculated
incorrectly. Fix the calculation of the updated frequency and the
matching dfreq.
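For context, a sketch of how an absolute frequency in ppm relates to a
fractional adjustment dfreq, assuming the clock frequency is updated
as new_ppm = (1.0 - dfreq) * old_ppm + 1.0e6 * dfreq (the convention
used elsewhere in chrony, if I read local.c correctly; the helper name
is illustrative):

  /* Solve new_ppm = (1.0 - dfreq) * old_ppm + 1.0e6 * dfreq for
     dfreq: the fractional change that moves the clock from old_ppm
     to new_ppm.  Swapping old_ppm and new_ppm in the numerator is
     what a reversed dfreq amounts to. */
  static double calculate_dfreq(double old_ppm, double new_ppm)
  {
    return (new_ppm - old_ppm) / (1.0e6 - old_ppm);
  }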
The Clang static analyzer scan-build from Debian clang version 3.4-1
found the following unneeded assignment:
rtc_linux.c:756:5: warning: Value stored to 'error' is never read
    error = 1;
    ^       ~
Indeed, if that if branch is taken, the function returns without ever
reading the variable `error`. So remove the line.
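A hypothetical illustration of the dead-store pattern scan-build flags
here (this is not the actual rtc_linux.c code):

  /* 'error' is assigned in the failing branch, but the branch
     returns immediately, so the stored value can never be read. */
  static int example(int fd)
  {
    int error = 0;

    if (fd < 0) {
      error = 1;  /* dead store: delete this line */
      return 0;
    }
    return error;
  }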
We want to correct the offset quickly, but we also want to keep the
frequency error caused by the correction itself low.
Define the correction rate as the area of the region bounded by the
graph of the offset corrected over time. Set the rate so that the time
needed to correct an offset equal to the current sourcestats stddev
will be equal to the update interval (assuming linear adjustment). The
offset and the time needed to make the correction are then inversely
proportional.
This is only a suggestion and it's up to the system driver how the
adjustment will be executed.
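To spell out the arithmetic: with a linear adjustment the offset falls
from x to zero in time T, so the bounded area is x * T / 2, and fixing
that area fixes the product of offset and correction time. A sketch
with illustrative names:

  #include <math.h>

  /* Pick the rate (area) so that an offset equal to the sourcestats
     stddev is corrected in exactly one update interval. */
  static double correction_rate(double offset_sd, double update_interval)
  {
    return 0.5 * offset_sd * update_interval;
  }

  /* Keeping the area constant makes the correction time inversely
     proportional to the offset being corrected. */
  static double correction_time(double rate, double offset)
  {
    return 2.0 * rate / fabs(offset);
  }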
None of the current handlers really needs it, and with temperature
compensation enabled it would be necessary to undo the compensation
before passing it to the handlers.
This is to avoid incompatibility between 64-bit and 32-bit clients and
servers. While at it, convert all time values in the protocol to
timeval to avoid the Y2K38 problem.
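One way such a timeval can be kept both architecture-independent and
safe past 2038 is to split a 64-bit seconds count into fixed-width
network-order words; this is only a sketch with illustrative names,
not necessarily chrony's actual wire format:

  #include <arpa/inet.h>
  #include <stdint.h>
  #include <sys/time.h>

  /* Fixed-width on-the-wire layout: same size on every platform. */
  typedef struct {
    uint32_t tv_sec_high;  /* upper 32 bits of the seconds count */
    uint32_t tv_sec_low;   /* lower 32 bits */
    uint32_t tv_usec;
  } WireTimeval;

  static void pack_timeval(const struct timeval *tv, WireTimeval *wtv)
  {
    uint64_t sec = (uint64_t)tv->tv_sec;

    wtv->tv_sec_high = htonl((uint32_t)(sec >> 32));
    wtv->tv_sec_low = htonl((uint32_t)sec);
    wtv->tv_usec = htonl((uint32_t)tv->tv_usec);
  }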
2) Changes to rtc_linux.c which a) do a double read of /dev/rtc when
the PPM interrupt is turned on after the wait time expires. The current
read does not block until the second boundary, as it should, so two
reads are needed. Also, changes so that at startup the system properly
ignores the last system time from the initial burst mode when setting
the system time, since it can be way off. At present this last system
time is included in the regression, which throws the regression off
until that sample is finally dropped.
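A sketch of the blocking-read pattern involved, using the standard
Linux RTC update-interrupt interface (this is illustrative, not the
patch itself):

  #include <fcntl.h>
  #include <linux/rtc.h>
  #include <stdio.h>
  #include <sys/ioctl.h>
  #include <unistd.h>

  int main(void)
  {
    unsigned long data;
    struct rtc_time tm;
    int fd = open("/dev/rtc", O_RDONLY);

    if (fd < 0)
      return 1;

    /* Enable update interrupts: one per RTC second. */
    if (ioctl(fd, RTC_UIE_ON, 0) < 0)
      return 1;

    /* Each read() blocks until the next update interrupt.  If an
       interrupt is already pending when interrupts are enabled, the
       first read returns immediately, so a second read is needed to
       actually land on a second boundary. */
    read(fd, &data, sizeof (data));
    read(fd, &data, sizeof (data));

    if (ioctl(fd, RTC_RD_TIME, &tm) < 0)
      return 1;

    printf("RTC time: %02d:%02d:%02d\n", tm.tm_hour, tm.tm_min, tm.tm_sec);
    ioctl(fd, RTC_UIE_OFF, 0);
    close(fd);
    return 0;
  }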