Instead of counting missing responses, switch to the offline state
immediately when sendmsg() fails.
This makes the option usable with servers and networks that may drop
packets, and the effect will be consistent with the onoffline command.
When the burst option is specified in the server/pool directive and the
current poll is longer than the minimum poll, initiate a burst on each
poll with 1 good sample and 2 or 4 total samples, depending on the
difference between the current and minimum poll.
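A chrony.conf sketch with a placeholder hostname and example minpoll/maxpoll values; once the polling interval backs off above the minimum, a burst would be initiated on each poll:

  server ntp.example.com burst minpoll 4 maxpoll 8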
This option indicates to chronyd that the reference clock keeps time in
TAI instead of UTC, and that chronyd should attempt to convert from TAI
to UTC using the timezone configured by the "leapsectz" directive.
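Assuming this refers to a refclock option named tai, a minimal sketch using an SHM refclock as a placeholder; right/UTC is a timezone that includes leap seconds:

  refclock SHM 0 tai
  leapsectz right/UTC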
Similarly to the maxdelaydevratio test, include in the maximum delay the
dispersion which accumulated in the interval since the last sample.
Also, enable the test for symmetric associations.
If no rxfilter is specified in the hwtimestamp directive and the NIC
doesn't support the all or ntp filter, enable TX-only HW timestamping
with the none filter.
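The fallback applies when no rxfilter is given, but the same behaviour can also be requested explicitly; a sketch with a placeholder interface name:

  hwtimestamp eth0 rxfilter none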
macOS 10.13 will implement the ntp_adjtime() system call, which allows
better control over the system clock than the existing adjtime() system
call. chronyd will support both the older and newer calls, so the same
binary can run on macOS 10.9 through macOS 10.13 without recompilation.
Early releases of macOS 10.13 have a very buggy adjtime() call. The
macOS driver tests adjtime() to see whether the bug has been fixed. If
the bug is present, the timex driver is used; otherwise the netbsd
driver is used.
Use the timezone specified by the leapsectz directive to determine the
current TAI-UTC offset and set it on the system clock, so that
applications using ntp_adjtime(), ntp_gettime(), or
clock_gettime(CLOCK_TAI) get correct TAI time.
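A minimal configuration sketch; right/UTC is a commonly available timezone that includes leap seconds:

  leapsectz right/UTC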
Always write the measurement history on exit when the dump directory is
specified, and silently ignore the dumponexit directive. There doesn't
seem to be a good use case for dumpdir and -r without dumponexit, as the
history would be invalidated by adjustments of the clock made between
the dump command and chronyd exit.
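A sketch with an example directory; with this change the dumpdir directive alone is enough to have the history written on exit and read back with -r:

  dumpdir /var/lib/chrony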
While the measurements log can be useful for debugging problems in NTP
configuration (e.g. authentication failures with symmetric keys), it
seems most users are interested only in valid measurements (e.g. for
producing graphs) and don't expect/handle entries where some of the RFC
5905 tests 1-7 failed. Modify the measurements log option to log only
valid measurements, and for debugging purposes add a new rawmeasurements
option.
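A chrony.conf sketch of the two forms; the first logs only valid measurements, the second keeps the previous behaviour for debugging:

  log measurements
  log rawmeasurements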
Change the default NTP rate limiting leak to 2 (25%). Change the default
command rate limiting interval to -4 (16 packets per second) and burst
to 8, so the interval is the only difference between NTP and command
rate limiting defaults.
This reverts commit 50022e9286.
Testing showed that ntpd as an NTP client performs poorly when it's
getting only 25% of responses. At least for now, disable rate limiting
by default again.
Change the default interval of both NTP and command rate limiting to -10
(1024 packets per second) and the burst to 16. The default NTP leak is 2
(rate limiting is enabled by default) and the default command leak is 0
(rate limiting is disabled by default).
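For reference, a chrony.conf line spelling out the stated NTP defaults explicitly (values taken from the text above); command rate limiting would additionally need a non-zero leak set via the cmdratelimit directive:

  ratelimit interval -10 burst 16 leak 2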
The maxlockage option specifies, in number of pulses, how old samples
from the refclock specified by the lock option can be in order to be
paired with the pulses. Increasing this value is useful when the samples
are produced at a lower rate than the pulses.
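A hedged example pairing a PPS refclock with a slower source; the device path, refid, and maxlockage value are placeholders:

  refclock SHM 0 refid NMEA
  refclock PPS /dev/pps0 lock NMEA maxlockage 2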
The maxjitter directive sets the maximum jitter that a source can have
in order not to be rejected by the source selection algorithm. This
prevents synchronisation with sources that have a small root distance,
but whose time is too variable. By default, the maximum jitter is 1
second.
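A one-line sketch tightening the limit below the 1-second default (the value is only an example):

  maxjitter 0.2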
Change the default minsamples to 6 and polltarget to 8. This should
improve stability with extremely small jitter (e.g. with HW
timestamping) without decreasing time accuracy at the minimum polling
interval too much.
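For reference, a sketch setting the new defaults explicitly as per-source options, with a placeholder hostname:

  server ntp.example.com minsamples 6 polltarget 8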