Add an option to use the median filter to reduce noise in measurements
before they are accumulated to sourcestats, similarly to reference
clocks. The option specifies how many samples are reduced to a single
sample.
The filter is intended to be used with very short polling intervals in
local networks where it is acceptable to generate a lot of NTP traffic.
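A rough sketch of the kind of reduction such a filter performs (illustrative only, not chrony's actual code; the function and variable names are made up): N collected offsets are reduced to their median before being accumulated.

    #include <stdlib.h>

    /* Illustrative: reduce n collected offsets to a single sample by taking
       their median, roughly what a "filter N" style option implies. */
    static int compare_doubles(const void *a, const void *b) {
      double x = *(const double *)a, y = *(const double *)b;
      return (x > y) - (x < y);
    }

    double reduce_to_median(double *offsets, int n) {
      qsort(offsets, n, sizeof offsets[0], compare_doubles);
      return n % 2 ? offsets[n / 2]
                   : (offsets[n / 2 - 1] + offsets[n / 2]) / 2.0;
    }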
When the local polling interval is adjusted between minpoll and maxpoll
to a sub-second value, check whether the source is reachable and the
minimum measured delay is 10 milliseconds or less. If not, ignore the
maxpoll value and set the interval to 1 second.
This should prevent clients (mis)configured with an extremely short
minpoll/maxpoll from flooding servers on the Internet.
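A minimal sketch of the clamping logic described above (names are illustrative, not the actual implementation):

    /* Illustrative: allow a sub-second polling interval only when the
       source is reachable and the minimum measured delay is at most 10 ms;
       otherwise fall back to a 1-second interval. */
    double clamp_subsecond_interval(double interval, int reachable,
                                    double min_delay) {
      if (interval < 1.0 && !(reachable && min_delay <= 0.010))
        return 1.0;
      return interval;
    }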
If the polling interval is shorter than 8 seconds, set the burst
interval to 1/4 of the polling interval instead of the 2-second
constant. This should make the burst option and command useful with
very short polling intervals.
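A one-line sketch of the new spacing rule (hypothetical helper name):

    /* Illustrative: bursts are spaced at 1/4 of the polling interval when
       the interval is shorter than 8 seconds, otherwise at the usual
       2-second constant. */
    double burst_spacing(double poll_interval) {
      return poll_interval < 8.0 ? poll_interval / 4.0 : 2.0;
    }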
Fix mismatches between the format and signedness of variables passed to
printf() or scanf(), found by a Frama-C analysis and by gcc with the
-Wformat-signedness option.
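A contrived example of the class of mismatch being fixed (not taken from the chrony sources):

    #include <stdio.h>

    void print_count(unsigned int n) {
      /* Mismatched: %d expects a signed int, but n is unsigned
         (reported by gcc with -Wformat-signedness). */
      /* printf("count: %d\n", n); */

      /* Fixed: use the unsigned conversion specifier. */
      printf("count: %u\n", n);
    }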
Instead of counting missing responses, switch to the offline state
immediately when sendmsg() fails.
This makes the option usable with servers and networks that may drop
packets, and the effect will be consistent with the onoffline command.
Allow SRC_MAYBE_ONLINE to be specified as the connectivity setting for
new NTP sources, selecting between SRC_ONLINE and SRC_OFFLINE according
to the result of the connect() system call, i.e. whether the client has
a route for sending its requests.
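A sketch of how such a routability check can look with a UDP socket (illustrative; the actual chrony code differs): connect() on a datagram socket does not send anything, but fails immediately when no route to the destination exists.

    #include <sys/socket.h>
    #include <netinet/in.h>
    #include <unistd.h>

    /* Illustrative: returns nonzero if the kernel has a route to the
       server, i.e. the source can start as SRC_ONLINE rather than
       SRC_OFFLINE. */
    int has_route(const struct sockaddr_in *server) {
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      if (fd < 0)
        return 0;
      int ok = connect(fd, (const struct sockaddr *)server,
                       sizeof *server) == 0;
      close(fd);
      return ok;
    }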
NTPv1 packets have a reserved field instead of the mode field and the
actual mode is determined from the port numbers. It seems there is still
a large number of clients sending NTPv1 requests with a zero value in
the field (per RFC 1059).
Follow ntpd and respond to the requests with server mode packets.
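A sketch of the classification (illustrative; the exact port rule here is an assumption based on RFC 1059, not chrony's code):

    /* Illustrative: a version-1 packet has no usable mode field (it is the
       reserved, zero field), so treat it as a client request when it was
       sent to our port 123 from a non-123 source port. */
    int is_ntpv1_client_request(int version, int mode_field,
                                int src_port, int dst_port) {
      return version == 1 && mode_field == 0 &&
             dst_port == 123 && src_port != 123;
    }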
Clients sending packets in the interleaved mode are supposed to use
different receive and transmit timestamps in order to reliably detect
the mode of the response. If an interleaved request with the receive
timestamp equal to the transmit timestamp is detected, respond in the
basic mode.
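A minimal sketch of the detection (timestamps shown as packed 64-bit values; names are illustrative):

    #include <stdint.h>

    /* Illustrative: handle a request as interleaved only if its receive
       and transmit timestamps differ; otherwise respond in the basic
       mode. */
    int respond_in_interleaved_mode(uint64_t rx_ts, uint64_t tx_ts) {
      return rx_ts != tx_ts;
    }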
When the burst option is specified in the server/pool directive and the
current poll is longer than the minimum poll, initiate on each poll a
burst with 1 good sample and 2 or 4 total samples according to the
difference between the current and minimum poll.
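A possible reading of that rule as code (the exact threshold between 2 and 4 samples is an assumption, not taken from the source):

    /* Illustrative: a burst started at each poll requires 1 good sample
       and allows 2 or 4 total samples depending on how far the current
       poll is above the minimum poll. */
    void burst_parameters(int current_poll, int min_poll,
                          int *good_samples, int *total_samples) {
      *good_samples = 1;
      *total_samples = current_poll - min_poll <= 1 ? 2 : 4;
    }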
Compare both receive and transmit timestamps in the NTP test number 1.
This prevents a client from dropping a valid response in the interleaved
mode if it follows a response in the basic mode and the server did not
have a kernel/hardware transmit timestamp, and the random bits of the
two timestamps happen to be the same (chance of 1 in 2^(32-precision)).
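A sketch of the strengthened duplicate test (illustrative names, 64-bit packed timestamps):

    #include <stdint.h>

    /* Illustrative: NTP test #1 now declares a duplicate only when both
       the receive and the transmit timestamps match the previous response,
       instead of the transmit timestamp alone. */
    int test1_passes(uint64_t rx_ts, uint64_t prev_rx_ts,
                     uint64_t tx_ts, uint64_t prev_tx_ts) {
      return !(rx_ts == prev_rx_ts && tx_ts == prev_tx_ts);
    }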
Before sending a new packet, check that its receive/transmit timestamp
is not equal to the origin timestamp or to the previous receive/transmit
timestamp, in order to prevent the packet from being its own valid
response (in the symmetric mode) and from invalidating responses to the
previous packet.
This improves protection against replay attacks in the symmetric mode.
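A sketch of the pre-send check (names and the exact set of compared timestamps are illustrative):

    #include <stdint.h>

    /* Illustrative: a freshly generated receive/transmit timestamp is only
       used if it differs from the origin timestamp and from the previous
       receive/transmit timestamps, so the new packet cannot be accepted as
       its own response or invalidate responses to the previous packet. */
    int new_timestamp_usable(uint64_t new_ts, uint64_t origin_ts,
                             uint64_t prev_rx_ts, uint64_t prev_tx_ts) {
      return new_ts != origin_ts && new_ts != prev_rx_ts &&
             new_ts != prev_tx_ts;
    }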
Save the local receive and remote transmit timestamps, which are needed
for (re)starting the symmetric protocol when no valid reply was
received, separately from the timestamps that are used for
synchronization of the local clock.
This extends the interval in which the local NTP state is (partially)
protected against replay attacks in order to complete a measurement
in the interleaved symmetric mode from [last valid RX, next TX] to
[last TX, next TX], i.e. it should be the same as in the basic mode.
Similarly to the maxdelaydevratio test, include in the maximum delay the
dispersion which accumulated in the interval since the last sample.
Also, enable the test for symmetric associations.
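A sketch of the relaxed limit (the dispersion term is modeled here simply as skew times elapsed time; the real accounting differs):

    /* Illustrative: the measured delay is compared against maxdelay plus
       the dispersion accumulated since the last sample, so a long gap
       between samples does not cause spurious test failures. */
    int maxdelay_test_passes(double delay, double max_delay,
                             double skew, double elapsed) {
      return delay <= max_delay + skew * elapsed;
    }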
Instead of giving NTP-specific data to sourcestats in order to perform
the test, provide a function to get all data needed for the test in
ntp_core. While at it, improve the naming of variables.
If the minimum delay is known (in a static network configuration), it
can replace the measured minimum from the register. This should improve
the stability of corrections for asymmetric jitter, sample weighting and
maxdelay* tests.
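A sketch of how a configured minimum could substitute for the measured one (the function name is hypothetical):

    /* Illustrative: a known minimum network delay, when configured,
       replaces the minimum taken from the register of recent samples in
       the sample weighting and maxdelay* tests. */
    double effective_min_delay(double configured_min, double measured_min) {
      return configured_min > 0.0 ? configured_min : measured_min;
    }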
This allows transmissions in the symmetric mode to be scheduled
independently of client transmissions. This reduces the maximum
scheduling delay when chronyd is configured with a larger number of
servers.
In basic client mode, set the origin and receive timestamps to zero.
This reduces the amount of information useful for fingerprinting and
improves privacy, as the origin timestamp allows a passive observer to
track individual NTP clients as they move across networks. (With chrony
clients this assumes the timestamp wasn't already reset by the chronyc
offline and online commands.)
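A sketch of what the request looks like after the change (only the relevant fields are shown; names are illustrative):

    #include <stdint.h>

    struct ntp_request_sketch {
      uint64_t origin_ts, receive_ts, transmit_ts;
    };

    /* Illustrative: a basic-mode client request carries no timestamps from
       previous exchanges; only the transmit timestamp is set, and it is
       what the client later matches against the response's origin
       timestamp. */
    void fill_client_request(struct ntp_request_sketch *pkt, uint64_t tx_ts) {
      pkt->origin_ts = 0;
      pkt->receive_ts = 0;
      pkt->transmit_ts = tx_ts;
    }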
This follows recommendations from the current version of IETF draft on
NTP data minimization [1].
The timestamp could theoretically be useful for enhanced rate limiting
that could limit individual clients behind NAT and better cope with DoS
attacks, but no server implementation is known to do that.
[1] https://tools.ietf.org/html/draft-ietf-ntp-data-minimization-01
In interleaved client mode, when so many consecutive requests were lost
that the first valid (interleaved) response would be dropped for being
too old, switch to basic mode so the response can be accepted if it
doesn't fail in the other tests.
This reworks commit 16afa8eb50.
In symmetric mode, don't send a packet in interleaved mode unless it is
the first response to the last valid request received from the peer and
there was just one response to the previous valid request. This prevents
the peer from matching the transmit timestamp with an older response if
it can't detect missed responses.
In the interleaved symmetric mode, check that the remote TX timestamp is
before the RX timestamp. Only the first response from the peer after
receiving a request should pass this test. Also check the interval
between the last two remote transmit timestamps when we know the remote
poll can't be constrained by minpoll. Use the minimum of the previous
remote and local poll as a lower bound of the actual interval between
the peer's transmissions.
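A sketch of the two checks (timestamps are in seconds here for simplicity; the margin handling in the real code is more careful):

    #include <math.h>

    /* Illustrative: accept an interleaved symmetric response only if the
       peer's transmit timestamp precedes its receive timestamp, and, when
       the remote poll can't be constrained by minpoll, only if the peer's
       last two transmissions are spaced at least 2^min_poll seconds apart,
       where min_poll is the shorter of the previous remote and local
       polls. */
    int interleaved_remote_ts_ok(double remote_rx, double remote_tx,
                                 double remote_tx_interval, int min_poll,
                                 int poll_unconstrained) {
      if (remote_tx >= remote_rx)
        return 0;
      if (poll_unconstrained && remote_tx_interval < exp2(min_poll))
        return 0;
      return 1;
    }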
Check how many responses were missing before accumulating a sample using
old timestamps to avoid correcting the clock with an offset extrapolated
over a long interval.
This should be eventually done in sourcestats for all sources.
With the new selection of timestamps in the interleaved mode it's no
longer necessary to reverse the poll tracking in order to reduce the
local and remote intervals of the measurements made by the peer with
the higher stratum.
This reverts commit 4a24368763.
Use the previous local TX and remote RX timestamps for the new sample
in the interleaved mode if that makes the local and remote intervals
significantly shorter, in order to improve the accuracy of the measured
delay.
Unlike in the basic mode, the peer with a higher stratum needs to wait
for a response before sending the next request in order to minimize the
delay of the measurement and error in the measured delay.
Slightly increase the delay adjustment to make it work with older chrony
versions.
Update the remote poll and remote stratum even for unsynchronised peers,
and handle a stratum of 0 as 16, so the peers work with the opposite
differences between their strata and can adjust their polling intervals
in order to interleave the packets.
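A one-line sketch of the stratum handling (hypothetical helper):

    /* Illustrative: an unsynchronised peer reports stratum 0, which is
       treated as 16 so that stratum comparisons between the two peers are
       consistent in both directions. */
    int effective_stratum(int stratum) {
      return stratum == 0 ? 16 : stratum;
    }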
When the poll value in a client request is smaller than the server's NTP
rate-limiting interval, set the poll field in the response to the
rate-limiting interval to suggest that the client increase its polling
interval.
This follows ntpd as a server. No current client implementation seems to
increase its interval based on the poll field, but that may change in
the future.
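A sketch of the response field handling (illustrative names):

    /* Illustrative: the poll field copied into the server's response is
       raised to the rate-limiting interval when the client's poll is
       shorter, suggesting a longer polling interval to the client. */
    int response_poll(int request_poll, int ratelimit_interval) {
      return request_poll < ratelimit_interval ? ratelimit_interval
                                               : request_poll;
    }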
It was never used for anything and messages in debug output already
include filenames, which can be easily grepped if there is a need
to see log messages only from a particular file.
While the measurements log can be useful for debugging problems in NTP
configuration (e.g. authentication failures with symmetric keys), it
seems most users are interested only in valid measurements (e.g. for
producing graphs) and don't expect/handle entries where some of the RFC
5905 tests 1-7 failed. Modify the measurements log option to log only
valid measurements, and for debugging purposes add a new rawmeasurements
option.
When the server's transmit timestamp was updated with a kernel/HW
timestamp, it didn't include the time smoothing offset. If the offset
was larger than one second, the update failed and clients using the
interleaved mode received less accurate timestamps. If the update
succeeded, the clients received timestamps that were not adjusted for
the time smoothing offset, which added an error of up to 0.5s/1s to
their measured offset/delay.
Fix the update to include the smoothing offset in the new timestamp.
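A sketch of the corrected update (the sign convention of the smoothing offset is an assumption here):

    /* Illustrative: a kernel/HW transmit timestamp replaces the previously
       saved daemon timestamp only after the time-smoothing offset has been
       applied to it, so both timestamps refer to the same (smoothed)
       timescale. */
    double smoothed_tx_timestamp(double hw_tx_ts, double smoothing_offset) {
      return hw_tx_ts + smoothing_offset;
    }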