Dividing the weights by the variance or unweighted variance seems to have
a significant negative impact on the response with normally distributed
network delays.
Divide by the difference between the mean and minimum distance instead.
It should be stable, as there is no loop, and the response seems to be a
good compromise between the original minimum distance weighting, which
works well with normally distributed delays, and the variance weighting,
which works well with exponentially distributed delays.
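
A minimal C sketch of one plausible reading of this weighting; the
squaring of the weight and all names here are illustrative assumptions,
not the actual implementation:

    #include <stddef.h>

    /* Weight samples by their distance above the minimum, normalized
       by (mean - min) instead of the variance.  Assumes n > 0. */
    void compute_weights(const double *dist, double *weight, size_t n)
    {
        double min = dist[0], mean = 0.0, spread;
        size_t i;

        for (i = 0; i < n; i++) {
            if (dist[i] < min)
                min = dist[i];
            mean += dist[i];
        }
        mean /= n;
        spread = mean - min;

        for (i = 0; i < n; i++) {
            double w = 1.0;
            if (spread > 0.0)
                w += (dist[i] - min) / spread;
            /* Squared so it can scale a variance in the regression;
               this is an assumption, not taken from the text above. */
            weight[i] = w * w;
        }
    }

When all distances are equal, spread is zero and every sample gets the
neutral weight 1.0, so the guard also keeps the scheme stable in the
degenerate case.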
This is apparently needed on systems which keep the nameservers specified
in /etc/resolv.conf even when there is no network connection. It should
be used with care, as invalid names will be retried forever.
Instead of always selecting the source with the minimum stratum, add a
weighted stratum to the distance when comparing selectable sources. The
weight can be configured with the new stratumweight directive (1.0 by
default) and can be set to zero to ignore the stratum completely.
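
A short C sketch of the comparison this describes; the struct and
function names are illustrative, not the actual internals:

    struct source {
        int stratum;
        double root_distance;       /* seconds */
    };

    static double stratum_weight = 1.0; /* stratumweight directive */

    /* Distance used when comparing selectable sources; with a zero
       weight this degenerates to plain root distance and the stratum
       is ignored completely. */
    static double effective_distance(const struct source *s)
    {
        return s->root_distance + s->stratum * stratum_weight;
    }

Under this sketch's assumption that distance is in seconds, the default
weight makes one extra stratum cost as much as one second of root
distance, so a close stratum-2 server can still win over a distant
stratum-1 server.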
It seems it triggers even with a single source if it's close and
accurate or the polling interval is long enough. It would need to
include max_clock_error.
Each source has a score against the currently selected source, which is
updated (multiplied by the ratio of their distances) when one of the two
sources gets a new sample. When the score reaches a limit, the source
will be selected. This should allow the source with the minimum distance
to be selected slowly, without frequent reselecting.
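
A C sketch of that update, assuming a neutral score of 1.0 and an
illustrative limit of 10; both constants and all names are assumptions,
not the actual implementation:

    #define SCORE_LIMIT 10.0        /* illustrative threshold */

    struct source {
        double distance;            /* current distance of the source */
        double sel_score;           /* score against selected source */
    };

    /* Called when 's' or the currently selected source gets a new
       sample. */
    static void update_sel_score(struct source *s,
                                 const struct source *selected)
    {
        /* The score grows while 's' is closer than the current choice
           and shrinks while it is not, never dropping below 1.0. */
        s->sel_score *= selected->distance / s->distance;
        if (s->sel_score < 1.0)
            s->sel_score = 1.0;

        if (s->sel_score >= SCORE_LIMIT) {
            /* 's' has been consistently closer for long enough:
               reselect it here. */
        }
    }

Because the ratio has to accumulate across many samples before the
limit is reached, a momentarily closer source does not cause a switch;
only a consistently closer one does.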
To avoid switching between sources with very variable distances (e.g. on
a LAN, or when an upstream server uses a longer polling interval),
sources that are not currently selected are penalized by a fixed
distance. This can be configured with the new reselectdist directive
(100 microseconds by default).
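
In terms of a sketch like the ones above, the penalty could be applied
when ranking sources (names again illustrative):

    static double reselect_distance = 100e-6; /* reselectdist default */

    /* Every source except the currently selected one carries a fixed
       handicap, so a competitor has to beat the selected source by
       more than reselect_distance before it can win. */
    static double ranking_distance(double distance, int is_selected)
    {
        return is_selected ? distance : distance + reselect_distance;
    }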
On start, when servers are reachable and use the same polling interval,
wait for them to have the same reachability register (which corresponds
to the number of samples in sourcestats) before selecting one.
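
A C sketch of such a start-up check, assuming 8-bit reachability
registers; the function name is illustrative:

    #include <stdbool.h>

    /* reach[] holds one 8-bit reachability register per source; a set
       bit is a reply received in one of the recent polls.  With equal
       polling intervals, equal registers mean equal sample counts. */
    static bool ready_to_select(const unsigned char *reach, int n)
    {
        int i, first = -1;

        for (i = 0; i < n; i++) {
            if (!reach[i])
                continue;       /* unreachable sources don't block */
            if (first < 0)
                first = i;
            else if (reach[i] != reach[first])
                return false;   /* sample counts still differ */
        }
        return true;
    }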