Precision Synchronization of Computer Network Clocks¹,²,³
David L. Mills
Electrical Engineering Department
University of Delaware
Abstract
This paper builds on previous work involving the Network Time Protocol, which is used to
synchronize computer clocks in the Internet. It describes a series of incremental improvements in
system hardware and software which result in significantly better accuracy and stability, especially
in primary time servers directly synchronized to radio or satellite time services. These improvements
include novel interfacing techniques and operating system features. The goal in this effort is to
improve the synchronization accuracy for fast computers and networks from the tens of milliseconds
regime of the present technology to the submillisecond regime of the future.
In order to assess how well these improvements work, a series of experiments is described in which
the error contributions of various modern Unix system hardware and software components are
calibrated. These experiments define the accuracy and stability expectations of the computer clock
and establish its design parameters with respect to time and frequency error tolerances. The paper
concludes that submillisecond accuracies are indeed practical, but that further improvements will be
possible only through the use of temperature-compensated local clock oscillators.
Keywords: disciplined oscillator, computer clock, network time synchronization.
1. Introduction
This is one of a series of reports and papers on the
technology of synchronizing clocks in computer net-
works. Previous works have described the Network Time Protocol (NTP) used to synchronize computer network clocks in the Internet [MIL91a], modeling and
analysis of computer clocks [MIL92b], the chronology
and metrology of network timescales [MIL91b], and
measurement programs designed to establish the accu-
racy, stability and reliability in service [MIL90]. This
paper, which is a condensation of [MIL93], presents a
series of design improvements in interface hardware,
input/output driver software and Unix operating system
kernel software which improve the accuracy and stability
of the local clock, especially when directly synchronized
via radio or satellite to national time standards. Included
are descriptions of engineered software refinements in
the form of modified driver and kernel code that reduce
jitter relative to a precision timing source to the order of
a few tens of microseconds and timekeeping accuracy for
workstations on a common Ethernet to the order of a few
hundred microseconds.
This paper begins with an introduction describing the
NTP architecture and protocol and the local clock, which
is modeled as a disciplined oscillator and implemented
as a phase-lock loop (PLL). It describes several methods
designed to reduce clock reading errors due to various
causes at the hardware, driver and operating system
level. Some of these methods involve new or modified
device drivers which reduce latencies well below the
original system design. Others allow the use of special
PPS and IRIG signals generated by some radio clocks,
together with the audio codec included in some worksta-
tions, to avoid the latencies involved in reading serial
ASCII timecodes. Still others involve surgery on the
timekeeping software of three different Unix kernels for
Sun Microsystems and Digital Equipment machines.
The paper continues with descriptions of several experi-
ments intended to calibrate the success of these improve-
ments with respect to accuracy and stability. They
establish the latencies in reading the local clock, the
errors accumulated in synchronizing one computer clock
1 Sponsored by: Advanced Research Projects Agency under NASA Ames Research Center contract NAG 2-638,
National Science Foundation grant NCR-93-01002 and U.S. Navy Surface Weapons Center under Northeastern
Center for Engineering Education contract A30327-93.
2 Author’s address: Electrical Engineering Department, University of Delaware, Newark, DE 19716; Internet mail:
mills@udel.edu.
3 Reprinted from: Mills, D.L. Precision synchronization of computer network clocks. ACM Computer Communication
Review 24, 2 (April 1994). 16 pp.

to another and the errors due to the intrinsic instability of
the local clock oscillator. The paper concludes that it is
indeed possible to achieve reliable synchronization to
within a few hundred microseconds on an Ethernet or
FDDI network using fast, modern workstations, and that
the most important factor in limiting the accuracy is the
stability of the local clock oscillator.
2. Network Time Protocol
The Network Time Protocol (NTP) is used by Internet
time servers and their clients to synchronize clocks, as
well as automatically organize and maintain the time
synchronization subnet itself. It is evolved from the Time
Protocol [POS83] and the ICMP Timestamp Message
[DAR81b], but is specifically designed for high accu-
racy, stability and reliability, even when used over typi-
cal Internet paths involving multiple gateways and
unreliable networks. This section contains an overview
of the architecture and algorithms used in NTP. A de-
tailed description of the architecture and service model
is contained in [MIL91a], while the current protocol
specification, designated NTP Version 3, is defined by
RFC-1305 [MIL92a]. A subset of the protocol, desig-
nated Simple Network Time Protocol (SNTP), is de-
scribed in RFC-1361 [MIL92c].
NTP and its implementations have evolved and prolifer-
ated in the Internet over the last decade, with NTP
Version 2 adopted as an Internet Standard (Recom-
mended) [MIL89] and its successor NTP Version 3
adopted as an Internet Standard (Draft) [MIL92a]. NTP is
built on the Internet Protocol (IP) [DAR81a] and User
Datagram Protocol (UDP) [POS80], which provide a
connectionless transport mechanism; however, it is read-
ily adaptable to other protocol suites. The protocol can
operate in several modes appropriate to different scenar-
ios involving private workstations, public servers and
various subnet configurations. A lightweight associa-
tion-management capability, including dynamic reacha-
bility and variable poll-interval mechanisms, is used to
manage state information and reduce resource require-
ments. Optional features include message authentication
based on DES and MD5 algorithms, as well as provisions
for remote control and monitoring.
In NTP one or more primary servers synchronize directly
to external reference sources such as radio clocks. Sec-
ondary time servers synchronize to the primary servers
and others in the synchronization subnet. A typical sub-
net is shown in Figure 1a, in which the nodes represent
subnet servers, with normal level or stratum numbers
determined by the hop count from the primary (stratum
1) server, and the heavy lines the active synchronization
paths and direction of timing information flow. The light
lines represent backup synchronization paths where tim-
ing information is exchanged, but not necessarily used to
synchronize the local clocks. Figure 1b shows the same
subnet, but with the line marked x out of service. The
subnet has reconfigured itself automatically to use
backup paths, with the result that one of the servers has
dropped from stratum 2 to stratum 3. In practice each
NTP server synchronizes with several other servers in
order to survive outages and Byzantine failures using
methods similar to those described in [SHI87].
Figure 2 shows the overall organization of the NTP time
server model, which has much in common with the
phase-lock methods summarized in [RAM90]. Times-
tamps exchanged between the server and possibly many
other subnet peers are used to determine individual
roundtrip delays and clock offsets, as well as provide
reliable error bounds. As shown in the figure, the com-
puted delays and offsets for each peer are processed by
the clock filter algorithm to reduce incidental time jitter.
As described in [MIL92a], this algorithm selects from
among the last several samples the one with minimum
delay and presents the associated offset as the output.
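As an illustration of that step, the sketch below selects the minimum-delay sample from a window of (delay, offset) pairs. It is a minimal rendering of the idea only and omits the shift-register bookkeeping and dispersion calculations of the full clock filter algorithm in [MIL92a]; the structure name and window handling are illustrative assumptions.

    #include <stddef.h>

    /* One (delay, offset) sample computed from a round of timestamps. */
    struct ntp_sample {
        double delay;    /* roundtrip delay, seconds */
        double offset;   /* clock offset, seconds */
    };

    /* Return the offset associated with the minimum-delay sample in the
     * window (n > 0), on the theory that the sample with the smallest
     * roundtrip delay is the one least corrupted by queueing jitter. */
    double clock_filter(const struct ntp_sample *win, size_t n)
    {
        size_t best = 0;
        for (size_t i = 1; i < n; i++)
            if (win[i].delay < win[best].delay)
                best = i;
        return win[best].offset;
    }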
Figure 1. Subnet Synchronization Topologies
Figure 2. Network Time Protocol (clock filters, clock selection, clock combining, loop filter and NCO forming the phase-locked oscillator)

The clock selection algorithm determines from among
all peers a suitable subset capable of providing the most
accurate and trustworthy time using principles similar to
those described in [VAS88]. This is done using a cascade
of two subalgorithms, one based on interval intersections
to cast out faulty peers [MAR85] and the other based on
clustering and maximum likelihood principles to im-
prove accuracy [MIL91a]. The resulting offsets of this
subset are first combined on a weighted-average basis
using the algorithm described in [MIL92a] and then
processed by a phase-lock loop (PLL) using the algo-
rithms described in [MIL92b]. In the PLL the combined
effects of the filtering, selection and combining opera-
tions are to produce a phase correction term, which is
processed by the loop filter to control the numeric-con-
trolled oscillator (NCO) frequency. The NCO is imple-
mented as an adjustable-rate counter using a
combination of hardware and software components. It
furnishes the phase (timing) reference to produce the
timestamps used in all timing calculations.
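The weighted-average combining step can be pictured with the short sketch below. In [MIL92a] the weights are derived from each survivor's estimated error; here, as an illustrative simplification, each offset is weighted directly by the reciprocal of that error, and the structure and function names are assumptions.

    #include <stddef.h>

    /* A surviving peer's offset estimate and its error bound. */
    struct survivor {
        double offset;   /* estimated clock offset, seconds */
        double error;    /* estimated error bound, seconds (> 0) */
    };

    /* Combine the survivors on a weighted-average basis, weighting each
     * offset by the reciprocal of its error bound so that the most
     * accurate peers dominate the result (n > 0). */
    double combine_offsets(const struct survivor *s, size_t n)
    {
        double wsum = 0.0, osum = 0.0;
        for (size_t i = 0; i < n; i++) {
            double w = 1.0 / s[i].error;
            wsum += w;
            osum += w * s[i].offset;
        }
        return osum / wsum;
    }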
Figure 3 shows how NTP timestamps are numbered and exchanged between peers A and B. Let T1, T2, T3, T4 be the values of the four most recent timestamps as shown and, without loss of generality, assume T3 > T2. Also, for the moment assume the clocks of A and B are stable and run at the same rate. Let a = T2 − T1 and b = T3 − T4. If the delay difference from A to B and from B to A, called differential delay, is small, the roundtrip delay δ and clock offset θ of B relative to A at time T4 are close to

δ = a − b and θ = (a + b)/2.
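Expressed as code, this computation is only a few lines. The sketch below assumes the four timestamps have already been converted to seconds as doubles; the function name is illustrative.

    /* Compute roundtrip delay and clock offset of peer B relative to A
     * from the four timestamps of Figure 3, all expressed in seconds:
     *   t1  request leaves A,  t2  request arrives at B,
     *   t3  reply leaves B,    t4  reply arrives at A.   */
    void ntp_delay_offset(double t1, double t2, double t3, double t4,
                          double *delay, double *offset)
    {
        double a = t2 - t1;        /* outbound leg, includes the offset */
        double b = t3 - t4;        /* inbound leg, includes the offset  */

        *delay  = a - b;           /* δ = (t2 - t1) - (t3 - t4)         */
        *offset = (a + b) / 2.0;   /* θ = ((t2 - t1) + (t3 - t4)) / 2   */
    }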
Each NTP message includes the latest three timestamps T1, T2 and T3, while the fourth timestamp T4 is determined upon arrival of the message. Thus, both peers A
and B can independently calculate delay and offset using
a single bidirectional message stream. This is a symmet-
ric, continuously sampled, time-transfer scheme similar
to those used in some digital telephone networks
[LIN80]. Among its advantages are that errors due to
missing or duplicated messages are avoided (see
[MIL92b] and [MIL93] for an extended discussion of
these issues and a comprehensive analysis of errors).
2.1. The NTP Local Clock Model
The Unix 4.3bsd clock model requires a periodic hard-
ware timer interrupt produced by an oscillator operating
in the 100-1000 Hz range. Each interrupt causes an
increment tick to be added to the kernel time variable.
The value of the increment is chosen so that the counter,
plus an initial offset established by the settimeofday()
call, is equal to the time of day in seconds and microsec-
onds. When the tick does not evenly divide the second in
microseconds, an additional increment fixtick is added to
the kernel time once each second to make up the differ-
ence.
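The tick/fixtick accounting can be summarized in a few lines of pseudo-kernel code. The sketch below is illustrative only: the 1024-Hz interrupt rate and the resulting tick = 976 µs and fixtick = 576 µs (since 1024 × 976 µs = 999424 µs) are assumed values chosen to show the case where the tick does not evenly divide the second.

    #include <sys/time.h>

    #define HZ      1024     /* assumed timer interrupt rate            */
    #define TICK     976     /* microseconds added per interrupt        */
    #define FIXTICK  576     /* shortfall added once per second         */

    static struct timeval kernel_time;   /* the kernel time variable    */
    static int ticks_this_second;

    /* Called on every hardware timer interrupt. */
    void hardclock(void)
    {
        kernel_time.tv_usec += TICK;

        if (++ticks_this_second >= HZ) {          /* once each second       */
            ticks_this_second = 0;
            kernel_time.tv_usec += FIXTICK;       /* make up the difference */
        }
        if (kernel_time.tv_usec >= 1000000) {     /* carry into seconds     */
            kernel_time.tv_usec -= 1000000;
            kernel_time.tv_sec++;
        }
    }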
The Unix clock can actually run at three different rates,
one at the intrinsic oscillator frequency, another at a
slightly higher frequency and a third at a slightly lower
frequency. The adjtime() system call can be used to
adjust the local clock to a given time offset. The argu-
ment is used to select which of the three rates and the interval Δt to run at that rate in order to amortize the specified offset.
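A minimal example of the call follows; the +25 ms correction is an arbitrary illustrative value, and the call ordinarily requires superuser privilege. A positive delta selects the slightly faster rate, a negative delta the slightly slower one.

    #include <stdio.h>
    #include <sys/time.h>

    int main(void)
    {
        struct timeval delta = { 0, 25000 };   /* amortize a +25 ms offset   */
        struct timeval olddelta;               /* any unfinished adjustment  */

        if (adjtime(&delta, &olddelta) < 0) {
            perror("adjtime");
            return 1;
        }
        printf("previous adjustment still pending: %ld.%06ld s\n",
               (long)olddelta.tv_sec, (long)olddelta.tv_usec);
        return 0;
    }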
The NTP local clock model described in [MIL92b] in-
corporates the Unix local clock as a disciplined oscillator
controlled by an adaptive-parameter, type-II phase-lock
loop. Its characteristics are determined by the transient
response of the loop filter, which for a type-II PLL
includes an integrator with a lead network for stability.
As a disciplining function for a computer clock, the NTP
model can be implemented as a sampled-data system
using a set of recurrence equations. A capsule overview
of the design extracted from [MIL92b] may be helpful in
understanding how the model operates.
The local clock is continuously adjusted in small increments at fixed adjustment intervals σ. The increments are computed from state variables representing the frequency offset f and phase offset g. These variables are determined from the timestamps in messages received at nominal update intervals µ, which are variable from about 16 s to over 17 minutes. As part of update processing, the compliance h is computed and used to adjust the time constant τ. Finally, the poll interval ρ for transmitted NTP messages is determined as a multiple of τ. Details on how τ is computed from h and how ρ is determined from τ are given in [MIL92a].
Figure 3. Measuring Delay and Offset

Figure 4. Update Nomenclature

Updates are numbered from zero, with those in the neighborhood of the ith update shown in Figure 4. All variables are initialized at i = 0 to zero. After an interval µ(i) = t(i) − t(i − 1) (i > 0) from the previous update, the ith update arrives at time t(i) including the time offset v_s(i). When the update v_s(i) is received, the frequency error f(i + 1) and phase error g(i + 1) are computed:

f(i + 1) = f(i) + µ(i) v_s(i) / τ²,   g(i + 1) = v_s(i) / τ.
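For concreteness, the two recurrences can be transcribed directly; the sketch below uses the variable names of the text and leaves the subsequent amortization of g to the adjustment process described below.

    /* PLL state carried between updates (after [MIL92b]). */
    struct pll_state {
        double f;     /* frequency offset f(i)   */
        double g;     /* phase offset g(i)       */
        double tau;   /* PLL time constant τ     */
    };

    /* Process the ith update: vs = v_s(i) is the measured time offset,
     * mu = µ(i) is the interval since the previous update. */
    void pll_update(struct pll_state *p, double vs, double mu)
    {
        p->f += mu * vs / (p->tau * p->tau);   /* f(i+1) = f(i) + µ(i)·v_s(i)/τ² */
        p->g  = vs / p->tau;                   /* g(i+1) = v_s(i)/τ              */
    }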
The factor τ in the above determines the PLL time
constant, which determines its response to transient time
and frequency changes relative to the disciplining
source. It is determined by the NTP daemon as a function
of prevailing time dispersions measured by the clock
filter and clock selection algorithms. When the disper-
sions have been low over some relatively long period, τ
is increased and the bandwidth is decreased. In this mode
small timing fluctuations due to jitter in the subnet are
suppressed and the PLL attains the most accurate phase
estimate. On the other hand, if the dispersions become
high due to network congestion or a systematic fre-
quency change, for example, τ is decreased and the
bandwidth is increased. In this mode the PLL is most
adaptive to transients due to these causes and others due
to system reboot or missed timer interrupts.
The NTP daemon simulates the above recurrence rela-
tions and provides offsets to the kernel at intervals of
σ = 1 s using the adjtime() system call and the ntp_ad-
jtime() system call described later. However, provisions
have to be made for the additional jitter which results
when the timer interval does not evenly divide the second
in microseconds. Also, since the adjustment process
must complete within 1 s, larger adjustments must be
parceled out in a series of system calls. Finally, provi-
sions must be made to compensate for the roundoff error
in computing Δt. These factors add to the error budget,
increase system overhead and complicate the daemon
implementation.
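The parceling of large corrections might look like the sketch below. This is not the daemon implementation: the 500-µs-per-second cap, the termination test and the use of sleep() to pace the σ = 1 s adjustment intervals are all simplifying assumptions made for illustration.

    #include <math.h>
    #include <unistd.h>
    #include <sys/time.h>

    #define MAX_SLICE 0.000500     /* assumed per-second slew cap, seconds */

    /* Apply 'offset' seconds of correction in per-second slices small
     * enough that each adjtime() adjustment completes within 1 s. */
    void amortize(double offset)
    {
        while (fabs(offset) > 1e-9) {
            double slice = offset;
            if (slice >  MAX_SLICE) slice =  MAX_SLICE;
            if (slice < -MAX_SLICE) slice = -MAX_SLICE;

            struct timeval d;
            d.tv_sec  = 0;
            d.tv_usec = (long)(slice * 1e6);
            if (d.tv_usec < 0) {            /* normalize negative adjustments */
                d.tv_sec = -1;
                d.tv_usec += 1000000;
            }
            adjtime(&d, NULL);              /* error handling omitted here    */

            offset -= slice;
            sleep(1);                       /* next adjustment interval σ = 1 s */
        }
    }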
3. Hardware and Software Interfaces for Precision Timekeeping
It has been demonstrated in previous work cited that it is
possible using NTP to synchronize a number of hosts on
an Ethernet or a moderately loaded T1 network within a
few tens of milliseconds with careful selection of timing
sources and the configuration of the time servers on the
network. This may be adequate for the majority of appli-
cations; however, modern workstations and high speed
networks can do much better than that, generally to
within some fraction of a millisecond, by taking special
care in the design of the hardware and software inter-
faces. The following sections discuss issues related to the
design of interfaces for external time sources such as
radio clocks and associated timing signals.
3.1. Interfaces for the ASCII Timecode
Most radio clocks produce an ASCII timecode with a
resolution of 1 ms. Depending on the system implemen-
tation, the maximum reading errors range from one to ten
milliseconds. For systems with microsecond-resolution
local clocks, this results in a maximum peak-to-peak
(p-p) jitter of 1 ms. However, assuming the read requests
are statistically independent of the clock update times,
the average over a large number of readings will make
the clock appear 0.5 ms late. To compensate for this, it
is only necessary to add 0.5 ms to the reading before
further processing by the NTP algorithms. For example,
Figure 5 shows the time offsets between a WWVB
receiver and the local clock over a typical day. The
readings are distributed over the approximate interval
-400 to -1400 µs, with mean about -900 µs; thus, with
the above assumptions, the true offset of the radio clock
is -400 µs.
Radio clocks are usually connected to the host computer
using a serial port operating at a speed of 9600 bps. The
on-time reference epoch for the timecode is usually the
beginning of the start bit of a designated character of the
timecode. The UART chip implementing the serial port
most often has a sample clock of eight to 16 times the
basic bit rate. Assuming the sample clock starts midway
in the start bit and continues to midway in the first stop
bit and there are eight bits per character, this creates a
processing delay of 9.5 bit times, or about 1 ms relative
to the start bit of the character. The jitter contribution is
usually no more than a couple of sample clock periods,
or about 26 µs p-p. This is small compared to the clock
reading jitter and can be ignored. Thus, the UART delay
can be considered constant, so the hardware contribution
to the total mean delay budget is 0.5 + 1.0 = 1.5 ms.
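The arithmetic behind that budget is summarized below; the figures simply restate the assumptions of the preceding paragraph (9600 bps, eight data bits, character available at mid-stop-bit, plus the 0.5-ms mean timecode lateness).

    #include <stdio.h>

    int main(void)
    {
        const double bps = 9600.0;
        /* Start bit (1) + 8 data bits + half the stop bit (0.5) = 9.5 bit
         * times from the on-time start-bit edge to character delivery. */
        double uart_delay    = 9.5 / bps;     /* ≈ 0.99 ms                    */
        double timecode_bias = 0.0005;        /* mean 0.5 ms reading lateness */

        printf("UART delay:       %.2f ms\n", uart_delay * 1e3);
        printf("total mean delay: %.2f ms\n", (uart_delay + timecode_bias) * 1e3);
        return 0;
    }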
In some kernel serial port drivers, in particular, the Sun
zs driver, an intentional delay is introduced when char-
acters are received after an idle period. A batch of char-
MJD 49117 Time (s)
Offset (us)
0 20000 40000 60000 80000
-1500 -1000 -500
Figure 5. Time Offsets with Serial ASCII Timecode
4

acters is passed to the calling program when either (a) a
timeout in the neighborhood of 10 ms expires or (b) an
input buffer fills up. The intent in this design is to reduce
the interrupt load on the processor by batching the char-
acters where possible. Obviously, this can cause severe
problems for precision timekeeping. Judah Levine of the
National Institute of Standards and Technology (NIST) has developed patches for the zs driver which fix this problem for the native serial ports of the Sun SPARCstation⁴.
Good timekeeping depends strongly on the means avail-
able to capture an accurate timestamp at the instant the
stop bit of the on-time character is found; therefore, the
code path delay between the character interrupt routine
and the first place a timestamp can be captured is very
important, since on some systems, such as Sun
SPARCstations, this path can be astonishingly long. The
Unix scheduling mechanisms involve both a hardware
interrupt queue and a software interrupt queue. Entries
are made on the hardware queue as the interrupt is
signaled and generally with the lowest latency, estimated
at 20-30 µs for a Sun SPARCstation IPC⁵. Then, after
minimal processing, an entry is made on the software
queue for later processing in order of software interrupt
priority. Finally, the software interrupt unblocks the NTP
daemon, which then calculates the current local clock
offset and introduces corrections as required.
Opportunities exist to capture timestamps at the hard-
ware interrupt time, software interrupt time and at the
time the NTP daemon is activated, but these involve
various degrees of kernel trespass and hardware gim-
micks. To gain some idea of the severity of the errors
introduced at each of these stages, measurements were
made using a Sun IPC and a test setup that results in an
error between the local clock and a precision timing
source (calibrated cesium clock) no greater than 0.1 ms.
The total delay from the on-time epoch to when the NTP
daemon is activated was measured at 8.3 ms in an other-
wise idle system, but increased on rare occasion to over
25 ms under load, even when the NTP daemon was
operated at a relatively high software priority level. Since
1.5 ms of the total delay is due to the hardware, the
remaining 6.8 ms represents the total code path delay
accounting for all software processing from the hardware
interrupt to the NTP daemon.
On Unix systems which include support for the SIGIO
facility, it is possible to intervene at the time the software
interrupt is serviced. The NTP daemon code uses this
facility, when available, to capture a timestamp and save
it along with the timecode data in a buffer for later
processing. This reduces the total code path delay from
6.8 ms to 3.5 ms on an otherwise idle system. This design
is used for all input processing, including network inter-
faces and serial ports.
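In outline, SIGIO-based capture looks like the sketch below. The fcntl() calls that request asynchronous notification are standard, but the device name is a placeholder and the buffer handling is greatly simplified relative to the daemon's input code; strictly speaking gettimeofday() is not guaranteed async-signal-safe, which this sketch ignores.

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/time.h>

    static int clk_fd;                 /* serial port carrying the timecode */
    static struct timeval rx_stamp;    /* timestamp taken in the handler    */
    static char rx_buf[256];
    static ssize_t rx_len;

    /* SIGIO handler: capture the time as early as possible, then drain
     * the pending characters for the daemon to parse later. */
    static void sigio_handler(int sig)
    {
        (void)sig;
        gettimeofday(&rx_stamp, NULL);
        rx_len = read(clk_fd, rx_buf, sizeof(rx_buf));
    }

    int main(void)
    {
        clk_fd = open("/dev/ttya", O_RDONLY | O_NONBLOCK);  /* placeholder device */
        if (clk_fd < 0) { perror("open"); return 1; }

        struct sigaction sa;
        memset(&sa, 0, sizeof(sa));
        sa.sa_handler = sigio_handler;
        sigaction(SIGIO, &sa, NULL);

        /* Direct SIGIO for this descriptor to this process and enable
         * asynchronous notification on input. */
        fcntl(clk_fd, F_SETOWN, getpid());
        fcntl(clk_fd, F_SETFL, fcntl(clk_fd, F_GETFL) | O_ASYNC);

        for (;;)
            pause();               /* real daemon work happens elsewhere */
    }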
By far the best place to capture a serial-port timestamp
is right in the kernel interrupt routine, but this generally
requires intruding in the kernel code itself, which can be
intricate and architecture dependent. The next best place
is in some routine close to the interrupt routine on the
code path. There are two ways to do this, depending on
the ancestry of the Unix operating system variant. Older
systems based primarily on the original Unix 4.3bsd
support line discipline modules, which are hunks of code
with more-or-less well defined interface specifications
that can get in the way, so to speak, of the code path
between the interrupt routine and the remainder of the
serial port processing. Newer systems based on System
V Streams can do the same thing using streams modules.
Both approaches are supported in the NTP daemon im-
plementation. The CLK line discipline and streams mod-
ule operate in the same way. They look for a designated
character, usually <CR>, and stuff a Unix timeval times-
tamp in the data stream following that character when-
ever it is found. Eventually, the data arrive at the clock
driver, which then extracts the timestamp as the actual
time of arrival. In order to gain some insight as to the
effectiveness of this approach, measurements were made
using the same test setup described above. The total delay
from the on-time epoch to the instant when the timestamp
is captured was measured at 3.5 ms. Thus, the net code
path delay is this value less the hardware delay: 3.5 − 1.5 = 2.0 ms. This represents close to the best that can be
achieved using the ASCII timecode.
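A user-space caricature of what the CLK module does is sketched below; the real line discipline and streams module run inside the kernel on the serial input path, so this sketch only illustrates the data transformation, with the designated character and function name as assumptions.

    #include <string.h>
    #include <sys/time.h>

    #define ONTIME_CHAR '\r'   /* designated on-time character, usually <CR> */

    /* Copy 'in' (len bytes) into 'out' and, immediately after each
     * occurrence of the on-time character, splice in a binary
     * struct timeval captured at that moment, mimicking the CLK line
     * discipline / streams module.  Returns the number of bytes written;
     * 'out' needs room for len plus sizeof(struct timeval) per match. */
    size_t clk_stamp(const char *in, size_t len, char *out)
    {
        size_t o = 0;
        for (size_t i = 0; i < len; i++) {
            out[o++] = in[i];
            if (in[i] == ONTIME_CHAR) {
                struct timeval tv;
                gettimeofday(&tv, NULL);
                memcpy(out + o, &tv, sizeof(tv));
                o += sizeof(tv);
            }
        }
        return o;
    }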
3.2. Interfaces for the PPS Signal
Many radio clocks produce a 1 pulse-per-second (PPS)
signal of considerably better precision than the ASCII
timecode. Using this signal, it is possible to avoid the
1-ms p-p jitter and 1.5 ms hardware timecode adjustment
entirely. However, a device called a gadget box is re-
quired to interface this signal to the hardware and oper-
ating system. The gadget box includes a level converter
and pulse generator that turns the PPS signal on-time
transition into a valid character. Although many different
circuit designs could be used, a typical design generates
a single 26-µs start bit for each PPS signal on-time
transition. This appears to the UART operating at 38.4K
bps as an ASCII DEL (hex FF).
The character resulting from each PPS signal on-time
transition is intercepted by the CLK facility and a times-
4 Judah Levine, personal communication
5 Craig Leres, personal communication

References (partial)
[MIL91a] Mills, D.L. Internet time synchronization: the Network Time Protocol.
[MIL92a] Mills, D.L. Network Time Protocol (Version 3) Specification, Implementation and Analysis (RFC-1305).
[POS80] Postel, J. User Datagram Protocol (RFC-768).
[RAM90] Ramanathan, P., Shin, K.G., and Butler, R.W. Fault-tolerant clock synchronization in distributed systems.