scispace - formally typeset
Author

J.A. Barnes

Bio: J.A. Barnes is an academic researcher from the National Institute of Standards and Technology. The author has contributed to research in topics: Frequency drift & Modified Allan variance. The author has an h-index of 7 and has co-authored 14 publications receiving 1,293 citations.

Papers
Book
28 Oct 2017
TL;DR: In this article, the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis, is defined as a measure of frequency stability.
Abstract: Consider a signal generator whose instantaneous output voltage V(t) may be written as V(t) = [V₀ + ε(t)] sin[2πν₀t + φ(t)], where V₀ and ν₀ are the nominal amplitude and frequency, respectively, of the output. Provided that ε(t) and φ̇(t) = dφ/dt are sufficiently small for all time t, one may define the fractional instantaneous frequency deviation from nominal by the relation y(t) = φ̇(t)/(2πν₀). A proposed definition for the measure of frequency stability is the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis. An alternative definition for the measure of stability is the infinite time average of the sample variance of two adjacent averages of y(t); that is, if ȳ_k = (1/τ) ∫ from t_k to t_k + τ of y(t) dt, where τ is the averaging period, t_{k+1} = t_k + T, k = 0, 1, 2, ..., t₀ is arbitrary, and T is the time interval between the beginnings of two successive measurements of average frequency, then the second measure of stability is σ_y²(τ) ≡ ⟨(ȳ_{k+1} − ȳ_k)²/2⟩, where ⟨·⟩ denotes infinite time average and where T = τ. In practice, data records are of finite length and the infinite time averages implied in the definitions are normally not available; thus estimates for the two measures must be used. Estimates of S_y(f) would be obtained from suitable averages either in the time domain or the frequency domain.

725 citations
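The two-sample (Allan) variance defined in the abstract above can be estimated from a finite record of fractional-frequency samples. A minimal sketch in Python, assuming evenly spaced samples and no dead time between intervals (function and variable names are illustrative, not from the paper):

```python
def allan_variance(y, m=1):
    """Two-sample (Allan) variance estimate from fractional-frequency data.

    y : evenly spaced fractional-frequency samples at base interval tau0
    m : number of samples averaged per interval, so tau = m * tau0
    Assumes adjacent averaging intervals with no dead time (T = tau).
    """
    # Non-overlapping averages of m consecutive samples: the y-bar_k values
    ybar = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
    # Half the mean squared first difference of adjacent averages
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(len(ybar) - 1)]
    return sum(diffs) / (2 * len(diffs))
```

The finite average over `diffs` stands in for the infinite time average of the definition, as the abstract notes must be done in practice.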

Proceedings ArticleDOI
27 May 1981
TL;DR: In this article, the modified Allan variance has been used to distinguish white phase noise (α = +2) from flicker phase noise (α = +1), which are common for the short-term instabilities of quartz crystal oscillators and active hydrogen masers.
Abstract: Time and Frequency Division, National Bureau of Standards, Boulder, Colorado 80303. Summary: Heretofore, the Allan variance, σ_y²(τ), has become the de facto standard for measuring oscillator instability in the time domain. Often oscillator frequency instabilities are reasonably modelable with a power-law spectrum: S_y(f) ∝ f^α, where y is the normalized frequency, f is the Fourier frequency, and α is a constant over some range of Fourier frequencies. It has been shown that for a power-law spectrum σ_y²(τ) ∝ τ^μ and that μ = −α − 1 for −3 < α < 1.

263 citations
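The modified Allan variance averages m successive second differences of the phase before squaring, which is what restores the ability to separate white phase noise from flicker phase noise. A sketch from time-error (phase) data, assuming the standard estimator form (names are illustrative):

```python
def mod_allan_variance(x, m, tau0=1.0):
    """Modified Allan variance Mod sigma_y^2(tau) from phase data, tau = m * tau0.

    x : time-error (phase) samples at interval tau0
    m : averaging factor
    """
    n = len(x)
    terms = []
    # Average m successive second differences of the phase, then square
    for j in range(n - 3 * m + 1):
        s = sum(x[i + 2 * m] - 2 * x[i + m] + x[i] for i in range(j, j + m))
        terms.append(s * s)
    return sum(terms) / (2 * (m * tau0) ** 2 * m ** 2 * len(terms))
```

For m = 1 the inner average is a single second difference and the estimate reduces to the ordinary Allan variance computed from phase data.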

ReportDOI
01 Jan 1990
TL;DR: In this paper, the effect of distributed dead time on the convergence of the two-sample variance has been investigated, and a new bias function, B₃(2,M,r,μ), has been proposed to handle the commonly occurring cases.
Abstract: The accepted definition of frequency stability in the time domain is the two-sample variance (or Allan variance). It is based on the measurement of average frequencies over adjacent time intervals, with no "dead time" between the intervals. The primary advantages of the Allan variance are that (1) it is convergent for many encountered noise models for which the conventional variance is divergent; (2) it can distinguish between many important and different spectral noise types; (3) the two-sample approach relates to many practical implementations, for example, the rms change of an oscillator's frequency from one period to the next; and (4) Allan variances can be easily estimated at integer multiples of the sample interval. In 1974 a table of bias functions which related variance estimates with various configurations of number of samples and dead time to the Allan variance was published [1]. The tables were based on noises with pure power-law spectral densities. Often situations occur that unavoidably have dead time between measurements, but still the conventional variances are not convergent. Some of these applications are outside of the time-and-frequency field. Also, the dead times are often distributed throughout a given average, and this distributed dead time is not treated in the 1974 tables. This paper reviews the bias functions B₁(N,r,μ) and B₂(r,μ) and introduces a new bias function, B₃(2,M,r,μ), to handle the commonly occurring cases of the effect of distributed dead time on the computed variances. Some convenient and easy-to-interpret asymptotic limits are reported. A set of tables for the bias functions is included at the end of this paper.

33 citations
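When a measurement system imposes dead time between successive frequency averages, the naive two-sample estimate no longer equals the dead-time-free Allan variance, and bias functions such as B₂ supply the correction for power-law noises. A hypothetical sketch of the biased estimator itself (the correction factors come from the paper's tables, not from this code):

```python
def two_sample_variance_dead_time(y, m, dead):
    """Two-sample variance with dead time between averaging intervals.

    y    : fractional-frequency samples at base interval tau0
    m    : samples averaged per interval (tau = m * tau0)
    dead : samples skipped between intervals, so T = (m + dead) * tau0
    Returns the biased estimate; B2-type bias functions relate it to
    the dead-time-free Allan variance for power-law noise models.
    """
    step = m + dead  # interval start-to-start spacing in samples
    ybar = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, step)]
    diffs = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(len(ybar) - 1)]
    return sum(diffs) / (2 * len(diffs))
```

Setting `dead = 0` recovers the ordinary adjacent-interval (T = τ) case.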


Cited by
Book
01 Feb 2006
TL;DR: A text covering wavelet analysis of finite energy signals, random variables and stochastic processes, analysis and synthesis of long memory processes, and the wavelet variance.
Abstract: 1. Introduction to wavelets 2. Review of Fourier theory and filters 3. Orthonormal transforms of time series 4. The discrete wavelet transform 5. The maximal overlap discrete wavelet transform 6. The discrete wavelet packet transform 7. Random variables and stochastic processes 8. The wavelet variance 9. Analysis and synthesis of long memory processes 10. Wavelet-based signal estimation 11. Wavelet analysis of finite energy signals Appendix. Answers to embedded exercises References Author index Subject index.

2,734 citations
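The wavelet variance treated in this book generalizes the clock statistics above: with the usual normalization, the unit-scale Haar wavelet variance of fractional-frequency data is half the Allan variance. A small sketch checking that identity (estimator names are illustrative, and the factor of two assumes the Haar detail coefficient d_k = (y[k+1] − y[k]) / 2):

```python
def haar_wavelet_variance(y):
    """Unit-scale Haar wavelet variance: mean squared Haar detail
    coefficient d_k = (y[k+1] - y[k]) / 2 over all adjacent pairs."""
    d = [(y[k + 1] - y[k]) / 2 for k in range(len(y) - 1)]
    return sum(c * c for c in d) / len(d)

def allan_variance(y):
    """Two-sample (Allan) variance at the base interval: half the mean
    squared first difference of adjacent frequency samples."""
    d2 = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    return sum(d2) / (2 * len(d2))

# With this normalization the Allan variance equals twice the
# unit-scale Haar wavelet variance.
y = [1.0, 3.0, 2.0, 5.0]
assert abs(allan_variance(y) - 2 * haar_wavelet_variance(y)) < 1e-12
```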

Journal ArticleDOI
TL;DR: In this article, the authors present the first systematic, extensive error analysis of the spacecraft radio occultation technique using a combination of analytical and simulation methods to establish a baseline accuracy for retrieved profiles of refractivity, geopotential, and temperature.
Abstract: The implementation of the Global Positioning System (GPS) network of satellites and the development of small, high-performance instrumentation to receive GPS signals have created an opportunity for active remote sounding of the Earth's atmosphere by radio occultation at comparatively low cost. A prototype demonstration of this capability has now been provided by the GPS/MET investigation. Despite using relatively immature technology, GPS/MET has been extremely successful [Ware et al., 1996; Kursinski et al., 1996], although there is still room for improvement. The aim of this paper is to develop a theoretical estimate of the spatial coverage, resolution, and accuracy that can be expected for atmospheric profiles derived from GPS occultations. We consider observational geometry, attenuation, and diffraction in defining the vertical range of the observations and their resolution. We present the first systematic, extensive error analysis of the spacecraft radio occultation technique using a combination of analytical and simulation methods to establish a baseline accuracy for retrieved profiles of refractivity, geopotential, and temperature. Typically, the vertical resolution of the observations ranges from 0.5 km in the lower troposphere to 1.4 km in the middle atmosphere. Results indicate that useful profiles of refractivity can be derived from ∼60 km altitude to the surface with the exception of regions less than 250 m in vertical extent associated with high vertical humidity gradients. Above the 250 K temperature level in the troposphere, where the effects of water are negligible, sub-Kelvin temperature accuracy is predicted up to ∼40 km depending on the phase of the solar cycle. Geopotential heights of constant pressure levels are expected to be accurate to ∼10 m or better between 10 and 20 km altitudes. Below the 250 K level, the ambiguity between water and dry atmosphere refractivity becomes significant, and temperature accuracy is degraded.
Deep in the warm troposphere the contribution of water to refractivity becomes sufficiently large for the accurate retrieval of water vapor given independent temperatures from weather analyses [Kursinski et al., 1995]. The radio occultation technique possesses a unique combination of global coverage, high precision, high vertical resolution, insensitivity to atmospheric particulates, and long-term stability. We show here how these properties are well suited for several applications including numerical weather prediction and long-term monitoring of the Earth's climate.

1,249 citations
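The retrieval chain analyzed in this paper rests on the standard two-term expression for microwave refractivity, the Smith-Weintraub relation; the constants below are the conventional values, not taken from this abstract. A sketch showing why the wet term dominates the water/dry ambiguity in the warm lower troposphere:

```python
def refractivity(p_hpa, t_kelvin, e_hpa):
    """Microwave refractivity N = (n - 1) * 1e6 via the Smith-Weintraub
    relation: a dry term from total pressure P and a wet term from the
    water-vapor partial pressure e (both in hPa, temperature in K)."""
    return 77.6 * p_hpa / t_kelvin + 3.73e5 * e_hpa / t_kelvin ** 2

# Warm, moist lower troposphere: the wet term is a large fraction of N,
# which is the water/dry ambiguity the abstract describes below the
# 250 K level.
n_dry = refractivity(1000.0, 300.0, 0.0)    # dry air only
n_wet = refractivity(1000.0, 300.0, 30.0)   # with 30 hPa of water vapor
```

Above the 250 K level the wet term is negligible, so N depends on P and T alone and temperature can be retrieved without ambiguity.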

Journal ArticleDOI
TL;DR: A tutorial review of some time-domain methods of characterizing the performance of precision clocks and oscillators is presented, and both the systematic and random deviations are considered.
Abstract: A tutorial review of some time-domain methods of characterizing the performance of precision clocks and oscillators is presented. Characterizing both the systematic and random deviations is considered. The Allan variance and the modified Allan variance are defined, and methods of utilizing them are presented along with ranges and areas of applicability. The standard deviation is contrasted and shown not to be, in general, a good measure for precision clocks and oscillators. Once a proper characterization model has been developed, then optimum estimation and prediction techniques can be employed. Some important cases are illustrated. As precision clocks and oscillators become increasingly important in society, communication of their characteristics and specifications among the vendors, manufacturers, design engineers, managers, and metrologists of this equipment becomes increasingly important.

784 citations
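The contrast the review draws between the standard deviation and the Allan variance is easy to see with a pure frequency drift, a systematic deviation common in real oscillators: the standard deviation grows without bound as the record lengthens, while the Allan variance stays fixed. A minimal illustration with an assumed unit-slope drift (chosen so the arithmetic is exact), not data from the paper:

```python
import statistics

def allan_variance(y):
    """Two-sample (Allan) variance at the base sampling interval."""
    d2 = [(y[k + 1] - y[k]) ** 2 for k in range(len(y) - 1)]
    return sum(d2) / (2 * len(d2))

# Pure linear frequency drift y_k = k, unit slope for exact arithmetic.
short_rec = [float(k) for k in range(10)]
long_rec = [float(k) for k in range(100)]

# The standard deviation keeps growing with record length...
sd_grows = statistics.stdev(long_rec) > statistics.stdev(short_rec)
# ...while the Allan variance is slope^2 / 2 for any record length.
av_short = allan_variance(short_rec)
av_long = allan_variance(long_rec)
```

This is one sense in which the standard deviation is "not a good measure" here: its value depends on how long you watch the clock.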

Journal ArticleDOI
TL;DR: The theoretical basis for the Allan variance for modeling the inertial sensors' error terms and its implementation in modeling different grades of inertial sensor units are covered.
Abstract: It is well known that inertial navigation systems can provide high-accuracy position, velocity, and attitude information over short time periods. However, their accuracy rapidly degrades with time. The requirements for an accurate estimation of navigation information necessitate the modeling of the sensors' error components. Several variance techniques have been devised for stochastic modeling of the error of inertial sensors. They are basically very similar and primarily differ in that various signal processings, by way of weighting functions, window functions, etc., are incorporated into the analysis algorithms in order to achieve a particular desired result for improving the model characterizations. The simplest is the Allan variance. The Allan variance is a method of representing the root mean square (RMS) random-drift error as a function of averaging time. It is simple to compute and relatively simple to interpret and understand. The Allan variance method can be used to determine the characteristics of the underlying random processes that give rise to the data noise. This technique can be used to characterize various types of error terms in the inertial-sensor data by performing certain operations on the entire length of data. In this paper, the Allan variance technique will be used in analyzing and modeling the error of the inertial sensors used in different grades of the inertial measurement units. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of various random errors contained in the inertial-sensor output data. Being a directly measurable quantity, the Allan variance can provide information on the types and magnitude of the various error terms. This paper covers both the theoretical basis for the Allan variance for modeling the inertial sensors' error terms and its implementation in modeling different grades of inertial sensors.

741 citations
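The "characteristic curve" described above is the Allan deviation evaluated over a range of averaging times; plotted log-log, its local slope identifies the dominant error term (for example, roughly −1/2 where angle or velocity random walk dominates, flattening to 0 at the bias-instability floor). A sketch of the curve computation, with illustrative names:

```python
def allan_deviation_curve(y, factors):
    """Allan deviation at several averaging factors m (tau = m * tau0).

    y       : evenly spaced sensor-output samples (e.g. gyro rate)
    factors : list of averaging factors m to evaluate
    Returns a list of (m, adev) pairs; on a log-log plot the local
    slope of adev vs. tau indicates the dominant noise process.
    """
    curve = []
    for m in factors:
        # Non-overlapping m-sample averages of the sensor output
        ybar = [sum(y[i:i + m]) / m for i in range(0, len(y) - m + 1, m)]
        d2 = [(ybar[k + 1] - ybar[k]) ** 2 for k in range(len(ybar) - 1)]
        curve.append((m, (sum(d2) / (2 * len(d2))) ** 0.5))
    return curve
```

In practice the factors are chosen octave-spaced (1, 2, 4, ...) so the curve spans several decades of averaging time from one static data record.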
