Author

Leonard S. Cutler

Bio: Leonard S. Cutler is an academic researcher from Physical Research Laboratory. The author has contributed to research in topics: Sample variance. The author has an h-index of 2 and has co-authored 2 publications receiving 727 citations.

Papers
Book
28 Oct 2017
TL;DR: In this article, a proposed measure of frequency stability is defined as the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis.
Abstract: Consider a signal generator whose instantaneous output voltage V(t) may be written as V(t) = [V₀ + ε(t)] sin[2πν₀t + φ(t)], where V₀ and ν₀ are the nominal amplitude and frequency, respectively, of the output. Provided that ε(t) and φ̇(t) = dφ/dt are sufficiently small for all time t, one may define the fractional instantaneous frequency deviation from nominal by the relation y(t) = φ̇(t)/(2πν₀). A proposed definition for the measure of frequency stability is the spectral density S_y(f) of the function y(t), where the spectrum is considered to be one-sided on a per-hertz basis. An alternative definition for the measure of stability is the infinite time average of the sample variance of two adjacent averages of y(t); that is, if ȳ_k = (1/τ) ∫ from t_k to t_k + τ of y(t) dt, where τ is the averaging period, t_{k+1} = t_k + T, k = 0, 1, 2, ..., t_0 is arbitrary, and T is the time interval between the beginnings of two successive measurements of average frequency, then the second measure of stability is σ_y²(τ) ≡ ⟨(ȳ_{k+1} − ȳ_k)²/2⟩, where ⟨ ⟩ denotes infinite time average and where T = τ. In practice, data records are of finite length and the infinite time averages implied in the definitions are normally not available; thus estimates for the two measures must be used. Estimates of S_y(f) would be obtained from suitable averages either in the time domain or the frequency domain.
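The two-sample variance defined above is straightforward to estimate from a finite record of adjacent, dead-time-free frequency averages (T = τ). Below is a minimal NumPy sketch; the function name and the white-noise test signal are illustrative, not taken from the paper.

```python
import numpy as np

def two_sample_variance(y_bar):
    """Estimate sigma_y^2(tau) from adjacent fractional-frequency averages
    y_bar[k], each taken over the same averaging period tau with no dead
    time (T = tau): a finite-data version of <(y_{k+1} - y_k)^2 / 2>."""
    y_bar = np.asarray(y_bar, dtype=float)
    diffs = np.diff(y_bar)            # y_{k+1} - y_k
    return 0.5 * np.mean(diffs ** 2)

# Example: for white frequency noise, sigma_y^2(tau) should come out close to
# the variance of the individual averages (here (1e-12)^2 = 1e-24).
rng = np.random.default_rng(0)
y_bar = rng.normal(0.0, 1e-12, size=10_000)
print(two_sample_variance(y_bar))
```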

725 citations


Cited by
Book
01 Feb 2006
TL;DR: Covers wavelet analysis of finite-energy signals, random variables and stochastic processes, the wavelet variance, and the analysis and synthesis of long-memory processes.
Abstract: 1. Introduction to wavelets 2. Review of Fourier theory and filters 3. Orthonormal transforms of time series 4. The discrete wavelet transform 5. The maximal overlap discrete wavelet transform 6. The discrete wavelet packet transform 7. Random variables and stochastic processes 8. The wavelet variance 9. Analysis and synthesis of long memory processes 10. Wavelet-based signal estimation 11. Wavelet analysis of finite energy signals Appendix. Answers to embedded exercises References Author index Subject index.

2,734 citations

Journal ArticleDOI
TL;DR: In this article, the authors present the first systematic, extensive error analysis of the spacecraft radio occultation technique using a combination of analytical and simulation methods to establish a baseline accuracy for retrieved profiles of refractivity, geopotential, and temperature.
Abstract: The implementation of the Global Positioning System (GPS) network of satellites and the development of small, high-performance instrumentation to receive GPS signals have created an opportunity for active remote sounding of the Earth's atmosphere by radio occultation at comparatively low cost. A prototype demonstration of this capability has now been provided by the GPS/MET investigation. Despite using relatively immature technology, GPS/MET has been extremely successful [Ware et al., 1996; Kursinski et al., 1996], although there is still room for improvement. The aim of this paper is to develop a theoretical estimate of the spatial coverage, resolution, and accuracy that can be expected for atmospheric profiles derived from GPS occultations. We consider observational geometry, attenuation, and diffraction in defining the vertical range of the observations and their resolution. We present the first systematic, extensive error analysis of the spacecraft radio occultation technique using a combination of analytical and simulation methods to establish a baseline accuracy for retrieved profiles of refractivity, geopotential, and temperature. Typically, the vertical resolution of the observations ranges from 0.5 km in the lower troposphere to 1.4 km in the middle atmosphere. Results indicate that useful profiles of refractivity can be derived from ∼60 km altitude to the surface with the exception of regions less than 250 m in vertical extent associated with high vertical humidity gradients. Above the 250 K altitude level in the troposphere, where the effects of water are negligible, sub-Kelvin temperature accuracy is predicted up to ∼40 km depending on the phase of the solar cycle. Geopotential heights of constant pressure levels are expected to be accurate to ∼10 m or better between 10 and 20 km altitudes. Below the 250 K level, the ambiguity between water and dry atmosphere refractivity becomes significant, and temperature accuracy is degraded. Deep in the warm troposphere the contribution of water to refractivity becomes sufficiently large for the accurate retrieval of water vapor given independent temperatures from weather analyses [Kursinski et al., 1995]. The radio occultation technique possesses a unique combination of global coverage, high precision, high vertical resolution, insensitivity to atmospheric particulates, and long-term stability. We show here how these properties are well suited for several applications including numerical weather prediction and long-term monitoring of the Earth's climate.
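The dry/wet ambiguity discussed above can be made concrete with the standard two-term microwave refractivity expression N = 77.6 P/T + 3.73×10⁵ e/T² (P and e in hPa, T in K). The Python sketch below uses that textbook relation with made-up atmospheric values; the constants, function names, and numbers are illustrative and are not taken from the paper.

```python
def refractivity(P_hPa, T_K, e_hPa):
    """Two-term microwave refractivity: N = 77.6*P/T + 3.73e5*e/T^2."""
    return 77.6 * P_hPa / T_K + 3.73e5 * e_hPa / T_K ** 2

def dry_temperature(N, P_hPa):
    """Invert the dry term only; reasonable where water vapor is negligible
    (the cold regions, roughly above the 250 K level discussed above)."""
    return 77.6 * P_hPa / N

# Cold upper troposphere: the dry inversion recovers the temperature almost exactly.
N_cold = refractivity(250.0, 230.0, 0.01)
print(dry_temperature(N_cold, 250.0))    # ~230 K

# Warm, moist lower troposphere: the wet term biases the "dry" temperature low,
# which is why independent temperatures are needed to retrieve water vapor there.
N_warm = refractivity(1000.0, 300.0, 25.0)
print(dry_temperature(N_warm, 1000.0))   # well below 300 K
```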

1,249 citations

Journal ArticleDOI
TL;DR: A tutorial review of some time-domain methods of characterizing the performance of precision clocks and oscillators is presented, and both the systematic and random deviations are considered.
Abstract: A tutorial review of some time-domain methods of characterizing the performance of precision clocks and oscillators is presented. Characterizing both the systematic and random deviations is considered. The Allan variance and the modified Allan variance are defined, and methods of utilizing them are presented along with ranges and areas of applicability. The standard deviation is contrasted and shown not to be, in general, a good measure for precision clocks and oscillators. Once a proper characterization model has been developed, then optimum estimation and prediction techniques can be employed. Some important cases are illustrated. As precision clocks and oscillators become increasingly important in society, communication of their characteristics and specifications among the vendors, manufacturers, design engineers, managers, and metrologists of this equipment becomes increasingly important.
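For reference, the two quantities defined in this tutorial can be estimated directly from time-error (phase) samples. The sketch below uses the standard second-difference-of-phase forms of the overlapping Allan variance and the modified Allan variance; the variable names and the simulated white-frequency-noise data are illustrative, not from the paper.

```python
import numpy as np

def allan_var(x, tau0, m):
    """Overlapping Allan variance at tau = m*tau0 from time-error samples x
    (seconds), spaced tau0 seconds apart."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]      # second differences of phase
    return np.mean(d2 ** 2) / (2.0 * (m * tau0) ** 2)

def mod_allan_var(x, tau0, m):
    """Modified Allan variance: additionally averages m consecutive second
    differences, which lets it separate white from flicker phase noise."""
    x = np.asarray(x, dtype=float)
    d2 = x[2 * m:] - 2.0 * x[m:-m] + x[:-2 * m]
    c = np.cumsum(np.concatenate(([0.0], d2)))
    s = c[m:] - c[:-m]                                # moving sums of m second differences
    return np.mean(s ** 2) / (2.0 * m ** 2 * (m * tau0) ** 2)

# Simulated white frequency noise, integrated into time error x(t):
rng = np.random.default_rng(1)
tau0 = 1.0
x = np.cumsum(rng.normal(0.0, 1e-11, size=50_000)) * tau0
for m in (1, 10, 100):
    print(m * tau0, np.sqrt(allan_var(x, tau0, m)), np.sqrt(mod_allan_var(x, tau0, m)))
```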

784 citations

Journal ArticleDOI
TL;DR: The theoretical basis for the Allan variance for modeling the inertial sensors' error terms and its implementation in modeling different grades of inertial sensor units are covered.
Abstract: It is well known that inertial navigation systems can provide high-accuracy position, velocity, and attitude information over short time periods. However, their accuracy rapidly degrades with time. The requirements for an accurate estimation of navigation information necessitate the modeling of the sensors' error components. Several variance techniques have been devised for stochastic modeling of the error of inertial sensors. They are basically very similar and primarily differ in that various signal-processing operations, by way of weighting functions, window functions, etc., are incorporated into the analysis algorithms in order to achieve a particular desired result for improving the model characterizations. The simplest is the Allan variance. The Allan variance is a method of representing the root mean square (RMS) random-drift error as a function of averaging time. It is simple to compute and relatively simple to interpret and understand. The Allan variance method can be used to determine the characteristics of the underlying random processes that give rise to the data noise. This technique can be used to characterize various types of error terms in the inertial-sensor data by performing certain operations on the entire length of data. In this paper, the Allan variance technique will be used in analyzing and modeling the error of the inertial sensors used in different grades of the inertial measurement units. By performing a simple operation on the entire length of data, a characteristic curve is obtained whose inspection provides a systematic characterization of various random errors contained in the inertial-sensor output data. Being a directly measurable quantity, the Allan variance can provide information on the types and magnitude of the various error terms. This paper covers both the theoretical basis for the Allan variance for modeling the inertial sensors' error terms and its implementation in modeling different grades of inertial sensors.
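The characteristic-curve procedure described above, averaging the raw sensor output over clusters of increasing length and plotting the resulting Allan deviation against averaging time, can be sketched in a few lines of Python. The gyro noise model and parameter values below are made up for illustration and are not taken from the paper.

```python
import numpy as np

def allan_deviation(rate, fs, m_list):
    """Allan deviation of a sensor rate signal sampled at fs Hz, evaluated at
    averaging times tau = m/fs, using non-overlapping cluster averages."""
    rate = np.asarray(rate, dtype=float)
    results = []
    for m in m_list:
        n = len(rate) // m
        clusters = rate[:n * m].reshape(n, m).mean(axis=1)   # cluster averages
        avar = 0.5 * np.mean(np.diff(clusters) ** 2)
        results.append((m / fs, np.sqrt(avar)))
    return results

# Illustrative gyro signal: pure white noise, i.e. angle random walk only.
fs = 100.0
rng = np.random.default_rng(3)
gyro = rng.normal(0.0, 0.01, size=200_000)        # deg/s
for tau, adev in allan_deviation(gyro, fs, [1, 10, 100, 1000, 10_000]):
    print(f"tau = {tau:8.2f} s   sigma = {adev:.3e} deg/s")
# For angle random walk the log-log curve falls with slope -1/2; the ARW
# coefficient is conventionally read off the fitted line at tau = 1 s.
```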

741 citations

Journal ArticleDOI
01 Sep 1978
TL;DR: A broad review of phase and frequency instability characterization is given in this paper, covering both classical, widely used concepts and more recent, less familiar approaches, with emphasis on the transfer functions that link frequency-domain and time-domain parameters.
Abstract: Precision frequency sources such as quartz oscillators, masers, and passive atomic frequency standards are affected by phase and frequency instabilities including both random and deterministic components. It is of prime importance to have a comprehensive characterization of these instabilities in order to be able to assess the potential utility of each source. For that purpose, many parameters have been proposed, especially for dealing with random fluctuations. Some of them have been recommended by the IEEE Subcommittee on Frequency Stability and later by Study Group 7 on "Standard Frequencies and Time Signals" of the International Radio Consultative Committee (CCIR). Others are not so widely used but show interesting capabilities. This paper aims at giving a broad review of parameters proposed for phase and frequency instability characterization, including both classical widely used concepts and more recent less familiar approaches. Transfer functions that link frequency-domain and time-domain parameters are emphasized because they provide improved understanding of the properties of a given time-domain parameter or facilitate the introduction of new parameters. As far as new approaches are concerned, an attempt has been made to demonstrate clearly their respective advantages. To this end, some developments that did not appear in the original references are presented here, e.g., the modified three-sample variance Σ_y²(τ), the expressions of ⟨δȳ_T²⟩, and the interpretation of structure functions of phase and its relations with Σ_y²(τ) and the Hadamard variance. The effects of polynomial phase and frequency drifts on various parameters have also been pointed out in parallel with those of random processes modeled by power-law spectral densities.
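One of the less familiar parameters named above, the Hadamard (three-sample) variance, is easy to contrast with the two-sample (Allan) variance in code: its second difference of adjacent frequency averages cancels a linear frequency drift. The sketch below uses the common ⟨(ȳ_{k+2} − 2ȳ_{k+1} + ȳ_k)²⟩/6 normalization and made-up data; it is an illustration, not code from the paper.

```python
import numpy as np

def hadamard_var(y_bar):
    """Hadamard (three-sample) variance from adjacent fractional-frequency
    averages y_bar[k] (same tau, no dead time); the second difference used
    here cancels a linear frequency drift."""
    y_bar = np.asarray(y_bar, dtype=float)
    d2 = y_bar[2:] - 2.0 * y_bar[1:-1] + y_bar[:-2]
    return np.mean(d2 ** 2) / 6.0

def allan_var(y_bar):
    """Two-sample (Allan) variance from the same kind of averages."""
    y_bar = np.asarray(y_bar, dtype=float)
    return 0.5 * np.mean(np.diff(y_bar) ** 2)

# White frequency noise with and without a linear frequency drift (made-up values):
rng = np.random.default_rng(4)
y = rng.normal(0.0, 1e-12, size=10_000)
drift = 1e-11 * np.arange(y.size)
print(allan_var(y), allan_var(y + drift))          # the drift dominates the second value
print(hadamard_var(y), hadamard_var(y + drift))    # essentially unchanged by the drift
```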

548 citations