Author

Donald E. K. Martin

Other affiliations: United States Census Bureau
Bio: Donald E. K. Martin is an academic researcher from Howard University. The author has contributed to research on topics including conditional probability distributions and filtering (signal processing). The author has an h-index of 7 and has co-authored 12 publications receiving 181 citations. Previous affiliations of Donald E. K. Martin include the United States Census Bureau.

Papers
Posted Content
TL;DR: In this paper, the authors developed an algorithm for computing filter weights for asymmetric, semi-infinite signal extraction filters, including the important case of the concurrent filter (for signal extraction at the current time point).
Abstract: Standard signal extraction results for both stationary and nonstationary time series are expressed as linear filters applied to the observed series. Computation of the filter weights, and of the corresponding frequency response function, is relevant for studying properties of the filter and of the resulting signal extraction estimates. Methods for doing such computations for symmetric, doubly infinite filters are well established. This study develops an algorithm for computing filter weights for asymmetric, semi-infinite signal extraction filters, including the important case of the concurrent filter (for signal extraction at the current time point). The setting is where the time series components being estimated follow autoregressive integrated moving-average (ARIMA) models. The algorithm provides expressions for the asymmetric signal extraction filters as rational polynomial functions of the backshift operator. The filter weights are then readily generated by simple expansion of these expressions, and the corresponding frequency response function is directly evaluated. Recursive expressions are also developed that relate the weights for filters that use successively increasing amounts of data. The results for the filter weights are then used to develop methods for computing mean squared error results for the asymmetric signal extraction estimates.
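The abstract describes obtaining filter weights by simple expansion of rational polynomial functions of the backshift operator B. A minimal sketch of that expansion step (not the paper's algorithm; the numerator and denominator coefficients below are hypothetical) is power-series long division:

```python
def expand_rational_filter(num, den, n_weights):
    """Expand N(B)/D(B) into power-series coefficients (filter weights).

    num, den: coefficient lists [c0, c1, ...] for c0 + c1*B + c2*B**2 + ...
    Requires den[0] != 0. Returns the first n_weights coefficients.
    """
    d0 = den[0]
    w = []
    for k in range(n_weights):
        acc = num[k] if k < len(num) else 0.0
        # subtract contributions of previously computed weights
        for j in range(1, min(k, len(den) - 1) + 1):
            acc -= den[j] * w[k - j]
        w.append(acc / d0)
    return w

# Example: 1 / (1 - 0.5 B) expands to weights 0.5**k, a one-sided filter
weights = expand_rational_filter([1.0], [1.0, -0.5], 5)
```

Once the weights are available, the corresponding frequency response can be evaluated directly by substituting B = e^(-i*omega).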

37 citations

Journal ArticleDOI
TL;DR: By conditioning on the time of the first failure, several results are derived for demonstration tests of the start-up reliability of equipment by computing the probability distribution of the number of start-ups and the probability of acceptance or rejection of the equipment in a specified number of trials.

36 citations

Journal ArticleDOI
TL;DR: In this article, the authors developed an algorithm for computing filter weights for asymmetric, semi-infinite signal extraction filters, including the important case of the concurrent filter (for signal extraction at the current time point).

32 citations

Journal ArticleDOI
TL;DR: This work uses auxiliary Markov chains to derive probabilistic results for five types of start-up demonstration tests, with start-ups that are Markovian of a general order.
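The auxiliary-Markov-chain idea can be illustrated on the simplest such test: accept after k consecutive successful start-ups, reject after f total failures, with start-ups taken here as i.i.d. Bernoulli(p) rather than Markovian of a general order as in the paper. This sketch (parameter values are hypothetical) propagates probability mass over the chain's transient states until all of it is absorbed:

```python
from collections import defaultdict

def cstf_acceptance_prob(p, k, f):
    """P(accept) for a CSTF start-up demonstration test with i.i.d. start-ups:
    accept on k consecutive successes, reject on f total failures."""
    # state = (current success run, failures so far) -> probability
    states = {(0, 0): 1.0}
    accept = 0.0
    while states:
        nxt = defaultdict(float)
        for (run, fails), prob in states.items():
            # successful start-up: extend the run, accept if it reaches k
            if run + 1 == k:
                accept += prob * p
            else:
                nxt[(run + 1, fails)] += prob * p
            # failed start-up: reset the run, reject if failures reach f
            if fails + 1 < f:
                nxt[(0, fails + 1)] += prob * (1 - p)
        states = dict(nxt)
    return accept

# P(2 consecutive successes before 2 total failures), p = 0.5
p_accept = cstf_acceptance_prob(0.5, 2, 2)  # 0.4375 (paths SS, FSS, SFSS)
```

The loop terminates because every trial either lengthens the success run (bounded by k) or increments the failure count (bounded by f).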

27 citations

Journal ArticleDOI
TL;DR: An algorithm is developed for computing the distribution of the number of successes in binary sequences, under the assumption that the dependence structure is fourth-order Markovian.
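A direct dynamic-programming version of such a computation can be sketched for a general m-th order Markovian dependence (demonstrated below at first order for brevity, rather than the fourth-order case the paper treats; the transition probabilities are hypothetical):

```python
def success_count_dist(n, m, trans, init):
    """Distribution of the number of 1s in a binary sequence of length n
    whose dependence structure is m-th order Markovian.

    trans: dict mapping an m-tuple of past bits to P(next bit = 1).
    init:  dict mapping an initial m-tuple to its probability.
    Returns d with d[k] = P(exactly k ones among all n bits), for n >= m.
    """
    # state = (last m bits, number of ones so far) -> probability
    probs = {}
    for hist, p in init.items():
        key = (hist, sum(hist))
        probs[key] = probs.get(key, 0.0) + p
    for _ in range(n - m):
        nxt = {}
        for (hist, ones), p in probs.items():
            p1 = trans[hist]
            for bit, pb in ((1, p1), (0, 1.0 - p1)):
                key = (hist[1:] + (bit,), ones + bit)
                nxt[key] = nxt.get(key, 0.0) + p * pb
        probs = nxt
    dist = [0.0] * (n + 1)
    for (_, ones), p in probs.items():
        dist[ones] += p
    return dist

# First-order check: constant P(1) = 0.5 reduces to Binomial(3, 1/2)
dist = success_count_dist(3, 1, {(0,): 0.5, (1,): 0.5}, {(0,): 0.5, (1,): 0.5})
```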

11 citations


Cited by
Journal ArticleDOI
TL;DR: A review of Billingsley's monograph Convergence of Probability Measures, a standard reference on weak convergence of probability measures on metric spaces.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4". 117s.

5,689 citations

Posted Content
TL;DR: In this paper, the authors provide a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter for modeling economic and social time series, and address the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
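The Kalman filter at the heart of this treatment can be sketched for the simplest structural model, the local level (random walk plus noise); the variances, prior, and data below are illustrative, not taken from the book:

```python
def kalman_local_level(y, sigma2_eps, sigma2_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
        y_t  = mu_t + eps_t,      eps_t ~ N(0, sigma2_eps)
        mu_t = mu_{t-1} + eta_t,  eta_t ~ N(0, sigma2_eta)
    Returns filtered state means and variances."""
    a, p = a0, p0                 # state mean and variance (vague prior)
    means, variances = [], []
    for obs in y:
        # prediction step: random-walk state, variance grows by sigma2_eta
        p_pred = p + sigma2_eta
        # update step
        f = p_pred + sigma2_eps   # innovation variance
        k = p_pred / f            # Kalman gain
        a = a + k * (obs - a)
        p = p_pred * (1.0 - k)
        means.append(a)
        variances.append(p)
    return means, variances

# Constant observations: the filtered level converges to that constant
means, variances = kalman_local_level([5.0] * 20, sigma2_eps=1.0, sigma2_eta=0.1)
```

Smoothing and likelihood evaluation, used for estimating the unobserved components and the model parameters, build on these same recursions.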

4,252 citations

Journal ArticleDOI
TL;DR: A concise survey of the literature on cyclostationarity is presented, including an extensive bibliography; applications of cyclostationarity in communications, signal processing, and many other research areas are considered.
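As an illustration of the basic quantity such surveys study, here is a minimal estimator of the cyclic autocorrelation of a real sequence (a textbook-style sketch, not drawn from the survey itself):

```python
import cmath
import math

def cyclic_autocorrelation(x, alpha, tau):
    """Estimate the cyclic autocorrelation R_x(alpha, tau):
    the lag-tau product sequence x[t] * x[t + tau], demodulated at
    cycle frequency alpha (in cycles per sample) and averaged."""
    n = len(x) - tau
    acc = 0j
    for t in range(n):
        acc += x[t] * x[t + tau] * cmath.exp(-2j * math.pi * alpha * t)
    return acc / n

# A cosine of period 8: its squared signal has a cycle at alpha = 2/8 = 0.25,
# so R(0.25, 0) is about 1/4, while purely stationary statistics would miss it.
x = [math.cos(2 * math.pi * t / 8) for t in range(800)]
r = cyclic_autocorrelation(x, alpha=0.25, tau=0)
```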

935 citations

01 Mar 1994
TL;DR: Given a normally distributed random variable Y with mean Ȳ and standard deviation s_Y, the lognormally distributed variable Z = exp(Y) (where exp stands for the exponential function, exp(x) = e^x) has mean related to the moments of Y by Z̄ = exp(Ȳ + s_Y²/2).
Abstract: Ecological data are often lognormally distributed. Nutrient concentrations, population densities and biomasses, rates of production and other flows are always positive, and generally have standard deviations that increase as the mean increases. Lognormally distributed variables have these characteristics, whereas normally distributed variables can be negative and have a standard deviation that does not change as the mean changes. Lognormal errors arise when sources of variation accumulate multiplicatively, whereas normal errors arise when sources of variation are additive. Given a normally distributed random variable Y, one can calculate a lognormally distributed random variable Z = exp(Y), where exp stands for the exponential function (exp(x) = e^x). The mean Z̄ and the standard deviation s_Z of the lognormal variable are related to the mean Ȳ and standard deviation s_Y of the normal variable by

Z̄ = exp(Ȳ + s_Y²/2)  [1]
s_Z / Z̄ = [exp(s_Y²) − 1]^0.5  [2]

Equation 1 can be used to correct for transformation bias in logarithmic regression. Suppose that lognormally distributed observations Z have been log transformed as Y = log(Z) to fit a regression model such as

Y = f(X, b) + ε  [3]

where Y is the log-transformed response variable, which is predicted to be Ŷ computed from the function f, X is a matrix of predictors, b is a vector of parameters, and the errors ε are normally distributed with mean zero and standard deviation s_ε. Predictions Ẑ in the original units are calculated using equation 1 as

Ẑ = exp(Ŷ + s_ε²/2)  [4]

Note that exp(Ŷ) estimates the median prediction of Z, which will be smaller than the mean for a lognormally distributed variate. Thus it makes sense to adjust the median upward, as in equation 4. Equation 1 is also used in drawing random numbers from a lognormal distribution. Generators for normally distributed random variables Y are common. Suppose we draw many values of Y with mean zero and standard deviation s_Y. Then from equation 1, the mean of exp(Y) will not be 1 = e^0; instead the mean of exp(Y) will be exp(s_Y²/2). Generally, however, one would prefer to have the mean of a set of lognormally distributed random numbers be 1. This can be accomplished by shifting the random numbers to Y − s_Y²/2 …
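The lognormal mean relation and the bias-correcting shift described above can be checked numerically; a sketch using Python's standard library (sample size and seed are arbitrary):

```python
import math
import random
import statistics

random.seed(42)
s_y = 0.5
n = 200_000

# Draw Y with mean shifted down by s_Y**2 / 2 so that exp(Y) has mean 1
ys = [random.gauss(-s_y**2 / 2, s_y) for _ in range(n)]
zs = [math.exp(y) for y in ys]

mean_z = statistics.fmean(zs)                # equation 1 predicts 1.0
cv_z = statistics.stdev(zs) / mean_z         # equation 2 predicts sqrt(exp(s_Y**2) - 1)
```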

415 citations

01 Jan 2004
TL;DR: Describes the economic order quantity (EOQ) inventory model: demand occurs at a constant rate, a fixed quantity is ordered each time, and when lead time is nonzero an order is placed once inventory falls to the reorder point.
Abstract: A market-leading textbook for optimization, simulation, and decision making with data analysis; a CD-ROM accompanies every edition. Its treatment of inventory covers the EOQ model, in which demand occurs at a constant rate, a fixed quantity is ordered each time, a set-up (ordering) cost is incurred per order, and the reorder point, the inventory level at which an order is placed, accounts for lead time.
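The EOQ quantities this entry mentions (constant demand rate, ordering cost, lead time, reorder point) have standard closed forms, sketched below; the numbers are hypothetical, not taken from the book:

```python
import math

def eoq(annual_demand, order_cost, holding_cost):
    """Economic order quantity: the order size minimizing the sum of
    ordering and holding costs under constant demand with no shortages."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost)

def reorder_point(demand_rate, lead_time):
    """Order when inventory falls to the demand expected during lead time."""
    return demand_rate * lead_time

# Hypothetical data: D = 1200 units/yr, K = $50/order, h = $3/unit/yr
q = eoq(1200, 50, 3)                 # sqrt(2*1200*50/3) = 200 units per order
r = reorder_point(1200 / 365, 7)     # 7-day lead time
```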

298 citations