
Showing papers on "Parametric model published in 1997"



Journal ArticleDOI
TL;DR: This paper presents a framework for feature extraction predicated on parametric models for the radar returns, gives a statistical analysis of the scattering model to describe feature uncertainty, and provides a least-squares algorithm for feature estimation.
Abstract: High-frequency radar measurements of man-made targets are dominated by returns from isolated scattering centers, such as corners and flat plates. Characterizing the features of these scattering centers provides a parsimonious, physically relevant signal representation for use in automatic target recognition (ATR). In this paper, we present a framework for feature extraction predicated on parametric models for the radar returns. The models are motivated by the scattering behaviour predicted by the geometrical theory of diffraction. For each scattering center, statistically robust estimation of model parameters provides high-resolution attributes including location, geometry, and polarization response. We present statistical analysis of the scattering model to describe feature uncertainty, and we provide a least-squares algorithm for feature estimation. We survey existing algorithms for simplified models, and derive bounds for the error incurred in adopting the simplified models. A model order selection algorithm is given, and an M-ary generalized likelihood ratio test is given for classifying polarimetric responses in spherically invariant random clutter.

474 citations
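
As a hedged illustration of the least-squares feature estimation described above, here is a minimal Python sketch that fits a single GTD-type scattering center by a grid search over the geometry exponent and range, with the complex amplitude solved in closed form; all frequencies, names, and parameter values are invented for the example.

```python
import numpy as np

# A GTD-type model for one scattering center as a function of frequency f:
#   E(f) = A * (j f / fc)^alpha * exp(-j 4 pi f r / c),
# where the half-integer alpha encodes scattering geometry and r is range.
# Fit (alpha, r, A) by least squares: grid over (alpha, r), amplitude A
# solved in closed form at each grid point.
c = 3e8
f = np.linspace(9e9, 11e9, 128)              # measurement band (Hz)
fc = 10e9                                    # band center

def model(alpha, r):
    return (1j * f / fc) ** alpha * np.exp(-1j * 4 * np.pi * f * r / c)

rng = np.random.default_rng(0)
truth = 5.0 * model(1.0, 2.3)                # flat plate (alpha=1) at 2.3 m
data = truth + 0.1 * (rng.standard_normal(f.size) + 1j * rng.standard_normal(f.size))

best = (np.inf, None, None, None)
for alpha in (-1.0, -0.5, 0.0, 0.5, 1.0):    # candidate geometries
    for r in np.linspace(0.0, 5.0, 501):     # candidate ranges (m)
        m = model(alpha, r)
        A = np.vdot(m, data) / np.vdot(m, m) # closed-form LS amplitude
        resid = np.linalg.norm(data - A * m)
        if resid < best[0]:
            best = (resid, alpha, r, A)

_, alpha_hat, r_hat, A_hat = best
print(f"alpha={alpha_hat:+.1f}  r={r_hat:.3f} m  |A|={abs(A_hat):.2f}")
```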


Book ChapterDOI
01 Jan 1997
TL;DR: Many model selection information criteria found in the literature, in various contexts including regression and density estimation, select among a given collection of parametric models that model which minimizes an empirical loss plus a penalty term proportional to the dimension of the model.
Abstract: Many different model selection information criteria can be found in the literature in various contexts including regression and density estimation. There is a huge amount of literature concerning this subject and we shall, in this paper, content ourselves to cite only a few typical references in order to illustrate our presentation. Let us just mention AIC, C_p or C_L, BIC and MDL criteria proposed by Akaike (1973), Mallows (1973), Schwarz (1978), and Rissanen (1978) respectively. These methods propose to select among a given collection of parametric models that model which minimizes an empirical loss (typically squared error or minus log-likelihood) plus some penalty term which is proportional to the dimension of the model. From one criterion to another the penalty functions differ by factors of log n, where n represents the number of observations.

412 citations
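
The penalized-criterion recipe in this abstract is easy to make concrete. Below is a minimal sketch, assuming a nested family of polynomial regression models with Gaussian errors; AIC's penalty 2k and BIC's penalty k log n differ exactly by the log n factor mentioned above.

```python
import numpy as np

# Penalized model selection over nested polynomial regressions: pick the
# degree minimizing n*log(RSS/n) plus a penalty proportional to the number
# of parameters k (AIC: 2k; BIC: k*log(n)).
rng = np.random.default_rng(1)
n = 200
x = np.linspace(-1, 1, n)
y = 1.0 - 2.0 * x + 0.5 * x**2 + 0.2 * rng.standard_normal(n)  # true degree 2

for name, penalty in (("AIC", lambda k: 2 * k), ("BIC", lambda k: k * np.log(n))):
    scores = []
    for deg in range(6):
        coef = np.polyfit(x, y, deg)
        rss = np.sum((y - np.polyval(coef, x)) ** 2)
        k = deg + 1                      # number of parameters
        scores.append(n * np.log(rss / n) + penalty(k))
    print(name, "selects degree", int(np.argmin(scores)))
```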


Proceedings ArticleDOI
17 Jun 1997
TL;DR: This work presents a variant of the EM algorithm that can segment image sequences by fitting multiple smooth flow fields to the spatiotemporal data and shows how the estimation of a single smooth flow field can be performed in closed form, thus making the multiple model estimation computationally feasible.
Abstract: Grouping based on common motion, or "common fate" provides a powerful cue for segmenting image sequences. Recently a number of algorithms have been developed that successfully perform motion segmentation by assuming that the motion of each group can be described by a low dimensional parametric model (e.g. affine). Typically the assumption is that motion segments correspond to planar patches in 3D undergoing rigid motion. Here we develop an alternative approach, where the motion of each group is described by a smooth dense flow field and the stability of the estimation is ensured by means of a prior distribution on the class of flow fields. We present a variant of the EM algorithm that can segment image sequences by fitting multiple smooth flow fields to the spatiotemporal data. Using the method of Green's functions, we show how the estimation of a single smooth flow field can be performed in closed form, thus making the multiple model estimation computationally feasible. Furthermore, the number of models is estimated automatically using similar methods to those used in the parametric approach. We illustrate the algorithm's performance on synthetic and real image sequences.

251 citations
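
As a rough illustration of the parametric baseline the authors generalize (not their Green's-function method), here is a toy EM sketch that soft-segments data between two linear models; the affine motion case replaces these scalar models with affine flow fields.

```python
import numpy as np

# Toy EM: fit a mixture of two linear models (stand-ins for affine motion
# models) and soft-segment the samples. Shows only the E/M alternation.
rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, 300)
y = np.where(x < 0, 2.0 * x + 1.0, -1.0 * x + 0.2) + 0.05 * rng.standard_normal(300)

X = np.column_stack([x, np.ones_like(x)])
w = (x < np.median(x)).astype(float)      # coarse initial responsibilities
sigma = 0.05                              # assumed noise level
for _ in range(25):
    params = []
    for resp in (w, 1.0 - w):             # M-step: weighted least squares
        WX = X * resp[:, None]
        a, b = np.linalg.solve(X.T @ WX, WX.T @ y)
        params.append((a, b))
    # E-step: responsibilities from Gaussian residual likelihoods
    e = [y - (a * x + b) for a, b in params]
    p0 = np.exp(-e[0] ** 2 / (2 * sigma**2))
    p1 = np.exp(-e[1] ** 2 / (2 * sigma**2))
    w = p0 / (p0 + p1 + 1e-300)

print("model 0 (a, b):", np.round(params[0], 2))
print("model 1 (a, b):", np.round(params[1], 2))
```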


Journal ArticleDOI
TL;DR: By formulating the optimum Bayesian decision criterion for a verification system and by assuming the data distributions to be multinormals, two statistical expressions are derived for calculating theoretically the false acceptance and false rejection rates.
Abstract: We address the problem of performance evaluation in biometric verification systems. By formulating the optimum Bayesian decision criterion for a verification system and by assuming the data distributions to be multinormals, we derive two statistical expressions for calculating theoretically the false acceptance and false rejection rates. Generally, the adoption of a Bayesian parametric model does not allow for obtaining explicit expressions for the calculation of the system errors. As far as biometric verification systems are concerned, some hypotheses can be reasonably adopted, thus allowing simple and affordable expressions to be derived. Using two verification system prototypes, based on hand shape and human face respectively, we show that our results are well founded.

219 citations
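
A minimal sketch of the kind of closed-form error-rate calculation the Gaussian assumption buys, reduced here to univariate score distributions with invented means and variances; the paper's expressions are multivariate.

```python
from math import erf, sqrt

# Theoretical FAR/FRR under Gaussian score distributions and a threshold t:
# FAR = P(impostor score > t), FRR = P(genuine score < t).
def Phi(z):                      # standard normal CDF
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu_g, sd_g = 2.0, 0.5            # genuine-score distribution (illustrative)
mu_i, sd_i = 0.0, 0.7            # impostor-score distribution (illustrative)

for t in (0.8, 1.0, 1.2):        # decision thresholds
    far = 1.0 - Phi((t - mu_i) / sd_i)   # impostor accepted
    frr = Phi((t - mu_g) / sd_g)         # genuine rejected
    print(f"t={t:.1f}  FAR={far:.4f}  FRR={frr:.4f}")
```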


Journal ArticleDOI
TL;DR: A precise understanding of how Occam's razor, the principle that simpler models should be preferred until the data justify more complex models, is automatically embodied by probability theory is arrived at.
Abstract: The task of parametric model selection is cast in terms of a statistical mechanics on the space of probability distributions. Using the techniques of low-temperature expansions, I arrive at a systematic series for the Bayesian posterior probability of a model family that significantly extends known results in the literature. In particular, I arrive at a precise understanding of how Occam’s razor, the principle that simpler models should be preferred until the data justify more complex models, is automatically embodied by probability theory. These results require a measure on the space of model parameters and I derive and discuss an interpretation of Jeffreys’ prior distribution as a uniform prior over the distributions indexed by a family. Finally, I derive a theoretical index of the complexity of a parametric family relative to some true distribution that I call the razor of the model. The form of the razor immediately suggests several interesting questions in the theory of learning that can be studied using the techniques of statistical mechanics.

206 citations
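
For orientation, here is the standard Laplace-approximation form of the Bayesian evidence in generic notation (an assumption of this note, not the paper's low-temperature series), in which the Occam penalty appears explicitly:

```latex
% Evidence of a model family M with d parameters, prior \pi, MLE \hat\theta,
% n observations, and per-datum observed Fisher information \hat I:
\ln P(D \mid M) \;\approx\; \ln P(D \mid \hat\theta, M)
  \;-\; \frac{d}{2}\,\ln\frac{n}{2\pi}
  \;-\; \frac{1}{2}\,\ln\det\hat I
  \;+\; \ln \pi(\hat\theta)
```

The (d/2) ln n term is the automatic Occam's razor: each extra parameter costs evidence unless the fit improves enough to pay for it. With Jeffreys' prior, the ln π(θ̂) and ln det Î terms combine, which is one way to motivate the interpretation discussed in the abstract.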


Journal ArticleDOI
TL;DR: The common statistical techniques employed to analyze survival data in public health research, including the Kaplan-Meier method for estimating the survival function and the Cox proportional hazards model to identify risk factors and to obtain adjusted risk ratios are reviewed.
Abstract: This paper reviews the common statistical techniques employed to analyze survival data in public health research. Due to the presence of censoring, the data are not amenable to the usual method of analysis. The improvement in statistical computing and wide accessibility of personal computers led to the rapid development and popularity of nonparametric over parametric procedures. The former required less stringent conditions. But, if the assumptions for parametric methods hold, the resulting estimates have smaller standard errors and are easier to interpret. Nonparametric techniques include the Kaplan-Meier method for estimating the survival function and the Cox proportional hazards model to identify risk factors and to obtain adjusted risk ratios. In cases where the assumption of proportional hazards is not tenable, the data can be stratified and a model fitted with different baseline functions in each stratum. Parametric modeling such as the accelerated failure time model also may be used. Hazard functions for the exponential, Weibull, gamma, Gompertz, lognormal, and log-logistic distributions are described. Examples from published literature are given to illustrate the various methods. The paper is intended for public health professionals who are interested in survival data analysis.

182 citations
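
The Kaplan-Meier product-limit estimator mentioned above is short enough to write out; a minimal sketch on invented right-censored data:

```python
# Kaplan-Meier estimate from (time, event) pairs, event=0 meaning censored.
data = [(2, 1), (3, 1), (4, 0), (5, 1), (5, 1), (8, 0), (9, 1), (12, 0)]

times = sorted({t for t, e in data if e == 1})   # distinct event times
s = 1.0
for t in times:
    at_risk = sum(1 for ti, _ in data if ti >= t)
    deaths = sum(1 for ti, e in data if ti == t and e == 1)
    s *= 1.0 - deaths / at_risk                  # product-limit update
    print(f"t={t}: S(t)={s:.3f}")
```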


Journal ArticleDOI
TL;DR: In this paper, the authors show that the limiting distribution arises naturally when one uses an efficient test statistic to test a single parameter in a semiparametric or parametric model.
Abstract: Authors have shown that the time-sequential joint distributions of many statistics used to analyze data arising from group-sequential time-to-event and longitudinal studies are multivariate normal with an independent increments covariance structure. In Theorem 1 of this article, we demonstrate that this limiting distribution arises naturally when one uses an efficient test statistic to test a single parameter in a semiparametric or parametric model. Because we are able to think of many of the statistics in the literature in this fashion, the limiting distribution under investigation is just a special case of Theorem 1. Using this general structure, we then develop an information-based design and monitoring procedure that can be applied to any type of model for any type of group-sequential study provided that there is a unique parameter of interest that can be efficiently tested.

151 citations
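
A common statement of the independent-increments structure referred to above, in generic group-sequential notation (an assumption of this note, not a quotation from the paper):

```latex
% Score statistic S(t_k) for a scalar parameter \beta at analysis k, with
% statistical information I(t_k): asymptotically multivariate normal with
% independent increments,
(S(t_1),\dots,S(t_K)) \sim N\!\big(\beta\,(I(t_1),\dots,I(t_K)),\,\Sigma\big),
\qquad
\operatorname{Cov}\big(S(t_j), S(t_k)\big) = I\big(t_{\min(j,k)}\big).
```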


Journal ArticleDOI
TL;DR: A measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast, which allows for general loss functions, univariate or multivariate information sets, and stationary or nonstationary data.
Abstract: We propose a measure of predictability based on the ratio of the expected loss of a short-run forecast to the expected loss of a long-run forecast. This predictability measure can be tailored to the forecast horizons of interest, and it allows for general loss functions, univariate or multivariate information sets, and covariance stationary or difference stationary processes. We propose a simple estimator, and we suggest resampling methods for inference. We then provide several macroeconomic applications. First, we illustrate the implementation of predictability measures based on fitted parametric models for several U.S. macroeconomic time series. Second, we analyze the internal propagation mechanism of a standard dynamic macroeconomic model by comparing the predictability of model inputs and model outputs. Third, we use predictability as a metric for assessing the similarity of data simulated from the model and actual data. Finally, we outline several nonparametric extensions of our approach.

136 citations
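
One common way to make such a measure concrete (a hedged sketch, not necessarily the paper's exact definition) is P(j, k) = 1 - E[loss of j-step forecast] / E[loss of k-step forecast]; under squared-error loss for an AR(1), this is available in closed form:

```python
# Predictability of an AR(1) with coefficient phi and innovation variance
# s2: the h-step forecast-error variance is s2 * (1 - phi^(2h)) / (1 - phi^2),
# so the short-vs-long-horizon loss ratio is closed form.
def predictability(phi, j, k, s2=1.0):
    mse = lambda h: s2 * (1 - phi ** (2 * h)) / (1 - phi ** 2)
    return 1.0 - mse(j) / mse(k)

for phi in (0.5, 0.9, 0.99):       # more persistence -> more predictable
    print(f"phi={phi}: P(1, 40) = {predictability(phi, 1, 40):.3f}")
```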


Journal ArticleDOI
TL;DR: In this paper, the authors construct adaptive estimators in a general GARCH-in-mean type context, including integrated GARCH models, based on a general LAN theorem for time-series models.

108 citations
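
Since only the TL;DR is shown for this entry, the following is background context rather than the paper's content: the GARCH(1,1) conditional-variance recursion underlying the model class, with illustrative parameters (alpha + beta = 1 would give the integrated case mentioned).

```python
import numpy as np

# GARCH(1,1): h_t = omega + alpha * eps_{t-1}^2 + beta * h_{t-1},
# eps_t = sqrt(h_t) * z_t with z_t standard normal. Illustrative parameters.
rng = np.random.default_rng(3)
omega, alpha, beta = 0.05, 0.10, 0.85
T = 1000
h = np.empty(T)
eps = np.empty(T)
h[0] = omega / (1 - alpha - beta)            # unconditional variance
eps[0] = np.sqrt(h[0]) * rng.standard_normal()
for t in range(1, T):
    h[t] = omega + alpha * eps[t - 1] ** 2 + beta * h[t - 1]
    eps[t] = np.sqrt(h[t]) * rng.standard_normal()
print("sample variance:", round(eps.var(), 3),
      " implied:", round(omega / (1 - alpha - beta), 3))
```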


Book
01 Feb 1997
TL;DR: This monograph discusses empirical modeling by neural networks, covering the quantitative description of natural laws, probability density estimation, maximum entropy principles, self-organization, non-parametric regression, chaotic process forecasting, and intelligent control.
Abstract: 1. Introduction.- 1.1 Goal.- 1.2 Relation to Other Scientific Fields.- 1.3 Plan of the Monograph.- 2.
A Quantitative Description of Nature.- 2.1 Synergetics of Natural Phenomena.- 2.2 A Description of Nature.- 2.3 Fundamentals of Quantitative Description.- 2.4 Fundamentals of Physical Laws.- 2.5 The Random Character of Physical Variables.- 2.6 Expression of Natural Laws by Differential Equations.- 2.7 Methods of Empirical Modeling.- 2.7.1 The Role of Models.- 2.7.2 Piecewise Linear Models of Empirical Natural Laws.- 2.8 Introduction to Modeling by Neural Networks.- 2.8.1 Functional Properties of a Neuron.- 2.8.2 Empirical Modeling by a Perceptron.- 3. Transducers.- 3.1 The Role of Sensors and Actuators.- 3.2 Sensors and Actuators of Biological Systems.- 3.2.1 Performance Characteristics of Biological Sensors.- 3.2.2 Structure of Biological Sensors.- 3.2.3 Transduction Characteristics of Biological Sensors.- 3.3 Operational Characteristics of Transducers.- 3.3.1 Transducer Classification.- 3.3.2 Transduction Characteristics.- 3.3.3 Sensor Loading Effects.- 3.3.4 Transducer Field Characteristics.- 3.4 Fabricated Transducers.- 3.4.1 Microsensors and Integrated Sensors.- 3.4.2 Synthetic Bio-sensors and Neurobiology.- 3.5 Transducers in Intelligent Measurement Systems.- 3.6 Future Directions in Transducer Evolution.- 4. Probability Densities.- 4.1 Estimation of Probability Density.- 4.1.1 Parzen Window Approach.- 4.1.2 An Optimal Selection of the Window Function.- 4.1.3 Nearest Neighbor and Maximal Self-Consistency Approach.- 4.1.4 The Self-Consistent Method in the Multivariate Case..- 4.1.5 Numerical Examples.- 4.1.6 Conclusions About Filtering of the Empirical PDF.- 5. Information.- 5.1 Some Basic Ideas.- 5.2 Entropy of Information.- 5.3 Properties of Information Entropy.- 5.4 Relative Information.- 5.4.1 Information of Continuous Distributions.- 5.4.2 Information Gain from Experiments.- 5.5 Information Measure of Distance Between Distributions.- 6. Maximum Entropy Principles.- 6.1 Gibbs Maximum Entropy Principle.- 6.2 The Absolute Maximum Entropy Principle.- 6.3 Quantization of Continuous Probability Distributions.- 6.3.1 Quadratic Measure of Discrepancy Between Distributions.- 6.3.2 Information Divergence as a Measure of Discrepancy.- 6.3.3 Vector Quantization and Reconstruction Measure of Discrepancy.- 7. Adaptive Modeling of Natural Laws.- 7.1 Probabilistic Modeler of Natural Laws.- 7.2 Optimization of Adaptive Modeler Performance.- 7.3 Stochastic Approach to Adaptation Laws.- 7.4 Stochastic Adaptation of a Vector Quantizer.- 7.5 Perturbation Method of Adaptation.- 7.6 Evolution of an Optimal Modeler and Perturbation Method.- 7.7 Parametric Versus Non-Parametric Modeling.- 8. Self-Organization and Formal Neurons.- 8.1 Optimal Storage of Empirical Information in Discrete Systems.- 8.2 Adaptive Vector Quantization and Topological Mappings.- 8.3 Self-Organization Based on the Absolute Maximum-Entropy Principle.- 8.4 Derivation of a Generalized Self-Organization Rule.- 8.5 Numerical Examples of Self-Organized Adaptation.- 8.6 Formal Neurons and the Self-Organization Process.- 9. Modeling by Non-Parametric Regression.- 9.1 The Problem of an Optimal Prediction.- 9.2 Parzen's Window Approach to General Regression.- 9.3 General Regression Modeler, Feedback and Recognition.- 9.4 Application of the General Regression Modeler.- 9.4.1 Empirical Modeling of Acoustic Phenomena.- 9.4.2 Prediction of the Seismic Capacity of Walls.- 9.4.3 Modeling of a Periodontal Disease Healing Process.- 10. 
Linear Modeling and Invariances.- 10.1 Relation Between Parametric Modeling and Invariances.- 10.2 Generalized Linear Regression Model.- 10.2.1 An Example of Iterative Determination of a Linear Regression Model.- 10.3 Sequential Adaptation of Linear Regression Model.- 10.4 Transition from the Cross- to Auto-Associator.- 10.4.1 Application of the Auto-Associator to Analysis of Ultrasonic Signals.- 11. Modeling and Forecasting of Chaotic Processes.- 11.1 Modeling of Chaotic Processes.- 11.2 Examples of Chaotic Process Forecasting.- 11.3 Forecasting of Chaotic Acoustic Emission Signals.- 11.4 Empirical Modeling of Non-Autonomous Chaotic Systems.- 11.4.1 Example of Economic Time-Series Forecasting.- 11.5 Cascade Modeling of Chaos Generators.- 11.5.1 Numerical Experiments.- 11.5.2 Concluding Remarks.- 12. Modeling by Neural Networks.- 12.1 From Biological to Artificial Neural Networks.- 12.1.1 Basic Blocks of Neural Networks and Their Dynamics.- 12.2 A Linear Associator.- 12.3 Multi-layer Perceptrons and Back-Propagation Learning.- 12.4 Radial Basis Function Neural Networks.- 12.5 Equivalence of a Radial Basis Function NN and Perceptrons.- 13. Fundamentals of Intelligent Control.- 13.1 Introduction.- 13.2 Basic Tasks of Intelligent Control.- 13.2.1 Empirical Description of a Controlled System.- 13.2.2 General Identification by Non-Parametric Modeling.- 13.3 The Tracking Problem.- 13.4 Cloning.- 13.5 An Empirical Approach to Optimal Control.- 13.5.1 The Theoretical Problem of Optimal Control.- 13.5.2 Experimental Description of Plant Performance and Optimal Control.- 13.5.3 Design of an Intelligent Optimal Controller.- 13.5.4 The Influence of the Environment on Optimal Control.- 13.5.5 The Problem of Phase Space Exploration.- 13.5.6 Numerical Simulations of Optimal Control.- 13.5.7 Summary and Conclusions.- 14. Self-Control and Biological Evolution.- 14.1 Modeling of Natural Phenomena by Biological Systems.- 14.2 Joint Modeling of Organism and Environment.- 14.3 An Operational Description of Consciousness.- 14.4 The Fundamental Problem of Evolution.- A. Fundamentals of Probability and Statistics.- A.1 Sample Points, Sample Space, Events and Relations.- A.2 Probability.- A.3 Random Variables and Probability Distributions.- A.4 Averages and Moments.- A.5 Random Processes.- A.6 Sampling, Estimation and Statistics.- B. Fundamentals of Deterministic Chaos.- B.1 Instability of Chaotic Systems.- B.2 Characterization of Strange Attractors.- B.3 Experimental Characterization of Chaotic Phenomena.- References.
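
Chapter 4 of this table of contents centers on the Parzen window approach; as a hedged illustration of that technique (not the book's own code), a minimal kernel density estimate:

```python
import numpy as np

# Parzen window density estimate with a Gaussian kernel K and bandwidth h:
#   f_hat(x) = (1/(n*h)) * sum_i K((x - x_i) / h).
rng = np.random.default_rng(4)
sample = rng.normal(0.0, 1.0, 500)

def parzen(x, data, h):
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u**2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

grid = np.linspace(-3, 3, 7)
print(np.round(parzen(grid, sample, h=0.3), 3))   # ~ standard normal pdf
```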

Journal ArticleDOI
TL;DR: This work considers direction-of-arrival estimation when the signal sources are distributed; parametric and nonparametric models are proposed, and estimation methods are developed under these models.
Abstract: In direction-of-arrival (DOA) estimation, the direction of a signal is usually assumed to be a point. If the direction of a signal is distributed due to some environmental phenomenon, however, DOA estimation methods based on the point source assumption may result in poor performance. We consider DOA estimation when the signal sources are distributed. Parametric and nonparametric models are proposed, and estimation methods are considered under these models. In addition, the asymptotic distribution of estimation errors is obtained to show the models' statistical properties.
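
For contrast with the distributed-source setting, a minimal sketch of the point-source baseline: a conventional beamformer scan on a simulated uniform linear array, with all array and signal parameters invented.

```python
import numpy as np

# Conventional (Bartlett) beamformer: scan steering vectors a(theta) over
# angle and pick the peak of a(theta)^H R a(theta).
rng = np.random.default_rng(5)
M, d = 8, 0.5                         # sensors, spacing in wavelengths
theta0 = np.deg2rad(20.0)             # true arrival angle

def steering(theta):
    return np.exp(2j * np.pi * d * np.arange(M) * np.sin(theta))

snap = 200
S = rng.standard_normal(snap) + 1j * rng.standard_normal(snap)
X = np.outer(steering(theta0), S) + 0.1 * (
    rng.standard_normal((M, snap)) + 1j * rng.standard_normal((M, snap)))
R = X @ X.conj().T / snap             # sample covariance

angles = np.deg2rad(np.linspace(-90, 90, 721))
power = [np.real(steering(a).conj() @ R @ steering(a)) for a in angles]
print("estimated DOA: %.1f deg" % np.rad2deg(angles[int(np.argmax(power))]))
```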

Journal Article
TL;DR: This paper discusses model selection from the point of view of robustness and points out the extreme sensitivity of many classical model selection procedures to outliers and other departures from the distributional assumptions of the model.
Abstract: Model selection is a key component in any statistical analysis. In this paper we discuss this issue from the point of view of robustness and we point out the extreme sensitivity of many classical model selection procedures to outliers and other departures from the distributional assumptions of the model. First, we focus on regression and review a robust version of Mallows's Cp as well as some related approaches. We then go beyond the regression model and discuss a robust version of the Akaike Information Criterion for general parametric models.
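
A hedged toy illustration of the sensitivity being described (not the paper's robust Cp or robust AIC): classical AIC versus a Huber-type criterion on data with one gross outlier.

```python
import numpy as np

# A single gross outlier can flip classical AIC toward a spurious linear
# trend; a Huber-type criterion 2*sum(rho(r_i/s)) + 2k judges the models
# by their fit to the bulk of the data instead.
def huber_rho(r, c=1.345):
    a = np.abs(r)
    return np.where(a <= c, 0.5 * r**2, c * a - 0.5 * c**2)

def huber_fit(X, y, c=1.345, iters=50):      # IRLS with Huber weights
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r))) + 1e-9
        sw = np.sqrt(np.minimum(1.0, c / (np.abs(r / s) + 1e-9)))
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta, s

rng = np.random.default_rng(6)
n = 50
x = np.linspace(0, 1, n)
y = 0.2 * rng.standard_normal(n)
y[-1] = 5.0                                  # one gross outlier

for k, X in ((1, np.ones((n, 1))), (2, np.column_stack([np.ones(n), x]))):
    beta_ls = np.linalg.lstsq(X, y, rcond=None)[0]
    rss = np.sum((y - X @ beta_ls) ** 2)
    beta_h, s = huber_fit(X, y)
    print(f"k={k}: AIC={n * np.log(rss / n) + 2 * k:8.2f}  "
          f"robust={2 * huber_rho((y - X @ beta_h) / s).sum() + 2 * k:8.2f}")
```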


Journal ArticleDOI
TL;DR: In this article, the authors considered simple and composite hypotheses in statistical models with i.i.d. observations and with arbitrary families dominated by σ-finite measures and parametrized by vector-valued variables.

Journal ArticleDOI
TL;DR: This paper presents a tutorial review of the models and methods used to characterize site amplification from pairs of rock- and soil-site records, and introduces some new techniques with better theoretical foundations.
Abstract: The paper presents a tutorial review of the models and methods that are used to characterize site amplification from the pairs of rock‐ and soil‐site records, and introduces some new techniques with better theoretical foundations. The models and methods discussed include spectral and cross‐spectral ratios, spectral ratios for downhole records, response spectral ratios, constant amplification factors, parametric models, physical models, and time‐varying filters. An extensive analytical and numerical error analysis of spectral and cross‐spectral ratios shows that probabilistically cross‐spectral ratios give more reliable estimates of site amplification. Spectral ratios should not be used to determine site amplification from downhole‐surface recording pairs because of the feedback in the downhole sensor. Response spectral ratios are appropriate for low frequencies, but overestimate the amplification at high frequencies. The best method to be used depends on how much precision is required in the esti...
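
The basic spectral-ratio computation reviewed above is simple to sketch; the following toy uses synthetic records and a made-up soil response, and omits the windowing and cautions the paper discusses.

```python
import numpy as np

# Site amplification as the smoothed ratio |FFT(soil)| / |FFT(rock)|.
rng = np.random.default_rng(7)
fs, n = 100.0, 4096                      # sampling rate (Hz), samples
rock = rng.standard_normal(n)            # reference (rock) motion
freq = np.fft.rfftfreq(n, 1 / fs)
H = 1.0 + 2.0 * ((freq > 2) & (freq < 4))   # crude "soil" amplification 2-4 Hz
soil = np.fft.irfft(np.fft.rfft(rock) * H, n) + 0.1 * rng.standard_normal(n)

ratio = np.abs(np.fft.rfft(soil)) / np.abs(np.fft.rfft(rock))
smooth = np.convolve(ratio, np.ones(64) / 64, mode="same")
for f in (1.0, 3.0, 10.0):
    print(f"{f:4.1f} Hz: amplification ~ {smooth[np.argmin(np.abs(freq - f))]:.2f}")
```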

Journal ArticleDOI
TL;DR: In this paper, a semi-nonparametric binary response model is used to estimate the willingness to pay for environmental quality improvement when the willingness to pay is restricted to be nonnegative.

Journal ArticleDOI
TL;DR: In this paper, a new approach for identification of nonlinear dynamic systems is proposed based on a combination of generalized orthonormal basis functions and local linear model trees (LOLIMOT), which approximate an unknown function from data by the interpolation of many local linear models.
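
A minimal sketch of the local-linear-model structure behind LOLIMOT; the real algorithm grows the partition tree by axis-orthogonal splits, whereas here the four partitions are simply fixed by hand.

```python
import numpy as np

# Local linear models blended by normalized Gaussian validity functions:
# y_hat(x) = sum_j phi_j(x) * (a_j * x + b_j), each (a_j, b_j) fitted by
# weighted least squares with weights phi_j.
rng = np.random.default_rng(8)
x = np.linspace(0, 1, 200)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(200)

centers = np.array([0.125, 0.375, 0.625, 0.875])
phi = np.exp(-0.5 * ((x[:, None] - centers) / 0.12) ** 2)
phi /= phi.sum(axis=1, keepdims=True)        # normalized validity functions

X = np.column_stack([x, np.ones_like(x)])
yhat = np.zeros_like(x)
for j in range(centers.size):                # weighted LS per local model
    WX = X * phi[:, j][:, None]
    a, b = np.linalg.solve(X.T @ WX, WX.T @ y)
    yhat += phi[:, j] * (a * x + b)
print(f"RMS residual: {np.sqrt(np.mean((yhat - y) ** 2)):.3f}")
```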

Journal ArticleDOI
TL;DR: In this paper, two new probability density functions (generalized beta and quadratic elasticity) are considered as models for the size distribution of income, and they are fitted to five sets of US family income data for 1970, 1975, 1980, 1985 and 1990.
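
As a hedged stand-in for fitting such size distributions, a maximum-likelihood fit with a gamma model and synthetic incomes in place of the paper's generalized beta family and census data:

```python
import numpy as np
from scipy import stats

# Fit a parametric income-size distribution by maximum likelihood.
rng = np.random.default_rng(9)
income = rng.gamma(shape=2.0, scale=20000.0, size=5000)   # fake incomes

shape, loc, scale = stats.gamma.fit(income, floc=0.0)     # MLE, loc fixed at 0
print(f"fitted gamma: shape={shape:.2f}, scale={scale:.0f}")
print("implied mean income:", round(shape * scale))
```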

Journal ArticleDOI
TL;DR: A piecewise smooth parametric model is developed to characterize the local intensity function in an image and a small set of moment-based detectors are identified that can extract various kinds of primal sketches from intensity as well as range images.
Abstract: In this paper, a novel model-based approach is proposed for generating a set of image feature maps (or primal sketches). For each type of feature, a piecewise smooth parametric model is developed to characterize the local intensity function in an image. Projections of the intensity profile onto a set of orthogonal Zernike-moment-generating polynomials are used to estimate model-parameters and, in turn, generate the desired feature map. A small set of moment-based detectors is identified that can extract various kinds of primal sketches from intensity as well as range images. One main advantage of using parametric model-based techniques is that it is possible to extract complete information (i.e., model parameters) about the underlying image feature, which is desirable in many high-level vision tasks. Experimental results are included to demonstrate the effectiveness of proposed feature detectors.
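
A minimal sketch of the moment-based idea: for a step edge, the phase of the first-order Zernike moment recovers the edge-normal orientation (the paper's detectors use several moments to recover the full edge parameters).

```python
import numpy as np

# Project a patch onto the Zernike polynomial V11 = x + iy over the unit
# disk; for a step edge, the moment's phase gives the edge-normal angle.
N = 64
xs = np.linspace(-1, 1, N)
X, Y = np.meshgrid(xs, xs)
disk = X**2 + Y**2 <= 1.0

phi_true = np.deg2rad(30.0)                       # edge-normal direction
patch = (np.cos(phi_true) * X + np.sin(phi_true) * Y > 0).astype(float)

V11 = X + 1j * Y                                  # Zernike polynomial, n=m=1
Z11 = np.sum(patch[disk] * np.conj(V11[disk]))    # un-normalized moment
print("estimated edge normal: %.1f deg" % np.rad2deg(-np.angle(Z11)))
```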

Journal ArticleDOI
TL;DR: A class of baseband correlation receivers is developed for estimating the time-of-flight of a narrowband signal, in which the signal shape is unknown to the proposed receiver, and a parametric model of the signal envelope is built.
Abstract: A class of baseband correlation receivers is developed for estimating the time-of-flight of a narrowband signal. In contrast to conventional correlation receivers that contain a perfectly known replica of the signal, the signal shape is unknown to the proposed receiver. A parametric model of the signal envelope is therefore built, concurrently with the process of time-of-flight estimation. The identification of the model parameters can be pursued by assuming that a few signal constraints are satisfied at the time instant of signal onset: the constraints concern the value of the signal envelope and of its first time derivative. The correlation receivers perform a series expansion of the signal envelope by a set of Laguerre basis functions; the outputs of the Laguerre filter bank that computes the running Laguerre transform are used to build a parametric model of the unknown envelope and to estimate its time-of-flight. Simple variations of the correlation receivers are proposed, which differ from each other in the use they make of the signal constraints. The correlation receivers are applied to process narrowband echoes from in-air ultrasonic sensors. Potential applications are object identification and surface profiling in the robotic field. Computer simulations and experiments using a simple pulse-echo sensor system are presented.
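
The Laguerre-function expansion at the heart of these receivers can be sketched directly; a minimal projection and reconstruction of an invented echo envelope (the time-of-flight estimation itself is omitted):

```python
import numpy as np
from scipy.special import eval_laguerre

# Orthonormal Laguerre functions on [0, inf):
#   l_k(t) = sqrt(2a) * exp(-a t) * L_k(2 a t),
# used to expand an unknown envelope from a few inner products.
a, K = 3.0, 8                                  # time scale, number of terms
t = np.linspace(0, 10, 2000)
dt = t[1] - t[0]
env = t**2 * np.exp(-2.0 * t)                  # "unknown" echo envelope

basis = np.array([np.sqrt(2 * a) * np.exp(-a * t) * eval_laguerre(k, 2 * a * t)
                  for k in range(K)])
coef = basis @ env * dt                        # inner products
recon = coef @ basis
print("relative L2 error: %.4f"
      % (np.linalg.norm(recon - env) / np.linalg.norm(env)))
```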

Journal ArticleDOI
TL;DR: An efficient algorithm for worst-case and statistical kinematic tolerance analysis of mechanisms with parametric part tolerances that extends to linkage mechanisms when coupled with linkage analysis software.
Abstract: We present an efficient algorithm for worst-case and statistical kinematic tolerance analysis of mechanisms with parametric part tolerances. The algorithm derives the kinematic variation directly from the part geometry, the part degrees of freedom, and the allowable parameter variations. It first derives a geometric representation of the kinematic variation as parametric surfaces in the mechanism configuration space. It then performs sensitivity analysis on the surfaces and combines the results. In addition to traditional quantitative variations, the results reveal qualitative variations, such as play, under-cutting, interference, and jamming. Our implementation handles planar mechanisms with one degree of freedom per part, including ones with higher pairs and multiple contacts. It is fast enough to be practical for full parametric models of complex mechanisms and for parametric representations of geometric tolerances, such as offsets, which typically require many parameters. The algorithm extends to linkage mechanisms when coupled with linkage analysis software. We demonstrate the implementation on a 26 parameter model of a Geneva pair and on an 82 parameter model of a camera shutter mechanism.
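
A hedged toy version of statistical tolerance analysis by Monte Carlo on a slider-crank; the paper's configuration-space algorithm also captures qualitative effects such as play and jamming, which this sketch does not.

```python
import numpy as np

# Slider-crank: crank length r and rod length l perturbed within tolerance;
# propagate the variations to slider position at a fixed crank angle.
rng = np.random.default_rng(10)
theta = np.deg2rad(60.0)
r0, l0, tol = 30.0, 100.0, 0.1          # nominal mm and +/- tolerance

def slider_x(r, l):
    return r * np.cos(theta) + np.sqrt(l**2 - (r * np.sin(theta)) ** 2)

r = r0 + rng.uniform(-tol, tol, 100000)
l = l0 + rng.uniform(-tol, tol, 100000)
x = slider_x(r, l)
x0 = slider_x(r0, l0)
print(f"nominal {x0:.3f} mm; worst-case +/-{np.max(np.abs(x - x0)):.3f} mm; "
      f"std {x.std():.4f} mm")
```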

Journal ArticleDOI
TL;DR: In this article, the authors describe multistage sampling designs and estimating equations based on the resulting data using parametric models and introduce Horvitz-Thompson estimates as alternatives to MLEs, their asymptotic distributions are derived and their strengths and weaknesses are evaluated.
Abstract: In some applications it is cost efficient to sample data in two or more stages. In the first stage a simple random sample is drawn and then stratified according to some easily measured attribute. In each subsequent stage a random subset of previously selected units is sampled for more detailed and costly observation, with a unit's sampling probability determined by its attributes as observed in the previous stages. This paper describes multistage sampling designs and estimating equations based on the resulting data. Maximum likelihood estimates (MLEs) and their asymptotic variances are given for designs using parametric models. Horvitz-Thompson estimates are introduced as alternatives to MLEs, their asymptotic distributions are derived and their strengths and weaknesses are evaluated. The designs and the estimates are illustrated with data on corn production.
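
The Horvitz-Thompson estimator named above is one line once inclusion probabilities are known; a minimal sketch under Poisson sampling with invented data:

```python
import numpy as np

# Horvitz-Thompson: weight each sampled y_i by 1/pi_i, where pi_i is its
# inclusion probability under the design; the weighted sum is an unbiased
# estimate of the population total.
rng = np.random.default_rng(11)
N = 10000
y = rng.gamma(2.0, 50.0, N)                     # population values
pi = np.clip(y / y.sum() * 500, 1e-4, 1.0)      # size-biased inclusion probs

sampled = rng.uniform(size=N) < pi              # Poisson sampling
ht_total = np.sum(y[sampled] / pi[sampled])
print("true total %.0f  HT estimate %.0f" % (y.sum(), ht_total))
```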

Journal ArticleDOI
TL;DR: In this paper, the dependence of the Bartlett correction and the O(n^{-1/2}) local power of the test on the way in which r(y; θ, λ) is constructed is studied.
Abstract: Misspecification tests for parametric models, f(y, θ), that examine data for failure of moment conditions implied by the maintained parametric distribution are interpreted as score tests of H_0: λ = 0 in the context of a parametric family of distributions r(y; θ, λ). This family contains the maintained distribution as a special case (λ = 0) and has the property that only in that special case do the chosen moment conditions hold. A likelihood ratio test of H_0: λ = 0 therefore constitutes an alternative test of the validity of the moment conditions. This test admits a Bartlett correction, unlike conventional moment tests for which adjustments based on second order asymptotic theory may behave badly. The dependence of the Bartlett correction and of the O(n^{-1/2}) local power of the test on the way in which r(y; θ, λ) is constructed is studied. In many cases the correction can be made to vanish leading to a specification test whose distribution is chi-square to order O_p(n^{-2}).
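
For reference, the standard form of a Bartlett correction in generic notation (an assumption of this note, not the paper's specific construction):

```latex
% If the likelihood ratio statistic W for H_0 (q restrictions) satisfies
%   E[W] = q * (1 + b/n + O(n^{-2})),
% then the rescaled statistic
W' \;=\; \frac{W}{1 + b/n}
% is chi-squared with q degrees of freedom to error O(n^{-2})
% rather than the uncorrected O(n^{-1}).
```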

Book ChapterDOI
TL;DR: In this paper, the concept of minimum distance estimation using density-based distances is discussed, and general disparities, residual adjustment functions, and related inference are introduced and the negative exponential disparity and weighted likelihood estimators are described.
Abstract: This chapter discusses the concept of minimum distance estimation using density-based distances. Density-based minimum distance methods have proven to be valuable additions to the theory of statistics, as demonstrated by the rich literature of the past two decades. In parametric models, the estimators often possess full asymptotic efficiency simultaneously with attractive robustness properties. The chapter also discusses minimum Hellinger distance estimation, including the Hellinger deviance test and penalized Hellinger distance estimation. General disparities, residual adjustment functions, and related inference are introduced, and the negative exponential disparity and weighted likelihood estimators (including linear regression models) are described. A generalized divergence measure and the resulting estimators are also discussed.
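
A minimal sketch of minimum Hellinger distance estimation for a normal location model, illustrating the efficiency-with-robustness combination mentioned above (grid search, invented data with one gross outlier):

```python
import numpy as np

# Minimum Hellinger distance: build a kernel density estimate f_hat, then
# choose mu minimizing the Hellinger distance to f(.; mu), equivalently
# maximizing the affinity integral of sqrt(f_hat * f_mu).
rng = np.random.default_rng(12)
data = np.append(rng.normal(5.0, 1.0, 100), 50.0)   # one gross outlier

grid = np.linspace(-10, 60, 4001)
dg = grid[1] - grid[0]
h = 0.5                                              # kernel bandwidth
fhat = np.exp(-0.5 * ((grid[:, None] - data) / h) ** 2).sum(1) / (
    len(data) * h * np.sqrt(2 * np.pi))

def affinity(mu):
    fmu = np.exp(-0.5 * (grid - mu) ** 2) / np.sqrt(2 * np.pi)
    return np.sum(np.sqrt(fhat * fmu)) * dg

mus = np.linspace(0, 10, 201)
mhd = mus[np.argmax([affinity(m) for m in mus])]
print(f"sample mean {data.mean():.2f}  vs  MHD estimate {mhd:.2f}")
```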

Journal ArticleDOI
TL;DR: In this article, the authors evaluate the overall importance of the discretization-induced tracking error by comparing the performance of Black-Scholes hedge ratios against those obtained from a novel estimation procedure, namely local parametric estimation.

Journal ArticleDOI
TL;DR: A sample of 95 sib pairs affected with insulin-dependent diabetes and typed with their normal parents for 28 markers on chromosome 6 has been analyzed; when appropriate parameters are efficiently estimated, a parametric model is equivalent to the β model, which is superior to nonparametric alternatives both in single point tests and in multipoint tests.
Abstract: A sample of 95 sib pairs affected with insulin-dependent diabetes and typed with their normal parents for 28 markers on chromosome 6 has been analyzed by several methods. When appropriate parameters are efficiently estimated, a parametric model is equivalent to the β model, which is superior to nonparametric alternatives both in single point tests (as found previously) and in multipoint tests. Theory is given for meta-analysis combined with allelic association, and problems that may be associated with errors of map location and/or marker typing are identified. Reducing by multipoint analysis the number of association tests in a dense map can give a 3-fold reduction in the critical lod, and therefore in the cost of positional cloning.

Journal ArticleDOI
TL;DR: Spline smoothing is extended in this paper to express prior knowledge about general features of the curve in the form of a linear differential operator that annihilates a specified parametric model for the data.
Abstract: Nonparametric regression techniques, which estimate functions directly from noisy data rather than relying on specific parametric models, now play a central role in statistical analysis. We can improve the efficiency and other aspects of a nonparametric curve estimate by using prior knowledge about general features of the curve in the smoothing process. Spline smoothing is extended in this paper to express this prior knowledge in the form of a linear differential operator that annihilates a specified parametric model for the data. Roughness in the fitted function is defined in terms of the integrated square of this operator applied to the fitted function. A fast O(n) algorithm is outlined for this smart smoothing process. Illustrations are provided of where this technique proves useful.
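
A discrete analogue of the operator-penalized smoothing idea, with a finite-difference matrix standing in for L (here the second derivative, which annihilates the linear parametric model) and the paper's O(n) algorithm replaced by a plain dense solve:

```python
import numpy as np

# Penalized smoothing: minimize ||y - f||^2 + lam * ||D f||^2, where D is a
# second-difference matrix, so straight-line trends pass through unshrunk.
# Solution: (I + lam * D'D) f = y.
rng = np.random.default_rng(13)
n = 200
x = np.linspace(0, 1, n)
y = 2 * x + np.sin(4 * np.pi * x) + 0.2 * rng.standard_normal(n)

D = np.zeros((n - 2, n))                 # rows: f[i] - 2 f[i+1] + f[i+2]
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

lam = 10.0
f = np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
print("residual RMS:", round(float(np.sqrt(np.mean((f - y) ** 2))), 3))
```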

Journal ArticleDOI
TL;DR: In this article, the authors established the robustness properties of the minimum negative exponential disparity estimator (MNEDE) and proved that it is asymptotically fully efficient under a specified regular parametric family of densities.

Journal ArticleDOI
TL;DR: It is concluded that good estimates of demographic parameters from stage-frequency data from natural populations will usually only be possible if extra information such as the durations of stages is known.
Abstract: A relatively simple method is proposed for the estimation of parameters of stage-structured populations from sample data for situations where (a) unit time survival rates may vary with time, and (b) the distribution of entry times to stage 1 is too complicated to be fitted with a simple parametric model such as a normal or gamma distribution. The key aspects of this model are that the entry time distribution is approximated by an exponential function with p parameters, the unit time survival rates in stages are approximated by an r-parameter exponential polynomial in the stage number, and the durations of stages are assumed to be the same for all individuals. The new method is applied to four zooplankton data sets, with parametric bootstrapping used to assess the bias and variation in estimates. It is concluded that good estimates of demographic parameters from stage-frequency data from natural populations will usually only be possible if extra information such as the durations of stages is known.
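
The parametric bootstrap used above to assess bias and variation is easy to sketch in miniature; here an exponential model and invented data stand in for the stage-structured model.

```python
import numpy as np

# Parametric bootstrap: fit a model, simulate replicate data sets from the
# fit, refit each replicate, and summarize the spread of the re-estimates.
rng = np.random.default_rng(14)
data = rng.exponential(scale=3.0, size=80)
theta_hat = data.mean()                          # MLE of the exponential scale

boot = np.array([rng.exponential(theta_hat, size=80).mean()
                 for _ in range(2000)])
print(f"estimate {theta_hat:.2f}; bootstrap bias {boot.mean() - theta_hat:+.3f}; "
      f"SE {boot.std():.3f}")
```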