
Showing papers on "Parametric statistics published in 1992"


Journal ArticleDOI
TL;DR: The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form.
Abstract: Segmentation using boundary finding is enhanced both by considering the boundary as a whole and by using model-based global shape information. The authors apply flexible constraints, in the form of a probabilistic deformable model, to the problem of segmenting natural 2-D objects whose diversity and irregularity of shape make them poorly represented in terms of fixed features or form. The parametric model is based on the elliptic Fourier decomposition of the boundary. Probability distributions on the parameters of the representation bias the model to a particular overall shape while allowing for deformations. Boundary finding is formulated as an optimization problem using a maximum a posteriori objective function. Results of the method applied to real and synthetic images are presented, including an evaluation of the dependence of the method on prior information and image quality.

888 citations
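A minimal sketch of the boundary representation this model builds on: Fourier coefficients of a closed 2-D contour, computed with NumPy via the FFT. The toy contour, harmonic count, and reconstruction routine are illustrative assumptions, not the authors' implementation (which also places probability distributions on the coefficients).

```python
import numpy as np

def elliptic_fourier_coeffs(x, y, n_harmonics=8):
    """Fourier coefficients of a closed contour sampled uniformly in its
    parameter, so x(t) ~ a0 + sum_n a_n cos(nt) + b_n sin(nt), likewise y."""
    N = len(x)
    X, Y = np.fft.fft(x) / N, np.fft.fft(y) / N
    n = np.arange(1, n_harmonics + 1)
    # For a real signal, the cos coefficient is 2*Re and the sin is -2*Im.
    return (X[0].real, Y[0].real,
            2 * X[n].real, -2 * X[n].imag,
            2 * Y[n].real, -2 * Y[n].imag)

def reconstruct(a0, c0, a, b, c, d, n_points=128):
    t = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    n = np.arange(1, len(a) + 1)[:, None]
    x = a0 + (a[:, None] * np.cos(n * t) + b[:, None] * np.sin(n * t)).sum(0)
    y = c0 + (c[:, None] * np.cos(n * t) + d[:, None] * np.sin(n * t)).sum(0)
    return x, y

# Toy closed contour: an ellipse with mild higher-harmonic deformation.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
x = 3 * np.cos(t) + 0.3 * np.cos(3 * t)
y = 2 * np.sin(t) + 0.2 * np.sin(2 * t)
xr, yr = reconstruct(*elliptic_fourier_coeffs(x, y, n_harmonics=5))
print("max reconstruction error:", float(np.max(np.hypot(xr - x, yr - y))))
```

Truncating the series at a few harmonics already gives a compact, low-dimensional shape description, which is what makes it a convenient carrier for prior distributions.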


Journal ArticleDOI
TL;DR: In this article, the authors considered estimation of truncated and censored regression models with fixed effects and proposed two estimators: trimmed least absolute deviations and trimmed least squares estimators, which are consistent and asymptotically normal.
Abstract: This paper considers estimation of truncated and censored regression models with fixed effects. Up until now, no estimator has been shown to be consistent as the cross-section dimension increases with the time dimension fixed. Trimmed least absolute deviations and trimmed least squares estimators are proposed for the case where the panel is of length two, and it is proven that they are consistent and asymptotically normal. It is not necessary to maintain parametric assumptions on the error terms to obtain this result. A small scale Monte Carlo study demonstrates that these estimators can perform well in small samples. Copyright 1992 by The Econometric Society.

774 citations
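The proposed estimators add a trimming scheme that handles the censoring or truncation; the sketch below shows only the underlying idea the abstract appeals to: first-differencing a length-two panel eliminates the fixed effects, after which least absolute deviations estimates the slope without parametric assumptions on the errors. The data-generating process and all names are invented for illustration and the trimming step is omitted.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
alpha = rng.normal(size=n)                # individual fixed effects
x1, x2 = rng.normal(size=(2, n))          # regressors in periods 1 and 2
beta_true = 1.5
eps = rng.standard_t(df=3, size=(2, n))   # heavy-tailed, non-Gaussian errors
y1 = alpha + beta_true * x1 + eps[0]
y2 = alpha + beta_true * x2 + eps[1]

# First-differencing removes alpha; LAD then estimates beta without
# parametric assumptions on the error distribution.
dy, dx = y2 - y1, x2 - x1
fit = minimize(lambda b: np.abs(dy - b * dx).sum(), x0=[0.0], method="Nelder-Mead")
print("LAD estimate of beta:", fit.x[0])
```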


Journal ArticleDOI
TL;DR: In this paper, a model is proposed for the analysis of censored data which combines a logistic formulation for the probability of occurrence of an event with a proportional hazards specification for the time of occurrence.
Abstract: SUMMARY A model is proposed for the analysis of censored data which combines a logistic formulation for the probability of occurrence of an event with a proportional hazards specification for the time of occurrence of the event. The proposed model is a semiparametric generalization of a parametric model due to Farewell (1982). Estimates of the regression parameters are obtained by maximizing a Monte Carlo approximation of a marginal likelihood and the EM algorithm is used to estimate the baseline survivor function. We present some simulation results to verify the validity of the suggested estimation procedure. It appears that the semiparametric estimates are reasonably efficient with acceptable bias whereas the parametric estimates can be highly dependent on the parametric assumptions.

483 citations
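For intuition, here is a fully parametric cousin of the proposed model, fit by direct likelihood maximization rather than the paper's marginal-likelihood and EM machinery: a logistic model for whether an event will ever occur, mixed with an exponential latency for when it occurs. The covariate, parameter values, and the exponential latency (in place of proportional hazards) are invented assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(1)
n = 2000
z = rng.normal(size=n)
p_susceptible = expit(0.5 + 1.0 * z)         # logistic incidence part
susceptible = rng.random(n) < p_susceptible
lam_true = 0.8
t_event = rng.exponential(1 / lam_true, n)   # latency for susceptibles
c = rng.exponential(3.0, n)                  # censoring times
t_inf = np.where(susceptible, t_event, np.inf)
t = np.minimum(t_inf, c)
d = (t_inf <= c).astype(float)               # event indicator

def negloglik(theta):
    g0, g1, loglam = theta
    p = expit(g0 + g1 * z)
    lam = np.exp(loglam)
    s = np.exp(-lam * t)                     # exponential survivor function
    # events: susceptible and failed; censored: still-at-risk or cured
    ll = d * (np.log(p) + np.log(lam) - lam * t) + (1 - d) * np.log(p * s + 1 - p)
    return -ll.sum()

fit = minimize(negloglik, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
g0, g1, loglam = fit.x
print("logistic coefs:", g0, g1, "  hazard:", np.exp(loglam))
```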


Journal ArticleDOI
TL;DR: In this paper, the authors developed a new inversion method to reconstruct static images of seismic sources from geodetic data, using Akaike's Bayesian Information Criterion (ABIC).
Abstract: SUMMARY We developed a new inversion method to reconstruct static images of seismic sources from geodetic data, using Akaike’s Bayesian Information Criterion (ABIC). Coseismic surface displacements are generally related to a slip distribution on a fault surface by linear integral equations. Parametric expansion of the fault slip distribution by a finite number of known basis functions yields a set of observation equations expressed in a simple vector form. Incorporating prior constraints on the smoothness of the slip distribution with the observation equations, we construct a Bayesian model with unknown hyperparameters. The optimal values of the hyperparameters, which control the structure of the Bayesian model, are objectively determined from observed data by using ABIC. Once the values of the hyperparameters are determined, we can use the maximum likelihood method to find the optimal distribution of fault slip. We examined the validity of this method through a numerical experiment using theoretical data with random noise. We analysed geodetic data associated with the 1946 Nankaido earthquake (Ms = 8.2) by using this method. The result shows that the fault slip distribution of this earthquake has two main peaks of 4 and 6 m, located off Kii Peninsula and Muroto Promontory. These two high-slip areas are clearly separated by a low-slip zone extending along Kii Strait. Such a slip distribution is consistent with the fact that the rupture process of this earthquake in the western part is notably different from that in the eastern part.

389 citations
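A toy 1-D analogue of the ABIC machinery: a linear forward model, a Gaussian smoothness prior controlled by a hyperparameter α², and a grid search minimizing ABIC, taken here as -2 times the maximized log marginal likelihood plus twice the number of hyperparameters, with α²-independent constants dropped. The kernel, grid, and prior construction are invented and much simpler than a fault-slip inversion.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N = 40, 60
# Toy linear forward problem: a smoothing kernel G maps "slip" m to data y.
xs = np.linspace(0, 1, M)
obs = np.linspace(0, 1, N)
G = np.exp(-((obs[:, None] - xs[None, :]) ** 2) / 0.02)
m_true = np.exp(-((xs - 0.35) ** 2) / 0.01) + 0.6 * np.exp(-((xs - 0.75) ** 2) / 0.008)
y = G @ m_true + 0.05 * rng.normal(size=N)

# Proper smoothness prior: second differences in the interior, with the two
# endpoint rows included so that L is square and invertible.
L = np.zeros((M, M))
for i in range(1, M - 1):
    L[i, i - 1 : i + 2] = [1.0, -2.0, 1.0]
L[0, 0] = L[-1, -1] = 1.0

def abic(alpha2):
    """ABIC(alpha^2) up to additive constants, plus the MAP model."""
    A = G.T @ G + alpha2 * (L.T @ L)
    m_hat = np.linalg.solve(A, G.T @ y)
    s = np.sum((y - G @ m_hat) ** 2) + alpha2 * np.sum((L @ m_hat) ** 2)
    _, logdetA = np.linalg.slogdet(A)
    # -2 * maximized log marginal likelihood + 2 * (one hyperparameter);
    # terms not depending on alpha^2 are dropped.
    return N * np.log(s) + logdetA - M * np.log(alpha2) + 2.0, m_hat

alphas = 10.0 ** np.linspace(-4, 4, 81)
scores = [abic(a)[0] for a in alphas]
best = alphas[int(np.argmin(scores))]
print("ABIC-optimal alpha^2:", best)
print("max |m_hat - m_true|:", np.abs(abic(best)[1] - m_true).max())
```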


Journal ArticleDOI
TL;DR: In this paper, a general approach to estimating the parametric component of a semiparametric model is proposed, which is based on the idea of first estimating a one-dimensional subproblem of the original problem that is least favorable in the sense of Stein.
Abstract: In this paper, we outline a general approach to estimating the parametric component of a semiparametric model. For the case of a scalar parametric component, the method is based on the idea of first estimating a one-dimensional subproblem of the original problem that is least favorable in the sense of Stein. The likelihood function for the scalar parameter along this estimated subproblem may be viewed as a generalization of the profile likelihood for the problem. The scalar parameter is then estimated by maximizing this "generalized profile likelihood." This method of estimation is applied to a particular class of semiparametric models, where it is shown that the resulting estimator is asymptotically efficient.

307 citations


Journal ArticleDOI
TL;DR: In this paper, the efficiency bounds for conditional moment restrictions with a nonparametric component were derived, where the restriction is that a conditional expectation of this function is zero at some point in the parameter space.
Abstract: Efficiency bounds for conditional moment restrictions with a nonparametric component are derived. There is a given function of the data (a random sample from a distribution F) and a parameter. The restriction is that a conditional expectation of this function is zero at some point in the parameter space. The parameter has two parts: a finite-dimensional component and a general function evaluated at a subset of the conditioning variables. An example is a regression function that is additive in parametric and nonparametric components, as arises in sample selection models. Copyright 1992 by The Econometric Society.

295 citations


Journal ArticleDOI
TL;DR: The authors explored the influence of probability on risky choice, by proposing and estimating a parametric model of risky decision making, and found that the transformation differs for most subjects depending upon whether the risky outcomes are gains or losses.
Abstract: The appeal of expected utility theory as a basis for a descriptive model of risky decision making has diminished as a result of empirical evidence which suggests that individuals do not behave in a manner consistent with the prescriptive tenets of EUT. In this paper, we explore the influence of probability on risky choice, by proposing and estimating a parametric model of risky decision making. Our results suggest that models which provide for probability transformations are most appropriate for the majority of subjects. Further, we find that the transformation differs for most subjects depending upon whether the risky outcomes are gains or losses. Most subjects are considerably less sensitive to changes in mid-range probability than is proposed by the expected utility model and risk-seeking behavior over ‘long-shot’ odds is common.

279 citations
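The probability transformations the paper estimates can be illustrated with a standard one-parameter inverse-S weighting family (the Tversky-Kahneman form is used below; the paper's own specification may differ). Values of γ below 1, of the order typically reported in empirical fits, overweight long shots and flatten sensitivity in the mid-range, exactly the pattern the abstract reports.

```python
import numpy as np

def prob_weight(p, gamma):
    """Inverse-S probability weighting: for gamma < 1, small probabilities
    are overweighted and mid-range probabilities are flattened."""
    p = np.asarray(p, dtype=float)
    return p**gamma / (p**gamma + (1 - p) ** gamma) ** (1 / gamma)

p = np.array([0.01, 0.10, 0.50, 0.90, 0.99])
for gamma in (1.0, 0.6, 0.7):   # 1.0 recovers expected utility; <1 is typical of fits
    print(gamma, np.round(prob_weight(p, gamma), 3))
```

Allowing different γ for gains and losses, as the paper finds for most subjects, amounts to applying two different curves of this kind depending on the sign of the outcome.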


Journal ArticleDOI
TL;DR: In this article, it is shown that it is possible to identify binary threshold crossing models and binary choice models without imposing any parametric structure either on the systematic function of observable exogenous variables or on the distribution of the random term.
Abstract: In this paper, it is shown that it is possible to identify binary threshold crossing models and binary choice models without imposing any parametric structure either on the systematic function of observable exogenous variables or on the distribution of the random term. This identification result is employed to develop a fully nonparametric maximum likelihood estimator for both the function of observable exogenous variables and the distribution of the random term. The estimator is shown to be strongly consistent, and a two step procedure for its calculation is developed. The paper also includes examples of economic models that satisfy the conditions that are necessary to apply the results.

262 citations


Book ChapterDOI
TL;DR: The aim of this article is first to review how the standard econometric methods for panel data may be adapted to the problem of estimating frontier models and (in)efficiencies, and to clarify the difference between the fixed and random effect model.
Abstract: The aim of this article is first to review how the standard econometric methods for panel data may be adapted to the problem of estimating frontier models and (in)efficiencies. The aim is to clarify the difference between the fixed and random effect model and to stress the advantages of the latter. Then a semi-parametric method is proposed (using a non-parametric method as a first step), the message being that it is an appealing method for estimating frontier models and (in)efficiencies with panel data. Since analytic sampling distributions of efficiencies are not available, a bootstrap method is presented in this framework. This provides a tool for assessing the statistical significance of the obtained estimators. All the methods are illustrated in the problem of estimating the inefficiencies of 19 railway companies observed over a period of 14 years (1970–1983).

260 citations


Journal ArticleDOI
TL;DR: In this paper, a method for parameter set estimation in which the system model is assumed to contain both parametric and nonparametric uncertainty is presented, and the parameter set estimate is guaranteed to contain the true plant.
Abstract: A method for parameter set estimation in which the system model is assumed to contain both parametric and nonparametric uncertainty is presented. In the disturbance-free case, the parameter set estimate is guaranteed to contain the parameter set of the true plant. In the presence of stochastic disturbances, the parameter set estimate obtained from finite data records is shown to have the property that it contains the true-plant parameter set with probability one as the data length tends to infinity.

230 citations


Journal ArticleDOI
TL;DR: In this article, a two-degree-of-freedom approximation of the model is employed to examine a class of in-plane/out-of-plane motions that are coupled through the quadratic non-linearities.
Abstract: A theoretical model is derived which describes the non-linear response of a suspended elastic cable to small tangential oscillations of one support. The support oscillations, in general, result in parametric excitation of out-of-plane motion and simultaneous parametric and external excitation of in-plane motion. Cubic non-linearities due to cable stretching and quadratic non-linearities due to equilibrium cable curvature couple these motion components in producing full, three-dimensional cable response. In this study, a two-degree-of-freedom approximation of the model is employed to examine a class of in-plane/out-of-plane motions that are coupled through the quadratic non-linearities. A first-order perturbation analysis is utilized to determine the existence and stability of the planar and non-planar periodic motions that result from simultaneous parametric and external resonances. The analysis leads to a bifurcation condition governing planar stability and results highlight how planar stability is reduced and non-planar response is enhanced whenever a “two-to-one” internal resonance condition exists between a pair of in-plane and out-of-plane cable modes. This two-to-one resonant behavior is clearly observed in experimental measurements of cable response which are also in good qualitative agreement with theoretical predictions.
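A generic numerical sketch of the phenomenon: a two-degree-of-freedom system in which a parametrically and externally forced in-plane mode u drives an out-of-plane mode v through quadratic coupling near a two-to-one internal resonance. All coefficients are invented and this is not the paper's cable model; with the coupling set to zero, v simply decays.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Quadratically coupled pair: in-plane mode u (parametrically and externally
# forced), out-of-plane mode v, with a 2:1 internal resonance w_u ~ 2*w_v.
w_u, w_v = 2.0, 1.0
zeta, eps, F, Omega = 0.01, 0.15, 0.05, 2.0   # damping, coupling, forcing

def rhs(t, s):
    u, du, v, dv = s
    f = F * np.cos(Omega * t)
    ddu = -2 * zeta * w_u * du - (w_u**2) * (1 + f) * u - eps * v**2 + f
    ddv = -2 * zeta * w_v * dv - (w_v**2) * v - eps * u * v
    return [du, ddu, dv, ddv]

# Seed the out-of-plane mode with a tiny perturbation.
sol = solve_ivp(rhs, (0, 400), [0.0, 0.0, 1e-3, 0.0], max_step=0.05)
u, v = sol.y[0], sol.y[2]
print("late-time amplitudes  u:", np.abs(u[-2000:]).max(), " v:", np.abs(v[-2000:]).max())
# Growth of v from its tiny seed signals loss of planar stability through the
# two-to-one internal resonance; set eps = 0 and v decays instead.
```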

Journal ArticleDOI
01 Sep 1992
TL;DR: The goal is to demonstrate the versatility of Megiddo's parametric searching technique by obtaining efficient solutions to several problems in computational geometry.
Abstract: We present several applications in computational geometry of Megiddo's parametric searching technique. These applications include: (1) finding the minimum Hausdorff distance in the Euclidean metric between two polygonal regions under translation; (2) computing the biggest line segment that can be placed inside a simple polygon; (3) computing the smallest width annulus that can contain a given set of points in the plane; (4) solving the 1-segment center problem: given a set of points in the plane, find a placement for a given line segment (under translation and rotation) which minimizes the largest distance from the segment to the given points; (5) given a set of n points in 3-space, finding the largest radius r such that if we place a ball of radius r around each point, no segment connecting a pair of points is intersected by a third ball. Besides obtaining efficient solutions to all these problems (which, in every case, either improve considerably on previous solutions or are the first non-trivial solutions), our goal is to demonstrate the versatility of the parametric searching technique.

Journal ArticleDOI
01 Jun 1992
TL;DR: A trainable method of shape representation which can automatically capture the invariant properties of a class of shapes and provide a compact parametric description of variability is developed.
Abstract: We have developed a trainable method of shape representation which can automatically capture the invariant properties of a class of shapes and provide a compact parametric description of variability. We have applied the method to a family of flexible ribbons (worms) and to heart shapes in echocardiograms. We show that in both cases a natural parameterisation of shape results.

Journal ArticleDOI
23 Aug 1992
TL;DR: This work adopts randomized algorithms as the main approach toParametric query optimization and enhances them with a sideways information passing feature that increases their effectiveness in the new task, without much sacrifice in the output quality and with essentially zero run-time overhead.
Abstract: In most database systems, the values of many important run-time parameters of the system, the data, or the query are unknown at query optimization time. Parametric query optimization attempts to identify at compile time several execution plans, each one of which is optimal for a subset of all possible values of the run-time parameters. The goal is that at run time, when the actual parameter values are known, the appropriate plan should be identifiable with essentially no overhead. We present a general formulation of this problem and study it primarily for the buffer size parameter. We adopt randomized algorithms as the main approach to this style of optimization and enhance them with a sideways information passing feature that increases their effectiveness in the new task. Experimental results of these enhanced algorithms show that they optimize queries for large numbers of buffer sizes in the same time needed by their conventional versions for a single buffer size, without much sacrifice in the output quality and with essentially zero run-time overhead.

Journal ArticleDOI
TL;DR: A Wiener system, i.e., a system in which a linear dynamic part is followed by a nonlinear and memoryless one, is identified and a nonparametric algorithm recovering the characteristic from input-output observations of the whole system is proposed.
Abstract: A Wiener system, i.e., a system in which a linear dynamic part is followed by a nonlinear and memoryless one, is identified. No parametric restriction is imposed on the functional form of the nonlinear characteristic of the memoryless subsystem, and a nonparametric algorithm recovering the characteristic from input-output observations of the whole system is proposed. Its consistency is shown and the rate of convergence is given. An idea for identification of the impulse response of the linear subsystem is proposed. Results of numerical simulation are also presented.
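The heart of such nonparametric recovery is a kernel regression of outputs on inputs. The sketch below builds a toy Wiener system (an FIR filter followed by tanh) and estimates E[y | u] with a Nadaraya-Watson estimator; for Gaussian inputs this traces a smoothed, rescaled version of the nonlinearity. The bandwidth, lag handling, and kernel choice here are assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 20000
u = rng.normal(size=n)                      # i.i.d. Gaussian input
lam = np.array([1.0, 0.6, 0.3, 0.1])        # linear dynamic part (FIR)
w = np.convolve(u, lam)[:n]                 # internal, unobserved signal
y = np.tanh(1.5 * w) + 0.05 * rng.normal(size=n)   # memoryless nonlinearity + noise

def kernel_regression(u_grid, u, y, h=0.15):
    """Nadaraya-Watson estimate of E[y | u] with a Gaussian kernel."""
    K = np.exp(-0.5 * ((u_grid[:, None] - u[None, :]) / h) ** 2)
    return (K @ y) / K.sum(axis=1)

grid = np.linspace(-2.5, 2.5, 25)
mu_hat = kernel_regression(grid, u, y)
# For Gaussian inputs, E[y | u_n = u] is a smoothed, rescaled copy of the
# nonlinearity, recoverable up to scale and shift as in the paper's setting.
print(np.round(mu_hat, 2))
```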

Journal ArticleDOI
TL;DR: In this article, an extension of the threshold method for extreme values is developed, to consider the joint distribution of extremes of two variables, based on the point process representation of bivariate extremes.
Abstract: SUMMARY An extension of the threshold method for extreme values is developed, to consider the joint distribution of extremes of two variables. The methodology is based on the point process representation of bivariate extremes. Both parametric and nonparametric models are considered. The simplest case to handle is that in which both marginal distributions are known. For the more realistic case in which the marginal distributions are unknown, a mixed parametric-nonparametric method is proposed. The techniques are illustrated with data on sulphate and nitrate levels taken from a major study of acid rain.
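The univariate building block of threshold methods can be sketched with SciPy: fit a generalized Pareto distribution to exceedances over a high threshold and extrapolate tail probabilities. The synthetic "sulphate" data and the 95% threshold are invented; the paper's bivariate point-process model is beyond this sketch.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
sulphate = stats.lognorm(s=0.9).rvs(2000, random_state=rng)  # synthetic levels

u = np.quantile(sulphate, 0.95)          # high threshold
exceed = sulphate[sulphate > u] - u      # threshold exceedances
# Fit a generalized Pareto distribution to exceedances (location fixed at 0).
shape, loc, scale = stats.genpareto.fit(exceed, floc=0)
print("threshold:", u, " GPD shape:", shape, " scale:", scale)

# Estimated exceedance probability for a level x0 beyond the threshold.
x0 = 1.5 * u
p = (exceed.size / sulphate.size) * stats.genpareto.sf(x0 - u, shape, 0, scale)
print("P(X >", round(x0, 2), ") ~", p)
```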

Journal ArticleDOI
TL;DR: In this paper, a new estimator is proposed for discrete choice models with choice-based sampling, which can incorporate information on the marginal choice probabilities in a straightforward manner and for that case leads to a procedure that is computationally and intuitively more appealing than the estimators that have been proposed before.
Abstract: In this paper, a new estimator is proposed for discrete choice models with choice-based sampling. The estimator is efficient and can incorporate information on the marginal choice probabilities in a straightforward manner and for that case leads to a procedure that is computationally and intuitively more appealing than the estimators that have been proposed before. The idea is to start with a flexible parametrization of the distribution of the explanatory variables and then rewrite the estimator to remove dependence on these parametric assumptions. Copyright 1992 by The Econometric Society.

Journal ArticleDOI
TL;DR: This paper shows how to evaluate the effect that perturbations to the model, data, or case weights have on maximum likelihood estimates from censored survival data and describes new interpretations for some local influence statistics.
Abstract: In this paper we show how to evaluate the effect that perturbations to the model, data, or case weights have on maximum likelihood estimates from censored survival data. The ideas and methods also apply to other nonlinear estimation problems. We review the ideas behind using log-likelihood displacement and local influence methods. We describe new interpretations for some local influence statistics and show how these statistics extend and complement traditional case deletion influence statistics for linear least squares. These statistics identify individual and combinations of cases that have important influence on estimates of parameters and functions of these parameters. We illustrate the methods by reanalyzing the Stanford Heart Transplant data with a parametric regression model.

Journal ArticleDOI
TL;DR: In this article, a global nonlinear predictor is introduced which attempts to correct systematic bias due to the inhomogeneous distribution of data common in strange attractors, and the significance of these results is established by comparison with results from similar surrogate series, generated so as not to contain the property of interest.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the limit distribution of an estimator under such outside-the-model circumstances, where the true hazard rate a(s) is different from the parametric hazard rates.
Abstract: The usual parametric models for survival data are of the following form. Some parametrically specified hazard rate α(s, θ) is assumed for possibly censored random lifetimes X_1^0, …, X_n^0; one observes only X_i = min{X_i^0, c_i} and δ_i = I{X_i^0 ≤ c_i} for certain censoring times c_i that either are given or come from some censoring distribution. We study the following problems: What do the maximum likelihood estimator and other estimators really estimate when the true hazard rate α(s) is different from the parametric hazard rates? What is the limit distribution of an estimator under such outside-the-model circumstances? How can traditional model-based analyses be made model-robust? Does the model-agnostic viewpoint invite alternative estimation approaches? What are the consequences of carrying out model-based and model-robust bootstrapping? How do theoretical and empirical influence functions generalise to situations with censored data? How do methods and results carry over to more complex models for life history data like regression models and Markov chains?
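The "what does the MLE estimate outside the model" question is concrete even in the simplest case: under a constant-hazard (exponential) model with censoring, the MLE has a closed form, and simulating from a Weibull shows it converging to a least-false value rather than any true hazard. The simulation design below is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100000
# True lifetimes are Weibull (increasing hazard), so the constant-hazard
# exponential model is misspecified; censoring is uniform.
t0 = rng.weibull(2.0, n)            # true lifetimes
c = rng.uniform(0, 2.0, n)          # censoring times
x = np.minimum(t0, c)
delta = (t0 <= c).astype(float)

# Censored-exponential MLE: events divided by total time at risk.
lam_hat = delta.sum() / x.sum()
print("exponential MLE of hazard:", lam_hat)
# Under misspecification this converges not to any "true lambda" but to the
# solution of the limiting likelihood equation, E[delta] / E[X]: a
# least-false parameter of the kind the paper studies.
```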

Journal ArticleDOI
TL;DR: A parametric study on experimental data gathered from the response of an array of twelve tin oxide gas sensors to five alcohols and three beers finds that this network outperforms principal-component and cluster analyses by identifying similar beer odours and offers considerable benefit in its ability to cope with non-linear and highly correlated data.
Abstract: Considerable interest has recently arisen in the use of arrays of gas sensors together with an associated pattern-recognition technique to identify vapours and odours. The performance of the pattern-recognition technique depends upon the choice of parametric expression used to define the array output. At present, there is no generally agreed choice of this parameter for either individual sensors or arrays of sensors. In this paper, we have initially performed a parametric study on experimental data gathered from the response of an array of twelve tin oxide gas sensors to five alcohols and three beers. Five parametric expressions of sensor response are used to characterize the array output, namely, fractional conductance change, relative conductance, log of conductance change, and normalized versions of the last two expressions. Secondly, we have applied the technique of artificial neural networks (ANNs) to our preprocessed data. The Rumelhart back-propagation technique is used to train all networks. We find that nearly all of our ANNs can correctly identify all the alcohols using our array of twelve tin oxide sensors, and so we use the total sum of squared network errors to determine their relative performance. It is found that the lowest network error occurs for the response parameter defined as the fractional change in conductance, with a value of 1.3 × 10−4, which is almost half that for the relative conductance. The normalization procedure is also found to improve network performance and so is worthwhile. The optimal network for our data-set is found to contain a single hidden layer of seven elements with a learning rate of 1.0 and momentum term of 0.7, rather than the values of 0.9 and 0.6 recommended by Rumelhart and McClelland, respectively. For this network, the largest output error is less than 0.1. We find that this network outperforms principal-component and cluster analyses (discussed in Part 1) by identifying similar beer odours and offers considerable benefit in its ability to cope with non-linear and highly correlated data.
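A small sketch of the preprocessing step being compared: computing competing response parameters from baseline and gas-exposed conductances for a twelve-sensor array, with vector normalization. The conductance values are invented and the ANN stage is omitted.

```python
import numpy as np

rng = np.random.default_rng(6)
n_sensors = 12
G0 = rng.uniform(1.0, 5.0, n_sensors)        # baseline conductances
Gs = G0 * rng.uniform(1.2, 3.0, n_sensors)   # conductances in the test gas

frac_change = (Gs - G0) / G0                 # fractional conductance change
rel_cond = Gs / G0                           # relative conductance
log_change = np.log(Gs - G0)                 # log of conductance change

def normalize(v):
    """Scale a response vector to unit Euclidean length so that the
    pattern across the array, not its magnitude, drives classification."""
    return v / np.linalg.norm(v)

for name, v in [("fractional", frac_change), ("relative", rel_cond),
                ("log change", log_change)]:
    print(name, np.round(normalize(v), 3))
```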

Journal ArticleDOI
TL;DR: A computational environment suitable for optimum design of structures in the general class of plane frames is described and the use of the environment is illustrated in a study of a cable‐stayed bridge structure.
Abstract: A computational environment suitable for optimum design of structures in the general class of plane frames is described. Design optimization is based on the use of a genetic algorithm in which a population of individual designs is changed generation by generation applying principles of natural selection and survival of the fittest. The fitness of a design is assessed using an objective function in which violations of design constraints are penalized. Facilities are provided for automatic data editing and reanalysis of the structure. The environment is particularly useful when parametric studies are required. The use of the environment is illustrated in a study of a cable‐stayed bridge structure.
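A toy version of the loop described: tournament selection, crossover, and mutation driving a penalized objective. The two-variable "design" and its constraint stand in for member sizes checked by frame reanalysis; everything here is an invented illustration, not the paper's environment.

```python
import numpy as np

rng = np.random.default_rng(7)

def fitness(x):
    """Minimize a weight-like cost sum(x) subject to x[0]*x[1] >= 2,
    enforced by a penalty (a frame optimizer would instead penalize
    stress and deflection violations computed by structural reanalysis)."""
    violation = max(0.0, 2.0 - x[0] * x[1])
    return -(x.sum() + 10.0 * violation)        # higher is fitter

pop = rng.uniform(0.1, 3.0, size=(40, 2))
for gen in range(150):
    f = np.array([fitness(x) for x in pop])
    # Tournament selection: keep the fitter of two random individuals.
    a, b = rng.integers(0, len(pop), (2, len(pop)))
    parents = pop[np.where(f[a] > f[b], a, b)]
    # Uniform crossover between shuffled parent pairs, then Gaussian mutation.
    mates = parents[rng.permutation(len(parents))]
    mask = rng.random(pop.shape) < 0.5
    children = np.where(mask, parents, mates) + 0.05 * rng.normal(size=pop.shape)
    pop = np.clip(children, 0.1, 3.0)

best = pop[np.argmax([fitness(x) for x in pop])]
print("best design:", best, " cost:", best.sum())
```

The analytic optimum here is x = (√2, √2) with cost about 2.83, so the printed result gives a quick check that selection pressure plus the penalty is working.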

Journal ArticleDOI
J.S. Hwang1
TL;DR: A method is presented for generating interference-free tool paths from parametric compound surfaces, and planar tool paths, which are suitable for metal cutting, are produced.
Abstract: A method is presented for generating interference-free tool paths from parametric compound surfaces. A parametric compound surface is a surface that consists of parametric surface elements. The method is largely composed of two steps: points are obtained from a compound surface to be converted into a triangular polyhedron; tool paths are then generated from the polyhedron. An efficient algorithm is used in the calculation of cutter-location data, and planar tool paths, which are suitable for metal cutting, are produced. The time taken to obtain all the tool paths from a surface model that consists of a large number of parametric surfaces is short. Some real applications are presented.

Journal ArticleDOI
TL;DR: In this paper, composed error models are developed for maximum likelihood estimation from nonparametrically specified classes of frontiers, bringing together the previously separate parametric and nonparametric approaches to frontier estimation.
Abstract: In this paper we bring together the previously separate parametric and nonparametric approaches to production frontier estimation by developing composed error models for maximum likelihood estimation from nonparametrically specified classes of frontiers. This approach avoids the untestable restrictions of parametric functional forms and also provides a statistical foundation for nonparametric frontier estimation. We first examine the single output setting and then extend our formulation to the multiple output setting. The key step in developing the estimation problems is to identify operational constraint sets to ensure estimation from the desired class of frontiers. We also suggest algorithms for solving the resulting constrained likelihood function optimization problems.

Journal ArticleDOI
TL;DR: This work presents an algorithm to efficiently find the optimal alignments for all choices of the penalty parameters; it is then possible to systematically explore these alignments for those with the most biological or statistical interest.
Abstract: Current algorithms can find optimal alignments of two nucleic acid or protein sequences, often by using dynamic programming. While the choice of algorithm penalty parameters greatly influences the quality of the resulting alignments, this choice has been done in an ad hoc manner. In this work, we present an algorithm to efficiently find the optimal alignments for all choices of the penalty parameters. It is then possible to systematically explore these alignments for those with the most biological or statistical interest. Several examples illustrate the method.
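The parameter dependence is easy to see by brute force: recompute a Needleman-Wunsch global alignment score over a grid of gap penalties. This sweep is not the paper's efficient algorithm, which instead finds exactly the regions of parameter space that share an optimal alignment; the sequences and scores below are invented.

```python
import numpy as np

def nw_score(s, t, match=1.0, mismatch=-1.0, gap=-2.0):
    """Optimal global alignment score by Needleman-Wunsch dynamic programming."""
    m, n = len(s), len(t)
    D = np.zeros((m + 1, n + 1))
    D[:, 0] = gap * np.arange(m + 1)
    D[0, :] = gap * np.arange(n + 1)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if s[i - 1] == t[j - 1] else mismatch
            D[i, j] = max(D[i - 1, j - 1] + sub,   # substitution or match
                          D[i - 1, j] + gap,       # gap in t
                          D[i, j - 1] + gap)       # gap in s
    return D[m, n]

s, t = "GATTACA", "GCATGCT"
for gap in (-0.5, -1.0, -2.0, -4.0):
    print(f"gap penalty {gap}: optimal score {nw_score(s, t, gap=gap)}")
# The optimal alignment itself changes only at finitely many breakpoints in
# the penalty; the paper's method characterizes those regions directly.
```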

Journal ArticleDOI
TL;DR: In this article, a parametric approach based on the Gompertz distribution is presented to the problem of cure rate estimation, and the results show that the correction will lower the plateau of the Kaplan-Meier curve and, hence, the associated estimated cure rate.
Abstract: If a patient's failure time is incorrectly recorded as being too early, the correction will lower the plateau of the Kaplan-Meier curve and, hence, the associated estimated cure rate. Implications of this counter-intuitive observation are discussed. In addition, a parametric approach, based on the Gompertz distribution, to the problem of cure rate estimation is presented.
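One reason the Gompertz distribution suits cure-rate work: with a negative shape parameter its survival function plateaus at a positive level, which can be read directly as a cure fraction. The parameter values below are invented for illustration.

```python
import numpy as np

def gompertz_survival(t, b, c):
    """S(t) = exp(-(b/c) * (exp(c*t) - 1)); the hazard is b * exp(c*t)."""
    return np.exp(-(b / c) * (np.exp(c * t) - 1.0))

b, c = 0.5, -0.4          # c < 0: the hazard decays, so survival plateaus
t = np.array([0.0, 1.0, 5.0, 20.0, 100.0])
print("S(t):", np.round(gompertz_survival(t, b, c), 4))
print("implied cure rate exp(b/c):", np.exp(b / c))   # limit of S(t) as t -> inf
```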

Journal ArticleDOI
TL;DR: In this article, the authors compared five parametric and nonparametric methods in terms of predictive accuracy, using data from Montgomery County, Pennsylvania, and found that the repeat sales method produces the most accurate estimate among the parametric methods.

Journal ArticleDOI
TL;DR: A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates makes it possible to find the ML estimate of the extent of arbitrary point spread functions that admit a closed-form parametric description in the continuously coordinates.
Abstract: A formulation for maximum-likelihood (ML) blur identification based on parametric modeling of the blur in the continuous spatial coordinates is proposed. Unlike previous ML blur identification methods based on discrete spatial domain blur models, this formulation makes it possible to find the ML estimate of the extent, as well as other parameters, of arbitrary point spread functions that admit a closed-form parametric description in the continuous coordinates. Experimental results are presented for the cases of 1-D uniform motion blur, 2-D out-of-focus blur, and 2-D truncated Gaussian blur at different signal-to-noise ratios.

01 Jan 1992
TL;DR: A unifying survey of published methods for solving variants of the problem of passing a smooth surface through a set of data points is presented, identifying causes of shape defects and offering suggestions for improving the aesthetic quality of the interpolants.
Abstract: This paper has been published as a chapter in "Curve and Surface Design", H. Hagen (ed.), SIAM, 1992. Some of the figures from that paper are missing from this version, as are all of the black-and-white photographs. There are currently a number of methods for solving variants of the following problem: given a triangulated polyhedron P in three space, with or without boundary, construct a smooth surface that interpolates the vertices of P. In general, while the methods satisfy the continuity and interpolation requirements of the problem, they often fail to produce pleasing shapes. The purpose of this paper is to present a unifying survey of the published methods, to identify causes of shape defects, and to offer suggestions for improving the aesthetic quality of the interpolants. The problem of passing a surface through a set of data points arises in numerous areas of application such as medical imaging, geological modeling, scientific visualization, and geometric modeling. Variants of this problem have been approached from many directions. Tensor-product B-splines work well for modeling surfaces based on rectilinear control nets but are not sufficient for more general topologies. Triangulated data, however, can represent such general topologies.

Journal ArticleDOI
TL;DR: 2-D tracking provides the information for an inexpensive technique for estimating 3-D shape from motion, and a framework for dynamic world modeling for dynamically modeling the 2-D appearance and3-D geometry of a scene is described.
Abstract: This article describes techniques for dynamically modeling the 2-D appearance and 3-D geometry of a scene by integrating information from a moving camera. These techniques are illustrated by the design of a system that constructs a geometric description of a scene from the motion of a camera mounted on a robot arm. A framework for dynamic world modeling is described. The framework presents the fusion of perceptual information as a cyclic process composed of three phases: Predict, Match, and Update. A set of mathematical tools is presented for each of these phases. The use of these tools is illustrated by the design of a system for tracking edge lines in image coordinates and inferring the 3-D position from a known camera motion. The movement of edge-lines in a sequence of images is measured by tracking to maintain an image-plane description of movement. This description is composed of a list of edge-segments represented as a parametric primitive. Each parameter is composed of an estimated value, a temporal derivative, and a covariance matrix. Line-segment parameters are updated using a Kalman filter. The correspondence between observed and predicted segments is determined by a nearest-neighbor matching algorithm using distance between parameters normalized by covariance. It is observed that constraining the acceleration of edge-lines between frames permits the use of a very simple matching algorithm, thus yielding a very short cycle time. Three-dimensional structure is computed using the correspondence provided by the 2-D segment tracking process. Fusion of 3-D data from different viewpoints provides an accurate representation of the geometry of objects in the scene. An extended Kalman filter is applied to the inference of the 3-D position and orientation parameters of 2-D segments. This process demonstrates that 2-D tracking provides the information for an inexpensive technique for estimating 3-D shape from motion. Results from several image sequences taken from a camera mounted on a robot arm are presented to illustrate the reliability and precision of the technique.
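A minimal sketch of the Predict-Match-Update cycle for a single scalar segment parameter: a constant-velocity Kalman filter with a Mahalanobis gate standing in for the nearest-neighbour matching step. Matrices, noise levels, and measurements below are invented, not the system's actual values.

```python
import numpy as np

# State: [parameter, temporal derivative]; constant-velocity model.
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
H = np.array([[1.0, 0.0]])                # we observe the parameter only
Q = 0.01 * np.eye(2)                      # process noise covariance
R = np.array([[0.25]])                    # measurement noise covariance

x = np.array([0.0, 0.0])                  # initial estimate
P = np.eye(2)                             # initial covariance

def predict(x, P):
    return F @ x, F @ P @ F.T + Q

def mahalanobis2(z, x, P):
    """Squared innovation distance, used to gate candidate matches."""
    S = H @ P @ H.T + R
    v = z - H @ x
    return float(v @ np.linalg.solve(S, v))

def update(x, P, z):
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

for t, z in enumerate([0.1, 0.4, 0.9, 1.5, 2.2]):   # a drifting segment parameter
    x, P = predict(x, P)
    d2 = mahalanobis2(np.array([z]), x, P)
    if d2 < 9.0:                                    # roughly a 3-sigma gate
        x, P = update(x, P, np.array([z]))
    print(f"t={t} estimate={x[0]:.2f} rate={x[1]:.2f} gate distance^2={d2:.2f}")
```

Gating on the covariance-normalized distance is what lets a very simple nearest-neighbour matcher work, which is the short-cycle-time observation the abstract makes.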