
Showing papers on "Parametric statistics published in 1984"


Journal ArticleDOI
TL;DR: In this paper, a trade-off between tracking precision and robustness to modelling uncertainty is presented, where tracking accuracy is set according to the extent of parametric uncertainty and the frequency range of unmodelled dynamics.
Abstract: New results are presented on the sliding control methodology introduced by Slotine and Sastry (1983) to achieve accurate tracking for a class of non-linear time-varying multivariate systems in the presence of disturbances and parameter variations. An explicit trade-off is obtained between tracking precision and robustness to modelling uncertainty: tracking accuracy is set according to the extent of parametric uncertainty and the frequency range of unmodelled dynamics. The trade-off is further refined to account for time-dependence of model uncertainty.

1,178 citations
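The boundary-layer trade-off described above can be illustrated with a minimal first-order sketch; the plant, gains, and bounds below are hypothetical choices for illustration, not taken from the paper. A smoothed sliding controller keeps the tracking error within a boundary layer of width phi despite a bounded parametric error:

```python
import numpy as np

def sat(z):
    """Saturation replaces sign() to smooth the control inside the boundary layer."""
    return np.clip(z, -1.0, 1.0)

# Plant: dx/dt = a*x + u, with true a unknown but |a - a_hat| <= gamma (hypothetical numbers).
a_true, a_hat, gamma = 2.0, 1.5, 0.5
phi = 0.05            # boundary-layer width: the tracking-precision knob
eta = 1.0             # reaching-rate margin
dt, T = 1e-3, 5.0
x = 0.0
errs = []
for t in np.arange(0.0, T, dt):
    xd, xd_dot = np.sin(t), np.cos(t)   # reference trajectory
    s = x - xd                          # sliding variable
    k = gamma * abs(x) + eta            # gain sized to dominate the parametric error
    u = -a_hat * x + xd_dot - k * sat(s / phi)
    x += dt * (a_true * x + u)          # Euler step of the true plant
    errs.append(abs(x - np.sin(t + dt)))
```

Shrinking phi tightens tracking but pushes the control toward discontinuous switching: the precision/robustness trade-off in miniature.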


Journal ArticleDOI
TL;DR: Methods of inference which can be used for implicit statistical models whose distribution theory is intractable are developed, and the kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model.
Abstract: A prescribed statistical model is a parametric specification of the distribution of a random vector, whilst an implicit statistical model is one defined at a more fundamental level in terms of a generating stochastic mechanism. This paper develops methods of inference which can be used for implicit statistical models whose distribution theory is intractable. The kernel method of probability density estimation is advocated for estimating a log-likelihood from simulations of such a model. The development and testing of an algorithm for maximizing this estimated log-likelihood function is described. An illustrative example involving a stochastic model for quantal response assays is given. Possible applications of the maximization algorithm to ad hoc methods of parameter estimation are noted briefly, and illustrated by an example involving a model for the spatial pattern of displaced amacrine cells in the retina of a rabbit.

441 citations
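A minimal sketch of the kernel approach to simulated likelihood, using a trivially simple Gaussian mechanism as a stand-in for a genuinely intractable implicit model; a grid search replaces the paper's maximization algorithm:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)

# "Observed" data; the Gaussian mechanism stands in for an implicit model whose
# density would be intractable (true location 2.0).
obs = rng.normal(2.0, 1.0, size=200)

def simulated_loglik(theta, n_sim=2000):
    """Kernel estimate of the log-likelihood built from model simulations only."""
    sims = rng.normal(theta, 1.0, size=n_sim)  # run the generating mechanism
    kde = gaussian_kde(sims)                   # kernel density estimate of its law
    return float(np.sum(np.log(kde(obs))))

# A coarse grid search stands in for a proper maximization algorithm.
grid = np.linspace(0.0, 4.0, 41)
theta_hat = grid[int(np.argmax([simulated_loglik(th) for th in grid]))]
```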


Journal ArticleDOI
TL;DR: This paper describes a practical algorithm for large-scale mean-variance portfolio optimization that can be made extremely efficient by "sparsifying" the covariance matrix with the introduction of a few additional variables and constraints, and by treating the transaction cost schedule as an essentially nonlinear nondifferentiable function.
Abstract: This paper describes a practical algorithm for large-scale mean-variance portfolio optimization. The emphasis is on developing an efficient computational approach applicable to the broad range of portfolio models employed by the investment community. What distinguishes these from the "usual" quadratic program is (i) the form of the covariance matrix arising from the use of factor and scenario models of return, and (ii) the inclusion of transactions limits and costs. A third aspect is the question of whether the problem should be solved parametrically in the risk-reward trade off parameter, λ, or separately for several discrete values of λ. We show how the parametric algorithm can be made extremely efficient by "sparsifying" the covariance matrix with the introduction of a few additional variables and constraints, and by treating the transaction cost schedule as an essentially nonlinear nondifferentiable function. Then we show how these two seemingly unrelated approaches can be combined to yield good approximate solutions when minimum trading size restrictions "buy or sell at least a certain amount, or not at all" are added. In combination, these approaches make possible the parametric solution of problems on a scale not heretofore possible on computers where CPU time and storage are the constraining factors.

418 citations
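The λ-parametric trade-off and the factor-model (sparsifiable) covariance can be sketched in the unconstrained case, where the optimizer has a closed form; all dimensions and numbers below are illustrative, and the paper's algorithm additionally handles transaction limits and costs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Factor-model covariance Sigma = B F B' + D (k factors << n assets), the
# structure that makes "sparsification" pay off.
n, k = 50, 3
B = rng.normal(0.0, 0.3, size=(n, k))       # factor loadings
F = np.diag([0.04, 0.02, 0.01])             # factor covariance
D = np.diag(rng.uniform(0.01, 0.05, n))     # specific (idiosyncratic) risk
Sigma = B @ F @ B.T + D
mu = rng.uniform(0.02, 0.12, n)             # expected returns

def frontier_weights(lam):
    """Unconstrained maximizer of mu'w - lam * w'Sigma w."""
    return np.linalg.solve(2.0 * lam * Sigma, mu)

risks, rewards = [], []
for lam in [1.0, 2.0, 5.0, 10.0]:           # sweep the risk-reward parameter
    w = frontier_weights(lam)
    rewards.append(float(mu @ w))
    risks.append(float(w @ Sigma @ w))
```

Sweeping λ traces the efficient frontier: larger λ penalizes variance more, so portfolio risk falls monotonically along the sweep.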


Journal ArticleDOI
TL;DR: A framework for classification learning is presented that assumes that learners use presented instances to infer the density functions of category exemplars over a feature space and that subsequent classification decisions employ a relative likelihood decision rule based on these inferred density functions.
Abstract: We present a framework for classification learning that assumes that learners use presented instances (whether labeled or unlabeled) to infer the density functions of category exemplars over a feature space and that subsequent classification decisions employ a relative likelihood decision rule based on these inferred density functions. A specific model based on this general framework, the category density model, was proposed to account for the induction of normally distributed categories either with or without error correction or provision of labeled instances. The model was implemented as a computer simulation. Results of five experiments indicated that people could learn category distributions not only without error correction, but without knowledge of the number of categories or even that there were categories to be learned. These and other findings dictated a more general learning model that integrated distributional representations based on both parametric descriptions and stored instances. In this article we present a new model of category learning and classification based on the acquisition and use of distributional knowledge. This category density model, derived from work by Fried (1979), makes the central assumption that the goal of the category learner is to develop a schematic description.

347 citations


Journal ArticleDOI
TL;DR: In this article, it was shown that the accuracy of computed second moments can be improved greatly by extending from the second order closure (Gaussian closure) to the fourth order closure and that further refinement is unnecessary for practical purposes.
Abstract: The statistical moments of a non-linear system responding to random excitations are governed by an infinite hierarchy of equations; therefore, suitable closure schemes are needed to compute the more important lower order moments approximately. One easily implemented and versatile scheme is to set the cumulants of response variables higher than a given order to zero. This is applied to three non-linear oscillators with very different dynamic properties, and with Gaussian white noises acting as external and/or parametric excitations. It is found that the accuracy of computed second moments can be improved greatly by extending from the second order closure (Gaussian closure) to the fourth order closure, and that further refinement is unnecessary for practical purposes. Treatment of nonstationary transient response is also illustrated.

220 citations
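The cumulant-closure idea can be checked on a scalar example with a known stationary density; the specific system and coefficients below are illustrative, not the three oscillators of the paper. Setting cumulants above order two to zero (Gaussian closure, m4 = 3·m2²) turns the open moment hierarchy into a solvable quadratic:

```python
import numpy as np
from scipy.integrate import quad

# Scalar system dx = -(alpha*x + beta*x**3) dt + sigma dW (illustrative coefficients).
alpha, beta, sigma = 1.0, 1.0, 1.0

# Ito moment equation for m2 = E[x^2]:  0 = -2*alpha*m2 - 2*beta*m4 + sigma**2.
# Gaussian closure sets m4 = 3*m2**2, giving 6*beta*m2**2 + 2*alpha*m2 - sigma**2 = 0:
m2_closure = (-alpha + np.sqrt(alpha**2 + 6.0 * beta * sigma**2)) / (6.0 * beta)

# This example has an exactly known stationary density, so the closure is checkable:
# p(x) is proportional to exp(-2*(alpha*x**2/2 + beta*x**4/4)/sigma**2).
p = lambda x: np.exp(-2.0 * (alpha * x**2 / 2 + beta * x**4 / 4) / sigma**2)
m2_exact = quad(lambda x: x**2 * p(x), -10, 10)[0] / quad(p, -10, 10)[0]
```

Even this second-order (Gaussian) closure lands within a few percent of the exact second moment here; the abstract's point is that a fourth-order closure tightens this further.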


Journal ArticleDOI
TL;DR: In this article, evidence is presented that kernel estimates of acceleration and velocity of height, and of height itself, may offer advantages over parametric fitting via functional models; both approaches are biased, but the parametric one shows qualitative and quantitative distortions that are not easily predictable.
Abstract: In recent years, nonparametric curve estimates have been extensively explored in theoretical work. There has, however, been a certain lack of convincing applications, in particular involving comparisons with parametric techniques. The present investigation deals with the analysis of human height growth, where longitudinal measurements were collected for a sample of boys and a sample of girls. Evidence is presented that kernel estimates of acceleration and velocity of height, and of height itself, might offer advantages over a parametric fitting via functional models recently introduced. For the specific problem considered, both approaches are biased, but the parametric one shows qualitative and quantitative distortion, neither of which is easily predictable. Data-analytic problems involved with kernel estimation concern the choice of kernels, the choice of the smoothing parameter, and also whether the smoothing parameter should be chosen uniformly for all subjects or individually.

198 citations
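A minimal sketch of kernel estimation of growth velocity, on synthetic logistic "height" data rather than the study's longitudinal sample, with a bandwidth chosen by eye rather than by the data-driven methods the paper discusses:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic "height" measurements: a logistic pubertal-spurt curve plus noise.
ages = np.arange(1.0, 18.5, 0.5)
height = 170.0 / (1.0 + np.exp(-0.8 * (ages - 12.0))) + rng.normal(0.0, 0.3, ages.size)

def kernel_smooth(t_grid, t_obs, y_obs, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((t_grid[:, None] - t_obs[None, :]) / h) ** 2)
    return (w @ y_obs) / w.sum(axis=1)

grid = np.linspace(4.0, 16.0, 241)
smooth = kernel_smooth(grid, ages, height, h=0.75)
velocity = np.gradient(smooth, grid)       # height velocity on the fine grid
peak_age = float(grid[int(np.argmax(velocity))])
```

Smoothing flattens the spurt peak somewhat, an instance of the bias both approaches share, which is why the bandwidth choice the abstract highlights matters.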


Journal ArticleDOI
TL;DR: It is believed that recursive partitioning analysis will often be the preferred multivariate method and is especially useful for identifying interaction terms that may then be included in parametric multivariate analyses.

169 citations


Journal ArticleDOI
TL;DR: In this article, a parametric model for lanthanide and actinide atomic and crystal energy levels is presented that correlates trends in Hartree-Fock calculations with empirically determined atomic parameters in such a way that predictions for unclassified complex cases can be made from the analysis of simpler ones.
Abstract: A parametric model for lanthanide and actinide atomic and crystal energy levels is presented that correlates trends in Hartree–Fock calculations with empirically determined atomic parameters in such a way that predictions for unclassified complex cases can be made from the analysis of simpler ones. When appropriate effective operators, including electrostatic operators up to the third rank for f^N configurations and magnetic operators up to the second rank for f^N and f^N d shells, are used in the parametric Hamiltonian, statistical errors of 10–20 cm^-1 are typical for the simpler examples presented.

167 citations


Journal ArticleDOI
TL;DR: One way to interpret the recent work in flexible functional forms is to see it as the use of richer parametric families of models in an attempt to reduce these two sorts of statistical biases: estimator bias and excess rejection probability.
Abstract: One perspective on the recent work in flexible functional forms is that the use of such forms represents an attempt to remove the model-induced augmenting hypothesis that is inevitably linked to parametric statistical inference. The Fourier flexible form is discussed from this perspective. The discussion relies on heuristic and graphical arguments rather than formal mathematics. The generality of parametric statistical inference is limited by the augmenting hypothesis induced by model specification. For instance, to conclude that rejection of the integrability conditions in a translog consumer demand system implies rejection of the theory of consumer demand requires the augmenting hypothesis that all possible consumer demand systems must belong to the translog family (Christensen, Jorgenson, and Lau). This reliance on an assumed parametric model is not only philosophically distasteful but is of practical importance in applications. Estimators can be seriously biased by specification error; a test can reject a null hypothesis with a probability that greatly exceeds its nominal rejection probability (Gallant 1981). One way to interpret the recent work in flexible functional forms is to see it as the use of richer parametric families of models in an attempt to reduce these two sorts of statistical biases: estimator bias and excess rejection probability. Most of the flexible forms that have appeared in the literature are second-order (or Diewert-flexible) forms. Technically, this means that if g(x) is to be approximated by g(x|θ), then at any given point x0 there is a corresponding choice of parameters θ0 such that …

149 citations
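The idea of enriching a second-order form with trigonometric terms can be sketched as a least-squares fit of a one-variable Fourier flexible form; the target function and rescaling below are illustrative stand-ins:

```python
import numpy as np

# Target "unknown" function on [0, 1], a stand-in for a demand or cost function.
x = np.linspace(0.0, 1.0, 201)
g = np.log(1.0 + x)

def fourier_design(x, J):
    """Quadratic part plus J sine/cosine pairs on a rescaled domain."""
    s = x * (2.0 * np.pi * 0.9)            # keep the data inside one period
    cols = [np.ones_like(x), x, x**2]
    for j in range(1, J + 1):
        cols += [np.cos(j * s), np.sin(j * s)]
    return np.column_stack(cols)

def fit(J):
    X = fourier_design(x, J)
    beta, *_ = np.linalg.lstsq(X, g, rcond=None)
    resid = g - X @ beta
    return resid, float(np.sum(resid**2))

resid1, rss1 = fit(1)
resid3, rss3 = fit(3)     # richer parametric family: fit error cannot grow
```

Because the families are nested, enlarging the basis can only reduce the residual sum of squares, which is the sense in which richer parametric families shrink specification bias.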


Journal ArticleDOI
TL;DR: In this paper, a robust control technique is developed at the second level forcing the controlled manipulator to follow the behaviour of a decoupled linear time invariant system, and the problem of converting coordinates is reformulated as a nonlinear dynamic problem and is solved again by making use of robust adaptive techniques.

146 citations



Journal ArticleDOI
TL;DR: The examples presented show that acceleration curves might allow a better quantification of the mid-growth spurt (MS) and a more differentiated analysis of the pubertal spurt (PS) by comparison with parameters defined in terms of velocity.
Abstract: Summary: A method is introduced for estimating acceleration, velocity and distance of longitudinal growth curves and it is illustrated by analysing human height growth. This approach, called kernel estimation, belongs to the class of smoothing methods and does not assume an a priori fixed functional model, and not even that one and the same model is applicable for all children. The examples presented show that acceleration curves might allow a better quantification of the mid-growth spurt (MS) and a more differentiated analysis of the pubertal spurt (PS). Accelerations are prone to follow random variations present in the data, and parameters defined in terms of acceleration are, therefore, validated by a comparison with parameters defined in terms of velocity. Our non-parametric curve-fitting approach is also compared with parametric fitting via a model suggested by Preece and Baines (1978).

Journal ArticleDOI
TL;DR: In this approach, the parametric form is applied without the usual computational nightmare, relying on subdivision algorithms.
Abstract: In this approach, the parametric form is applied without the usual computational nightmare. The key is to view the parametric range as an interval, relying on subdivision algorithms.
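A standard instance of the subdivision idea is de Casteljau splitting of a Bézier curve, where each half of the parametric interval gets its own control polygon; this generic sketch illustrates the technique rather than the paper's specific algorithm:

```python
import numpy as np

def de_casteljau(ctrl, t):
    """Evaluate a Bezier curve at t and split it there via de Casteljau."""
    pts = [np.asarray(ctrl, dtype=float)]
    while pts[-1].shape[0] > 1:
        p = pts[-1]
        pts.append((1.0 - t) * p[:-1] + t * p[1:])   # repeated linear interpolation
    left = np.array([p[0] for p in pts])             # control points of [0, t]
    right = np.array([p[-1] for p in pts[::-1]])     # control points of [t, 1]
    return pts[-1][0], left, right

ctrl = [(0.0, 0.0), (1.0, 2.0), (3.0, 3.0), (4.0, 0.0)]
point, left, right = de_casteljau(ctrl, 0.5)

# Subdivision is exact: the left half at parameter 0.5 is the original at 0.25.
p_orig, _, _ = de_casteljau(ctrl, 0.25)
p_left, _, _ = de_casteljau(left, 0.5)
```

Repeating the split gives control polygons that converge to the curve itself, which is what makes interval-based subdivision cheap compared with direct algebraic manipulation.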

Journal ArticleDOI
TL;DR: In this article, new penetration, scabbing and perforation formulae are derived for use in the design of reinforced concrete barriers to withstand impact by hard missiles using dimensional analysis together with physical theories for the various impact processes.

Journal ArticleDOI
TL;DR: In this article, it is argued that the prior distribution judged reasonable in the observabilistic case implies a prior distribution for the parametric case that is more compelling than others derived especially for the latter.
Abstract: Although there are basically two models for binary trials—a parametric model and an observabilistic or predictive model—for purposes of inference the former can be considered a special or limiting case of the latter. This being so, when little is known or it is desired to adopt an impartial stance about the object of inference before conducting a series of binary trials, applying a Bayesian approach to the predictive case is shown to suffice for the parametric case as well. It is argued that the prior distribution judged reasonable in the observabilistic case implies a prior distribution for the parametric case that is more compelling than others derived especially for the latter. This prior is, incidentally, attributed to Bayes and Laplace.

Journal ArticleDOI
TL;DR: In this article, a parametric model with short-channel capabilities is presented for MOS transistors, which covers the subthreshold and strong inversion regions with a continuous transition between these regions.
Abstract: A parametric model with short-channel capabilities is presented for MOS transistors. It covers the subthreshold and strong inversion regions with a continuous transition between these regions. The effects included in the model are mobility reduction, carrier velocity saturation, body effect, source-drain resistance, drain-induced barrier lowering, and channel length modulation. The model simulates accurately the current characteristics as well as the transconductance and output conductance characteristics which are important for analog circuit simulation.

Journal ArticleDOI
TL;DR: The article compares the data-based 1- and 3-parameter estimators in a simulation experiment to the maximum likelihood estimator assuming the correct failure distribution and censoring mechanism and presents a data-based algorithm for smoothing parameter selection.
Abstract: Two general classes of nonparametric kernel estimators of the hazard function are introduced, which include both a 1-parameter estimator and a more complex 3-parameter estimator. In addition, employing the idea of cross-validation, the authors present a data-based algorithm for smoothing parameter selection. The article compares the data-based 1- and 3-parameter estimators in a simulation experiment to the maximum likelihood estimator assuming the correct failure distribution and censoring mechanism. The 3-parameter estimator is found to perform well over a wide range of settings. On the average, the estimator recovers the shape of the underlying failure hazard and is competitive with the parametric estimator over a subset of the positive half line. Two examples illustrate possible uses of the nonparametric estimators.
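A minimal 1-parameter (bandwidth-only) kernel hazard estimator can be sketched by smoothing the Nelson-Aalen increments; censoring, which the paper's estimators handle, is omitted here for brevity:

```python
import numpy as np

rng = np.random.default_rng(3)

# Uncensored exponential failure times (constant true hazard 0.5).
n = 2000
times = np.sort(rng.exponential(2.0, size=n))     # rate 0.5, so mean 2.0

def kernel_hazard(t, times, b):
    """Gaussian-kernel smoothing of Nelson-Aalen jumps 1/(number at risk)."""
    at_risk = times.size - np.arange(times.size)  # risk-set size at each failure
    k = np.exp(-0.5 * ((t - times) / b) ** 2) / (b * np.sqrt(2.0 * np.pi))
    return float(np.sum(k / at_risk))

h_hat = kernel_hazard(1.0, times, b=0.5)          # evaluate away from the t = 0 boundary
```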

Journal ArticleDOI
TL;DR: In this article, the authors proposed the residual-based stochastic predictor as an alternative procedure for obtaining forecasts with a static nonlinear econometric model, which modifies the usual Monte Carlo approach to stochastic simulations of the model in that calculated residuals over the sample period are used as proxies for disturbances instead of random draws from some assumed parametric distribution.
Abstract: This paper proposes the residual-based stochastic predictor as an alternative procedure for obtaining forecasts with a static nonlinear econometric model. This procedure modifies the usual Monte Carlo approach to stochastic simulations of the model in that calculated residuals over the sample period are used as proxies for disturbances instead of random draws from some assumed parametric distribution. In comparison with the Monte Carlo predictor, the residual-based predictor should be less sensitive to distributional assumptions concerning disturbances in the system. It is also less demanding computationally. The large-sample asymptotic moments of the residual-based predictor are derived in this paper and compared with those of the Monte Carlo predictor. Both procedures are asymptotically unbiased. In terms of asymptotic mean squared prediction error (AMSPE), the Monte Carlo is efficient relative to the residual-based when the number of replications in the Monte Carlo simulations is large relative to sample size. This order of relative efficiency is reversed, however, when replication and sample sizes are similar. In any event, the amount by which the AMSPE of either predictor exceeds the lower bound for AMSPE is small as a percentage of the lower bound AMSPE when sample and replication sizes are at least of moderate magnitude. The paper also discusses the extension of the residual-based and Monte Carlo procedures to the estimation of higher order moments and cumulative distribution functions of endogenous variables in the system.
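The residual-based idea can be sketched on a toy static nonlinear model, log y = a + b·x + e: reusing the calculated residuals instead of draws from a fitted parametric distribution corrects the nonlinearity bias of the naive point prediction (all numbers below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy static nonlinear model: log(y) = a + b*x + e (parameters illustrative).
a, b, sig, n = 1.0, 0.5, 0.4, 500
x = rng.uniform(0.0, 4.0, n)
y = np.exp(a + b * x + rng.normal(0.0, sig, n))

# OLS on the log scale; sample residuals serve as proxies for the disturbances.
X = np.column_stack([np.ones(n), x])
coef, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ coef

x0 = 2.0
naive = float(np.exp(coef[0] + coef[1] * x0))     # ignores the disturbance entirely
residual_based = float(np.mean(np.exp(coef[0] + coef[1] * x0 + resid)))
true_mean = float(np.exp(a + b * x0 + sig**2 / 2))
```

By Jensen's inequality the naive prediction understates the conditional mean of y; averaging the model over the empirical residuals recovers it without assuming normality of e.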


Journal ArticleDOI
TL;DR: In this article, the authors consider estimation of the parameters of a Gaussian linear model when the model may be invalid and a larger model should be assumed; estimates are robust if their maximum risk over the larger model is finite, and the most robust estimate is the least squares estimate under the larger model.
Abstract: We study estimation of the parameters of a Gaussian linear model $\mathscr{M}_0$ when we entertain the possibility that $\mathscr{M}_0$ is invalid and a larger model $\mathscr{M}_1$ should be assumed. Estimates are robust if their maximum risk over $\mathscr{M}_1$ is finite and the most robust estimate is the least squares estimate under $\mathscr{M}_1$. We apply notions of Hodges and Lehmann (1952) and Efron and Morris (1971) to obtain (biased) estimates which do well under $\mathscr{M}_0$ at a small price in robustness. Extensions to confidence intervals, simultaneous estimation of several parameters and large sample approximations applying to nested parametric models are also discussed.

Journal ArticleDOI
TL;DR: GONO as mentioned in this paper is a numerical wave prediction model used for the preparation of forecasts as well as hindcasts, which is a hybrid model: the wind sea is described in a parametric way, but swell is treated in a spectral manner.
Abstract: GONO is a numerical wave prediction model used for the preparation of forecasts as well as hindcasts. It is a hybrid model: the wind sea is described in a parametric way, but swell is treated in a spectral manner. For the wind sea there are two prognostic parameters: the zero-moment wave height and the mean direction. Pure wind-sea spectra are assumed to have a quasi-universal shape: above the spectral peak, f^-5 behavior is assumed; below the peak, a linear frequency dependence is taken. The directional dependence is of the cos^2 θ type. Empirical relations are used to derive the full set of wind-sea parameters from the prognostic variables and the wind vector. The equations for the prognostic variables are solved on a discrete grid with the help of a simple finite-difference scheme. For the accurate propagation of swell, possibly over large distances, a ray technique is used. The full two-dimensional spectrum is reconstructed for selected grid points for which the results of the ray technique and the wind-sea calculations are combined. The model is a shallow water model because bottom dissipation effects are taken into account, but effects of refraction are disregarded. Depending on wind speed, these effects may be important in areas where the depth is less than about 100 m. The model has not been applied in regions with depths less than 15 m, therefore extreme shallow water effects are not considered. The behavior of the model was studied in considerable detail during the recent Sea Wave Modeling Project (SWAMP) in a few idealized situations. Knowledge of the model behavior in more realistic situations stems from its routine operational application. Runs are made four times a day on a grid covering the North Sea and the Norwegian Sea, and the results are monitored continuously. Presently, we have a data base containing about four years of observations and model predictions.
From this data base we discuss a few interesting storms, and we present a statistical analysis of all of the available material. As part of this analysis we consider the effect of the quality of the input winds on the model performance.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the properties of four-wave mixing and compared the results with those for optical parametric processes and non-degenerate hyper-Raman scattering.

Journal ArticleDOI
TL;DR: A new nonparametric estimate for nonlinear discrete-time dynamic systems is considered that is weakly consistent under a specific condition on the transition probability operator of a stationary Markov process.
Abstract: A new nonparametric estimate for nonlinear discrete-time dynamic systems is considered. The new algorithm is weakly consistent under a specific condition on the transition probability operator of a stationary Markov process. The estimate is applicable when a parametric state model of the system is difficult to choose.

Journal ArticleDOI
TL;DR: In this article, bounds on the statistical efficiency of estimators of the poles and zeros of an ARMA process based on estimates of the process autocorrelation function (ACF) are considered.
Abstract: This paper considers bounds on the statistical efficiency of estimators of the poles and zeros of an ARMA process based on estimates of the process autocorrelation function (ACF). Special attention is paid to autoregressive (AR) and AR plus white noise processes. It is seen that reducing the ARMA process data to a given set of consecutive lags of the popular lagged-product ACF estimates prior to parameter estimation increases Cramér-Rao bounds on the generalized error covariance. A parametric study of the bound deterioration for some illustrative signal and noise situations reveals some empirical strategies for choosing ACF estimate lags to preserve statistical information. Analysis is based on the relative information index (RII) [2], and derivations of the large sample Fisher's information matrix for the raw data and for the lagged-product ACF estimate of an ARMA process are included.
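The lagged-product ACF estimates and the Yule-Walker equations referred to above can be sketched for a pure AR(2) process, with coefficients chosen arbitrarily for illustration:

```python
import numpy as np

rng = np.random.default_rng(5)

# AR(2) process x_t = 1.5 x_{t-1} - 0.7 x_{t-2} + w_t (coefficients illustrative).
N = 5000
x = np.zeros(N)
w = rng.normal(0.0, 1.0, N)
for t in range(2, N):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + w[t]

def acf(x, k):
    """Lagged-product ACF estimate r(k) = (1/N) * sum_t x_t x_{t+k}."""
    return float(np.dot(x[: x.size - k], x[k:]) / x.size)

r = [acf(x, k) for k in range(3)]

# Yule-Walker equations: [[r0, r1], [r1, r0]] a = [r1, r2].
R = np.array([[r[0], r[1]], [r[1], r[0]]])
a_hat = np.linalg.solve(R, np.array([r[1], r[2]]))
```

Keeping only lags 0-2 of the ACF is exactly the data reduction the paper analyzes: it suffices for consistency here, but discards sample information relative to the raw data, which is what the Cramér-Rao comparison quantifies.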

Journal Article
TL;DR: In this article, the authors derive likelihood expressions for parametric statistical models under such general circumstances, and split each marked point into two characteristic parts, called innovation and non-innovation, and then characterize this representation in terms of the statistical model.
Abstract: Complicated failure time data which can involve, e.g., random covariates, censored observations and multiple failures, is here considered as a sample path of a marked point process (MPP). Our main task is to derive likelihood expressions for parametric statistical models under such general circumstances. To do this, and motivated by concrete examples, we split each marked point into two characteristic parts, called innovation and non-innovation, and then characterize this representation in terms of the statistical model. Technically the paper is based on the martingale approach to point processes.

Proceedings ArticleDOI
01 Mar 1984
TL;DR: The problem of estimating sinusoidal or narrowband signals with a time-varying center frequency is considered and the overdetermined modified Yule-Walker equations are used to estimate a set of constant model parameters.
Abstract: The problem of estimating sinusoidal or narrowband signals with a time-varying center frequency is considered. The signal parameters are estimated by fitting an autoregressive model with time-varying coefficients to the data. The overdetermined modified Yule-Walker equations are used to estimate a set of constant model parameters. Some numerical examples illustrating the behavior of the estimator are presented, and its accuracy aspects are briefly discussed.

Journal ArticleDOI
TL;DR: In this paper, sensitivity analysis is extended to find the parametric dependencies of systems of ordinary differential equations which exhibit limit cycle oscillations, and quantitative relations between the system parameters and the observable period, amplitude, phase and cycle shape are developed.

Journal ArticleDOI
TL;DR: In this paper, an approximate renormalized equation of evolution for an arbitrary nonlinear single-degree-of-freedom system externally driven by Gaussian parametric fluctuations of finite correlation time was determined.
Abstract: We determine an approximate renormalized equation of evolution for an arbitrary nonlinear single-degree-of-freedom system externally driven by Gaussian parametric fluctuations of finite correlation time. The renormalization scheme used here gives a second order equation with a time-and-state-dependent "diffusion coefficient". We are able to calculate the diffusion coefficient in closed form. The steady-state distribution can easily be obtained from the evolution equation. We are thus able to determine the parameter dependence of the steady-state distribution and, in particular, the influence of a non-vanishing correlation time of the fluctuations on the steady-state distribution.

Journal ArticleDOI
TL;DR: In this article, a new mechanism of instability of an electron (positron) beam under conditions of parametric X-ray emission is identified, which determines the amplitude increment of the instability and the starting current.

Journal ArticleDOI
TL;DR: In this paper, a parametric solution for the linear state-feedback eigenstructure assignment problem is developed, modified in such a way as to accommodate the case where the set of closed-loop eigenvalues and the sets of open-loop Eigenvalues have elements in common, and each common eigenvalue introduces a number of partially free design parameter vectors in addition to the original free ones.
Abstract: In Fahmy and O'Reilly (1982 a, b), a parametric solution for the linear state-feedback eigenstructure-assignment problem is developed. This approach is here modified in such a way as to accommodate the case where the set of closed-loop eigenvalues and the set of open-loop eigenvalues have elements in common. It is particularly shown that each common eigenvalue introduces a number of partially free design parameter vectors in addition to the original free ones. The paper concludes with illustrative numerical examples.
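Eigenvalue assignment with the extra parametric freedom of a multi-input system can be sketched with SciPy's `place_poles`, which implements the related Kautsky-Nichols-Van Dooren parametric approach rather than the Fahmy-O'Reilly method itself; the system matrices below are arbitrary illustrations:

```python
import numpy as np
from scipy.signal import place_poles

# A 3-state, 2-input system: the extra input leaves freedom in the eigenvector
# choice, the kind of freedom parametric methods exploit.
A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [-1.0, -2.0, -3.0]])
B = np.array([[0.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])
desired = np.array([-1.0, -2.0, -3.0])

fb = place_poles(A, B, desired)       # parametric (KNV-style) assignment
K = fb.gain_matrix                    # state-feedback gain, u = -K x
closed = np.sort(np.linalg.eigvals(A - B @ K).real)
```

With more inputs than one, infinitely many gains K achieve the same closed-loop spectrum; the free design parameters select among the possible closed-loop eigenvectors.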