
Showing papers on "Parametric statistics published in 1982"


Journal ArticleDOI
15 Sep 1982
TL;DR: In this paper, it is shown that by taking this overdetermined parametric evaluation approach, a reduction in data-induced model parameter hypersensitivity is obtained, and a corresponding improvement in modeling performance results.
Abstract: In seeking rational models of time series, the concept of approximating second-order statistical relationships (i.e., the Yule-Walker equations) is often explicitly or implicitly invoked. The parameters of the hypothesized rational model are typically selected so that these relationships "best represent" a set of autocorrelation lag estimates computed from time series observations. One of the objectives of this paper will be that of establishing this fundamental approach to the generation of rational models. An examination of many popular contemporary spectral estimation methods reveals that the parameters of a hypothesized rational model are estimated upon using a "minimal" set of Yule-Walker equation evaluations. This results in an undesired parameter hypersensitivity and a subsequent decrease in estimation performance. To counteract this parameter hypersensitivity, the concept of using more than the minimal number of Yule-Walker equation evaluations is herein advocated. It is shown that by taking this overdetermined parametric evaluation approach, a reduction in data-induced model parameter hypersensitivity is obtained, and a corresponding improvement in modeling performance results. Moreover, upon adapting a singular value decomposition representation of an extended-order autocorrelation matrix estimate to this procedure, a desired model order determination method is obtained and a further significant improvement in modeling performance is achieved. This approach makes possible the generation of low-order high-quality rational spectral estimates from short data lengths.
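
To make the overdetermined Yule-Walker idea concrete, here is a minimal sketch (not the paper's exact algorithm; the function names and the rank-truncation rule are illustrative assumptions): more Yule-Walker equations than AR parameters are stacked and solved by SVD-based least squares, with optional truncation of small singular values standing in for the order-determination step.

```python
# A sketch of overdetermined Yule-Walker AR estimation with an SVD-based
# least-squares solve. Function names and the rank-truncation rule are
# illustrative assumptions, not the paper's exact algorithm.
import numpy as np

def autocorr(x, max_lag):
    """Biased autocorrelation lag estimates r[0..max_lag]."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    return np.array([np.dot(x[:n - k], x[k:]) / n for k in range(max_lag + 1)])

def estimate_ar_overdetermined(x, p, n_eqs, rank=None):
    """Fit AR(p) coefficients a1..ap from n_eqs >= p Yule-Walker equations

        r[k] + a1*r[k-1] + ... + ap*r[k-p] = 0,   k = p+1, ..., p+n_eqs,

    solved in the least-squares sense; keeping only `rank` singular values
    mimics the SVD-based order-determination step."""
    r = autocorr(x, p + n_eqs)
    R = np.array([[r[k - j] for j in range(1, p + 1)]
                  for k in range(p + 1, p + n_eqs + 1)])
    rhs = -r[p + 1:p + n_eqs + 1]
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if rank is not None:
        s = np.where(np.arange(len(s)) < rank, s, np.inf)  # suppress small modes
    return Vt.T @ ((U.T @ rhs) / s)

# Example: AR(2) process estimated from a short record (expect roughly [-1.5, 0.7]).
rng = np.random.default_rng(0)
x = np.zeros(200)
for t in range(2, 200):
    x[t] = 1.5 * x[t - 1] - 0.7 * x[t - 2] + rng.standard_normal()
print(estimate_ar_overdetermined(x, p=2, n_eqs=20))
```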

516 citations


Journal ArticleDOI
TL;DR: In this paper, the Fourier flexible functional form is used to determine whether an industry exhibits constant returns to scale, whether the production function is homothetic, or whether inputs are separable.

331 citations


Book ChapterDOI
TL;DR: The theory of belief functions assesses evidence by fitting it to a scale of canonical examples in which the meaning of a message depends on chance as discussed by the authors, and in order to analyse parametric statistical problems within the framework of this theory, we must specify the evidence on which the parametric model is based.
Abstract: The theory of belief functions assesses evidence by fitting it to a scale of canonical examples in which the meaning of a message depends on chance. In order to analyse parametric statistical problems within the framework of this theory, we must specify the evidence on which the parametric model is based. This article gives several examples to show how the nature of this evidence affects the analysis. These examples also illustrate how the theory of belief functions can deal with problems where the evidence is too weak to support a parametric model.

128 citations


Journal ArticleDOI
TL;DR: In this article, a single-degree-of-freedom system with a special type of nonlinear damping and both external and parametric white-noise excitations is considered.
Abstract: A single-degree-of-freedom system with a special type of non-linear damping and both external and parametric white-noise excitations is considered. For the special case, when the intensities of coordinates and velocity modulation satisfy a certain condition an exact analytical solution is obtained to the corresponding stationary Fokker-Planck-Kolmogorov equation yielding an expression for joint probability density of coordinate and velocity. This solution is analyzed particularly in connection with stochastic stability problem for the corresponding linear system; certain implications are illustrated for the system, which is stable with respect to probability but unstable in the mean square. The solution obtained may be used to check different approximate methods for analysis of systems with randomly varying parameters.
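
For orientation only, the general form of the stationary Fokker-Planck-Kolmogorov equation that such an exact solution must satisfy is sketched below (Itô form for the state (x, v); the paper's specific nonlinear damping f(x, v) and the condition relating the modulation intensities are not reproduced here).

```latex
% Stationary FPK equation for a single-DOF system \dot{x} = v,
% \dot{v} = -f(x,v) + white-noise excitation, with state-dependent
% diffusion intensity D(x,v) collecting external and parametric terms.
0 = -\frac{\partial}{\partial x}\bigl[v\,p(x,v)\bigr]
    + \frac{\partial}{\partial v}\bigl[f(x,v)\,p(x,v)\bigr]
    + \tfrac{1}{2}\,\frac{\partial^{2}}{\partial v^{2}}\bigl[D(x,v)\,p(x,v)\bigr]
```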

126 citations


Journal ArticleDOI
TL;DR: In this paper, a heuristic rule is proposed to reduce the number of individual MILPs that have to be solved and thus reduce the total computational effort, assuming that if the same values of the integer variables are optimal at two different values of a change parameter, these same integer variable values will also be optimal at all intermediate parameter values.
Abstract: A method is developed for carrying out parametric analysis on a mixed integer linear program (MILP) as either objective function coefficients or right-hand-side values of the constraints are varied continuously. The method involves solving MILPs at point values of the parameters of variation and joining the results by LP parametric analysis. The procedure for parametric analysis on the objective function can be continued until a theoretical result proves that the analysis is complete. However, a heuristic rule that is presented may greatly reduce the number of individual MILPs that have to be solved and thus reduce the total computational effort. The rule assumes that if the same values of the integer variables are optimal at two different values of the change parameter, these same integer variable values will also be optimal at all intermediate parameter values. If the rule is applied, a complete parametric analysis requires solving 2n different MILPs, where n is the number of different sets of optimal integer variable values.
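
A minimal sketch of the heuristic rule described above, assuming a placeholder `solve_milp(theta)` that returns the optimal integer part and objective value at a given parameter value (any MILP solver could be plugged in); intervals whose endpoints share the same integer solution are accepted, others are bisected.

```python
# A sketch of the heuristic rule for parametric analysis of a MILP.
# `solve_milp(theta)` is a hypothetical placeholder returning
# (optimal integer part, objective value) at parameter value theta.
def parametric_milp(solve_milp, lo, hi, tol=1e-3):
    """Partition [lo, hi] into intervals sharing one assumed-optimal integer solution.

    Heuristic: if the same integer values are optimal at both endpoints of an
    interval, assume they are optimal everywhere in between; otherwise bisect.
    (Endpoints are re-solved on recursion for simplicity.)
    """
    y_lo, _ = solve_milp(lo)
    y_hi, _ = solve_milp(hi)
    if y_lo == y_hi or hi - lo < tol:
        return [(lo, hi, y_lo)]
    mid = 0.5 * (lo + hi)
    return (parametric_milp(solve_milp, lo, mid, tol)
            + parametric_milp(solve_milp, mid, hi, tol))

# Toy stand-in for a MILP: the optimal integer y in {0, 1, 2} switches with theta.
def toy_milp(theta):
    y = min(range(3), key=lambda v: (v - theta) ** 2)
    return y, (y - theta) ** 2

print(parametric_milp(toy_milp, 0.0, 2.0))
```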

110 citations


Book
01 Jan 1982

95 citations



Journal ArticleDOI
TL;DR: In this paper, the threshold pump intensity for singly and doubly resonant parametric oscillators and the efficiency of doubly resonant parametric oscillators are calculated for the general case of unequal confocal beam parameters.
Abstract: The threshold pump intensity for singly and doubly resonant parametric oscillators and the efficiency of doubly resonant parametric oscillators is calculated for the general case of unequal confocal beam parameters.

83 citations


Journal ArticleDOI
TL;DR: In this paper, a procedure for making statistical inferences about differences between population means from the output of general circulation model (GCM) climate experiments is presented, yielding a potentially more powerful technique for detecting climatic change than the simpler schemes used heretofore.
Abstract: A procedure for making statistical inferences about differences between population means from the output of general circulation model (GCM) climate experiments is presented. A parametric time series modeling approach is taken, yielding a potentially more powerful technique for detecting climatic change than the simpler schemes used heretofore. The application of this procedure is demonstrated through the use of GCM control data to estimate the variance of winter and summer time averages of daily mean surface air temperature. The test application provides estimates of the magnitude of climatic change that the procedure should be able to detect. A related result of the analysis is that autoregressive processes of higher than first order are needed to adequately model the majority of the GCM time series considered.
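
A minimal sketch, under assumed notation, of the key calculation: the variance of a seasonal time average implied by a fitted AR(p) model rather than by an independence assumption, using the large-sample result Var(mean of N days) ≈ S(0)/N with S(0) = σ²/(1 − Σφ)². Data and the order choice below are illustrative.

```python
# Variance of a time average under a fitted AR(p) model (large-N approximation).
import numpy as np

def fit_ar_yule_walker(x, p):
    """AR(p) coefficients phi_1..phi_p and innovation variance via Yule-Walker."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])
    phi = np.linalg.solve(R, r[1:])
    sigma2 = r[0] - phi @ r[1:]
    return phi, sigma2

def variance_of_time_average(x, p, n_avg):
    """Approximate variance of an n_avg-day mean implied by the AR(p) fit."""
    phi, sigma2 = fit_ar_yule_walker(x, p)
    s0 = sigma2 / (1.0 - phi.sum()) ** 2      # spectral density at frequency zero
    return s0 / n_avg

# Example: synthetic AR(1) "daily temperature" anomalies, 90-day seasonal mean.
rng = np.random.default_rng(1)
temp = np.zeros(3000)
for t in range(1, 3000):
    temp[t] = 0.8 * temp[t - 1] + rng.standard_normal()
print(variance_of_time_average(temp, p=1, n_avg=90))
```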

78 citations


Journal ArticleDOI
TL;DR: Algorithms are presented which extend an existing semiautomatic curve-fitting algorithm to describe a periodic function or some parametric curve, closed or otherwise, and their applicability in image processing and related fields is discussed.

66 citations



Journal ArticleDOI
TL;DR: In this paper, the authors used Ferguson's (1973) nonparametric priors in a Bayesian analysis of finite populations, and showed that the usual estimates and confidence intervals for the population mean in simple and stratified random samples can be justified in Bayesian terms.
Abstract: SUMMARY Using Ferguson's (1973) non-parametric priors in a Bayesian analysis of finite populations, we show that asymptotically, at least, the usual estimates and confidence intervals for the population mean in simple and stratified random samples can be justified in Bayesian terms. We then apply these models for estimating population percentiles, and a new procedure for interval estimates in stratified sampling is developed. At present there are a number of general approaches in the statistical literature for making inferences based on data selected from a finite population. Many of these are summarized in Smith (1976). Probably the most widely used methodology, in practice, is the design-based inference from random samples, where criteria such as bias and mean-squared error, derived from the sampling distribution of the data, are used. This methodology does not preclude the possibility that the estimator may be based on an inherent superpopulation model; however, confidence intervals and mean-squared errors are based only on the probability distribution of the design, and ignore the model. One of the reasons for the popularity of this approach may be that it is non-parametric in nature and sample sizes tend to be large for many applications, so that the loss of efficiency compared with parametric superpopulation approaches is considered less important than the robustness of the design-based approach with respect to a wide class of models.
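
As a point of reference, here is a minimal sketch of the "usual" design-based estimate and confidence interval for the mean under stratified random sampling, i.e. the quantities the paper justifies asymptotically from the Bayesian side. The data and stratum sizes are made up.

```python
# Standard design-based stratified estimator with finite-population correction.
import numpy as np

def stratified_mean_ci(samples, stratum_sizes, z=1.96):
    """samples: one array per stratum; stratum_sizes: population counts N_h."""
    N = float(sum(stratum_sizes))
    W = np.array(stratum_sizes, dtype=float) / N            # stratum weights
    means = np.array([np.mean(s) for s in samples])
    var_means = np.array([np.var(s, ddof=1) / len(s) * (1.0 - len(s) / Nh)
                          for s, Nh in zip(samples, stratum_sizes)])  # with fpc
    est = float(W @ means)
    se = float(np.sqrt(W ** 2 @ var_means))
    return est, (est - z * se, est + z * se)

rng = np.random.default_rng(2)
strata = [rng.normal(10.0, 2.0, 30), rng.normal(14.0, 3.0, 50)]
print(stratified_mean_ci(strata, stratum_sizes=[300, 700]))
```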

Journal ArticleDOI
TL;DR: In this paper, the authors describe a method for analyzing survival data with a generic family of continuously differentiable survival distributions, which can be related by a common differential equation (Turner and Pruitt, 1978).
Abstract: SUMMARY A generic family of survival models is defined by a common differential equation. A biological justification is provided for a subset of the family. These survival distributions have the usual advantages of a parametric method, while retaining much of the flexibility of a nonparametric method. Competing survival models are placed in a single parametric frame so that potential users can see the relations between various family members. Any of the model parameters can be made a function of concomitant variables. The existence of a closed-form solution for the differential equation which defines the generic family makes it possible to apply the maximum likelihood method to censored and interval-censored data with computational ease. Thus, longitudinal, cross-sectional and concomitant information can be combined in a single study. The hierarchical structure of the family enhances the statistical tractability of the method. The aim of this paper is to describe a method for analyzing survival data with a generic family of continuously-differentiable survival distributions. These models can be related by a common differential equation (Turner and Pruitt, 1978). The generic family is characterized in §2, and the biological justification for a subset of the family is provided. A specific model is described, relating the generic family parameters to concomitant variables. The procedure for defining the likelihood function for a specific survival model, and for obtaining maximum likelihood estimates for its parameters, is presented in §3. Uncensored, censored and interval-censored data can be utilized in any combination. An approach to the selection of a specific model is outlined in §4, together with a method for approximating the covariance matrix of its parameters. Two examples are presented in §5. First, the so-called natural history of tetralogy of Fallot is studied in order that the effectiveness of surgical intervention can be assessed. Second, total anomalous pulmonary venous connection is studied in order to define those risk factors that indicate early surgical intervention.
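
A minimal sketch of the maximum-likelihood machinery for censored survival data, with a Weibull distribution standing in (as an assumption) for the paper's generic differential-equation family; `event` is 1 for an observed time and 0 for a right-censored one.

```python
# Maximum-likelihood fit of a parametric survival model to right-censored data.
import numpy as np
from scipy.optimize import minimize

def neg_log_lik(params, t, event):
    """params = (log shape, log scale); censored subjects contribute log S(t)."""
    k, lam = np.exp(params)
    z = (t / lam) ** k
    log_f = np.log(k / lam) + (k - 1.0) * np.log(t / lam) - z   # log density
    log_S = -z                                                   # log survival
    return -np.sum(np.where(event == 1, log_f, log_S))

# Synthetic data: Weibull(shape=1.5, scale=10) event times, uniform censoring.
rng = np.random.default_rng(3)
t_true = 10.0 * rng.weibull(1.5, 200)
censor = rng.uniform(0.0, 20.0, 200)
t = np.minimum(t_true, censor)
event = (t_true <= censor).astype(int)

res = minimize(neg_log_lik, x0=np.log([1.0, np.mean(t)]), args=(t, event))
print(np.exp(res.x))   # should be near (1.5, 10.0)
```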

Proceedings ArticleDOI
01 May 1982
TL;DR: This paper develops the use of probability density function (pdf) estimation for text-independent speaker identification and compares the performance of two parametric and one non-parametric pdf estimation methods to one distance classification method that uses the Mahalanobis distance.
Abstract: Most text-independent speaker identification methods to date depend on the use of some distance metric for classification. In this paper we develop the use of probability density function (pdf) estimation for text-independent speaker identification. We compare the performance of two parametric and one non-parametric pdf estimation methods to one distance classification method that uses the Mahalanobis distance. Under all conditions tested, the pdf estimation methods performed substantially better than the Mahalanobis distance method. The best method is a non-parametric pdf estimation method.
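
A minimal sketch, on synthetic feature vectors rather than speech data, of the contrast the paper measures: classification by a parametric pdf estimate (per-speaker Gaussian log-likelihood) versus classification by Mahalanobis distance to each speaker's mean.

```python
# Parametric pdf classification vs Mahalanobis-distance classification.
import numpy as np
from scipy.stats import multivariate_normal

rng = np.random.default_rng(4)
d, n_train = 5, 200
train = {s: rng.normal(loc=s, scale=1.0 + 0.3 * s, size=(n_train, d)) for s in range(3)}

# "Training": per-speaker mean vector and covariance matrix.
params = {s: (x.mean(axis=0), np.cov(x, rowvar=False)) for s, x in train.items()}

def classify_pdf(x):
    return max(params, key=lambda s: multivariate_normal.logpdf(x, *params[s]))

def classify_mahalanobis(x):
    def dist(s):
        mu, cov = params[s]
        diff = x - mu
        return diff @ np.linalg.solve(cov, diff)
    return min(params, key=dist)

test = [(s, rng.normal(loc=s, scale=1.0 + 0.3 * s, size=d))
        for s in range(3) for _ in range(50)]
for name, clf in [("pdf", classify_pdf), ("mahalanobis", classify_mahalanobis)]:
    print(name, np.mean([clf(x) == s for s, x in test]))
```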

Journal Article
TL;DR: A number of tests based on the likelihood ratio statistic have been developed, as discussed by the authors; this paper summarizes these tests and available information on their power, and describes methods for comparing models with substantially different functional forms or models based on different behavioural paradigms.
Abstract: Probabilistic choice models, such as logit and probit models, are highly sensitive to a variety of specification errors, including the use of incorrect functional forms for the systematic component of the utility function, incorrect specification of the probability distribution of the random component of the utility function, and incorrect specification of the choice set. Specification errors can cause large forecasting errors, so it is of considerable importance to have means of testing models for the presence of these errors. A number of tests based on the likelihood ratio statistic have been developed. These tests and available information on their power are summarized in this paper. The likelihood ratio test can entail considerable computational difficulty, owing to the need to evaluate the likelihood function for both the null and alternative hypotheses. Substantial gains in computational efficiency can be achieved through the use of a test that requires evaluating the likelihood function only for the null hypothesis. A Lagrangian multiplier test that has this property is described, and numerical examples of its computational properties are given. An important disadvantage of conventional specification tests is that they do not permit comparisons of models that belong to different parametric families in order to determine which model best explains the available data. Thus, these tests cannot be used to compare models whose utility functions have substantially different functional forms or models that are based on different behavioural paradigms. Several methods for dealing with this problem, including the construction of hybrid models and the Cox test of separate families of hypotheses, are described. (Author/TRRL)
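
A minimal sketch of a likelihood ratio specification test between two nested binary logit models fitted by direct maximization of the log-likelihood. The data and variable names are illustrative; in the choice-model setting the "full" model would add the suspected missing term to the systematic utility.

```python
# Likelihood ratio test between nested binary logit specifications.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 500
x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
p = 1.0 / (1.0 + np.exp(-(0.5 + 1.0 * x1 + 0.8 * x2)))
y = rng.binomial(1, p)

def max_loglik(X, y):
    """Maximized binary-logit log-likelihood for design matrix X."""
    def nll(beta):
        eta = X @ beta
        return -np.sum(y * eta - np.log1p(np.exp(eta)))
    return -minimize(nll, x0=np.zeros(X.shape[1])).fun

X_null = np.column_stack([np.ones(n), x1])        # restricted specification
X_full = np.column_stack([np.ones(n), x1, x2])    # adds the suspected term
lr = 2.0 * (max_loglik(X_full, y) - max_loglik(X_null, y))
print("LR statistic:", lr, "p-value:", chi2.sf(lr, df=1))
```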

Journal ArticleDOI
Robert J. Fitzgerald
TL;DR: In this paper, the effects of combining velocity measurements with the usual position measurements in a simple form of target tracking filter are presented. And the effects on steady-state performance and filter gains are shown, as well as data on time required for convergence to steady state.
Abstract: Parametric data is presented showing the effects of combining velocity measurements with the usual position measurements in a simple form of target tracking filter. Effects on steady-state performance and filter gains are shown, as well as data on time required for convergence to steady state.
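
A minimal sketch, with a constant-velocity model and made-up noise levels, of the effect the paper tabulates: adding a velocity measurement to a position-only tracking filter changes the steady-state Kalman gains and covariance.

```python
# Steady-state Kalman filter: position-only vs position+velocity measurements.
import numpy as np

def steady_state(F, H, Q, R, iters=500):
    """Iterate the Kalman covariance recursion to (near) steady state."""
    P = np.eye(F.shape[0])
    K = None
    for _ in range(iters):
        P = F @ P @ F.T + Q                        # predict
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)             # gain
        P = (np.eye(F.shape[0]) - K @ H) @ P       # update
    return K, P

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 0.01 * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])

K_pos, P_pos = steady_state(F, np.array([[1.0, 0.0]]), Q, np.array([[1.0]]))
K_pv, P_pv = steady_state(F, np.eye(2), Q, np.diag([1.0, 0.25]))

print("position-only steady-state variances:", np.diag(P_pos))
print("position+velocity steady-state variances:", np.diag(P_pv))
```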

Journal ArticleDOI
TL;DR: A (two-person) problem of decentralized observation, detection, and coordination is studied via simulation, and the effects of the prior probability and parametric dependencies on the decision rules are studied.
Abstract: A (two-person) problem of decentralized observation, detection, and coordination is studied via simulation. The effects of the prior probability and parametric dependencies on the decision rules are studied. Sensitivity to the data, asymmetries in the decision rules, and other phenomena are investigated.

Proceedings ArticleDOI
01 Dec 1982
TL;DR: In this article, the class of assignable eigenvectors and generalized eigenvectors associated with the assigned eigenvalues is explicitly described by a complete set of n r-dimensional free parameter vectors.
Abstract: This paper generalizes a recently reported method [1] of closed-loop eigenstructure assignment via state feedback in a linear multivariable system (with n states and r control inputs). By introducing a lemma on the differentiation of determinants, the class of assignable eigenvectors and generalized eigenvectors associated with the assigned eigenvalues is explicitly described by a complete set of n r-dimensional free parameter vectors. This parametric characterization conveniently organizes the nonuniqueness of the solution of the eigenvalue-assignment problem and thereby provides an efficient means of further modifying the system dynamic response.
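
A minimal sketch of the basic eigenstructure-assignment computation that the paper parameterizes: for each desired closed-loop eigenvalue, a vector is taken from the null space of [A − λI, B] (this is where the free parameter vectors enter), and the feedback gain follows from F V = W. The matrices and parameter-vector choices are illustrative, not the paper's algorithm; repeated eigenvalues and generalized eigenvectors are not handled.

```python
# Eigenstructure assignment by state feedback via null spaces of [A - lam*I, B].
import numpy as np
from scipy.linalg import null_space

def assign_eigenstructure(A, B, eigs, picks=None):
    n = A.shape[0]
    V_cols, W_cols = [], []
    for i, lam in enumerate(eigs):
        N = null_space(np.hstack([A - lam * np.eye(n), B]))
        c = np.ones(N.shape[1]) if picks is None else picks[i]   # free parameters
        vw = N @ c
        V_cols.append(vw[:n])
        W_cols.append(vw[n:])
    V, W = np.column_stack(V_cols), np.column_stack(W_cols)
    return W @ np.linalg.inv(V)          # feedback F with eig(A + B F) = eigs

A = np.array([[0.0, 1.0], [2.0, -1.0]])
B = np.array([[0.0], [1.0]])
F = assign_eigenstructure(A, B, eigs=[-1.0, -2.0])
print(np.linalg.eigvals(A + B @ F))      # approximately [-1, -2]
```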

Journal ArticleDOI
TL;DR: A new cost-effective method allowing nonlinear microwave circuits to be designed by computer is demonstrated by application to parametric frequency dividers.
Abstract: A new cost-effective method allowing nonlinear microwave circuits to be designed by computer is demonstrated by application to parametric frequency dividers. The method is based on frequency-domain representations of both nonlinear circuit components and network voltages and currents. A special optimization strategy determines the unknown parameters of the linear part of the circuit while estimating the need for a complete analysis of the nonlinear network at each step of the iterative process.

Journal ArticleDOI
TL;DR: An examination of some of the experimental and statistical properties of repeated measures designs and the relative merits of four parametric statistical procedures for testing a repeated measures hypothesis.

Journal ArticleDOI
TL;DR: In this paper, a simple procedure using a modification of the Studentized maximum modulus technique is proposed for constructing confidence bands for isotonic dose-response curves, which assumes no parametric model for the doseresponse curves.
Abstract: A simple procedure using a modification of the Studentized maximum modulus technique is proposed for constructing confidence bands for isotonic dose‐response curves. This procedure assumes no parametric model for the dose‐response curves and is shown by an example to compare well with a parametric procedure.

Journal ArticleDOI
TL;DR: In this paper, a method for analyzing multidegree-of-freedom systems having a repeated natural frequency subjected to a parametric excitation is presented, where the analysis is based on the method of multiple scales.

Journal ArticleDOI
TL;DR: In this article, the authors apply the Markov model to the study of the slurry-supernate interface in batch sedimentation and show that the final velocity of the interface is equal to the mean of the steady state pdf of particle velocities.

Book ChapterDOI
01 Jan 1982
TL;DR: Multivariate analysis can be thought of as a methodology for detection, description and validation of structure in p-dimensional (p > 1) point clouds, where structure cannot be perceived by looking at a set of estimated parameters.
Abstract: Multivariate analysis can be thought of as a methodology for detection, description and validation of structure in p-dimensional (p > 1) point clouds. Classical multivariate analysis relies on the assumption that the observations forming the point cloud(s) have a Gaussian distribution. All information about structure is then contained in the means and covariance matrices, and the well-known apparatus for estimation and inference in parametric families can be brought to bear. The uncomfortable ingredient in this approach is the Gaussianity assumption. The data may be Gaussian with occasional outliers or even the bulk of the data simply might not conform to a Gaussian distribution. Methods are discussed that do not involve any distributional assumptions. In this case, structure cannot be perceived by looking at a set of estimated parameters. An obvious remedy is to look at the data themselves, at the p-dimensional point cloud(s), and to base the description of structure on those views. As perception in more than three dimensions is difficult, the dimensionality of the data first has to be reduced, most simply by projection. Projection of the data generally implies loss of information. As a consequence, multivariate structure does not usually show up in all projections, and no single projection might contain all the information. It is therefore important to judiciously choose the set of projections on which the model of the structure is to be based. This is the goal of projection pursuit procedures.
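
A minimal sketch of the projection-pursuit idea on synthetic data: candidate two-dimensional projections are scored with a crude non-Gaussianity index (deviation of kurtosis from the Gaussian value 3), and the best-scoring frame is kept. The index and the random-search strategy are simplifying assumptions, not the procedures surveyed in the chapter.

```python
# Crude projection pursuit: score random 2-D frames by a non-Gaussianity index.
import numpy as np

def projection_index(z):
    """Sum over projected coordinates of |excess kurtosis|."""
    z = (z - z.mean(axis=0)) / z.std(axis=0)
    return float(np.sum(np.abs(np.mean(z ** 4, axis=0) - 3.0)))

def pursue(X, n_frames=2000, rng=None):
    rng = np.random.default_rng() if rng is None else rng
    p = X.shape[1]
    best_Q, best_score = None, -np.inf
    for _ in range(n_frames):
        Q, _ = np.linalg.qr(rng.standard_normal((p, 2)))   # random orthonormal 2-D frame
        score = projection_index(X @ Q)
        if score > best_score:
            best_Q, best_score = Q, score
    return best_Q, best_score

# Example: a 5-D point cloud whose structure (two clusters) lives in coordinate 0.
rng = np.random.default_rng(6)
X = rng.standard_normal((500, 5))
X[:250, 0] += 4.0
Q, score = pursue(X, rng=rng)
print(score)
print(np.round(Q, 2))   # the winning frame should load mostly on coordinate 0
```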

Journal ArticleDOI
TL;DR: In this article, the properties of parametric and nonparametric tests for the problem of validating stochastic simulation models are discussed and asymptotic efficiencies for various types of alternatives are developed for some of the tests.
Abstract: Properties of proposed parametric and nonparametric tests for the problem of validating stochastic simulation models are discussed. Asymptotic efficiencies for various types of alternatives are developed for some of the tests. Numerical procedures along with simulation methods are used to evaluate the adequacy of normal approximations to the null distributions of the statistics and the small-sample power of the tests.

Journal ArticleDOI
TL;DR: In this paper, a unified way of looking at parametric and combination resonances in systems with periodic coefficients and different amounts of damping in the various modes of vibration is presented, where the stability boundaries of the coupled Mathieu equations are determined by the harmonic balance method, Fourier series with periods 2T and T being assumed.

Journal ArticleDOI
TL;DR: A number of tests based on the likelihood ratio statistic have been developed, as discussed by the authors; available information on their power is summarized in this paper and numerical examples of their computational properties are given.

Journal ArticleDOI
TL;DR: An algorithm for solving a particular (PNP) which implements the prototype algorithm by forming compact partitions and underestimating functions based upon rules given by Falk and Soland.
Abstract: In this paper, we consider a general family of nonconvex programming problems. All of the objective functions of the problems in this family are identical, but their feasibility regions depend upon a parameter ϑ. This family of problems is called a parametric nonconvex program (PNP). Solving (PNP) means finding an optimal solution for every program in the family. A prototype branch-and-bound algorithm is presented for solving (PNP). By modifying a prototype algorithm for solving a single nonconvex program, this algorithm solves (PNP) in one branch-and-bound search. To implement the algorithm, certain compact partitions and underestimating functions must be formed in an appropriate manner. We present an algorithm for solving a particular (PNP) which implements the prototype algorithm by forming compact partitions and underestimating functions based upon rules given by Falk and Soland. The programs in this (PNP) have the same concave objective function, but their feasibility regions are described by linear constraints with differing right-hand sides. Computational experience with this algorithm is reported for various problems.

Journal ArticleDOI
TL;DR: In this paper, the nonparametric several sample scale problem is considered and some tests are proposed for the hypothesis of homogeneity versus ordered alternatives, which are based on statistics that are weighted linear combinations of Sugiura (1965, Osaka J. Math., 2, 385-426) type statistics proposed for testing homogeneity of scale against the omnibus alternative.
Abstract: In this paper the nonparametric several sample scale problem is considered and some tests are proposed for the hypothesis of homogeneity versus ordered alternatives. These tests are based on statistics that are weighted linear combinations of Sugiura (1965, Osaka J. Math., 2, 385–426) type statistics proposed for testing homogeneity of scale against the omnibus alternative. For each class of test statistics suggested, the member with maximum Pitman efficiency is identified. The optimal statistics are compared with their parametric and nonparametric competitors.

Journal ArticleDOI
Wm. Ted Brooks
TL;DR: Ballistic design analysis of the star grain configuration involves parametric evaluation of the six dimensionless independent geometric variables that define the star; the analysis applied in this paper identifies, for a given volumetric loading fraction, the most neutral-burning star with given web fraction, symmetry number, and two small radii.
Abstract: Ballistic design analysis of the star grain configuration involves parametric evaluation of the six dimensionless independent geometric variables that define the star. Because of the large, virtually limitless number of combinations of these variables that will satisfy requirements for volumetric loading and web fraction, a parametric approach is necessary if one is to insure that the final design is the optimal one based on performance. The analysis applied in this paper identifies, for a given volumetric loading fraction, the most neutral-burning star with given web fraction, symmetry number, and two small radii. The related computer program was used to generate data for evaluating various other optimization criteria for star designs and to establish some universal limits of the capability of the star in terms of neutrality and sliver.