
Showing papers on "Parametric statistics published in 1981"


Journal ArticleDOI
TL;DR: Rank transformation procedures, as discussed in this paper, apply the usual parametric procedure to the ranks of the data instead of to the data themselves; the technique can be viewed as a useful tool for developing nonparametric procedures to solve new problems.
Abstract: Many of the more useful and powerful nonparametric procedures may be presented in a unified manner by treating them as rank transformation procedures. Rank transformation procedures are ones in which the usual parametric procedure is applied to the ranks of the data instead of to the data themselves. This technique should be viewed as a useful tool for developing nonparametric procedures to solve new problems.
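The rank transformation idea can be sketched in a few lines: rank the pooled sample, then apply the ordinary two-sample t statistic to the ranks rather than to the raw data (the resulting test is closely related to the Wilcoxon-Mann-Whitney test). The data values below are invented for illustration.

```python
import statistics

def ranks(values):
    """Midranks of a sequence (tied values get the average of their ranks)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def rank_transform_t(x, y):
    """The usual pooled-variance two-sample t statistic, applied to ranks."""
    r = ranks(list(x) + list(y))
    rx, ry = r[:len(x)], r[len(x):]
    n, m = len(x), len(y)
    sp2 = ((n - 1) * statistics.variance(rx)
           + (m - 1) * statistics.variance(ry)) / (n + m - 2)
    return (statistics.mean(rx) - statistics.mean(ry)) / (sp2 * (1 / n + 1 / m)) ** 0.5

x = [1.2, 3.4, 2.2, 5.1]
y = [7.8, 6.5, 9.1, 8.0]
t_r = rank_transform_t(x, y)  # every x ranks below every y, so t_r is strongly negative
```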

3,637 citations


Journal ArticleDOI
TL;DR: A simple and efficient method of simulation is discussed for point processes that are specified by their conditional intensities, based on the thinning algorithm which was introduced recently by Lewis and Shedler for the simulation of nonhomogeneous Poisson processes.
Abstract: A simple and efficient method of simulation is discussed for point processes that are specified by their conditional intensities. The method is based on the thinning algorithm which was introduced recently by Lewis and Shedler for the simulation of nonhomogeneous Poisson processes. Algorithms are given for past-dependent point processes, including multivariate processes. The simulations are performed for some parametric conditional intensity functions, and the accuracy of the simulated data is demonstrated by the likelihood ratio test and the minimum Akaike information criterion (AIC) procedure.
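A minimal sketch of the thinning idea, in its Lewis-Shedler form for a nonhomogeneous Poisson process (the conditional-intensity case replaces `intensity(t)` by a function of the accumulated event history). The rate function is an invented example.

```python
import random

def thin_poisson(intensity, t_max, lam_max, seed=0):
    """Lewis-Shedler thinning: simulate a nonhomogeneous Poisson process
    with rate intensity(t) <= lam_max on [0, t_max]."""
    rng = random.Random(seed)
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)  # candidate from a homogeneous process
        if t > t_max:
            return events
        if rng.random() * lam_max <= intensity(t):  # accept w.p. intensity(t)/lam_max
            events.append(t)

rate = lambda t: 2.0 + 1.5 * t / 10.0   # linearly increasing rate, bounded by 3.5
pts = thin_poisson(rate, 10.0, 3.5)
```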

750 citations


01 Jan 1981
TL;DR: In the preliminary analysis of environmental problems, mathematical modelling studies can sometimes aid in hypothesis development and in the integration of preliminary data; it is proposed that parameters be assigned statistical distributions which reflect the degree of parametric uncertainty and that these distributions be used in Monte Carlo simulation analyses.
Abstract: In the preliminary analysis of environmental problems, mathematical modelling studies can sometimes aid in hypothesis development and in the integration of preliminary data. Circumstances usually require models used in this way to be simulation models closely based on traditional scientific descriptions of component processes. As a result, such models contain many ill-defined parameters, a fact which severely limits the reliance that can be placed on the outcome of any single simulation. In an attempt to overcome this difficulty, it has been proposed that parameters be assigned statistical distributions which reflect the degree of parametric uncertainty and that these distributions be used in Monte Carlo simulation analyses. The authors propose a variation on this theme in which they first stipulate the system's problem-defining behaviour and define a classification algorithm to be applied to the model's output. This algorithm results in each simulation run being classified as a behaviour, B, or not a behaviour, B̄. The parameters leading to the result are stored according to the behavioural outcome. Subsequently, all parameter vectors are subjected to analysis to determine the degree to which the a priori distributions separate under the behavioural mapping. This separation, or lack thereof, forms the basis for a generalized sensitivity analysis in which parameters and their related processes important to the simulation of the behaviour are singled out. The procedure has been applied to a eutrophication problem in the Peel-Harvey Inlet of Western Australia with encouraging results.
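The behaviour/not-behaviour classification and the separation test can be sketched as follows, with an invented two-parameter toy model standing in for the eutrophication simulator (the parameter names and the threshold defining the behaviour B are assumptions):

```python
import bisect
import random

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov distance between empirical CDFs."""
    a, b = sorted(a), sorted(b)
    ecdf = lambda s, x: bisect.bisect_right(s, x) / len(s)
    return max(abs(ecdf(a, x) - ecdf(b, x)) for x in a + b)

rng = random.Random(1)
behaviour, not_behaviour = [], []
for _ in range(2000):
    k1 = rng.uniform(0.0, 1.0)   # hypothetical decay-rate parameter
    k2 = rng.uniform(0.0, 1.0)   # hypothetical nutrient-loading parameter
    peak = k2 / (k1 + 0.1)       # stand-in for the simulation output
    # classification algorithm: the run exhibits the behaviour B if peak < 2.0
    (behaviour if peak < 2.0 else not_behaviour).append((k1, k2))

# degree to which the a priori distribution of each parameter separates
d1 = ks_statistic([v[0] for v in behaviour], [v[0] for v in not_behaviour])
d2 = ks_statistic([v[1] for v in behaviour], [v[1] for v in not_behaviour])
# a large distance marks that parameter as important to simulating the behaviour
```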

557 citations


Journal ArticleDOI
01 Apr 1981
TL;DR: This paper surveys the main properties of the electroencephalogram (EEG), points out several influential factors, and emphasizes parametric methods of quantifying the EEG; the rapid advance of computer technology makes the processed EEG an increasingly viable tool in research and clinical practice.
Abstract: Fifty years ago Berger made the first registrations of the electrical activity of the brain with electrodes placed on the intact skull. It immediately became clear that the frequency content of recorded signals plays an important role in describing these signals and also the state of the brain. This paper briefly surveys the main properties of the electroencephalogram (EEG) and points out several influential factors. A number of methods have been developed to quantify the EEG in order to complement visual screening; these are conveniently classified as being parametric or nonparametric. The paper emphasizes parametric methods, in which signal analysis is based on a mathematical model of the observed process. The scalar or multivariate model is typically linear, with parameters being either time invariant or time variable. Algorithms to fit the model to observed data are surveyed. Results from the analysis may be used to describe the spectral properties of the EEG, including the way in which characteristic variables change with time. Parametric models have successfully been applied to detect the occurrence of transients of epileptic origin, so-called spikes and sharp waves. Interesting results have also been obtained by combining parameter estimation with classification algorithms in order to recognize significant functional states of the brain. The paper emphasizes methodology but also includes brief accounts of applications for research and clinical use. These mainly serve to illustrate the progress being made and to indicate the need for further work. The rapid advance of computer technology makes the processed EEG an increasingly viable tool in research and clinical practice.

201 citations


Journal ArticleDOI
TL;DR: In this article, the Vorticity Area Index was used as the response variable for the solar sector boundary crossing analysis and the results showed that the randomization procedure leads to markedly different results from those obtained from parametric tests.
Abstract: Superposed epoch analyses, based on solar sector boundary crossings as key times and the Vorticity Area Index as the response variable, are tested for significance using both parametric and randomization techniques. We conclude from a comparison of these techniques that the randomization procedure leads to markedly different results from those obtained from parametric tests. In particular, the results are strongly affected by the modest skewness of the Vorticity Area Index distribution.

164 citations


Journal ArticleDOI
TL;DR: In this article, the application of multi-response permutation procedures (MRPP) is suggested as an appropriate approach for the examination of sea-level pressure patterns, in order to avoid the likely insurmountable difficulties involving parametric methods.
Abstract: This paper considers the examination of possible differences in monthly sea-level pressure patterns. The satisfactory examination of such differences would require multi-response parametric methods based on unknown multivariate distributions (i.e., an appropriate parametric technique is probably non-existent). In order to avoid the likely insurmountable difficulties involving parametric methods, the application of multi-response permutation procedures (MRPP) is suggested as an appropriate approach for the examination of such differences.

144 citations


Journal ArticleDOI
TL;DR: The most costly phase of statistical design, statistical simulation, may be carried out only once, and equivalent or superior designs for intermediate size networks are obtained with less computational effort than with previously published methods.
Abstract: A new statistical circuit design centering and tolerancing methodology based on a synthesis of concepts from network analysis, recent optimization methods, sampling theory, and statistical estimation and hypothesis testing is presented. The method permits incorporation of such realistic manufacturing constraints as tuning, correlation, and end-of-life performance specifications. Changes in design specifications and component cost models can be handled with minimal additional computational requirements. A database containing the results of a few hundred network analyses is first constructed. As the nominal values and tolerances are changed by the optimizer, each new yield and its gradient are evaluated by a new method called Parametric sampling without resorting to additional network analyses. Thus the most costly phase of statistical design, statistical simulation, may be carried out only once, which leads to considerable computational efficiency. Equivalent or superior designs for intermediate size networks are obtained with less computational effort than with previously published methods. For example, a worst-case design for an eleventh-order Chebychev filter gives a filter cost of 44 units, a centered worst-case design reduces the cost to 18 units, and statistical design using Parametric sampling further reduces the cost to 5 units (800 analyses, 75 CPU seconds on an IBM 370/158).
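The core of the parametric sampling idea, re-estimating yield for a new nominal design by reweighting a fixed database of simulations rather than re-running them, can be sketched with importance weights. The one-dimensional "network" below (a pass/fail window on a single parameter) is an invented stand-in for a real circuit analysis:

```python
import math
import random

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# one batch of "network analyses" drawn from a broad sampling density
rng = random.Random(3)
mu_s, sig_s = 0.0, 1.5
database = []
for _ in range(20000):
    x = rng.gauss(mu_s, sig_s)
    database.append((x, abs(x) < 1.0))   # pass/fail stands in for a circuit simulation

def yield_at(mu, sigma):
    """Estimate yield for a design with nominal mu and tolerance sigma by
    importance-reweighting the stored sample; no new analyses are needed."""
    w_sum = y_sum = 0.0
    for x, passed in database:
        w = gauss_pdf(x, mu, sigma) / gauss_pdf(x, mu_s, sig_s)
        w_sum += w
        y_sum += w * passed
    return y_sum / w_sum

y_centered = yield_at(0.0, 0.5)   # design centred in the pass window
y_shifted = yield_at(1.0, 0.5)    # nominal pushed to the spec edge
```

Changing the nominal or the tolerance only changes the weights, which is why the costly simulation phase runs once.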

131 citations


Journal ArticleDOI
TL;DR: In this article, a class of specification tests developed by Hausman (1978) is extended to accommodate a singular covariance matrix, and an application to limited information tests for the exogeneity of instrumental variables is presented.

121 citations


Journal ArticleDOI
TL;DR: A series of nonlinear programming problems, obtained by applying a penalty method to the constrained parametric problem in the lower level, approximates the original two-level problem; it is proved that the sequence of approximated solutions converges to the correct Stackelberg solution, or the min-max solution.
Abstract: This paper is concerned with the Stackelberg problem and the min-max problem in competitive systems. The Stackelberg approach is applied to the optimization of two-level systems where the higher level determines the optimal value of its decision variables (parameters for the lower level) so as to minimize its objective, while the lower level minimizes its own objective with respect to the lower level decision variables under the given parameters. Meanwhile, the min-max problem is to determine a min-max solution such that a function maximized with respect to the maximizer's variables is minimized with respect to the minimizer's variables. This problem is also characterized by a parametric approach in a two-level scheme. New computational methods are proposed here; that is, a series of nonlinear programming problems approximating the original two-level problem by application of a penalty method to a constrained parametric problem in the lower level are solved iteratively. It is proved that a sequence of approximated solutions converges to the correct Stackelberg solution, or the min-max solution. Some numerical examples are presented to illustrate the algorithms.
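A minimal sketch of the two-level scheme on an invented toy problem: the follower's constrained problem min_y (x - y)^2 + y subject to y >= 0 is replaced by a penalized unconstrained problem, and the leader then minimizes its own objective over x. The analytic Stackelberg solution of this toy problem is x* = 1.25, y* = 0.75.

```python
def golden_min(f, lo, hi, tol=1e-6):
    """Golden-section search for the minimizer of a unimodal f on [lo, hi]."""
    g = (5.0 ** 0.5 - 1.0) / 2.0
    a, b = lo, hi
    while b - a > tol:
        c, d = b - g * (b - a), a + g * (b - a)
        if f(c) < f(d):
            b = d
        else:
            a = c
    return (a + b) / 2.0

def follower(x, r):
    """Lower level: min_y (x - y)**2 + y s.t. y >= 0, converted to an
    unconstrained problem by a quadratic penalty with weight r."""
    return golden_min(lambda y: (x - y) ** 2 + y + r * max(0.0, -y) ** 2, -5.0, 5.0)

def leader_objective(x, r):
    y = follower(x, r)           # leader anticipates the follower's response
    return (x - 2.0) ** 2 + y ** 2

# solve a sequence of penalized problems with growing penalty weight
xs = [i / 100.0 for i in range(-100, 301)]   # leader's grid on [-1, 3]
for r in (1.0, 10.0, 100.0, 1000.0):
    best_x = min(xs, key=lambda x: leader_objective(x, r))
best_y = follower(best_x, 1000.0)
# approaches the Stackelberg solution x* = 1.25, y* = 0.75 of the toy problem
```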

109 citations


Journal ArticleDOI
TL;DR: In this paper, it is shown that the power of the modified approximate randomization test approximates that of the F test when the latter's assumptions are met and is significantly greater when the populations sampled are made up of two normal distributions with different means, so the randomization test is preferable to an F test for analysis of variance designs.
Abstract: In experimental psychology it is usually difficult to show that populations sampled meet the requirements for the use of t or F tests, or even that they are similar to populations sampled in Monte Carlo experiments designed to demonstrate the robustness of these parametric tests. Consequently, a test which makes weaker requirements without sacrificing power or versatility should be preferred. It is shown that this is true of modified approximate randomization tests, which, like the randomization tests on which they are based, use observed scores to set up a sampling distribution. The tests are versatile, since they can be used on factorial designs of any complexity and the results of Monte Carlo experiments indicate that their power approximates that of the F test when the assumptions underlying the latter are met and is significantly greater when the populations sampled are made up of two normal distributions with different means. It is concluded that, where adequate computing facilities are available, the approximate randomization test is preferable to an F test for analysis of variance designs.
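A sketch of a Monte Carlo (approximate) randomization test for two independent groups; the observed scores themselves generate the sampling distribution, so no normality assumption is needed. The scores are invented:

```python
import random

def randomization_test(x, y, n_perm=5000, seed=42):
    """Approximate randomization test for a difference in group means:
    the reference distribution is built by re-splitting the observed scores."""
    rng = random.Random(seed)
    observed = abs(sum(x) / len(x) - sum(y) / len(y))
    pooled = list(x) + list(y)
    n = len(x)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)   # one random reassignment of scores to groups
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / (len(pooled) - n))
        if diff >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one keeps the p-value strictly positive

x = [12.1, 11.8, 13.0, 12.6, 12.4]
y = [10.2, 10.9, 10.4, 11.1, 10.6]
p = randomization_test(x, y)   # the groups do not overlap, so p is small
```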

81 citations



Journal ArticleDOI
TL;DR: In this article, a new analytical approach to the investigation of the regions of instability of multiple-degree-of-freedom parametric dynamic systems is presented, based on the harmonic balance method.

Journal ArticleDOI
TL;DR: In this article, an interior penalty function is used to convert the original constrained problem into an unconstrained parametric problem, and then the search for the optimal solution to the parametric problems is based on a discrete direction gradient.
Abstract: A new method for solving discrete structural optimization problems is presented. An interior penalty function is used to convert the original constrained problem into an unconstrained parametric problem. Then the search for the optimal solution to the parametric problem is based on a discrete direction gradient. Solving an appropriate sequence of these unconstrained parametric problems is equivalent to solving the original constrained optimization problem. This method is illustrated first on a small reinforced concrete problem, and then to the design of steel building frames which are made up of standard sections. Results for a one-story four-bay unsymmetrical frame and an eight-story three-bay symmetrical frame are described.

Journal ArticleDOI
TL;DR: Methods are presented for obtaining parametric measures of information from the non-parametric ones and from information matrices and the properties of these measures are examined.
Abstract: In this paper methods are presented for obtaining parametric measures of information from the non-parametric ones and from information matrices. The properties of these measures are examined. The one-dimensional parametric measures which are derived from the non-parametric ones are superior to Fisher's information measure because they are free from regularity conditions. But if we impose the regularity conditions of the Fisherian theory of information, these measures become linear functions of Fisher's measure.

Journal ArticleDOI
TL;DR: The conditions leading to a point of inflexion, loop, or cusp for parametric cubic curves and parametric B-spline cubic curves are investigated and some useful conclusions are obtained.
Abstract: The conditions leading to a point of inflexion, loop, or cusp for parametric cubic curves and parametric B-spline cubic curves are investigated. Some useful conclusions are obtained.

Journal ArticleDOI
TL;DR: In this article, the propagation and interaction of finite amplitude sound waves produced by a baffled piston source in a thermoviscous fluid are considered, and basic equations are derived and their ranges of validity established.
Abstract: The propagation and interaction of finite amplitude sound waves produced by a baffled piston source in a thermoviscous fluid are considered. Basic equations are derived and their ranges of validity established. This is used to relate some earlier works by others on nonlinear model equations in acoustics. Applications are made to the theory of parametric acoustic arrays, where the effects of nonlinear attenuation are discussed.

Journal ArticleDOI
TL;DR: This paper first reformulates the model as a linear complementarity problem and then applies the parametric principal pivoting algorithm for its solution, leading to the study of an “arc—arc weighted adjacency matrix” associated with a simple digraph having weights on the nodes.
Abstract: This paper presents a parametric linear complementarity technique for the computation of equilibrium prices in a single commodity spatial model. We first reformulate the model as a linear complementarity problem and then apply the parametric principal pivoting algorithm for its solution. This reformulation leads to the study of an "arc-arc weighted adjacency matrix" associated with a simple digraph having weights on the nodes. Several basic properties of such a matrix are derived. Using these properties, we show how the parametric principal pivoting algorithm can be greatly simplified in this application. Finally, we report some computational experience with the proposed technique for solving some large problems.

Journal ArticleDOI
TL;DR: In this article, the photon statistics of quadratic parametric two-mode processes are derived starting from the results of the first part of this paper; a general theory is presented and particular processes are discussed.
Abstract: The photon statistics of quadratic parametric two-mode processes are derived starting from the results of the first part of this paper. A general theory is presented and particular processes are discussed. Assuming the initial field to be coherent, fluctuations of the pumping field are taken into account. The effects of antibunching and anticorrelation of photons on the photon distribution, its factorial moments, fluctuations in modes and their correlations for the amplification process are demonstrated and their dependence on levels of pumping field fluctuations is shown.

Journal ArticleDOI
TL;DR: Non-parametric estimates of mixing proportions based on kernel-type density estimators are badly suited to several types of data, and their construction involves the crucial choice of the "window size", or smoothing parameter.
Abstract: Non-parametric estimates of mixing proportions based on kernel-type density estimators are badly suited to several types of data. They suffer from aberrations due to rounding or truncation of the measurements, and their construction involves the crucial choice of the "window size", or smoothing parameter. In many circumstances estimators based on the empirical distribution function would be more suitable, and in this paper we investigate their properties. The estimators we introduce lead in a natural way to non-parametric forms of well-known parametric estimators. Their efficiency approaches 100 per cent as the distances between the component distributions increase.
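The empirical-distribution-function approach can be sketched as a least-squares fit of the mixing proportion p in F = p*F1 + (1-p)*F2, using ECDFs of training samples from each component; the Gaussian components, sample sizes, and evaluation grid below are invented:

```python
import bisect
import random

def ecdf(sample):
    """Empirical distribution function of a sample, as a callable."""
    s = sorted(sample)
    return lambda x: bisect.bisect_right(s, x) / len(s)

def mix_proportion(mixed, comp1, comp2, grid):
    """Least-squares estimate of p in F = p*F1 + (1-p)*F2, with all three
    distribution functions replaced by empirical ones on a grid of points."""
    Fm, F1, F2 = ecdf(mixed), ecdf(comp1), ecdf(comp2)
    num = den = 0.0
    for x in grid:
        a = F1(x) - F2(x)
        num += a * (Fm(x) - F2(x))
        den += a * a
    return min(1.0, max(0.0, num / den))   # clip the closed-form solution to [0, 1]

rng = random.Random(7)
comp1 = [rng.gauss(0.0, 1.0) for _ in range(2000)]   # training sample, component 1
comp2 = [rng.gauss(4.0, 1.0) for _ in range(2000)]   # training sample, component 2
true_p = 0.3
mixed = [rng.gauss(0.0, 1.0) if rng.random() < true_p else rng.gauss(4.0, 1.0)
         for _ in range(2000)]
grid = [i * 0.1 for i in range(-30, 71)]             # evaluation points on [-3, 7]
p_hat = mix_proportion(mixed, comp1, comp2, grid)    # close to true_p = 0.3
```

No smoothing parameter is involved, which is the point of the abstract: the estimator depends only on the empirical distribution functions.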

Journal ArticleDOI
TL;DR: In a drug study the application of the coefficients of the AR model as input parameters in the discriminant analysis, instead of arbitrarily chosen frequency bands, brought a significant improvement in distinguishing the effects of the medication.


Journal ArticleDOI
TL;DR: In this paper, the authors make the point that a wide variety of spectrum types admit to modal analysis wherein the modes are characterized by amplitudes, frequencies, and damping factors, and the associated modal decomposition is appropriate for both continuous and discrete components of the spectrum.
Abstract: Parametric methods of spectrum analysis are founded on finite-dimensional models for covariance sequences. Rational spectrum approximants for continuous spectra are based on autoregressive (AR), moving average (MA), or autoregressive moving average (ARMA) models for covariance sequences. Line spectrum approximants to discrete spectra are based on cosinusoidal models for covariance sequences. In this paper we make the point that a wide variety of spectrum types admit to modal analysis wherein the modes are characterized by amplitudes, frequencies, and damping factors. The associated modal decomposition is appropriate for both continuous and discrete components of the spectrum. The domain of attraction for the decomposition includes ARMA sequences, harmonically or nonharmonically related sinusoids, damped sinusoids, white noise, and linear combinations of these. The parametric spectrum analysis problem now becomes one of identifying mode parameters. This we achieve by solving two modified least squares problems. Numerical results are presented to illustrate the identification of mode parameters and corresponding spectra from finite records of perfect and estimated covariance sequences. The results for sinusoids and sinusoids in white noise are interpreted in terms of in-phase and quadrature effects attributable to the finite record length.
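For a single damped cosine mode, the identification reduces to a small least-squares problem: fit a second-order autoregression to the record, then read the damping factor and frequency off the roots of the characteristic polynomial. This is a one-mode, Prony-type sketch; the signal parameters are invented.

```python
import math

def prony_ar2(y):
    """Fit y[n] = a1*y[n-1] + a2*y[n-2] by least squares (normal equations),
    then recover the damping and frequency of a single damped-cosine mode
    from the complex roots z = exp(-d)*exp(+-i*w) of z^2 - a1*z - a2 = 0."""
    s11 = s12 = s22 = t1 = t2 = 0.0
    for n in range(2, len(y)):
        s11 += y[n - 1] * y[n - 1]
        s12 += y[n - 1] * y[n - 2]
        s22 += y[n - 2] * y[n - 2]
        t1 += y[n] * y[n - 1]
        t2 += y[n] * y[n - 2]
    det = s11 * s22 - s12 * s12
    a1 = (t1 * s22 - t2 * s12) / det   # Cramer's rule on the 2x2 system
    a2 = (s11 * t2 - s12 * t1) / det
    radius = math.sqrt(-a2)            # |z| = exp(-d)
    damping = -math.log(radius)
    freq = math.acos(a1 / (2.0 * radius))
    return damping, freq

d_true, w_true = 0.05, 0.8
y = [math.exp(-d_true * n) * math.cos(w_true * n) for n in range(200)]
d_hat, w_hat = prony_ar2(y)   # recovers d_true and w_true from the record
```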

15 Feb 1981
TL;DR: A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented; it incorporates principles and data from a number of existing models.
Abstract: A parametric software cost estimation model prepared for Deep Space Network (DSN) Data Systems implementation tasks is presented. The resource estimation model incorporates principles and data from a number of existing models. The model calibrates task magnitude and difficulty, development environment, and software technology effects through prompted responses to a set of approximately 50 questions. Parameters in the model are adjusted to fit DSN software life cycle statistics. The estimation model output scales a standard DSN Work Breakdown Structure skeleton, which is then input into a PERT/CPM system, producing a detailed schedule and resource budget for the project being planned.

Journal ArticleDOI
TL;DR: A unified treatment of the description of the statistical dynamical properties of two-mode quadratic optical parametric processes with intense coherent or stochastic pumping (parametric amplification, frequency conversion, second subharmonic generation, etc. as special cases are involved) including the lossy mechanism is developed in this article.
Abstract: A unified treatment of the description of the statistical dynamical properties of two-mode quadratic optical parametric processes with intense coherent or stochastic pumping (parametric amplification, frequency conversion, second subharmonic generation, etc. as special cases are involved) including the lossy mechanism is developed. The Heisenberg-Langevin equations are derived and their solutions are used to obtain the quantum characteristic functions and the corresponding quasidistributions. The Fokker-Planck equation approach is adopted as well and the equivalence of both the approaches is explicitly found.

Book ChapterDOI
01 Jan 1981
TL;DR: An algorithm is presented that generates an MRF on a finite toroidal square lattice from an independent identically distributed (i.i.d.) array of random variables and a given set of independent real-valued statistical parameters.
Abstract: We propose Markov Random Fields (MRFs) as probabilistic models of digital image texture where a textured region is viewed as a finite sample of a two-dimensional random process describable by its statistical parameters. MRFs are multidimensional generalizations of Markov chains defined in terms of conditional probabilities associated with spatial neighborhoods. We present an algorithm that generates an MRF on a finite toroidal square lattice from an independent identically distributed (i.i.d.) array of random variables and a given set of independent real-valued statistical parameters. The parametric specification of a consistent collection of MRF conditional probabilities is a general result known as the MRF-Gibbs Random Field (GRF) equivalence. The MRF statistical parameters control the size and directionality of the clusters of adjacent similar pixels which are basic to texture discrimination and thus seem to constitute an efficient model of texture. In the last part of this paper we outline an MRF parameter estimation method and goodness of fit statistical tests applicable to MRF models for a given unknown digital image texture on a finite toroidal square lattice. The estimated parameters may be used as basic features in texture classification. Alternatively these parameters may be used in conjunction with the MRF generation algorithm as a powerful data compression scheme.
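A minimal sketch of the generation step, using a binary Ising-type MRF on a toroidal lattice updated by Gibbs sampling from a stream of i.i.d. uniforms. The 4-neighbourhood, the single parameter beta controlling cluster size, and the lattice size are illustrative assumptions, not the paper's exact algorithm.

```python
import math
import random

def gibbs_mrf(size, beta, sweeps, seed=0):
    """Generate a binary Markov random field on a toroidal size x size lattice
    by Gibbs sampling an Ising-type model; larger beta gives larger clusters."""
    rng = random.Random(seed)
    field = [[rng.choice((-1, 1)) for _ in range(size)] for _ in range(size)]
    for _ in range(sweeps):
        for i in range(size):
            for j in range(size):
                # sum over the 4-neighbourhood, with toroidal wrap-around
                s = (field[(i - 1) % size][j] + field[(i + 1) % size][j]
                     + field[i][(j - 1) % size] + field[i][(j + 1) % size])
                # conditional probability of +1 given the neighbourhood
                p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
                field[i][j] = 1 if rng.random() < p_up else -1
    return field

texture = gibbs_mrf(32, 0.6, 50)   # clustered binary texture on a 32x32 torus
```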



Journal ArticleDOI
TL;DR: In this paper, a new parameter for measuring deterministic preference tradeoffs between pairs of attributes is introduced, and necessary and sufficient conditions for a value function to be additive are established.
Abstract: This paper introduces a new parameter for measuring deterministic preference tradeoffs between pairs of attributes. In terms of this parameter, necessary and sufficient conditions for a value function to be additive are established. When additivity conditions are satisfied, a set of these parameters is shown to characterize the value function uniquely up to a set of scaling constants. Implications for assessment, for transforming a nonadditive representation into an additive one, and for multiattribute utility functions are presented.

Journal ArticleDOI
TL;DR: In this article, phase-matched parametric coupling between forward and backward waves on a high frequency transmission line is achieved by periodic nonlinearity, and an efficiency of more than 90% in the case of frequency doubling is experimentally observed as well as parametric mixing and parametric amplification.
Abstract: Phase-matched parametric coupling between forward and backward waves on a high frequency transmission line is achieved by periodic nonlinearity. An efficiency of more than 90% in the case of frequency doubling is experimentally observed as well as parametric mixing and parametric amplification with a gain of nearly 30 dB.

Journal ArticleDOI
TL;DR: In this article, a robust version of the classical multi-sided hypothesis testing problem concerning θ, or a subvector of θ is formulated and solved, where the usual parametric null hypothesis and alternatives are both replaced with larger, more realistic, sets of possible distributions for each observation.
Abstract: Let {P θ :θ∈Θ}, Θ an open subset of R k , be a regular parametric model for a sample of n independent, identically distributed observations. Formulated and solved in this paper is a robust version of the classical multi-sided hypothesis testing problem concerning θ, or a subvector of θ. In the robust testing problem, the usual parametric null hypothesis and alternatives are both replaced with larger, more realistic, sets of possible distributions for each observation. These sets, defined in terms of a Hellinger metric projection of the actual distribution onto a subspace associated with the parametric null hypothesis, are required to shrink as sample size increases, so as to avoid trivial asymptotics. One construction of an asymptotically minimax test for the robust testing problem is based upon the robust estimate of θ developed in Beran (1979); another construction amounts to an adaptively modified C(α) test.