
Showing papers on "Parametric statistics published in 1985"


Book
01 Jan 1985
TL;DR: In this article, the authors present methods for estimating effect size from a series of experiments, fitting fixed effect and general linear models to effect sizes, and combining and clustering estimates of effect magnitude.
Abstract: Preface. Introduction. Data Sets. Tests of Statistical Significance of Combined Results. Vote-Counting Methods. Estimation of a Single Effect Size: Parametric and Nonparametric Methods. Parametric Estimation of Effect Size from a Series of Experiments. Fitting Parametric Fixed Effect Models to Effect Sizes: Categorical Methods. Fitting Parametric Fixed Effect Models to Effect Sizes: General Linear Models. Random Effects Models for Effect Sizes. Multivariate Models for Effect Sizes. Combining Estimates of Correlation Coefficients. Diagnostic Procedures for Research Synthesis Models. Clustering Estimates of Effect Magnitude. Estimation of Effect Size When Not All Study Outcomes Are Observed. Meta-Analysis in the Physical and Biological Sciences. Appendix. References. Index.

9,769 citations


Book
01 Jan 1985
TL;DR: This text accurately treats the parametric geometry of curves, surfaces, and solids; the construction of geometric models; the analytical techniques that can be used; and applications.
Abstract: From the Publisher: The first comprehensive and complete coverage of geometric modeling. This text accurately treats the parametric geometry of curves, surfaces, and solids; the construction of geometric models; the analytical techniques that can be used; and applications. Extensive use is made of end-of-chapter problems.

1,323 citations


Book ChapterDOI
01 Jan 1985
TL;DR: In this paper, the authors focus on the study of parametric and nonparametric methods for estimating the effect size (standardized mean difference) from a single experiment; estimating a common effect size is based on the belief that the population effect size is actually the same across studies.
Abstract: This chapter focuses on the study of parametric and nonparametric methods for estimating the effect size (standardized mean difference) from a single experiment. It is important to recognize that estimating and interpreting a common effect size is based on the belief that the population effect size is actually the same across studies. Otherwise, estimating a mean effect may obscure important differences between the studies. The chapter discusses several alternative point estimators of the effect size δ from a single two-group experiment. These estimators are based on the sample standardized mean difference but differ by multiplicative constants that depend on the sample sizes involved. Although the estimates have identical large sample properties, they generally differ in terms of small sample properties. The statistical properties of estimators of effect size depend on the model for the observations in the experiment. A convenient and often realistic model is to assume that the observations are independently normally distributed within groups of the experiment.

699 citations
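
To make the "multiplicative constants" concrete: the standard small-sample correction multiplies the pooled-SD standardized mean difference d by J(m) = 1 - 3/(4m - 1), with m = n1 + n2 - 2. A minimal Python sketch, with all inputs illustrative:

```python
import math

def hedges_g(mean1, mean2, sd1, sd2, n1, n2):
    """Small-sample-corrected standardized mean difference.

    The pooled-SD estimator d is biased upward in small samples;
    multiplying by J(m) = 1 - 3/(4m - 1), m = n1 + n2 - 2, gives a
    nearly unbiased estimator with the same large-sample behavior.
    """
    m = n1 + n2 - 2
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / m)
    d = (mean1 - mean2) / pooled_sd
    return (1.0 - 3.0 / (4.0 * m - 1.0)) * d

# Two groups of 10: the corrected estimate is slightly smaller than d = 1.0.
g = hedges_g(10.0, 8.0, 2.0, 2.0, 10, 10)
```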


Journal ArticleDOI
TL;DR: The methodology is compared with standard algorithms such as the computed torque method and is shown to combine in practice improved performance with simpler and more tractable controller designs.
Abstract: A new scheme is presented for the accurate tracking control of robot manipulators. Based on the more general sliding control methodology, the scheme addresses the following problem: Given the extent of parametric uncertainty (such as imprecision in inertias, geometry, or loads) and the frequency range of unmodeled dynamics (such as unmodeled structural modes, neglected time delays), design a nonlinear feedback controller to achieve optimal tracking performance, in a suitable sense. The methodology is compared with standard algorithms such as the computed torque method and is shown to combine in practice improved performance with simpler and more tractable controller designs.

689 citations


Journal ArticleDOI
TL;DR: A new algorithm is presented for adaptive notch filtering and parametric spectral estimation of multiple narrow-band or sine wave signals in an additive broad-band process and uses a special constrained model of infinite impulse response with a minimal number of parameters.
Abstract: A new algorithm is presented for adaptive notch filtering and parametric spectral estimation of multiple narrow-band or sine wave signals in an additive broad-band process. The algorithm is of recursive prediction error (RPE) form and uses a special constrained model of infinite impulse response (IIR) with a minimal number of parameters. The convergent filter is characterized by highly narrow bandwidth and uniform notches of desired shape. For sufficiently large data sets, the variances of the sine wave frequency estimates are of the same order of magnitude as the Cramer-Rao bound. Results from simulations illustrate the performance of the algorithm under a wide range of conditions.

472 citations
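
The constrained-IIR idea can be illustrated with a fixed-coefficient second-order notch whose numerator and denominator share the single frequency parameter, with a pole-contraction factor rho setting bandwidth. This sketch omits the paper's adaptive RPE estimation; the sine frequency is assumed known:

```python
import math

def iir_notch(x, f0, rho=0.95):
    """Second-order constrained IIR notch at normalized frequency f0
    (cycles/sample): numerator zeros on the unit circle at +/- 2*pi*f0,
    poles at the same angles contracted by rho < 1, so the notch shape
    is governed by a single frequency parameter."""
    c = math.cos(2.0 * math.pi * f0)
    b1, a1, a2 = -2.0 * c, -2.0 * rho * c, rho * rho
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = xn + b1 * x1 + x2 - a1 * y1 - a2 * y2
        y.append(yn)
        x2, x1 = x1, xn
        y2, y1 = y1, yn
    return y

# A sine at the notch frequency is almost annihilated once transients decay.
x = [math.sin(2.0 * math.pi * 0.1 * n) for n in range(2000)]
y = iir_notch(x, 0.1)
tail_in = sum(v * v for v in x[1000:])
tail_out = sum(v * v for v in y[1000:])
```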


Journal ArticleDOI
TL;DR: It is shown that the proposed parametric approach to bispectrum estimation based on a non-Gaussian white noise driven autoregressive (AR) model provides bispectral estimates that are far superior to the conventional estimates in terms of bispectral fidelity and that, in the case of detecting phase coupling among sinusoids, the method provides significantly better resolution.
Abstract: Higher order spectra contain information about random processes that is not contained in the ordinary power spectrum such as the degree of nonlinearity and deviations from normality. Estimation of the bispectrum, which is a third-order spectrum, has been applied in various fields to obtain information regarding quadratic phase coupling among harmonic components and non-Gaussianness of processes. Existing methods of bispectrum estimation are patterned after the conventional methods of power spectrum estimation which are known to possess certain limitations. The paper proposes a parametric approach to bispectrum estimation based on a non-Gaussian white noise driven autoregressive (AR) model. The AR parameter estimates are obtained by solving the third-order recursion equations which may be Toeplitz in form but not symmetric. It is shown that the method provides bispectral estimates that are far superior to the conventional estimates in terms of bispectral fidelity and that in the case of detecting phase coupling among sinusoids, the method provides significantly better resolution.

275 citations


Journal ArticleDOI
TL;DR: This work uses Bayes' rule to show how prior information can improve the uniqueness of the optimal estimate, while stabilizing the iterative search for this estimate, and develops quantitative criteria for the relative importance of prior and observational data and for the effects of nonlinearity.
Abstract: Powerful methods are now available for solving linear parametric inverse problems. However, many inverse problems which arise in geophysics are nonlinear. Fortunately, it is possible to treat most of these with the aid of linear perturbation theory and linear inversion. But a convenient method is needed for assessing the importance of nonlinearity in these quasi-linear problems. The present paper provides such a method. Matsu'ura and Jackson (1984) have presented a simple algorithm for evaluating the asymptotic covariance matrix of estimation errors. In the present investigation, aspects of linear inversion are discussed, taking into account linear parametric inverse problems, nonuniqueness, prior information, confidence limits, conditional and marginal statistics, the relative importance of the prior and observational data, and standardized variables. Attention is also given to nonlinear inversion, and the application of the considered approaches to a number of examples.

272 citations
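
How prior information improves uniqueness and stability can be shown with a scalar Gaussian sketch of Bayes' rule for d = g*m + noise; all symbols and numbers here are illustrative, not from the paper:

```python
def bayes_linear_estimate(d, g, sd_d, m_prior, sd_m):
    """Bayes' rule for a scalar linear inverse problem d = g*m + noise.

    Combining the observation (noise std dev sd_d) with prior information
    (mean m_prior, std dev sd_m) gives a unique, stable estimate even when
    the data alone constrain m poorly (g small)."""
    w_data = (g / sd_d) ** 2        # precision contributed by the datum
    w_prior = 1.0 / sd_m ** 2       # precision contributed by the prior
    m_post = (w_data * (d / g) + w_prior * m_prior) / (w_data + w_prior)
    var_post = 1.0 / (w_data + w_prior)
    return m_post, var_post

# Weak datum: estimate stays near the prior, not the wild data-only value d/g = 20.
m_weak, v_weak = bayes_linear_estimate(d=2.0, g=0.1, sd_d=1.0, m_prior=0.0, sd_m=1.0)
# Strong datum: estimate moves to the data-only value d/g = 0.2, with small variance.
m_strong, v_strong = bayes_linear_estimate(d=2.0, g=10.0, sd_d=1.0, m_prior=0.0, sd_m=1.0)
```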


Journal ArticleDOI
01 Jul 1985
TL;DR: A new interpolation system is developed and implemented which incorporates second-derivative continuity (continuity of acceleration), local control, convenient kinetic control, and joining and phrasing of successive motions.
Abstract: Parametric keyframing is a popular animation technique where values for parameters which control the position, orientation, size, and shape of modeled objects are determined at key times, then interpolated for smooth animation. Typically the parameter values defined by the keyframes are interpolated by spline techniques with the result that the parameter change kinetics are implicitly defined by the given keyframe times and data points. Existing interpolation systems for animation are examined and found to lack certain desirable features such as continuity of acceleration or convenient kinetic control. The requirements of interpolation for animation are analyzed in order to determine the characteristics of a satisfactory system. A new interpolation system is developed and implemented which incorporates second-derivative continuity (continuity of acceleration), local control, convenient kinetic control, and joining and phrasing of successive motions. Phrasing control includes the ability to parametrically control the degree and extent of smooth motion flow between separately defined motions.

146 citations
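
A natural cubic spline is the textbook way to get the second-derivative continuity discussed above; the sketch below interpolates keyframe parameter values with one. It is generic C2 interpolation only, not the paper's system with local control, kinetic control, and phrasing:

```python
def natural_cubic_spline(ts, vs):
    """C2-continuous interpolation of keyframe values vs at key times ts.

    Natural cubic splines have continuous second derivatives (continuous
    acceleration) across keyframes, unlike per-segment Hermite schemes,
    which guarantee only C1. Returns an evaluator function."""
    n = len(ts) - 1
    h = [ts[i + 1] - ts[i] for i in range(n)]
    # Tridiagonal system for knot second derivatives M (natural ends: M = 0).
    a = [0.0] * (n + 1); b = [1.0] * (n + 1)
    c = [0.0] * (n + 1); r = [0.0] * (n + 1)
    for i in range(1, n):
        a[i], b[i], c[i] = h[i - 1], 2.0 * (h[i - 1] + h[i]), h[i]
        r[i] = 6.0 * ((vs[i + 1] - vs[i]) / h[i] - (vs[i] - vs[i - 1]) / h[i - 1])
    for i in range(1, n + 1):                      # forward elimination
        w = a[i] / b[i - 1]
        b[i] -= w * c[i - 1]
        r[i] -= w * r[i - 1]
    M = [0.0] * (n + 1)
    for i in range(n - 1, 0, -1):                  # back substitution
        M[i] = (r[i] - c[i] * M[i + 1]) / b[i]
    def at(t):
        i = 0
        while i < n - 1 and t > ts[i + 1]:
            i += 1
        d = t - ts[i]
        lin = (vs[i + 1] - vs[i]) / h[i] - h[i] * (2.0 * M[i] + M[i + 1]) / 6.0
        return (vs[i] + d * lin + d * d * M[i] / 2.0
                + d ** 3 * (M[i + 1] - M[i]) / (6.0 * h[i]))
    return at

# Oscillating keyframes: the spline passes through every key value.
f = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
```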


Journal ArticleDOI
TL;DR: In this paper, the authors compare parametric and nonparametric estimators of utility-based asset pricing models, estimated with or without assuming a distribution for security returns, and show that the nonparametric estimator is robust to departures from any particular distribution and is more consistent with the spirit underlying utility-based asset pricing models.
Abstract: Utility-based models of asset pricing may be estimated with or without assuming a distribution for security returns; both approaches are developed and compared here. The chief strength of a parametric estimator lies in its computational simplicity and statistical efficiency when the added distributional assumption is true. In contrast, the nonparametric estimator is robust to departures from any particular distribution, and it is more consistent with the spirit underlying utility-based asset pricing models since the distribution of asset returns remains unspecified even in the empirical work. The nonparametric approach turns out to be easy to implement with precision nearly indistinguishable from its parametric counterpart in this particular application. The application shows that log utility is consistent with the data over the period 1926-1981. The distribution of asset returns is a fundamental quantity to be explained by financial economics. Consequently, utility-based models of asset pricing are of special interest since they allow the distributions of returns to be explained rather than assumed as in distribution-based models.

144 citations


Journal ArticleDOI
TL;DR: From the study it followed that the partial coherences are the proper measure of the synchronization of brain structures and their intrinsic relationships.
Abstract: A parametric autoregressive model was applied to the multichannel EEG time series. Small statistical fluctuations of the spectral estimates obtained from the short data strings made it possible to follow the time changes of the signals. The multiple and partial coherences were calculated for the four-channel process and compared with the coherences computed between the pairs of channels. From the study it followed that the partial coherences are the proper measure of the synchronization of brain structures and their intrinsic relationships. The partial phase spectra give the information about the phase delays. The advantages of the parametric description of signals in the frequency domain with respect to the modelling of dynamic systems were pointed out.

138 citations


Journal ArticleDOI
TL;DR: In this paper, a nonparametric procedure for estimating the probability distribution function is proposed, which is a viable alternative with the advantage of not requiring a distributional assumption and the ability to estimate multimodal distributions.
Abstract: A currently used approach to flood frequency analysis is based on the concept of parametric statistical inference. In this analysis the assumption is made that the distribution function describing flood data is known, for example, a log-Pearson type III distribution. However, such an assumption is not always justified and often leads to other difficulties; it could also result in considerable variability in the estimation of design floods. A new method is developed in this article based on a nonparametric procedure for estimating the probability distribution function. The results indicate that design floods computed from the different assumed distributions and from the nonparametric method are comparable. However, the nonparametric method is a viable alternative with the advantage of not requiring a distributional assumption and the ability to estimate multimodal distributions.
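
The nonparametric route can be sketched with a Gaussian kernel density estimate, whose exceedance probability has a closed form as an average of normal tail probabilities. Flow values and bandwidth below are hypothetical:

```python
import math

def kde_exceedance(flows, x, bandwidth):
    """P(flood > x) from a Gaussian kernel density estimate of the flood
    distribution: the average of normal tail probabilities centered on the
    observations. No parametric family (e.g. log-Pearson III) is assumed,
    and multimodal samples are handled naturally."""
    def phi(z):  # standard normal CDF
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(1.0 - phi((x - xi) / bandwidth) for xi in flows) / len(flows)

# Hypothetical annual peak flows (m^3/s) and two candidate design floods.
flows = [120, 150, 180, 200, 210, 260, 300, 410, 430, 520]
p400 = kde_exceedance(flows, 400.0, bandwidth=50.0)
p500 = kde_exceedance(flows, 500.0, bandwidth=50.0)
```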

Journal ArticleDOI
TL;DR: In contrast to "ordinary" sensitivity analysis in linear programming, the tolerance approach in this paper considers simultaneous and independent changes in the objective function coefficients and in the right-hand side terms and yields a maximum tolerance percentage such that, as long as selected coefficients or terms are accurate to within that percentage of their estimated values, the same basis is optimal.
Abstract: In contrast to "ordinary" sensitivity analysis in linear programming, the tolerance approach considers simultaneous and independent changes in the objective function coefficients and in the right-hand side terms. This approach yields a maximum tolerance percentage such that, as long as selected coefficients or terms are accurate to within that percentage of their estimated values, the same basis is optimal. In particular, if the objective function coefficients are accurate to within the maximum tolerance percentage of their specified values, then the same solution is optimal.
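
A brute-force illustration of the tolerance idea for objective coefficients: find the largest fraction p such that the same vertex stays optimal for every simultaneous, independent perturbation of the coefficients within +/-p of their values (checking the corners of the perturbation box suffices, since each vertex's optimality region is a convex cone). The tiny 2-variable LP is illustrative; this is a bisection sketch, not Wendell's closed-form computation:

```python
from itertools import combinations, product

def lp_vertices(cons):
    """Feasible vertices of {x : a.x <= b for (a, b) in cons} in two
    dimensions, found by intersecting constraint pairs (brute force)."""
    vs = []
    for ((a1, b1), (a2, b2)) in combinations(cons, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < 1e-12:
            continue  # parallel constraints
        x = ((b1 * a2[1] - b2 * a1[1]) / det, (a1[0] * b2 - a2[0] * b1) / det)
        if all(a[0] * x[0] + a[1] * x[1] <= b + 1e-9 for a, b in cons):
            vs.append(x)
    return vs

def max_tolerance(c, cons, steps=60):
    """Largest fraction p such that the SAME vertex stays optimal for every
    simultaneous, independent perturbation of each objective coefficient
    within +/-p of its value, checked at the corners of the perturbation
    box and located by bisection."""
    vs = lp_vertices(cons)
    base = max(vs, key=lambda x: c[0] * x[0] + c[1] * x[1])
    lo, hi = 0.0, 1.0
    for _ in range(steps):
        p = (lo + hi) / 2.0
        ok = all(
            max(vs, key=lambda x: cp[0] * x[0] + cp[1] * x[1]) == base
            for cp in ((c[0] * (1 + s0 * p), c[1] * (1 + s1 * p))
                       for s0, s1 in product((-1, 1), repeat=2)))
        lo, hi = (p, hi) if ok else (lo, p)
    return lo

# max 3x1 + 2x2  s.t.  x1 + x2 <= 4, x1 <= 3, x1 >= 0, x2 >= 0
cons = [((1.0, 1.0), 4.0), ((1.0, 0.0), 3.0),
        ((-1.0, 0.0), 0.0), ((0.0, -1.0), 0.0)]
p_max = max_tolerance((3.0, 2.0), cons)
```

Here the optimum sits at (3, 1), and the binding comparison is with the vertex (0, 4), which yields a maximum tolerance of 20 percent.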

Book ChapterDOI
01 Jan 1985
TL;DR: In this article, the authors investigate the problem of testing for a change point in stationary statistical models and estimating the change parameters in an off-line framework and provide asymptotic results which emphasize the need for weighting the classical test statistics when the change time is completely unknown.
Abstract: We investigate the problem of testing for a change-point in stationary statistical models and of estimating the change parameters in an off-line framework. This paper provides asymptotic results which emphasize the need for weighting the classical test statistics when the change time is completely unknown. The asymptotic expansions also yield new detectors which are simpler to use.

Journal ArticleDOI
TL;DR: An automated system has been developed to obtain the process statistical variations and extract SPICE model parameters for a large number of MOS devices, and a linear approximation for the yield body boundary is used to make an accurate prediction of parametric yield.
Abstract: Large statistical variations are often found in the performance of VLSI circuits; as a result, only a fraction of the circuits manufactured may meet performance goals. An automated system has been developed to obtain the process statistical variations and extract SPICE model parameters for a large number of MOS devices. Device length and width, oxide capacitance, and flat-band voltage are shown to be the principal process factors responsible for the statistical variation of device characteristics. Intradie variations are much smaller than the interdie variations; therefore, only the interdie variations are responsible for variations in circuit performance. This accurate and simple statistical modeling approach uses only four statistical variables, and thus enables the development of a very computationally efficient statistical parametric yield estimator (SPYE). A linear approximation for the yield body boundary is used to make an accurate prediction of parametric yield. With the addition of temperature and supply voltage as operating condition variables, a maximum of seven simulations are required; only slightly more than the three to five required for "worst case analysis". The method has also been adapted to the statistical parametric specification of standard cells; performance ranges of circuit building blocks can be characterized once the statistical variations of process-dependent parameters are known. Predicted performance variations from SPYE have been compared with measured variations in delay and power consumption for a 7000-gate n-MOS inverter chain. Agreement with the mean delay and power is better than 5 percent where SPICE model parameters were obtained from the same slice used for circuit characterization. Excellent agreement was obtained in the predicted spread in the circuit delay and power consumption using measured variations in the statistical variables.
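
Under the linear yield-body approximation, a performance measure is a linear combination of the statistical variables (assumed independent and normal), so parametric yield reduces to a normal tail probability. Sensitivities and sigmas below are illustrative, not taken from the paper:

```python
import math

def parametric_yield(sens, sigmas, nominal, spec):
    """Yield under a linear approximation of the yield body boundary:
    delay ~= nominal + sum(sens_i * dx_i) with independent, zero-mean
    normal process variables dx_i, so delay is normal and
    yield = P(delay <= spec) = Phi((spec - nominal) / sd)."""
    sd = math.sqrt(sum((s * sg) ** 2 for s, sg in zip(sens, sigmas)))
    z = (spec - nominal) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Four statistical variables (length, width, oxide capacitance, flat-band
# voltage); all numbers are illustrative.
sens = [2.0, -1.0, 1.5, 0.5]        # delay sensitivity per unit of each variable
sigmas = [0.05, 0.05, 0.03, 0.02]   # process standard deviations
y = parametric_yield(sens, sigmas, nominal=10.0, spec=10.2)
```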

01 Aug 1985
TL;DR: In this paper, the notion of geometric continuity as a parametrization independent measure is extended for arbitrary order n (Gn), and for objects of arbitrary parametric dimension p. The approach taken is important for several reasons: it generalizes geometric continuity to arbitrary order for both curves and surfaces.
Abstract: Parametric spline curves and surfaces are typically constructed so that some number of derivatives match where the curve segments or surface patches abut. If derivatives up to order n are continuous, the segments or patches are said to meet with Cn, or nth order parametric continuity. It has been shown previously that parametric continuity is sufficient, but not necessary, for geometric smoothness. The geometric measures of unit tangent and curvature vectors for curves (objects of parametric dimension one), and tangent plane and Dupin indicatrix for surfaces (objects of parametric dimension two), have been used to define first and second order geometric continuity. These measures are intrinsic in that they are independent of the parametrizations used to describe the curve or surface. In this work, the notion of geometric continuity as a parametrization independent measure is extended for arbitrary order n (Gn), and for objects of arbitrary parametric dimension p. Two equivalent characterizations of geometric continuity are developed: one based on the notion of reparametrization, and one based on the theory of differentiable manifolds. From the basic definitions, a set of necessary and sufficient constraint equations is developed. The constraints (known as the Beta constraints) result from a direct application of the univariate chain rule for curves and the bivariate chain rule for surfaces. In the spline construction process the Beta constraints provide for the introduction of freely selectable quantities known as shape parameters. For polynomial splines, the use of the Beta constraints allows greater design flexibility through the shape parameters without raising the polynomial degree. The approach taken is important for several reasons. First, it generalizes geometric continuity to arbitrary order for both curves and surfaces. Second, it shows the fundamental connection between geometric continuity of curves and that of surfaces.
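
For curves, the first- and second-order Beta constraints produced by the chain rule have a well-known explicit form (shown here as a sketch of the general-n case): with adjoining segments p and q meeting at p(1) = q(0), and shape parameters beta1 > 0 and beta2,

```latex
% G^1 and G^2 (Beta) constraints at the joint p(1) = q(0):
q'(0) = \beta_1\, p'(1), \qquad \beta_1 > 0,
\\
q''(0) = \beta_1^{2}\, p''(1) + \beta_2\, p'(1).
% Choosing \beta_1 = 1, \beta_2 = 0 recovers ordinary C^2 parametric continuity.
```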

Journal ArticleDOI
TL;DR: In this paper, the authors provide three families of discrete parametric distributions which are versatile in fitting increasing, decreasing, and constant failure rate models to either uncensored or right-censored discrete life-test data.
Abstract: In some situations, discrete failure "time" distributions are appropriate to model "lifetimes". For example, a discrete distribution is appropriate when a piece of equipment operates in cycles and the number of cycles prior to failure is observed. This paper provides three families of discrete parametric distributions which are versatile in fitting increasing, decreasing, and constant failure rate models to either uncensored or right-censored discrete life-test data. The maximum likelihood estimation of parameters, survival probabilities, and mean lifetime is investigated. The MLEs can be computed by simple numerical methods.
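
For the simplest constant-hazard case of such a model, a geometric lifetime, the MLE with right censoring has a closed form; a sketch with illustrative data:

```python
def geometric_mle(failures, censored):
    """MLE of p for a geometric lifetime, P(T = t) = p * (1 - p)**(t - 1)
    (constant discrete hazard), with right-censored units contributing
    survival factors (1 - p)**t. Setting the score to zero gives
    p_hat = (number of failures) / (total cycles accumulated on test)."""
    return len(failures) / (sum(failures) + sum(censored))

def survival(p, t):
    """Fitted P(T > t) under the geometric model."""
    return (1.0 - p) ** t

cycles_to_failure = [3, 5, 2, 8, 4]   # units that failed (cycles observed)
censoring_times = [10, 10]            # units still working when the test ended
p_hat = geometric_mle(cycles_to_failure, censoring_times)
```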

Journal ArticleDOI
TL;DR: In this article, a new method proposed as an alternative works adaptively by repeated application of the parametric bootstrap idea over slowly shrinking confidence regions, and its finite-sample behavior is studied in a number of examples.
Abstract: Theoretical and empirical evidence are reported showing that the tests given in Cox (1961) and Williams (1970a, b) for the problem of testing separate families of hypotheses suffer from fundamental difficulties. Neither test is guaranteed to be asymptotically level α, even for very smooth problems. A new method proposed here as an alternative works adaptively by repeated application of the parametric bootstrap idea over slowly shrinking confidence regions. Theoretical properties of the method are derived, and its finite-sample behavior is studied in a number of examples.
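
The core parametric-bootstrap step (fit the null model, simulate from the fit, compare the observed statistic with the simulated ones) can be sketched as follows; the paper's adaptive scheme over shrinking confidence regions is more elaborate. Here the null is an exponential model and the statistic is the sample coefficient of variation, both chosen only for illustration:

```python
import random
import statistics

def parametric_bootstrap_pvalue(data, n_boot=2000, seed=0):
    """Parametric bootstrap test of an exponential null: fit the null model
    (here just the mean), simulate samples from the fit, and see how often
    the simulated statistic is as extreme as the observed one. Statistic:
    distance of the sample coefficient of variation from 1, its value in
    an exponential population."""
    rng = random.Random(seed)
    cv = lambda xs: statistics.pstdev(xs) / statistics.mean(xs)
    mean_hat = statistics.mean(data)   # fitted null parameter
    t_obs = abs(cv(data) - 1.0)
    exceed = sum(
        abs(cv([rng.expovariate(1.0 / mean_hat) for _ in data]) - 1.0) >= t_obs
        for _ in range(n_boot))
    return exceed / n_boot

# Nearly constant data are far from exponential, so the p-value is small.
p = parametric_bootstrap_pvalue([10.0, 10.5, 9.8, 10.2, 9.9, 10.1, 10.3, 9.7])
```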

Journal ArticleDOI
TL;DR: In this paper, a maximum likelihood procedure for estimating simultaneously the concentration parameter of the Dimroth-Watson distribution and either the length or the surface density for the structure is described and illustrated with the capillary data.
Abstract: SUMMARY Classical stereological methods are based on the assumption of isotropy, either of the structure under study or of the sectioning probe relative to the structure. Thus, with "distribution-free stereology", anisotropic structures can be studied only via isotropic uniform random probes, which are often difficult to generate or inefficient in practice. Here we present a parametric approach to model the direction of a randomly chosen line or surface element, whereby length and surface densities can be estimated by probes at fixed and known directions. For the material studied, namely skeletal muscle capillaries, the Dimroth-Watson axial distribution provides a good fit. A maximum likelihood procedure for estimating simultaneously the concentration parameter of the Dimroth-Watson distribution and either the length or the surface density for the structure is described and illustrated with the capillary data. When a particular axial model is known to be satisfactory, a short-cut estimation method of practical value can be used.

Journal ArticleDOI
TL;DR: In this paper, the time evolution operator for a general linearly driven parametric quantum oscillator is constructed for a collinear collision of an atom with a diatomic molecule.
Abstract: The time‐evolution operator is explicitly constructed for a general linearly driven parametric quantum oscillator, equivalent to a harmonic oscillator driven by linear plus quadratic potentials. The method is based on an algebra of operators which are bilinear in the position and momentum operators, and form a closed set with respect to commutation. The obtained result requires only integrals over time and the solution of two coupled first order linear differential equations related to the classical equations of motion. The model is used to obtain vibration‐translation probabilities in a collinear collision of an atom with a diatomic molecule. Numerical calculations have been performed for systems with several mass combinations and potential parameters. Approximation methods are compared, and criteria are established to determine when it is necessary to go beyond the popular linearly driven harmonic oscillator.

01 Nov 1985
TL;DR: The accuracy of the Ibrahim Time Domain (ITD) identification algorithm in extracting structural modal parameters from free response functions was studied using computer simulated data for 65 positions on an isotropic, uniform thickness plate with mode shapes obtained by NASTRAN analysis.
Abstract: The accuracy of the Ibrahim Time Domain (ITD) identification algorithm in extracting structural modal parameters from free response functions was studied using computer simulated data for 65 positions on an isotropic, uniform thickness plate with mode shapes obtained by NASTRAN analysis. Natural frequencies were used to study identification results over ranges of modal parameter values and user selectable algorithm constants. Effects of superimposing various levels of noise onto the functions were investigated. No detrimental effects were observed when the number of computational degrees of freedom allowed in the algorithm was made many times larger than the minimum necessary for adequate identification. The use of a high number of degrees of freedom when analyzing experimental data, for the simultaneous identification of many modes in one computer run, is suggested.

Proceedings ArticleDOI
26 Apr 1985
TL;DR: A new algorithm is presented for adaptive notch filtering and parametric spectral estimation of multiple narrow-band or sine wave signals in an additive broadband process and approaches the Cramer-Rao bound in estimating the frequencies of the sine wave signals.
Abstract: A new algorithm is presented for adaptive notch filtering and parametric spectral estimation of multiple narrow-band or sine wave signals in an additive broadband process. The algorithm is of recursive prediction error (RPE) form and uses a special constrained model of proper infinite impulse response (IIR) with a minimal number of parameters. The convergent filter is characterized by highly narrow bandwidth and uniform notches of desired shape. For sufficiently large data sets, the algorithm approaches the Cramer-Rao bound in estimating the frequencies of the sine wave signals.

Journal ArticleDOI
TL;DR: This work investigates the case where the random field is not necessarily stationary, where the data are so scarce and so scattered in space that sample covariance function estimates are not meaningful and where, therefore, an analytical parametric 'variogram' model is used in lieu of the covariance.

Journal ArticleDOI
TL;DR: In this paper, the authors considered the possibility of physically and logically reversible processing of digital information in Josephson-junction circuits of a reasonable complexity and showed that the power dissipation in the whole device can still be as low as ~30 nW.
Abstract: A possibility of physically and logically reversible processing of digital information in Josephson-junction circuits of a reasonable complexity has been considered. As an example, an 8-bit 1024-point fast convolver has been designed on the basis of a two-dimensional quasi-uniform array of ~250×30,000 parametric quantrons. This completely reversible conveyor device can operate at an estimated rate of at least ~10^9 numbers per second, which corresponds to ~10^14 binary logical operations per second. At this rate the power dissipation in the whole device can still be as low as ~30 nW. A new mode of the parametric quantron operation with larger parameter margins is also described.

Journal ArticleDOI
TL;DR: This paper describes an approach for determining the optimal solution of the linear fractional programming problem based mainly on a parametric analysis of a related linear substitution problem.
Abstract: This paper describes an approach for determining the optimal solution of the linear fractional programming problem. This approach is based mainly on a parametric analysis of a related linear substitution problem. Whereas existing algorithms for linear fractional programs concentrate solely on determining the optimal solution, the approach described in this paper provides the decision maker with additional insights and valuable information on the underlying decision problem. When the usual algorithms are applied, this additional information, which is gained by the parametric analysis of the substitution problem, can be obtained only by further computational effort.
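
The parametric-substitution idea can be illustrated with a Dinkelbach-type iteration (a related but simpler scheme than the paper's): maximize num(x)/den(x) by repeatedly solving the linear substitution problem max num(x) - lam*den(x) and updating lam. For brevity the feasible polytope is given directly by its vertices, where the optimum of a linear fractional objective is attained:

```python
def dinkelbach(vertices, num, den, tol=1e-10):
    """Parametric scheme for max num(x)/den(x) over a polytope given by its
    vertices (den > 0 assumed on the feasible set): repeatedly solve the
    linear substitution problem F(lam) = max num(x) - lam * den(x), then
    update lam to the incumbent ratio until F(lam) = 0."""
    lam = 0.0
    while True:
        x = max(vertices, key=lambda v: num(v) - lam * den(v))
        if num(x) - lam * den(x) <= tol:
            return lam, x
        lam = num(x) / den(x)

# max (2*x1 + x2 + 2) / (x1 + x2 + 1) over the unit square (via its vertices).
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
best_ratio, best_x = dinkelbach(verts,
                                lambda v: 2.0 * v[0] + v[1] + 2.0,
                                lambda v: v[0] + v[1] + 1.0)
```

Because lam strictly increases at each step and the vertex set is finite, the iteration terminates after finitely many substitution problems.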

Journal ArticleDOI
TL;DR: Three phases in the generation of kinematic error functions from volumetric error measurements are described in this paper: establishing a kinematic model, acquiring the volumetric error measurements, and determining the parametric error functions from the volumetric error measurements.

Journal ArticleDOI
TL;DR: In this paper, distribution-free alternatives to parametric analysis of covariance are presented and demonstrated using a specific data example, and the results of simulation studies investigating these procedures regarding their respective Type I error rate under a null condition and their statistical power are also reviewed.
Abstract: Five distribution-free alternatives to parametric analysis of covariance are presented and demonstrated using a specific data example. The results of simulation studies investigating these procedures regarding their respective Type I error rate under a null condition and their statistical power are also reviewed. The results indicate that the nonparametric procedures have appropriate Type I error rates only for those situations in which parametric ANCOVA is robust to violations of data assumptions. In terms of statistical power, nonparametric alternatives to parametric ANCOVA provide a considerable power advantage only for situations in which extreme violations of assumptions have occurred and the linear relationship between measures is weak.

Journal ArticleDOI
TL;DR: Two algorithms are developed: a parametric algorithm and a primal-dual algorithm that solve a linear programming problem with parametric upper bounds and a sequence of related dual feasible linear programming problems.

Journal ArticleDOI
TL;DR: In this article, an ARMA model was used to represent accelerograms which were first processed using a variance-stabilizing transformation, and relationships were found between ARMA parameters and physical variables making it possible to simulate earthquake ground motions throughout the region.

Journal ArticleDOI
TL;DR: In this paper, the singular value decomposition (SVD) is used to analyze the matrix forward operators of a parametric single-trace model, and inverse operators are designed which combine stability with high resolving power.
Abstract: An alternative to the conventional time series approach to single‐trace modeling and inversion by convolution and inverse filtering is a parametric approach. To obtain insight into the potential of the parametric approach, the solution of the single‐trace forward problem is formulated in matrix terms. For the nonlinear reflector lag time parameters this is achieved by linearization, which is shown to be a valid approximation over a sufficiently large region. The matrix forward operators are analyzed by means of the singular value decomposition (SVD). The SVD can be considered a generalization of the Fourier transform of convolution operators. On the basis of the SVD analysis, inverse operators are designed which combine stability with high resolving power. A method to determine the resolving power of the parametric inverse operators is presented. Several examples show how wavelet bandwidth, data noise level, and model complexity influence the resolving power of the data for the reflection coefficient and ...
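
In the SVD basis, designing an inverse operator that combines stability with resolving power amounts to deciding how to divide by the singular values; the truncated-SVD sketch below (illustrative numbers) drops components whose singular values would amplify noise:

```python
def truncated_svd_inverse(singular_values, data_coeffs, cutoff):
    """Stable inversion in the SVD basis: each data coefficient along a
    singular vector is divided by its singular value, but components whose
    singular values fall below `cutoff` are discarded. Truncation trades
    resolving power for stability, since tiny singular values amplify
    data noise in the naive inverse."""
    return [d / s if s >= cutoff else 0.0
            for s, d in zip(singular_values, data_coeffs)]

# One well-resolved component, one marginal, one nearly in the null space.
svals = [2.0, 0.5, 1e-6]
data = [4.0, 1.0, 1e-3]   # the last coefficient is essentially noise
m_naive = [d / s for s, d in zip(svals, data)]
m_stable = truncated_svd_inverse(svals, data, cutoff=0.1)
```

The naive inverse turns the noise-dominated third coefficient into a huge spurious model component; the truncated inverse simply leaves it unresolved.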