
Showing papers on "Parametric statistics published in 1996"


Journal ArticleDOI
TL;DR: The article consists of background material and the basic problem formulation, introduces spectral-based algorithmic solutions to the signal parameter estimation problem, and contrasts these suboptimal solutions to parametric methods.
Abstract: The quintessential goal of sensor array signal processing is the estimation of parameters by fusing temporal and spatial information, captured via sampling a wavefield with a set of judiciously placed antenna sensors. The wavefield is assumed to be generated by a finite number of emitters, and contains information about signal parameters characterizing the emitters. A review of the area of array processing is given. The focus is on parameter estimation methods, and many relevant problems are only briefly mentioned. We emphasize the relatively more recent subspace-based methods in relation to beamforming. The article consists of background material and of the basic problem formulation. Then we introduce spectral-based algorithmic solutions to the signal parameter estimation problem. We contrast these suboptimal solutions to parametric methods. Techniques derived from maximum likelihood principles as well as geometric arguments are covered. Later, a number of more specialized research topics are briefly reviewed. Then, we look at a number of real-world problems for which sensor array processing methods have been applied. We also include an example with real experimental data involving closely spaced emitters and highly correlated signals, as well as a manufacturing application example.
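As a concrete illustration of the spectral-based (subspace) methods that the survey contrasts with parametric estimators, here is a minimal sketch of a MUSIC pseudospectrum for a uniform linear array. The array spacing, angle grid, and data layout are illustrative assumptions, not details taken from the article.

```python
import numpy as np

def music_spectrum(X, n_sources, d=0.5, angles_deg=np.linspace(-90, 90, 361)):
    """MUSIC pseudospectrum for a uniform linear array (sketch).

    X: (n_sensors x n_snapshots) complex data matrix; d: element spacing in wavelengths.
    """
    n_sensors = X.shape[0]
    R = X @ X.conj().T / X.shape[1]             # sample spatial covariance
    _, eigvec = np.linalg.eigh(R)               # eigenvalues in ascending order
    En = eigvec[:, :n_sensors - n_sources]      # noise-subspace eigenvectors
    m = np.arange(n_sensors)
    P = np.empty(len(angles_deg))
    for i, th in enumerate(np.deg2rad(angles_deg)):
        a = np.exp(-2j * np.pi * d * m * np.sin(th))        # ULA steering vector
        P[i] = 1.0 / np.real(a.conj() @ En @ En.conj().T @ a)
    return angles_deg, P                        # peaks of P indicate candidate arrival angles
```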

4,410 citations


Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging is used to probe PFC activity during a sequential letter task in which memory load was varied in an incremental fashion, providing a "dose-response curve" describing the involvement of both PFC and related brain regions in WM function.

1,609 citations


Journal ArticleDOI
TL;DR: In this paper, the authors dealt with the H∞ control problem for systems with parametric uncertainty in all matrices of the system and output equations and derived necessary and sufficient conditions for quadratic stability with disturbance attenuation.
Abstract: This paper deals with the H∞ control problem for systems with parametric uncertainty in all matrices of the system and output equations. The parametric uncertainty under consideration is of a linear fractional form. Both the continuous and the discrete-time cases are considered. Necessary and sufficient conditions for quadratic stability with H∞ disturbance attenuation are obtained.

1,557 citations


Journal ArticleDOI
TL;DR: These LMI-based tests are applicable to constant or time-varying uncertain parameters and are less conservative than quadratic stability in the case of slow parametric variations, and they often compare favorably with /spl mu/ analysis for time-invariant parameter uncertainty.
Abstract: This paper presents new tests to analyze the robust stability and/or performance of linear systems with uncertain real parameters. These tests are extensions of the notions of quadratic stability and performance where the fixed quadratic Lyapunov function is replaced by a Lyapunov function with affine dependence on the uncertain parameters. Admittedly with some conservatism, the construction of such parameter-dependent Lyapunov functions can be reduced to a linear matrix inequality (LMI) problem and hence is numerically tractable. These LMI-based tests are applicable to constant or time-varying uncertain parameters and are less conservative than quadratic stability in the case of slow parametric variations. They also avoid the frequency sweep needed in real-/spl mu/ analysis, and numerical experiments indicate that they often compare favorably with /spl mu/ analysis for time-invariant parameter uncertainty.
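For readers unfamiliar with LMI-based tests, below is a minimal sketch of the plain quadratic-stability check that the paper's parameter-dependent Lyapunov functions refine: a single matrix P > 0 is sought such that A'P + PA < 0 holds at every vertex of the uncertainty polytope. The vertex matrices, tolerances, and the use of cvxpy are assumptions for illustration; the affine-Lyapunov tests of the paper are richer than this baseline.

```python
import numpy as np
import cvxpy as cp

# Hypothetical vertex matrices of an affinely uncertain system dx/dt = A(theta) x
A0 = np.array([[0.0, 1.0], [-2.0, -1.0]])
A1 = np.array([[0.0, 1.0], [-3.0, -1.5]])
vertices = [A0, A1]

n = A0.shape[0]
P = cp.Variable((n, n), symmetric=True)
eps = 1e-4
constraints = [P >> eps * np.eye(n)]
constraints += [A.T @ P + P @ A << -eps * np.eye(n) for A in vertices]

prob = cp.Problem(cp.Minimize(0), constraints)   # pure feasibility problem
prob.solve()
print(prob.status)   # 'optimal' means one quadratic Lyapunov function covers all vertices
```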

999 citations


Journal ArticleDOI
Jorma Rissanen1
TL;DR: A sharper code length is obtained as the stochastic complexity and the associated universal process are derived for a class of parametric processes by taking into account the Fisher information and removing an inherent redundancy in earlier two-part codes.
Abstract: By taking into account the Fisher information and removing an inherent redundancy in earlier two-part codes, a sharper code length as the stochastic complexity and the associated universal process are derived for a class of parametric processes. The main condition required is that the maximum-likelihood estimates satisfy the central limit theorem. The same code length is also obtained from the so-called maximum-likelihood code.
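For orientation, the refined code length described above has the following well-known asymptotic form (a sketch with standard notation rather than symbols quoted from the paper): the first term is the negative maximized log-likelihood, k is the number of parameters, n the sample size, and I(theta) the Fisher information matrix.

```latex
L(x^n) \;\approx\; -\log f\!\left(x^n;\hat{\theta}(x^n)\right)
       \;+\; \frac{k}{2}\,\log\frac{n}{2\pi}
       \;+\; \log\!\int_{\Theta}\sqrt{\det I(\theta)}\;d\theta
```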

906 citations


Journal ArticleDOI
TL;DR: The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.

860 citations


Journal ArticleDOI
TL;DR: In this article, a nonparametric approach to significance testing for statistic images from activation studies is presented, which is based on a simple rest-activation study, and relies only on minimal assumptions about the design of the experiment, with Type I error (almost) exactly that specified, and hence is always valid.
Abstract: The analysis of functional mapping experiments in positron emission tomography involves the formation of images displaying the values of a suitable statistic, summarising the evidence in the data for a particular effect at each voxel. These statistic images must then be scrutinised to locate regions showing statistically significant effects. The methods most commonly used are parametric, assuming a particular form of probability distribution for the voxel values in the statistic image. Scientific hypotheses, formulated in terms of parameters describing these distributions, are then tested on the basis of the assumptions. Images of statistics are usually considered as lattice representations of continuous random fields. These are more amenable to statistical analysis. There are various shortcomings associated with these methods of analysis. The many assumptions and approximations involved may not be true. The low numbers of subjects and scans, in typical experiments, lead to noisy statistic images with low degrees of freedom, which are not well approximated by continuous random fields. Thus, the methods are only approximately valid at best and are most suspect in single-subject studies. In contrast to the existing methods, we present a nonparametric approach to significance testing for statistic images from activation studies. Formal assumptions are replaced by a computationally expensive approach. In a simple rest-activation study, if there is really no activation effect, the labelling of the scans as “active” or “rest” is artificial, and a statistic image formed with some other labelling is as likely as the observed one. Thus, considering all possible relabellings, a p value can be computed for any suitable statistic describing the statistic image. Consideration of the maximal statistic leads to a simple nonparametric single-threshold test. This randomisation test relies only on minimal assumptions about the design of the experiment, is (almost) exact, with Type I error (almost) exactly that specified, and hence is always valid. The absence of distributional assumptions permits the consideration of a wide range of test statistics, for instance, “pseudo” t statistic images formed with smoothed variance images. The approach presented extends easily to other paradigms, permitting nonparametric analysis of most functional mapping experiments. When the assumptions of the parametric methods are true, these new nonparametric methods, at worst, provide for their validation. When the assumptions of the parametric methods are dubious, the nonparametric methods provide the only analysis that can be guaranteed valid and exact.
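The single-threshold permutation test described above is straightforward to prototype. The sketch below uses hypothetical synthetic data (scans by voxels), a Welch-style two-sample t statistic, and random relabellings in place of full enumeration; the data shapes and number of permutations are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_active, n_rest, n_vox = 6, 6, 1000
data = rng.normal(size=(n_active + n_rest, n_vox))   # hypothetical scans x voxels
labels = np.array([1] * n_active + [0] * n_rest)

def voxelwise_t_and_max(data, labels):
    a, b = data[labels == 1], data[labels == 0]
    t = (a.mean(0) - b.mean(0)) / np.sqrt(a.var(0, ddof=1) / len(a) +
                                          b.var(0, ddof=1) / len(b))
    return t, t.max()

t_obs, max_obs = voxelwise_t_and_max(data, labels)

n_perm = 1000
max_dist = np.array([voxelwise_t_and_max(data, rng.permutation(labels))[1]
                     for _ in range(n_perm)])

crit = np.quantile(max_dist, 0.95)            # single-threshold critical value
sig_voxels = np.where(t_obs > crit)[0]        # voxels significant at the corrected 5% level
p_corrected = (1 + np.sum(max_dist >= max_obs)) / (1 + n_perm)
print(len(sig_voxels), p_corrected)
```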

817 citations


Journal ArticleDOI
TL;DR: Alternative techniques drawn from the fields of resistant, robust and non-parametric statistics are presented; they are usually much less affected by the presence of ‘outliers’ and other forms of non-normality.
Abstract: Basic traditional parametric statistical techniques are used widely in climatic studies for characterizing the level (central tendency) and variability of variables, assessing linear relationships (including trends), detection of climate change, quality control and assessment, identification of extreme events, etc. These techniques may involve estimation of parameters such as the mean (a measure of location), variance (a measure of scale) and correlation/regression coefficients (measures of linear association); in addition, it is often desirable to estimate the statistical significance of the difference between estimates of the mean from two different samples as well as the significance of estimated measures of association. The validity of these estimates is based on underlying assumptions that sometimes are not met by real climate data. Two of these assumptions are addressed here: normality and homogeneity (and as a special case statistical stationarity); in particular, contamination from a relatively few ‘outlying values’ may greatly distort the estimates. Sometimes these common techniques are used in order to identify outliers; ironically they may fail because of the presence of the outliers! Alternative techniques drawn from the fields of resistant, robust and non-parametric statistics are usually much less affected by the presence of ‘outliers’ and other forms of non-normality. Some of the theoretical basis for the alternative techniques is presented as motivation for their use and to provide quantitative measures for their performance as compared with the traditional techniques that they may replace. Although this work is by no means exhaustive, typically a couple of suitable alternatives are presented for each of the common statistical quantities/tests mentioned above. All of the technical details needed to apply these techniques are presented in an extensive appendix. With regard to the issue of homogeneity of the climate record, a powerful non-parametric technique is introduced for the objective identification of ‘change-points’ (discontinuities) in the mean. These may arise either naturally (abrupt climate change) or as the result of errors or changes in instruments, recording practices, data transmission, processing, etc. The change-point test is able to identify multiple discontinuities and requires no ‘metadata’ or comparison with neighbouring stations; these are important considerations because instrumental changes are not always documented and, particularly with regard to radiosonde observations, suitable neighbouring stations for ‘buddy checks’ may not exist. However, when such auxiliary information is available it may be used as independent confirmation of the artificial nature of the discontinuities. The application and practical advantages of these alternative techniques are demonstrated using primarily actual radiosonde station data and in a few cases using some simulated (artificial) data as well. The ease with which suitable examples were obtained from the radiosonde archive begs for serious consideration of these techniques in the analysis of climate data.
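Two of the ideas above are simple to express in code: a resistant location/scale estimate (median and MAD) and a rank-based statistic for a single shift in the mean. This is a generic sketch in the spirit of the paper, not its exact procedures; the Pettitt-style statistic and the 1.4826 MAD scaling are standard choices assumed here.

```python
import numpy as np
from scipy import stats

def robust_location_scale(x):
    med = np.median(x)
    mad = 1.4826 * np.median(np.abs(x - med))   # scaled MAD, consistent for normal data
    return med, mad

def rank_changepoint(x):
    """Return a candidate change point (end of first segment) and the max |U_k|."""
    x = np.asarray(x)
    n = len(x)
    r = stats.rankdata(x)
    # U_k: sum of ranks of the first k values minus its expectation under "no change"
    U = np.cumsum(r)[:-1] - np.arange(1, n) * (n + 1) / 2.0
    k = int(np.argmax(np.abs(U))) + 1
    return k, float(np.abs(U).max())
```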

574 citations


Book ChapterDOI
01 Jan 1996
TL;DR: In this paper, the authors review the history of local regression and discuss four basic components that must be chosen in using local regression in practice: the weight function, the parametric family that is fitted locally, the bandwidth, and the assumptions about the distribution of the response.
Abstract: Local regression is an old method for smoothing data, having origins in the graduation of mortality data and the smoothing of time series in the late 19th century and the early 20th century. Still, new work in local regression continues at a rapid pace. We review the history of local regression. We discuss four of its basic components that must be chosen in using local regression in practice: the weight function, the parametric family that is fitted locally, the bandwidth, and the assumptions about the distribution of the response. A major theme of the paper is that these choices represent a modeling of the data; different data sets deserve different choices. We describe polynomial mixing, a method for enlarging polynomial parametric families. We introduce an approach to adaptive fitting, assessment of parametric localization. We describe the use of this approach to design two adaptive procedures: one automatically chooses the mixing degree of mixing polynomials at each x using cross-validation, and the other chooses the bandwidth at each x using C_p. Finally, we comment on the efficacy of using asymptotics to provide guidance for methods of local regression.
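A minimal sketch of the basic building block discussed above: a local polynomial fit at a point x0 with a tricube weight function and a fixed bandwidth. The weight function, degree, and bandwidth are illustrative choices; the paper's polynomial mixing and adaptive selection procedures are not reproduced here.

```python
import numpy as np

def local_poly_fit(x, y, x0, bandwidth, degree=1):
    """Loess-style weighted polynomial fit around x0; returns the fitted value at x0."""
    u = np.abs(x - x0) / bandwidth
    w = np.where(u < 1.0, (1.0 - u**3)**3, 0.0)          # tricube weights
    X = np.vander(x - x0, degree + 1, increasing=True)   # local polynomial basis
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(sw[:, None] * X, sw * y, rcond=None)
    return beta[0]                                       # intercept = fit at x0
```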

469 citations


Journal ArticleDOI
TL;DR: It is shown that the search for robustly stabilizing controllers may be limited to controllers with the same order as the original plant, and sufficient conditions for the existence of parameter-dependent Lyapunov functions are given in terms of a criterion reminiscent of Popov's stability criterion.
Abstract: In this paper, the problem of robust stability of systems subject to parametric uncertainties is considered. Sufficient conditions for the existence of parameter-dependent Lyapunov functions are given in terms of a criterion which is reminiscent of, but less conservative than, Popov's stability criterion. An equivalent frequency-domain criterion is demonstrated. The relative sharpness of the proposed test and existing stability criteria is then discussed. The use of parameter-dependent Lyapunov functions for robust controller synthesis is then considered. It is shown that the search for robustly stabilizing controllers may be limited to controllers with the same order as the original plant. A possible synthesis procedure and a numerical example are then discussed.

415 citations


Journal ArticleDOI
TL;DR: It is shown that the block size plays an important role in determining the success of the block bootstrap, and a data-based block size selection procedure is proposed; the need to account for lag order uncertainty in resampling is also emphasized.
Abstract: In recent years, several new parametric and nonparametric bootstrap methods have been proposed for time series data. Which of these methods should applied researchers use? We provide evidence that for many applications in time series econometrics parametric methods are more accurate, and we identify directions for future research on improving nonparametric methods. We explicitly address the important but often neglected issue of model selection in bootstrapping. In particular, we emphasize the advantages of the AIC over other lag order selection criteria and the need to account for lag order uncertainty in resampling. We also show that the block size plays an important role in determining the success of the block bootstrap, and we propose a data-based block size selection procedure.
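For reference, a bare-bones moving-block bootstrap of a statistic of a time series is sketched below. The block length, number of replicates, and statistic are user choices; the paper's data-based block-size selection procedure is not implemented here.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot, stat=np.mean, seed=None):
    rng = np.random.default_rng(seed)
    x = np.asarray(x)
    n = len(x)
    blocks = np.array([x[i:i + block_len] for i in range(n - block_len + 1)])
    n_blocks = int(np.ceil(n / block_len))
    out = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, len(blocks), size=n_blocks)
        out[b] = stat(np.concatenate(blocks[idx])[:n])   # stitch blocks, trim to length n
    return out
```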

Book
01 Aug 1996
TL;DR: This tutorial-style book covers model building, inference, approximations, decisions, and worked examples.
Abstract: PART 1: MODEL BUILDING; PART 2: INFERENCE; PART 3: APPROXIMATIONS; PART 4: DECISIONS; PART 5: EXAMPLES; PART 6: APPENDICES.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed a calibration algorithm that estimates the calibration matrix consisting of the unknown gain, phase, and mutual coupling coefficients as well as the sensor positions using a set of calibration sources in known locations.
Abstract: High-resolution array processing algorithms for source localization are known to be sensitive to errors in the model for the sensor-array spatial response. In particular, unknown gain, phase, and mutual coupling as well as errors in the sensor positions can seriously degrade the performances of array-processing algorithms. This paper describes a calibration algorithm that estimates the calibration matrix consisting of the unknown gain, phase, and mutual-coupling coefficients as well as the sensor positions using a set of calibration sources in known locations. The estimation of the various parameters is based on a maximum likelihood approach. Cramer-Rao lower-bound (CRB) expressions for the sensor positions and the calibration matrix parameters are also derived. Numerical results are shown to illustrate the potential usefulness of the proposed calibration algorithm toward better accuracy and resolution in parametric array-processing algorithms.

Journal ArticleDOI
TL;DR: This paper compares partially parametric and fully parametric regression-based multiple-imputation methods for handling data sets with missing values and provides an example of how multiple imputation can be used to combine information from two cohorts to estimate quantities that cannot be estimated directly from either one of the cohorts separately.


Journal ArticleDOI
TL;DR: A new model-based segmentation technique combining desirable properties of physical models, shape representation by Fourier parametrization, and modelling of natural shape variability is described, which achieves a uniform mapping between object surface and parameter space.

Journal ArticleDOI
TL;DR: In this article, a unified process design framework for obtaining integrated process and control systems design, which are economically optimal and can cope with parametric uncertainty and process disturbances, is described.
Abstract: Fundamental developments of a unified process design framework for obtaining integrated process and control systems design, which are economically optimal and can cope with parametric uncertainty and process disturbances, are described. Based on a dynamic mathematical model describing the process, including path constraints, interior and end-point constraints, a model that describes uncertain parameters and time-varying disturbances (for example, probability distributions or lower/upper bounds), and a set of process design and control alternatives (together with a set of control objectives and types of controllers), the problem is posed as a mixed-integer stochastic optimal control formulation. The proposed iterative decomposition algorithm alternates between the solution of a multiperiod “design” subproblem, determining the process structure and design together with a suitable control structure (and its design characteristics) to satisfy a set of “critical” parameters/periods (for uncertainty/disturbance) over time, and a time-varying feasibility analysis step, which identifies a new set of critical parameters for fixed design and control. Two examples are detailed: a mixing-tank problem to show the analytical steps of the procedure, and a ternary distillation design problem (featuring a rigorous tray-by-tray distillation model) to demonstrate the potential of the novel approach to reach solutions with significant cost savings over sequential techniques.

Journal ArticleDOI
TL;DR: In this article, a systematic way to combine adaptive control and sliding mode control (SMC) for trajectory tracking of robot manipulators in the presence of parametric uncertainties and uncertain nonlinearities is developed.
Abstract: A systematic way to combine adaptive control and sliding mode control (SMC) for trajectory tracking of robot manipulators in the presence of parametric uncertainties and uncertain nonlinearities is developed. Continuous sliding mode controllers without reaching transients and chattering problems are first developed by using a dynamic sliding mode. Transient performance is guaranteed and globally uniformly ultimately bounded (GUUB) stability is obtained. An adaptive scheme is also developed for comparison. With some modifications to the adaptation law, the control law is redesigned by combining the design methodologies of adaptive control and sliding mode control. The suggested controller preserves the advantages of both methods, namely, asymptotic stability of the adaptive system for parametric uncertainties and GUUB stability with guaranteed transient performance of sliding mode control for both parametric uncertainties and uncertain nonlinearities. The control law is continuous and the chattering problem of sliding mode control is avoided. Prior knowledge of bounds on parametric uncertainties and uncertain nonlinearities is assumed. Experimental results conducted on the UCB/NSK SCARA direct drive robot show that the combined method reduces the final tracking error by more than half compared with the smoothed SMC laws for a payload uncertainty of 6 kg, and validate the advantage of introducing parameter adaptation in the smoothed SMC laws.
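To make the combination of adaptation and smoothed sliding mode concrete, here is a toy single-joint sketch: a saturated (boundary-layer) switching term plus a gradient-type parameter update driven by the sliding variable. The plant model, regressor, gains, and update law are illustrative assumptions and simpler than the controller actually proposed in the paper.

```python
import numpy as np

# Toy single joint with unknown inertia m, damping c and constant load g:
#   m*qdd + c*qd + g = u
def smooth_adaptive_smc_step(q, qd, q_des, qd_des, qdd_des, theta_hat, dt,
                             lam=5.0, k=20.0, phi=0.05,
                             Gamma=np.diag([0.5, 0.5, 0.5])):
    e, ed = q - q_des, qd - qd_des
    s = ed + lam * e                               # sliding variable
    qr_dd = qdd_des - lam * ed                     # "reference" acceleration
    Y = np.array([qr_dd, qd, 1.0])                 # regressor for theta = [m, c, g]
    sat = np.clip(s / phi, -1.0, 1.0)              # smoothed switching (boundary layer phi)
    u = Y @ theta_hat - k * sat                    # model compensation + robust term
    theta_hat = theta_hat - dt * (Gamma @ Y) * s   # gradient-type adaptation law (Euler step)
    return u, theta_hat
```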

Journal ArticleDOI
TL;DR: In this paper, an analytical method for planning an efficient tool-path in machining free-form surfaces on 3-axis milling machines is presented; it uses a nonconstant offset of the previous tool-path, which guarantees that the cutter moves in an unmachined area of the part surface.
Abstract: This paper presents an analytical method for planning an efficient tool-path in machining free-form surfaces on 3-axis milling machines. This new approach uses a nonconstant offset of the previous tool-path, which guarantees that the cutter moves in an unmachined area of the part surface without redundant machining. The method comprises three steps: (1) the calculation of the tool-path interval, (2) the conversion from the path interval to the parametric interval, and (3) the synthesis of efficient tool-path planning.

Journal ArticleDOI
TL;DR: A new model for estimating optical flow based on the motion of planar regions plus local deformations, which exploits the strong constraints of parametric approaches while retaining the adaptive nature of regularization approaches.
Abstract: This paper presents a new model for estimating optical flow based on the motion of planar regions plus local deformations. The approach exploits brightness information to organize and constrain the interpretation of the motion by using segmented regions of piecewise smooth brightness to hypothesize planar regions in the scene. Parametric flow models are estimated in these regions in a two-step process which first computes a coarse fit and then estimates the appropriate parametrization of the motion of the region. The initial fit is refined using a generalization of the standard area-based regression approaches. Since the assumption of planarity is likely to be violated, we allow local deformations from the planar assumption in the same spirit as physically-based approaches which model shape using coarse parametric models plus local deformations. This parametric-plus-deformation model exploits the strong constraints of parametric approaches while retaining the adaptive nature of regularization approaches. Experimental results on a variety of images show that the model produces accurate flow estimates and benefits from the incorporation of brightness segmentation boundaries.
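The parametric component of such models is commonly the standard 8-parameter flow induced by a moving plane; a sketch of that model and a plain least-squares fit is given below. Using it as the regional motion model here is an assumption for illustration; the paper's region estimation and local-deformation machinery are not shown.

```python
import numpy as np

def planar_flow(params, x, y):
    """Image velocities (u, v) at coordinates (x, y) under the 8-parameter planar model."""
    a0, a1, a2, a3, a4, a5, a6, a7 = params
    u = a0 + a1 * x + a2 * y + a6 * x**2 + a7 * x * y
    v = a3 + a4 * x + a5 * y + a6 * x * y + a7 * y**2
    return u, v

def fit_planar_flow(x, y, u, v):
    """Least-squares estimate of the 8 parameters from sampled flow vectors."""
    zeros, ones = np.zeros_like(x), np.ones_like(x)
    A_u = np.stack([ones, x, y, zeros, zeros, zeros, x**2, x * y], axis=1)
    A_v = np.stack([zeros, zeros, zeros, ones, x, y, x * y, y**2], axis=1)
    A = np.vstack([A_u, A_v])
    b = np.concatenate([u, v])
    params, *_ = np.linalg.lstsq(A, b, rcond=None)
    return params
```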

Journal ArticleDOI
TL;DR: It is demonstrated that following the approach presented may lead to violations of the strict prescriptions and proscriptions of measurement theory, but that in practical terms these violations would have diminished consequences, especially when compared to the advantages afforded to the practicing researcher.
Abstract: Elements of measurement theory have recently been introduced into the software engineering discipline. It has been suggested that these elements should serve as the basis for developing, reasoning about, and applying measures. For example, it has been suggested that software complexity measures should be additive, that measures fall into a number of distinct types (i.e., levels of measurement: nominal, ordinal, interval, and ratio), that certain statistical techniques are not appropriate for certain types of measures (e.g., parametric statistics for less-than-interval measures), and that certain transformations are not permissible for certain types of measures (e.g., non-linear transformations for interval measures). In this paper we argue that, in spite of the importance of measurement theory, and in the context of software engineering, many of these prescriptions and proscriptions are either premature or, if strictly applied, would represent a substantial hindrance to the progress of empirical research in software engineering. This argument is based partially on studies that have been conducted by behavioral scientists and by statisticians over the last five decades. We also present a pragmatic approach to the application of measurement theory in software engineering. While following our approach may lead to violations of the strict prescriptions and proscriptions of measurement theory, we demonstrate that in practical terms these violations would have diminished consequences, especially when compared to the advantages afforded to the practicing researcher.

Journal ArticleDOI
TL;DR: This research examines a variety of approaches for using two-dimensional orthogonal polynomials for the recognition of handwritten Arabic numerals and presents an efficient method for computing the moments via geometric moments and a new approach to location invariance using a minimum bounding circle.
Abstract: This research examines a variety of approaches for using two-dimensional orthogonal polynomials for the recognition of handwritten Arabic numerals. It also makes use of parametric and non-parametric statistical and neural network classifiers. Polynomials, including Legendre, Zernike, and pseudo-Zernike, are used to generate moment-based features which are invariant to location, size, and (optionally) rotation. An efficient method for computing the moments via geometric moments is presented. A side effect of this method also yields scale invariance. A new approach to location invariance using a minimum bounding circle is presented, and a detailed analysis of the rotational properties of the moments is given. Data partitioning tests are performed to evaluate the various feature types and classifiers. For rotational invariant character recognition, the highest percentage of correctly classified characters was 91.7%, and for non-rotational invariant recognition it was 97.6%. This compares with a previous effort, using the same data and test conditions, of 94.8%. The techniques developed here should also be applicable to other areas of shape recognition.
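As a small illustration of the moment machinery, the sketch below computes geometric moments and the translation- and scale-invariant normalized central moments from which orthogonal (e.g., Zernike-type) moments can be assembled as linear combinations. The image representation and indexing conventions are assumptions; the paper's specific Legendre/Zernike feature computation is not reproduced.

```python
import numpy as np

def geometric_moment(img, p, q):
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    return np.sum((xs ** p) * (ys ** q) * img)

def normalized_central_moment(img, p, q):
    m00 = geometric_moment(img, 0, 0)
    xc = geometric_moment(img, 1, 0) / m00           # centroid gives translation invariance
    yc = geometric_moment(img, 0, 1) / m00
    ys, xs = np.mgrid[0:img.shape[0], 0:img.shape[1]]
    mu_pq = np.sum(((xs - xc) ** p) * ((ys - yc) ** q) * img)
    return mu_pq / m00 ** (1.0 + (p + q) / 2.0)      # normalization gives scale invariance
```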

Journal ArticleDOI
TL;DR: In this paper, a simple test for dependence in the residuals of a linear parametric time series model fitted to non-Gaussian data is presented, and the test statistic is a third-order extension of the standard correlation test for whiteness.
Abstract: This paper presents a simple test for dependence in the residuals of a linear parametric time series model fitted to non-Gaussian data. The test statistic is a third-order extension of the standard correlation test for whiteness, but the number of lags used in this test is a function of the sample size. The power of this test goes to one as the sample size goes to infinity for any alternative which has nonzero bicovariances c_e3(r, s) = E[e(t)e(t + r)e(t + s)] for a zero-mean stationary random time series. The asymptotic properties of the test statistic are rigorously determined. This test is important for the validation of the sampling properties of the parameter estimates for standard finite-parameter linear models when the unobserved input (innovations) process is white but not Gaussian. The sizes and power derived from the asymptotic results are checked using artificial data for a number of sample sizes. Theoretical and simulation results presented in this paper support the proposition that the test wi...
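A rough sketch of the quantity the test is built on: sample bicovariances of the residuals and a portmanteau-style sum of their squares. The standardization used below is a crude placeholder; the paper derives the proper scaling and how the number of lags should grow with the sample size.

```python
import numpy as np

def sample_bicovariance(e, r, s):
    """c_e3(r, s): average of e(t) e(t+r) e(t+s) over valid t, for lags r, s >= 0."""
    e = np.asarray(e, dtype=float) - np.mean(e)
    m = len(e) - max(r, s)
    return np.mean(e[:m] * e[r:r + m] * e[s:s + m])

def third_order_portmanteau(e, max_lag):
    """Crude sum of squared, roughly standardized bicovariances (illustrative only)."""
    n = len(e)
    sigma2 = np.var(e)
    stat = 0.0
    for r in range(max_lag + 1):
        for s in range(r, max_lag + 1):
            stat += n * sample_bicovariance(e, r, s) ** 2 / sigma2 ** 3
    return stat
```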

ReportDOI
TL;DR: In this paper, the authors discuss robust covariance matrix estimation, which is used to estimate the spectral density matrix at frequency zero of a vector of residual terms; in the kernel-based procedures, the weights applied to the autocovariances are determined by the kernel and the bandwidth parameter.
Abstract: This chapter discusses the concept of robust covariance matrix estimation. In many structural economic or time-series models, the errors may have heteroscedasticity and temporal dependence of unknown form. Thus, to draw accurate inferences from estimated parameters, it has become increasingly common to construct test statistics using a heteroskedasticity and autocorrelation consistent (HAC) or robust covariance matrix. The key step in constructing a HAC covariance matrix is to estimate the spectral density matrix at frequency zero of a vector of residual terms. In some empirical problems, the regression residuals are assumed to be generated by a specific parametric model. In a rational expectations model, for example, the Euler equation residuals typically follow a specific moving-average (MA) process of known finite order. HAC covariance matrix estimation procedures can be classified into two broad categories: nonparametric kernel-based procedures and parametric procedures. Each kernel-based procedure uses a weighted sum of the auto-covariances to estimate the spectral density at frequency zero, where the weights are determined by the kernel and the bandwidth parameter. Each parametric procedure estimates a time-series model and then constructs the spectral density at frequency zero that is implied by this model.
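The kernel-based branch described above is easy to sketch; below is a Newey-West (Bartlett-kernel) estimate of the zero-frequency spectral density of a T x k matrix of residual terms. The demeaning step and fixed bandwidth are illustrative choices, not prescriptions from the chapter.

```python
import numpy as np

def newey_west_hac(residuals, bandwidth):
    """Bartlett-kernel HAC estimate from a (T x k) array of residual terms."""
    u = residuals - residuals.mean(axis=0)
    T = u.shape[0]
    S = u.T @ u / T                                  # lag-0 term
    for j in range(1, bandwidth + 1):
        w = 1.0 - j / (bandwidth + 1.0)              # Bartlett kernel weight
        Gamma_j = u[j:].T @ u[:-j] / T               # j-th sample autocovariance
        S += w * (Gamma_j + Gamma_j.T)
    return S
```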

Journal ArticleDOI
TL;DR: In this article, the authors consider a regression model in which the mean function may have a discontinuity at an unknown point and propose an estimate of the location of the discontinuity based on one-sided nonparametric regression estimates of the mean function.
Abstract: We consider a regression model in which the mean function may have a discontinuity at an unknown point. We propose an estimate of the location of the discontinuity based on one-sided nonparametric regression estimates of the mean function. The change point estimate is shown to converge in probability at rate O(n^-1) and to have the same asymptotic distribution as maximum likelihood estimates considered by other authors under parametric regression models. Confidence regions for the location and size of the change are also discussed.
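A simplified version of the idea: estimate the mean function separately from the left and from the right of each candidate point with one-sided averages, and place the change point where the two estimates disagree most. The uniform kernel and grid search below are assumptions made for brevity.

```python
import numpy as np

def one_sided_mean(x, y, x0, h, side):
    """One-sided local average of y using only points within bandwidth h of x0."""
    mask = (x < x0) & (x > x0 - h) if side == 'left' else (x >= x0) & (x < x0 + h)
    return y[mask].mean() if mask.any() else np.nan

def estimate_jump_location(x, y, h, grid):
    """Candidate change point: where right- and left-side estimates differ the most."""
    diffs = np.array([one_sided_mean(x, y, t, h, 'right') -
                      one_sided_mean(x, y, t, h, 'left') for t in grid])
    return grid[np.nanargmax(np.abs(diffs))]
```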

Journal ArticleDOI
TL;DR: In this paper, a plate element model of a cantilevered box beam with varying rib stiffnesses is used to demonstrate efficiency and highlight practical difficulties of the proposed approaches for predictions of static responses, modal frequencies, mode shapes, and sensitivities of those quantities to design parameters.

Journal ArticleDOI
TL;DR: In this paper, the identification and estimation of mean regression models when a binary regressor is mismeasured are examined; bounds for the model parameters are identified, and simple estimators which are consistent and asymptotically normal are provided.

Journal ArticleDOI
TL;DR: The ability of this new technique to identify brain regions where rCBF is closely related to increasing word presentation rate in both subjects, without constraining the nature of this relationship, and where these nonlinear responses differ, is demonstrated.

Journal ArticleDOI
TL;DR: This paper compares the classical linear model using log-transformed data with two GLMs: one with Poisson errors and an empirical scale parameter, and one in which negative binomial errors are explicitly defined (Model 3), and concludes that either GLM method will serve equally well.
Abstract: 1. Empirically, parasite distributions are often best described by the negative binomial distribution; some hosts have many parasites while most have just a few. Thus identifying heterogeneities in parasite burdens using conventional parametric methods is problematical. In an attempt to conform to the assumptions of parametric analyses, parasitologists and ecologists frequently log-transform their overdispersed data prior to analysis. In this paper, we compare this method of analysis with an alternative, generalized linear modelling (GLM), approach. 2. We compare the classical linear model using log-transformed data (Model 1) with two GLMs: one with Poisson errors and an empirical scale parameter (Model 2), and one in which negative binomial errors are explicitly defined (Model 3). We use simulated datasets and empirical data from a long-term study of parasitism in Soay Sheep on St Kilda to test the efficacies of these three statistical models. 3. We conclude that Model 1 is much more likely to produce type I errors than either of the two GLMs, and that it also tends to produce more type II errors. Model 3 is only marginally more successful than Model 2, indicating that the use of an empirical scale parameter is only slightly more likely to generate errors than using an explicitly defined negative binomial distribution. Thus, while we strongly recommend the use of GLMs over conventional parametric analyses, either GLM method will serve equally well.
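Using statsmodels, a rough modern analogue of the three models compared above can be sketched as follows (Model 1: linear model on log-transformed counts; Model 2: Poisson errors with an empirical scale parameter; Model 3: explicit negative binomial errors). The simulated covariate, sample size, dispersion value, and log(y + 1) transform are hypothetical illustration choices, not taken from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
X = sm.add_constant(x)
mu = np.exp(0.5 + 0.8 * x)
y = rng.negative_binomial(n=2, p=2.0 / (2.0 + mu))   # overdispersed counts (NB, k = 2)

# Model 1: classical linear model on log-transformed counts
m1 = sm.OLS(np.log(y + 1.0), X).fit()

# Model 2: GLM with Poisson errors and an empirical (Pearson chi-square) scale parameter
m2 = sm.GLM(y, X, family=sm.families.Poisson()).fit(scale='X2')

# Model 3: GLM with explicitly defined negative binomial errors (alpha = 1/k assumed known)
m3 = sm.GLM(y, X, family=sm.families.NegativeBinomial(alpha=0.5)).fit()

for name, m in [("log-linear", m1), ("quasi-Poisson", m2), ("neg. binomial", m3)]:
    print(name, m.params, m.bse)
```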

Journal ArticleDOI
01 Jan 1996 - Gene
TL;DR: A rapid algorithm is presented that allows each operation to take on a range of weights, producing a relatively tight upper bound on the distance between single-chromosome genomes by means of a greedy search with look-ahead.