
Showing papers on "Parametric statistics published in 1993"


Journal ArticleDOI
TL;DR: The integrated squared difference between a parametric and a nonparametric curve estimate is bootstrapped to test the parametric model; the standard way of bootstrapping this statistic is shown to fail, and the proposed wild bootstrap is applied to fitting Engel curves in expenditure data analysis.
Abstract: In general, there will be visible differences between a parametric and a nonparametric curve estimate. It is therefore quite natural to compare these in order to decide whether the parametric model could be justified. An asymptotic quantification is the distribution of the integrated squared difference between these curves. We show that the standard way of bootstrapping this statistic fails. We use and analyse a different form of bootstrapping for this task. We call this method the wild bootstrap and apply it to fitting Engel curves in expenditure data analysis.

1,229 citations
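The wild bootstrap idea is easy to demonstrate. Below is a minimal sketch, not the authors' code: the toy data, bandwidth, and grid are invented, and the Mammen two-point weights are one standard choice of wild-bootstrap multiplier.

```python
import numpy as np

rng = np.random.default_rng(0)

def kernel_smooth(x, y, grid, h):
    """Nadaraya-Watson estimate of E[y|x] on `grid` with bandwidth h."""
    w = np.exp(-0.5 * ((grid[:, None] - x[None, :]) / h) ** 2)
    return (w @ y) / w.sum(axis=1)

def mammen_weights(n):
    """Two-point wild-bootstrap weights with E V = 0, E V^2 = 1, E V^3 = 1."""
    g = (1 + np.sqrt(5)) / 2
    p = g / np.sqrt(5)
    return np.where(rng.random(n) < p, 1 - g, g)

def isd_statistic(x, y, h, grid):
    """Integrated squared difference between a linear fit and a kernel fit."""
    beta = np.polyfit(x, y, 1)
    return np.mean((np.polyval(beta, grid) - kernel_smooth(x, y, grid, h)) ** 2)

# Wild bootstrap of the statistic under the parametric (here: linear) null.
x = rng.uniform(0, 1, 200)
y = 1 + 2 * x + rng.normal(0, 0.3, 200)          # toy "Engel curve" data
grid, h = np.linspace(0.05, 0.95, 50), 0.08
t_obs = isd_statistic(x, y, h, grid)

beta = np.polyfit(x, y, 1)
resid = y - np.polyval(beta, x)
t_boot = []
for _ in range(500):
    # resample residuals with random signs/magnitudes, keep the null fit
    y_star = np.polyval(beta, x) + resid * mammen_weights(len(x))
    t_boot.append(isd_statistic(x, y_star, h, grid))
p_value = np.mean(np.array(t_boot) >= t_obs)
print(f"ISD = {t_obs:.4f}, wild-bootstrap p = {p_value:.3f}")
```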


Journal ArticleDOI
TL;DR: The strengths and limitations of correlation-based signal processing methods are discussed, higher-order statistics and spectra are presented with emphasis on the bispectrum and trispectrum, and the applications of higher-order spectra in signal processing are reviewed.
Abstract: The strengths and limitations of correlation-based signal processing methods are discussed. The definitions, properties, and computation of higher-order statistics and spectra, with emphasis on the bispectrum and trispectrum, are presented. Parametric and nonparametric expressions for polyspectra of linear and nonlinear processes are described. The applications of higher-order spectra in signal processing are discussed.

931 citations
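A hedged sketch of the direct (nonparametric) bispectrum estimate alluded to above, assuming simple segment averaging; normalization and windowing conventions vary across the literature, so treat this as illustrative only.

```python
import numpy as np

def bispectrum_direct(x, nfft=128):
    """Direct bispectrum estimate by segment averaging:
    B(f1, f2) = mean over segments of X(f1) X(f2) conj(X(f1 + f2))."""
    segs = len(x) // nfft
    i = np.arange(nfft)
    B = np.zeros((nfft, nfft), dtype=complex)
    for k in range(segs):
        seg = x[k * nfft:(k + 1) * nfft]
        X = np.fft.fft(seg - seg.mean())
        # index arithmetic mod nfft realizes f1 + f2 on the FFT grid
        B += X[i][:, None] * X[i][None, :] * np.conj(X[(i[:, None] + i[None, :]) % nfft])
    return B / segs

# Three cosines with phase-coupled frequencies (0.25 = 0.10 + 0.15
# cycles/sample) give a bispectral peak; Gaussian noise would not.
t = np.arange(4096)
x = np.cos(0.2 * np.pi * t) + np.cos(0.3 * np.pi * t) + 0.5 * np.cos(0.5 * np.pi * t)
print(np.abs(bispectrum_direct(x)).max())
```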


Journal ArticleDOI
01 Sep 1993-Ecology
TL;DR: This paper attempts to introduce some distribution-free and robust techniques to ecologists and to offer a critical appraisal of the potential advantages and drawbacks of these methods.
Abstract: After making a case for the prevalence of nonnormality, this paper attempts to introduce some distribution-free and robust techniques to ecologists and to offer a critical appraisal of the potential advantages and drawbacks of these methods. The techniques presented fall into two distinct categories, methods based on ranks and "computer-intensive" techniques. Distribution-free rank tests have features that can be recommended. They free the practitioner from concern about the underlying distribution and are very robust to outliers. If the distribution underlying the observations is other than normal, rank tests tend to be more efficient than their parametric counterparts. The absence, in computing packages, of rank procedures for complex designs may, however, severely limit their use for ecological data. An entire body of novel distribution-free methods has been developed in parallel with the increasing capacities of today's computers to process large quantities of data. These techniques either reshuffle or resample a data set (i.e., sample with replacement) in order to perform their analyses. The former we shall refer to as "permutation" or "randomization" methods and the latter as "bootstrap" techniques. These computer-intensive methods provide new alternatives for the problem of a small and/or unbalanced data set, and they may be the solution for parameter estimation when the sampling distribution cannot be derived analytically. Caution must be exercised in the interpretation of these estimates because confidence limits may be too small.

462 citations
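As a concrete instance of the "reshuffling" methods described above, a minimal two-sample randomization test (illustrative only; the data are invented):

```python
import numpy as np

def permutation_test(a, b, n_perm=9999, seed=0):
    """Two-sample randomization test on the difference of means.
    No distributional assumptions: under H0 the group labels are exchangeable."""
    rng = np.random.default_rng(seed)
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if abs(perm[:len(a)].mean() - perm[len(a):].mean()) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)   # add-one rule keeps p > 0

# Example on skewed (lognormal) samples, where a t-test's normality
# assumption would be doubtful:
rng = np.random.default_rng(1)
print(permutation_test(rng.lognormal(0, 1, 20), rng.lognormal(0.5, 1, 20)))
```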


Journal ArticleDOI
TL;DR: In this paper, a new class of models for nonlinear time series analysis is proposed, and a modeling procedure for building such a model is suggested, which makes use of ideas from both parametric and nonparametric statistics.
Abstract: In this article we propose a new class of models for nonlinear time series analysis, investigate properties of the proposed model, and suggest a modeling procedure for building such a model. The proposed modeling procedure makes use of ideas from both parametric and nonparametric statistics. A consistency result is given to support the procedure. For illustration we apply the proposed model and procedure to several data sets and show that the resulting models substantially improve postsample multi-step ahead forecasts over other models.

459 citations


Proceedings ArticleDOI
01 Jun 1993
TL;DR: This work addresses the more realistic and more ambitious problem of deriving symbolic constraints on the timing properties required of real-time systems by introducing parametric timed automata whose transitions are constrained with parametric timing requirements.
Abstract: Traditional approaches to the algorithmic verification of real-time systems are limited to checking program correctness with respect to concrete timing properties (e.g., "message delivery within 10 milliseconds"). We address the more realistic and more ambitious problem of deriving symbolic constraints on the timing properties required of real-time systems (e.g., "message delivery within the time it takes to execute two assignment statements"). To model this problem, we introduce parametric timed automata: finite-state machines whose transitions are constrained with parametric timing requirements. The emptiness question for parametric timed automata is central to the verification problem. On the negative side, we show that in general this question is undecidable. On the positive side, we provide algorithms for checking the emptiness of restricted classes of parametric timed automata. The practical relevance of these classes is illustrated with several verification examples. There remains a gap between the automata classes for which we know that emptiness is decidable and undecidable, respectively, and this gap is related to various hard and open problems of logic and automata theory.

417 citations


Journal ArticleDOI
TL;DR: Early-stage learner data are presented that are not compatible with a strong view of parametric transfer; an alternative weak transfer view is proposed in which lexical and functional projections and their headedness transfer, but morphology-driven values of features like the strength of agreement do not.
Abstract: White (e.g., 1990/1991; 1992a) argued that grammatical representations in second-language development can be understood in terms of parametric values that are transferred from the learner's native language. Schwartz (1993a; 1993b) and Schwartz and Sprouse (1994) push this view of transfer to its logical limit: The initial state of L2 acquisition is determined in its entirety by the parametric values of the native language. This article presents early-stage learner data that are not compatible with this strong view of parametric transfer. An alternative proposal is made, a weak transfer view in which lexical and functional projections transfer, and the headedness of those projections transfers, but morphology-driven values of features like the strength of agreement do not transfer. This idea is checked against learner data and is also evaluated for the extent to which it follows in a principled fashion from linguistic theory.

272 citations


Journal ArticleDOI
TL;DR: In this article, the authors present a new paradigm using experimental mathematics (computer simulation) to examine the claims made in the levels of measurement controversy, and argue that the approach advocated is linked closely to representational theory.
Abstract: The notion that nonparametric methods are required as a replacement of parametric statistical methods when the scale of measurement in a research study does not achieve a certain level was discussed in light of recent developments in representational measurement theory. A new approach to examining the problem via computer simulation was introduced. Some of the beliefs that have been widely held by psychologists for several decades were examined by means of a computer simulation study that mimicked measurement of an underlying empirical structure and performed two-sample Student t-tests on the resulting sample data. It was concluded that there is no need to replace parametric statistical tests by nonparametric methods when the scale of measurement is ordinal and not interval.

Stevens' (1946) classic paper on the theory of scales of measurement triggered one of the longest standing debates in behavioural science methodology. The debate -- referred to as the levels of measurement controversy, or measurement-statistics debate -- is over the use of parametric and nonparametric statistics and its relation to levels of measurement. Stevens (1946; 1951; 1959; 1968), Siegel (1956), and most recently Siegel and Castellan (1988) and Conover (1980) argue that parametric statistics should be restricted to data of interval scale or higher. Furthermore, nonparametric statistics should be used on data of ordinal scale. Of course, since each scale of measurement has all of the properties of the weaker measurement, statistical methods requiring only a weaker scale may be used with the stronger scales. A detailed historical review linking Stevens' work on scales of measurement to the acceptance of psychology as a science, and a pedagogical presentation of fundamental axiomatic (i.e., representational) measurement can be found in Zumbo and Zimmerman (1991).

Many modes of argumentation can be seen in the debate about levels of measurement and statistics. This paper focusses almost exclusively on an empirical form of rhetoric using experimental mathematics (Ripley, 1987). The term experimental mathematics comes from mathematical physics. It is loosely defined as the mimicking of the rules of a model of some kind via random processes. In the methodological literature this is often referred to as monte carlo simulation. However, for the purpose of this paper, the terms experimental mathematics or computer simulation are preferred to monte carlo because the latter is typically referred to when examining the robustness of a test in relation to particular statistical assumptions. Measurement level is not an assumption of the parametric statistical model (see Zumbo & Zimmerman, 1991 for a discussion of this issue) and to call the method used herein "monte carlo" would imply otherwise. The term experimental mathematics emphasizes the modelling aspect of the present approach to the debate.

The purpose of this paper is to present a new paradigm using experimental mathematics to examine the claims made in the levels of measurement controversy. As Michell (1986) demonstrated, the concern over levels of measurement is inextricably tied to the differing notions of measurement and scaling. Michell further argued that fundamental axiomatic measurement or representational theory (see, for example, Narens & Luce, 1986) is the only measurement theory which implies a relation between measurement scales and statistics. Therefore, the approach advocated in this paper is linked closely to representational theory. The novelty of this approach, to the authors' knowledge, is in the use of experimental mathematics to mimic representational measurement. Before describing the methodology used in this paper, we will briefly review its motivation.

Admissible Transformations

Representational theory began in the late 1950s with Scott and Suppes (1958) and later with Suppes and Zinnes (1963), Pfanzagl (1968), and Krantz, Luce, Suppes & Tversky (1971). …

247 citations
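The simulation paradigm can be sketched in a few lines: draw two samples from one latent distribution, apply an order-preserving (ordinal-admissible) transformation, and check whether the t-test's Type I error rate degrades. This is an illustrative reconstruction, not the authors' code; sample sizes and the transformations are invented.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def type_i_error(transform, n=30, trials=2000, alpha=0.05):
    """Draw two samples from the SAME latent normal distribution, apply an
    order-preserving transformation (admissible for an ordinal scale), and
    record how often the two-sample t-test on the transformed scores rejects."""
    rejections = 0
    for _ in range(trials):
        a = transform(rng.normal(size=n))
        b = transform(rng.normal(size=n))
        if stats.ttest_ind(a, b).pvalue < alpha:
            rejections += 1
    return rejections / trials

# Identity (interval scale) vs. a strictly monotone distortion (ordinal scale):
print(type_i_error(lambda z: z))          # close to the nominal 0.05
print(type_i_error(lambda z: np.exp(z)))  # stays near 0.05, the paper's point
```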


Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this article, an adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented, and the overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.
Abstract: An adaptive control design procedure for a class of nonlinear systems with both parametric uncertainty and unknown nonlinearities is presented. The unknown nonlinearities lie within some "bounding functions" which are assumed to be partially known. The key assumption is that the uncertain terms satisfy a "triangularity condition." As illustrated by examples, the proposed design procedure expands the class of nonlinear systems for which global adaptive stabilization methods can be applied. The overall adaptive scheme is shown to guarantee global uniform ultimate boundedness.

170 citations


Journal ArticleDOI
TL;DR: It is shown that some of the methods used for computing intersections of algebraic surfaces with piecewise rational polynomial parametric surface patches also apply to the general parametric surface-intersection problem.

Abstract: Techniques for computing intersections of algebraic surfaces with piecewise rational polynomial parametric surface patches and intersections of two piecewise rational polynomial parametric surface patches are discussed. The techniques are classified using four categories (lattice evolution methods, marching methods, subdivision methods, and analytic methods) and their principal features are discussed. It is shown that some of these methods also apply to the general parametric surface-intersection problem.

156 citations


Journal ArticleDOI
01 Dec 1993-Metrika
TL;DR: In this article, the authors show that the bootstrap approximation holds with probability one for several selected parametric families of distribution functions, and a simulation study is included which demonstrates the validity of this approximation.

Abstract: Let ℱ={Fθ} be a parametric family of distribution functions, and denote with Fn the empirical df of an iid sample. Goodness-of-fit tests of a composite hypothesis (contained in ℱ) are usually based on the so-called estimated empirical process. Typically, they are not distribution-free. In such a situation the bootstrap offers a useful alternative. It is the purpose of this paper to show that this approximation holds with probability one. A simulation study is included which demonstrates the validity of the bootstrap for several selected parametric families.

155 citations
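A minimal sketch of the parametric bootstrap for a goodness-of-fit test with estimated parameters, here specialized to the normal family (the paper's setting is more general; sample sizes and replication counts are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def ks_estimated(sample):
    """KS statistic against the normal family with parameters estimated from
    the same sample, so the classical KS null distribution does not apply."""
    return stats.kstest(sample, 'norm',
                        args=(sample.mean(), sample.std(ddof=1))).statistic

def parametric_bootstrap_pvalue(sample, n_boot=2000):
    """Approximate the null law of the estimated empirical process by
    resampling from the FITTED member of the parametric family."""
    d_obs = ks_estimated(sample)
    mu, sigma, n = sample.mean(), sample.std(ddof=1), len(sample)
    d_boot = [ks_estimated(rng.normal(mu, sigma, n)) for _ in range(n_boot)]
    return np.mean(np.array(d_boot) >= d_obs)

print(parametric_bootstrap_pvalue(rng.normal(3, 2, 100)))  # large p under H0
```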


Journal ArticleDOI
TL;DR: In this paper, a modification of the G-O model is proposed; the new model is applied to real software failure data and compared with the G-O and Jelinski-Moranda models.
Abstract: A stochastic model (G-O) for the software failure phenomenon based on a nonhomogeneous Poisson process (NHPP) was suggested by Goel and Okumoto (1979). This model has been widely used but some important work remains undone on estimating the parameters. The authors present a necessary and sufficient condition for the likelihood estimates to be finite, positive, and unique. A modification of the G-O model is suggested. The performance measures and parametric inferences of the new model are discussed. The results of the new model are applied to real software failure data and compared with G-O and Jelinski-Moranda models.
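A hedged sketch of maximum-likelihood fitting for the G-O model with mean value function m(t) = a(1 - exp(-b t)); the failure times below are invented, and the paper's contribution is precisely the condition under which such estimates are finite, positive, and unique.

```python
import numpy as np
from scipy.optimize import minimize

def go_neg_loglik(params, times, T):
    """Negative log-likelihood of the Goel-Okumoto NHPP with intensity
    lambda(t) = a b exp(-b t), for failure times observed on [0, T]:
    loglik = sum_i log lambda(t_i) - m(T)."""
    a, b = params
    if a <= 0 or b <= 0:
        return np.inf
    return -(np.sum(np.log(a * b) - b * times) - a * (1 - np.exp(-b * T)))

# Toy failure-time data (hours); a real study would use the project's log.
times = np.array([10., 32., 49., 80., 120., 151., 200., 290., 410., 600.])
T = 700.0
fit = minimize(go_neg_loglik, x0=[20.0, 0.005], args=(times, T),
               method='Nelder-Mead')
a_hat, b_hat = fit.x
print(f"expected total faults a = {a_hat:.1f}, detection rate b = {b_hat:.5f}")
```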

Journal ArticleDOI
TL;DR: Conditions are given under which satisfaction of a performance specification is ascertained for a family of systems by checking only a finite subset of the extreme members of this family, in the spirit of Kharitonov's Theorem.
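For the interval-polynomial case this extreme-point idea is Kharitonov's theorem: Hurwitz stability of the whole family follows from four vertex polynomials. A small illustrative check (numerical root-finding stands in for a formal Hurwitz test; the example polynomial is invented):

```python
import numpy as np

def kharitonov_polynomials(lower, upper):
    """The four Kharitonov vertex polynomials of an interval polynomial
    sum_i c_i s^i with c_i in [lower[i], upper[i]] (index i = power of s).
    Coefficient patterns, read from s^0 upward, repeat with period 4."""
    patterns = [(0, 0, 1, 1), (1, 1, 0, 0), (1, 0, 0, 1), (0, 1, 1, 0)]
    return [np.array([upper[i] if pat[i % 4] else lower[i]
                      for i in range(len(lower))]) for pat in patterns]

def interval_poly_stable(lower, upper):
    """Kharitonov's theorem: the whole interval family is Hurwitz-stable
    iff these four extreme polynomials are."""
    for c in kharitonov_polynomials(lower, upper):
        roots = np.roots(c[::-1])        # np.roots wants highest power first
        if np.any(roots.real >= 0):
            return False
    return True

# s^3 + [2,3] s^2 + [4,5] s + [1,2]: coefficients listed from s^0 to s^3
print(interval_poly_stable([1, 4, 2, 1], [2, 5, 3, 1]))   # True
```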

Journal ArticleDOI
Oliver Linton
TL;DR: In this paper, the authors construct adaptive estimators of the identifiable parameters in a regression model when the errors follow a stationary parametric ARCH(P) process, but do not assume a functional form for the conditional density of the errors, but require that it be symmetric about zero.
Abstract: We construct efficient estimators of the identifiable parameters in a regression model when the errors follow a stationary parametric ARCH(P) process. We do not assume a functional form for the conditional density of the errors, but do require that it be symmetric about zero. The estimators of the mean parameters are adaptive in the sense of Bickel [2]. The ARCH parameters are not jointly identifiable with the error density. We consider a reparameterization of the variance process and show that the identifiable parameters of this process are adaptively estimable.

Journal ArticleDOI
TL;DR: In this paper, the performance bounds for a class of adaptive and non-adaptive systems were derived and compared, showing that adaptation improves the overall performance without the undesirable effects of high gain.

Journal ArticleDOI
TL;DR: In this paper, the authors present a preliminary study of a systematic methodology to account robustly for parametric uncertainties in the original system model, which is based on combining sliding control ideas with the recursive construction of a closed-loop Lyapunov function.
Abstract: To make input-output feedback linearization a practical and systematic design methodology for single-input nonlinear systems, two problems need to be addressed. One is to handle systematically the difficulties associated with the internal dynamics or zero-dynamics when the relative degree is less than the system order. The other is to account for the effect of model uncertainties in the successive differentiations of the output of interest. While the first problem has recently received considerable attention, the second has been largely unexplored. This paper represents a preliminary study of a systematic methodology to account robustly for parametric uncertainties in the original system model. The approach is based on combining sliding control ideas with the recursive construction of a closed-loop Lyapunov function, and is illustrated with a simple example.

Proceedings ArticleDOI
01 Sep 1993
TL;DR: An efficient and robust algorithm is presented for finding points of collision between time-dependent parametric and implicit surfaces; it uses a new interval approach for constrained minimization to detect collisions and a tangency condition to reduce the dimensionality of the search space.
Abstract: We present an efficient and robust algorithm for finding points of collision between time-dependent parametric and implicit surfaces. The algorithm detects simultaneous collisions at multiple points of contact. When the regions of contact form curves or surfaces, it returns a finite set of points uniformly distributed over each contact region. Collisions can be computed for a very general class of surfaces: those for which inclusion functions can be constructed. Included in this set are the familiar kinds of surfaces and time behaviors encountered in computer graphics. We use a new interval approach for constrained minimization to detect collisions, and a tangency condition to reduce the dimensionality of the search space. These approaches make interval methods practical for multi-point collisions between complex surfaces. An interval Newton method based on the solution of the interval linear equation is used to speed convergence to the collision time and location. This method is more efficient than the Krawczyk‐Moore iteration used previously in computer graphics.
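The core interval mechanism can be sketched compactly: an inclusion function bounds the gap between the moving objects over a whole time box, so boxes whose bound excludes zero are pruned. A toy point-vs-point version follows; the paper's interval Newton acceleration and tangency condition are omitted, and the two trajectories are invented.

```python
class Interval:
    """Minimal interval arithmetic: every operation returns an enclosure."""
    def __init__(self, lo, hi=None):
        self.lo, self.hi = lo, (lo if hi is None else hi)
    def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
    def __sub__(self, o): return Interval(self.lo - o.hi, self.hi - o.lo)
    def __mul__(self, o):
        p = (self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi)
        return Interval(min(p), max(p))

ONE, HALF = Interval(1.0), Interval(0.5)

def gap(t):
    """Inclusion function for the squared distance between two moving points
    p(t) = (t, t^2) and q(t) = (1 - t, t/2); they collide at t = 0.5."""
    dx = t - (ONE - t)
    dy = t*t - HALF*t
    return dx*dx + dy*dy

def first_collision(lo, hi, tol=1e-6):
    """Prune any time box whose gap enclosure excludes zero; bisect the rest."""
    if gap(Interval(lo, hi)).lo > 0:
        return None                      # provably collision-free on [lo, hi]
    if hi - lo < tol:
        return 0.5 * (lo + hi)
    mid = 0.5 * (lo + hi)
    left = first_collision(lo, mid, tol)
    return left if left is not None else first_collision(mid, hi, tol)

print(first_collision(0.0, 1.0))         # ~0.5: both points at (0.5, 0.25)
```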

Patent
23 Mar 1993
TL;DR: In this article, a two-level indexing scheme for accessing pixel data in a texture map is used to identify shading values for pixels in a display window; a screen look-up table and a parametric look-up table enable real-time rotation of a textured sphere and panning of the view into a spherical environment map.
Abstract: A method and apparatus for rendering textured spheres and spherical environment maps. The method of the present invention provides for real time rotation of a textured sphere and panning of the view into a spherical environment map, along multiple axes without the need for special rendering hardware. A two-level indexing scheme for accessing pixel data in a texture map, is used to identify shading values for pixels in a display window. The two-level indexing scheme is comprised of a screen look-up table and a parametric look-up table. The screen look-up table has the dimensions of the display window, whereas the parametric look-up table has the dimensions of the parametric spherical environment map (wherein the pixel addresses are rotated 90 degrees from the origin). The method for the present invention is comprised primarily of the steps of: providing a parametric spherical environment map of the image to be viewed, generating a screen look-up table comprised of look-up addresses, generating a parametric look-up table comprised of index values into the parametric spherical environment map, and for each look-up address in the screen look-up table, mapping to an entry in the parametric look-up table, retrieving the value in the entry, and using the value to retrieve pixel values from the parametric spherical environment map. Rotation or movement of the view being seen is accomplished by adding offsets to the look-up address and/or the index values.
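A rough sketch of the two-level look-up pipeline described in the claims. The dimensions, the toy screen mapping, and the random environment map are schematic stand-ins; the patent builds its tables from the actual viewing transform.

```python
import numpy as np

H, W = 64, 64                      # display window (hypothetical size)
EH, EW = 128, 256                  # parametric spherical environment map

env_map = np.random.randint(0, 256, (EH, EW), dtype=np.uint8)  # shading values

# Level 2: parametric LUT, same dimensions as the environment map; each entry
# is a (row, col) index into the map. Rotating the sphere only requires
# adding offsets to these indices, never re-rendering the sphere.
param_lut = np.stack(np.meshgrid(np.arange(EH), np.arange(EW),
                                 indexing='ij'), -1)

# Level 1: screen LUT, same dimensions as the display window; each entry
# addresses a cell of the parametric LUT (toy mapping for illustration).
screen_lut = np.stack(np.meshgrid(np.linspace(0, EH - 1, H).astype(int),
                                  np.linspace(0, EW - 1, W).astype(int),
                                  indexing='ij'), -1)

def render(rot_rows=0, rot_cols=0):
    """Per pixel: screen LUT -> parametric LUT (+ rotation offsets) -> map."""
    r = param_lut[screen_lut[..., 0], screen_lut[..., 1]]
    rows = (r[..., 0] + rot_rows) % EH
    cols = (r[..., 1] + rot_cols) % EW
    return env_map[rows, cols]

frame = render(rot_cols=40)        # pan the view 40 texels around the sphere
print(frame.shape)                 # (64, 64)
```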

Journal Article
TL;DR: The robustness and power of four commonly used MANOVA statistics (the Pillai-Bartlett trace (V), Wilks' Lambda (W), Hotelling's trace (T), Roy's greatest root (R)) are reviewed and their behaviours demonstrated by Monte Carlo simulations using a one-way fixed effects design.

Abstract: The robustness and power of four commonly used MANOVA statistics (the Pillai-Bartlett trace (V), Wilks' Lambda (W), Hotelling's trace (T), Roy's greatest root (R)) are reviewed and their behaviours demonstrated by Monte Carlo simulations using a one-way fixed effects design in which assumptions of the model are violated in a systematic way under different conditions of sample size (n), number of dependent variables (p), number of groups (k), and balance in the data. The behaviour of Box's M statistic, which tests for covariance heterogeneity, is also examined. The behaviours suggest several recommendations for multivariate design and for application of MANOVA in marine biology and ecology, viz. (1) Sample sizes should be equal. (2) p, and to a lesser extent k, should be kept to a minimum insofar as the hypothesis permits. (3) Box's M statistic is rejected as a test of homogeneity of covariance matrices. A suitable alternative is Hawkins' (1981) statistic that tests for heteroscedasticity and non-normality simultaneously. (4) To improve agreement with assumptions, and thus reliability of tests, reduction of p (e.g. by PCA or MDS methods) and/or transforming data to stabilise variances should be attempted. (5) The V statistic is recommended for general use but the others are more appropriate in particular circumstances. For Type I errors, the violation of the assumption of homoscedasticity is more serious than is nonnormality and the V statistic is clearly the most robust to variance heterogeneity in terms of controlling level. Kurtosis reduces the power of all statistics considerably. Loss of power is dramatic if assumptions of normality and homoscedasticity are violated simultaneously. (6) The preferred approach to multiple comparison procedures after MANOVA is to use Bonferroni-type methods in which the total number of comparisons is limited to the fewest possible. If all possible comparisons are required an alternative is to use the V statistic in the overall test and the R statistic in a follow-up simultaneous test procedure. We recommend following a significant MANOVA result with a canonical discriminant analysis. (7) Classical parametric MANOVA should not be used with data in which high levels of variance heterogeneity cannot be rectified or in which sample sizes are unequal and assumptions are not satisfied. We discuss briefly alternatives to parametric MANOVA.
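For reference, all four statistics are simple functions of the eigenvalues of H E^{-1}, where H and E are the between- and within-group SSCP matrices. A minimal sketch with simulated groups (group means and sizes invented):

```python
import numpy as np

def manova_statistics(groups):
    """The four classical MANOVA statistics from the eigenvalues of H E^{-1}."""
    all_data = np.vstack(groups)
    grand = all_data.mean(axis=0)
    H = sum(len(g) * np.outer(g.mean(0) - grand, g.mean(0) - grand)
            for g in groups)
    E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)
    lam = np.linalg.eigvals(np.linalg.solve(E, H)).real
    lam = lam[lam > 1e-12]
    return {"Pillai V": np.sum(lam / (1 + lam)),
            "Wilks W": np.prod(1 / (1 + lam)),
            "Hotelling T": np.sum(lam),
            # conventions differ: some packages report lam_max/(1+lam_max)
            "Roy R": lam.max()}

rng = np.random.default_rng(0)
groups = [rng.normal(m, 1, (20, 3)) for m in (0.0, 0.3, 0.6)]  # k=3, p=3, n=20
print(manova_statistics(groups))
```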

Journal ArticleDOI
TL;DR: In this paper, an optimal algorithm was obtained from energy analysis and verified by experiment on a scale model for changing the tension as a positive use of parametric excitation, which was shown to be optimal.
Abstract: Changing the tension as a positive use of parametric excitation is studied. An optimal algorithm is obtained from energy analysis and verified by experiment on a scale model

Journal ArticleDOI
TL;DR: In contrast to parametric estimation, nonparametric regression estimation requires attention to the choice of model variables but permits greatly reduced attention to the functional form of the regression model.

Abstract: Current real estate statistical valuation involves the estimation of parameters within a posited specification. Such parametric estimation requires judgment concerning model (1) variables; and (2) functional form. In contrast, nonparametric regression estimation requires attention to (1) but permits greatly reduced attention to (2). Parametric estimators functionally model the parameters and variables affecting E(y|x) while nonparametric estimators directly model pdf(y, x) and hence E(y|x).
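A minimal sketch contrasting the two approaches on toy valuation data: a posited linear specification versus a Nadaraya-Watson estimate of E(y|x), which follows directly from kernel estimates of pdf(y, x). Variable names, the "true" curve, and the bandwidth are invented.

```python
import numpy as np

def nw_regression(x_train, y_train, x0, h):
    """Nadaraya-Watson estimator: kernel density ideas give
    E(y|x) as a locally weighted average, with no posited functional form."""
    w = np.exp(-0.5 * ((x0[:, None] - x_train[None, :]) / h) ** 2)
    return (w @ y_train) / w.sum(axis=1)

# Toy hedonic example: price vs. floor area with a nonlinear true curve.
rng = np.random.default_rng(0)
area = rng.uniform(50, 250, 300)
price = 40 + 1.5 * area - 0.003 * area**2 + rng.normal(0, 8, 300)

grid = np.linspace(60, 240, 10)
parametric = np.polyval(np.polyfit(area, price, 1), grid)  # posited linear form
nonparametric = nw_regression(area, price, grid, h=15.0)
print(np.c_[grid, parametric, nonparametric])
```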

Proceedings ArticleDOI
02 Jun 1993
TL;DR: In this article, a general approach for modeling structured real-valued parametric perturbations is presented, based on a decomposition of perturbation into linear fractional transformations (LFTs).
Abstract: In this paper a general approach for modelling structured real-valued parametric perturbations is presented. It is based on a decomposition of perturbations into linear fractional transformations (LFTs), and is applicable to rational multi-dimensional (ND) polynomial perturbations of entries in state-space models. Model reduction is used to reduce the size of the uncertainty structure. The procedure will be applied for the uncertainty modelling of an aircraft model depending on altitude and velocity (flight envelope).
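The LFT decomposition itself is compact to state. A sketch of the upper LFT and the standard pull-out of a single real gain perturbation follows; this scalar example is illustrative only, not the paper's aircraft model.

```python
import numpy as np

def upper_lft(M11, M12, M21, M22, Delta):
    """Upper linear fractional transformation
    F_u(M, Delta) = M22 + M21 Delta (I - M11 Delta)^{-1} M12,
    the standard way a real parametric perturbation Delta enters a model."""
    n = M11.shape[0]
    return M22 + M21 @ Delta @ np.linalg.solve(np.eye(n) - M11 @ Delta, M12)

# A scalar gain k = k0 (1 + w * delta), |delta| <= 1, pulled out as an LFT:
k0, w = 2.0, 0.3
M11, M12 = np.zeros((1, 1)), np.array([[k0]])
M21, M22 = np.array([[w]]), np.array([[k0]])
for delta in (-1.0, 0.0, 1.0):
    print(upper_lft(M11, M12, M21, M22, np.array([[delta]])))  # k0(1+w*delta)
```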

Journal ArticleDOI
TL;DR: In this article, a new interpretation of the stochastic differential calculus allows first a full explanation of the presence of the Wong-Zakai or Stratonovich correction terms in Ito's differential rule, and then this rule is extended to take into account the non-normality of the input.

Abstract: In this paper, nonlinear systems subjected to external and parametric non-normal delta-correlated stochastic excitations are treated. A new interpretation of the stochastic differential calculus allows first a full explanation of the presence of the Wong-Zakai or Stratonovich correction terms in Ito's differential rule. Then this rule is extended to take into account the non-normality of the input. The validity of this formulation is confirmed by experimental results obtained by Monte Carlo simulations.
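For orientation, the Gaussian-input version of the correction discussed above can be stated as follows (scalar case, a standard textbook form; the paper's extension adds further terms for non-normal delta-correlated input):

```latex
% Stratonovich -> Ito for dX = f(X)\,dt + g(X) \circ dW (Gaussian white noise):
% the Wong-Zakai / Stratonovich correction is half the noise-induced drift.
\[
  g(X) \circ dW \;=\; g(X)\, dW \;+\; \tfrac{1}{2}\, g(X)\, g'(X)\, dt ,
\]
\[
  dX = \Bigl( f(X) + \tfrac{1}{2}\, g(X)\, g'(X) \Bigr)\, dt + g(X)\, dW
  \qquad \text{(Ito form).}
\]
```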

Journal ArticleDOI
TL;DR: In this article, the authors provide a general framework for constructing specification tests for parametric and semiparametric models, and develop new specification tests using the general framework, which apply in time series and cross-sectional contexts.

Journal ArticleDOI
TL;DR: Two methodologies for fitting radiotracer models on a pixel-wise basis to PET data are considered and the results obtained by mixture analysis are found to have substantially improved mean square error performance characteristics.
Abstract: Two methodologies for fitting radiotracer models on a pixel-wise basis to PET data are considered. The first method does parameter optimization for each pixel considered as a separate region of interest. The second method also does pixel-wise analysis but incorporates an additive mixture representation to account for heterogeneity effects induced by instrumental and biological blurring. Several numerical and statistical techniques including cluster analysis, constrained nonlinear optimization, subsampling, and spatial filtering are used to implement the methods. A computer simulation experiment, modeling a standard F-18 deoxyglucose (FDG) imaging protocol using the UW-PET scanner, is conducted to evaluate the statistical performance of the parametric images obtained by the two methods. The results obtained by mixture analysis are found to have substantially improved mean square error performance characteristics. The total computation time for mixture analysis is on the order of 0.7 s/pixel on a 16 MIPS workstation. This results in a total computation time of about 1 h per slice for a typical FDG brain study.
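The first method (each pixel as its own region of interest) is straightforward to sketch. The toy single-exponential model below stands in for the actual FDG compartment model and measured input function, and the frame times and image size are invented.

```python
import numpy as np
from scipy.optimize import curve_fit

def uptake(t, K, k2):
    """Toy one-compartment impulse response K * exp(-k2 t)."""
    return K * np.exp(-k2 * t)

def fit_parametric_image(frames, t):
    """Fit the model independently in every pixel, producing a parametric
    image of the uptake parameter K."""
    H, W = frames.shape[1:]
    K_img = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            try:
                (K, k2), _ = curve_fit(uptake, t, frames[:, i, j], p0=(1.0, 0.1))
                K_img[i, j] = K
            except RuntimeError:        # noisy pixels may fail to converge
                K_img[i, j] = np.nan
    return K_img

rng = np.random.default_rng(0)
t = np.linspace(1, 60, 12)                              # frame mid-times (min)
true = uptake(t, 2.0, 0.05)[:, None, None]
frames = true + rng.normal(0, 0.05, (12, 8, 8))         # 8x8 toy image
print(np.nanmean(fit_parametric_image(frames, t)))      # close to 2.0
```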

Journal ArticleDOI
TL;DR: In this article, a method based on mutual information analysis is proposed for the selection of optimum band subsets from remotely sensed data for visual interpretation or automatic processing, a task which will assume growing importance with the availability of highly multispectral data from future sensors.

Abstract: The selection of optimum band subsets from remotely sensed data for visual interpretation or automatic processing is an interesting task which will assume growing importance with the availability of highly multispectral data from future sensors. The usual methods for the mathematical evaluation of the best combination of channels are based on parametric statistical procedures such as eigenvector analysis and calculation of separability measurements. These procedures are either difficult to interpret or computationally expensive, and are not suited to evaluating the probabilistic information which can be exploited by non-parametric processes. For this kind of application, a method based on mutual information analysis is put forward in the present paper. Mutual information analysis is a statistical procedure which, using the concept of system entropy, is capable of mathematically evaluating the probabilistic information common to different variables. When applied to remotely sensed scenes superimposed on ground references related to some theme (for example vegetation types), information analysis can indicate which channels express more information about that theme. The method was applied to some Landsat TM scenes from three Italian areas about which ground references were available. Some mixed parametric-non-parametric classifications were then performed to test the results of the analyses. From these tests the subsets identified were demonstrated to be significantly more informative than standard subsets, which testifies to the efficiency of the procedure.
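A minimal sketch of the band-ranking step: estimate I(band; class) from a joint histogram of quantized pixel values and ground-reference labels. The data are synthetic and the bin count is an arbitrary choice, so treat this as illustrative of the entropy bookkeeping only.

```python
import numpy as np

def mutual_information(band, labels, n_bins=32):
    """I(band; class) = H(band) + H(class) - H(band, class), estimated from
    the joint histogram of quantized pixel values and class labels."""
    b = np.digitize(band, np.histogram_bin_edges(band, n_bins)[1:-1])
    joint = np.zeros((n_bins, labels.max() + 1))
    for v, c in zip(b, labels):
        joint[v, c] += 1
    p = joint / joint.sum()
    def H(q):
        q = q[q > 0]
        return -np.sum(q * np.log2(q))
    return H(p.sum(1)) + H(p.sum(0)) - H(p.ravel())

# Rank hypothetical bands by information shared with a reference class map:
rng = np.random.default_rng(0)
labels = rng.integers(0, 3, 5000)              # 3 ground-reference classes
bands = {f"band{k}": labels * k + rng.normal(0, 3, 5000) for k in range(1, 4)}
for name, band in bands.items():              # MI rises with signal strength
    print(name, round(mutual_information(band, labels), 3))
```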

Journal ArticleDOI
Stephen Vida
TL;DR: A computer program is presented that uses non-parametric formulations, which do not require normally distributed test scores, to calculate the sensitivity and specificity of a test; it also calculates the AUC, its standard error, and 95% confidence limits, and allows the comparison of tests on independent and correlated samples.
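The nonparametric AUC and its standard error can be sketched directly from the Mann-Whitney statistic. The Hanley-McNeil (1982) variance used below is one common choice and not necessarily this program's exact formulation; the score samples are invented.

```python
import numpy as np

def auc_with_se(neg, pos):
    """Nonparametric (Mann-Whitney) AUC with the Hanley-McNeil standard
    error -- no normality assumption on the test scores."""
    n_neg, n_pos = len(neg), len(pos)
    gt = (pos[:, None] > neg[None, :]).sum()    # P(pos > neg), ...
    eq = (pos[:, None] == neg[None, :]).sum()   # ... ties counted half
    auc = (gt + 0.5 * eq) / (n_neg * n_pos)
    q1, q2 = auc / (2 - auc), 2 * auc**2 / (1 + auc)
    var = (auc * (1 - auc) + (n_pos - 1) * (q1 - auc**2)
           + (n_neg - 1) * (q2 - auc**2)) / (n_neg * n_pos)
    return auc, np.sqrt(var)

rng = np.random.default_rng(0)
auc, se = auc_with_se(rng.normal(0, 1, 50), rng.normal(1, 1, 50))
print(f"AUC = {auc:.3f} +/- {1.96 * se:.3f} (95% CI half-width)")
```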

Journal ArticleDOI
01 Jan 1993-Analyst
TL;DR: This review summarizes critically the approaches available to the treatment of suspect outlying results in sets of experimental measurements, including the use of parametric methods such as the Dixon test and the application of robust statistical methods, which down-weight the importance of outliers.
Abstract: This review summarizes critically the approaches available to the treatment of suspect outlying results in sets of experimental measurements. It covers the use of parametric methods such as the Dixon test (with comments on the problems of multiple outliers); the application of non-parametric statistics based on the median to by-pass outlier problems; and the application of robust statistical methods, which down-weight the importance of outliers. The extension of these approaches to outliers occurring in regression problems is also surveyed.
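Two of the surveyed approaches side by side, as a sketch: Dixon's Q for a single suspect value, and a robust median/MAD rule that simply down-weights extremes. The measurements are invented and the critical value is quoted from standard Dixon tables.

```python
import numpy as np

def dixon_q(values):
    """Dixon's Q (r10 form, for small n): gap of the more suspect extreme
    over the whole range; compare against a tabulated critical value."""
    x = np.sort(values)
    return max(x[1] - x[0], x[-1] - x[-2]) / (x[-1] - x[0])

def mad_outliers(values, k=3.0):
    """Robust alternative: flag points more than k robust-SDs from the
    median, using MAD * 1.4826 as the robust scale estimate."""
    med = np.median(values)
    mad = 1.4826 * np.median(np.abs(values - med))
    return np.abs(values - med) > k * mad

x = np.array([10.12, 10.15, 10.14, 10.09, 10.11, 10.79])
print(dixon_q(x))        # ~0.91 > Q_crit(n=6, 95%) ~0.625 -> suspect outlier
print(mad_outliers(x))   # flags only the last measurement
```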

Journal ArticleDOI
TL;DR: In this paper, a methodology is developed that enables designers to estimate the cost of a product at the conceptual stage of design with minimal component information and little or no experience of the intended production process.
Abstract: A methodology has been developed that enables designers to estimate the cost of a product at the conceptual stage of design with minimal component information and little or no experience of the intended production process. The methodology is based on a parametric approach in which the information available at the conceptual design stage is directed through a set of data converters to give values of parameters which have been identified as the cost drivers. Parametric equations, derived from a large database of component information, are then used to calculate a cost estimate. This methodology has been used to develop a computer-based cost estimating system for a range of injection-moulded components. It is capable of estimating costs to within 20% and is the first stage in the development of a conceptual cost comparison system.