
Showing papers on "Parametric statistics published in 1995"


Journal ArticleDOI
TL;DR: The approach is predicated on an extension of the general linear model that allows for correlations between error terms due to physiological noise or correlations that ensue after temporal smoothing, and uses the effective degrees of freedom associated with the error term.

2,647 citations


Journal ArticleDOI
TL;DR: In this article, the authors proposed a class of inverse probability of censoring weighted estimators for the parameters of models for the dependence of the mean of a vector of correlated response variables on the vector of explanatory variables in the presence of missing response data.
Abstract: We propose a class of inverse probability of censoring weighted estimators for the parameters of models for the dependence of the mean of a vector of correlated response variables on a vector of explanatory variables in the presence of missing response data. The proposed estimators do not require full specification of the likelihood. They can be viewed as an extension of generalized estimating equations estimators that allow for the data to be missing at random but not missing completely at random. These estimators can be used to correct for dependent censoring and nonrandom noncompliance in randomized clinical trials studying the effect of a treatment on the evolution over time of the mean of a response variable. The likelihood-based parametric G-computation algorithm estimator may also be used to attempt to correct for dependent censoring and nonrandom noncompliance. But because of possible model misspecification, the parametric G-computation algorithm estimator, in contrast with the proposed w...

1,510 citations
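
A minimal sketch of the inverse-weighting idea in a deliberately simplified setting: a scalar mean, responses missing at random given a covariate, and the observation ("censoring") probability fitted by logistic regression. Variable names and the data-generating process are illustrative, not from the paper.

```python
# Sketch: inverse probability of censoring weighting (IPCW) for a mean.
# Assumes monotone missingness that is MAR given X; illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 1))
Y = 2.0 + 1.5 * X[:, 0] + rng.normal(size=n)      # full-data outcome
p_obs = 1 / (1 + np.exp(-(0.5 + 1.0 * X[:, 0])))  # P(observed | X)
R = rng.uniform(size=n) < p_obs                   # observation indicator

# Step 1: model the probability of being observed.
pi_hat = LogisticRegression().fit(X, R).predict_proba(X)[:, 1]

# Step 2: solve the weighted estimating equation
#   sum_i R_i / pi_hat_i * (Y_i - mu) = 0   for mu.
w = R / pi_hat
mu_ipcw = np.sum(w * np.where(R, Y, 0.0)) / np.sum(w)

print("complete-case mean:", Y[R].mean())  # biased under MAR
print("IPCW mean:         ", mu_ipcw)      # approximately unbiased
```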


Book
01 Jan 1995
TL;DR: This theory allows us to determine whether a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set; it nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist.
Abstract: From the Book: PREFACE: The subject of robust control began to receive worldwide attention in the late 1970s, when it was found that Linear Quadratic Optimal Control, state feedback through observers, and other prevailing methods of control system synthesis, such as Adaptive Control, lacked any guarantees of stability or performance under uncertainty. Thus the issue of robustness, prominent in Classical Control, took rebirth in a modern setting. H∞ optimal control was proposed as a first approach to the solution of the robustness problem. This elegant approach, and its offshoots, have been intensely developed over the past 12 years or so, and constitute one of the triumphs of control theory. The H∞ theory provides a precise formulation and solution of the problem of synthesizing an output feedback compensator that minimizes the H∞ norm of a prescribed system transfer function. Many robust stabilization and performance problems can be cast in this formulation, and there now exists an effective and fairly complete theory for control system synthesis subject to perturbations in the H∞ framework. The theory delivers an "optimal" feedback compensator for the system. Before such a compensator can be deployed in a physical (real-world) system, it is natural to test its capabilities with regard to additional design criteria not covered by the optimality criterion used. In particular, the performance of any controller under real parameter uncertainty, as well as mixed parametric-unstructured uncertainty, is an issue which is vital to most control systems. However, H∞ optimal theory is incapable of providing a direct and nonconservative answer to this important question. The problem of robustness under parametric uncertainty received a shot in the arm in the form of Kharitonov's Theorem for interval polynomials, which appeared in the mid-1980s in the Western literature. It was originally published in 1978 in a Russian journal. With this surprising theorem the entire field of robust control under real parametric uncertainty came alive, and it can be said that Kharitonov's Theorem is the most important occurrence in this area after the development of the Routh-Hurwitz criterion. A significant development following Kharitonov's Theorem was the calculation, in 1985, by Soh, Berger and Dabke of the radius of the stability ball in the space of coefficients of a polynomial. From the mid-1980s rapid and spectacular developments have taken place in this field. As a result we now have a rigorous, coherent, and comprehensive theory to deal directly and effectively with real parameter uncertainty in control systems. This theory nicely complements the H∞ optimal theories as well as Classical Control and considerably extends the range of possibilities available to the control specialist. The main accomplishment of this theory is that it allows us to determine whether a linear time-invariant control system containing several uncertain real parameters remains stable as the parameters vary over a set. This question can be answered in a precise manner, that is, nonconservatively, when the parameters appear linearly or multilinearly in the characteristic polynomial. In developing the solution to the above problem, several important control system design problems are answered. These are 1) the calculation of the real parametric stability margin, 2) the determination of stability and stability margins under mixed parametric and unstructured (norm-bounded or nonlinear) uncertainty, 3) the evaluation of the worst case or robust performance, measured in the H∞ norm, over a prescribed parametric uncertainty set, and 4) the extension of classical design techniques involving Nyquist, Nichols and Bode plots and root-loci to systems containing several uncertain real parameters. These results are made possible because the theory developed provides built-in solutions to several extremal problems. It identifies a priori the critical subset of the uncertain parameter set over which stability or performance will be lost, and thereby reduces to a very small set, usually points or lines, the parameters over which robustness must be verified. This built-in optimality of the parametric theory is its main strong point, particularly from the point of view of applications. It allows us, for the first time, to devise methods to effectively carry out robust stability and performance analysis of control systems under parametric and mixed uncertainty. To balance this rather strong claim, we point out that a significant deficiency of control theory at the present time is the lack of nonconservative synthesis methods to achieve robustness under parameter uncertainty. Nevertheless, even here the sharp analysis results obtained in the parametric framework can be exploited in conjunction with synthesis techniques developed in the H∞ framework to develop design techniques to partially cover this drawback. The objective of this book is to describe the parametric theory in a self-contained manner. The book is suitable for use as a graduate textbook and also for self-study. The entire subject matter of the book is developed from the single fundamental fact that the roots of a polynomial depend continuously on its coefficients. This fact is the basis of the Boundary Crossing Theorem developed in Chapter 1 and is repeatedly used throughout the book. Surprisingly enough, this simple idea, used systematically, is sufficient to derive even the most mathematically sophisticated results. This economy and transparency of concepts is another strength of the parametric theory. It makes the results accessible and appealing to a wide audience and allows for a unified and systematic development of the subject. The contents of the book can therefore be covered in one semester despite the size of the book. In accordance with our focus, we do not develop any results in H∞ theory, although some results from H∞ theory are used in the chapter on synthesis. In Chapter 0, which serves as an extension of this preface, we rapidly overview some basic aspects of control systems, uncertainty models and robustness issues. We also give a brief historical sketch of Control Theory, and then describe the contents of the rest of the chapters in some detail. The theory developed in the book is presented in mathematical language. The results described in these theorems and lemmas, however, are completely oriented towards control systems applications, and in fact lead to effective algorithms and graphical displays for design and analysis. We have throughout included examples to illustrate the theory, and indeed the reader who wants to avoid reading the proofs can understand the significance and utility of the results by reading through the examples. A MATLAB based software package, the Robust Parametric Control ToolBox, has been developed by the authors in collaboration with Samir Ahmad, our graduate student. It implements most of the theory presented in the book. In fact, all the examples and figures in this book have been generated by this ToolBox. We gratefully acknowledge Samir's dedication and help in the preparation of the numerical examples given in the book. A demonstration diskette illustrating this package is included with this book. SPB would like to thank R. Kishan Baheti, Director of the Engineering Systems Program at the National Science Foundation, for supporting his research program. LHK thanks Harry Frisch and Frank Bauer of NASA Goddard Space Flight Center and Jer-Nan Juang of NASA Langley Research Center for their support of his research, and Mike Busby, Director of the Center of Excellence in Information Systems at Tennessee State University, for his encouragement. It is a pleasure to express our gratitude to several colleagues and coworkers in this field. We thank Antonio Vicino, Alberto Tesi, Mario Milanese, Jo W. Howze, Aniruddha Datta, Mohammed Mansour, J. Boyd Pearson, Peter Dorato, Yakov Z. Tsypkin, Boris T. Polyak, Vladimir L. Kharitonov, Kris Hollot, Juergen Ackermann, Diedrich Hinrichsen, Tony Pritchard, Dragoslav D. Siljak, Charles A. Desoer, Soura Dasgupta, Suhada Jayasuriya, Rama K. Yedavalli, Bob R. Barmish, Mohammed Dahleh, and Biswa N. Datta for their support, enthusiasm, ideas and friendship. In particular we thank Nirmal K. Bose, John A. Fleming and Bahram Shafai for thoroughly reviewing the manuscript and suggesting many improvements. We are indeed honored that Academician Ya. Z. Tsypkin, one of the leading control theorists of the world, has written a Foreword to our book. Professor Tsypkin's pioneering contributions range from the stability analysis of time-delay systems in the 1940s and learning control systems in the 1960s to robust control under parameter uncertainty in the 1980s and 1990s. His observations on the contents of the book and this subject, based on this wide perspective, are of great value. The first draft of this book was written in 1989. We have added new results of our own and others as we became aware of them. However, because of the rapid pace of developments of the subject and the sheer volume of literature that has been published in the last few years, it is possible that we have inadvertently omitted some results and references worthy of inclusion. We apologize in advance to any authors or readers who feel that we have not given credit where it is due. S. P. Bhattacharyya, H. Chapellat, L. H. Keel. December 5, 1994

1,052 citations
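
Kharitonov's Theorem, which the preface highlights, is easy to state computationally: an interval polynomial (with a fixed-sign leading coefficient) is robustly Hurwitz stable if and only if four extreme "Kharitonov polynomials" are. A sketch, with the stability of each polynomial checked numerically through its roots rather than a Routh-Hurwitz table; the example bounds are invented.

```python
# Sketch of Kharitonov's interval-polynomial test; coefficient bounds are
# given in ascending order of powers of s.
import numpy as np

def kharitonov_polys(lo, hi):
    """Return the four Kharitonov polynomials (ascending coefficients)."""
    patterns = ["LLUU", "UULL", "LUUL", "ULLU"]   # repeat with period 4
    return [np.array([lo[i] if pat[i % 4] == "L" else hi[i]
                      for i in range(len(lo))])
            for pat in patterns]

def is_hurwitz(coeffs_ascending):
    roots = np.roots(coeffs_ascending[::-1])  # np.roots expects descending
    return bool(np.all(roots.real < 0))

# Interval polynomial: s^3 + [2,3] s^2 + [4,5] s + [1,2]
lo = np.array([1.0, 4.0, 2.0, 1.0])  # lower bounds on a0..a3
hi = np.array([2.0, 5.0, 3.0, 1.0])  # upper bounds on a0..a3

stable = all(is_hurwitz(k) for k in kharitonov_polys(lo, hi))
print("robustly Hurwitz stable:", stable)
```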


Journal ArticleDOI
TL;DR: In this paper, the authors proposed new methods for testing and correcting for sample selection bias in panel data models, which allow the unobserved effects in both the regression and selection equations to be correlated with the observed variables; the error distribution in the regression equation is unspecified; arbitrary serial dependence in the idiosyncratic errors of both equations is allowed; and all idiosyncratic errors can be heterogeneously distributed.

917 citations


Posted Content
TL;DR: In this paper, the authors test parametric models by comparing their implied parametric density to the same density estimated nonparametrically, and do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals.
Abstract: Different continuous-time models for interest rates coexist in the literature. We test parametric models by comparing their implied parametric density to the same density estimated nonparametrically. We do not replace the continuous-time model by discrete approximations, even though the data are recorded at discrete intervals. The principal source of rejection of existing models is the strong nonlinearity of the drift. Around its mean, where the drift is essentially zero, the spot rate behaves like a random walk. The drift then mean-reverts strongly when far away from the mean. The volatility is higher when away from the mean.

830 citations
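
The testing idea compares a model-implied density to a nonparametric estimate. A sketch under strong simplifications: an Ornstein-Uhlenbeck (Vasicek-type) stand-in whose stationary density is Gaussian, diffusion parameters treated as known, and a plain integrated squared difference instead of the paper's studentized statistic. Everything here is illustrative.

```python
# Sketch: parametric implied density vs. kernel density for a short rate.
import numpy as np
from scipy.stats import norm, gaussian_kde

rng = np.random.default_rng(1)
# Discretely sampled Ornstein-Uhlenbeck path (the "parametric model")
kappa, theta, sigma, dt, n = 0.8, 0.05, 0.02, 1 / 12, 3000
r = np.empty(n); r[0] = theta
for t in range(1, n):
    r[t] = r[t-1] + kappa * (theta - r[t-1]) * dt \
           + sigma * np.sqrt(dt) * rng.normal()

grid = np.linspace(r.min(), r.max(), 400)
# Implied stationary density of the OU model: N(theta, sigma^2 / (2 kappa)),
# with the mean estimated and (kappa, sigma) assumed known for simplicity.
f_param = norm.pdf(grid, loc=r.mean(), scale=np.sqrt(sigma**2 / (2 * kappa)))
f_nonpar = gaussian_kde(r)(grid)          # nonparametric estimate

# Distance measure: integrated squared difference (large => reject model)
M = np.sum((f_param - f_nonpar) ** 2) * (grid[1] - grid[0])
print("integrated squared density difference:", M)
```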


BookDOI
TL;DR: A treatment of predictive (observable) inference, covering non-Bayesian predictive approaches alongside Bayesian prediction, with applications including process control and optimization and multivariate normal prediction problems.
Abstract: The author's research has been directed towards inference involving observables rather than parameters. In this book, he brings together his views on predictive or observable inference and its advantages over parametric inference. While the book discusses a variety of approaches to prediction including those based on parametric, nonparametric, and nonstochastic statistical models, it is devoted mainly to predictive applications of the Bayesian approach. It not only substitutes predictive analyses for parametric analyses, but it also presents predictive analyses that have no real parametric analogues. It demonstrates that predictive inference can be a critical component of even strict parametric inference when dealing with interim analyses. This approach to predictive inference will be of interest to statisticians, psychologists, econometricians, and sociologists.

750 citations


Journal ArticleDOI
TL;DR: Two robust estimators are developed in a multi-resolution framework, and numerical results obtained on complex sequences validate the approach.

673 citations


Journal ArticleDOI
TL;DR: Both parametric and semi-parametric estimators of the association parameter are efficient at independence, and the parameter estimates in the margins have high efficiency and are robust to misspecification of dependency structures.
Abstract: We investigate two-stage parametric and two-stage semi-parametric estimation procedures for the association parameter in copula models for bivariate survival data where censoring in either or both components is allowed. We derive asymptotic properties of the estimators and compare their performance by simulations. Both parametric and semi-parametric estimators of the association parameter are efficient at independence, and the parameter estimates in the margins have high efficiency and are robust to misspecification of dependency structures. In addition, we propose a consistent variance estimator for the semi-parametric estimator of the association parameter. We apply the proposed methods to an AIDS data set for illustration.

648 citations
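
A sketch of the two-stage idea for complete (uncensored) data with a Clayton copula and exponential margins: the margins are estimated first, then the association parameter is estimated by maximizing the copula likelihood with the fitted margins plugged in. All distributional choices and values are illustrative, not the paper's.

```python
# Sketch: two-stage (pseudo-likelihood) estimation of a Clayton copula.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, theta_true = 1000, 2.0

# Simulate Clayton(theta) uniforms by the conditional-inverse method
u = rng.uniform(size=n); w = rng.uniform(size=n)
v = ((w ** (-theta_true / (1 + theta_true)) - 1) * u ** -theta_true
     + 1) ** (-1 / theta_true)
t1 = -np.log(1 - u) / 1.0        # exponential margin, rate 1.0
t2 = -np.log(1 - v) / 0.5        # exponential margin, rate 0.5

# Stage 1: estimate each margin separately (exponential MLE: rate = 1/mean)
rate1, rate2 = 1 / t1.mean(), 1 / t2.mean()
u_hat = 1 - np.exp(-rate1 * t1)
v_hat = 1 - np.exp(-rate2 * t2)

# Stage 2: maximize the Clayton copula log-likelihood in theta
def neg_loglik(theta):
    s = u_hat ** -theta + v_hat ** -theta - 1
    return -np.sum(np.log(1 + theta)
                   - (theta + 1) * (np.log(u_hat) + np.log(v_hat))
                   - (2 + 1 / theta) * np.log(s))

res = minimize_scalar(neg_loglik, bounds=(0.05, 10), method="bounded")
print("two-stage estimate of theta:", res.x)   # close to theta_true
```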


Posted Content
TL;DR: In this paper, a general time inhomogeneous multiple spell model is presented, which contains a variety of useful models as special cases, and conditions under which access to multiple spell data aids in solving the sensitivity problem are discussed.
Abstract: This paper considers the formulation and estimation of continuous time social science duration models. The focus is on new issues that arise in applying statistical models developed in biostatistics to analyze economic data and formulate economic models. Both single spell and multiple spell models are discussed. In addition, we present a general time inhomogeneous multiple spell model which contains a variety of useful models as special cases. Four distinctive features of social science duration analysis are emphasized: (1) Because of the limited size of samples available in economics and because of an abundance of candidate observed explanatory variables and plausible omitted explanatory variables, standard nonparametric procedures used in biostatistics are of limited value in econometric duration analysis. It is necessary to control for observed and unobserved explanatory variables to avoid biasing inference about underlying duration distributions. Controlling for such variables raises many new problems not discussed in the available literature. (2) The environments in which economic agents operate are not the time homogeneous laboratory environments assumed in biostatistics and reliability theory. Ad hoc methods for controlling for time inhomogeneity produce badly biased estimates. (3) Because the data available to economists are not obtained from the controlled experimental settings available to biologists, doing econometric duration analysis requires accounting for the effect of sampling plans on the distributions of sampled spells. (4) Econometric duration models that incorporate the restrictions produced by economic theory only rarely can be represented by the models used by biostatisticians. The estimation of structural econometric duration models raises new statistical and computational issues. Because of (1), it is necessary to parameterize econometric duration models to control for both observed and unobserved explanatory variables. Economic theory only provides qualitative guidance on the matter of selecting a functional form for a conditional hazard, and it offers no guidance at all on the matter of choosing a distribution of unobservables. This is unfortunate because empirical estimates obtained from econometric duration models are very sensitive to assumptions made about the functional forms of these model ingredients. In response to this sensitivity we present criteria for inferring qualitative properties of conditional hazards and distributions of unobservables from raw duration data sampled in time homogeneous environments, i.e. from unconditional duration distributions. No parametric structure need be assumed to implement these procedures. We also note that current econometric practice overparameterizes duration models. Given a functional form for a conditional hazard determined up to a finite number of parameters, it is possible to consistently estimate the distribution of unobservables nonparametrically. We report on the performance of such an estimator and show that it helps to solve the sensitivity problem. We demonstrate that in principle it is possible to identify both the conditional hazard and the distribution of unobservables without assuming parametric functional forms for either. Tradeoffs in assumptions required to secure such model identification are discussed.
Although under certain conditions a fully nonparametric model can be identified, the development of a consistent fully nonparametric estimator remains to be done. We also discuss conditions under which access to multiple spell data aids in solving the sensitivity problem. A superficially attractive conditional likelihood approach produces inconsistent estimators, but the practical significance of this inconsistency is not yet known. Conditional inference schemes for eliminating unobservables from multiple spell duration models that are based on sufficient or ancillary statistics require unacceptably strong assumptions about the functional forms of conditional hazards and so are not robust. Contrary to recent claims, they offer no general solution to the model sensitivity problem. The problem of controlling for time inhomogeneous environments (Point (2)) remains to be solved. Failure to control for time inhomogeneity produces serious biases in estimated duration models. Controlling for time inhomogeneity creates a potential identification problem. For single spell data it is impossible to separate the effect of duration dependence from the effect of time inhomogeneity by a fully nonparametric procedure. Although it is intuitively obvious that access to multiple spell data aids in the solution of this identification problem, the development of precise conditions under which this is possible is a topic left for future research. We demonstrate how sampling schemes distort the functional forms of sample duration distributions away from the population duration distributions that are the usual object of econometric interest (Point (3)). Inference based on misspecified duration distributions is in general biased. New formulae for the densities of commonly used duration measures are produced for duration models with unobservables in time inhomogeneous environments. We show how access to spells that begin after the origin date of a sample aids in solving econometric problems created by the sampling schemes that are used to generate economic duration data. We also discuss new issues that arise in estimating duration models explicitly derived from economic theory (Point (4)). For a prototypical search unemployment model we discuss and resolve new identification problems that arise in attempting to recover structural economic parameters. We also consider nonstandard statistical problems that arise in estimating structural models that are not treated in the literature. Imposing or testing the restrictions implied by economic theory requires duration models that do not appear in the received literature and often requires numerical solution of implicit equations derived from optimizing theory.

500 citations
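
A minimal example of the kind of parametric conditional-hazard model whose sensitivity the paper analyzes: a Weibull proportional-hazards likelihood with right censoring and one observed covariate, and no unobserved heterogeneity. The setup and values are illustrative only.

```python
# Sketch: Weibull proportional-hazards MLE with right censoring.
# Hazard h(t|x) = a * exp(b*x) * t^(a-1); survival S(t|x) = exp(-exp(b*x) t^a).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(3)
n = 3000
x = rng.normal(size=n)
a_true, b_true = 1.5, 0.7
scale = np.exp(-b_true * x / a_true)            # gives hazard multiplier e^{bx}
t = scale * rng.weibull(a_true, size=n)         # latent durations
c = rng.exponential(2.0, size=n)                # censoring times
y, d = np.minimum(t, c), (t <= c).astype(float) # observed duration, event flag

def neg_loglik(params):
    log_a, b = params
    a = np.exp(log_a)                           # keep the shape positive
    lam = np.exp(b * x)
    # log-likelihood: d * log h(y) + log S(y)
    ll = d * (np.log(a) + b * x + (a - 1) * np.log(y)) - lam * y ** a
    return -np.sum(ll)

res = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
print("shape a_hat:", np.exp(res.x[0]), " effect b_hat:", res.x[1])
```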


Book
15 Dec 1995
TL;DR: In this paper, the authors present a geography statistics text covering univariate and spatial descriptive statistics, parametric and nonparametric inference, correlation and regression analysis, and time series models.
Abstract: 1. Statistics and Geography
I. Descriptive Statistics
2. Univariate Descriptive Statistics
3. Descriptive Statistics for Spatial Distributions
II. Inferential Statistics
5. Elementary Probability Theory
6. Random Variables and Probability Distributions
7. Sampling
8. Parametric Statistical Inference: Estimation
9. Parametric Statistical Inference: Hypothesis Testing
10. Parametric Statistical Inference: Two Sample Tests
11. Nonparametric Statistics
III. Statistical Relationships Between Two Variables
12. Correlation Analysis
13. Introduction to Regression Analysis
14. Inferential Aspects of Regression Analysis
15. Time Series Models
IV. Modern Methods of Analysis
16. Exploratory Data Analysis
17. Bootstrapping and Related Computer Intensive Methods

493 citations


Book
01 Jan 1995
TL;DR: A reliability text covering coherent systems analysis, parametric lifetime models, and nonparametric methods and model adequacy.
Abstract: 1. Introduction
2. Coherent Systems Analysis
3. Lifetime Distributions
4. Parametric Lifetime Models
5. Specialized Models
6. Repairable Systems
7. Lifetime Data Analysis
8. Parametric Estimation for Models without Covariates
9. Parametric Estimation for Models with Covariates
10. Nonparametric Methods and Model Adequacy

Journal ArticleDOI
TL;DR: An algorithm is proposed that estimates the parameters of this model using a generalization of the MUSIC algorithm, and it is shown that the threshold signal-to-noise ratio required for resolving two closely spaced distributed sources is considerably smaller for the new method.
Abstract: Most array processing algorithms are based on the assumption that the signals are generated by point sources. This is a mathematical constraint that is not satisfied in many applications. In this paper, we consider situations where the sources are distributed in space with a parametric angular cross-correlation kernel. We propose an algorithm that estimates the parameters of this model using a generalization of the MUSIC algorithm. The method involves maximizing a cost function that depends on a matrix array manifold and the noise eigenvectors. We study two particular cases: coherent and incoherent spatial source distributions. The spatial correlation function for a uniformly distributed signal is derived. From this, we find the array gain and show that (in contrast to point sources) it does not increase linearly with the number of sources. We compare our method to the conventional (point source) MUSIC algorithm. The simulation studies show that the new method outperforms the MUSIC algorithm by reducing the estimation bias and the standard deviation for scenarios with distributed sources. It is also shown that the threshold signal-to-noise ratio required for resolving two closely spaced distributed sources is considerably smaller for the new method.
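
For reference, a sketch of the conventional point-source MUSIC baseline that the paper generalizes, for a uniform linear array; the distributed-source extension replaces the point-source steering vector with a parametric angular-distribution model. The array geometry and scenario below are invented.

```python
# Sketch: point-source MUSIC pseudospectrum on a uniform linear array.
import numpy as np

rng = np.random.default_rng(4)
M, d, snapshots = 8, 0.5, 200            # sensors, spacing (wavelengths), samples
doas = np.deg2rad([-10.0, 15.0])         # true directions of arrival

def steering(theta):
    return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

A = np.column_stack([steering(t) for t in doas])
S = rng.normal(size=(2, snapshots)) + 1j * rng.normal(size=(2, snapshots))
N = 0.1 * (rng.normal(size=(M, snapshots)) + 1j * rng.normal(size=(M, snapshots)))
X = A @ S + N                            # array snapshots

R = X @ X.conj().T / snapshots           # sample covariance
eigval, eigvec = np.linalg.eigh(R)       # eigenvalues in ascending order
En = eigvec[:, : M - 2]                  # noise subspace (2 sources assumed)

grid = np.deg2rad(np.linspace(-90, 90, 721))
pseudo = np.array([1 / np.linalg.norm(En.conj().T @ steering(t)) ** 2
                   for t in grid])

# Pick the two largest local maxima of the pseudospectrum
loc = np.where((pseudo[1:-1] > pseudo[:-2]) & (pseudo[1:-1] > pseudo[2:]))[0] + 1
top2 = loc[np.argsort(pseudo[loc])[-2:]]
print("estimated DOAs (deg):", np.sort(np.rad2deg(grid[top2])))
```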

Journal ArticleDOI
TL;DR: In this article, a simple method of approximating the deflection path of end-loaded, large-deflection cantilever beams is presented, where the path coordinates are parameterized in a single parameter called the pseudo-rigid-body angle.
Abstract: Geometric nonlinearities often complicate the analysis of systems containing large-deflection members. The time and resources required to develop closed-form or numerical solutions have inspired the development of a simple method of approximating the deflection path of end-loaded, large-deflection cantilever beams. The path coordinates are parameterized in a single parameter called the pseudo-rigid-body angle. The approximations are accurate to within 0.5 percent of the closed-form elliptic integral solutions. A physical model is associated with the method, and may be used to simplify complex problems. The method proves to be particularly useful in the analysis and design of compliant mechanisms.
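
A sketch of the pseudo-rigid-body tip-path approximation: the flexible beam of length L is modeled as a rigid link of length γL pinned at (1−γ)L and swept through the pseudo-rigid-body angle. The value γ ≈ 0.85 used below is a commonly quoted characteristic-radius factor; the paper determines its precise load-dependent value.

```python
# Sketch: pseudo-rigid-body approximation of a cantilever's tip path,
# x/L = 1 - gamma*(1 - cos(Theta)),  y/L = gamma*sin(Theta).
import numpy as np

L, gamma = 1.0, 0.85                        # beam length, characteristic radius
Theta = np.linspace(0, np.deg2rad(60), 7)   # pseudo-rigid-body angle

x_tip = L * (1 - gamma * (1 - np.cos(Theta)))
y_tip = L * gamma * np.sin(Theta)

for th, xc, yc in zip(np.rad2deg(Theta), x_tip, y_tip):
    print(f"Theta = {th:5.1f} deg  ->  tip at ({xc:.4f}, {yc:.4f})")
```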

Journal ArticleDOI
TL;DR: This work generalizes the McCullagh and Nelder approach to a latent class framework and demonstrates how this approach handles many of the existing latent class regression procedures as special cases, as well as a host of other parametric specifications in the exponential family heretofore not mentioned in the latent class literature.
Abstract: A mixture model approach is developed that simultaneously estimates the posterior membership probabilities of observations to a number of unobservable groups or latent classes, and the parameters of a generalized linear model which relates the observations, distributed according to some member of the exponential family, to a set of specified covariates within each class. We demonstrate how this approach handles many of the existing latent class regression procedures as special cases, as well as a host of other parametric specifications in the exponential family heretofore not mentioned in the latent class literature. As such we generalize the McCullagh and Nelder approach to a latent class framework. The parameters are estimated using maximum likelihood, and an EM algorithm for estimation is provided. A Monte Carlo study of the performance of the algorithm for several distributions is provided, and the model is illustrated in two empirical applications.
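
A sketch of the EM scheme for one member of this family, a two-class mixture of Poisson regressions: the E-step computes posterior class memberships, and the M-step refits each GLM with those memberships as case weights. Data, starting values, and iteration counts are illustrative.

```python
# Sketch: EM for a two-class mixture of Poisson regressions (latent-class GLM).
import numpy as np
from scipy.stats import poisson
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(5)
n = 2000
X = rng.normal(size=(n, 1))
z = rng.uniform(size=n) < 0.4                   # latent class labels
beta = np.where(z, 1.0, -0.5)                   # class-specific slopes
y = rng.poisson(np.exp(0.2 + beta * X[:, 0]))

# Symmetry-breaking start: fit each class on one half of the outcome range.
order = np.argsort(y)
models = [PoissonRegressor(alpha=0.0).fit(X[idx], y[idx])
          for idx in (order[: n // 2], order[n // 2:])]
pi = np.array([0.5, 0.5])

for _ in range(100):
    mu = np.column_stack([m.predict(X) for m in models])  # class means
    like = poisson.pmf(y[:, None], mu) * pi               # E-step
    resp = like / like.sum(axis=1, keepdims=True)         # posterior memberships
    pi = resp.mean(axis=0)                                # M-step
    for k in range(2):
        models[k].fit(X, y, sample_weight=resp[:, k])

print("mixing proportions:", pi)
print("class slopes:", [float(m.coef_[0]) for m in models])
```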

Journal ArticleDOI
TL;DR: In this article, a new approach to scattering center extraction based on a scattering model derived from the geometrical theory of diffraction (GTD) is presented. But the model is better matched to the physical scattering process than the damped exponential model and conventional Fourier analysis.
Abstract: This paper presents a new approach to scattering center extraction based on a scattering model derived from the geometrical theory of diffraction (GTD). For stepped frequency measurements at high frequencies, the model is better matched to the physical scattering process than the damped exponential model and conventional Fourier analysis. In addition to determining downrange distance, energy, and polarization, the GTD-based model extracts frequency dependent scattering information, allowing partial identification of scattering center geometry. We derive expressions for the Cramér-Rao bound of this model; using these expressions, we analyze the behavior of the new model as a function of scatterer separation, bandwidth, number of data points, and noise level. Additionally, a maximum likelihood algorithm is developed for estimation of the model parameters. We present estimation results using data measured on a compact range to validate the proposed modeling procedure.
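
A sketch of the GTD-based signal model over a stepped-frequency sweep: each scattering center contributes an amplitude, a half-integer frequency exponent α encoding its geometry, and a range delay. The crude IFFT range profile below is only a stand-in for the paper's maximum likelihood fitting, and all parameters are invented.

```python
# Sketch: GTD scattering-center model,
#   E(f) = sum_k A_k * (1j*f/fc)**alpha_k * exp(-1j*4*pi*f*r_k/c),
# with alpha in {-1, -1/2, 0, 1/2, 1} (corner ... sphere/point ... flat plate).
import numpy as np

c = 3e8
f = np.linspace(9e9, 11e9, 201)          # 2 GHz stepped-frequency sweep
fc = f.mean()

centers = [(1.0, 0.0, 0.30),             # (amplitude, alpha, range in m)
           (0.6, 1.0, 0.55)]
E = sum(A * (1j * f / fc) ** alpha * np.exp(-1j * 4 * np.pi * f * r / c)
        for A, alpha, r in centers)

# Coarse range profile via zero-padded inverse DFT
N = 4096
profile = np.abs(np.fft.ifft(E, N))
df = f[1] - f[0]
ranges = np.arange(N) * c / (2 * N * df)

pk = np.where((profile[1:-1] > profile[:-2]) &
              (profile[1:-1] > profile[2:]))[0] + 1
top = pk[np.argsort(profile[pk])[-2:]]
print("estimated ranges (m):", np.sort(ranges[top]))  # near 0.30 and 0.55
```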

Journal ArticleDOI
TL;DR: An adaptive control scheme that guarantees asymptotic satisfaction of the control objective in the presence of bounded parametric uncertainties is presented, together with a reduction of the order of the relevant error equation.

Journal ArticleDOI
07 Feb 1995
TL;DR: In this paper, an analytical simulation model is developed for predicting and optimizing the thermal performance of bidirectional fin heat sinks in a partially confined configuration; sample calculations are carried out and parametric plots are provided, illustrating the effect of various design parameters on the performance of a heat sink.
Abstract: An analytical simulation model has been developed for predicting and optimizing the thermal performance of bidirectional fin heat sinks in a partially confined configuration. Sample calculations are carried out, and parametric plots are provided, illustrating the effect of various design parameters on the performance of a heat sink. It is observed that the actual convection flow velocity through fins is usually unknown to designers, yet is one of the parameters that greatly affect the overall thermal performance of a heat sink. In this paper, a simple method of determining the fin flow velocity is presented, and the development of the overall thermal model is described. An overview of different types of heat sinks and associated design parameters is provided. Optimization of heat-sink designs and typical parametric behaviors are discussed based on the sample simulation results.
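
A sketch of the flavor of such a parametric study, using the textbook plate-fin relations (the tanh(mH)/(mH) fin efficiency and a single convective resistance) with an assumed heat transfer coefficient; the paper's model additionally determines the inter-fin flow velocity rather than assuming it. All dimensions and properties are illustrative.

```python
# Sketch: parametric sweep of a plate-fin heat sink's thermal resistance.
import numpy as np

def heat_sink_resistance(n_fins, H, L, W, t, k=200.0, h=25.0):
    """Thermal resistance (K/W) of a plate-fin sink; SI units throughout."""
    gap_area = (W - n_fins * t) * L        # exposed base area between fins
    fin_area = 2 * H * L * n_fins          # both sides of each fin
    m = np.sqrt(2 * h / (k * t))           # fin parameter
    eta = np.tanh(m * H) / (m * H)         # classical fin efficiency
    return 1.0 / (h * (gap_area + eta * fin_area))

# Sweep fin count at a fixed envelope, as in a design plot
for n in (5, 10, 15, 20):
    R = heat_sink_resistance(n_fins=n, H=0.03, L=0.05, W=0.05, t=0.0015)
    print(f"{n:2d} fins -> R_th = {R:.3f} K/W")
```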

Journal ArticleDOI
TL;DR: In this article, a mechanical system exhibiting combined parametric excitation and clearance-type nonlinearity is examined analytically and experimentally in an effort to explain complex behavior that is commonly observed in the steady-state forced response of rotating machines.

Journal ArticleDOI
TL;DR: In this article, the authors propose two consistent one-sided specification tests for parametric regression models, one based on the sample covariance between the residual from the parametric model and the discrepancy between parametric and nonparametric fitted values, and the other based on a difference in sums of squared residuals between the parameterized and non-parametric models, which can be viewed as a test of the joint hypothesis that the true parameters of a series regression model are zero.
Abstract: This paper proposes two consistent one-sided specification tests for parametric regression models, one based on the sample covariance between the residual from the parametric model and the discrepancy between the parametric and nonparametric fitted values; the other based on the difference in sums of squared residuals between the parametric and nonparametric models. We estimate the nonparametric model by series regression. The new test statistics converge in distribution to a unit normal under correct specification and grow to infinity faster than the parametric rate (n^{-1/2}) under misspecification, while avoiding weighting, sample splitting, and non-nested testing procedures used elsewhere in the literature. Asymptotically, our tests can be viewed as a test of the joint hypothesis that the true parameters of a series regression model are zero, where the dependent variable is the residual from the parametric model, and the series terms are functions of the explanatory variables, chosen so as to support nonparametric estimation of a conditional expectation. We specifically consider Fourier series and regression splines, and present a Monte Carlo study of the finite sample performance of the new tests in comparison to consistent tests of Bierens (1990), Eubank and Spiegelman (1990), Jayasuriya (1990), Wooldridge (1992), and Yatchew (1992); the results show the new tests have good power, performing quite well in some situations. We suggest a joint Bonferroni procedure that combines a new test with those of Bierens and Wooldridge to capture the best features of the three approaches.
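
A sketch of the series-regression interpretation described in the abstract: regress the parametric model's residuals on series terms and test whether their coefficients are jointly zero, here with a crude n·R² chi-square statistic rather than the paper's studentized statistics. The data-generating process is invented.

```python
# Sketch: series-based specification test for a linear null model.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
n = 1000
x = rng.uniform(0, 1, n)
y = 1 + 2 * x + 0.5 * np.sin(4 * np.pi * x) + rng.normal(0, 0.3, n)  # nonlinear truth

# Parametric null model: linear regression; keep its residuals.
X0 = np.column_stack([np.ones(n), x])
resid = y - X0 @ np.linalg.lstsq(X0, y, rcond=None)[0]

# Series terms: a small Fourier basis in x (plus the null regressors).
terms = [np.sin(2 * np.pi * k * x) for k in range(1, 4)] \
      + [np.cos(2 * np.pi * k * x) for k in range(1, 4)]
Z = np.column_stack([X0] + terms)
fitted = Z @ np.linalg.lstsq(Z, resid, rcond=None)[0]
R2 = fitted.var() / resid.var()

stat, dof = n * R2, len(terms)
print("n*R^2 =", stat, " p-value =", chi2.sf(stat, dof))  # small p => reject
```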

Journal ArticleDOI
TL;DR: The proposed model is a semi-parametric generalization of the mixture model of Farewell (1982); a logistic regression model is proposed for the incidence part of the model, and a Kaplan-Meier type approach is used to estimate the latency part of the model.
Abstract: A mixture model is an attractive approach for analyzing failure time data in which there are thought to be two groups of subjects, those who could eventually develop the endpoint and those who could not develop the endpoint. The proposed model is a semi-parametric generalization of the mixture model of Farewell (1982). A logistic regression model is proposed for the incidence part of the model, and a Kaplan-Meier type approach is used to estimate the latency part of the model. The estimator arises naturally out of the EM algorithm approach for fitting failure time mixture models as described by Larson and Dinse (1985). The procedure is applied to some experimental data from radiation biology and is evaluated in a Monte Carlo simulation study. The simulation study suggests the semi-parametric procedure is almost as efficient as the correct fully parametric procedure for estimating the regression coefficient in the incidence, but less efficient for estimating the latency distribution.

Journal ArticleDOI
TL;DR: In this article, the relation between categorization and pdf estimation is examined from the perspective of the categorization process, and it is shown that the prototype model and several decision-bound models of categorization are parametric, whereas most exemplar models are nonparametric.

Journal ArticleDOI
TL;DR: The U test is proposed as a non-parametric two-sample test, with the estimated optimal sample size adjusted according to the overdispersion observed in a large historical control and to the relative efficiency of the U test in comparison to the t test and related parametric tests.
Abstract: In genetic toxicology it is important to know whether chemicals should be regarded as clearly hazardous or whether they can be considered sufficiently safe, the latter being the case from the genotoxicologist's view if their genotoxic effects are nil or at least significantly below a predefined minimal effect level. A previously presented statistical decision procedure which allows one to make precisely this distinction is now extended to the question of how optimal experimental sample size can be determined in advance for genotoxicity experiments using the somatic mutation and recombination tests (SMART) of Drosophila. Optimally, the statistical tests should have high power to minimise the chance for statistically inconclusive results. Based on the normal test, the statistical principles are explained, and in an application to the wing spot assay, it is shown how the practitioner can proceed to optimise sample size to achieve numerically satisfactory conditions for statistical testing. The somatic genotoxicity assays of Drosophila are in principle based on somatic spots (mutant clones) that are recovered in variable numbers on individual flies. The underlying frequency distributions are expected to be of the Poisson type. However, some care seems indicated with respect to this latter assumption, because pooling of data over individuals, sexes, and experiments, for example, can (but need not) lead to data which are overdispersed, i.e., the data may show more variability than theoretically expected. It is an undesired effect of overdispersion that in comparisons of pooled totals it can lead to statistical testing which is too liberal, because overall it yields too many seemingly significant results. If individual variability considered alone is not in contradiction with Poisson expectation, however, experimental planning can help to minimise the undesired effects of overdispersion on statistical testing of pooled totals. The rule for the practice is to avoid disproportionate sampling. It is recalled that for optimal power in statistical testing, it is preferable to use equal total numbers of flies in the control and treated series. Statistical tests which are based on Poisson expectations are too liberal if there is overdispersion in the data due to excess individual variability. In this case we propose to use the U test as a non-parametric two-sample test and to adjust the estimated optimal sample size according to (i) the overdispersion observed in a large historical control and (ii) the relative efficiency of the U test in comparison to the t test and related parametric tests.
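
A sketch of the planning-then-testing workflow, assuming a normal-approximation sample-size formula for comparing count means, an overdispersion inflation factor taken from a historical control, and the classical 3/π asymptotic efficiency of the U test relative to the t test. The numbers are illustrative, not the paper's.

```python
# Sketch: sample-size planning for spot counts, then the U test itself.
import numpy as np
from scipy.stats import norm, mannwhitneyu

def flies_per_series(mu_control, mu_treated, alpha=0.05, power=0.95,
                     overdispersion=1.0, are_u_vs_t=3 / np.pi):
    z = norm.ppf(1 - alpha) + norm.ppf(power)     # one-sided test
    # Poisson-variance normal approximation for a difference in mean counts
    n_normal = z**2 * (mu_control + mu_treated) / (mu_control - mu_treated)**2
    return int(np.ceil(n_normal * overdispersion / are_u_vs_t))

n = flies_per_series(mu_control=0.3, mu_treated=0.6, overdispersion=1.4)
print("flies needed per series:", n)

# The two-sample comparison itself, robust to overdispersed counts
rng = np.random.default_rng(7)
control = rng.negative_binomial(3, 3 / (3 + 0.3), size=n)  # mean 0.3
treated = rng.negative_binomial(3, 3 / (3 + 0.6), size=n)  # mean 0.6
print(mannwhitneyu(treated, control, alternative="greater"))
```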

Journal ArticleDOI
TL;DR: In this paper, the performance of both passive and active tuned mass damper (TMD) systems is shown to be readily assessed by parametric studies, which have been the subject of numerous research efforts.

Posted ContentDOI
TL;DR: In this article, the authors examine the advantage of using ranked set sampling for estimating the population mean when the distribution of the data is known, for a specific family of distributions, and find that the ranked set sample provides more information about both μ and σ than a random sample of the same number of observations.
Abstract: Ranked set sampling was introduced by McIntyre (1952, Australian Journal of Agricultural Research, 3, 385–390) as a cost-effective method of selecting data if observations are much more cheaply ranked than measured. He proposed its use for estimating the population mean when the distribution of the data was unknown. In this paper, we examine the advantage, if any, that this method of sampling has if the distribution is known, for a specific family of distributions. Specifically, we consider estimation of μ and σ for the family of random variables with cdf's of the form F((x−μ)/σ). We find that the ranked set sample does provide more information about both μ and σ than a random sample of the same number of observations. We examine both maximum likelihood and best linear unbiased estimation of μ and σ, as well as methods for modifying the ranked set sampling procedure to provide even better estimation.
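
A sketch of ranked set sampling with perfect rankings for a normal member of the F((x−μ)/σ) family, comparing (by Monte Carlo) the variance of the RSS mean with that of a simple random sample using the same number of measured observations.

```python
# Sketch: ranked set sampling (RSS) vs. simple random sampling (SRS).
import numpy as np

rng = np.random.default_rng(8)
m, cycles, reps = 4, 25, 500                 # set size, cycles, MC replications
draw = lambda k: rng.normal(10.0, 2.0, k)    # the F((x-mu)/sigma) member

def rss_mean(draw, m, cycles):
    # One cycle: m sets of m units are ranked, but only the r-th ranked
    # unit of the r-th set is actually measured.
    vals = [np.sort(draw(m))[r] for _ in range(cycles) for r in range(m)]
    return np.mean(vals)

rss_means = [rss_mean(draw, m, cycles) for _ in range(reps)]
srs_means = [draw(m * cycles).mean() for _ in range(reps)]  # same budget

print("mean of RSS estimator:", np.mean(rss_means))
print("Var(RSS mean) / Var(SRS mean):",
      np.var(rss_means) / np.var(srs_means))   # < 1: RSS is more informative
```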

Journal ArticleDOI
TL;DR: In this paper, a sliding mode control method for nonlinear and hysteretic civil engineering structures is presented. But the authors focus on continuous sliding modes that do not have possible chattering effects.
Abstract: Control methods based on the theory of variable structure systems or sliding mode control are presented for applications to seismically excited nonlinear and hysteretic civil engineering structures. These control methods are robust with respect to parametric uncertainties of the structure. The controllers have no adverse effect should the actuator be saturated due to unexpected extreme earthquakes. Emphasis is placed on continuous sliding mode control methods that do not have possible chattering effects. Static output feedback controllers using only the measured information from a limited number of sensors installed at strategic locations are also presented. When a controller is installed in each story unit of a building, a complete compensation of the structural response can be achieved and the response state vector can be reduced to zero. Among the contributions of this paper are the establishment of saturated controllers and controllers for static output feedback. The robustness of the control methods, the applications of the static output controllers, and the control effectiveness in case of actuator saturation are all demonstrated by numerical simulation results. Simulation results indicate that the performance of the sliding mode control methods is remarkable.
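
A sketch of the continuous (boundary-layer) sliding mode idea on a single-degree-of-freedom oscillator, with an unmodeled cubic stiffness standing in for parametric uncertainty; the discontinuous sign(s) is replaced by a saturation function to suppress chattering. Gains and the model are illustrative and far simpler than the paper's structures.

```python
# Sketch: continuous sliding mode control of an uncertain SDOF oscillator.
import numpy as np

def sat(s, phi=0.05):
    return np.clip(s / phi, -1.0, 1.0)      # continuous replacement for sign()

wn, zeta, k3_true = 6.0, 0.02, 40.0         # k3_true is unknown to the controller
lam, K = 4.0, 30.0                          # surface s = v + lam*x, switching gain
dt = 1e-3
t = np.arange(0.0, 10.0, dt)
a_g = 2.0 * np.sin(5.5 * t)                 # ground excitation

x, v = 0.3, 0.0                             # nonzero initial displacement
for k in range(len(t)):
    s = v + lam * x
    # nominal-model cancellation (cubic term assumed 0) + robust term
    u = (2 * zeta * wn * v + wn**2 * x - a_g[k]) - lam * v - K * sat(s)
    acc = -2 * zeta * wn * v - wn**2 * x - k3_true * x**3 + a_g[k] + u
    x, v = x + v * dt, v + acc * dt         # Euler integration

print("final |x|, |v|:", abs(x), abs(v))    # driven near zero despite k3, a_g
```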

Journal ArticleDOI
TL;DR: In this article, the authors deal with residual generation for the diagnosis of faults in the presence of disturbances; the emphasis is on modelling errors, represented as multiplicative disturbances, and on parametric faults, both characterized as discrepancies in a set of underlying parameters.
Abstract: This paper deals with residual generation for the diagnosis of faults in the presence of disturbances. The emphasis is on modelling errors, represented as multiplicative disturbances, and on parametric faults. These are both characterized as discrepancies in a set of underlying parameters. The residuals are obtained using parity equations. To address the situation when the number of uncertain parameters is too high to allow perfect decoupling, two approximate decoupling methods are introduced. One utilizes rank reduction of the model-error/fault entry matrix via singular value decomposition. The other minimizes a least squares performance index, formulated on the residuals, under a set of equality constraints. It is shown that, by the appropriate construction of the entry matrix or of the performance index and the constraints, a broad variety of structured and directional residual strategies can be implemented.
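
A sketch of the first approximate-decoupling idea, rank reduction via SVD: when the disturbance entry matrix is (nearly) low-rank, projecting the primary residuals onto the left singular directions beyond the dominant disturbance directions suppresses the disturbances while retaining fault sensitivity. The matrices below are invented stand-ins for parity-equation sensitivities.

```python
# Sketch: approximate decoupling of residuals by SVD rank reduction.
import numpy as np

rng = np.random.default_rng(9)
# Nearly rank-2 disturbance entry matrix (6 residuals, 4 nuisance parameters)
F_dist = rng.normal(size=(6, 2)) @ rng.normal(size=(2, 4)) \
         + 0.05 * rng.normal(size=(6, 4))
F_fault = rng.normal(size=(6, 2))          # entry matrix of 2 fault parameters

r = 2                                      # disturbance rank to null out
U, s, Vt = np.linalg.svd(F_dist)
W = U[:, r:].T                             # transformed residuals: rho = W @ rho0

print("disturbance sensitivity ||W F_dist||:", np.linalg.norm(W @ F_dist))
print("fault sensitivity      ||W F_fault||:", np.linalg.norm(W @ F_fault))
print("residual singular values:", s[r:])  # small => near-perfect decoupling
```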

01 Jan 1995
TL;DR: This thesis addresses the non-linear system identification problem, and in particular, investigates the use of neural networks in system identification using a common framework based on analogies to linear black-box models.
Abstract: This thesis addresses the non-linear system identification problem, and in particular, investigates the use of neural networks in system identification. An overview of different possible model structures is given in a common framework. A nonlinear structure is described as the concatenation of a map from the observed data to the regressor, and a map from the regressor to the output space. This divides the model structure selection problem into two problems with lower complexity: that of choosing the regressor and that of choosing the non-linear map. The possible choices for the regressors consist of past inputs and outputs, and filtered versions of them. The dynamics of the model depend on the choice of regressor, and families of different model structures are suggested based on analogies to linear black-box models. State-space models are also described within this common framework by a special choice of regressor. It is shown that state-space models which have no parameters in the state update function can be viewed as an input-output model preceded by a pre-filter. A parameterized state update function, on the other hand, can be seen as a data driven regressor selector. The second step of the nonlinear identification is the mapping from the regressor to the output space. It is often advantageous to try some intermediate mappings between the linear and the general non-linear mapping. Such non-linear black-box mappings are discussed and motivated by considering different noise assumptions. The validation of a linear model should contain a test for non-linearities, and it is shown that, in general, it is easy to detect non-linearities. This implies that it is not worth spending too much energy searching for optimal non-linear validation methods for a specific problem. Instead the validation method should be chosen so that it is easy to apply. Two such methods, based on polynomials and neural nets, are suggested. Further, two validation methods, the correlation-test and the parametric F-test, are investigated. It is shown that under certain conditions these methods coincide. Parameter estimates are usually based on criterion minimization. In connection with neural nets it has been noted that it is not always optimal to try to find the absolute minimum point of the criterion. Instead a better estimate can be obtained if the numerical search for the minimum is prematurely stopped. A formal connection between this stopped search and regularization is given. It is shown that the numerical minimization of the criterion can be viewed as regularization with a regularization term that is gradually turned to zero. This closely connects to, and explains, what is called overtraining in the neural net literature.
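
A sketch of the thesis's recipe in its simplest form: choose a NARX-style regressor of past outputs and inputs, then fit a neural-network map from regressor to output. The early-stopping option ties in with the observation that a prematurely stopped search acts as regularization. The system and hyperparameters are invented.

```python
# Sketch: NARX neural-network system identification.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(10)
N = 3000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for k in range(2, N):                      # a nonlinear "true" system
    y[k] = 0.6 * y[k-1] - 0.2 * y[k-2] + np.tanh(u[k-1]) + 0.05 * rng.normal()

# Regressor: [y(k-1), y(k-2), u(k-1), u(k-2)]  ->  output y(k)
Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1], u[:-2]])
target = y[2:]

# Stopped search (early stopping) plays the role of regularization.
net = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                   early_stopping=True, random_state=0)
net.fit(Phi[:2000], target[:2000])
print("one-step-ahead R^2 on held-out data:",
      net.score(Phi[2000:], target[2000:]))
```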

Journal ArticleDOI
TL;DR: In this paper, an analysis of covariance model where the covariate effect is assumed only to be smooth is considered, and tests of equality and of parallelism across groups are constructed.
Abstract: An analysis of covariance model where the covariate effect is assumed only to be smooth is considered. The possibility of different shapes of covariate effect in different groups is also allowed, and tests of equality and of parallelism across groups are constructed. These are implemented using Gasser-Müller smoothing, whose properties enable problems of bias to be avoided. Accurate moment-based approximations are available for the distribution of each test statistic. Some data on Spanish onions are used to contrast the non-parametric approach with that of a nonlinear, but parametric, model. A simulation study is also used to explore the properties of the non-parametric tests.

Journal ArticleDOI
TL;DR: This work shows that the capacity of the channel induced by a given class of sources is essentially a lower bound also in a stronger sense, that is, for "most" sources in the class, and extends Rissanen's (1984, 1986) lower bound for parametric families.
Abstract: The capacity of the channel induced by a given class of sources is well known to be an attainable lower bound on the redundancy of universal codes with respect to this class, both in the minimax sense and in the Bayesian (maximin) sense. We show that this capacity is essentially a lower bound also in a stronger sense, that is, for "most" sources in the class. This result extends Rissanen's (1984, 1986) lower bound for parametric families. We demonstrate the applicability of this result in several examples, e.g., parametric families with growing dimensionality, piecewise-fixed sources, arbitrarily varying sources, and noisy samples of learnable functions. Finally, we discuss implications of our results to statistical inference.

Journal ArticleDOI
TL;DR: A practical application of quadratic programming to calculate the directional derivative in the case when the optimal multipliers are not unique is shown, for the first time to the authors' knowledge.
Abstract: Consider a parametric nonlinear optimization problem subject to equality and inequality constraints. Conditions under which a locally optimal solution exists and depends in a continuous way on the parameter are well known. We show, under the additional assumption of constant rank of the active constraint gradients, that the optimal solution is actually piecewise smooth, hence B-differentiable. We show, for the first time to our knowledge, a practical application of quadratic programming to calculate the directional derivative in the case when the optimal multipliers are not unique.