
Showing papers on "Linear model published in 1991"


Book
13 Mar 1991
TL;DR: This book provides a guide to principal component analysis (PCA), covering scaling of data, inferential procedures, vector interpretation and rotation, singular value decomposition, and the use of PCA in linear models: regression PCA of predictor variables and analysis-of-variance PCA of response variables.
Abstract: Preface. Introduction. 1. Getting Started. 2. PCA with More Than Two Variables. 3. Scaling of Data. 4. Inferential Procedures. 5. Putting It All Together - Hearing Loss I. 6. Operations with Group Data. 7. Vector Interpretation I: Simplifications and Inferential Techniques. 8. Vector Interpretation II: Rotation. 9. A Case History - Hearing Loss II. 10. Singular Value Decomposition: Multidimensional Scaling I. 11. Distance Models: Multidimensional Scaling II. 12. Linear Models I: Regression PCA of Predictor Variables. 13. Linear Models II: Analysis of Variance PCA of Response Variables. 14. Other Applications of PCA. 15. Flatland: Special Procedures for Two Dimensions. 16. Odds and Ends. 17. What is Factor Analysis Anyhow? 18. Other Competitors. Conclusion. Appendix A. Matrix Properties. Appendix B. Matrix Algebra Associated with Principal Component Analysis. Appendix C. Computational Methods. Appendix D. A Directory of Symbols and Definitions for PCA. Appendix E. Some Classic Examples. Appendix F. Data Sets Used in This Book. Appendix G. Tables. Bibliography. Author Index. Subject Index.

3,534 citations


Book
01 Oct 1991
TL;DR: This book describes models for binary and binomial data, covering statistical inference for binary responses, the linear logistic model and its fitting and checking, bioassay applications, overdispersion, data from epidemiological studies, mixed models, exact methods, and the use of statistical software.
Abstract: INTRODUCTION Some Examples The Scope of this Book Use of Statistical Software STATISTICAL INFERENCE FOR BINARY DATA The Binomial Distribution Inference about the Success Probability Comparison of Two Proportions Comparison of Two or More Proportions MODELS FOR BINARY AND BINOMIAL DATA Statistical Modelling Linear Models Methods of Estimation Fitting Linear Models to Binomial Data Models for Binomial Response Data The Linear Logistic Model Fitting the Linear Logistic Model to Binomial Data Goodness of Fit of a Linear Logistic Model Comparing Linear Logistic Models Linear Trend in Proportions Comparing Stimulus-Response Relationships Non-Convergence and Overfitting Some other Goodness of Fit Statistics Strategy for Model Selection Predicting a Binary Response Probability BIOASSAY AND SOME OTHER APPLICATIONS The Tolerance Distribution Estimating an Effective Dose Relative Potency Natural Response Non-Linear Logistic Regression Models Applications of the Complementary Log-Log Model MODEL CHECKING Definition of Residuals Checking the Form of the Linear Predictor Checking the Adequacy of the Link Function Identification of Outlying Observations Identification of Influential Observations Checking the Assumption of a Binomial Distribution Model Checking for Binary Data Summary and Recommendations OVERDISPERSION Potential Causes of Overdispersion Modelling Variability in Response Probabilities Modelling Correlation Between Binary Responses Modelling Overdispersed Data A Model with a Constant Scale Parameter The Beta-Binomial Model Discussion MODELLING DATA FROM EPIDEMIOLOGICAL STUDIES Basic Designs for Aetiological Studies Measures of Association Between Disease and Exposure Confounding and Interaction The Linear Logistic Model for Data from Cohort Studies Interpreting the Parameters in a Linear Logistic Model The Linear Logistic Model for Data from Case-Control Studies Matched Case-Control Studies MIXED MODELS FOR BINARY DATA Fixed and Random Effects Mixed Models for Binary Data Multilevel Modelling Mixed Models for Longitudinal Data Analysis Mixed Models in Meta-Analysis Modelling Overdispersion Using Mixed Models EXACT METHODS Comparison of Two Proportions Using an Exact Test Exact Logistic Regression for a Single Parameter Exact Hypothesis Tests Exact Confidence Limits for bk Exact Logistic Regression for a Set of Parameters Some Examples Discussion SOME ADDITIONAL TOPICS Ordered Categorical Data Analysis of Proportions and Percentages Analysis of Rates Analysis of Binary Time Series Modelling Errors in the Measurement of Explanatory Variables Multivariate Binary Data Analysis of Binary Data from Cross-Over Trials Experimental Design COMPUTER SOFTWARE FOR MODELLING BINARY DATA Statistical Packages for Modelling Binary Data Interpretation of Computer Output Using Packages to Perform Some Non-Standard Analyses Appendix A: Values of logit(p) and probit(p) Appendix B: Some Derivations Appendix C: Additional Data Sets References Index of Examples Index
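
A recurring task in the contents above is fitting the linear logistic model to binary or binomial data. As a hedged illustration of that step, the sketch below fits a logistic model by iteratively reweighted least squares (Fisher scoring); the simulated data, tolerance, and iteration count are illustrative choices, not taken from the book.

```python
import numpy as np

def fit_logistic_irls(X, y, n_iter=25, tol=1e-10):
    """Fit a linear logistic model P(y=1|x) = 1/(1+exp(-x'beta)) to binary data
    by iteratively reweighted least squares (Fisher scoring)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)                       # binomial variance weights
        z = eta + (y - p) / W                   # working response
        beta_new = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * z))
        if np.max(np.abs(beta_new - beta)) < tol:
            beta = beta_new
            break
        beta = beta_new
    return beta

# Simulated example with an intercept and one covariate (values are made up)
rng = np.random.default_rng(8)
X = np.column_stack([np.ones(500), rng.normal(size=500)])
true_beta = np.array([-0.5, 1.2])
y = (rng.uniform(size=500) < 1.0 / (1.0 + np.exp(-X @ true_beta))).astype(float)
beta_hat = fit_logistic_irls(X, y)
```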

1,573 citations


Book
28 Mar 1991

1,075 citations


Journal ArticleDOI
TL;DR: In this paper, the authors developed a discrete state space solution method for a class of nonlinear rational expectations models by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models.
Abstract: The paper develops a discrete state space solution method for a class of nonlinear rational expectations models. The method works by using numerical quadrature rules to approximate the integral operators that arise in stochastic intertemporal models. The method is particularly useful for approximating asset pricing models and has potential applications in other problems as well. An empirical application uses the method to study the relationship between the risk premium and the conditional variability of the equity return under an ARCH endowment process. Nonlinear dynamic rational expectations models rarely admit explicit solutions. Techniques like the method of undetermined coefficients or forward-looking expansions, which often work well for linear models, rarely provide explicit solutions for nonlinear models. The lack of explicit solutions complicates the tasks of analyzing the dynamic properties of such models and generating simulated realizations for applied policy work and other purposes. This paper develops a discrete state-space approximation method for a specific class of nonlinear rational expectations models. The class of models is distinguished by two features: First, the solution functions for the endogenous variables are functions of at most a finite number of lags of an exogenous stationary state vector. Second, the expectational equations of the model take the form of integral equations, or more precisely, Fredholm equations of the second type. The key component of the method is a technique, based on numerical quadrature, for forming a discrete approximation to a general time series conditional density. More specifically, the technique provides a means for calibrating a Markov chain, with a discrete state space, whose probability distribution closely approximates the distribution of a given time series. The quality of the approximation can be expected to get better as the discrete state space is made finer. The term "discrete" is used here in reference to the range space of the random variables and not to the time index; time is always discrete in our analysis. The discretization technique is primarily useful for taking a discrete approximation to the conditional density of the strictly exogenous variables of a model. The specification of this conditional density could be based on a variety of ... (Financial support under NSF Grants SES-8520244 and SES-8810357 is acknowledged. We thank the co-editor and referees of earlier drafts for many, many helpful comments that substantially improved the manuscript.)
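
The quadrature-based discretization described above can be sketched in a few lines. The snippet below is a hedged illustration for a Gaussian AR(1) exogenous process (the persistence rho, innovation standard deviation sigma, number of states n, and the choice of the innovation density as the quadrature weighting density are illustrative assumptions, not taken from the paper): Gauss-Hermite nodes become the discrete states, and quadrature weights times a density ratio become the transition probabilities.

```python
import numpy as np
from numpy.polynomial.hermite import hermgauss

def quadrature_chain(rho, sigma, n):
    """Discretize the Gaussian AR(1) process y' = rho*y + eps, eps ~ N(0, sigma^2),
    into an n-state Markov chain via Gauss-Hermite quadrature, using the innovation
    density N(0, sigma^2) as the weighting density."""
    nodes, weights = hermgauss(n)            # nodes/weights for the weight exp(-x^2)
    y = np.sqrt(2.0) * sigma * nodes         # quadrature abscissae used as states
    w = weights / np.sqrt(np.pi)             # weights normalized to sum to ~1
    P = np.empty((n, n))
    for i in range(n):
        cond = np.exp(-(y - rho * y[i]) ** 2 / (2.0 * sigma ** 2))  # f(y_j | y_i), up to a constant
        base = np.exp(-(y ** 2) / (2.0 * sigma ** 2))               # weighting density, same constant
        P[i] = w * cond / base
        P[i] /= P[i].sum()                   # renormalize so each row is a distribution
    return y, P

states, P = quadrature_chain(rho=0.9, sigma=0.1, n=9)
```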

955 citations


Journal ArticleDOI
TL;DR: This article casts the generalized linear random-effects model in a Bayesian framework and uses a Monte Carlo method, the Gibbs sampler, to overcome current computational limitations; the resulting algorithm is flexible and easily accommodates changes in the number of observations.
Abstract: Generalized linear models have unified the approach to regression for a wide variety of discrete, continuous, and censored response variables that can be assumed to be independent across experimental units. In applications such as longitudinal studies, genetic studies of families, and survey sampling, observations may be obtained in clusters. Responses from the same cluster cannot be assumed to be independent. With linear models, correlation has been effectively modeled by assuming there are cluster-specific random effects that derive from an underlying mixing distribution. Extensions of generalized linear models to include random effects have, thus far, been hampered by the need for numerical integration to evaluate likelihoods. In this article, we cast the generalized linear random effects model in a Bayesian framework and use a Monte Carlo method, the Gibbs sampler, to overcome the current computational limitations. The resulting algorithm is flexible enough to easily accommodate changes in the number...
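
To make the Gibbs-sampling idea concrete, here is a minimal sketch for the simplest conjugate case, a normal random-intercept model; the article's generalized linear (e.g. binary-response) setting requires non-conjugate updates on top of this. All data, priors, and hyperparameter values below are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_random_intercept(y, groups, n_iter=2000):
    """Gibbs sampler for y_ij = mu + b_i + e_ij with b_i ~ N(0, tau2), e_ij ~ N(0, sigma2).
    Flat prior on mu, vague inverse-gamma priors on the variance components."""
    groups = np.asarray(groups)
    labels = np.unique(groups)
    m, N = len(labels), len(y)
    counts = np.array([(groups == g).sum() for g in labels])
    mu, sigma2, tau2 = y.mean(), y.var(), y.var()
    b = np.zeros(m)
    a0 = b0 = c0 = d0 = 0.01                          # vague prior hyperparameters
    draws = []
    for _ in range(n_iter):
        # 1. random intercepts b_i | rest (normal full conditionals)
        for i, g in enumerate(labels):
            resid = y[groups == g] - mu
            prec = counts[i] / sigma2 + 1.0 / tau2
            b[i] = rng.normal(resid.sum() / sigma2 / prec, np.sqrt(1.0 / prec))
        # 2. grand mean mu | rest (flat prior)
        r = y - b[np.searchsorted(labels, groups)]
        mu = rng.normal(r.mean(), np.sqrt(sigma2 / N))
        # 3. variance components | rest (inverse-gamma full conditionals)
        e = y - mu - b[np.searchsorted(labels, groups)]
        sigma2 = 1.0 / rng.gamma(a0 + N / 2.0, 1.0 / (b0 + 0.5 * (e ** 2).sum()))
        tau2 = 1.0 / rng.gamma(c0 + m / 2.0, 1.0 / (d0 + 0.5 * (b ** 2).sum()))
        draws.append((mu, sigma2, tau2))
    return np.array(draws)

# Simulated example: 10 clusters of 20 observations each
g = np.repeat(np.arange(10), 20)
y = 1.0 + rng.normal(0, 0.5, 10)[g] + rng.normal(0, 1.0, 200)
samples = gibbs_random_intercept(y, g)
```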

852 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a methodology, based on Henderson's mixed model equations, for fitting models with various fixed and random elements and with the possible assumption of correlation among random effects, and discuss the advantage of teaching analysis of variance applications from this methodology.
Abstract: The mixed model equations as presented by C. R. Henderson offer the basis for a methodology that provides the flexibility to fit models with various fixed and random elements, with the possible assumption of correlation among random effects. The advantage of teaching analysis of variance applications from this methodology is presented. Particular emphasis is placed upon the relationship between the choice of estimable function and the inference space.
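
Henderson's mixed model equations can be assembled and solved directly once the variance components are fixed. The block below is a small sketch under that assumption (the toy data, the identity choice for R, and the value of G are illustrative, not from the paper).

```python
import numpy as np

def solve_mme(y, X, Z, R, G):
    """Solve Henderson's mixed model equations for y = X*beta + Z*u + e,
    with var(e) = R and var(u) = G assumed known:
        [X'R^-1 X      X'R^-1 Z    ] [beta]   [X'R^-1 y]
        [Z'R^-1 X   Z'R^-1 Z + G^-1] [ u  ] = [Z'R^-1 y]
    """
    Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)
    C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
                  [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
    rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
    sol = np.linalg.solve(C, rhs)
    p = X.shape[1]
    return sol[:p], sol[p:]     # fixed-effect estimates (BLUE), random-effect predictions (BLUP)

# Toy example: one fixed intercept, three random group effects
rng = np.random.default_rng(1)
groups = np.repeat(np.arange(3), 5)
Z = np.eye(3)[groups]
X = np.ones((15, 1))
y = 2.0 + rng.normal(0, 0.7, 3)[groups] + rng.normal(0, 1.0, 15)
beta_hat, u_hat = solve_mme(y, X, Z, R=np.eye(15), G=0.5 * np.eye(3))
```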

565 citations


Posted Content
TL;DR: Methods for spectral analysis are used to evaluate numerical accuracy formally and construct diagnostics for convergence in the normal linear model with informative priors, and in the Tobit-censored regression model.
Abstract: Data augmentation and Gibbs sampling are two closely related, sampling-based approaches to the calculation of posterior moments. The fact that each produces a sample whose constituents are neither independent nor identically distributed complicates the assessment of convergence and numerical accuracy of the approximations to the expected value of functions of interest under the posterior. In this paper methods for spectral analysis are used to evaluate numerical accuracy formally and construct diagnostics for convergence. These methods are illustrated in the normal linear model with informative priors, and in the Tobit-censored regression model.
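
A minimal sketch of the spectral approach to numerical accuracy and convergence: estimate the spectral density of the chain at frequency zero to obtain a numerical standard error, then compare early and late segments of the chain with a z-score. The Bartlett-windowed autocovariance estimator and the 10%/50% segment split below are common illustrative choices, not necessarily the paper's exact implementation.

```python
import numpy as np

def spectral_variance_at_zero(x, max_lag=None):
    """Estimate S(0), the spectral density of x at frequency zero, via a
    Bartlett-windowed sum of sample autocovariances."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = int(n ** 0.5)
    xc = x - x.mean()
    gamma = np.array([np.dot(xc[: n - k], xc[k:]) / n for k in range(max_lag + 1)])
    weights = 1.0 - np.arange(1, max_lag + 1) / (max_lag + 1.0)
    return gamma[0] + 2.0 * np.sum(weights * gamma[1:])

def geweke_diagnostic(draws, first=0.1, last=0.5):
    """Convergence diagnostic: z-score comparing the mean of an early segment of the
    chain with the mean of a late segment, using spectral numerical standard errors."""
    draws = np.asarray(draws, dtype=float)
    a = draws[: int(first * len(draws))]
    b = draws[-int(last * len(draws)):]
    nse2_a = spectral_variance_at_zero(a) / len(a)
    nse2_b = spectral_variance_at_zero(b) / len(b)
    return (a.mean() - b.mean()) / np.sqrt(nse2_a + nse2_b)

# |z| near 0 (say below 2) is consistent with the sampler having converged
z = geweke_diagnostic(np.random.default_rng(2).normal(size=5000))
```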

505 citations


Journal ArticleDOI
TL;DR: In this paper, the utility of the Hammerstein model to represent the dynamics of nonlinear chemical processes was investigated, which is composed of a static nonlinear element in series with a linear dynamic part.
Abstract: The utility of the Hammerstein model, which is composed of a static nonlinear element in series with a linear dynamic part, was investigated to represent the dynamics of nonlinear chemical processes. Different methods to identify the parameters of Hammerstein models were tested. The methods were applied to the identification of simulated distillation columns and to an experimental heat exchanger process. The results show that the dynamics of such processes can be better represented by Hammerstein-type models than by linear models.
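
One common way to identify a Hammerstein model is to represent the static nonlinearity by a polynomial and absorb its coefficients into an over-parameterized linear regression, which can then be solved by least squares. The sketch below follows that idea; the paper compares several identification methods, and the model orders, polynomial degree, and toy process here are illustrative assumptions.

```python
import numpy as np

def fit_hammerstein_arx(u, y, na=2, nb=2, degree=3):
    """Least-squares fit of an over-parameterized Hammerstein ARX model:
        y[t] = sum_i a_i y[t-i] + sum_j sum_k theta_{jk} u[t-j]**k + e[t],
    i.e. a polynomial static nonlinearity in series with linear ARX dynamics."""
    n = len(y)
    start = max(na, nb)
    rows = []
    for t in range(start, n):
        past_y = [y[t - i] for i in range(1, na + 1)]
        past_u = [u[t - j] ** k for j in range(1, nb + 1) for k in range(1, degree + 1)]
        rows.append(past_y + past_u)
    Phi = np.asarray(rows)
    theta, *_ = np.linalg.lstsq(Phi, y[start:], rcond=None)
    return theta    # first na entries: AR coefficients; rest: nonlinearity-times-dynamics terms

# Toy data: saturating nonlinearity followed by first-order linear dynamics
rng = np.random.default_rng(3)
u = rng.uniform(-2, 2, 500)
v = np.tanh(u)                          # static nonlinear element
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * v[t - 1] + 0.02 * rng.normal()
theta = fit_hammerstein_arx(u, y)
```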

458 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a new procedure for continuous and discrete-time linear control systems design, which consists of the definition of a convex programming problem in the parameter space that, when solved, provides the feedback gain.
Abstract: This paper presents a new procedure for continuous and discrete-time linear control systems design. It consists of the definition of a convex programming problem in the parameter space that, when solved, provides the feedback gain. One of the most important features of the procedure is that additional design constraints are easily incorporated in the original formulation, yielding solutions to problems that have raised a great deal of interest within the last few years. This is precisely the case of the decentralized control problem and the quadratic stabilizability problem of uncertain systems with both dynamic and input uncertain matrices. In this last case, necessary and sufficient conditions for the existence of a linear stabilizing gain are provided and, to the authors’ knowledge, this is one of the first numerical procedures able to handle and solve this interesting design problem for high-order, continuous-time or discrete-time linear models. The theory is illustrated by examples.
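
The parameter-space convex program can be sketched in the now-standard linear matrix inequality (LMI) form: search for Q = P^-1 > 0 and Y = KQ such that AQ + QA' + BY + Y'B' < 0, then recover the feedback gain K = YQ^-1. This is a hedged modern restatement rather than the paper's exact formulation; the plant matrices, the eps margin, and the use of the cvxpy package are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

# Illustrative open-loop unstable plant dx/dt = A x + B u (numbers made up, not from the paper)
A = np.array([[0.0, 1.0],
              [2.0, -1.0]])
B = np.array([[0.0],
              [1.0]])
n, m = A.shape[0], B.shape[1]

# Convex feasibility problem in the transformed variables Q = P^-1 and Y = K Q:
#     Q > 0   and   A Q + Q A' + B Y + Y' B' < 0
# Any feasible pair yields a stabilizing state feedback u = K x with K = Y Q^-1.
Q = cp.Variable((n, n), symmetric=True)
Y = cp.Variable((m, n))
S = cp.Variable((n, n), symmetric=True)   # symmetric handle on the Lyapunov expression
eps = 1e-6
constraints = [
    Q >> eps * np.eye(n),
    S == A @ Q + Q @ A.T + B @ Y + Y.T @ B.T,
    S << -eps * np.eye(n),
]
cp.Problem(cp.Minimize(0), constraints).solve()

K = Y.value @ np.linalg.inv(Q.value)
print(np.linalg.eigvals(A + B @ K))   # closed-loop eigenvalues: all real parts negative
```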

348 citations


Journal ArticleDOI
TL;DR: In this article, the authors describe the use of linear deterministic models for examining the spread of population processes, and discuss their advantages and limitations, as well as their application to both epidemic and population dynamic problems.
Abstract: This paper describes the use of linear deterministic models for examining the spread of population processes, discussing their advantages and limitations. Their main advantages are that their assumptions are relatively transparent and that they are easy to analyze, yet they generally give the same velocity as more complex linear stochastic and nonlinear deterministic models. Their simplicity, especially if we use the elegant reproduction and dispersal kernel formulation of Diekmann and van den Bosch et al., allows us greater freedom to choose a biologically realistic model and greatly facilitates examination of the dependence of conclusions on model components and of how these are incorporated into the model and fitted from data. This is illustrated by consideration of a range of examples, including both diffusion and dispersal models and by discussion of their application to both epidemic and population dynamic problems. A general limitation on fitting models results from the poor accuracy of most ecological data, especially on dispersal distances. Confirmation of a model is thus rarely as convincing as those cases where we can clearly reject one. We also need to be aware that linear models provide only an upper bound for the velocity of more realistic nonlinear stochastic models and are almost wholly inadequate when it comes to modeling more complex aspects such as the transition to endemicity and endemic patterns. These limitations are, however, to a great extent shared by linear stochastic and nonlinear deterministic models.

300 citations


Journal ArticleDOI
TL;DR: In this paper, different specifications of conditional expectations are compared with nonparametric techniques that make no assumptions about the distribution of the data, and the conditional mean and variance of the NYSE market return are examined.
Abstract: This paper explores different specifications of conditional expectations. The most common specification, linear least squares, is contrasted with nonparametric techniques that make no assumptions about the distribution of the data. Nonparametric regression is successful in capturing some nonlinearities in financial data, in particular, asymmetric responses of security returns to the direction and magnitude of market returns. The technique is ideally suited for empirically modeling returns of securities that have complicated embedded options. The conditional mean and variance of the NYSE market return are also examined. Forecasts of market returns are not improved with the nonparametric techniques, which suggests that linear conditional expectations are a reasonable approximation in conditional asset pricing research. However, the linear model produces a disturbing number of negative expected excess returns. My results also indicate that the relation between the conditional mean and variance depends on the specification of the conditional variance. Furthermore, a linear model relating mean to variance is rejected, and these tests are sensitive neither to the expectation generating mechanism nor to the conditioning information. Rejections are driven by the distinct countercyclical variation in the ratio of the conditional mean to variance. A revised version of this paper was published in the Journal of Empirical Finance in 2001.
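
The kind of nonparametric conditional-mean estimate discussed above can be illustrated with a Nadaraya-Watson kernel regression. The sketch below is not the paper's estimator or data; the simulated asymmetric response and the bandwidth are illustrative assumptions.

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_eval, bandwidth):
    """Nonparametric (kernel) estimate of the conditional mean E[y | x] at the
    points x_eval, using a Gaussian kernel with the given bandwidth."""
    x_train = np.asarray(x_train, dtype=float)
    diffs = (np.asarray(x_eval, dtype=float)[:, None] - x_train[None, :]) / bandwidth
    weights = np.exp(-0.5 * diffs ** 2)            # Gaussian kernel weights
    return (weights @ y_train) / weights.sum(axis=1)

# Toy example: an asymmetric response of a security return to the market return
rng = np.random.default_rng(4)
market = rng.normal(0, 0.05, 1000)
security = np.where(market > 0, 0.8, 1.3) * market + rng.normal(0, 0.02, 1000)
grid = np.linspace(-0.15, 0.15, 61)
cond_mean = nadaraya_watson(market, security, grid, bandwidth=0.02)   # slope bends at zero
```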

Journal ArticleDOI
TL;DR: In this article, regression-based conditional mean and conditional variance diagnostics are proposed for nonlinear models of conditional means and conditional variances for cross-section or time-series data, and the distinguishing feature of the current approach, which builds on already popular residual-based procedures, is that no auxiliary assumptions are imposed at any testing stage.

Journal ArticleDOI
TL;DR: A two-stage construction of a linear regression model is proposed using an enhancement of a minimal vagueness criterion already discussed in fuzzy regression analysis.

Journal ArticleDOI
TL;DR: In this paper, an empirical relation between suspended-sediment load (L) and streamflow (S) is defined as a power function, L = aS^b, and is referred to as a suspended-sediment rating curve.
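
Taking logs turns the rating curve into a straight line, log(L) = log(a) + b log(S), so a and b can be estimated by ordinary least squares; applied work often also corrects the retransformation bias, which this sketch omits. The streamflow and load values below are made up for illustration.

```python
import numpy as np

# Rating curve L = a * S**b  <=>  log(L) = log(a) + b * log(S): fit by ordinary least squares
S = np.array([12.0, 35.0, 80.0, 150.0, 400.0, 900.0])      # streamflow (illustrative units)
L = np.array([0.8, 5.2, 30.0, 95.0, 600.0, 2800.0])        # suspended-sediment load

b, log_a = np.polyfit(np.log(S), np.log(L), 1)              # slope b, intercept log(a)
a = np.exp(log_a)
predicted_load = a * S ** b
```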

Book
01 Jan 1991
TL;DR: This book discusses multivariate linear models, discrimination and allocation, frequency analysis of time series, time domain analysis, and linear models for spatial data.
Abstract: Multivariate linear models. Discrimination and allocation. Frequency analysis of time series. Time domain analysis. Linear models for spatial data.

Journal ArticleDOI
TL;DR: In this paper, the problem of modeling change in a vector time series is studied using a dynamic linear model with measurement matrices that switch according to a time-varying independent random process.
Abstract: The problem of modeling change in a vector time series is studied using a dynamic linear model with measurement matrices that switch according to a time-varying independent random process. We derive filtered estimators for the usual state vectors and also for the state occupancy probabilities of the underlying nonstationary measurement process. A maximum likelihood estimation procedure is given that uses a pseudo-expectation-maximization algorithm in the initial stages and nonlinear optimization. We relate the models to those considered previously in the literature and give an application involving the tracking of multiple targets.

Proceedings ArticleDOI
11 Dec 1991
TL;DR: In this paper, the authors present a method to generate exciting identification trajectories in order to minimize the effect of noise and modeling error on the standard least squares (LS) solution.
Abstract: A common way to identify the inertial parameters of robots is to use a model that is linear in the parameters together with standard least squares (LS) techniques. The authors present a method to generate exciting identification trajectories in order to minimize the effect of noise and modeling error on the LS solution. Using nonlinear optimization techniques, the condition number of a matrix W obtained from the energy model is minimized and the scaling of its terms is carried out. An example of a 3-degree-of-freedom robot is presented.

Journal Article
TL;DR: A statistical approach is taken, and a form of bootstrapping is used to detect nonlinearity by showing that a given linear model is unlikely to have produced the data.

Journal ArticleDOI
TL;DR: In this paper, the authors proposed the bootstrap method for assessing the precision of Gaussian maximum likelihood estimates of the parameters of linear state-space models and applied it to autoregressive moving average models.
Abstract: The bootstrap is proposed as a method for assessing the precision of Gaussian maximum likelihood estimates of the parameters of linear state-space models. Our results also apply to autoregressive moving average models, since they are a special case of state-space models. It is shown that for a time-invariant, stable system, the bootstrap applied to the innovations yields asymptotically consistent standard errors. To investigate the performance of the bootstrap for finite sample lengths, simulation results are presented for a two-state model with 50 and 100 observations; two cases are investigated, one with real characteristic roots and one with complex characteristic roots. The bootstrap is then applied to two real data sets, one used in a test for efficient capital markets and one used to develop an autoregressive integrated moving average model for quarterly earnings data. We find the bootstrap to be of definite value over the conventional asymptotics.
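
The innovations-bootstrap idea can be shown on the simplest state-space special case, an AR(1) fitted by conditional least squares: resample the centred residuals, regenerate the series from the fitted model, refit, and read off the spread of the re-estimated coefficient. This is a simplified sketch, not the paper's Kalman-filter-based procedure; the sample size, coefficient, and number of bootstrap replications are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def fit_ar1(y):
    """Conditional least-squares fit of y[t] = phi * y[t-1] + e[t]."""
    phi = np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1])
    resid = y[1:] - phi * y[:-1]
    return phi, resid

def bootstrap_ar1_se(y, n_boot=500):
    """Bootstrap standard error of the AR coefficient: resample the (centred)
    innovations, rebuild the series from the fitted model, and refit each time."""
    phi_hat, resid = fit_ar1(y)
    resid = resid - resid.mean()
    estimates = np.empty(n_boot)
    for b in range(n_boot):
        e_star = rng.choice(resid, size=len(y) - 1, replace=True)
        y_star = np.empty(len(y))
        y_star[0] = y[0]
        for t in range(1, len(y)):
            y_star[t] = phi_hat * y_star[t - 1] + e_star[t - 1]
        estimates[b], _ = fit_ar1(y_star)
    return phi_hat, estimates.std(ddof=1)

# Simulated AR(1) series with 100 observations
y = np.zeros(100)
for t in range(1, 100):
    y[t] = 0.7 * y[t - 1] + rng.normal()
phi_hat, se_boot = bootstrap_ar1_se(y)
```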

Journal ArticleDOI
TL;DR: In this paper, the authors present an analysis of a coupled ocean-atmosphere model used to study ENSO (El Nino-Southern Oscillation) using a linear model with only a few degrees of freedom.
Abstract: This paper presents an analysis of a coupled ocean-atmosphere model used to study ENSO (El Nino–Southern Oscillation). Our interest here is in the growth of initial error: that is, the predictability of ENSO. The analysis proceeds by constructing a linear model that optimally fits the behavior of the original nonlinear coupled model. By construction, this approximate linear model has only a few degrees of freedom. Because the linear model is so much smaller than the original, it is possible to understand it in much finer detail, indirectly offering insight into the properties and behavior of the original model. As it turns out, even linear models with only a few degrees of freedom can have rather elaborate and surprising short-term error behavior. It has been shown that if a system is not self-adjoint, there is a possibility of error growth in a mode completely unrelated to the classic notion of a fastest growing linearly unstable mode. This holds for simple linear models as well. The work he...

Journal ArticleDOI
TL;DR: It is argued that multi-level models based on shrinkage estimators represent a considerable improvement over single-level models estimated by ordinary least squares and allow relationships to vary in time and space according to context.
Abstract: It is argued that multi-level models based on shrinkage estimators represent a considerable improvement over single-level models estimated by ordinary least squares. In substantive terms, the ML models allow relationships to vary in time and space according to context. Shrinkage estimators make very efficient use of the information contained in the hierarchical data sets that are estimated by ML models. A number of ML models for house-price variation are specified in terms of fixed and random, allowed-to-vary, effects. Empirical illustrations of some of these ML models are given for house-price variation in Southampton.

Journal ArticleDOI
TL;DR: In this article, the authors derive several properties unique to nonlinear model hypothesis testing problems involving linear or nonlinear inequality constraints in the null or alternative hypothesis, and discuss the impact of these properties on the empirical implementation and interpretation of these test procedures.
Abstract: This paper derives several properties unique to nonlinear model hypothesis testing problems involving linear or nonlinear inequality constraints in the null or alternative hypothesis. The paper is organized around a lemma which characterizes the set containing the least favorable parameter value for a nonlinear model inequality constraints hypothesis test. We then present two examples which illustrate several implications of this lemma. We also discuss the impact of these properties on the empirical implementation and interpretation of these test procedures.

Journal ArticleDOI
TL;DR: In this article, generalized C_L (GCL), cross-validation (CV), and generalized cross-validation (GCV) procedures are analyzed for model selection problems in linear regression and nonparametric regression estimation via series estimators.
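
As a reminder of what the GCV criterion computes for a linear smoother, here is a small sketch using ridge regression as the smoother; the article analyzes these criteria theoretically for series estimators, so the ridge choice, data, and lambda grid below are purely illustrative.

```python
import numpy as np

def gcv_ridge(X, y, lambdas):
    """Generalized cross-validation scores for ridge regression:
        GCV(lam) = (1/n) * ||y - H(lam) y||^2 / (1 - tr(H(lam))/n)^2,
    where H(lam) = X (X'X + lam I)^-1 X' is the smoother ('hat') matrix."""
    n, p = X.shape
    scores = []
    for lam in lambdas:
        H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)
        resid = y - H @ y
        scores.append((resid @ resid / n) / (1.0 - np.trace(H) / n) ** 2)
    return np.array(scores)

# Toy data: two informative predictors out of ten
rng = np.random.default_rng(6)
X = rng.normal(size=(100, 10))
y = X[:, 0] - 0.5 * X[:, 1] + rng.normal(0, 1.0, 100)
lambdas = np.logspace(-3, 3, 25)
best_lambda = lambdas[np.argmin(gcv_ridge(X, y, lambdas))]
```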

Book ChapterDOI
01 Jan 1991
TL;DR: A procedure for robust regression estimation that should fit many purposes reasonably well is proposed, addressing two obstacles to widespread use: the diversity of estimator types and tuning constants, and the lack of simple procedures for inference, or the reluctance to use straightforward inference based on asymptotics.
Abstract: Even if robust regression estimators have been around for nearly 20 years, they have not found widespread application. One obstacle is the diversity of estimator types and the necessary choices of tuning constants, combined with a lack of guidance for these decisions. While some participants of the IMA summer program have argued that these choices should always be made in view of the specific problem at hand, we propose a procedure which should fit many purposes reasonably well. A second obstacle is the lack of simple procedures for inference, or the reluctance to use the straightforward inference based on asymptotics.

Journal ArticleDOI
TL;DR: In this paper, a locally linear model is used to obtain a trajectory consistent with the dynamics as well as with the measured data, which leads to a significant improvement of correlation dimension estimates.

Journal ArticleDOI
TL;DR: In this article, a hierarchical Bayes (HB) approach is proposed for prediction in general mixed linear models. But the results find application only in small area estimation, where the model unifies and extends a number of existing models.
Abstract: This paper introduces a hierarchical Bayes (HB) approach for prediction in general mixed linear models. The results find application in small area estimation. Our model unifies and extends a number of models previously considered in this area. Computational formulas for obtaining the Bayes predictors and their standard errors are given in the general case. The methods are applied to two actual data sets. Also, in a special case, the HB predictors are shown to possess some interesting frequentist properties.

Journal ArticleDOI
TL;DR: In this paper, the authors find that long-term uncertainty in a linear model of the interest rate term structure can have dramatic effects on variance bounds implied by the expectations theories of the term structure.
Abstract: We find that long-term uncertainty in a linear model of the interest rate term structure can have dramatic effects on variance bounds implied by the expectations theories of the term structure. We bootstrap fractionally integrated models of the term structure of interest rates. Bootstrapped standard errors for the fractional order of integration simulate the uncertainty surrounding long-term forecasts of interest rates, and we find that it is possible to overstate the significance of variance-bounds violations by at least a factor of three and perhaps by a factor of ten when long-term uncertainty is ignored.

Journal ArticleDOI
TL;DR: In this paper, a review and development of what is currently known about the directionality (irreversibility) of time series models is given, together with briefer coverage of the still limited statistical methodology.
Abstract: This paper gives a review and development of what is currently known about the directionality (irreversibility) of time series models, together with briefer coverage of the still limited statistical methodology. Reversibility is shown to imply stationarity; Weiss's result concerning the reversibility of linear Gaussian processes is stressed, and contrasted to the directional nature of much time series data. Reversed ARMA models are explored, and non-linear examples given; the stationarity and invertibility conditions of ARMA models are shown to be implicitly directional, and a consequence of the future-independent nature of such models. Invertibility is extended to the two-sided future-dependent generalised linear model, and applied to reversible moving average models. The directional and reversible implications of autoregressive roots are covered. Work applying direction-sensitive methods of statistical analysis to reversed data series is mentioned; possible dangers in transforming directional series to Gaussian marginal distributions are noted. The directional nature of most non-linear models is invoked to emphasise the current importance of the area.

Journal ArticleDOI
TL;DR: In this article, an explicit formula for computing the optimal design weights on linearly independent regression vectors is derived for the mean parameters in a linear model with homoscedastic variances, which is a special case of a general result which holds for a wide class of optimality criteria.
Abstract: An explicit formula is derived to compute the $A$-optimal design weights on linearly independent regression vectors, for the mean parameters in a linear model with homoscedastic variances. The formula emerges as a special case of a general result which holds for a wide class of optimality criteria. There are close links to iterative algorithms for computing optimal weights.
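
In the setting the abstract describes (k linearly independent regression vectors x_1, ..., x_k and homoscedastic errors), minimizing tr(M(w)^-1) with M(w) = sum_i w_i x_i x_i' under sum_i w_i = 1 gives weights proportional to the square roots of the diagonal entries of (XX')^-1, where the rows of X are the x_i. The sketch below states that candidate formula and checks it numerically against a reference design; it reflects my reading of the setting, not a transcription of the paper's formula.

```python
import numpy as np

def a_optimal_weights(X):
    """Candidate A-optimal weights for k linearly independent regression vectors
    (the rows of the square matrix X): w_i proportional to the square root of the
    i-th diagonal entry of (X X')^-1."""
    c = np.sqrt(np.diag(np.linalg.inv(X @ X.T)))
    return c / c.sum()

def a_criterion(w, X):
    """A-criterion tr(M(w)^-1) for the information matrix M(w) = sum_i w_i x_i x_i'."""
    M = X.T @ np.diag(w) @ X
    return np.trace(np.linalg.inv(M))

rng = np.random.default_rng(7)
X = rng.normal(size=(4, 4))            # four linearly independent regression vectors
w_star = a_optimal_weights(X)
w_uniform = np.full(4, 0.25)
# The formula's weights should do at least as well as the uniform design
assert a_criterion(w_star, X) <= a_criterion(w_uniform, X) + 1e-12
```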

Journal ArticleDOI
TL;DR: In this paper, a Bahadur representation for regression quantiles is provided for error processes which are highly non-stationary (for which there is a nonvanishing bias term) and which are close to being m-dependent.