
Showing papers on "Proper linear model published in 1991"


Journal ArticleDOI
TL;DR: The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure that provides smooth transitions from one observed value to another.
Abstract: A memory-based network that provides estimates of continuous variables and converges to the underlying (linear or nonlinear) regression surface is described. The general regression neural network (GRNN) is a one-pass learning algorithm with a highly parallel structure. It is shown that, even with sparse data in a multidimensional measurement space, the algorithm provides smooth transitions from one observed value to another. The algorithmic form can be used for any regression problem in which an assumption of linearity is not justified.
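The abstract does not reproduce the estimator itself, but the GRNN is usually written as a kernel-weighted average of the stored training targets. Below is a minimal NumPy sketch of that one-pass, memory-based form; the function name and the single smoothing parameter `sigma` are illustrative assumptions, not notation from the paper.

```python
import numpy as np

def grnn_predict(X_train, y_train, X_query, sigma=0.5):
    """Kernel-weighted average of stored targets (the single 'pass' is storing the data)."""
    preds = []
    for x in X_query:
        d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to all stored samples
        w = np.exp(-d2 / (2.0 * sigma ** 2))          # Gaussian kernel weights
        preds.append(np.dot(w, y_train) / np.sum(w))  # normalized weighted average
    return np.array(preds)

# toy usage: smooth estimates of a nonlinear surface from sparse samples
X = np.random.default_rng(0).uniform(-2, 2, size=(50, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] ** 2
print(grnn_predict(X, y, X[:5], sigma=0.3))
```

Because every prediction is a convex combination of observed targets, the estimate moves smoothly between observed values, which is the behavior the abstract emphasizes.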

4,091 citations


Journal ArticleDOI
TL;DR: Multiple regression analysis is one of the most widely used statistical procedures for both scholarly and applied marketing research. Yet correlated predictor variables, and the potential collinearity among them, have not yet been fully explored.
Abstract: Multiple regression analysis is one of the most widely used statistical procedures for both scholarly and applied marketing research. Yet, correlated predictor variables—and potential collinearity ...

1,159 citations


Journal ArticleDOI
01 Dec 1991
TL;DR: In this article, a comparison study is made to determine which statistics perform best for detecting scale-shift outliers and influential observations in the linear regression model, using cutoff points based on the exact distributions and Bonferroni's inequality for each statistic.
Abstract: A large number of statistics are used in the literature to detect outliers and influential observations in the linear regression model. In this paper comparison studies have been made for determining a statistic which performs better than the other. This includes: (i) a detailed simulation study, and (ii) analyses of several data sets studied by different authors. Different choices of the design matrix of regression model are considered. Design A studies the performance of the various statistics for detecting the scale shift type outliers, and designs B and C provide information on the performance of the statistics for identifying the influential observations. We have used cutoff points using the exact distributions and Bonferroni's inequality for each statistic. The results show that the studentized residual which is used for detection of mean shift outliers is appropriate for detection of scale shift outliers also, and the Welsch's statistic and the Cook's distance are appropriate for detection of influential observations.
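As a rough companion to the statistics compared above, the sketch below computes internally studentized residuals and Cook's distance from the hat matrix of an OLS fit. It is the generic textbook form, not the paper's cutoff-point procedure, and Welsch's statistic is omitted.

```python
import numpy as np

def regression_diagnostics(X, y):
    """Studentized residuals and Cook's distance for an OLS fit (X includes the intercept column)."""
    n, p = X.shape
    H = X @ np.linalg.solve(X.T @ X, X.T)      # hat matrix
    h = np.diag(H)                             # leverages
    e = y - H @ y                              # residuals
    s2 = e @ e / (n - p)                       # residual variance estimate
    r = e / np.sqrt(s2 * (1.0 - h))            # internally studentized residuals
    cooks_d = (r ** 2) * h / (p * (1.0 - h))   # Cook's distance
    return r, cooks_d
```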

634 citations


Book
14 Nov 1991
TL;DR: This book covers variable distributions, bivariate and multiple regression, regression criticism, curve fitting, robust regression, logit regression, and principal components and factor analysis.
Abstract: 1. Variable Distributions 2. Bivariate Regression Analysis 3. Basics of Multiple Regression 4. Regression Criticism 5. Fitting Curves 6. Robust Regressions 7. Logit Regression 8. Principal Components and Factor Analysis

604 citations


Book
10 Oct 1991
TL;DR: In this book, the authors present a primer on ARIMA models and a systematic approach to building dynamic regression models, covering model identification, checking, reformulation, and evaluation, and extending to a vector ARMA framework.
Abstract: A Primer on ARIMA Models. A Primer on Regression Models. Rational Distributed Lag Models. Building Dynamic Regression Models: Model Identification. Building Dynamic Regression Models: Model Checking, Reformulation, and Evaluation. Intervention Analysis. Intervention and Outlier Detection and Treatment. Estimation and Forecasting. Dynamic Regression Models in a Vector ARMA Framework. Appendices. References. Index.

423 citations


Journal ArticleDOI
TL;DR: In this paper, the authors show that the effect of adjusting for covariates upon precision is quite different in the case of logistic regression, and that when testing for a treatment effect in randomized studies, it is always more efficient to adjust for predictive covariates when logistic models are used.
Abstract: Results from classic linear regression regarding the effect of adjusting for covariates upon the precision of an estimator of exposure effect are often assumed to apply more generally to other types of regression models. In this paper we show that such an assumption is not justified in the case of logistic regression, where the effect of adjusting for covariates upon precision is quite different. For example, in classic linear regression the adjustment for a non-confounding predictive covariate results in improved precision, whereas such adjustment in logistic regression results in a loss of precision. However, when testing for a treatment effect in randomized studies, it is always more efficient to adjust for predictive covariates when logistic models are used, and thus in this regard the behavior of logistic regression is the same as that of classic linear regression.

401 citations


Journal ArticleDOI
TL;DR: A two-stage construction of a linear regression model is proposed using an enhancement of a minimal vagueness criterion already discussed in fuzzy regression analysis.

278 citations


Journal ArticleDOI
Eric R. Ziegel

253 citations


Book ChapterDOI
01 Jan 1991
TL;DR: A robust regression procedure intended to fit many purposes reasonably well is proposed, addressing both the choice of estimator and tuning constants and the reluctance to use straightforward inference based on asymptotics.
Abstract: Even if robust regression estimators have been around for nearly 20 years, they have not found widespread application. One obstacle is the diversity of estimator types and the necessary choices of tuning constants, combined with a lack of guidance for these decisions. While some participants of the IMA summer program have argued that these choices should always be made in view of the specific problem at hand, we propose a procedure which should fit many purposes reasonably well. A second obstacle is the lack of simple procedures for inference, or the reluctance to use the straightforward inference based on asymptotics.

136 citations


Journal ArticleDOI
TL;DR: In this paper, a constrained least squares regression model of the deterministic shift-share method is presented for analyzing the components of regional employment change, which leads to a direct determination of estimates for the national, industrial, and regional growth coefficients.
Abstract: This paper presents a full-analogue regression model of the deterministic shift-share method, for analyzing the components of regional employment change. The model is formulated as a constrained least squares regression problem. This leads to a direct determination of estimates for the national, industrial, and regional growth coefficients. The statistical significance of these coefficients and the overall model can then be tested by using standard regression test statistics.
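A generic way to fit such a model is least squares with exact linear constraints; the sketch below solves the KKT system for minimizing ||Xb - y||^2 subject to Ab = c. In the shift-share setting, A would encode sum-to-zero constraints on the industry and region dummy coefficients, but that layout is an assumption for illustration rather than the paper's exact formulation.

```python
import numpy as np

def constrained_lstsq(X, y, A, c):
    """Least squares estimates subject to the exact linear constraints A @ beta = c (KKT system)."""
    p, m = X.shape[1], A.shape[0]
    K = np.block([[X.T @ X, A.T],
                  [A, np.zeros((m, m))]])
    rhs = np.concatenate([X.T @ y, c])
    sol = np.linalg.solve(K, rhs)
    return sol[:p]          # constrained coefficients (the Lagrange multipliers are dropped)
```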

81 citations


Journal ArticleDOI
TL;DR: In this article, principal component regression (PCR) is used for continuous Y-variables in multivariate image analysis and visual tools for diagnosis of model and prediction are provided, often based on derived image material.
Abstract: Regression between two blocks (usually called ‘dependent’ or Y and ‘independent’ or X) of data is a very important scientific and data-analytical tool. Regression on multivariate images is possible and constitutes a meaningful addition to existing univariate and multivariate techniques of image analysis. The regression can be used as a modeling tool or for prediction. The form of the regression equation chosen is dependent upon problem specification and information at hand. This paper describes the use of principal component regression (PCR). Both model building and prediction are presented for continuous Y-variables. The final goal is to supply new image material that can be used for visual inspection on a screen. Also, visual tools for diagnosis of model and prediction are provided, often based on derived image material. Examples of modeling and prediction are given for six channels in a seven-channel satellite image.
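Principal component regression itself is standard: project the centered X-block onto its leading components and regress Y on the scores. A minimal sketch, assuming the multivariate pixels are arranged as rows of `X` and the number of retained components is chosen by the user:

```python
import numpy as np

def pcr_fit_predict(X, y, X_new, n_components=3):
    """Principal component regression: regress y on the leading PC scores of X."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc = X - x_mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    V = Vt[:n_components].T                             # loadings of the leading components
    T = Xc @ V                                          # scores
    b = np.linalg.lstsq(T, y - y_mean, rcond=None)[0]   # regression on the scores
    return y_mean + (X_new - x_mean) @ V @ b            # predictions for new pixels
```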

Journal ArticleDOI
TL;DR: In this article, the authors derived the exact risk of pre-test estimators of the prediction vector and of the error variance of a linear regression model with spherically symmetric disturbances.

Journal ArticleDOI
TL;DR: In this paper, the authors explored the ramifications of performing a linear regression on data obtained from a complex sample survey and showed that the incorporation of sampling weights into estimated regression coefficients helps protect against the potential existence of missing regressors.
Abstract: This article explores the ramifications of performing a linear regression on data obtained from a complex sample survey. The incorporation of sampling weights into estimated regression coefficients helps protect against the potential existence of missing regressors. In addition, the linearization variance estimator, computed by certain software regression packages designed specifically for use with survey data (e.g., SURREGR, SUPER CARP, and PC CARP), is robust against the likelihood of correlated errors and the possibility of heteroscedasticity.
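The sketch below shows the two ingredients in their simplest textbook form: sampling weights entering as weighted least squares, and a linearization ("sandwich") variance estimate for the weighted coefficients. It ignores stratification and clustering, so it only illustrates the idea and is not a substitute for SURREGR, SUPER CARP, or PC CARP.

```python
import numpy as np

def survey_weighted_ols(X, y, w):
    """Weighted least squares with a simple linearization (sandwich) covariance estimate."""
    XtWX_inv = np.linalg.inv(X.T @ (w[:, None] * X))
    beta = XtWX_inv @ (X.T @ (w * y))
    e = y - X @ beta
    g = X * (w * e)[:, None]                # weighted score contributions w_i * e_i * x_i
    cov = XtWX_inv @ (g.T @ g) @ XtWX_inv   # robust to heteroscedasticity (no clustering handled)
    return beta, cov
```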

Journal ArticleDOI
TL;DR: Several methods for bootstrapping generalized linear regression models, both conditional and unconditional on the covariates, are introduced and compared with respect to robustness and coverage properties; one-step techniques, both unconditional and conditional, are also examined.


Journal ArticleDOI
TL;DR: In this paper, the authors derived statistics for tests of changes at unknown times in the parameters of a general linear regression model and applied them to data on the incidence of AIDS in the United States.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the issues in estimating the effects of marketing variables with linear models and propose that covariance structure analysis with an appropriate measurement model can ensure the unbiasedness of estimated effects.
Abstract: This paper discusses the issues in estimating the effects of marketing variables with linear models. When the variables are not directly observable, it is well known that direct regression yields biased estimates. Several researchers have recently suggested reverse regression as an alternative procedure. However, it is shown that the reverse regression approach also fails to provide unbiased estimates in general, except for some special cases. It is proposed that covariance structure analysis with an appropriate measurement model can ensure the unbiasedness of estimated effects. These issues are examined in the context of assessing market pioneer advantages.


Book ChapterDOI
01 Jan 1991
TL;DR: In this chapter, an extensive outline of the multiple linear regression model and its applications is presented, and a data set to be used as a multiple regression example is described.
Abstract: The multiple linear regression model is the most commonly applied statistical technique for relating a set of two or more variables. In Chapter 3 the concept of a regression model was introduced to study the relationship between two quantitative variables X and Y. In the latter part of Chapter 3, the impact of another explanatory variable Z on the regression relationship between X and Y was also studied. It was shown that by extending the regression to include the explanatory variable Z, the relationship between Y and X can be studied while controlling or taking into account Z. In a multivariate setting, the regression model can be extended so that Y can be related to a set of p explanatory variables X 1, X 2, …, X p . In this chapter, an extensive outline of the multiple linear regression model and its applications will be presented. A data set to be used as a multiple regression example is described next.
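As a concrete counterpart to this setup, the short sketch below fits Y on two explanatory variables at once, so the coefficient on X is interpreted while controlling for Z; the variable values are simulated purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200
x = rng.normal(size=n)
z = 0.5 * x + rng.normal(size=n)                       # a second explanatory variable correlated with x
y = 1.0 + 2.0 * x - 1.5 * z + rng.normal(scale=0.5, size=n)

X = np.column_stack([np.ones(n), x, z])                # intercept plus p = 2 explanatory variables
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)                                            # roughly [1.0, 2.0, -1.5]: the x-effect adjusted for z
```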

Book ChapterDOI
01 Jun 1991
TL;DR: A method for learning higher-order polynomial functions from examples using linear regression and feature construction is presented; an extension to this method selects the specific pair of features to combine by measuring their joint ability to predict the hypothesis' error.
Abstract: We present a method for learning higher-order polynomial functions from examples using linear regression and feature construction. Regression is used on a set of training instances to produce a weight vector for a linear function over the feature set. If this hypothesis is imperfect, a new feature is constructed by forming the product of the two features that most effectively predict the squared error of the current hypothesis. The algorithm is then repeated. In an extension to this method, the specific pair of features to combine is selected by measuring their joint ability to predict the hypothesis' error.
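A rough sketch of the base procedure, under the assumption that "most effectively predict the squared error" means the two features individually most correlated with the current squared residuals (the extension would instead score candidate pairs jointly); all names are invented for illustration.

```python
import numpy as np

def fit(X, y):
    """Least-squares weight vector for a linear function over the current feature set."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def grow_features(X, y, n_rounds=3):
    """Repeatedly add the product of the two features most correlated with the squared error."""
    X = X.copy()                                        # assumes non-constant feature columns
    for _ in range(n_rounds):
        w = fit(X, y)
        sq_err = (y - X @ w) ** 2
        scores = [abs(np.corrcoef(X[:, j], sq_err)[0, 1]) for j in range(X.shape[1])]
        i, j = np.argsort(scores)[-2:]
        X = np.column_stack([X, X[:, i] * X[:, j]])     # new feature = product of the two best
    return X, fit(X, y)
```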

Journal ArticleDOI
TL;DR: Opinion about the regression coefficients and experimental error is elicited and modeled by a multivariate probability distribution (a Bayesian conjugate prior distribution) and various assessment tasks are used to estimate its parameters.
Abstract: This paper describes a method of quantifying subjective opinion about a normal linear regression model. Opinion about the regression coefficients and experimental error is elicited and modeled by a multivariate probability distribution (a Bayesian conjugate prior distribution). The distribution model is richly parameterized and various assessment tasks are used to estimate its parameters. These tasks include the revision of opinion in the light of hypothetical data, the assessment of credible intervals, and a task commonly performed in cue-weighting experiments. A new assessment task is also introduced. In addition, implementation of the method in an interactive computer program is described and the method is illustrated with a practical example.

Posted Content
TL;DR: In this paper, the authors study the possibility of inconsistency of maximum likelihood estimators for certain heteroscedastic regression models.
Abstract: This paper studies the possibility of inconsistency of maximum likelihood estimators for certain heteroscedastic regression models. These include the Poisson regression model and ARCH models.

Journal ArticleDOI
TL;DR: In this article, a nonparametric regression model was proposed to investigate the relationship between groundwater level fluctuations and streamflow time series observations, and the results from the analysis indicate that the non-parametric method gives more accurate prediction results than those obtained from parametric regression.
Abstract: A new nonparametric regression model is proposed to investigate the relationship between groundwater level fluctuations and streamflow time series observations. The developed nonparametric model does not force the relationship between variables into a rigidly defined class (i.e., linear regression) and is capable of inferring complicated relationships. The results from the analysis indicate that the nonparametric method gives more accurate prediction results than those obtained from parametric regression. A split-sample experiment shows that nonparametric regression gives accurate prediction (extrapolation) results at the validation stage. Key words: nonparametric regression, cross-validation method, groundwater level, streamflow.
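The abstract does not spell out the estimator, but a common nonparametric regression of this kind is the kernel (Nadaraya-Watson) smoother with the bandwidth chosen by cross-validation; the sketch below is that generic form, not necessarily the authors' exact model.

```python
import numpy as np

def kernel_regression(x0, x, y, h):
    """Nadaraya-Watson estimate at x0 with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x0 - x) / h) ** 2)
    return (w @ y) / w.sum()

def loo_cv_bandwidth(x, y, candidates):
    """Pick the bandwidth minimizing leave-one-out squared prediction error."""
    errs = []
    for h in candidates:
        e = sum((y[i] - kernel_regression(x[i], np.delete(x, i), np.delete(y, i), h)) ** 2
                for i in range(len(x)))
        errs.append(e)
    return candidates[int(np.argmin(errs))]
```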

Journal ArticleDOI
TL;DR: In this article, the authors illustrate how adding cross-correlated components to the Hildreth-Houck random coefficient regression model can eliminate negative estimated variances and uncover coefficient randomness otherwise masked.

Journal ArticleDOI
TL;DR: In this paper, the error vector is distributed as a scale mixture of multivariate normal distributions, and the results obtained for the linear model by Zellner (1976), Jammalamadaka et al. (1987), and Chib et al. (1988) are explained and generalized to much more general classes of regression models and prior distributions.


Journal ArticleDOI
TL;DR: In this article, the authors consider the pre-test estimation of the parameters of a linear regression model after a preliminary-test for exact linear restrictions when the model is mis-specified through the omission of relevant regressors and the usual assumption of normal regression disturbances is widened to a subclass of spherically symmetric errors.
Abstract: We consider the pre-test estimation of the parameters of a linear regression model after a preliminary-test for exact linear restrictions when the model is mis-specified through the omission of relevant regressors and the usual assumption of normal regression disturbances is widened to a subclass of the family of spherically symmetric errors. We derive and analyse the exact risk (under quadratic loss) of a pre-test estimator of the prediction vector and of the scale parameter.

Journal ArticleDOI
TL;DR: In this paper, the authors give a simple sufficient condition for consistency of the standard OLS-based estimate of the disturbance variance in the linear regression model with autocorrelated disturbances.
Abstract: We give a simple sufficient condition for consistency of the standard OLS-based estimate of the disturbance variance in the linear regression model with autocorrelated disturbances.
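The condition itself is not reproduced in the abstract, but the quantity in question is simply s^2 = e'e/(n - k) from an OLS fit. The short simulation below, with assumed AR(1) disturbances, shows s^2 tracking the true unconditional disturbance variance 1/(1 - rho^2) for large n; it illustrates the estimand, not the paper's condition.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 20000, 0.6
u = rng.normal(size=n)
e = np.zeros(n)
e[0] = u[0] / np.sqrt(1 - rho ** 2)
for t in range(1, n):                          # AR(1) disturbances with unit innovation variance
    e[t] = rho * e[t - 1] + u[t]

X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + e
b, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ b
s2 = resid @ resid / (n - X.shape[1])          # standard OLS-based disturbance variance estimate
print(s2, 1.0 / (1 - rho ** 2))                # close to the true unconditional variance
```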

01 Jan 1991
TL;DR: In this paper, the authors consider distribution violation in least squares linear regression and discuss robust methods for estimating the linear model in the presence of such violation.
Abstract: Least squares linear regression is one of the most widely used statistical tools. It is based on a certain standard linear model where y denotes a scalar outcome variable, and x denotes a p-dimensional column vector of regressor variables. In empirical applications, it is unlikely for the standard linear model to hold exactly. Therefore we need to be concerned about possible violations of the model assumptions. For example, we might consider distribution violation: the error distribution might not be normal. There is a rich literature on robust methods for estimating the linear model in the presence of distribution violation.

Journal ArticleDOI
TL;DR: This paper contains a globally optimal solution for a class of functions composed of a linear regression function and a penalty function for the sum of squared regression weights, obtained from inequalities rather than from partial derivatives of a Lagrangian function.
Abstract: This paper contains a globally optimal solution for a class of functions composed of a linear regression function and a penalty function for the sum of squared regression weights. Global optimality is obtained from inequalities rather than from partial derivatives of a Lagrangian function. Applications arise in multidimensional scaling of symmetric or rectangular matrices of squared distances, in Procrustes analysis, and in ridge regression analysis. The similarity of existing solutions for these applications is explained by considering them as special cases of the general class of functions addressed.
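One familiar member of that class is ridge regression, where the penalty on the sum of squared weights gives a closed-form global minimizer; the minimal sketch below shows only that case (the function name and lam are illustrative, and the paper's result covers a broader class of scaling and Procrustes problems).

```python
import numpy as np

def ridge_weights(X, y, lam):
    """Global minimizer of ||y - X @ w||^2 + lam * ||w||^2."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```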