
Showing papers on "Proper linear model published in 1994"


BookDOI
01 Jan 1994

1,900 citations


Journal ArticleDOI
TL;DR: The ZIP regression model appears to be a serious candidate model when data exhibit excess zeros, e.g. due to underreporting, and it is recommended that the Poisson regression model be used as an initial model for developing the relationship.

787 citations
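The mixture structure behind this recommendation can be sketched directly: a zero-inflated Poisson (ZIP) model adds a point mass at zero to a Poisson count model, so P(Y=0) = π + (1−π)e^(−λ) and P(Y=k) = (1−π)·Poisson(k; λ) for k > 0. A minimal intercept-only maximum-likelihood fit on simulated data (all variable names are illustrative, not from the paper):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import gammaln

rng = np.random.default_rng(0)

# Simulate ZIP data: with probability pi the count is a structural zero,
# otherwise it is Poisson(lam)
pi_true, lam_true, n = 0.3, 2.0, 5000
y = np.where(rng.random(n) < pi_true, 0, rng.poisson(lam_true, n))

def zip_negloglik(params, y):
    pi, lam = params
    # P(Y=0) = pi + (1-pi)exp(-lam); P(Y=k) = (1-pi) Poisson(k; lam), k > 0
    ll0 = np.log(pi + (1 - pi) * np.exp(-lam))
    llk = np.log(1 - pi) - lam + y * np.log(lam) - gammaln(y + 1)
    return -np.sum(np.where(y == 0, ll0, llk))

res = minimize(zip_negloglik, x0=[0.5, 1.0], args=(y,),
               bounds=[(1e-6, 1 - 1e-6), (1e-6, None)])
pi_hat, lam_hat = res.x
```

With excess zeros, a plain Poisson fit would underestimate the variance; the two-parameter mixture separates the zero-inflation probability from the Poisson rate.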


Journal ArticleDOI
TL;DR: Nonparametric versions of discriminant analysis are obtained by replacing linear regression by any nonparametric regression method so that any multiresponse regression technique can be postprocessed to improve its classification performance.
Abstract: Fisher's linear discriminant analysis is a valuable tool for multigroup classification. With a large number of predictors, one can find a reduced number of discriminant coordinate functions that are “optimal” for separating the groups. With two such functions, one can produce a classification map that partitions the reduced space into regions that are identified with group membership, and the decision boundaries are linear. This article is about richer nonlinear classification schemes. Linear discriminant analysis is equivalent to multiresponse linear regression using optimal scorings to represent the groups. In this paper, we obtain nonparametric versions of discriminant analysis by replacing linear regression by any nonparametric regression method. In this way, any multiresponse regression technique (such as MARS or neural networks) can be postprocessed to improve its classification performance.

722 citations
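The optimal-scoring view can be sketched in a few lines: classification is run as multiresponse regression on class-indicator responses, and any regression method can be substituted for the least-squares step. A simplified linear version on simulated data (illustrative only; swapping the `lstsq` fit for MARS or a neural network gives the nonparametric variants the paper discusses):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two well-separated Gaussian classes in 2-D
n = 200
X = np.vstack([rng.normal(0, 1, (n, 2)), rng.normal(3, 1, (n, 2))])
y = np.repeat([0, 1], n)

# Indicator (one-hot) response matrix: one column per class
Y = np.eye(2)[y]

# Multiresponse linear regression of Y on X (with intercept);
# replacing this step with any flexible regression method gives
# the nonparametric discriminant schemes discussed above
Xd = np.column_stack([np.ones(len(X)), X])
B, *_ = np.linalg.lstsq(Xd, Y, rcond=None)

# Classify each point by its largest fitted class score
pred = np.argmax(Xd @ B, axis=1)
accuracy = (pred == y).mean()
```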


Book
21 Jul 1994
TL;DR: This book covers simple linear regression, multiple regression and calibration, regularized multiple regression, multivariate calibration, regression on curves, non-linearity and selection, and pattern recognition, with an appendix giving a partial least-squares algorithm.
Abstract: Introduction 1. Simple linear regression 2. Multiple regression and calibration 3. Regularized multiple regression 4. Multivariate calibration 5. Regression on curves 6. Non-linearity and selection 7. Pattern recognition A. Distribution theory B. Conditional inference C. Regularization dominance E. Partial least-squares algorithm Bibliography Index

335 citations


Journal ArticleDOI
TL;DR: A new class of fuzzy linear regression models based on Tanaka's approach is proposed in which all training data influence the estimated interval, so that the fuzzy regression equation can be adapted to new data.

324 citations


Journal ArticleDOI
TL;DR: In this article, the autoregressive model for cointegrated variables is analyzed with respect to the role of the constant and linear terms, and it is shown that statistical inference can be performed by reduced rank regression.
Abstract: The autoregressive model for cointegrated variables is analyzed with respect to the role of the constant and linear terms. Various models for I(1) variables defined by restrictions on the deterministic terms are discussed, and it is shown that statistical inference can be performed by reduced rank regression. The asymptotic distributions of the test statistics and estimators are found. A similar analysis is given for models for I(2) variables with a constant term.

321 citations


Posted Content
TL;DR: In this paper, the authors interpret the specification of linear regression models as a mixed continuous-discrete prior distribution for coefficient values and use a Gibbs sampler to construct posterior moments.
Abstract: In the specification of linear regression models it is common to indicate a list of candidate variables from which a subset enters the model with nonzero coefficients. This paper interprets this specification as a mixed continuous-discrete prior distribution for coefficient values. It then utilizes a Gibbs sampler to construct posterior moments. It is shown how this method can incorporate sign constraints and provide posterior probabilities for all possible subsets of regressors. The methods are illustrated using some standard data sets.

243 citations
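The target quantity here, posterior probabilities over all possible subsets of regressors, can be illustrated without the Gibbs machinery by brute-force enumeration on a small problem. The sketch below uses a BIC approximation to each subset's marginal likelihood with a flat prior on subsets, not the paper's mixed continuous-discrete prior or its sampler (names and setup are illustrative):

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Three candidate regressors; only columns 0 and 1 truly enter the model
n = 200
X = rng.normal(size=(n, 3))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(size=n)

def bic(subset):
    # OLS on the chosen columns (plus intercept), then the BIC of the fit
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in subset])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return n * np.log(rss / n) + Z.shape[1] * np.log(n)

# Enumerate all 2^3 subsets and convert BICs to approximate posterior weights
subsets = [s for r in range(4) for s in combinations(range(3), r)]
scores = np.array([bic(s) for s in subsets])
w = np.exp(-0.5 * (scores - scores.min()))
post = w / w.sum()
best = subsets[int(np.argmax(post))]
```

Enumeration is only feasible for a handful of candidates; the Gibbs sampler in the paper is what makes the same computation practical for large candidate lists.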


Book
20 Sep 1994
TL;DR: This textbook progresses from simple linear regression through regression with two independent variables to general multivariable regression, and goes on to cover analysis of covariance, linear discriminant analysis, principal components, contingency table analysis, logistic regression, survival data analysis, and Poisson regression for rates.
Abstract: 1. General Concepts. 2. Simple Linear Regression. 3. Regression with Two Independent Variables. 4. Multivariable Regression. 5. Analysis of Covariance. 6. Linear Discriminant Analysis. 7. Principal Components. 8. Contingency Table Analysis I. 9. Contingency Table Analysis II. 10. Logistic Regression Analysis. 11. Survival Data Analysis. 12. Analysis of Rates with Poisson Regression.

197 citations


Journal ArticleDOI
TL;DR: In this paper, the problem of determining how many linear combinations are involved in a general regression problem is addressed, where a response variable can be expressed as some function of one or more different linear combinations of a set of explanatory variables as well as a random error term.
Abstract: A general regression problem is one in which a response variable can be expressed as some function of one or more different linear combinations of a set of explanatory variables as well as a random error term. Sliced inverse regression is a method for determining these linear combinations. In this article we address the problem of determining how many linear combinations are involved. Procedures based on conditional means and conditional covariance matrices, as well as a procedure combining the two approaches, are considered. In each case we develop a test that has an asymptotic chi-squared distribution when the vector of explanatory variables is sampled from an elliptically symmetric distribution.

158 citations
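A minimal sketch of sliced inverse regression itself (the dimension-determination tests developed in the article are not reproduced here): standardize the predictors, average them within slices ordered by the response, and take the leading eigenvector of the slice means. With a single true index direction, that eigenvector, mapped back to the original scale, recovers the linear combination (setup and names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(3)

# y depends on the predictors only through one linear combination b'X
n, p = 1000, 4
b = np.array([1.0, -1.0, 0.0, 0.0]) / np.sqrt(2)
X = rng.normal(size=(n, p))
y = (X @ b) ** 3 + 0.1 * rng.normal(size=n)

# Standardize (whiten) the predictors
Xc = X - X.mean(axis=0)
L = np.linalg.cholesky(np.cov(X.T))
A = np.linalg.inv(L).T            # whitening matrix: Cov(Xc @ A) ~ I
Z = Xc @ A

# Slice the sample on y and average the whitened predictors in each slice
slices = np.array_split(np.argsort(y), 10)
means = np.array([Z[idx].mean(axis=0) for idx in slices])

# The leading eigenvector of the slice-mean covariance spans the whitened
# index direction; map it back to the original predictor scale
M = means.T @ means / len(slices)
eigvals, eigvecs = np.linalg.eigh(M)   # eigh sorts eigenvalues ascending
dir_hat = A @ eigvecs[:, -1]
dir_hat /= np.linalg.norm(dir_hat)
```

The article's contribution is testing how many eigenvectors of `M` carry signal; the chi-squared tests it derives apply to the small eigenvalues of this matrix.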


Book
01 Jan 1994
TL;DR: This book reviews statistics and matrices and covers regression and prediction, straight-line regression, multiple linear regression, diagnostic procedures, applications of regression, alternate assumptions for regression, and nonlinear regression.
Abstract: Review of statistics and matrices, regression and prediction, straight-line regression, multiple linear regression, diagnostic procedures, applications of regression I, applications of regression II, alternate assumptions for regression, nonlinear regression.

147 citations


Journal ArticleDOI
TL;DR: The purpose of this paper is to review and examine some of the approaches to fuzzy linear regression, to discuss their strengths and weaknesses relative to each other, and to suggest possible improvements.

Journal ArticleDOI
TL;DR: In this article, L-estimators based on a weighted regression quantile process are considered for linearly heteroscedastic regression models, and it is shown that the resulting estimators are "efficient".
Abstract: L-estimators based on a weighted regression quantile process are considered for a class of linearly heteroscedastic regression models. It is shown that the resulting estimators are “efficient” in t...

Book
28 Sep 1994
TL;DR: After a quick look at a typical regression program, this book covers simple and multiple linear regression, computer-assisted model building, the general linear model, analysis of variance and covariance, nonlinear regression, and maximum likelihood and robust estimation.
Abstract: 1. A Quick Look at a Typical Regression Program. 2. Simple Linear Regression. 3. Applying Simple Linear Regression. 4. Multiple Linear Regression. 5. Computer Assisted Model Building. 6. The General Linear Model. 7. Analysis of Variance and Covariance. 8. Nonlinear Regression. 9. Maximum Likelihood Analysis and Robust Estimation.

Book Chapter
01 Jan 1994
TL;DR: The regression tree model is discussed in the context of general regression models, the idea of the regression tree algorithm is presented, and a strategy is proposed to adjust for the high probability of wrongly identifying a variable with many splits as influential.
Abstract: The modelling of the relationship of some response to factors measured on different scales is a common problem in various fields of application. We discuss the regression tree model in the context of general regression models and present the idea of the regression tree algorithm. In this approach all factors under consideration have to be split into binary variables, leading to a high probability of wrongly identifying as influential a variable with many splits. We propose a strategy to adjust for such an undesirable situation. Finally, we illustrate our modification of the classification and regression tree method with data from a multicenter randomized clinical trial in patients with brain tumors.

Journal ArticleDOI
TL;DR: In this article, a two-part test procedure is proposed to assist in discriminating between errors-in-variables/simultaneity and other misspecifications of a linear regression model.

Proceedings ArticleDOI
01 Oct 1994
TL;DR: A new, more general formulation for PDMs, based on polynomial regression, is presented, and the resulting Polynomial Regression PDMs (PRPDMs) perform well on the data for which the linear method failed.
Abstract: We have previously described how to model shape variability by means of point distribution models (PDMs), in which there is a linear relationship between a set of shape parameters and the positions of points on the shape. This linear formulation can fail for shapes which articulate or bend. We show examples of such failure for both real and synthetic classes of shape. A new, more general formulation for PDMs, based on polynomial regression, is presented. The resulting Polynomial Regression PDMs (PRPDMs) perform well on the data for which the linear method failed.

Journal ArticleDOI
TL;DR: In this article, the authors obtained asymptotic representations of the regression quantiles and the regression rank-scores processes in the linear regression setting when the errors are a function of Gaussian random variables that are stationary and long range dependent.

Journal ArticleDOI
Z. D. Bai1, Yuehua Wu1
TL;DR: In this article, the asymptotic properties of M-estimators of the regression coefficients in linear models (both scale-variant and scale-invariant) when the number of regression coefficients tends to infinity as the sample size increases are investigated.

Journal ArticleDOI
Bo Jönsson1
TL;DR: In this article, the authors focus on the case of a non-stochastic true regressor (x) and show that, for a wide range of true x-values around the mean of x in the estimation period, predictions based on OLS on the observed variables are to be preferred in terms of MSE to a predictor based on consistent estimation of the parameters.

Journal ArticleDOI
TL;DR: In this paper, the authors consider the case where the model specifies the regression and dispersion functions for the data but robustness is of concern and one wishes to use least absolute error regressions.
Abstract: In heteroscedastic regression models assumptions about the error distribution determine the method of consistent estimation of parameters. For example, consider the case where the model specifies the regression and dispersion functions for the data but robustness is of concern and one wishes to use least absolute error regressions. Except in certain special circumstances, parameter estimates obtained in this way are inconsistent. In this article we expand the heteroscedastic model so that all of the common methods yield consistent estimates of the major model parameters. Asymptotic theory shows the extent to which standard results on the effect of estimating regression and dispersion parameters carry over into this setting. Careful attention is given to the question of when one can adapt for heteroscedasticity when estimating the regression parameters. We find that in many cases such adaption is not possible. This complicates inference about the regression parameters but does not lead to intracta...

Book ChapterDOI
01 Jan 1994
TL;DR: In this article, the authors show that the mean surface in most non-linear regression models will be approximately planar in the region(s) of high likelihood allowing good approximations based on linear regression techniques to be used.
Abstract: In linear regression the mean surface in sample space is a plane. In non-linear regression the mean surface may be an arbitrary curved surface but in other respects the models are similar. In practice the mean surface in most non-linear regression models will be approximately planar in the region(s) of high likelihood allowing good approximations based on linear regression techniques to be used. Non-linear regression models can still present tricky computational and inferential problems. (Indeed, the examples here exceeded the capacity of S-PLUS for Windows 3.1.)
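The "approximately planar" observation is exactly what iterative fitting exploits: a Gauss-Newton step fits an ordinary linear regression to the local linearization (the Jacobian) of the mean surface. A small damped Gauss-Newton sketch for an exponential-decay model (illustrative, not taken from the chapter):

```python
import numpy as np

rng = np.random.default_rng(4)

# Data from the curved mean surface y = a * exp(-b * x) + noise
a_true, b_true = 2.0, 0.5
x = np.linspace(0, 10, 50)
y = a_true * np.exp(-b_true * x) + 0.01 * rng.normal(size=x.size)

def sse(th):
    a, b = th
    return np.sum((y - a * np.exp(-b * x)) ** 2)

# Gauss-Newton: each iteration solves a *linear* least-squares problem
# on the Jacobian of the mean surface at the current parameter values
theta = np.array([1.0, 1.0])           # starting values for (a, b)
for _ in range(50):
    a, b = theta
    resid = y - a * np.exp(-b * x)
    J = np.column_stack([np.exp(-b * x),            # d mean / d a
                         -a * x * np.exp(-b * x)])  # d mean / d b
    step, *_ = np.linalg.lstsq(J, resid, rcond=None)
    t = 1.0                            # damping: halve step until SSE improves
    while sse(theta + t * step) >= sse(theta) and t > 1e-8:
        t /= 2
    theta = theta + t * step

a_hat, b_hat = theta
```

Near the optimum the mean surface is close to planar, so the linearized fits converge quickly; the tricky cases mentioned above arise when curvature is severe in the high-likelihood region.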

Book
01 Jan 1994
TL;DR: This book covers linear regression models, correlation, nonlinear regression models, interpolation and approximation, and derivatives and integrals.
Abstract: Linear regression models, correlation, nonlinear regression models, interpolation and approximation, derivatives and integrals.


Journal ArticleDOI
TL;DR: This paper provides a fairly easy-to-apply statistical test based on the asymptotic properties of the bispectrum of the inverse filtered data that is compared with an existing order selection method based upon rank testing via singular value decomposition.
Abstract: There exist several methods for fitting linear models to linear stationary non-Gaussian signals using higher order statistics. The models are fitted under certain assumptions on the data and the underlying (true) model. This paper is devoted to the problem of model validation, i.e., to checking if the fitted linear model is consistent with the underlying basic assumptions. Model order selection is a by-product of the solution. We provide a fairly easy-to-apply statistical test based on the asymptotic properties of the bispectrum of the inverse filtered data. Computer simulation results are presented for both linear model validation and model order selection. The proposed model order selection approach is compared with an existing order selection method based upon rank testing via singular value decomposition.

Proceedings ArticleDOI
14 Dec 1994
TL;DR: In this paper, it was shown that 4SID can be viewed as a linear regression multistep ahead prediction error method, with certain rank constraints, and that ARX models have nice properties in terms of system identification.
Abstract: State-space subspace system identification (4SID) has been suggested as an alternative to more traditional prediction error system identification, such as ARX least squares estimation. The aim of this note is to analyse the connections between these two different approaches to system identification. The conclusion is that 4SID can be viewed as a linear regression multistep ahead prediction error method, with certain rank constraints. This allows us to analyse 4SID methods within the standard framework of system identification and linear regression estimation. For example, it is shown that ARX models have nice properties in terms of 4SID identification. From a linear regression model, estimates of the extended observability matrix are found. Results from an asymptotic analysis are presented, i.e. explicit formulas for the asymptotic variances of the pole estimation error are given. From these expressions, some difficulties in choosing user specified parameters are pointed out in an example.

Proceedings ArticleDOI
26 Jun 1994
TL;DR: This work proposes a fuzzy robust linear regression that is not influenced by erroneous data; the model is built to be as rigid as possible, minimizing the total error between the model and the data.
Abstract: Since the fuzzy linear regression model was proposed in 1987, its possibilistic model has been employed to analyze data. From the viewpoint of fuzzy linear regression, data are understood to express the possibilities of a latent system. When data contain errors or are very irregular, the obtained regression model has an unnaturally wide possibility range. We propose a fuzzy robust linear regression which is not influenced by data with error. The model is built to be as rigid as possible, minimizing the total error between the model and the data. The robustness of the proposed model is shown using numerical examples.


Journal ArticleDOI
TL;DR: In this article, the 3-parameter Weibull distribution has been used for nonlinear regression analysis, and three pragmatic estimation methods have been proposed to deal with difficult problems such as estimates of the 3 parameters becoming nonpositive.
Abstract: The conventional techniques of linear regression analysis (linear least squares) applied to the 3-parameter Weibull distribution are extended (not modified), and new techniques are developed for the 3-parameter Weibull distribution. The three pragmatic estimation methods in this paper are simple, accurate, flexible, and powerful in dealing with difficult problems such as estimates of the 3 parameters becoming nonpositive. In addition, the inherent disadvantages of the 3-parameter Weibull distribution are revealed; the advantages of a new 3-parameter Weibull-like distribution over the original Weibull distribution are explored; and the potential of a 4-parameter Weibull-like distribution is briefly mentioned. This paper demonstrates how a general linear regression analysis or linear least-squares breaks away from the classical or modern nonlinear regression analysis or nonlinear least-squares. By adding a parameter to the simplest 2-parameter linear regression model (AB-model), two kinds of ABC models (elementary 3-parameter nonlinear regression models) are found, and then a 4-parameter AABC model is built as an example of multi-parameter nonlinear regression models. Although some other techniques are still necessary, additional applications of the ABC models are strongly implied.

Journal ArticleDOI
TL;DR: In this article, the case deletion model (CDM) and the mean shift outlier model (MSOM) are compared in a wide class of statistical models, including LSE, MLE, Bayesian estimates and M-estimates in linear and nonlinear regression models; MLE in generalized linear models and exponential family nonlinear models; and MLEs of transformation parameters of explanatory variables in Box-Cox regression models, and so on.
Abstract: In regression diagnostics, the case deletion model (CDM) and the mean shift outlier model (MSOM) are commonly used in practice. In this paper we show that the estimates of CDM and MSOM are equal in a wide class of statistical models, which include LSE, MLE, Bayesian estimate and M-estimate in linear and nonlinear regression models; MLE in generalized linear models and exponential family nonlinear models; and MLEs of transformation parameters of explanatory variables in Box-Cox regression models, and so on. Furthermore, we study some models in which the estimates are not exactly equal but are approximately equal for CDM and MSOM.

Journal ArticleDOI
TL;DR: Several adaptive versions of the minimum mean squared error estimator of the coefficient vector in a linear regression model are proposed in the literature as discussed by the authors, and some of these are compared here, and another estimator is also proposed.
Abstract: Several adaptive versions of the minimum mean squared error estimator of the coefficient vector in a linear regression model are proposed in the literature. Some of these are compared here, and another estimator is also proposed.