scispace - formally typeset
Author

A. J. Scott

Bio: A. J. Scott is an academic researcher. The author has contributed to research in topics: Logistic regression. The author has an h-index of 1, having co-authored 1 publication receiving 29,123 citations.

Papers
Journal ArticleDOI
TL;DR: Applied Logistic Regression, Third Edition provides an easily accessible introduction to the logistic regression model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables.
Abstract: "A new edition of the definitive guide to logistic regression modeling for health science and other applications. This thoroughly expanded Third Edition provides an easily accessible introduction to the logistic regression (LR) model and highlights the power of this model by examining the relationship between a dichotomous outcome and a set of covariables. Applied Logistic Regression, Third Edition emphasizes applications in the health sciences and handpicks topics that best suit the use of modern statistical software. The book provides readers with state-of-the-art techniques for building, interpreting, and assessing the performance of LR models. New and updated features include: a chapter on the analysis of correlated outcome data; a wealth of additional material for topics ranging from Bayesian methods to assessing model fit; rich data sets from real-world studies that demonstrate each method under discussion; and detailed examples and interpretation of the presented results, as well as exercises throughout. Applied Logistic Regression, Third Edition is a must-have guide for professionals and researchers who need to model nominal or ordinal scaled outcome variables in public health, medicine, and the social sciences, as well as a wide range of other fields and disciplines."
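The dichotomous-outcome setup the abstract describes can be sketched in a few lines. This is a minimal illustration, not code from the book: the data below are simulated, and scikit-learn's LogisticRegression (which applies a mild L2 penalty by default) stands in for the statistical software the text discusses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Simulated data: a single covariate (age) and a dichotomous outcome,
# generated from a true logistic model with slope 0.1 per year.
rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 10, n)
p_true = 1 / (1 + np.exp(-0.1 * (age - 50)))
outcome = rng.binomial(1, p_true)

# Fit the logistic regression model (note: dedicated statistical packages
# report unpenalized maximum-likelihood estimates instead).
model = LogisticRegression().fit(age.reshape(-1, 1), outcome)

# The exponentiated slope is the odds ratio per one-year increase in age.
odds_ratio = np.exp(model.coef_[0][0])
```

With data generated from a positive slope, the fitted odds ratio comes out above 1, matching the book's core interpretation of LR coefficients.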

30,190 citations


Cited by
Journal ArticleDOI
TL;DR: This article gives an introduction to the subject of classification and regression trees by reviewing some widely available algorithms and comparing their capabilities, strengths, and weakness in two examples.
Abstract: Classification and regression trees are machine-learning methods for constructing prediction models from data. The models are obtained by recursively partitioning the data space and fitting a simple prediction model within each partition. As a result, the partitioning can be represented graphically as a decision tree. Classification trees are designed for dependent variables that take a finite number of unordered values, with prediction error measured in terms of misclassification cost. Regression trees are for dependent variables that take continuous or ordered discrete values, with prediction error typically measured by the squared difference between the observed and predicted values. This article gives an introduction to the subject by reviewing some widely available algorithms and comparing their capabilities, strengths, and weaknesses in two examples. © 2011 John Wiley & Sons, Inc. WIREs Data Mining Knowl Discov 2011, 1, 14-23. DOI: 10.1002/widm.8
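The recursive partitioning described above can be demonstrated with scikit-learn's CART-style implementation. A hypothetical sketch on simulated data, where the true classification rule is a single split on the first feature:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Simulated data: class 1 whenever the first feature exceeds 0.5.
rng = np.random.default_rng(1)
X = rng.uniform(0, 1, (200, 2))
y = (X[:, 0] > 0.5).astype(int)

# Recursively partition the feature space; max_depth caps the tree size.
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)

# The fitted partition can be rendered as the decision tree the article
# describes; each internal node is one split of the data space.
print(export_text(tree, feature_names=["x0", "x1"]))
```

Because the true rule is a single axis-aligned split, the tree recovers it exactly and classifies the training data perfectly.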

16,974 citations

Book
29 Mar 2013
TL;DR: This book develops linear mixed-effects (LME) and nonlinear mixed-effects (NLME) models, covering the theory and computational methods behind them, the structure of grouped data, and how to fit and extend both classes of model.
Abstract: Contents: Linear Mixed-Effects; Theory and Computational Methods for LME Models; Structure of Grouped Data; Fitting LME Models; Extending the Basic LME Model; Nonlinear Mixed-Effects; Theory and Computational Methods for NLME Models; Fitting NLME Models.
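As a toy illustration (using NumPy, not the book's nlme software) of the grouped-data structure behind a random-intercept LME model, one can simulate groups and split the total variance into between- and within-group components; the group sizes and variances below are made-up assumptions.

```python
import numpy as np

# Simulate grouped data from a random-intercept model:
# y_ij = b_i + e_ij, with b_i ~ N(0, 2^2) and e_ij ~ N(0, 1).
rng = np.random.default_rng(2)
n_groups, n_per = 30, 20
intercepts = rng.normal(0, 2.0, n_groups)
y = intercepts[:, None] + rng.normal(0, 1.0, (n_groups, n_per))

# Method-of-moments decomposition of the variance:
# the average within-group variance estimates the residual variance
# (true value 1), and the variance of group means, corrected for its
# sampling noise, estimates the random-intercept variance (true value 4).
within_var = y.var(axis=1, ddof=1).mean()
between_var = y.mean(axis=1).var(ddof=1) - within_var / n_per
```

Software such as nlme or lme4 estimates these same variance components by (restricted) maximum likelihood rather than this simple moment split, but the grouped structure being modeled is the same.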

10,715 citations

Book
21 Mar 2002
TL;DR: An essential textbook for any student or researcher in biology who needs to design experiments, run sampling programs, or analyse the resulting data. The text covers both classical and Bayesian philosophies before advancing to the analysis of linear and generalized linear models. Topics include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced.
Abstract: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs, or analyse the resulting data. The text begins with a revision of estimation and hypothesis testing methods, covering both classical and Bayesian philosophies, before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced. Special emphasis is placed on checking assumptions, exploratory data analysis, and presentation of results. The main analyses are illustrated with many examples from published papers, and there is an extensive reference list to both the statistical and biological literature. The book is supported by a website that provides all data sets, questions for each chapter, and links to software.
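One of the simplest analyses in the book's scope, a one-way ANOVA, can be run with SciPy. The groups, means, and sample sizes below are illustrative assumptions, not data from the book: only treat_a has a true effect.

```python
import numpy as np
from scipy.stats import f_oneway

# Simulated responses for three hypothetical groups; treat_a's mean is
# shifted +3 units over the control, treat_b's is not.
rng = np.random.default_rng(3)
control = rng.normal(10.0, 2.0, 25)
treat_a = rng.normal(13.0, 2.0, 25)
treat_b = rng.normal(10.0, 2.0, 25)

# One-way ANOVA: does at least one group mean differ from the others?
f_stat, p_value = f_oneway(control, treat_a, treat_b)
```

With a 1.5-standard-deviation shift in one group, the F test is comfortably significant; the follow-up question of *which* group differs belongs to the multiple-comparison procedures the book also covers.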

9,509 citations

Journal ArticleDOI
18 Jun 2003-JAMA
TL;DR: Major depressive disorder is a common disorder, widely distributed in the population, and usually associated with substantial symptom severity and role impairment; while the recent increase in treatment is encouraging, inadequate treatment remains a serious concern.
Abstract: Context: Uncertainties exist about the prevalence and correlates of major depressive disorder (MDD). Objective: To present nationally representative data on the prevalence and correlates of MDD by Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV) criteria, and on patterns and correlates of treatment and treatment adequacy, from the recently completed National Comorbidity Survey Replication (NCS-R). Design: Face-to-face household survey conducted from February 2001 to December 2002. Setting: The 48 contiguous United States. Participants: Household residents aged 18 years or older (N = 9090) who responded to the NCS-R survey. Main Outcome Measures: Prevalence and correlates of MDD using the World Health Organization's (WHO) Composite International Diagnostic Interview (CIDI); 12-month severity with the Quick Inventory of Depressive Symptomatology Self-Report (QIDS-SR), the Sheehan Disability Scale (SDS), and the WHO disability assessment scale (WHO-DAS). Clinical reinterviews used the Structured Clinical Interview for DSM-IV. Results: The lifetime prevalence of CIDI MDD was 16.2% (95% confidence interval [CI], 15.1-17.3) (32.6-35.1 million US adults) and the 12-month prevalence was 6.6% (95% CI, 5.9-7.3) (13.1-14.2 million US adults). Virtually all CIDI 12-month cases were independently classified as clinically significant using the QIDS-SR, with 10.4% mild, 38.6% moderate, 38.0% severe, and 12.9% very severe. Mean episode duration was 16 weeks (95% CI, 15.1-17.3). Role impairment as measured by the SDS was substantial, as indicated by 59.3% of 12-month cases with severe or very severe role impairment. Most lifetime (72.1%) and 12-month (78.5%) cases had comorbid CIDI/DSM-IV disorders, with MDD only rarely primary. Although 51.6% (95% CI, 46.1-57.2) of 12-month cases received health care treatment for MDD, treatment was adequate in only 41.9% (95% CI, 35.9-47.9) of these cases, resulting in 21.7% (95% CI, 18.1-25.2) of 12-month MDD being adequately treated. Sociodemographic correlates of treatment were far less numerous than those of prevalence. Conclusions: Major depressive disorder is a common disorder, widely distributed in the population, and usually associated with substantial symptom severity and role impairment. While the recent increase in treatment is encouraging, inadequate treatment is a serious concern. Emphasis on screening and expansion of treatment needs to be accompanied by a parallel emphasis on treatment quality improvement.

7,706 citations

Journal ArticleDOI
TL;DR: In this paper, instead of selecting factors by stepwise backward elimination, the authors focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection.
Abstract: Summary. We consider the problem of selecting grouped variables (factors) for accurate prediction in regression. Such a problem arises naturally in many practical situations with the multifactor analysis-of-variance problem as the most important and well-known example. Instead of selecting factors by stepwise backward elimination, we focus on the accuracy of estimation and consider extensions of the lasso, the LARS algorithm and the non-negative garrotte for factor selection. The lasso, the LARS algorithm and the non-negative garrotte are recently proposed regression methods that can be used to select individual variables. We study and propose efficient algorithms for the extensions of these methods for factor selection and show that these extensions give superior performance to the traditional stepwise backward elimination method in factor selection problems. We study the similarities and the differences between these methods. Simulations and real examples are used to illustrate the methods.
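A minimal sketch of the group-lasso idea from the abstract: a proximal-gradient solver in which each factor's coefficient block is shrunk jointly, so whole groups drop out of the model together. The data, penalty weight, and step-size rule here are illustrative assumptions, not the paper's algorithms (which extend the lasso, LARS, and the non-negative garrotte).

```python
import numpy as np

def group_lasso(X, y, groups, lam, n_iter=2000):
    """Proximal gradient on 1/2 ||y - X b||^2 + lam * sum_g sqrt(p_g) ||b_g||_2."""
    groups = np.asarray(groups)
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L for the smooth part
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = beta - step * (X.T @ (X @ beta - y))  # gradient step
        for g in np.unique(groups):
            idx = groups == g
            norm = np.linalg.norm(z[idx])
            # Block soft-thresholding: the whole group shrinks (or dies) together.
            shrink = max(0.0, 1.0 - step * lam * np.sqrt(idx.sum()) / norm) if norm > 0 else 0.0
            beta[idx] = shrink * z[idx]
    return beta

# Toy problem: the first factor (two coefficients) is truly active, the
# second factor is pure noise and should be zeroed out as a group.
rng = np.random.default_rng(4)
X = rng.normal(size=(100, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 0.1, 100)
beta = group_lasso(X, y, groups=[0, 0, 1, 1], lam=20.0)
```

Unlike the ordinary lasso, which can keep one dummy variable of a factor and drop another, the block penalty selects or discards each factor as a unit, which is the paper's motivation for the extension.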

7,400 citations