Author

Joseph W. McKean

Other affiliations: Pennsylvania State University
Bio: Joseph W. McKean is an academic researcher from Western Michigan University. The author has contributed to research in topics: Linear model & Rank (linear algebra). The author has an h-index of 28 and has co-authored 107 publications receiving 3,106 citations. Previous affiliations of Joseph W. McKean include Pennsylvania State University.


Papers
Book
30 Jan 1998
TL;DR: A monograph on robust nonparametric methods: norm-based (rank-based) estimation, testing, and confidence procedures for one-sample and two-sample location problems, linear models, fixed-effects designs, models with dependent error structure, and multivariate location models, with emphasis on the efficiency and robustness properties of the resulting inference.
Abstract (table of contents):
One-Sample Problems: Introduction. Location Model. Geometry and Inference in the Location Model. Examples. Properties of Norm-Based Inference. Robustness Properties of Norm-Based Inference. Inference and the Wilcoxon Signed-Rank Norm. Inference Based on General Signed-Rank Norms. Ranked Set Sampling. L1 Interpolated Confidence Intervals. Two-Sample Analysis.
Two-Sample Problems: Introduction. Geometric Motivation. Examples. Inference Based on the Mann-Whitney-Wilcoxon. General Rank Scores. L1 Analyses. Robustness Properties. Proportional Hazards. Two-Sample Rank Set Sampling (RSS). Two-Sample Scale Problem. Behrens-Fisher Problem. Paired Designs.
Linear Models: Introduction. Geometry of Estimation and Tests. Examples. Assumptions for Asymptotic Theory. Theory of Rank-Based Estimates. Theory of Rank-Based Tests. Implementation of the R Analysis. L1 Analysis. Diagnostics. Survival Analysis. Correlation Model. High Breakdown (HBR) Estimates. Diagnostics for Differentiating between Fits. Rank-Based Procedures for Nonlinear Models.
Experimental Designs: Fixed Effects: Introduction. One-Way Design. Multiple Comparison Procedures. Two-Way Crossed Factorial. Analysis of Covariance. Further Examples. Rank Transform.
Models with Dependent Error Structure: Introduction. General Mixed Models. Simple Mixed Models. Arnold Transformations. General Estimating Equations (GEE). Time Series.
Multivariate: Multivariate Location Model. Componentwise. Spatial Methods. Affine Equivariant and Invariant Methods. Robustness of Estimates of Location. Linear Model. Experimental Designs.
Appendix: Asymptotic Results. References. Index.
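The one-sample, norm-based inference outlined above can be made concrete with the Wilcoxon signed-rank norm. Below is a minimal Python sketch (simulated data; scipy's wilcoxon plus a direct Walsh-average computation, not code from the book) of the signed-rank test and the Hodges-Lehmann location estimate it induces:

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=30) + 0.5      # heavy-tailed sample, true location 0.5

# Signed-rank test of H0: location = 0
stat, p = wilcoxon(x)

# Hodges-Lehmann estimate: the median of the Walsh averages (x_i + x_j) / 2
i, j = np.triu_indices(len(x))
hl = np.median((x[i] + x[j]) / 2.0)
print(f"W = {stat:.1f}, p = {p:.4f}, Hodges-Lehmann estimate = {hl:.3f}")
```

The Hodges-Lehmann estimate is the location estimate that minimizes the Wilcoxon signed-rank norm of the residuals, which is why it pairs naturally with the test.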

505 citations

Journal ArticleDOI
TL;DR: The two-phase interrupted time-series design can often be modeled with a four-parameter design matrix, but writers differ in how they specify that matrix; contrary to common claims, the seemingly equivalent specifications do not lead to the same conclusions about level change.
Abstract: It has been recognized that the two-phase version of the interrupted time-series design can be frequently modeled using a four-parameter design matrix. There are differences across writers, however, in the details of the recommended design matrices to be used in the estimation of the four parameters of the model. Various writers imply that different methods of specifying the four-parameter design matrix all lead to the same conclusions; they do not. The tests and estimates for level change are dramatically different under the various seemingly equivalent design specifications. Examples of egregious errors of interpretation are presented and recommendations regarding the correct specification of the design matrix are made. The recommendations hold whether the model is estimated using ordinary least squares (for the case of approximately independent errors) or some more complex time-series approach (for the case of autocorrelated errors).
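The non-equivalence of the specifications is easy to demonstrate. Below is a minimal numpy sketch (simulated data; the phase lengths and coding choices are illustrative, not taken from the article) comparing two four-parameter design matrices that are often treated as interchangeable:

```python
import numpy as np

n1, n2 = 20, 20                          # pre- and post-intervention phase lengths
n = n1 + n2
t = np.arange(1, n + 1).astype(float)
d = (t > n1).astype(float)               # phase indicator

rng = np.random.default_rng(0)
y = 10 + 0.5 * t + 5 * d + 0.8 * d * (t - n1) + rng.normal(size=n)

# Specification A: slope-change regressor is zero at the first post-intervention point
XA = np.column_stack([np.ones(n), t, d, d * (t - n1 - 1)])
# Specification B: the seemingly equivalent raw interaction d * t
XB = np.column_stack([np.ones(n), t, d, d * t])

bA = np.linalg.lstsq(XA, y, rcond=None)[0]
bB = np.linalg.lstsq(XB, y, rcond=None)[0]
print("level-change estimate, specification A:", round(bA[2], 2))
print("level-change estimate, specification B:", round(bB[2], 2))  # dramatically different
```

The two matrices span the same column space, so the fitted values agree exactly; only the meaning of the coefficient on the phase indicator changes, which is precisely where the errors of interpretation arise.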

201 citations

Journal ArticleDOI
TL;DR: An R package, Rfit, is developed that uses standard linear model syntax and provides the main estimation, inference, and diagnostic functions for rank-based fits of linear models.
Abstract: In the 1970s, Jureckova and Jaeckel proposed rank estimation for linear models. Since that time, several authors have developed inference and diagnostic methods for these estimators. These rank-based estimators and their associated inference are highly efficient and are robust to outliers in response space. The methods include estimation of standard errors, tests of general linear hypotheses, confidence intervals, diagnostic procedures including studentized residuals, and measures of influential cases. We have developed an R package, Rfit, for computing these robust procedures. In this paper we highlight the main features of the package. The package uses standard linear model syntax and includes many of the main inference and diagnostic functions.
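Rfit itself is an R package, but the estimator it computes is easy to sketch. Below is a minimal Python version of the Jaeckel dispersion minimization with Wilcoxon scores (simulated data; this mimics the estimator, not Rfit's API):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import rankdata

def wilcoxon_dispersion(beta, X, y):
    """Jaeckel's dispersion: sum of Wilcoxon rank scores times residuals."""
    e = y - X @ beta
    scores = np.sqrt(12.0) * (rankdata(e) / (len(e) + 1) - 0.5)
    return np.sum(scores * e)

rng = np.random.default_rng(0)
n = 100
X = rng.normal(size=(n, 2))
y = 1.0 + X @ np.array([2.0, -1.0]) + rng.standard_t(df=2, size=n)  # heavy-tailed errors

# The dispersion function is invariant to an intercept, so fit the slopes only
fit = minimize(wilcoxon_dispersion, x0=np.zeros(2), args=(X, y), method="Nelder-Mead")
beta_hat = fit.x
alpha_hat = np.median(y - X @ beta_hat)   # intercept recovered from the residuals
print(beta_hat, alpha_hat)
```

In Rfit the same model is specified with its standard linear model syntax, and the package supplies the standard errors, tests, and diagnostics described above.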

199 citations

Journal ArticleDOI
TL;DR: A new method is proposed for the analysis of linear models that have autoregressive errors; the approach is relevant not only for analyzing small-sample time-series intervention models in the behavioral sciences, but also for a wide class of small-sample linear model problems.
Abstract: A new method for the analysis of linear models that have autoregressive errors is proposed. The approach is not only relevant in the behavioral sciences for analyzing small-sample time-series intervention models, but it is also appropriate for a wide class of small-sample linear model problems in which there is interest in inferential statements regarding all regression parameters and autoregressive parameters in the model. The methodology includes a double application of bootstrap procedures. The 1st application is used to obtain bias-adjusted estimates of the autoregressive parameters. The 2nd application is used to estimate the standard errors of the parameter estimates. Theoretical and Monte Carlo results are presented to demonstrate asymptotic and small-sample properties of the method; examples that illustrate advantages of the new approach over established time-series methods are described.
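A hedged sketch of the double-bootstrap idea follows (simplified relative to the article's procedure; the function names, replicate counts, and AR(1)-only error model are illustrative assumptions):

```python
import numpy as np

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

def ar1_rho(e):
    """Conventional lag-1 autocorrelation of a residual series."""
    return (e[1:] @ e[:-1]) / (e @ e)

def simulate_ar1(n, rho, sigma, rng):
    """Stationary AR(1) errors with innovation scale sigma."""
    u = np.empty(n)
    u[0] = rng.normal(scale=sigma / np.sqrt(1.0 - rho ** 2))
    for t in range(1, n):
        u[t] = rho * u[t - 1] + rng.normal(scale=sigma)
    return u

def double_bootstrap(X, y, B1=500, B2=500, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    beta = ols(X, y)
    e = y - X @ beta
    rho = ar1_rho(e)
    sigma = np.std(e[1:] - rho * e[:-1])            # innovation scale estimate

    # First application: estimate the small-sample bias of rho, then remove it.
    boot_rho = np.empty(B1)
    for b in range(B1):
        y_star = X @ beta + simulate_ar1(n, rho, sigma, rng)
        boot_rho[b] = ar1_rho(y_star - X @ ols(X, y_star))
    rho_adj = np.clip(rho - (boot_rho.mean() - rho), -0.99, 0.99)

    # Second application: bootstrap standard errors using the bias-adjusted rho.
    boot_beta = np.empty((B2, X.shape[1]))
    for b in range(B2):
        y_star = X @ beta + simulate_ar1(n, rho_adj, sigma, rng)
        boot_beta[b] = ols(X, y_star)
    return beta, rho_adj, boot_beta.std(axis=0)
```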

120 citations

Journal ArticleDOI
TL;DR: The small-sample properties of six autocorrelation estimators were investigated in an extensive Monte Carlo study; conventional estimators were shown to yield problems of estimation and inference in the form of inconsistencies between theoretical and empirical expectations, inconsistencies between theoretical and empirical error variances, and dramatic differences between nominal and empirical Type I error rates.
Abstract: The small sample properties of 6 autocorrelation estimators were investigated in an extensive Monte Carlo study. It was demonstrated that conventional estimators yield problems of estimation and inference in the form of (a) inconsistencies between theoretical and empirical expectations, (b) inconsistencies between theoretical and empirical error variances, and (c) dramatic differences between nominal and empirical Type I errors.
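The flavor of these findings is easy to reproduce. A small Python check (simulated white noise, so the true lag-1 autocorrelation is zero) shows the well-known downward bias of the conventional estimator in small samples:

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 10, 50_000
r1 = np.empty(reps)
for i in range(reps):
    e = rng.normal(size=n)          # white noise: true lag-1 autocorrelation is 0
    d = e - e.mean()
    r1[i] = (d[1:] @ d[:-1]) / (d @ d)
print(r1.mean())   # close to -1/n = -0.1 rather than 0: a large small-sample bias
```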

111 citations


Cited by
Book
21 Mar 2002
TL;DR: An essential textbook for any student or researcher in biology needing to design experiments, sampling programs, or analyses of the resulting data; it covers classical and Bayesian philosophies before advancing to linear and generalized linear models, including linear and logistic regression, simple and complex ANOVA models (factorial, nested, block, split-plot, repeated measures, and covariance designs), log-linear models, and multivariate techniques such as classification and ordination.
Abstract: An essential textbook for any student or researcher in biology needing to design experiments, sample programs or analyse the resulting data. The text begins with a revision of estimation and hypothesis testing methods, covering both classical and Bayesian philosophies, before advancing to the analysis of linear and generalized linear models. Topics covered include linear and logistic regression, simple and complex ANOVA models (for factorial, nested, block, split-plot and repeated measures and covariance designs), and log-linear models. Multivariate techniques, including classification and ordination, are then introduced. Special emphasis is placed on checking assumptions, exploratory data analysis and presentation of results. The main analyses are illustrated with many examples from published papers and there is an extensive reference list to both the statistical and biological literature. The book is supported by a website that provides all data sets, questions for each chapter and links to software.

9,509 citations

01 Jan 2016
TL;DR: A standard reference on applied statistical analysis with the S language and its implementations S-PLUS and R.

Abstract: Modern Applied Statistics with S is a comprehensive guide to statistical analysis with the S language, covering topics ranging from linear and generalized linear models to nonlinear and smooth regression, survival analysis, multivariate analysis, and time series.

5,249 citations

Journal ArticleDOI
01 May 1981
TL;DR: This book presents methods for detecting influential observations and outliers and for detecting and assessing collinearity in regression, together with applications and remedies.
Abstract: 1. Introduction and Overview. 2. Detecting Influential Observations and Outliers. 3. Detecting and Assessing Collinearity. 4. Applications and Remedies. 5. Research Issues and Directions for Extensions. Bibliography. Author Index. Subject Index.
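The influence and collinearity diagnostics this book introduced are now standard in most regression software. A minimal Python illustration (simulated data; statsmodels' OLSInfluence) of studentized residuals, leverage, and Cook's distance, with a condition number as a rough collinearity check:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
X = sm.add_constant(rng.normal(size=(30, 2)))
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=30)
y[0] += 8.0                                  # plant a single outlier

fit = sm.OLS(y, X).fit()
infl = fit.get_influence()
print(infl.resid_studentized_external[:3])   # externally studentized residuals
print(infl.hat_matrix_diag[:3])              # leverage values
print(infl.cooks_distance[0][:3])            # Cook's distances
print(np.linalg.cond(X))                     # condition number of the design matrix
```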

4,948 citations

Posted Content
TL;DR: In this book, the author provides a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter, for modelling economic and social time series, and addresses the special problems which the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.
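The local level model, the simplest structural time series model, makes the Kalman filter's role concrete. A minimal Python sketch (a hand-rolled filter with illustrative variance parameters, not code from the book):

```python
import numpy as np

def local_level_filter(y, var_eps, var_eta, a0=0.0, p0=1e7):
    """Kalman filter for the local level model:
    y_t = mu_t + eps_t (observation), mu_{t+1} = mu_t + eta_t (unobserved level).
    a0, p0 give a diffuse prior for the initial level."""
    a, p = a0, p0
    filtered = np.empty(len(y))
    for t, obs in enumerate(y):
        f = p + var_eps               # one-step prediction variance of y_t
        k = p / f                     # Kalman gain
        a = a + k * (obs - a)         # update the level with the prediction error
        p = p * (1.0 - k) + var_eta   # filtered variance, propagated one step ahead
        filtered[t] = a
    return filtered

# Illustrative use: recover a noisy random-walk level
rng = np.random.default_rng(3)
level = np.cumsum(rng.normal(scale=0.3, size=200))
y = level + rng.normal(scale=1.0, size=200)
estimate = local_level_filter(y, var_eps=1.0, var_eta=0.09)
```

The unobserved component mu_t has a direct interpretation as the local trend, which is the sense in which structural models differ from ARIMA models.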

4,252 citations

Book
01 Nov 2002
TL;DR: Introduces Test-Driven Development (TDD), a style of development that drives code with automated tests and aims to dramatically reduce the defect density of code while making the subject of the work crystal clear to all involved.
Abstract: From the Book: "Clean code that works" is Ron Jeffries' pithy phrase. The goal is clean code that works, and for a whole bunch of reasons:

- Clean code that works is a predictable way to develop. You know when you are finished, without having to worry about a long bug trail.
- Clean code that works gives you a chance to learn all the lessons that the code has to teach you. If you only ever slap together the first thing you think of, you never have time to think of a second, better, thing.
- Clean code that works improves the lives of users of our software.
- Clean code that works lets your teammates count on you, and you on them.
- Writing clean code that works feels good.

But how do you get to clean code that works? Many forces drive you away from clean code, and even code that works. Without taking too much counsel of our fears, here's what we do: drive development with automated tests, a style of development called "Test-Driven Development" (TDD for short). In Test-Driven Development, you:

- Write new code only if you first have a failing automated test.
- Eliminate duplication.

Two simple rules, but they generate complex individual and group behavior. Some of the technical implications are:

- You must design organically, with running code providing feedback between decisions.
- You must write your own tests, since you can't wait twenty times a day for someone else to write a test.
- Your development environment must provide rapid response to small changes.
- Your designs must consist of many highly cohesive, loosely coupled components, just to make testing easy.

The two rules imply an order to the tasks of programming:

1. Red: write a little test that doesn't work, perhaps doesn't even compile at first.
2. Green: make the test work quickly, committing whatever sins necessary in the process.
3. Refactor: eliminate all the duplication created in just getting the test to work.

Red/green/refactor: the TDD mantra. Assuming for the moment that such a style is possible, it might be possible to dramatically reduce the defect density of code and make the subject of work crystal clear to all involved. If so, writing only code demanded by failing tests also has social implications:

- If the defect density can be reduced enough, QA can shift from reactive to pro-active work.
- If the number of nasty surprises can be reduced enough, project managers can estimate accurately enough to involve real customers in daily development.
- If the topics of technical conversations can be made clear enough, programmers can work in minute-by-minute collaboration instead of daily or weekly collaboration.
- Again, if the defect density can be reduced enough, we can have shippable software with new functionality every day, leading to new business relationships with customers.

So, the concept is simple, but what's my motivation? Why would a programmer take on the additional work of writing automated tests? Why would a programmer work in tiny little steps when their mind is capable of great soaring swoops of design? Courage.

Courage. Test-driven development is a way of managing fear during programming. I don't mean fear in a bad way, pow widdle prwogwammew needs a pacifiew, but fear in the legitimate, this-is-a-hard-problem-and-I-can't-see-the-end-from-the-beginning sense. If pain is nature's way of saying "Stop!", fear is nature's way of saying "Be careful." Being careful is good, but fear has a host of other effects:

- Makes you tentative.
- Makes you want to communicate less.
- Makes you shy from feedback.
- Makes you grumpy.

None of these effects are helpful when programming, especially when programming something hard. So, how can you face a difficult situation and:

- Instead of being tentative, begin learning concretely as quickly as possible.
- Instead of clamming up, communicate more clearly.
- Instead of avoiding feedback, search out helpful, concrete feedback.
- (You'll have to work on grumpiness on your own.)

Imagine programming as turning a crank to pull a bucket of water from a well. When the bucket is small, a free-spinning crank is fine. When the bucket is big and full of water, you're going to get tired before the bucket is all the way up. You need a ratchet mechanism to enable you to rest between bouts of cranking. The heavier the bucket, the closer the teeth need to be on the ratchet. The tests in test-driven development are the teeth of the ratchet. Once you get one test working, you know it is working, now and forever. You are one step closer to having everything working than you were when the test was broken. Now get the next one working, and the next, and the next. By analogy, the tougher the programming problem, the less ground should be covered by each test.

Readers of Extreme Programming Explained will notice a difference in tone between XP and TDD. TDD isn't an absolute like Extreme Programming. XP says, "Here are things you must be able to do to be prepared to evolve further." TDD is a little fuzzier. TDD is an awareness of the gap between decision and feedback during programming, and techniques to control that gap. "What if I do a paper design for a week, then test-drive the code? Is that TDD?" Sure, it's TDD. You were aware of the gap between decision and feedback and you controlled the gap deliberately.

That said, most people who learn TDD find their programming practice changed for good. "Test Infected" is the phrase Erich Gamma coined to describe this shift. You might find yourself writing more tests earlier, and working in smaller steps than you ever dreamed would be sensible. On the other hand, some programmers learn TDD and go back to their earlier practices, reserving TDD for special occasions when ordinary programming isn't making progress.

There are certainly programming tasks that can't be driven solely by tests (or at least, not yet). Security software and concurrency, for example, are two topics where TDD is not sufficient to mechanically demonstrate that the goals of the software have been met. Security relies on essentially defect-free code, true, but also on human judgement about the methods used to secure the software. Subtle concurrency problems can't be reliably duplicated by running the code.

Once you are finished reading this book, you should be ready to:

- Start simply.
- Write automated tests.
- Refactor to add design decisions one at a time.

This book is organized into three sections:

1. An example of writing typical model code using TDD. The example is one I got from Ward Cunningham years ago, and have used many times since: multi-currency arithmetic. In it you will learn to write tests before code and grow a design organically.
2. An example of testing more complicated logic, including reflection and exceptions, by developing a framework for automated testing. This example also serves to introduce you to the xUnit architecture that is at the heart of many programmer-oriented testing tools. In the second example you will learn to work in even smaller steps than in the first example, including the kind of self-referential hooha beloved of computer scientists.
3. Patterns for TDD. Included are patterns for deciding what tests to write, how to write tests using xUnit, and a greatest-hits selection of the design patterns and refactorings used in the examples.

I wrote the examples imagining a pair programming session. If you like looking at the map before wandering around, you may want to go straight to the patterns in Section 3 and use the examples as illustrations. If you prefer just wandering around and then looking at the map to see where you've been, try reading the examples through and referring to the patterns when you want more detail about a technique, then using the patterns as a reference. Several reviewers have commented they got the most out of the examples when they started up a programming environment and entered the code and ran the tests as they read.

A note about the examples. Both examples, multi-currency calculation and a testing framework, appear simple. There are (and I have seen) complicated, ugly, messy ways of solving the same problems. I could have chosen one of those complicated, ugly, messy solutions to give the book an air of "reality." However, my goal, and I hope your goal, is to write clean code that works. Before teeing off on the examples as being too simple, spend 15 seconds imagining a programming world in which all code was this clear and direct, where there were no complicated solutions, only apparently complicated problems begging for careful thought. TDD is a practice that can help you lead yourself to exactly that careful thought.
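A minimal red/green illustration in Python's unittest (the Money class here is hypothetical, loosely echoing the book's multi-currency arithmetic example; the book's own code is in Java):

```python
import unittest

# Green state: just enough Money to make the first test pass.
class Money:
    def __init__(self, amount, currency):
        self.amount, self.currency = amount, currency

    def times(self, multiplier):
        return Money(self.amount * multiplier, self.currency)

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

class TestMoney(unittest.TestCase):
    # Red came first: this test is written before Money exists, and fails.
    def test_multiplication(self):
        five = Money(5, "USD")
        self.assertEqual(Money(10, "USD"), five.times(2))

if __name__ == "__main__":
    unittest.main()
```

The refactor step would then remove any duplication introduced while making the test pass, before the next failing test is written.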

1,864 citations