Author

John E. Hunter

Other affiliations: University of Iowa
Bio: John E. Hunter is an academic researcher at Michigan State University, with a previous affiliation at the University of Iowa. His research focuses on job performance and test validity. He has an h-index of 69 and has co-authored 174 publications receiving 28,653 citations.


Papers
Book
01 Jan 1990
TL;DR: In this book, the authors present methods of psychometric meta-analysis, including corrections for study artifacts and artifact distributions, and treat second-order sampling error and related issues.
Abstract (table of contents):
Part One, Introduction to Meta-Analysis: Integrating Research Findings Across Studies; Study Artifacts and Their Impact on Study Outcomes.
Part Two, Meta-Analysis of Correlations: Meta-Analysis of Correlations Corrected Individually for Artifacts; Meta-Analysis of Correlations Using Artifact Distributions; Technical Questions in Meta-Analysis of Correlations.
Part Three, Meta-Analysis of Experimental Effects and Other Dichotomous Comparisons: Treatment Effects: Experimental Artifacts and Their Impact; Meta-Analysis Methods for d Values; Technical Questions in Meta-Analysis of d Values.
Part Four, General Issues in Meta-Analysis: Second-Order Sampling Error and Related Issues; Cumulation of Findings Within Studies; Methods of Integrating Findings Across Studies; Locating, Selecting, and Evaluating Studies; General Criticisms of Meta-Analysis; Summary of Psychometric Meta-Analysis.
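The core of the approach outlined in the contents above can be sketched in a few lines. This is a minimal illustration only: the correlations, sample sizes, and reliability are invented, and just two steps (sample-size weighting and correction for criterion unreliability) are shown.

```python
def weighted_mean_r(rs, ns):
    """Sample-size-weighted mean correlation across studies (bare-bones step)."""
    return sum(n * r for r, n in zip(rs, ns)) / sum(ns)

def correct_for_criterion_unreliability(r, r_yy):
    """Disattenuate an observed validity for unreliability in the criterion."""
    return r / r_yy ** 0.5

# Invented example data: three studies of one predictor-criterion relation.
rs = [0.20, 0.35, 0.28]   # observed correlations
ns = [100, 300, 200]      # sample sizes
mean_r = weighted_mean_r(rs, ns)                              # observed mean validity
rho = correct_for_criterion_unreliability(mean_r, r_yy=0.81)  # assumed reliability
```

Weighting by sample size downweights small studies, whose observed correlations carry the most sampling error.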

4,673 citations

Journal ArticleDOI
TL;DR: In this paper, the authors summarized the practical and theoretical implications of 85 years of research in personnel selection and concluded that the most important property of a personnel assessment method is predictive validity: the ability to predict future job performance, job-related learning (such as amount of learning in training and development programs), and other criteria.
Abstract: This article summarizes the practical and theoretical implications of 85 years of research in personnel selection. On the basis of meta-analytic findings, this article presents the validity of 19 selection procedures for predicting job performance and training performance and the validity of paired combinations of general mental ability (GMA) and the 18 other selection procedures. Overall, the 3 combinations with the highest multivariate validity and utility for job performance were GMA plus a work sample test (mean validity of .63), GMA plus an integrity test (mean validity of .65), and GMA plus a structured interview (mean validity of .63). A further advantage of the latter 2 combinations is that they can be used for both entry-level selection and selection of experienced employees. The practical utility implications of these summary findings are substantial. The implications of these research findings for the development of theories of job performance are discussed. From the point of view of practical value, the most important property of a personnel assessment method is predictive validity: the ability to predict future job performance, job-related learning (such as amount of learning in training and development programs), and other criteria. The predictive validity coefficient is directly proportional to the practical economic value (utility) of the assessment method (Brogden, 1949; Schmidt, Hunter, McKenzie, & Muldrow, 1979). Use of hiring methods with increased predictive validity leads to substantial increases in employee performance as measured in percentage increases in output, increased monetary value of output, and increased learning of job-related skills (Hunter, Schmidt, & Judiesch, 1990). Today, the validity of different personnel measures can be determined with the aid of 85 years of research.
The most well-known conclusion from this research is that, for hiring employees without previous experience in the job, the most valid predictor of future performance and learning is general mental ability ([GMA], i.e., intelligence or general cognitive ability; Hunter & Hunter, 1984; Ree & Earles, 1992). GMA can be measured using commercially available tests. However, many other measures can also contribute to the overall validity of the selection process. These include, for example, measures of
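The proportionality between validity and utility cited to Brogden (1949) can be written as a one-line function. The workforce figures below are invented for illustration and do not come from the article.

```python
def brogden_utility_gain(n_hired, validity, sd_y, mean_z_of_hired):
    """Brogden (1949): annual utility gain from selection is
    N * r * SD_y * z-bar, i.e., directly proportional to validity r."""
    return n_hired * validity * sd_y * mean_z_of_hired

# Invented figures: 100 hires, performance SD worth $40,000/year in output,
# hires averaging one SD above the applicant mean on the predictor.
low = brogden_utility_gain(100, 0.30, 40_000, 1.0)
high = brogden_utility_gain(100, 0.60, 40_000, 1.0)
# Doubling validity doubles the estimated utility gain.
```

The linearity is the key practical point: any increase in predictive validity translates directly into a proportional economic gain.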

3,792 citations

Journal ArticleDOI
TL;DR: In this article, a meta-analysis of the cumulative research on various predictors of job performance shows that for entry-level jobs there is no predictor with validity equal to that of ability, which has a mean validity of .53.
Abstract: Meta-analysis of the cumulative research on various predictors of job performance shows that for entry-level jobs there is no predictor with validity equal to that of ability, which has a mean validity of .53. For selection on the basis of current job performance, the work sample test, with mean validity of .54, is slightly better. For federal entry-level jobs, substitution of an alternative predictor would cost from $3.12 billion (job tryout) to $15.89 billion per year (age). Hiring on ability has a utility of $15.61 billion per year, but affects minority groups adversely. Hiring on ability by quotas would decrease this utility by 5%. A third strategy, using a low cutoff score, would decrease utility by 83%. Using other predictors in conjunction with ability tests might improve validity and reduce adverse impact, but there is as yet no database for studying this possibility.

2,099 citations

Book
01 Jan 1990
TL;DR: Methods of Meta-Analysis.
Abstract: Methods of Meta-Analysis (catalog record of the Central Library of Iran University of Medical Sciences).

1,553 citations

Book
01 Oct 1982
TL;DR: Meta-analysis is a way of synthesizing previous research on a subject in order to assess what has already been learned, and even to derive new conclusions from the mass of already researched data.
Abstract: Meta-analysis is a way of synthesizing previous research on a subject in order to assess what has already been learned, and even to derive new conclusions from the mass of already researched data. In the opinion of many social scientists, it offers hope for a truly cumulative social scientific knowledge.

1,319 citations


Cited by
Journal ArticleDOI
TL;DR: In this paper, the authors provide guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development, and present a comprehensive, two-step modeling approach that employs a series of nested models and sequential chi-square difference tests.
Abstract: In this article, we provide guidance for substantive researchers on the use of structural equation modeling in practice for theory testing and development. We present a comprehensive, two-step modeling approach that employs a series of nested models and sequential chi-square difference tests. We discuss the comparative advantages of this approach over a one-step approach. Considerations in specification, assessment of fit, and respecification of measurement models using confirmatory factor analysis are reviewed. As background to the two-step approach, the distinction between exploratory and confirmatory analysis, the distinction between complementary approaches for theory testing versus predictive application, and some developments in estimation methods also are discussed.
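The sequential chi-square difference tests in the two-step approach compare pairs of nested models. A minimal sketch follows; the fit statistics are invented, and SciPy is assumed to be available.

```python
from scipy.stats import chi2

def chisq_difference_test(chisq_nested, df_nested, chisq_full, df_full):
    """Likelihood-ratio test between two nested SEM models.
    The more constrained (nested) model has the larger chi-square and df."""
    d_chisq = chisq_nested - chisq_full
    d_df = df_nested - df_full
    p_value = chi2.sf(d_chisq, d_df)  # upper-tail probability
    return d_chisq, d_df, p_value

# Invented fit statistics: a constrained model vs. a less constrained one.
d_chisq, d_df, p = chisq_difference_test(152.3, 62, 139.8, 59)
# A small p suggests the added constraints significantly worsen fit,
# favoring the less constrained model.
```

Repeating this test across a series of nested models is what makes the approach sequential: each comparison isolates the fit cost of one set of constraints.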

34,720 citations

Journal ArticleDOI
TL;DR: This Explanation and Elaboration document presents updated guidelines for the reporting of systematic reviews and meta-analyses, explaining the meaning and rationale of each item in the PRISMA checklist.
Abstract: Systematic reviews and meta-analyses are essential to summarize evidence relating to efficacy and safety of health care interventions accurately and reliably. The clarity and transparency of these reports, however, is not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (QUality Of Reporting Of Meta-analysis) Statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realizing these issues, an international group that included experienced authors and methodologists developed PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA Statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this Explanation and Elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA Statement, this document, and the associated Web site (http://www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

25,711 citations

Book
01 Jan 1980
TL;DR: In this book, the authors discuss the context of educational research, the planning of educational research, styles of educational research, strategies and instruments for data collection, and methods of data analysis.
Abstract (table of contents): Part One: The Context of Educational Research; Part Two: Planning Educational Research; Part Three: Styles of Educational Research; Part Four: Strategies and Instruments for Data Collection and Researching; Part Five: Data Analysis.

21,163 citations

Journal ArticleDOI
TL;DR: The four articles in this special section on meta-analysis illustrate some of the complexities entailed in meta-analytic methods, and they contribute both to advancing the methodology and to clarifying complexities that can befuddle researchers.
Abstract: During the past 30 years, meta-analysis has been an indispensable tool for revealing the hidden meaning of our research literatures. The four articles in this special section on meta-analysis illustrate …

20,272 citations

Journal ArticleDOI
21 Jul 2009, BMJ
TL;DR: The meaning and rationale for each checklist item are explained, with an example of good reporting and, where possible, references to relevant empirical studies and methodological literature.
Abstract: Systematic reviews and meta-analyses are essential to summarise evidence relating to efficacy and safety of healthcare interventions accurately and reliably. The clarity and transparency of these reports, however, are not optimal. Poor reporting of systematic reviews diminishes their value to clinicians, policy makers, and other users. Since the development of the QUOROM (quality of reporting of meta-analysis) statement—a reporting guideline published in 1999—there have been several conceptual, methodological, and practical advances regarding the conduct and reporting of systematic reviews and meta-analyses. Also, reviews of published systematic reviews have found that key information about these studies is often poorly reported. Realising these issues, an international group that included experienced authors and methodologists developed PRISMA (preferred reporting items for systematic reviews and meta-analyses) as an evolution of the original QUOROM guideline for systematic reviews and meta-analyses of evaluations of health care interventions. The PRISMA statement consists of a 27-item checklist and a four-phase flow diagram. The checklist includes items deemed essential for transparent reporting of a systematic review. In this explanation and elaboration document, we explain the meaning and rationale for each checklist item. For each item, we include an example of good reporting and, where possible, references to relevant empirical studies and methodological literature. The PRISMA statement, this document, and the associated website (www.prisma-statement.org/) should be helpful resources to improve reporting of systematic reviews and meta-analyses.

13,813 citations