Author

Catherine E. Mathers

Bio: Catherine E. Mathers is an academic researcher from James Madison University. The author has contributed to research in the topics of Test (assessment) and Higher education. The author has an h-index of 3 and has co-authored 3 publications receiving 34 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, students were randomly assigned to one of three test instruction conditions intended to increase test relevance while keeping the test low-stakes to examinees, and test instructions did not impact average perceived test importance, examinee effort, or test performance.
Abstract: Assessment specialists expend a great deal of energy to promote valid inferences from test scores gathered in low-stakes testing contexts. Given the indirect effect of perceived test importance on test performance via examinee effort, assessment practitioners have manipulated test instructions with the goal of increasing perceived test importance. Importantly, no studies have investigated the impact of test instructions on this indirect effect. In the current study, students were randomly assigned to one of three test instruction conditions intended to increase test relevance while keeping the test low-stakes to examinees. Test instructions did not impact average perceived test importance, examinee effort, or test performance. Furthermore, the indirect relationship between importance and performance via effort was not moderated by instructions. Thus, the effect of perceived test importance on test scores via expended effort appears consistent across different messages regarding the personal relevance of t...

17 citations

Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of US college coursework on student learning gains and found that students gained, on average, 3.72 points on a 66-item test of quantitative and scientific reasoning after experiencing 1.5 years of college.
Abstract: Answering a call put forth decades ago by the higher education community and the federal government, we investigated the impact of US college coursework on student learning gains. Students gained, on average, 3.72 points on a 66-item test of quantitative and scientific reasoning after experiencing 1.5 years of college. Gain scores were unrelated to the number of quantitative and scientific reasoning courses completed, both with and without controlling for students’ personal characteristics. Unexpectedly, yet fortunately, gain scores showed no discernible difference when corrected for low test-taking effort, which indicated test-taking effort did not compromise the validity of the test scores. When gain scores were disaggregated by amount of completed coursework, the estimated gain scores of students with quantitative and scientific reasoning coursework were smaller than what quantitative and scientific reasoning faculty expected or desired. In sum, although students appear on average to be makin...

15 citations

01 Jan 2016
TL;DR: Mathers examined whether test session instruction manipulations affect the psychometric properties of a popular self-report measure of test-taking motivation, finding that a two-factor structure and adequate score reliability held across instruction conditions.
Abstract: Catherine E. Mathers James Madison University Abstract Research investigating methods to influence examinee motivation during low-stakes assessment of student learning outcomes has involved manipulating test session instructions. The impact of instructions is often evaluated using a popular self-report measure of test-taking motivation. However, the impact of these manipulations on the psychometric properties of the test-taking motivation measure has yet to be investigated, resulting in questions regarding the comparability of motivation scores across instruction conditions and the scoring of the measure. To address these questions, the factor structure and reliability of test-taking motivation scores were examined across instruction conditions during a low-stakes assessment session designed to address higher education accountability mandates. Incoming first-year college students were randomly assigned to one of three instruction conditions where personal consequences associated with test results were incrementally increased. Confirmatory factor analyses indicated a two-factor structure of test-taking motivation was supported across conditions. Moreover, reliability of motivation scores was adequate even in the condition with greatest personal consequence, which was reassuring given low reliability has been found in high-stakes contexts. Thus, the findings support the use of this self-report measure for the valuable research that informs motivation instruction interventions for low-stakes testing initiatives common in higher education assessment.

14 citations


Cited by
01 Jan 2013
TL;DR: This entry reproduces a journal's editorial board listing rather than an article abstract; the citation count reflects the journal volume as a whole.
Abstract: EDITORIAL BOARD Robert Davison Aviles, Bradley University Harley E. Baker, California State University–Channel Islands Jean-Guy Blais, Universite de Montreal, Canada Catherine Y. Chang, Georgia State University Robert C. Chope, San Francisco State University Kevin O. Cokley, University of Missouri, Columbia Patricia B. Elmore, Southern Illinois University Shawn Fitzgerald, Kent State University John J. Fremer, Educational Testing Service Vicente Ponsoda Gil, Universidad Autonoma de Madrid, Spain Jo-Ida C. Hansen, University of Minnesota Charles C. Healy, University of California at Los Angeles Robin K. Henson, University of North Texas Flaviu Adrian Hodis, Victoria University of Wellington, New Zealand Janet K. Holt, Northern Illinois University David A. Jepsen, The University of Iowa Gregory Arief D. Liem, National Institute of Education, Nanyang Technological University Wei-Cheng J. Mau, Wichita State University Larry Maucieri, Governors State College Patricia Jo McDivitt, Data Recognition Corporation Peter F. Merenda, University of Rhode Island Matthew J. Miller, University of Maryland Ralph O. Mueller, University of Hartford Jane E. Myers, The University of North Carolina at Greensboro Philip D. Parker, University of Western Sydney Ralph L. Piedmont, Loyola College in Maryland Alex L. Pieterse, University at Albany, SUNY Nicholas J. Ruiz, Winona State University James P. Sampson, Jr., Florida State University William D. Schafer, University of Maryland, College Park William E. Sedlacek, University of Maryland, College Park Marie F. Shoffner, University of Virginia Len Sperry, Florida Atlantic University Kevin Stoltz, University of Mississippi Jody L. Swartz-Kulstad, Seton Hall University Bruce Thompson, Texas A&M University Timothy R. Vansickle, Minnesota Department of Education Steve Vensel, Palm Beach Atlantic University Dan Williamson, Lindsey Wilson College F. Robert Wilson, University of Cincinnati

1,306 citations

Journal Article
TL;DR: Arum and Roksa argue that students gain surprisingly little from their college experience, that there is "persistent and growing inequality" in students' learning, and that there is "notable variation both within and across institutions" in the measurable differences in students' educational experiences.
Abstract: Academically Adrift: Limited Learning on College Campuses Richard Arum and Josipa Roksa University of Chicago Press, 2011 This book has much to say that is perceptive about today's undergraduate higher education in the United States. It will be valuable to review the authors' insights. At the same time, it will be as instructive to note the book's weaknesses, and especially what is omitted from the discussion. It is a discussion that is truncated intellectually by the authors' close adherence to the selective awareness that so greatly typifies the mindscape of the contemporary American "establishment" in academia and throughout the commanding heights of American society. That mindscape allows a recognition of many things, but not of others. The authors are both faculty members at major American universities. Richard Arum is a sociology professor at New York University with a tie to the university's school of education. He is the author of several books on education and director of the Education Research Program sponsored by the Social Science Research Council. His co-author, Josipa Roksa, is an assistant professor of sociology at the University of Virginia. That the book is published by the University of Chicago Press attests to its presumptive merit. Academically Adrift furnishes an example of something that has long been common in social science writing: a rather thin empirical study serving as the work's own contribution, combined with considerable additional material coming out of the literature on whatever subject is being explored. The function of the authors' own research is thus often to serve more or less as scientistic windowdressing. 
The reason we say the empiricism for this book is "thin" is that the "longitudinal data of 2,322 students," while seemingly ample, involves students spread over "a diverse range of campuses," including "liberal arts colleges and large research institutions, as well as a number of historically black colleges and universities and Hispanic-serving institutions," all "dispersed nationally across all four regions of the country." This must necessarily mean that the "sample" from any given institution or program was quite small. We are told that the authors didn't concern themselves with the appropriateness of each sample, but left the recruitment and retention of the sample's students to each of the respective institutions. The authors acknowledge that the study included fewer men than women, and more good students than those of "lower scholastic ability." So far as this book is concerned, however, the thinness doesn't particularly hurt the content, since so much of what is said doesn't especially depend upon anything unique found by the authors' own research. A brief summary is provided when the authors say that "we will highlight four core 'important lessons' from our research." These are that the institutions and students are "academically adrift" (which is the basis for the book's title), that students gain surprisingly little from their college experience, that there is "persistent and growing inequality" in the students' learning, and that "there is notable variation both within and across institutions" so far as "measurable differences in students' educational experiences" is concerned. Following the lead of former president Derek Bok of Harvard and of the Council for Aid to Education, the authors' ideal for higher education is that it will enhance students' "capacity for critical thinking, complex reasoning, and writing." 
These are the three ingredients measured by the Collegiate Learning Assessment (CLA), which the authors value most among the various assessment tools. The CLA results, they say, show that "growing numbers of students are sent to college at increasingly higher costs, but for a large proportion of them the gains in critical thinking, complex reasoning and written communication are either exceedingly small or empirically nonexistent. …

663 citations

Journal ArticleDOI
TL;DR: Test-taking motivation (TTM) has been found to have a profound effect on low-stakes test results; among the components of TTM, test-taking effort has been shown to be the strongest predictor of test performance.

25 citations
