
Showing papers by "Pam Grossman" published in 2008


Journal ArticleDOI
TL;DR: In this paper, the authors examine two distinct but closely related fields, research on teaching and research on teacher education, and argue that for research in teacher education to move forward, it must reconnect with these fields to address the complexity of both teaching as a practice and the preparation of teachers.
Abstract: In this article, the authors examine two distinct but closely related fields, research on teaching and research on teacher education. Despite its roots in research on teaching, research in teacher education has developed in isolation both from mainstream research on teaching and from research on higher education and professional education. A stronger connection to research on teaching could inform the content of teacher education, while a stronger relationship to research on organizations and policy implementation could focus attention on the organizational contexts in which the work takes shape. The authors argue that for research in teacher education to move forward, it must reconnect with these fields to address the complexity of both teaching as a practice and the preparation of teachers.

1,009 citations


Posted Content
TL;DR: In this paper, the authors estimate the effects of features of teachers' preparation on teachers' value-added to student test score performance in math and English Language Arts, and find that preparation directly linked to practice appears to benefit teachers in their first year.
Abstract: There are fierce debates over the best way to prepare teachers. Some argue that easing entry into teaching is necessary to attract strong candidates, while others argue that investing in high quality teacher preparation is the most promising approach. Most agree, however, that we lack a strong research basis for understanding how to prepare teachers. This paper is one of the first to estimate the effects of features of teachers' preparation on teachers' value-added to student test score performance in math and English Language Arts. Our results indicate variation across preparation programs in the average effectiveness of the teachers they are supplying to New York City schools. In particular, preparation directly linked to practice appears to benefit teachers in their first year.

624 citations
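A minimal sketch of what a value-added specification of this kind might look like; the data file, column names, and controls below are hypothetical placeholders, not the authors' actual model.

```python
# Hypothetical sketch of a value-added regression relating a preparation
# feature to student test-score growth. All column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("student_year_panel.csv")  # one row per student-year (hypothetical file)

# Current math score regressed on the prior-year score, a student control,
# grade indicators, and a flag for practice-linked teacher preparation.
model = smf.ols(
    "math_score ~ math_score_lag + free_lunch + C(grade) + practice_linked_prep",
    data=df,
).fit()

print(model.params["practice_linked_prep"])  # association between the preparation feature and scores
```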


ReportDOI
TL;DR: Using data for New York City schools from 2000-2005, the authors find that first-year teachers whom they identify as less effective at improving student test scores have higher attrition rates than do more effective teachers in both low-achieving and high-achieving schools.
Abstract: NBER Working Paper Series. Who Leaves? Teacher Attrition and Student Achievement. Donald Boyd, Pam Grossman, Hamilton Lankford, Susanna Loeb, James Wyckoff. Working Paper 14022. http://www.nber.org/papers/w14022. National Bureau of Economic Research, 1050 Massachusetts Avenue, Cambridge, MA 02138. May 2008. We are grateful to the New York City Department of Education and the New York State Education Department for the data employed in this paper. We appreciate comments on an earlier draft from Tim Sass, Jonah Rockoff, and participants at both the Economics of Teacher Quality Conference at the Australian National University and the New York Federal Reserve Education Policy Workshop. The research is supported by funding from the Carnegie Corporation of New York, the National Science Foundation, the Spencer Foundation, and the National Center for the Analysis of Longitudinal Data in Education Research (CALDER). The views expressed in the paper are solely those of the authors and may not reflect those of the funders. Any errors are attributable to the authors. The views expressed herein are those of the author(s) and do not necessarily reflect the views of the National Bureau of Economic Research. © 2008 by Donald Boyd, Pam Grossman, Hamilton Lankford, Susanna Loeb, and James Wyckoff. All rights reserved. Short sections of text, not to exceed two paragraphs, may be quoted without explicit permission provided that full credit, including © notice, is given to the source.

311 citations


Journal ArticleDOI
TL;DR: The authors explore how beginning teachers use and learn from curriculum materials, propose a trajectory for teachers' use of curriculum materials based on their findings, and find that new and aspiring teachers need opportunities to analyze and critique curriculum materials, beginning during teacher education and continuing in the company of their more experienced colleagues.

205 citations


Journal ArticleDOI
TL;DR: The authors examined the relationship between specific program features and students' perceptions of the degree to which program vision, principles, and practices are aligned with those in the field, and also explored the degree to which students have opportunities to practice what they are learning in the program and to enact program goals and visions of good teaching and learning.
Abstract: In this article, the authors focus on the concept of coherence, a relatively underexplored concept in teacher education. They investigate the relationship between students' perceptions of coherence and a number of structural features of teacher education programs to help develop a stronger definition of one important dimension of coherence—the relationship between fieldwork and coursework. The authors examine the relationship between specific program features and students' perceptions of the degree to which program vision, principles, and practices are aligned with those in the field and also explore the degree to which students have opportunities to practice what they are learning in the program and to enact program goals and visions of good teaching and learning in the classroom. In a field that is calling for larger-scale studies, this research attempts to identify promising features that are also amenable to large-scale studies of the impact of teacher education.

177 citations


Journal ArticleDOI
Pam Grossman1
TL;DR: In this article, the authors used Andrew Abbott's concept of jurisdictional challenge to analyze the current challenges facing university-based teacher educators, and suggested that teacher educators are not dan...
Abstract: This article uses Andrew Abbott's concept of jurisdictional challenge to analyze the current challenges facing university-based teacher educators. The author suggests that teacher educators are dan...

163 citations


Journal Article
TL;DR: In this paper, the authors draw on the work of Hazel Markus and others on the development of possible selves to investigate the opportunities novices have to encounter, try out, and evaluate possible selves in the process of constructing professional identities.
Abstract: … into that world; part of the role of professional education is to help novices craft these professional identities. During the transitional time represented by professional education, students negotiate their images of themselves as professionals with the images reflected to them by their programs. This process of negotiation can be fraught with difficulty, especially when these images conflict (Britzman, 1990; Cole & Knowles, 1993). As they adapt to new roles, novices must also learn to negotiate their personal identity with the professional role, even as they navigate among the different images of professional identity offered by their programs and practitioners in the field. In this article we draw on the work of Hazel Markus and others on the development of possible selves to investigate the opportunities novices have to encounter, try out, and evaluate possible selves in the process of constructing professional identities. We use data from a study of the preparation of teachers, clergy, and clinical psychologists to illustrate the relationship of possible selves and …

142 citations


Posted Content
TL;DR: In this article, the effects of features of teachers' preparation on teachers' value-added to student test score performance in math and English language arts in New York City schools were investigated.
Abstract: There are fierce debates over the best way to prepare teachers. Some argue that easing entry into teaching is necessary to attract strong candidates, while others argue that investing in high quality teacher preparation is the most promising approach. Most agree, however, that we lack a strong research basis for understanding how to prepare teachers. This paper is one of the first to estimate the effects of features of teachers' preparation on teachers' value-added to student test score performance in math and English Language Arts. Our results indicate variation across preparation programs in the average effectiveness of the teachers they are supplying to New York City schools. In particular, preparation directly linked to practice appears to benefit teachers in their first year.

128 citations


Journal ArticleDOI
TL;DR: In this paper, the authors describe the state of teacher education in and around the large and diverse school district of New York City using multiple data sources, including program documents, interviews, and surveys of teachers.
Abstract: In this article, the authors describe the state of teacher education in and around the large and diverse school district of New York City. Using multiple data sources, including program documents, interviews, and surveys of teachers, this study attempts to explore the characteristics of programs that prepare elementary teachers of New York City public schools, including the kinds of programs that exist, who enters these different programs, who teaches in the programs, and what characterizes the core curriculum. A central question concerns the amount of variation that exists in the preparation of elementary teachers for a single, large school district. Despite the number and variety of programs that exist to prepare elementary teachers, the authors found the overall curriculum and structure of teacher education to be more similar than different. To understand this lack of variation, the authors draw on organizational theory, particularly the concept of institutional isomorphism, to examine the case of tea...

94 citations



Book
01 Oct 2008
TL;DR: This book provides a thorough and dispassionate review of the research evidence on alternative certification, encourages readers to look carefully at the trade-offs implicit in any route into teaching, and suggests ways to "marry" the proven strengths of both traditional and alternative approaches.
Abstract: Over the past 20 years, alternative certification for teachers has emerged as a major avenue of teacher preparation. The proliferation of new pathways has spurred heated debate over how best to recruit, prepare, and support qualified teachers. Drawing on the work of leading scholars, Alternative Routes to Teaching provides a thorough and dispassionate review of the research evidence on alternative certification. It takes readers beyond the simple dichotomies that have characterized the debate over alternative certification, encourages them to look carefully at the trade-offs implicit in any route into teaching, and suggests ways to "marry" the proven strengths of both traditional and alternative approaches.


01 Jan 2008
TL;DR: In this article, the authors used the covariance structure of student test scores across grades in New York City from 1999 to 2007 to estimate the overall extent of test measurement error and how measurement error varies across students.
Abstract: Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of effect sizes, i.e., the estimated effect of a one standard deviation change in the variable divided by the standard deviation of test scores in the relevant population of students. Effect size estimates based on administrative databases typically are quite small. Research has shown that high quality teachers have large effects on student learning but that measures of teacher qualifications seem to matter little, leading some observers to conclude that, even though effectively choosing teachers can make an important difference in student outcomes, attempting to differentiate teacher candidates based on pre-employment credentials is of little value. This illustrates how the perception that many educational interventions have small effect sizes, as traditionally measured, is having important consequences for policy.

In this paper we focus on two issues pertaining to how effect sizes are measured. First, we argue that model coefficients should be compared to the standard deviation of gain scores, not the standard deviation of scores, in calculating most effect sizes. The second issue concerns the need to account for test measurement error. The standard deviation of observed scores in the denominator of the effect-size measure reflects such measurement error as well as the dispersion in the true academic achievement of students, thus overstating variability in achievement. It is the size of an estimated effect relative to the dispersion in true achievement, or the gain in true achievement, that is of interest. Adjusting effect-size estimates to account for these considerations is straightforward if one knows the extent of test measurement error. Technical reports provided by test vendors typically only provide information regarding the measurement error associated with the test instrument. However, there are a number of other factors, including variation in scores associated with students having particularly good or bad days, which can result in test scores not accurately reflecting true academic achievement.

Using the covariance structure of student test scores across grades in New York City from 1999 to 2007, we estimate the overall extent of test measurement error and how measurement error varies across students. Our estimation strategy follows from two key assumptions: (1) there is no persistence (correlation) in each student's test measurement error across grades; (2) there is at least some persistence in learning across grades, with the degree of persistence constant across grades. Employing the covariance structure of test scores for NYC students and alternative models characterizing the growth in academic achievement, we find estimates of the overall extent of test measurement error to be quite robust.

Returning to the analysis of effect sizes, our effect-size estimates based on the dispersion in gain scores net of test measurement error are four times larger than effect sizes typically measured. To illustrate the importance of this difference, we consider results from a recent paper analyzing how various attributes of teachers affect the test-score gains of their students (Boyd et al., in press). Many of the estimated effects appear small when compared to the standard deviation of student achievement – that is, effect sizes of less than 0.05. However, when measurement error is taken into account, the associated effect sizes often are about 0.16. Furthermore, when teacher attributes are considered jointly, based on the teacher attribute combinations commonly observed, the overall effect of teacher attributes is roughly half a standard deviation of universe score gains – even larger when teaching experience is also allowed to vary. The bottom line is that there are important differences in teacher effectiveness that are systematically related to observed teacher attributes. Such effects are important from a policy perspective, and should be taken into account in the formulation and implementation of personnel policies.
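A back-of-the-envelope illustration of the adjustment the abstract describes, with made-up numbers chosen only to make the arithmetic concrete; it relies on the paper's first assumption that measurement error is uncorrelated across grades, so the error variance enters the observed gain variance twice.

```python
# Illustrative only: all quantities below are invented, expressed in units of
# the standard deviation of observed test scores.
import math

beta = 0.05          # estimated effect of a one-SD change in a teacher attribute
sd_scores = 1.00     # SD of observed scores (the conventional effect-size denominator)
sd_gain_obs = 0.60   # SD of observed gain scores
sd_error = 0.36      # SD of measurement error in a single year's score

# Conventional effect size: relative to the SD of observed scores.
es_conventional = beta / sd_scores

# Adjusted effect size: relative to the SD of true gains. With error that is
# uncorrelated across grades, Var(observed gain) = Var(true gain) + 2 * Var(error).
sd_gain_true = math.sqrt(sd_gain_obs**2 - 2 * sd_error**2)
es_adjusted = beta / sd_gain_true

print(round(es_conventional, 3), round(es_adjusted, 3))  # e.g. 0.05 vs roughly 0.16 here
```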


Posted Content
TL;DR: The authors found that first-year teachers who were identified as less effective at improving student test scores have higher attrition rates than do more effective teachers in both low-achieving and high-achieving schools.
Abstract: Almost a quarter of entering public-school teachers leave teaching within their first three years. High attrition would be particularly problematic if those leaving were the more able teachers. The goal of this paper is to estimate the extent to which there is differential attrition based on teachers' value-added to student achievement. Using data for New York City schools from 2000-2005, we find that first-year teachers whom we identify as less effective at improving student test scores have higher attrition rates than do more effective teachers in both low-achieving and high-achieving schools. The first-year differences are meaningful in size; however, the pattern is not consistent for teachers in their second and third years. For teachers leaving low-performing schools, the more effective transfers tend to move to higher-achieving schools, while less effective transfers stay in lower-performing schools, likely exacerbating the differences across students in the opportunities they have to learn.
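A rough sketch of the kind of comparison the abstract describes, contrasting attrition by estimated effectiveness; the data file, column names, and the quartile grouping are hypothetical, not the authors' specification.

```python
# Hypothetical illustration: attrition rates of first-year teachers by quartile
# of estimated value-added. Column names are placeholders.
import pandas as pd

teachers = pd.read_csv("first_year_teachers.csv")  # one row per first-year teacher (hypothetical)

teachers["va_quartile"] = pd.qcut(teachers["value_added"], 4, labels=["Q1", "Q2", "Q3", "Q4"])
attrition = teachers.groupby("va_quartile")["left_after_first_year"].mean()
print(attrition)  # higher rates in the lowest quartiles would match the pattern reported above
```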


01 Jun 2008
TL;DR: In this paper, the authors used the covariance structure of student test scores across grades in New York City from 1999 to 2007 to estimate the overall extent of test measurement error and how measurement error varies across students.

01 Nov 2008
TL;DR: In this paper, the authors used item response theory (IRT) scale-score measures to evaluate the effect of various interventions in terms of effect sizes and found that none of the estimated effect sizes are large by standards often employed in value-added analyses.
Abstract: The use of value-added models in education research has expanded rapidly. These models allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. An important question is whether such effects are sufficiently large to achieve various policy goals. For example, would hiring teachers having stronger academic backgrounds sufficiently increase test scores for traditionally low-performing students to warrant the increased cost of doing so? Judging whether a change in student achievement is important requires some meaningful point of reference. In certain cases a grade-equivalence scale or some other intuitive and policy-relevant metric of educational achievement can be used. However, this is not the case with item response theory (IRT) scale-score measures common to the tests usually employed in value-added analyses. In such cases, researchers typically describe the impacts of various interventions in terms of effect sizes, although conveying the intuition of such a measure to policymakers often is a challenge.

The effect size of an independent variable is measured as the estimated effect of a one standard deviation change in the variable divided by the standard deviation of test scores in the relevant population of students. Intuitively, an effect size represents the magnitude of change in a variable of interest, e.g., student achievement, resulting from a one standard deviation, or rather large, change in another variable, e.g., class size. Effect size estimates derived from value-added models employing administrative databases typically are quite small. For example, in several recent papers the average effect size of being in the second year of teaching relative to the first year, other things equal, is about 0.04 standard deviations for math achievement and 0.025 standard deviations for reading achievement, with variation no more than 0.02. Additional research examines the effect sizes of a variety of other teacher attributes: alternative certification compared to traditional certification (Boyd et al. 2006; Kane et al. in press); passing state certification exams …

As one example, consider results from a recent paper analyzing how various attributes of teachers affect the test-score gains of their students (Boyd et al. 2008). Parameter estimates reflecting the effects of a subset of the teacher attributes included in the analysis are shown in the first column of table 1. These estimated effects, measured relative to the standard deviation of observed student achievement scores, indicate that none of the estimated effect sizes are large by standards often employed …
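Restating the definition given above in symbols (the notation is mine, not the paper's): with the estimated coefficient on the variable of interest, its standard deviation, and the standard deviation of test scores in the relevant student population,

```latex
\mathrm{ES} \;=\; \frac{\hat{\beta}\,\sigma_x}{\sigma_y}
```

The companion working papers listed above argue for replacing the denominator with the standard deviation of true gain scores, which shrinks the denominator and therefore enlarges the measured effect size.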