
Showing papers in "Review of Educational Research in 1993"


Journal ArticleDOI
TL;DR: The authors presented an analysis of a conceptual change model for describing student learning by applying research on student motivation to the process of conceptual change and discussed the role of classroom contextual factors as moderators of the relations between student motivation and conceptual change.
Abstract: Conceptual change models of student learning are useful for explicating the role of prior knowledge in students’ learning and are very popular in the research on learning in the subject areas. This article presents an analysis of a conceptual change model for describing student learning by applying research on student motivation to the process of conceptual change. Four general motivational constructs (goals, values, self-efficacy, and control beliefs) are suggested as potential mediators of the process of conceptual change. In addition, there is a discussion of the role of classroom contextual factors as moderators of the relations between student motivation and conceptual change. The article highlights the theoretical difficulties of a cold, or overly rational, model of conceptual change that focuses only on student cognition without considering the ways in which students’ motivational beliefs about themselves as learners and the roles of individuals in a classroom learning community can facilitate or hinder conceptual change.

2,125 citations


Journal ArticleDOI
TL;DR: In this article, a detailed analysis of the ways in which scientists and science students respond to anomalous data is presented, giving special attention to the factors that make theory change more likely.
Abstract: Understanding how science students respond to anomalous data is essential to understanding knowledge acquisition in science classrooms. This article presents a detailed analysis of the ways in which scientists and science students respond to such data. We postulate that there are seven distinct forms of response to anomalous data, only one of which is to accept the data and change theories. The other six responses involve discounting the data in various ways in order to protect the preinstructional theory. We analyze the factors that influence which of these seven forms of response a scientist or student will choose, giving special attention to the factors that make theory change more likely. Finally, we discuss the implications of our framework for science instruction.

1,434 citations


Journal ArticleDOI
TL;DR: The authors identified and estimated the influence of educational, psychological, and social factors on learning using evidence accumulated from 61 research experts, 91 meta-analyses, and 179 handbook chapters and narrative reviews.
Abstract: The purpose of this article is to identify and estimate the influence of educational, psychological, and social factors on learning. Using evidence accumulated from 61 research experts, 91 meta-analyses, and 179 handbook chapters and narrative reviews, the data for analysis represent over 11,000 relationships. Three methods—content analyses, expert ratings, and results from meta-analyses—are used to quantify the importance and consistency of variables that influence learning. Regardless of which method is employed, there is moderate to substantial agreement on the categories exerting the greatest influence on school learning as well as those that have less influence. The results suggest an emergent knowledge base for school learning. Generally, proximal variables (e.g., psychological, instructional, and home environment) exert more influence than distal variables (e.g., demographic, policy, and organizational). The robustness and consistency of the findings suggest they can be used to inform educational policy and practice.

1,215 citations


Journal ArticleDOI
TL;DR: The authors developed a framework for assessing how differential incentive policies affect teacher commitment, identifying seven key workplace conditions that contribute to teacher commitment: job design characteristics, feedback, autonomy, participation, collaboration, learning opportunities, and resources.
Abstract: The push for more complex, intellectually demanding approaches to teaching suggests that teacher commitment will continue to be important for effective education. This article develops a framework for assessing how differential incentive policies affect teacher commitment. It identifies seven key workplace conditions that contribute to teacher commitment: job design characteristics, feedback, autonomy, participation, collaboration, learning opportunities, and resources. This framework is used to assess the effects of such differential incentive policies as merit pay and career ladders. The selection mechanisms in these two programs are found to reduce autonomy and collaboration, but the job enrichment aspects of career ladders are found to increase participation, collaboration, and resources. We recommend combining policies that increase participation, collaboration, and feedback rather than continuing to experiment with differential incentives.

527 citations


Journal ArticleDOI
TL;DR: The limited presence of talented African Americans in the teaching profession has been and continues to be a serious problem confronting the education profession and the African-American community in the United States.
Abstract: The limited presence of talented African Americans in the teaching profession has been and continues to be a serious problem confronting the education profession and the African-American community in the United States. This review summarizes what is known from the research literature. It explores the reasons that African-American teachers are important as well as overall demographic, entry, and retention trends and the distinctive factors that influence the limited presence of African-American teachers. Finally, a suggested research agenda is presented.

366 citations


Journal ArticleDOI
TL;DR: In this article, a meta-analysis of 32 studies that compared two groups of students receiving identical writing instruction but allowed only one group to use word processing for writing assignments was conducted.
Abstract: Word processing in writing instruction may provide lasting educational benefits to users because it encourages a fluid conceptualization of text and frees the writer from mechanical concerns. This meta-analysis reviews 32 studies that compared two groups of students receiving identical writing instruction but allowed only one group to use word processing for writing assignments. Word processing groups, especially weaker writers, improved the quality of their writing. Word processing students wrote longer documents but did not have more positive attitudes toward writing. More effective uses of word processing as an instructional tool might include adapting instruction to software strengths and adding metacognitive prompts to the writing program.

303 citations


Journal ArticleDOI
TL;DR: In this paper, a review of the available literature suggests that there is insufficient evidence to support the tenets of and claims about the utility of the Myers-Briggs Type Indicator.
Abstract: An evaluation of the Myers-Briggs Type Indicator is made using a “unified view” of test validity (e.g., Messick, 1981). The Myers-Briggs Type Indicator is an assessment of personality based on Jung’s theory of types. During the past decade, the test has received considerable attention and use in a variety of applied settings. The unified view of validation requires that validity be considered as an approach that requires many sources of corroboration. This procedure contrasts with previous procedures that tended to focus on single validation procedures (e.g., construct validation). A review of the available literature suggests that there is insufficient evidence to support the tenets of and claims about the utility of the test.

241 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present a survey of procedures that can be used to assess progress in instructional programs designed to enhance cognitive skills, including knowledge acquisition, organization, and structure, depth of problem representation, mental models, and metacognitive skills.
Abstract: The intent of the article is to survey procedures that could be used to assess progress in instructional programs designed to enhance cognitive skills. The organizational framework is provided by J. R. Anderson’s (1982) theory of cognitive skill development and by Glaser, Lesgold, and Lajoie’s (1985) categorization of dimensions of cognitive skills. After describing Anderson’s theory, the article discusses the following types of measures of cognitive skills: (a) measures of knowledge acquisition, organization, and structure; (b) measures of depth of problem representation; (c) measures of mental models; (d) measures of metacognitive skills; (e) measures of the automaticity of performance; and (f) measures of efficiency of procedures. Each of the sections describing measurement procedures is followed by a discussion of the strengths and weaknesses of the procedures. The article closes with a general discussion of techniques for measuring cognitive skills.

234 citations


Journal ArticleDOI
TL;DR: In this article, the authors explored three broad issues associated with the self-concept and giftedness constructs. First, do gifted and nongifted children differ in their self-concepts? Second, what are the effects on self-concept of labeling the child as gifted? Third, are there any effects on self-concept of placing a child in special programs for the gifted?
Abstract: We explored three broad issues in the article. First, do gifted and nongifted children differ in their self-concepts? Second, what are the effects on the self-concept of labeling the child as gifted? Third, are there any effects on self-concept of placing a child in special programs for the gifted? The review begins with a discussion of theoretical and methodological issues associated with the self-concept and giftedness constructs. This is followed by a meta-analysis of studies bearing on the three issues. Two types of studies are reviewed: (a) cases where gifted and average students are compared in terms of the self-concept and (b) studies in which gifted children are exposed to differential programming and the effects on self-concept explored. The studies indicated generally higher academic self-concepts for gifted students, but otherwise the results of the investigations were highly variable. The article concludes with a discussion of research and practical implications.

223 citations


Journal ArticleDOI
TL;DR: In this paper, the empirical research on the role of school recess is reviewed, including the effects of child-level variables (e.g., gender, age, and temperament) on children’s recess behavior; the implications of recess are discussed in terms of impact on classroom behavior and on measures of social and cognitive competence.
Abstract: In this article the empirical research on the role of school recess is reviewed. Recess is first defined, and then the effects of child-level variables (e.g., gender, age, and temperament) and school-level variables (e.g., recess timing and duration) on children’s recess behavior are reviewed. The implications of recess are discussed in terms of impact on classroom behavior and on measures of social and cognitive competence. It is concluded that recess has important educational and developmental implications. Further research in this area is urgently needed, and some promising areas of inquiry are suggested.

177 citations


Journal ArticleDOI
TL;DR: This paper examined the role of schooling in the life course of individuals and the consequences of variations in the timing and sequencing of schooling for adult social and economic success, with special attention to cross-national comparisons with the U. S. and historical changes within countries.
Abstract: This review examines the role of schooling in the life course of individuals, focusing on the timing and sequencing of schooling in the transition to adulthood. First, I examine conceptual issues in the study of schooling and the life course, drawing heavily on the sociological literature. I then consider the timing and sequencing of schooling in the transition to adulthood in the United States, and the consequences of variations in the timing and sequencing of schooling for adult social and economic success. I then discuss the role of social structure, norms, and institutional arrangements in the transition to adulthood, with special attention to cross-national comparisons with the U. S. and historical changes within countries. I conclude with speculations regarding trends in the role of schooling in the life course, and some directions for future research on this topic.

Journal ArticleDOI
TL;DR: The joint effects of gender differences in mean and variability on 28 cognitive ability scales were recently examined by Feingold (1992a). The authors provide an analytic evaluation of the effect sizes in the tails of these distributions and show that they are typically smaller than Feingold assumed.
Abstract: The joint effects of gender differences in mean and variability on 28 cognitive ability scales were recently examined by Feingold (1992a). He noted that gender differences in extreme score ranges (e.g., in the tails of the distribution of scores) may be influenced by differences in both mean and variability and offered subjective evaluations of effect sizes in the center and tails of these distributions. We provide an analytic evaluation of the effect sizes in the tails and show that the effect sizes in the tails of these distributions are typically smaller than Feingold assumed. We also evaluate the joint effect of gender differences in mean and variability via a different index: the number of females and males in the extreme score ranges. Males outnumber females in the upper tail of the score distribution of 22 of 28 ability scales, including 3 of the scales in which females have a higher overall mean.

Journal ArticleDOI
TL;DR: The authors examine core federal legislation addressing gender inequalities in education (including Title IX of the Educational Amendments Act of 1972, the Women's Educational Equity Act [WEEA], and several vocational education acts) and assess the influence of this legislation on six elements of the educational system.
Abstract: This study examines core federal legislation addressing gender inequalities in education (Title IX of the Educational Amendments Act of 1972, the Women’s Educational Equity Act [WEEA], and several vocational education acts). It discusses the objectives of these laws and assesses the influence of this legislation on six elements of the educational system, ranging from educational access to the presence of women in administrative positions. The evidence indicates that women made significant gains in access to educational institutions as students rather than as educational administrators or university professors. Field of study choices still reflect unequal gender distributions; curriculum content and teacher training have been mildly affected. While it is difficult to isolate impacts of legislation from those of parallel social forces over a period of 20 years, shortcomings common to each of these equity-focused laws, such as their limited funding, weak enforcement, and reliance on voluntary efforts by educ...

Journal ArticleDOI
TL;DR: Since Linn's 1966 review, several other statistical methods have been developed and used to make grades from different courses more directly comparable; in several instances, these methods have improved the understanding of the observed phenomena of differential predictive validity for women and for minority students.
Abstract: The last major review of grade adjustment methods developed to improve the prediction of academic performance is Linn’s 1966 article. Since then, several other statistical methods have been developed and used to make grades from different courses more directly comparable. In several instances, these methods have improved the understanding of the observed phenomena of differential predictive validity for women and for minority students. Each method is based on a different statistical methodology and set of assumptions about the data and tested on a different data set. This article reviews the studies in this area over the past 27 years and discusses this research in the context of selection in admissions and of the prediction of student performance in college.

Journal ArticleDOI
TL;DR: The idea that the results of a wide range of social scientific research can be used "to inform educational policies and practices" (Wang, Haertel, & Walberg, 1993a) is an old one.
Abstract: The term knowledge base is a relatively new one to enter into the pedagogical lexicon, but the idea that the results of a wide range of social scientific research can be used "to inform educational policies and practices" (Wang, Haertel, & Walberg, 1993a) is an old one. What else is research in education for? It is evident that, toward that end, the authors of "Toward a Knowledge Base for School Learning" have employed complex research techniques in an effort to synthesize the mass of data that has accumulated, particularly in the 1980s, bearing on the variables that influence learning. But the "era of school reform" (Wang et al., 1993a) that we (and the authors) tend to associate with the 1980s actually has a history going back about a century, and researchers are the heirs to a legacy of assumptions about the relationship between scientific data and educational reform. While it would be an obvious exaggeration to claim that there has been no evolution of or even challenge to these assumptions over the years, there remains a kind of bedrock belief in the power of science to provide at least guidelines, if not specific rules, for how teachers should conduct themselves in schools and classrooms.

Journal ArticleDOI
TL;DR: It is argued that despite what Wang et al. accomplish, one cannot move from this kind of knowledge base to informed policy decisions because of the difficulty in converting evidence into policy.
Abstract: Wang, Haertel, and Walberg (1993a) conclude by stating, "The aggregated estimates ... provide one reasonable basis for formulating educational policies ...." We agree that there is a vast amount of evidence, both in the form of empirical studies and in the form of expert opinion, that should be used to inform educational policy decisions. The problem is how to convert such evidence into knowledge and such knowledge into policy. We argue that despite what Wang et al. accomplish, one cannot move from this kind of knowledge base to informed policy decisions. Health science provides one guide for how to make the conversion from evidence to policy.

Journal ArticleDOI
TL;DR: Feingold (1992a) examined gender differences in variability on scales of well-known standardized cognitive ability batteries, recognizing that such differences are meaningful only in the absence of corresponding differences in central tendency and that previous analyses of the same scales found that such differences often existed.
Abstract: My article (Feingold, 1992a) focused on the examination of gender differences in variability on scales of well-known standardized cognitive ability batteries. However, I recognized that gender differences in variability per se are meaningful only in the absence of corresponding gender differences in central tendency and that previous analyses of the same scales (Feingold, 1988, 1992a) found that such differences often existed. In order to communicate the joint effects of the two kinds of gender differences, I subjectively integrated ds (standardized mean differences; Cohen, 1977) and VRs (variance ratios; Snedecor & Cochran, 1967) to determine the ratios of males to females at both tails of each distribution.

Journal ArticleDOI
TL;DR: The authors had supposed that the "sex difference in the percentages of low scorers" meant the difference obtained by putting all the low scorers of both genders together and computing the gender difference in this group, which they regard as the more conventional and policy-relevant conception of "effect size in the tails."
Abstract: Quantitative methods in empirical research are not as objective as their label suggests: Decisions about what to calculate and how to interpret those calculations are necessarily highly subjective. The only step in the process of producing quantitative measures that can be called fully explicit, objective, and reproducible is the process of calculation itself. We (Hedges & Friedman, 1993a) were interested in Feingold's (1992a) work because we felt we could make calculation of the joint effects of differences in means and variability more explicit and objective. From Feingold's reply (Feingold, 1993), it is clear that his steps in characterizing tail effect sizes in his Table 5 (1992a) are different from the calculations we had imagined. We had supposed that the "sex difference in the percentages of low scorers" (p. 75) meant the difference obtained by putting all the low scorers of both genders together and computing the gender difference in this group. We think this is the more conventional and policy-relevant conception of "effect size in the tails."

Journal ArticleDOI
TL;DR: As John Dewey recognized, humans have always longed for security in an uncertain universe and have sought to achieve command over nature in order that they might better be insulated against disaster and, perhaps, also be able to improve their lot.
Abstract: As John Dewey recognized, humans have always longed for security in an uncertain universe and have sought to achieve command over nature in order that they might better be insulated against disaster and, perhaps, also be able to improve their lot. Myths, legends, and superstitions often reflect these deep-seated human urges; so do fairy tales. Consider the wonderful story by the brothers Grimm of Rumpelstiltskin, the gnome who had gained enough control over nature to be able to spin straw into gold. The unfortunate young maiden to whom Rumpelstiltskin gave assistance had to pay a terrible price for his help. In the end, thankfully, the maiden learned Rumpelstiltskin's name. She thereby not only avoided Rumpelstiltskin's price but also ended up gaining control over him.

Journal ArticleDOI
TL;DR: Wang, Haertel, and Walberg (1993a) found that distal variables (demography, policy, and organization) have relatively little influence on learning, while proximal variables (individual psychology, classroom instruction, and home environment) have relatively greater influence.
Abstract: "Toward a Knowledge Base for School Learning" (Wang, Haertel, & Walberg, 1993a) is an ambitious and provocative piece of work. It purports to capture the essential lessons from several broad domains of research and translate them into findings that are relevant for educational policy and practice. Its main finding is that so-called distal variables (demography, policy, and organization) have relatively little influence on learning, while so-called proximal variables (individual psychology, classroom instruction, and home environment) have relatively greater influence. I have neither the inclination nor the expertise to criticize the authors' methodology. In general, I find the genre of cross-cutting analyses of existing research—whether they are reviews or meta-analyses—to be enormously interesting and helpful. I find the methodological battles around this genre also to be interesting in small doses and extremely boring in large doses. I am glad that people worry about the methodological issues involved in the aggregation of research results. I understand that it is an important field of scholarly debate. But I must say frankly that I do not form my judgments about "what the research says" on the basis of these methodological disputes. So my comments will not focus on this dimension of the review. My interests lie rather in the relationship among research, policy, and educational practice (defined both as the practice of organizing and operating schools and the practice of designing curriculum and teaching it). From this perspective, I find "Toward a Knowledge Base for School Learning" to be more intriguing for what it doesn't say than for what it does say. The idea of a knowledge base is simply a convenient hook on which the authors hang their findings. 
Since the idea of a knowledge base is undefined, the findings hang in thin air—suspended in a peculiar world that has more to do with research and researchers than it has to do with anything related to policy or practice. The knowledge base hook, then, is a skyhook. The problems I have with this review are many of the same problems I have with other similar attempts to capture a knowledge base by analyzing existing research on schooling. I find these reviews and meta-analyses often stimulating and useful in thinking about research. I find them practically useless in thinking about policy and practice. Let me explain briefly why this might be so.

Journal ArticleDOI
TL;DR: The authors thank Henry Levin and the editorial board for devoting an unprecedented number of pages of the Review of Educational Research to a single article and, with only one eighth the space, reply to the most strongly and frequently voiced questions of the commentators.
Abstract: We thank Henry Levin and the editorial board for devoting an unprecedented number of pages of the Review of Educational Research to a single article. We are also grateful to the commentators for their extensive responses to our work. With only one eighth the space to reply, we will respond to the most strongly and frequently voiced questions of the commentators: Why synthesize? How to synthesize? Can research inform practice? Before turning to these, it may be useful to ask an initial question.

Journal ArticleDOI
TL;DR: "Toward a Knowledge Base for School Learning" (Wang, Haertel, & Walberg, 1993a) attempts to distill a broad cross-section of educational research, academic opinion, and meta-analyses into a comprehensible whole, but the authors never clarify the multiple contours of this knowledge base; thus, one is hard pressed to understand what their broad-based comparison has really yielded.
Abstract: "Toward a Knowledge Base for School Learning" (Wang, Haertel, & Walberg, 1993a) is an inviting title for an academic article. This work represents an attempt to distill a broad cross-section of educational research, academic opinion, and meta-analyses into a comprehensible whole. Unfortunately, the authors never clarify the multiple contours of this knowledge base; thus, one is hard pressed to understand what their broad-based comparison has really yielded. What is clear is that the emerging knowledge base of learning is not unified. Dividing more than 200 educational factors into half a dozen categories, the authors separate the categories into proximal variables, which appear to enhance the learning process, and distal variables, which have less efficacy to improve learning. This critique questions the authors' approach by posing several dilemmas created by their methodology. The result of these dilemmas is that the article's conclusions about distal variables could actually be counterproductive. For example, discouraging changes in education policy (one of the distal variables) could detract from proximal variables such as parent involvement and student-teacher interaction, which the authors recognize as the main agents of school learning. Despite the rigor of the modern statistical methodology utilized to make the necessary comparisons, the task the authors set for themselves was simply too great. The attempt to identify a knowledge base of learning is admirable, but is it realistically achievable?
This daunting task seems to lead the researchers to a process (distillation) that is more akin to alchemy than science. Dating, at least, from the work of Aristotle, scientists in the West have pursued the dream of reducing the universe to a set of simple rules that harmoniously explain everything. With the development of Kepler's perfect music of the spheres and later with Newtonian mechanics, humanity came close; yet a simple and elegant explanation of the universe continues to elude natural scientists. "Toward a Knowledge Base of School Learning" (Wang et al., 1993a) attempts to accomplish this type of synthesis for a branch of social science. Relying on the process of distillation, the authors have tried to reduce a huge mass of educational thought and research to its essence. Natural science is diverse and sometimes seemingly contradictory. For example, what one experiences at the macrolevel does not necessarily reflect what is true at the molecular level. Nonetheless, the disparate parts of science are linked together in a comprehensible whole through the lingua franca of mathematics. One must doubt whether sophisticated statistical analyses can ever weave together the multiple parts of school-based learning into a similar shared knowledge base.

Journal ArticleDOI
TL;DR: Wang, Haertel, and Walberg (1993a) have conducted an exhaustive review of research studies, experts' ratings, handbook chapters, and narrative reviews related to school learning, and they have compiled a quantitative synthesis of the results.
Abstract: Wang, Haertel, and Walberg (1993a) have conducted an exhaustive review of research studies, experts' ratings, handbook chapters, and narrative reviews related to school learning, and they have compiled a quantitative synthesis of the results. Through their efforts, they have attempted to move educational researchers closer to achieving a knowledge base for school learning. Many educational researchers, as well as I, would agree with Wang and her colleagues that their purpose is a worthy one. Our knowledge, or lack thereof, is a source of much consternation for those of us in education. Our scholarly colleagues in the hard sciences, such as medicine, look askance at educators for not having the solid knowledge base that they themselves have. People on the street wonder why we call ourselves experts in education when they have more knowledge about education from their own experiences than we do from our extensive scholarly research. While some might question the warrants for the assumptions of both the scholar in the sciences and the person on the street, nonetheless, we are uncomfortable with what these folks are pointing out—to outsiders, those of us in education often seem not to know what we are talking about. A critical question thus becomes, How could we do a better job of knowing what we are talking about? Wang and colleagues (1993a) are addressing this question head on and suggesting one possible way—construct a knowledge base for our field of education. But what would such a knowledge base mean for education, and how would we construct it? Wang et al. illustrate a way of constructing such a knowledge base, and they propose results that they consider to be part of their "emergent knowledge base for school learning" (1993a). What does this "emergent knowledge base" as constructed by Wang et al. (1993a) mean for school learning? I decided to pursue this question by beginning with Wang and her colleagues' own statements of their conclusions. 
Then I attempted to understand what these statements might mean for school learning by visiting the scenes of some cases of school learning from fieldwork that I have

Journal ArticleDOI
TL;DR: Wang, Haertel, and Walberg (1993a) use meta-analysis to identify and estimate the influence of educational, psychological, and social factors on learning, so that they may quantify the importance and consistency of variables that influence learning.
Abstract: How do students learn? What facilitates or impedes learning? Of the myriad potentially important factors, which ones are the most important? These questions, at once deceptively simple and perilously complex, lie at the heart of much educational research. A universal answer has eluded scholars and philosophers for centuries and remains, for some, the holy grail of educational research today. Does the persistence of these questions suggest that we, as a community of scholars, have learned nothing in our decades of study? Or is the inquiry into student learning a perpetual search destined never to end?

The importance of questions about student learning requires that we carefully consider, and as carefully evaluate, any serious attempt to address them. From this perspective, I applaud Margaret Wang, Geneva Haertel, and Herbert Walberg (1993a) for tackling difficult questions. Taken at face value, their goals ("to identify and estimate the influence of educational, psychological, and social factors on learning ... [so that they may] quantify the importance and consistency of variables that influence learning") seem laudable. If these goals could be met and incontrovertible answers found, the information would be useful to many. Federal, state, and local policymakers would be better able to calibrate priorities and implement intervention strategies. School administrators and classroom teachers would be better able to develop curricula and deliver educational services. And the educational research community would be able to cite a discovery comparable to a treatment for cancer or a marker gene for Huntington's disease.

Are these goals attainable? I agree with Wang, Haertel, and Walberg (1993a) that no individual study, regardless of size or design, could answer these questions once and for all. Even a large number of well-designed field experiments could not yield a universal answer for all groups of learners, in all contexts, for all times.
So the authors turn to the principle of meta-analysis, one of the most significant methodological advances of the past two decades. Meta-analysis, in all its many forms, has quickly become an extraordinarily valuable component of our research repertoire; it has changed both the content and our perceptions of virtually every review article we read. Its principles and structure have trans-

Journal ArticleDOI
TL;DR: Hedges and Friedman as discussed by the authors developed a quantitative index that combines estimates of between-gender differences in performance in terms of both performance level and variability, and applied it to test performance between males and females.
Abstract: Motivated by the earlier ground-breaking work of Feingold (1992), Hedges and Friedman (1993) have sought to improve on Feingold's more subjectively developed quantitative index of gender differences in test performance. In particular, Hedges and Friedman have developed a quantitative index that combines estimates of between-gender differences in performance in terms of both performance level and variability. Prior to Feingold's work, meta-analyses had focused primarily on combining estimates of differences in level of performance alone. The need to quantify between-gender differences in an objective way that takes into account the joint effect of differences in central tendency and variability is an important step forward in attempting to understand more clearly the differences in test performance between males and females.

As noted by Hedges and Friedman, differences in variability can lead to situations in which gender differences in the tails of the distribution are larger or smaller than the corresponding differences measured over the entire range of scores. By being sensitive to issues of variability, and by allowing us to partition a distribution into its tails and its center to investigate the between-gender differences within each partition, the Hedges and Friedman index will no doubt give us greater insight into the complex nature of between-gender differences. Although the development of the index itself is reasonable, one may question to what extent violations of the assumption that both male and female test score distributions are normally distributed affect reported results, especially as those results relate to the tails of the combined distribution.

With index in hand, however, we may address its interesting application to Feingold's (1992) data, and, in particular, to the results reported in Tables 1 and 2. Table 1 (pp. 64-65) shows the tail effect sizes for gender differences, and Table 2 (p. 71) shows these differences as a ratio of the number of males to females. Although, as Hedges and Friedman (1993) noted, the ratio "produces a sharper impression" (p. 97), both tables depict a similar landscape. What is particularly interesting about Table 1 is its clear portrayal of the inadequacy of using an overall effect size measure based on central tendency alone as compared to using the joint effect size measure based on both central tendency and variability for each tail and center partition. For example, knowing that the overall effect size between males and females on the Arithmetic subtest of the California Achievement Test is .00 is less elucidating than knowing that this value is, in part, related to the fact that (a) for those in the center of the test distribution, males perform no differently than females; (b) for those in the top
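The point that equal means can coexist with unequal tails is easy to illustrate numerically. The sketch below uses purely hypothetical parameters (not Feingold's or Hedges and Friedman's actual data): two normal score distributions with identical means but different standard deviations, where the overall mean difference is zero yet one group is overrepresented in the top 5%.

```python
from statistics import NormalDist

# Hypothetical illustration: identical central tendency, different variability.
# The sigma values are assumptions chosen for the example, not empirical estimates.
male = NormalDist(mu=0.0, sigma=1.1)
female = NormalDist(mu=0.0, sigma=1.0)

# Cutoff marking the top 5% of the female distribution
cutoff = female.inv_cdf(0.95)

# Proportion of each group scoring above that cutoff
p_male = 1 - male.cdf(cutoff)
p_female = 1 - female.cdf(cutoff)

# Overall mean difference is zero, yet the tail ratio exceeds 1
print(male.mean - female.mean)   # 0.0
print(round(p_male / p_female, 2))
```

Under these assumed parameters the ratio in the upper tail comes out to roughly 1.35, even though an effect size based on means alone would report .00, which is exactly the inadequacy of a central-tendency-only measure that the Table 1 example describes.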

Journal ArticleDOI
TL;DR: The article by Wang, Haertel, and Walberg (1993a), referred to as WHW, is both thoughtful and thought provoking; this commentary focuses on three general categories of reactions evoked by the study.
Abstract: The piece by Wang, Haertel, and Walberg (1993a), hereafter referred to as WHW, is both thoughtful and thought provoking. Impressive with respect to its operational magnitude, the WHW article is a certain-to-be-highly-cited study that will stimulate discussion, and stir controversy, in circles of academic, social, and political influence. Bear in mind, however, that this generally positive assessment of the contribution comes from an instructional psychologist whose predilections about the variables that should make a difference to the quality of school-learning outcomes were, in the main, the variables that emerged as having the strongest associations with improved school-learning outcomes in the WHW study. (The preceding sentence includes a couple of points that will be elaborated on in the commentary that follows.) What I hope to accomplish in the space provided is to focus on three general categories of reactions that were evoked by the WHW study, which can be summarized in the following questions: (1) Will the importance of the WHW findings/conclusions be underestimated? (2) Will the importance of the WHW findings/conclusions be overestimated? and (3) In future investigations of the factors that have a positive impact on school-learning outcomes, what can be done to prevent errors of both under- and overestimation? In my comments, I respond directly to Questions 1 and 2; within each set of comments, I address (at least implicitly) Question 3.