
Showing papers in "American Psychologist in 1987"


Journal Article•DOI•
TL;DR: Examines the role of cultural knowledge and culture-specific techniques in the psychotherapeutic treatment of ethnic-minority clients, arguing that two basic processes, credibility and giving, provide a meaningful way of viewing the role of culture in psychotherapy.
Abstract: This article examines the role of cultural knowledge and culture-specific techniques in the psychotherapeutic treatment of ethnic minority-group clients. Recommendations that admonish therapists to be culturally sensitive and to know the culture of the client have not been very helpful. Such recommendations often fail to specify treatment procedures and to consider within-group heterogeneity among ethnic clients. Similarly, specific techniques based on the presumed cultural values of a client are often applied regardless of their appropriateness to a particular ethnic client. It is suggested that cultural knowledge and culture-consistent strategies be linked to two basic processes: credibility and giving. Analysis of these processes provides a meaningful method of viewing the role of culture in psychotherapy and also suggests ways of improving psychotherapy practice, training, and research for ethnic-minority populations.

714 citations




Journal Article•DOI•
TL;DR: This article proposes a name for the impact of assessment on treatment outcome: the "treatment utility of assessment", and considers how to make better progress in understanding the role of assessment in successful treatment.
Abstract: \" In practical terms, the sine qua non of the modes, methods, devices, strategies, and theories of clinical assessment is their contribution to treatment outcome. The importance of this contribution has often been noted, but under many different labels and rationales. The resultant conceptual confusion has considerably restricted the visibility and frequency of research in this critical area. In this article we propose a name for the impact of assessment on treatment outcome: the \"treatment utility of assessment. \"' Some of the questions that can be asked about the treatment utility of assessment are described, and methods appropriate for asking them are examined. Finally, the implications of this kind of utility for other approaches to evaluating assessment quality are analyzed. Clinical assessment is an important and fertile area of psychology, and yet there is general agreement that it has not been in a state of continuous and healthy growth (e.g., Bersoff, 1973; Korchin & Schuldberg, 1981; Rorer & Widiger, 1983). Compared to the early days of clinical psychology, there has been a decline in the emphasis on clinical assessment both in training and in practice (Garfield & Kurtz, 1973, 1976; Levitt, 1973; Shemberg & Keeley, 1970). One reason may be that clinical assessment has not yet proven its value in fostering favorable treatment outcomes. As clinical psychologists have devoted more and more time to treatment activities, the practical value of assessment has come under closer scrutiny (McReynolds, 1985). Unfortunately, experienced clinicians have not always found assessment data to be useful in treatment (Adams, 1972; Daily, 1953; Meehl, 1960; Moore, Bobbitt, & Wildman, 1968). Even the proponents of clinical assessment admit that \"we do not believe that the current [high] status of testing is due to its demonstrated value\" (Kaplan, Colarelli, Gross, Leventhal, & Siegel, 1970, p. 15). The lack of empirical evidence for the practical value of assessment has long been noted. In 1959, Meehl pointed out that, even if an assessment procedure is reliable and valid, a clinician might \"be seriously in error if he concluded therefrom that his tests were paying o f f in practice. On this question there i s . . . no published empirical evidence\" (p. 117). Twenty-two years later, Korchin and Schuldberg (1981) were still worried that \"clinical assessment may not provide the kind of information needed by therapists. Objective evidence is slim\" (p. 1154). More recently, McReynolds (1985) asked \"Are tests helpful to the therapist? Amazingly, there has been little research on this crucial question\" (p. 10). The purpose of the present article is to consider how to make better progress in understanding the role of assessment in successful treatment. Any lack of evidence on the clinical value of assessment is not caused by a lack of appreciation for its ultimate practical purposes. Korchin (1976) defined clinical assessment as \"the process by which clinicians gain understanding of the patient necessary for making informed decisions\" (p. 124). Thus, the \"basic justification for assessment is that it provides information of value to the planning, execution and evaluation of treatment\" (Korchin & Schuldberg, 1981, p. 1154). 
Wiggins (1973) said, \"Although measurement and prediction may be evaluated by formal psychometric criteria, such as reliability and validity, the outcomes of [assessment] decisions must be evaluated in terms of their utilities for individuals and institutions within our society\" (p. 272, emphasis in original). Meehl (1959) phrased it as follows: \"In what way and to what extent does t h i s . . , information help us in treating the patient?\" (1959, p. 117). He called this question \"ultimately the practically significant one by which the contributions of our [assessment] techniques must be judged\" (p. 116). Definition of the Treatment Utility of Assessment The impact of clinical assessment on treatment outcome has been discussed under a wide variety of labels. Among other terms, it has been viewed as a matter of incremental validity (Mischel, 1968), concurrent validity (Meehl, 1959), construct validity (Edwards, 1970), predictive validity (Lord & Novick, 1968), discriminative efficiency (Wiggins, 1973), and utility (Cronbach & Gleser, 1965). There is considerable confusion about the concepts relevant to the measurement of pragmatic clinical value. We propose to use the phrase the treatment utility of assessment to refer to the degree to which assessment is shown to contribute to beneficial treatment outcome, l 1 Earlier (Nelson & Hayes, 1979), we had used the term treatment validity. Although utility issues can indeed be considered an aspect of validity, the present term seems more direct. November 1987 9 American Psychologist Copyright 1987 by the American Psychological Assooation, Inc. 0003-066X/87/$00.75 Vol. 42, No. I I, 963-974 963 An assessment device, distinction, or strategy has this kind of utility if it can be shown that treatment outcome is positively influenced by this device, distinction, or strategy. The treatment utility of assessment deserves to be termed a type of utility because it relates closely to the functional thrust of that psychometric term. The need to qualify the word utility with the adjective treatment is justified by two facts. First, utility has been almost exclusively evaluated in terms of personnel decisions (e.g., Wiggins, 1973). The issues and methods involved in demonstrating the impact of assessment on treatment outcomes differ significantly from the methods appropriate to the analysis of personnel decisions. Second, in personnel matters the concept of utility has come to refer primarily to the cost-benefit ratio of assessment strategies. This is why it was originally distinguished from predictive validity (Mischel, 1968). The treatment utility of assessment is not primarily a matter of cost-benefit analysis but of the demonstration of a particular type of benefit. Barriers to Research on the Treatment Utility of Assessment Theorists have long believed that research on the treatment utility of assessment should be feasible. \"It is well within the capacity of available research methods and clinical facilities to determine what, if any, is the pragmatic advantage of a personality assessment\" (Meehl, 1959, p. 125). Why then has there continued to be so little research? There are several possible reasons. First, because of conceptual confusion about the psychometric concepts relevant to the treatment utility of assessment, little has been written about the kinds of methods appropriate to treatment utility questions. In the present article, we present a taxonomy of treatment utility methods in the hope of alleviating this problem. 
Second, \"Clinical p s y c h o l o g i s t s . . , often make a sharp cleavage between their roles as diagnostician and therapist\" (Blatt, 1975, p. 336). Assessment is often not integrated into the therapy process. When the two are disconnected, the value of assessment seemingly turns on the question, Is this diagnosis correct? not Is this assessment useful in treatment? Some clinicians even fear that the assessment process is negatively intrusive on the therapeutic alliance. By focusing on the contribution of assessment to treatment outcome, treatment utility provides an approach for the testing of such concerns and may itself help integrate assessment and treatment roles. Another part of the problem may lie in the belief that complete psychometric purity is necessary before the treatment utility of assessment can be shown or even examined. Wiggins (1973), who has emphasized the The authors would like to thank Paul McReynolds for his helpful comments on an earlier draft of this article. Correspondence concerning this article should be addressed to Steven C. Hayes, Department of Psychology, University of Nevada-Reno, Reno, NV 89557-0062. practical importance of assessment more than most psychometric theorists, noted the possibility that \"concern with the technical problems of measurement . . . has resulted in a relative neglect of the broader context in which such problems arise, namely, the optimal assignment of men to jobs or treatments\" (p. 272). As we will show, there seems to be little reason to delay treatment utility research until psychometrically perfect devices are

442 citations



Journal Article•DOI•
Larry V. Hedges•
TL;DR: In this article, the author compares the consistency of replicated research results in physics and in the social sciences and suggests that the results of physical experiments may not be strikingly more consistent than those of social or behavioral experiments.
Abstract: " Research results in the social and behavioral sciences are often conceded to be less replicable than research results in the physical sciences. However, direct empirical comparisons of the cumulativeness of research in the social and physical sciences have not been made to date. This article notes the parallels between methods used in the quantitative synthesis of research in the social and in the physical sciences. Essentially identical methods are used to test the consistency of research results in physics and in psychology. These methods can be used to compare the consistency of replicated research results in physics and in the social sciences. The methodology is illustrated with 13 exemplary reviews from each domain. The exemplary comparison suggests that the results of physical experiments may not be strikingly more consistent than those of social or behavioral experiments. The data suggest that even the results of physical experiments may not be cumulative in the absolute sense by statistical criteria. It is argued that the study of the actual cumulativeness found in physical data could inform social scientists about what to expect from replicated experiments under good conditions. Psychologists and other social scientists have often compared their fields to the natural (the "hard") sciences with a tinge of dismay. Those of us in the social and behavioral sciences know intuitively that there is something "softer" and less cumulative about our research results than about those of the physical sciences. It is easy to chronicle the differences between soft and hard sciences that might lead to less cumulative research results in the soft sciences. One such chronicle is provided by Meehl (1978), who listed 20 such differences and went on to argue that reliance on tests of statistical significance also contributes to the poorer cumulativeness of research results in the social sciences. Other distinguished researchers have cited the pervasive presence of interactions (Cronbach, 1975) or historical influences (Gergen, 1973, 1982) as reasons not to expect a cumulative social science. Still others (Kruskal, 1978, 1981) have cited the low quality of data in the social sciences as a barrier to truly cumulative social inquiry. These pessimistic views have been accompanied by a tendency to reconceptualize the philosophy of inquiry into a format that implies less ambitious aspirations for social knowledge (e.g., Cronbach, 1975; Gergen, 1982). Cumulativeness in the scientific enterprise can mean at least two things. In the broadest sense scientific results are cumulative if empirical laws and theoretical structures build on one another so that later developments extend and unify earlier work. This idea might be called conceptual or theoretical cumulativeness. The assessment of theoretical cumulativeness must be rather subjective. A narrower and less subjective indicator of cumulativeness is the degree of agreement among replicated experiments or the degree to which related experimental results fit into a simple pattern that makes conceptual sense. This idea might be called empirical cumulativeness. The purpose of this article is to suggest that it may be possible to compare at least the empirical cumulativeness of psychological research with that of research in the physical sciences. An exemplary comparison suggests that the differences may be less striking than previously imagined. 
The mechanism for this comparison is derived from recent developments in methods for the quantitative synthesis of research in the social sciences. Some of the methods used in meta-analysis are analogous to methods used in the quantitative synthesis of research in the physical sciences. In particular, physicists and psychologists use analogous methods for assessing the consistency of research results, a fact that makes possible comparisons among quantitative reviews in physics and in psychology. One such comparison is reported in this article. This comparison was not chosen in a way that guarantees it to be representative of either social science research or physical science research. However, some effort was exerted to prevent the comparison from obviously favoring one domain or the other, and additional examples are provided to suggest that the case for the empirical cumulativeness of physical science could have been made to look far worse. More data would obviously be needed to support strong conclusions. It seems, however, that the "obvious" conclusion that the results of physical science experiments are more cumulative than those of social science experiments does not have much empirical support.
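The "essentially identical methods" referred to here can be sketched concretely. In standard quantitative synthesis, each study's estimate is weighted by its inverse variance, and the scatter of the estimates around the weighted mean is summarized by a homogeneity statistic Q, referred to a chi-square distribution with k - 1 degrees of freedom; physicists use the same quantity per degree of freedom under the name Birge ratio. A minimal Python sketch with invented numbers follows.

```python
# Homogeneity test common to quantitative synthesis in psychology and
# physics. The study estimates and standard errors below are invented
# for illustration.
estimates  = [0.42, 0.55, 0.38, 0.49, 0.61]   # hypothetical effect estimates t_i
std_errors = [0.10, 0.12, 0.09, 0.15, 0.11]   # hypothetical standard errors se_i

weights = [1 / se ** 2 for se in std_errors]  # inverse-variance weights w_i
pooled = sum(w * t for w, t in zip(weights, estimates)) / sum(weights)

# Q = sum_i w_i * (t_i - pooled)^2; under the hypothesis that all k
# studies estimate the same quantity, Q ~ chi-square with k - 1 df.
q = sum(w * (t - pooled) ** 2 for w, t in zip(weights, estimates))
k = len(estimates)
birge_ratio = q / (k - 1)   # physicists' consistency index; ~1 means consistent

print(f"pooled estimate: {pooled:.3f}")
print(f"Q = {q:.2f} on {k - 1} df (Birge ratio = {birge_ratio:.2f})")
```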

354 citations



Journal Article•DOI•
Alice H. Eagly•

264 citations



Journal Article•DOI•
B. F. Skinner•
TL;DR: This article reviews three obstacles in psychology's path as a science of behavior, humanistic psychology, the helping professions, and cognitive psychology, and their unfortunate effects on psychology as a science and as the basis of a technology.
Abstract: Research on lower organisms by H. S. Jennings and Jacques Loeb, together with a positivistic approach to a philosophy of science, contributed to early efforts to explain behavior as a subject matter in its own right rather than as the effect of internal processes, mental or neural. The experimental analysis of behavior was an example of such a program. Psychology has remained, however, primarily a search for internal determiners. Three obstacles in its path as a science of behavior (humanistic psychology, the helping professions, and cognitive psychology) seem to explain why. Some of their unfortunate effects upon psychology as a science and as the basis of a technology are reviewed.

232 citations




Journal Article•DOI•
TL;DR: In a survey of experts in the field of intelligence and aptitude testing, this article found that experts hold positive attitudes about the validity and usefulness of tests, seeing them as adequately measuring most important elements of intelligence, although somewhat racially and socioeconomically biased.
Abstract: Psychologists and educational specialists with expertise in areas related to intelligence testing responded to a questionnaire dealing with a wide variety of issues constituting the IQ controversy. Overall, experts hold positive attitudes about the validity and usefulness of intelligence and aptitude tests. Tests are seen as adequately measuring most important elements of intelligence, although the tests are believed to be somewhat racially and socioeconomically biased. There is overwhelming support for a significant within-group heritability for IQ, and a majority of respondents feel that black-white and socioeconomic status IQ differences are also partially hereditary. Problems with intelligence tests are perceived in the influence of nonintellectual characteristics on test performance and in the frequent misinterpretation of and overreliance on test scores in elementary and secondary schools. Despite these difficulties, experts favor the continued use of intelligence and aptitude tests at their present level. Variation in responses to substantive questions on testing is largely resistant to prediction by a host of demographic and background variables, including within-sample variation in expertise. Intelligence tests have been under attack practically since their inception (Cronbach, 1975; Haney, 1981). Critics have claimed, among other things, that intelligence and aptitude tests measure nothing but test-taking skills, have little predictive power, are biased against certain racial and economic groups, are used to stigmatize low scorers, and are tools developed and fostered by those in power in order to maintain the status quo (see Block & Dworkin, 1976, and Houts, 1977, for collections of such critiques). Though perhaps not as apparent as 10 (or 60) years ago, such criticisms remain prevalent (e.g., Gould, 1981; Lewontin, Rose, & Kamin, 1984; Owen, 1985). Moreover, critics of testing appear to have much influence in such organizations as the National Education Association, the news media, the New York State Legislature, and the courts (Bersoff, 1981; Herrnstein, 1982; Lerner, 1980). It is not surprising, of course, in light of the important role intelligence and aptitude tests play in the allocation of valued resources and opportunities, that testing has been a topic of concern in the popular press and in all three branches of government. What is surprising is that much of the public controversy seems to be uninformed. Those who must reach policy decisions about testing often seem more influenced by political considerations than by the empirical literature. There is, of course, no shortage of appeal to expertise. Public opinion and policy are influenced by the perception of expert opinion: witness the standard procedure in Congressional hearings and news media stories on technical issues. In public forums, the impression is often given by those who attack tests (e.g., CBS News, 1975; Larry P. v. Wilson Riles, 1979) that many of the long-accepted "facts" about intelligence tests are subjects of great dispute within the expert community or that most experts actually agree, for example, that tests are culturally biased, meaningless as anything but predictors of success in school, and unrelated to an individual's genetic endowment. These claims may very well be true, but they are rarely made with sufficient supporting evidence.
It is important, therefore, to try to assess the veracity of assertions that there is substantial controversy about, and even animosity toward, testing among those most familiar with the empirical evidence. Surveys of opinion on intelligence testing and related issues, among any group, have been scarce since the advent of the most recent wave (post-1969) of testing criticism (see Brim, Glass, Neulinger, & Firestone, 1969, for an earlier comprehensive survey of public opinion, and Lerner, 1981, for a review of more recent public opinion surveys). One group that has been particularly ill served by survey research is testing experts. Those who conduct research on the nature of intelligence and test use, and those who design and validate tests, and who therefore are most qualified to evaluate criticisms of testing in the context of the body of psychometric and cognitive ability literature, have rarely been asked their opinions about the most important issues of public contention surrounding intelligence tests. To date, there are no comprehensive polls of this sort. Such a survey is needed, but not because it will resolve any of the various controversies surrounding testing; issues of fact are not settled via consensus. A comprehensive survey of expert opinion about intelligence testing is necessary because the use of intelligence and aptitude testing represents an important public policy issue. A survey of expert opinion will not settle this issue, but it will allow a clearer picture of informed opinion to enter the public debate. In a way, it is a method of pooling "expert testimony" for the benefit of those charged with policy decisions. It should also allow anyone interested in the IQ controversy to achieve a better understanding of the issues involved.

Journal Article•DOI•
TL;DR: De Meuse acknowledges the assistance of the individuals who helped code the data used in this study and thanks colleagues for input on an earlier version of this comment.
Abstract: The author gratefully acknowledges the assistance of the following individuals who helped code the data used in this study: Sue Anderson, Valerie Baker, Jeff Cook, Martha Mueller, Mary Jane Morse, and Amiram Raban. The author also wishes to thank Meg Gerrard, Judy Krulewitz, and Paul Muchinsky for their input on an earlier version of this comment. Correspondence concerning this comment should be addressed to Kenneth P. De Meuse, Human Resources, Intergraph Corporation, One Madison Industrial Park, Huntsville, AL 35807.


Journal Article•DOI•
TL;DR: Documents current trends in the commercialization of self-help treatment books and suggests new directions for the future.
Abstract: Research findings on do-it-yourself treatment books demonstrate major limitations in their current usefulness. Yet psychologists continue to develop and market these programs with exaggerated claims. This commercialization of psychotherapy raises serious questions that warrant attention. The present article documents current trends and suggests new directions for the future. The public, for many years, has been able to read general books of advice for personal problems. These books could be written by any author who liked to write and believed in what he or she had to say. More recently, the public has been able to choose among a variety of specific treatment books whose instructions have been targeted to specific problems and whose authors are leading experts in the fields of psychotherapy or clinical psychology. Zimbardo (1977) has published on shyness; Lewinsohn and colleagues (Lewinsohn, Munoz, Zeiss, & Youngren, 1979) and Burns (1980) have published on depression; Marks (1978), Wolpe (1981), and others on phobias; the Mahoneys (1976a), Brownell (1980), and others on weight loss; Coates and Thoresen (1977) on insomnia; Heiman and the LoPiccolos (1976) on sexual problems; Danaher and Lichtenstein (1978) on smoking; and the list goes on. At first glance, the involvement of psychologists in the development of self-help programs appears beneficial. Psychologists who provide therapeutic advice to the public appear to be following George Miller's urging to "give psychology away" (Miller, 1969, p. 1074). Miller had used this phrase in his 1969 Presidential Address to the American Psychological Association to refer to what he saw as the major social responsibility of psychologists: to learn how to help people to help themselves. This is certainly the spirit of do-it-yourself treatment books: to help people help themselves. The American Psychological Association's (APA) Task Force on Self-Help Therapies (1978) concluded that psychologists were in a unique position to contribute to the self-help movement. No other professional group combines the clinical and research experience that forms the educational background of a clinical psychologist. Unlike the typical author, clinical psychologists are in a position to assess do-it-yourself treatments systematically and to educate consumers in the proper use of these programs. The fulfillment of this potential would represent a truly new development in the area of self-help (Rosen, 1977). Although the benefits of self-help books may be great, a number of risks exist as well. Do-it-yourself books have few, if any, provisions for arriving at a reliable diagnosis; they lack provisions for monitoring patients' compliance with instructions; and they have few or no provisions for follow-up. Consequently, do-it-yourself therapies may be applied inappropriately. A person with thyroid problems could self-administer a stimulus-control program for insomnia; an individual with headaches caused by a tumor could misapply relaxation techniques; or an individual in the depressive phase of bipolar affective disorder could suffer needlessly while manipulating pleasant-events schedules. Subsequent to diagnosis, there is the possibility that an individual could misunderstand instructions, misapply instructions, or fail to comply fully with therapeutic regimens.
Should treatment failure occur, there are risks of negative self-attributions, of anger toward self or others, and of reduced belief in the efficacy of today's therapeutic techniques (Barrera, Rosen, & Glasgow, 1981). In light of the risks that consumers face when self-administering therapeutic instructions, I expressed concerns over the proliferation of untested do-it-yourself treatment books (Rosen, 1976a). At that time, I noted that behavioral techniques were being marketed as do-it-yourself therapies without the benefit of clinical trials. I suggested that the only contingencies affecting the sale of these programs were monetary and that consumers ran the risk of purchasing ineffective or potentially harmful programs. This article will demonstrate that my previously stated concerns were warranted. Commercial considerations, rather than professional standards, have been influencing the development of treatment books. Rather than "giving psychology away," as suggested by George Miller, many psychologists are simply finding "new ways to sell it."

Research: What the Findings Tell Us

Psychologists are to be credited for the extensive testing of self-help materials. This work has too often gone unrecognized, as evidenced in a recent critique of psychotherapy in which the author stated, "There has not been any good research on the uses and limits of self-help materials" (Zilbergeld, 1983, p. 74). This statement fails to acknowledge over 100 studies or case reports that evaluated self-help materials in the 1970s (see reviews by Glasgow & Rosen, 1978, 1982). Additional studies have been conducted since the time of those reviews. The problem in the area of self-help materials is not a dearth of studies, but the failure of psychologists to heed the results of those studies. Let us consider several conclusions that can be drawn from the literature. First, research on do-it-yourself treatment books has demonstrated that techniques applied successfully by a therapist are not always self-administered successfully. Zeiss (1978) conducted a controlled outcome study on the treatment of premature ejaculation. Couples were assigned, on a random basis, to receive either self-administered treatment, minimal therapist contact, or therapist-directed treatment. As in earlier reports by Zeiss (1977) and Lowe and Mikulas (1975), treatment with only minimal therapist contact was effective. But of six couples who self-administered treatment, none successfully completed the program. Matson and Ollendick (1977) obtained similar results in an evaluation of Toilet Training in Less Than a Day by Azrin and Foxx (1974). In this study, four of five mothers in a therapist-administered condition successfully toilet trained their children, whereas only one of five mothers in a self-administered group was successful.
The findings reported by Matson and Ollendick (1977) support a second, and possibly more significant, conclusion: Self-help efforts can lead to the worsening of a problem. These authors observed that unsuccessful self-administered interventions were associated with an increase in children's problem behaviors and negative emotional side effects between mothers and children. In the context of such findings, it would have been interesting if Zeiss (1978) had conducted a follow-up assessment on those couples who failed to self-treat their premature ejaculation problem. One can imagine how tension would have developed in these couples, especially if they were unaware that all other couples had been equally unsuccessful. More focused concerns regarding treatment failure may apply for specific problem areas. For example, Brownell, Heckerman, and Westlake (1978) discussed how repeated short-term losses in weight can have harmful effects on physical health. After observing minimal weight loss among those who used a self-help book, the ...







Journal Article•DOI•
TL;DR: This article describes how psychology, higher education, and American society reflect oppositional centripetal and centrifugal factors, that is, consolidating and unifying versus diverging and separating qualities, respectively.
Abstract: This chapter describes how psychology, higher education, and American society reflect oppositional centripetal and centrifugal factors, or consolidating and unifying versus diverging and separating qualities, respectively. It discusses how centripetal trends prevailed in psychology, universities, and society at certain times in history and how centrifugal factors dominated at other times. Psychology appears presently to be in a condition where centrifugal forces are very strong, yielding a concern in some quarters that the field is splitting apart. The modern era of psychology in the United States may be said to have begun in the late 1800s, coincident with the founding of Wilhelm Wundt's psychological laboratory in 1879 and the organization of the American Psychological Association in 1892. The dramatic changes in American society described earlier had counterparts in higher education in the form of protest movements, student participation in governance, and new attitudes toward undergraduate and graduate curriculums.




Journal Article•DOI•
TL;DR: Examining the sources of the controversy over normalization will clarify the limits of our knowledge about treatment and open the possibility of theory-based evaluation of service delivery, which should advance our understanding of environmental influences on all human development.
Abstract: Normalization is an ideology of human services based on the proposition that the quality of life increases as one's access to culturally typical activities and settings increases. Applied to individuals who are mentally retarded, normalization fosters deinstitutionalization and the development of community-based living arrangements. Closely allied with normalization is the concept of least restrictive environment—that the places where people live, learn, work, and play should not restrict their involvement in the mainstream of society. Some psychologists are numbered among the chief advocates of normalization and deinstitutionalization, whereas others are vocal critics. Our premise is that examining the sources of the controversy over normalization will clarify the limits of our knowledge about treatment and open the possibility of theory-based evaluation of service delivery. Such evaluation should advance our understanding of environmental influences on all human development. Deinstitutionalization and normalization are probably the most controversial and emotionally charged issues in the field of mental retardation. Their merits and liabilities are debated passionately in courtrooms, legislative hearings, parent meetings, social and health service agencies, professional societies, and the media. Testimony invariably includes accounts of the phenomenal progress of previously institutionalized individuals after they were moved to small community homes and vivid descriptions of shameful conditions that still exist in state institutions, countered by horror stories of deinstitutionalized persons who are isolated, neglected, or abused in the community and by glowing reports of model programs conducted within institutions.


Journal Article•DOI•
Edward Zigler