
Showing papers in "Public Opinion Quarterly" in 1983


Journal ArticleDOI
TL;DR: A person exposed to a persuasive communication in the mass media sees it as having a greater effect on others than on himself or herself; each individual reasons, "I will not be influenced, but they (the third persons) may well be persuaded."
Abstract: A person exposed to a persuasive communication in the mass media sees this as having a greater effect on others than on himself or herself. Each individual reasons: "I will not be influenced, but they (the third persons) may well be persuaded." In some cases, a communication leads to action

1,488 citations


Journal ArticleDOI
TL;DR: The results suggested that the media influenced views about issue importance among the general public and government policy makers, but it was not this change in public opinion which led to subsequent policy changes.
Abstract: Using an experimental design built around a single media event, the authors explored the impact of the media upon the general public, policy makers, interest group leaders, and public policy. The results suggested that the media influenced views about issue importance among the general public and government policy makers. The study suggests, however, that it was not this change in public opinion which led to subsequent policy changes. Instead, policy change resulted from collaboration between journalists and government staff members.

350 citations


Journal ArticleDOI
TL;DR: The Troldahl-Carter (1964) method as mentioned in this paper is one of the most commonly used methods for respondent selection in telephone surveys, but it requires the interviewer to ask potentially sensitive questions early in the interview, such as how many people 18 years or older live in the household and how many of them are men.
Abstract: Recent and numerous additions to the survey methodology literature, especially in the area of random-digit dialing, have helped researchers to generate samples of household units for telephone surveys. However, the literature on selecting survey respondents within those household units has not kept pace. In fact, after searching through the standard texts on telephone surveys (see, for example, Blankenship, 1977, or Dillman, 1978), researchers might conclude that there is only one method of respondent selection: the Troldahl-Carter (1964) method. In the Troldahl-Carter method, one of four selection matrices listing various combinations of age and sex of household members is assigned randomly to telephone numbers in the sample. Thus, by asking only two questions (How many people 18 years or older live in your household, and how many of them are men?), the interviewer has enough information to select the respondent designated at the intersection point on the matrix. This method, which is less cumbersome and more appropriate to telephone interviews than the complete enumeration of the household proposed by Kish (1949), still requires the interviewer to ask potentially sensitive questions early in the interview. For example, two elderly women who live together

285 citations
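The selection procedure described in that abstract can be sketched in a few lines: a matrix is assigned at random to each sample telephone number, and the two screening answers index the designated respondent. The two matrices below are simplified, invented stand-ins, not the four published Troldahl-Carter (1964) matrices, which the listing does not reproduce.

```python
import random

# Illustrative Troldahl-Carter-style within-household selection.
# These matrices are HYPOTHETICAL stand-ins for the published ones;
# each maps (adults aged 18+, number of men) -> designated respondent.
MATRICES = [
    {(1, 0): "the woman", (1, 1): "the man",
     (2, 0): "the oldest woman", (2, 1): "the man", (2, 2): "the oldest man",
     (3, 0): "the youngest woman", (3, 1): "the woman", (3, 2): "the youngest man"},
    {(1, 0): "the woman", (1, 1): "the man",
     (2, 0): "the youngest woman", (2, 1): "the woman", (2, 2): "the youngest man",
     (3, 0): "the oldest woman", (3, 1): "the man", (3, 2): "the oldest man"},
]

def assign_matrix(phone_number, rng=None):
    """Randomly assign one selection matrix to a sample telephone number."""
    rng = rng or random.Random(phone_number)  # seeded only for reproducibility here
    return rng.choice(MATRICES)

def select_respondent(adults, men, matrix):
    """Look up the designated respondent from the two screening answers."""
    # Cap the counts so larger households fall into the largest matrix row.
    key = (min(adults, 3), min(men, 2))
    return matrix[key]
```

For example, a four-adult household with three men is capped to the (3, 2) cell of its assigned matrix; the interviewer then asks for that person by description.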


Journal ArticleDOI
TL;DR: Survey researchers have long been aware that asking people to participate in surveys, whether through interviews in person or by telephone or through a self-administered questionnaire, may entail a sacrifice of time as well as some psychological discomfort, depending on the nature of the inquiry.
Abstract: "Respondent burden" is a relatively recent concern for the survey profession, at least in the term's specific reference to the presumed hardships entailed in being a survey participant. Of course, survey researchers have long been aware that asking people to participate in surveys, through interviews in person or on the telephone, or through a self-administered questionnaire, might entail a sacrifice of time as well as some psychological discomfort, depending on the nature of the inquiry. In fact, warnings against overly long questionnaires or interviews surfaced as far back as the 1920s (e.g., Chapin, 1920), and continued to appear sporadically during the following decades (e.g., Young, 1939; Ruch, 1941). Despite these concerns, however, the profession generally felt that if a survey were competently fielded, with pleasant and tactful interviewers

175 citations


Journal ArticleDOI
TL;DR: Singer et al. as mentioned in this paper found that interviewers' age, the size of the interviewing assignment, and interviewers' expectations all had a strong effect on overall cooperation rates; the relation of experience to response rate was curvilinear in this sample.
Abstract: This study reports on two sets of findings related to interviewer effects, derived from a national RDD sample of the adult population. The first of these concerns the effect of interviewer characteristics and expectations on overall cooperation rates; the second, the effect of interviewer characteristics and expectations on item nonresponse and response quality. We found that interviewers' age, the size of the interviewing assignment, and interviewers' expectations all had a strong effect on overall cooperation rates; the relation of experience to response rate was curvilinear in this sample. Age and education have consistent but statistically insignificant effects on item nonresponse. The effect of interviewers' expectations on responses within the interview resembles that in earlier studies, but is less pronounced and less consistent. Eleanor Singer is a Senior Research Associate at the Center for the Social Sciences. Martin R. Frankel is Professor of Statistics at Baruch College, CUNY. Marc B. Glassman is an independent statistical consultant in New York City. The authors wish to thank Ed Blair, Charles F. Cannell, Howard Schuman, and Seymour Sudman for reading and commenting on an earlier draft of the paper. The research was made possible by grant SES-78-19797 to the senior author. Public Opinion Quarterly Vol. 47:68-83 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc. … interviews, where typically fewer interviewers take a much larger number of interviews. Consequently, the effect of each interviewer's performance on response rate and response quality is magnified many times.
At the same time, the fact that each interviewer on a telephone survey can be assigned to a random sample of respondents makes such effects easier to investigate and avoids the methodological weaknesses plaguing the studies by Singer and Kohnke-Aguirre and by Sudman et al., namely, the confounding of area and interviewer

160 citations


Journal ArticleDOI
TL;DR: Numerous methods have been proposed to estimate the attributes of nonrespondents (Daniel, 1975); some are appropriate only for certain types of surveys (e.g., list samples), while others can be used, with modification, across various methods of administration and sample frames.
Abstract: Methods for estimating nonresponse bias are reviewed and several methods are tried on the 1980 GSS. The results indicate that various estimating procedures are inappropriate and that even the more promising techniques can provide faulty estimates of nonresponse bias. By its nature, nonresponse bias is very difficult to assess accurately and no simple, certain method exists. Tom W. Smith is Senior Study Director, National Opinion Research Center, University of Chicago. This research was done for the General Social Survey Project directed by James A. Davis. The project is supported by the National Science Foundation, Grant No. SOC77-03279. This is an abridged version of GSS Technical Report No. 25 published by NORC, 1981. The author wishes to thank James A. Davis, Howard Schuman, and Stanley Presser for their comments. Public Opinion Quarterly Vol. 47:386-404 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc. … response rate, we do not know the nonresponse mean since we have no measure of Y among nonrespondents. Two alternatives are usually presented in discussing nonresponse: how to minimize nonresponse and how to estimate and correct for differences between the respondents and nonrespondents. In this paper we ignore the first alternative, accepting that a nonresponse rate of .25 is typical for good, state-of-the-art surveys (Smith, 1978; Davis et al., 1980; and Groves and Kahn, 1979). Instead, we will review the various existing approaches to estimating the characteristics of nonrespondents and then apply several of the proposed approaches to nonresponse on the 1980 GSS.
Measuring Nonrespondents and Assessing Nonresponse Bias
Numerous methods have been proposed to estimate the attributes of nonrespondents (Daniel, 1975). Some are appropriate for certain types of surveys (e.g., list samples only) while others can be used with modification across various methods of administration with various sample frames (e.g., from mail lists to RDD telephone). Attention will focus primarily on methods that are appropriate, or at least have been offered as appropriate, for face-to-face national surveys. Among other things, this eliminates list samples, where information about the respondent is known prior to the survey. Our review of nonresponse studies found nine major approaches to assessing and adjusting for nonresponse:
1. External population checks
2. Geographic/aggregate-level data
3. Interviewer estimates
4. Interviewing nonrespondents about nonresponse
5. Subsampling of nonrespondents
6. Substitution for nonrespondents
7. Politz-Simmons adjustment
8. Extrapolation based on difficulty
9. Conversion adjustments
Probably the simplest check is to compare sample estimates (usually distributions) to some universe figures or preferred sample estimates such as the U.S. Census or the Current Population Survey (Crossley and Fink, 1951; Stephen and McCarthy, 1958; Smith, 1979; and Presser, 1981). Strictly speaking, when using such a criterion comparison, one is not checking how much difference comes from nonresponse but how much comes from nonresponse and all other

154 citations
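The first approach in that list, the external population check, amounts to a side-by-side comparison of marginal distributions. A minimal sketch, using invented education percentages rather than actual 1980 GSS or Census figures:

```python
# Compare a sample's marginal distribution with a benchmark such as the
# U.S. Census or the Current Population Survey. All numbers here are
# invented for illustration; they are not GSS or Census values.
sample_pct    = {"less than high school": 28.0, "high school": 52.0, "college+": 20.0}
benchmark_pct = {"less than high school": 33.0, "high school": 49.0, "college+": 18.0}

def discrepancies(sample, benchmark):
    """Percentage-point gaps between sample and benchmark, by category."""
    return {cat: round(sample[cat] - benchmark[cat], 1) for cat in sample}

print(discrepancies(sample_pct, benchmark_pct))
```

As the abstract cautions, such gaps confound nonresponse with every other source of error (coverage, sampling, measurement), so a check of this kind bounds rather than isolates nonresponse bias.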


Journal ArticleDOI
TL;DR: The results suggest that the infusion of information into a social system via the mass media can close as well as open knowledge gaps and that motivation to acquire information in a specific knowledge domain is a factor controlling gap effects.
Abstract: The evaluation of a campaign to increase cardiovascular health knowledge indicates that within the treatment community, education was a significant predictor of knowledge before the campaign but was not a significant predictor after the campaign. Two variables related to motivation to acquire information about cardiovascular health (age and perceived threat of heart attack) were not significant predictors of knowledge before the campaign but were significant predictors afterwards. These results suggest that the infusion of information into a social system via the mass media can close as well as open knowledge gaps and that motivation to acquire information in a specific knowledge domain is a factor controlling gap effects.

136 citations


Journal ArticleDOI
TL;DR: Kearney et al. as mentioned in this paper found that the explicit consent procedure produced a sample approximately half the size of the eligible population that overrepresented white students while underrepresenting blacks and Asian Americans; they found no evidence of sample bias with respect to student gender, and the evidence regarding bias on academically related measures was mixed.
Abstract: The parents of an eligible sample of 1618 students in grades four through twelve were contacted to obtain written permission for their children to complete questionnaires related to alcohol and drugs. The distributions of students across the parental response categories (consent-denied, no-reply, or consent-granted) were compared on the student variables of sex, grade level, ethnic group, and reading and vocabulary test scores. The explicit consent procedure produced a sample that was approximately half the size of the eligible population and overrepresented white students while underrepresenting blacks and Asian Americans. There was no evidence of sample bias with respect to student gender, and the evidence regarding bias on academically related measures was mixed. Kathleen A. Kearney was a doctoral candidate at Washington State University when this work was completed; she is now self-employed in Dublin, Ireland. Ronald H. Hopkins is Professor and Chair of Psychology at Washington State University. Armand L. Mauss is Professor of Sociology at Washington State University. Ralph A. Weisheit was a postdoctoral research associate in the Social Research Center at Washington State University when this work was completed; he is now Assistant Professor of Criminal Justice Sciences, Illinois State University at Normal. This research was supported in part by Grant No. 5 H84 AA03734 from the National Institute on Alcohol Abuse and Alcoholism. The authors thank Don A. Dillman and J. Scott Long for their helpful comments on an earlier draft of this paper. Requests for reprints should be addressed to Ronald H. Hopkins, Department of Psychology, Washington State University, Pullman, Washington 99164. Public Opinion Quarterly Vol. 47:96-102 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.

98 citations


Journal ArticleDOI
TL;DR: The authors analyze some of the assumptions underlying most current research on television, illustrated by a general discussion of the NIMH report on Television and Behavior and specific discussion of "mainstreaming" and the effects of television violence.
Abstract: The authors analyze some of the assumptions underlying most current research on television. They emphasize the dependence on (1) an individual rather than an institutional level of analysis; (2) a model of research utilization that pays little explicit attention to where sources of leverage lie for changes in programming; (3) extremely simple models of the selection processes associated with different levels of television viewing; and (4) uncritical appraisals of the consequences of effects that many would call small or modest. These issues are illustrated by a general discussion of the NIMH report on Television and Behavior and specific discussion of "mainstreaming" and the effects of television violence. In 1972, POQ's editors invited Leo Bogart to prepare an extended review article of the Surgeon-General's Study of Television and Social Behavior (POQ 36:491-521). When the 10-year follow-up study was released by NIMH in 1982, the editors asked Thomas D. Cook, a distinguished psychologist noted for his research on television, to perform the same function.

87 citations


Journal ArticleDOI
TL;DR: The authors update Sigelman (1979), who discovered, contrary to earlier indications (Mueller, 1973), that the outcomes of presidential elections can be predicted with some accuracy on the basis of the president's rating in the final preelection popularity poll.
Abstract: This paper updates and extends findings reported by Sigelman (1979), who discovered, contrary to earlier indications (Mueller, 1973), that the outcomes of presidential elections can be predicted with some accuracy on the basis of the president's rating in the final preelection popularity poll. The 1980 election provides an additional case to work with: the eighth time an incumbent president has sought reelection since 1938, when the Gallup presidential popularity question was first asked. It is of obvious interest to see how closely this most recent case fits into the pattern established earlier. We also bring three previously ignored cases (the 1952, 1960, and 1968 elections) into the analysis by shifting the dependent variable from votes for the incumbent president to votes for the candidate of the incumbent president's party. This substitution is based on indications that presidential popularity has a powerful carryover effect on the outcome of midterm congressional elections (Tufte, 1975). If congressmen of the president's party are held responsible for the incumbent's performance, can we afford to overlook the possibility that the presidential candidate of the president's party is also judged accordingly?

86 citations
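The prediction described in that abstract reduces to a one-variable least-squares fit of the incumbent party's vote share on the final approval rating. A sketch with invented numbers; these are not the Gallup ratings or election returns analyzed by Sigelman or in this paper:

```python
# Simple least-squares fit: incumbent-party vote share vs. final
# preelection approval rating. Both series are INVENTED for illustration.
approval = [62.0, 39.0, 49.0, 57.0, 42.0, 50.0, 45.0, 58.0]  # final approval, %
vote     = [55.0, 45.0, 50.0, 54.0, 44.0, 49.0, 48.0, 53.0]  # incumbent-party vote, %

n = len(approval)
mx = sum(approval) / n
my = sum(vote) / n
slope = sum((x - mx) * (y - my) for x, y in zip(approval, vote)) / \
        sum((x - mx) ** 2 for x in approval)
intercept = my - slope * mx

def predict(rating):
    """Predicted incumbent-party vote share from a final approval rating."""
    return intercept + slope * rating
```

By construction the fitted line passes through the mean of both series; with real historical data the fitted slope and the residuals for each election would carry the substantive interest.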


Journal ArticleDOI
TL;DR: Herzog et al. as discussed by the authors examined telephone interviewing of older adults and compared it with face-to-face interviews, finding that older adults are less likely to participate in an interview when contacted by telephone than when contacted in person.
Abstract: This article examines telephone interviewing of older adults and compares it with face-to-face interviews. Specifically, the following issues are examined in several national surveys: (1) differences in age distributions between the samples of adults reached in both modes; (2) explanations for potential differences in age distributions; (3) differences between the two modes in demographic characteristics of the adults reached, in interview process, and in response quality, and how these mode differences vary by age of the respondents. Telephone surveys tend to underrepresent older persons, and older persons who do participate in a telephone survey are disproportionately well educated. Implications of the lower response rate among older persons are softened by the fact that response distributions across a range of questions show little difference by interview mode between older persons and persons of other age groups. A. Regula Herzog is Assistant Research Scientist at the Institute for Social Research and the Institute of Gerontology at the University of Michigan, Ann Arbor, Michigan. Willard L. Rodgers is Associate Research Scientist at the Institute for Social Research at the University of Michigan, Ann Arbor, Michigan. Richard A. Kulka is Senior Survey Methodologist at Research Triangle Institute, Research Triangle Park, North Carolina. This article is a revised and abbreviated version of three papers presented at the 33rd Annual Meeting of the Gerontological Society, San Diego, November 1980. This research was supported by USPHS Grant No. AGO2038 from the National Institute on Aging. The authors wish to thank Lynn Dielman and Mary Grace Moore for able research assistance; Charles Cannell, Philip Converse, Richard Curtin, Robert Groves, Robert Kahn, and the late Angus Campbell for data from several unreleased surveys; and Charles Cannell, Robert Groves, and Berit Ingersoll for helpful comments on an earlier version of this paper.
Public Opinion Quarterly Vol. 47:405-418 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc. … research has been directed specifically to the use of telephone surveys with older adults. This paper represents an initial effort in that direction. In the type of telephone interview survey considered here, a random-digit dialing method for identifying sample households is used, and interviews with a random adult in each sample household are conducted from a central location. (For a detailed presentation of telephone interviewing methodology see Groves and Kahn, 1979; for a discussion of the sampling procedures see Waksberg, 1978.) A comparison of telephone and face-to-face interview surveys must therefore consider several areas of potential differences between the two: (1) their ability to reach a representative sample of the older population; (2) the nature of the interview process itself; and (3) the quality of the responses obtained. In general, the representativeness of a sample may be jeopardized in two ways. First, the sample may be drawn inaccurately and/or from a frame which systematically excludes certain members of the population. Second, persons who are identified by sampling procedures as respondents may not participate in the survey, thereby introducing systematic bias. With respect to the first point, persons without a telephone are systematically excluded from samples of telephone subscribers. However, this constitutes less of a problem when sampling older persons than when sampling the total population because older persons are slightly more likely than younger persons to have a telephone (Thornberry and Massey, 1978).
With respect to the second point, response rates are generally somewhat lower for telephone interviews than similar interviews conducted face-to-face (Groves and Kahn, 1979). Moreover, older adults may be particularly likely to decline an interview on the telephone, since they are more likely than younger adults to have hearing problems (Corso, 1977), less likely to be used to the telephone, and likely to have less formal education. On the other hand, some older persons may be more likely to agree to participate in an interview when contacted by telephone than when contacted in person, because many of them are concerned about being victimized (Clemente and Kleiman, 1976) and interviews by telephone do not require them to admit a stranger to their home. In sum, it is difficult to predict how well telephone interviews will compare with face-to-face interviews in reaching the elderly population, since several potentially important factors apparently work in opposite directions. For several reasons the telephone interview process is expected to be more stressful and demanding than the face-to-face interview, particularly for older respondents. The failing sensory capacities of older persons and their concerns about their performance (Botwinick, 1978) may make an interview which relies entirely on auditory communication particularly stressful. Telephone interviews also limit the amount and nature of feedback that an interviewer can provide to put a respondent at ease and to make the task more personal (Singer, 1981), factors of importance for good learning performance among older persons (Botwinick, 1978).
Finally, telephone interviews often proceed at a more rapid pace than do face-to-face interviews (Groves and Kahn, 1979; Groves, 1978), and high speed is yet another factor known to be particularly detrimental to the perceptual and learning performance of older respondents (Botwinick, 1978; Corso, 1977). This paper examines telephone interviewing with older adults and compares this mode with face-to-face interviewing. Specifically, it addresses the following three issues: (1) differences in age distributions between the samples of adults that are reached by both modes; (2) explanations for potential differences in age distributions; (3) differences between the two modes in demographic characteristics of the adults that are reached, in interview process, and in response quality, and how these mode differences vary by age of the respondents.

Journal ArticleDOI
TL;DR: Lazarsfeld as discussed by the authors pointed out that historians' explanations of social behavior often depend on imputations of attitudes to crucial actors, yet they usually have weaker evidence concerning attitudes than any other feature of their accounts.
Abstract: When Paul Lazarsfeld gave his 1950 presidential address to the American Association for Public Opinion Research, he made his topic "The Obligations of the 1950 Pollster to the 1984 Historian." In that characteristically wide-ranging talk, Lazarsfeld closed in on a simple but important point: Historians' explanations of social behavior often depend on imputations of attitudes to crucial actors, yet they usually have weaker evidence concerning attitudes than any other feature of their accounts. The pollster of 1950, said Lazarsfeld, being a specialist in the systematic documentation of attitudes, could greatly strengthen the position of future historians. "If for a given period we not only know the standard of living, but also the distribution of ratings on happiness and personal adjustment," he said, "the dynamics of social change will be much better understood" (Lazarsfeld, 1982:94). By 1984, Lazarsfeld thought, instead of the constant obliteration of the past described in George Orwell's totalitarian nightmare, we might have a kind of social bookkeeping that would integrate behaviors and attitudes into a better understanding of social change. The analysis of public opinion, he suggested, might even become a predictive science, a science of sentiments (Lazarsfeld, 1982:95).

Journal ArticleDOI
TL;DR: Nederhof as mentioned in this paper investigated the effect of nonmonetary incentives on the response rate of mail surveys in the Netherlands and found that the incentive produced no response bias and little volunteer bias.
Abstract: Two studies were undertaken on the effects of including a material nonmonetary incentive in mail surveys, using various samples of the general public in the Netherlands. The results show that nonmonetary incentives produce a higher initial response rate, but follow-ups reduced the effect of the incentive to a nonsignificant ratio. Inclusion of the incentive produced no response bias and little volunteer bias. Results from these studies offer a possible explanation for why past studies on nonmonetary incentives have often shown positive effects: they were conducted using methods that produced low response rates. When methods that produce high response rates are used, the effect of nonmonetary incentives on response rate disappears. Finally, the use of monetary incentives in mail surveys with a high base response is discussed. Anton J. Nederhof is a Research Fellow at the Center for Social Science Research, University of Leyden, The Netherlands. This paper was written while the author was a Visiting Fulbright Scholar at the Departments of Rural Sociology and Sociology, Washington State University, Pullman, Washington. The author wishes to thank Don A. Dillman and Leo Th. J. van der Kamp for their comments on an earlier draft of this paper. Public Opinion Quarterly Vol. 47:103-111 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc. A number of studies have shown that nonmonetary incentives raised returns relative to a control group (Brennan, 1958; Goodstadt et al., 1977; Hansen, 1980; Houston and Jefferson, 1975; Watson, 1965; Whitmore, 1976), although not all increases were statistically significant. All experiments were geographically limited to the United States.
Most of the studies sampled specific homogeneous groups, making generalization of results difficult or unwarranted (Cook and Campbell, 1979). In addition, response rates of no-incentive control groups were generally low. No experiments have been done with control groups with high response rates, such as those obtained by implementing a set of methods developed by Dillman (1978), resulting in returns of more than 70 percent. Thus, it remains unclear whether nonmonetary incentives would lead to a higher response rate in cases when a relatively high base rate can be obtained without use of the incentive. Another issue which deserves attention is the possibility that the use of incentives affects the validity of findings. First, incentives may induce some (groups of) respondents to participate, and others not. This type of bias will be called volunteer bias (Rosenthal and Rosnow, 1975). A second type of bias may exist independent of the first type: incentives may affect subjects' answers. This type of bias is called response bias. Both types of bias have been found with monetary incentives (Gelb, 1975; Rush et al., 1978). Little is known about the biasing effects, if any, of nonmonetary incentives (Whitmore, 1976; Brown and Coney, 1977). The two present studies, executed in a European country, addressed three issues: (1) the effects of a nonmonetary incentive on response rate under conditions of a high base response rate, (2) inducement of response and volunteer bias by inclusion of an incentive, and (3) the cross-cultural effectiveness of methods developed in the United States.

Journal ArticleDOI
TL;DR: Bishop et al. as mentioned in this paper found that the wording of a filter question can make a significant difference in the percentage of "don't know" (DK) responses elicited by an item, especially with topics that are more abstract or less familiar to survey respondents.
Abstract: Extending previous work, the authors find that the wording of a filter question can make a significant difference in the percentage of "don't know" (DK) responses elicited by an item, especially with topics that are more abstract or less familiar to survey respondents. They also find, however, that the content of an item can have a substantial, independent effect on DK or "no opinion" responses, regardless of how the filter question is worded. In general, it appears that the less familiar the issue or topic, the greater the increase in DK responses produced by adding a filter. Even more important, the analysis shows that filtering can in some instances dramatically affect the conclusions a pollster would draw about the distribution of public opinion on an issue. Indeed, such effects may occur more often than has previously been suspected, though the circumstances under which they emerge remain elusive. The authors suggest that such effects may become amenable to analysis by probing respondents about "what they had in mind" as they answered the question. George F. Bishop is Associate Professor of Political Science and a Senior Research Associate at the Institute for Policy Research at the University of Cincinnati. Robert W. Oldendick is Assistant Director and Alfred J. Tuchfarber is Director of the Institute for Policy Research, University of Cincinnati. This research was supported by a grant from the National Science Foundation (SOC 78-07407). The authors want to thank the anonymous reviewers for their useful comments and suggestions for revising the original manuscript. Public Opinion Quarterly Vol. 47:528-546 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc. … about political and social issues, they found that a filter will generally increase the percentage of "don't know" (DK) or "no opinion" responses to an item by about 20-25 percent. Furthermore, their analysis indicates that these increments in DK responses do not depend on the content of an issue, and they did not find any relationship between the percentage of DKs which were volunteered on an issue in the absence of an explicit filter question (the standard form) and the percentage of respondents removed by adding one. Yet they did discover that the wording of a filter can make a substantial difference in the percentage of respondents who say they have "no opinion." A filter, that is, which emphasizes the frequency or acceptability of not having an opinion on an issue will screen out many more people than one which does not. Surprisingly, perhaps, their research also suggests that in most instances filtering will have little impact on the distribution of substantive responses to an item once the DKs are excluded from the analysis. The use of filter questions in their experiments, moreover, did not appear to have any significant influence on the magnitude of association between substantive responses to issues and such demographic variables as age, sex, and education. A researcher would, in other words, draw essentially the same conclusion about the nature and determinants of public opinion on an issue on the basis of either a filtered or an unfiltered form (see Schuman and Presser, 1981:126-28,
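The quantities at issue in those filter experiments are simple proportions: the DK percentage under each question form, the increment the filter produces, and the substantive split once DKs are dropped. A sketch with invented response counts, not data from the experiments themselves:

```python
# Invented response counts for a standard (unfiltered) and a filtered
# form of the same item; chosen only to illustrate the arithmetic.
standard = {"favor": 430, "oppose": 390, "dk": 80}
filtered = {"favor": 320, "oppose": 300, "dk": 280}

def pct_dk(form):
    """Percent answering 'don't know' / 'no opinion'."""
    return 100.0 * form["dk"] / sum(form.values())

def pct_favor_substantive(form):
    """Percent favoring, among substantive responses only."""
    return 100.0 * form["favor"] / (form["favor"] + form["oppose"])

dk_increase = pct_dk(filtered) - pct_dk(standard)
shift = pct_favor_substantive(filtered) - pct_favor_substantive(standard)
```

With these made-up counts the filter adds roughly 22 points of DK while the substantive split barely moves, mirroring the 20-25 percent increments and usually stable distributions reported above; the paper's point is that this stability sometimes fails dramatically.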

Journal ArticleDOI
TL;DR: The most common ways of obtaining rare population subjects have been by screening households in known minority communities, recruitment from lists of persons known to belong to the group, such as minority group organization lists, and selecting subjects on the basis of definitive demographic characteristics such as physical appearance, language, and names as mentioned in this paper.
Abstract: BECAUSE members of minority groups constituting a very small proportion of a national population are difficult and costly to locate using standard probability sampling, social scientists interested in studying small minority groups have frequently had to rely on nonprobability sampling methods. Some of the most common ways of obtaining rare population subjects have been by screening households in known minority communities, recruitment from lists of persons known to belong to the group, such as minority group organization lists, and selecting subjects on the basis of definitive demographic characteristics, such as physical appearance, language, and names. Yet the use of such techniques makes the representativeness of the selected sample and the research results questionable. If research projects on small population groups must rely on nonprobability methods of sampling, therefore, an attempt should be made to determine which of those methods yields the most unbiased


Journal ArticleDOI
TL;DR: Levy et al. as mentioned in this paper examined the differences in polling strategies and performance of the exit poll method on the basis of the 1980 elections and discussed how election day survey data are used by journalists.
Abstract: Methodological details are presented of a survey research technique known as exit or election day polling. Using this technique, major American news organizations collect and analyze voting and attitude data from samples of persons who have just cast ballots. On the basis of the 1980 elections, differences in polling strategies and performance of the exit poll method are examined. How election day survey data are used by journalists is discussed. Mark R. Levy is an Associate Professor, College of Journalism, University of Maryland. Some of the data utilized in this article were made available by the Inter-university Consortium for Political and Social Research. Neither the collectors of that data nor the Consortium bear any responsibility for the analyses presented here. The author would like to thank those exit pollsters who so graciously shared their time and "secrets." Public Opinion Quarterly Vol. 47:54-67 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.
and, to a lesser degree, academic students of the election process, had relied primarily on preelection surveys and vote returns from selected precincts to interpret the vote (Bicker, 1978; Bohn, 1980). But many election analysts, within and outside the media, believed those two methods to be seriously flawed. Preelection surveys, for instance, suffered from the classic "screening" problem of how to identify likely voters (Crespi, 1977), and more important, because of publication deadlines, they often required early field work closeout and were thus unlikely to capture late shifts in voter sentiment. Precinct-level vote data too were recognized as having substantial journalistic and scientific limitations.
First, election returns contained no explicit information about voter attitudes and perceptions, although journalists often inferred them anyway. Second, so-called tag or analytical precincts were often selected through purposive sampling of election districts with disproportionate concentrations of voters having shared demographic characteristics (race, religion, income, etc.) or with strong partisan voting histories. See, for example, Levy and Kramer (1972). Aware of the ecological fallacy (Robinson, 1950), concerned that "ghettoized" voters did not represent all voters sharing a given demographic attribute, and faced with increasing difficulty in locating homogeneous precincts, media election analysts turned to election day polling. By 1980, the three television networks, the Associated Press, The New York Times, and The Los Angeles Times were all surveying voters as they left the polls.2 This article outlines the methodology of election day polling and compares the different approaches taken in four major polls.3 Examples and data are drawn from the 1980 presidential primaries in New Hampshire, Florida, and California, and from the November 4 general election. This report focuses exclusively on the presidential campaign, although election day polls using virtually the same methods have been and continue to be routinely conducted in state and local contests as well. The elections selected for analysis here were chosen because the author believed that these contests represent the range of challenges faced by the profession. New Hampshire, for instance, is a small,
2 Although six news organizations were involved in election day polling, there were actually only three national election day surveys in November 1980, since CBS News conducted its polls in association with The New York Times, and NBC News teamed up with AP. The L.A. Times polled only during the presidential primaries.
3 In addition to these four election day polls, Teichner Associates, Inc.
of Princeton, N.J. has conducted approximately 20 exit polls since 1979 for broadcast clients interested in local and statewide elections in major television markets nationwide.
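Exit polls of this era typically approached every k-th voter leaving a sampled precinct. As a hypothetical illustration (the function name and interval rule below are ours, not drawn from any of the four polls discussed), systematic selection from the exit stream can be sketched as:

```python
import random

def systematic_exit_sample(voters, target_n, expected_turnout):
    """Approach every k-th exiting voter after a random start, sizing
    the interval k so that roughly target_n voters are selected."""
    k = max(1, expected_turnout // target_n)  # sampling interval
    start = random.randrange(k)               # random start in [0, k)
    return [v for i, v in enumerate(voters) if i % k == start]

# Stand-in for the stream of voters leaving one precinct.
voters = [f"voter_{i}" for i in range(1200)]
sample = systematic_exit_sample(voters, target_n=100, expected_turnout=1200)
# With turnout exactly as expected, k = 12 and 100 voters are approached.
```

In a real poll the turnout guess is imperfect and refusals must be logged, so achieved samples drift from the target; the random start is what keeps the procedure a probability sample within the precinct.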

Journal ArticleDOI
Abstract: ALTHOUGH the predominant method of gathering information about drug use and attitudes is self-report, many researchers are unconvinced of the validity of drug surveys. One assumption underlying this sentiment is that respondents may perceive that they are at risk of censure. In order to reduce this perceived risk, respondents may deliberately misrepresent their behavior by reporting that they use fewer drugs, or use drugs more infrequently than they do in actuality. Because invalidity of self-reports is linked with respondent fear of self-incrimination, it is common practice to disguise the identities of respondents or to assure them of the confidentiality of their responses. Several methods of identity protection are currently used to increase the response validity of self-report data. However, the researcher must accept certain limitations on the research design that are dictated by some of these methods. Questionnaires that are responded to anonymously, i.e., with no

Journal ArticleDOI
TL;DR: In this article, the context effects due to placing questions in contiguous positions, with no intervening items, as against having them simply appear in the same questionnaire, are investigated and shown to be both strong and stable over time.
Abstract: THE IMPORTANCE of context effects in survey questionnaires has been pointed up in several recent reports. In the present paper we start from one of the most firmly established such effects and address a further important issue: to what degree are context effects due to placing questions in contiguous positions, with no intervening items, as against having them simply appear in the same questionnaire. Looked at from a practical standpoint, can investigators reduce context effects by interposing neutral items between questions that are known or thought likely to influence one another? We focus on a pair of items concerning Communist and American reporters where the context effect has been shown to be both strong and stable over time:

Journal ArticleDOI
TL;DR: Tucker et al. as mentioned in this paper examined interviewer effects for selected items from a number of national polls conducted by CBS News and The New York Times in 1980 and found that these effects were generally quite small and somewhat inconsistent from poll to poll.
Abstract: The interviewer effects for selected items from a number of national polls conducted by CBS News and The New York Times in 1980 were examined. These effects were found to be generally quite small and somewhat inconsistent from poll to poll. The inconsistencies were explained by variable associations with the nonrandom regional distribution of respondents and the political context in which the measurements were obtained. There was some evidence of respondent-interviewer interactions for certain items. Clyde Tucker is Assistant to the Director of the CBS News Election and Survey Unit. An earlier draft of this article was presented at the Annual Conference of the American Association for Public Opinion Research, Buck Hill Falls, Pennsylvania, May 28-31, 1981. The author wishes to express his appreciation for advice given on earlier drafts by Warren Mitofsky, Kathleen Frankovic, Robert Groves, Murray Edelman, Mohammed Yusuf, and two anonymous reviewers. Thanks also go to Solomon Barr, Wayne Reedy and Carolyn Stroock for their assistance in the statistical analysis and preparation of the manuscript. Public Opinion Quarterly Vol. 47:84-95 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.
(Hanson and Marks, 1950; Stock and Hochstim, 1951; Kish, 1962; Sudman and Bradburn, 1974; Bailey et al., 1978; Groves and Kahn, 1979; Groves and Magilavy, 1980). Not only has the existence of interviewer effects been established, but it has also been demonstrated that the magnitude of these effects differs from item to item. As Groves and Magilavy point out, originally it was thought the difference turned on whether the question was factual or nonfactual.
They suggest, however, that the effects are actually a function of the amount of interviewer interference which is possible. This interference can take a variety of forms and can be a factor in both factual and nonfactual items. Questions which seem to be most susceptible to interviewer interference are those which concern sensitive topics leading to resistance in asking or responding, open-ended questions (especially those involving probes), and questions requiring a rating or subjective assessment from the interviewer. A number of studies have also found that interviewer effects are related to characteristics of the interviewer and respondent. Older respondents are most open to interviewer effects (Hanson and Marks; Groves and Magilavy). Interviewer effects seem to be related to interviewer competence (determined in a variety of ways) and an interviewer's prior expectations of survey results. There is also evidence to suggest that younger interviewers are less susceptible to interviewer effects. Finally, several studies have shown that interviewer effects can be the product of an interaction between interviewer, respondent, and item characteristics (Athey, et al., 1960; Williams, 1964; Dohrenwend, et al., 1968-69; Schuman and Converse, 1971; Hatchett and Schuman, 1975-76; Freeman and Butler, 1976; Schaffer, 1980; Campbell, 1981). Perhaps the most interesting aspect of these studies is the methodological issues they raise. Estimating interviewer effects has turned out to be a very complicated task. A large part of the problem is design limitations. Most surveys, whether they are done by telephone or personal interview, are constrained by time and cost. Furthermore, they are not usually conducted for methodological purposes so that ease of estimation of interviewer effects is not considered a high priority, and few are willing to jeopardize the quality of a survey by altering procedures in order to measure what may be relatively small effects. 
There are also the practical problems associated with any survey in a real-life situation which hinder the measurement of interviewer effects. In addition to the problems imposed by design limitations, there are the assumptions of the statistical procedures used in estimating interviewer effects. Often the assumption of equal variances across interviewers

Journal ArticleDOI
TL;DR: Hans et al. as discussed by the authors conducted a survey of Delaware residents shortly after the Hinckley trial's conclusion and found that the verdict was perceived as unfair, the psychiatrists' testimony at the trial was not trusted, and the vast majority thought that the insanity defense was a loophole.
Abstract: Public furor over the Not Guilty by Reason of Insanity verdict in the trial of John Hinckley, Jr. already has stimulated legal changes in the insanity defense. This study documents more systematically the dimensions of negative public opinion concerning the Hinckley verdict. A survey of Delaware residents shortly after the trial's conclusion indicated that the verdict was perceived as unfair, Hinckley was viewed as not insane, the psychiatrists' testimony at the trial was not trusted, and the vast majority thought that the insanity defense was a loophole. However, survey respondents were unable to define the legal test for insanity and thought Hinckley would be confined only a short period of time, contrary to the estimates of experts. These findings, in conjunction with other research showing the public is not well informed about the insanity defense, underscore the importance of examining determinants of opinion about the insanity defense before additional reform is undertaken. Valerie P. Hans is an Assistant Professor of Criminal Justice and Psychology, and Dan Slater is an Assistant Professor of Communication, at the University of Delaware. Correspondence concerning this article should be addressed to Valerie P. Hans, Division of Criminal Justice, University of Delaware, Newark, DE 19711. Public Opinion Quarterly Vol. 47:202-212 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.
Delaware legislature passed a law providing a Guilty But Mentally Ill verdict alternative in insanity cases.
Proposals to abolish or restrict the insanity verdict are before legislatures in other states, and the White House has announced its plan for revision of the insanity defense (Hoffman, 1982; Philadelphia Inquirer, 1982; Putzel, 1982; New York Times, 1982a). The Hinckley trial promises to be a benchmark in the reform of the insanity defense. The purpose of the study reported here is to provide a more complete account of public opinion about the Hinckley trial and insanity defense. In an attempt to understand the determinants of reactions to the trial, the paper also explores demographic and attitudinal correlates of opinions about the Hinckley verdict. Previous research on perceptions of the insanity defense is sparse, but the available literature indicates that the public always has taken a dim view of the defense (e.g., Moran, 1981). Public opinion polls consistently have shown that a majority of Americans believe the insanity defense is a loophole that allows too many guilty people to go free (Bronson, 1970; Fitzgerald and Ellsworth, 1980; Harris, 1971). Perhaps as a consequence of this perception, the reluctance of juries to find defendants NGRI is legendary. More detailed analyses of perceptions of the insanity defense or the criminally insane show decidedly negative attitudes but widespread misconceptions (Howell, 1982; Pasewark, 1981; Steadman and Cocozza, 1977). Therefore, we expected our survey to reveal: (1) considerable negativity about the insanity defense in general, and the Hinckley verdict in particular, and (2) poor to moderate knowledge about the insanity defense.

Journal ArticleDOI
TL;DR: For example, Lippmann as discussed by the authors pointed out that America's attitude towards its multitude of ethnic groups follows the credo of Animal Farm, "All animals are equal, but some animals are more equal than others." Even when our laws have lived up to our ideals of ethnic equality, our folkways have been ethnocentric, replete with negative stereotypes, discrimination, and social exclusiveness.
Abstract: (S)tereotypes are loaded with preference, suffused with affection or dislike, attached to fears, lusts, strong wishes, pride, hope. Whatever invokes the stereotype is judged with the appropriate sentiment. Except where we deliberately keep prejudice in suspense, we do not study a man and judge him to be bad. We see a bad man. We see a dewy morn, a blushing maiden, a sainted priest, a humorless Englishman, a dangerous Red, a carefree bohemian, a lazy Hindu, a wily Oriental, a dreaming Slav, a volatile Irishman, a greedy Jew, a 100 percent American. -WALTER LIPPMANN, Public Opinion (1922) AMERICA'S attitude towards its multitude of ethnic groups follows the credo of Animal Farm, "All animals are equal, but some animals are more equal than others." We proclaimed that all men are created equal in the Declaration of Independence but recognized slavery in the Constitution, opened the golden door to the "huddled masses" but barred the entrance with national origin quotas and gentlemen's agreements, promised equal protection of the laws but in law and custom discriminated against minorities, and declared with Justice Harlan, "There is no caste here. Our Constitution is color-blind," but upheld Jim Crow laws. Even when our laws have lived up to our ideals of ethnic equality, our folkways have been ethnocentric, replete with negative stereotypes, discrimination, and social exclusiveness. Two main principles explain most of the variation in the social ordering of ethnic groups. First, race breaks ethnicities into two large distinct groups, Europeans and non-Europeans; there is virtually no overlap between these groups, with Europeans filling all the top and middle positions and non-Europeans making up the bottom third. Second, within the large European group, the period of predominant immigration orders ethnicities. 
At the top of the lists are the members of the old stock, host culture-the British and derivative WASPs who dominated the initial waves of colonial immigration, supplied the Founding Fathers, and established their culture and institutions as the cornerstone of American society. Next come the middle stock groups such as Germans, Irish, and Scandinavians who immigrated to America in the mid-nineteenth century, also largely from the northwest quadrant of Europe. They are followed by Europeans from the three remaining quadrants, the Italians, Greeks, Poles, Russians, and Jews who came to America in the late nineteenth and early twentieth

Journal ArticleDOI
TL;DR: Sigelman et al. as mentioned in this paper analyzed data from a series of nationwide polls conducted between 1977 and 1979 and found that the longer a president stays in office, the more decisions he makes, and the more people he antagonizes.
Abstract: According to the "expectation/disillusion" interpretation of the decline of presidential popularity over time, popularity declines as unrealistically high expectations of presidential performance inevitably give way to more realistic assessments. This paper puts that interpretation and several specific aspects of it to the test through analysis of data from a series of nationwide polls conducted between 1977 and 1979. Lee Sigelman is Professor and Chairman, and Kathleen Knight is Assistant Professor, in the Department of Political Science, University of Kentucky. The data employed in this paper were made available by the Inter-university Consortium for Political and Social Research. The data for CBS News/New York Times Polls, 1977-1979, were originally collected by CBS News and the New York Times. Neither the collectors of the original data nor the Consortium bear any responsibility for the analyses or interpretations presented here. We are grateful to Richard Brody and anonymous reviewers for their helpful comments on an earlier draft of this paper. Public Opinion Quarterly Vol. 47:310-324 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.
of this paper is to test what seems to us to be the most plausible explanation that has yet been offered, the so-called "expectation/disillusion theory" (Stimson, 1976). Explaining the Growth of Presidential Unpopularity What accounts for the large-scale erosion in public support that has beset each postwar presidency except Eisenhower's and the prematurely terminated Kennedy's? Mueller, adapting one of Anthony Downs's (1957) ideas, sought the answer in an antipresidential "coalition of minorities."
According to this explanation, the longer a president stays in office, the more decisions he makes, and the more decisions he makes, the more people he antagonizes. Even if each decision gains widespread acceptance, each is also likely to displease at least a few people. "A clever opposition, under appropriate circumstances, could therefore form a coalition of these intense minorities until it had enough votes to overthrow the incumbent" (Mueller, 1973: 205). In our view, this explanation falls short in several respects. In the first place, a coalition of minorities cannot form without being abetted by a "clever," well-organized opposition, and in recent decades the out-party has certainly not always been well-organized, let alone clever. Irrespective of the quality of the opposition, a coalition of minorities should emerge only under "appropriate" circumstances. Mueller did not indicate what these circumstances might be, but at the very least they ought to include a public that is well informed and highly issue-oriented, along with a president who is either sufficiently willing or sufficiently inept to make an incessant series of decisions which alienate different segments of his support coalition. There is also reason to believe that it is not presidential decisions per se but the results of these decisions which affect the president's public standing (Kernell, 1978:521). If this is true, then an explanation which focuses on presidential decision making seems somewhat misdirected. According to a second explanation, length of time in office has no substantive meaning of its own, but simply provides the context within which various phenomena, such as economic fluctuations, wars, and international rally points, occur. It is these time-based phenomena and not time itself which influence presidential popularity (Kernell, 1978). 
Consistent with this view, Kernell (1978:520) reports that when the effects of various time-based phenomena are held constant, the passage of time no longer has any impact on presidential popularity. However, Kernell's argument does not explain why popularity almost always declines over time. The only answer that Kernell's interpretation seems to permit is that substantive time-based phenomena (i.e., occurrences in the domestic and international arenas) invariably begin to deteriorate very early in the president's term and continue to do so until some point in the third year-not, we think, a very realistic assumption. The real problem is to determine why the reaction to the substance of presidential actions nearly always has a cumulatively negative impact on presidential popularity. Mueller's "coalition of minorities" interpretation provides an answer, but one which addresses the individual-level opinion dynamics underlying aggregate popularity change in a very oblique fashion. This brings us to an alternative interpretation which has been most clearly articulated by James Stimson (1976) but which, as Stimson acknowledged, is derived from Mueller's War, Presidents and Public Opinion. The basic idea underlying this interpretation is that "in the process of being elected, the president invariably says or implies that he will do more than he can do, and disaffection of once bemused supporters is all but inevitable" (Mueller, 1973:206). In contrast to the "coalition of minorities" interpretation, which places the blame squarely on the president's shoulders, Stimson (1976) argued that the erosion of presidential popularity is impervious to presidential initiatives and is best understood as a result of the inevitable disjuncture between presidential promise and performance.
Stimson assumed, on the basis of considerable evidence, that many Americans are poorly informed and have no well-developed policy preferences. In these circumstances, the "utter simplicity" of campaign pledges contributes to the spread of naive expectations about what a candidate will be able to accomplish if he is elected. In

Journal ArticleDOI
TL;DR: In this article, the authors compared two alternative respondent selection procedures for both validity and efficiency in a survey involving over 2,500 United States households, which was performed by Chilton Research Services (CRS) using strictly proportionate sample techniques.
Abstract: FREQUENTLY, telephone surveys are intended to represent the general adult population. In such cases, only one respondent is interviewed in each household. To avoid biased results, a randomized respondent selection procedure is generally used. Because Chilton Research Services (CRS) conducts many such consumer surveys, we are acutely interested in developing procedures which are both valid and efficient. This study was designed to compare two alternative respondent selection procedures for both validity and efficiency. The test of these two procedures was superimposed on a survey involving over 2,500 United States households. CRS was contracted by SRI International to conduct a combination telephone and mail survey. Using strictly proportionate sample techniques, CRS developed a national probability sample of telephone households. These were contacted by phone, and randomly selected respondents were asked to complete and return a mail questionnaire. In the telephone contact phase, one respondent was randomly selected from all adults living in the household. When the selected respondent was unavailable, interviewers would determine a conve
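Whatever selection procedure a survey house adopts, the goal is to give each adult in a contacted household a known, equal chance of selection. A minimal sketch of the simplest such rule, drawing uniformly from the household's adult roster (the roster and function name are hypothetical, and this is not either of the two procedures CRS compared; a real survey would also reweight for household size, since adults in larger households are selected less often):

```python
import random

def select_adult(adults):
    """Pick one respondent uniformly at random from the household's
    adult roster, so each adult has probability 1/len(adults)."""
    if not adults:
        raise ValueError("no eligible adults in household")
    return random.choice(adults)

random.seed(7)  # fixed seed only so the sketch is reproducible
respondent = select_adult(["adult_1", "adult_2", "adult_3"])
```

The two procedures tested in the study differ mainly in how the interviewer elicits the information needed to build this roster, which is where intrusiveness, refusals, and cost enter.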

Journal ArticleDOI
TL;DR: The change in U.S.-Soviet relations can be traced in American public opinion as clearly as in official diplomatic announcements and news stories as discussed by the authors, and the public has also judged that relations between the United States and the Soviet Union have deteriorated since the early 1970s.
Abstract: DURING the first half of the 1970s detente warmed Soviet-American relations. A series of major treaties from the SALT I accords in 1972 to the Helsinki Agreements in July 1975 raised the promise of peaceful coexistence and normalized relations. Since then Soviet-American relations have chilled in the face of the huge Soviet arms buildup, Russian-Cuban adventurism in Africa, the Soviet invasion of Afghanistan, and Communist repression in Poland, and commentators are talking of a second cold war. The change in Soviet-American relations can be charted in American public opinion as clearly as in official diplomatic announcements and news stories. From intense dislike of the Russians during the peak of the Cold War of the 1950s, American favorableness toward the Soviets increased until in 1973 a majority of Americans had rather neutral feelings and nearly a fifth liked the Soviets. Since 1973 favorable opinion of the Soviet Union has fallen sharply, reaching a low point in the immediate aftermath of the invasion of Afghanistan. Likewise, negative ratings of Communism as a form of government and concern about Russia and Communism have increased monotonically since the early 1970s. The public has also judged that relations between the United States and the Soviet Union have deteriorated since the early 1970s. The public evaluated President Carter's approach as tending to be too soft, and at least initially has voiced much more satisfaction with President Reagan's harder line toward the Soviets.2

Journal ArticleDOI
TL;DR: Kandel et al. as discussed by the authors found that women who were reinterviewed were less deviant than the non-interviewers, while the opposite was observed among women, and the paradoxical finding for females may result from changing marital status in that particular period of the life cycle.
Abstract: A nine-year follow-up of former adolescents reveals sex differences in the relative deviance and drug involvement of individuals lost to the panel in young adulthood. As expected, men who were reinterviewed were less deviant than the noninterviewed, while the opposite was observed among women. Specification by race indicates that the female pattern applies especially to nonwhites, but all women who are reinterviewed, irrespective of race, are no less deviant than the nonreinterviewed. The paradoxical finding for females may result from changing marital status in that particular period of the life cycle and an inverse relationship between delinquency and marriage. Denise Kandel is Professor of Public Health in Psychiatry, in the Department of Psychiatry, Columbia University, and Research Scientist, New York State Psychiatric Institute. Victoria Raveis is Staff Associate, in the School of Public Health, Columbia University. John Logan is Research Associate in the Department of Psychiatry, Columbia University, and Research Scientist in the New York State Psychiatric Institute. This work was partially supported by research grant DA01097 and ADAMHA Research Scientist Award DA00081 from the National Institute on Drug Abuse, the Center for Socio-Cultural Research on Drug Use of Columbia University, and grants from the William T. Grant Foundation and the John D. and Catherine T. MacArthur Foundation. Address requests for reprints to: D. Kandel, 722 West 168th Street, New York, N.Y. 10032. Public Opinion Quarterly Vol. 47:567-575 © 1983 by the Trustees of Columbia University. Published by Elsevier Science Publishing Co., Inc.
men (U.S. Dept. of Labor, 1974; Mott, et al., 1977), although Clarridge, et al.
(1978) report no sex differences in a 17-year follow-up of former high school students. Furthermore, as compared to those who are reinterviewed, those who are not reinterviewed perform less favorably on various measures, whether involving academic, psychological, or social functioning, and are also generally more deviant. For example, youths lost to a panel have been found to be more likely to be using drugs (Kandel, 1975b; Kandel, et al. 1978; Josephson and Rosen, 1978), to hold less favorable attitudes toward school (Josephson and Rosen, 1978), to drop out of school (Bachman, et al., 1971), and are less likely to be enrolled in school or employed than adolescents who are retained (U.S. Dept. of Labor, 1974). In adult samples, however, no relationship has been reported between initial health status and subsequent reinterviewing whether after nine years (Berkman and Syme, 1979) or 20 (Singer, et al., 1976). By their nature, longitudinal studies can span various phases of the life cycle in which important changes take place in individuals' roles and statuses, beyond those associated with aging per se. We propose that these status changes can affect the difficulty involved in locating and recontacting members of the cohort, so that the same attribute can be differentially related to reinterviewing rate at different stages of the life cycle. We here present a seemingly paradoxical finding, namely, that among young women, those lost to a panel are no less conforming and in some cases are even more conforming than those who continue their participation. An interpretation in terms of change in marital status in young adulthood is presented that may resolve the paradox.




Journal ArticleDOI
TL;DR: The results of these surveys show an increase in American sympathy for Israel during and immediately after the 1967 and 1973 wars and after the withdrawal of the Israeli forces from the Sinai at the end of April 1982 as discussed by the authors.
Abstract: Since its inception, the state of Israel has been repeatedly at war with its Arab neighbors: in 1956, the Suez crisis and the Israeli occupation of the Sinai; in June 1967, the Six-Day War; in October 1973, the Yom Kippur War; and in 1982, the invasion of Lebanon by Israeli forces. In this issue of The Polls we present results of opinion surveys in the United States and in some Western European countries on sympathy for Israel and the Arab countries, on the Palestinians and the PLO, on arms supplies to the countries of the Middle East, and on recent developments in Lebanon. The question of support for Israel or for the Arab countries was frequently put to the public by Gallup from 1967 through 1982. The results of these surveys show an increase in American sympathy for Israel during and immediately after the 1967 and 1973 wars and after the withdrawal of the Israeli forces from the Sinai at the end of April 1982. The percentage of those sympathizing with Israel was fairly constant during the intermediate periods. The invasion of Lebanon on June 6, 1982 did not have the favorable effect on public opinion in the United States that earlier wars had elicited: there was hardly any change in the sympathies expressed in favor of Israel and of the Arab countries. But some days after the massacres in the Palestinian camps of Sabra and Shatila (September 18-19, 1982), a Gallup poll conducted for Newsweek revealed a decline in the sympathies of the American people toward Israel: 32 percent were more sympathetic to Israel (vs. 49 percent in July 1981), and 28 percent were more sympathetic to the Arab nations (vs. 10 percent in July 1981). According to another Gallup survey, favorable opinion of Israel appears to have declined even before the massacres in the Palestinian camps.
In 1981, 75 percent of Americans had a favorable opinion of Israel, but by mid-August 1982 this percentage had dropped to 56, lower than it had been at any point during the preceding 20 years. In the countries of Western Europe, sympathy for Israel has been declining since about 1973, although not in favor of the Arab countries; rather, a preference for neutrality is being expressed. In West Germany, however, the opinion polls conducted by Demoskopie Allensbach did reveal a marked upswing in favor of Israel after the Yom Kippur War (October 1973). One of the major problems in the Middle East conflict is the question of what is to happen with the Palestinian people. Mediators in the conflict have not yet succeeded in working out a solution acceptable to both parties. The public, too, is finding it difficult to express a preference for any of the possible choices offered in opinion polls. The percentage of "don't knows" often exceeds 30,