
Showing papers in "Sociological Methods & Research in 2008"


Journal ArticleDOI
TL;DR: There is little empirical support for the use of .05 or any other value as a universal cutoff to determine adequate model fit, regardless of whether the point estimate is used alone or jointly with the confidence interval.
Abstract: This article is an empirical evaluation of the choice of fixed cutoff points in assessing the root mean square error of approximation (RMSEA) test statistic as a measure of goodness-of-fit in Structural Equation Models. Using simulation data, the authors first examine whether there is any empirical evidence for the use of a universal cutoff, and then compare the practice of using the point estimate of the RMSEA alone versus that of using it jointly with its related confidence interval. The results of the study demonstrate that there is little empirical support for the use of .05 or any other value as a universal cutoff to determine adequate model fit, regardless of whether the point estimate is used alone or jointly with the confidence interval. The authors' analyses suggest that to achieve a certain level of power or Type I error rate, the choice of cutoff values depends on model specifications, degrees of freedom, and sample size.
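For readers unfamiliar with the statistic under evaluation, the RMSEA point estimate is conventionally computed from the model chi-square, its degrees of freedom, and the sample size. The sketch below is not taken from the article; it uses the standard formula (some software divides by N rather than N - 1) and illustrates why a fixed cutoff interacts with sample size:

```python
import math

def rmsea(chi2: float, df: int, n: int) -> float:
    """Point estimate of the RMSEA from a model chi-square statistic.

    Uses the common formula sqrt(max(chi2 - df, 0) / (df * (n - 1)));
    some software divides by n rather than n - 1.
    """
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# A model whose chi-square equals its degrees of freedom fits "perfectly"
# by this measure, regardless of sample size.
print(rmsea(24.0, 24, 400))   # 0.0

# The same excess chi-square yields a smaller RMSEA at larger n, which is
# one reason a single cutoff such as .05 behaves differently across
# sample sizes.
print(rmsea(48.0, 24, 200))
print(rmsea(48.0, 24, 2000))
```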

1,159 citations


Journal ArticleDOI
TL;DR: Yang and Land as discussed by the authors compared the fixed-versus-random-effects model specifications for APC analysis and found that the random-effects hierarchical APC model is the most appropriate specification for each of the two data conditions studied.
Abstract: Yang and Land (2006) and Yang (forthcoming-b) developed a mixed (fixed and random) effects model for the age–period–cohort (APC) analysis of micro data sets in the form of a series of repeated cross-section sample surveys that are increasingly available to demographers. The authors compare the fixed- versus random-effects model specifications for APC analysis. They use data on verbal test scores from 15 cross sections of the General Social Survey (GSS), 1974 to 2000, for substantive illustrations. Strengths and weaknesses are identified for both the random- and fixed-effects formulations. However, under each of the two data conditions studied, the random-effects hierarchical APC model is the most appropriate specification. While additional analyses and comparisons of random- and fixed-effects APC models using other data sets are necessary before generalizations can be drawn, this finding is consistent with results from other methodological studies with unbalanced data designs.

342 citations


Journal ArticleDOI
TL;DR: Web-based RDS (WebRDS) is found to be highly efficient and effective and methods for testing the validity of assumptions required by RDS estimation are presented.
Abstract: This study tests the feasibility, effectiveness, and efficiency of respondent-driven sampling (RDS) as a Web-based sampling method. Web-based RDS (WebRDS) is found to be highly efficient and effective. The online nature of WebRDS allows referral chains to progress very quickly, such that studies with large samples can be expected to proceed up to 20 times faster than with traditional sampling methods. Additionally, the unhidden nature of the study population allows comparison of RDS estimators to institutional data. Results indicate that RDS estimates are reasonable but not precise. This is likely due to bias associated with the random recruitment assumption and small sample size of the study. Finally, this article presents methods for testing the validity of assumptions required by RDS estimation.

209 citations


Journal ArticleDOI
TL;DR: The effect of the .05 significance level on the pattern of published findings is examined using a "caliper" test, and the hypothesis of no publication bias can be rejected at approximately the 1 in 10 million level.
Abstract: Despite great attention to the quality of research methods in individual studies, if publication decisions of journals are a function of the statistical significance of research findings, the published literature as a whole may not produce accurate measures of true effects. This article examines the two most prominent sociology journals (the American Sociological Review and the American Journal of Sociology) and another important though less influential journal (The Sociological Quarterly) for evidence of publication bias. The effect of the .05 significance level on the pattern of published findings is examined using a "caliper" test, and the hypothesis of no publication bias can be rejected at approximately the 1 in 10 million level. Findings suggest that some of the results reported in leading sociology journals may be misleading and inaccurate due to publication bias. Some reasons for publication bias and proposed reforms to reduce its impact on research are also discussed.
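The caliper test can be illustrated with a minimal sketch: count the published test statistics falling just over versus just under the critical value, then compare the counts to the 50/50 split expected in the absence of publication bias. The counts below are hypothetical, and the exact binomial test is one reasonable choice of reference distribution:

```python
from math import comb

def caliper_test(over: int, under: int) -> float:
    """Two-sided exact binomial test of the caliper counts.

    'over'/'under' are the numbers of reported test statistics falling
    just above / just below the critical value (e.g. z = 1.96).  Under
    the null of no publication bias each side is equally likely (p = .5).
    """
    n = over + under
    m = max(over, under)
    tail = sum(comb(n, k) for k in range(m, n + 1)) / 2 ** n
    return min(1.0, 2 * tail)

# Hypothetical counts in a narrow caliper around z = 1.96: far more
# coefficients land just above the threshold than just below it.
print(caliper_test(over=90, under=30))   # very small p-value
print(caliper_test(over=12, under=10))   # consistent with no bias
```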

172 citations


Journal ArticleDOI
TL;DR: This comment on Caren and Panofsky (2005) presents methods for assessing temporality that are more amenable to truth table analysis and the use of existing software, fsQCA.
Abstract: Caren and Panofsky (2005) seek to advance qualitative comparative analysis (QCA) by demonstrating that it can be used to study causal conditions that occur in sequences and introduce a technique they call TQCA (temporal QCA). In their formulation, the causal conjuncture is a sequence of conditions or events. The authors applaud their effort and agree that it is important to address this aspect of causation. This comment clarifies and corrects aspects of their analysis and presents methods for assessing temporality that are more amenable to truth table analysis and the use of existing software, fsQCA. The methods presented utilize codings that indicate event order in addition to codings that indicate whether specific events occurred. They also demonstrate how to use "don't care" codings to bypass consideration of event sequences when they are not relevant (e.g., when only a single event occurs).

117 citations


Journal ArticleDOI
TL;DR: A new approach to the identification of age-period-cohort (APC) models is offered in which the goal is to specify the mechanisms through which the age, period, and cohort variables affect the outcome and in doing so identify the model; the approach is illustrated with a model of political alienation.
Abstract: This article offers a new approach to the identification of age‐period‐cohort (APC) models that builds on Pearl’s work on nonparametric causal models, in particular his front-door criterion for the identification of causal effects. The goal is to specify the mechanisms through which the age, period, and cohort variables affect the outcome and in doing so identify the model. This approach allows for a broader set of identification strategies than has typically been considered in the literature and, in many circumstances, goodness of fit tests are possible. The authors illustrate the utility of the approach by developing an APC model for political alienation.
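Pearl's front-door criterion, on which the article builds, identifies the effect of X on Y through a mediator M that intercepts all directed paths from X to Y. Its adjustment formula, in standard form (reproduced here for reference, not quoted from the article), is:

```latex
% Front-door adjustment: mediator M intercepts every directed path
% from X to Y; X blocks all back-door paths from M to Y; there is no
% unblocked back-door path from X to M.
P\bigl(y \mid \mathrm{do}(x)\bigr)
  \;=\; \sum_{m} P(m \mid x) \sum_{x'} P\bigl(y \mid m, x'\bigr)\, P(x')
```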

115 citations


Journal ArticleDOI
TL;DR: In this article, a structural equation modeling approach is applied to measure the stability of one acquiescence factor behind two concepts among the same respondents for a 4-year period, using representative population surveys in 1995 and 1999 from the Belgian Election Study in which balanced sets of items are used for measuring two interrelated constructs.
Abstract: This article addresses the question of to what extent one type of response style, called acquiescence (or agreeing response bias), is stable over time. A structural equation modeling approach is applied to measure the stability of one acquiescence factor behind two concepts among the same respondents for a 4-year period. The data used are representative population surveys in 1995 and 1999 from the Belgian Election Study in which balanced sets of items are used for measuring two interrelated constructs: perceived ethnic threat and distrust in politics. This study provides empirical support that acquiescence is stable and consistent for a 4-year period.

96 citations


Journal ArticleDOI
TL;DR: Four new articles on age–period–cohort modeling call attention to the multilevel nature of the problem and draw on advances in methods including nonparametric smoothing, fixed and random effects, and identification in structural or causal models.
Abstract: Social indicators and demographic rates are often arrayed over time by age. The patterns of rates by age at one point in time may not reflect the effects associated with aging, which are more properly studied in cohorts. Cohort succession, aging, and period-specific historical events provide accounts of social and demographic change. Because cohort membership can be defined by age at a particular period, the statistical partitioning of age from period and cohort effects focuses attention on identifying restrictions. When applying statistical models to social data, identification issues are ubiquitous, so some of the debates that vexed the formative literature on age–period–cohort models can now be understood in a larger context. Four new articles on age–period–cohort modeling call attention to the multilevel nature of the problem and draw on advances in methods including nonparametric smoothing, fixed and random effects, and identification in structural or causal models.
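The identification problem at issue here comes from an exact linear dependency: cohort = period - age. A minimal numpy sketch (illustrative, not from any of the articles) shows that a design matrix containing all three linear terms is rank-deficient:

```python
import numpy as np

# Every (age, period) pair fixes the cohort: cohort = period - age.
ages = np.arange(20, 60, 10)          # four age groups
periods = np.arange(1980, 2010, 10)   # three periods
rows = [(a, p, p - a) for a in ages for p in periods]

# Design matrix: intercept plus linear age, period, and cohort terms.
X = np.column_stack([np.ones(len(rows)), np.array(rows)])
print(X.shape)                      # (12, 4)
print(np.linalg.matrix_rank(X))     # 3, not 4: the model is not identified
```

The cohort column is exactly the period column minus the age column, so no amount of data resolves the linear effects without an identifying restriction.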

63 citations


Journal ArticleDOI
TL;DR: A smoothing cohort model is proposed that allows flexible structure of the effects for age, period, and cohort and avoids the identifiability problem in age–period–cohort (APC) analysis.
Abstract: This article considers the effects of age, period, and cohort in social studies and chronic disease epidemiology through age–period–cohort (APC) analysis. These factors are linearly dependent; thus, the multiple classification model, a regression model that takes these factors as covariates in APC analysis, suffers from an identifiability problem with multiple estimators. A data set of homicide arrest rates is used to illustrate the problem. A smoothing cohort model is proposed that allows flexible structure of the effects for age, period, and cohort and avoids the identifiability problem. Results are provided for the consistency of estimation of model intercept and age effects as the number of periods goes to infinity under a mild bounded cohort condition. This also leads to consistent estimation for period and cohort effects. Analyses of homicide arrest rate and lung cancer mortality rate data demonstrate that the smoothing cohort model yields unique parameter estimation with sensible trend interpretation.

61 citations


Journal ArticleDOI
TL;DR: In this paper, the authors treated cohort effects as random effects and age and period effects as fixed effects in a mixed model and used this approach to assess the amount of variance in the dependent variable associated with cohorts.
Abstract: For more than 30 years, sociologists and demographers have struggled to come to terms with the age, period, cohort conundrum: Given the linear dependency between age groups, periods, and cohorts, how can these effects be estimated separately? This article offers a partial solution to this problem. The authors treat cohort effects as random effects and age and period effects as fixed effects in a mixed model. Using this approach, they can (1) assess the amount of variance in the dependent variable that is associated with cohorts while controlling for the age and period dummy variables, (2) model the dependencies that result from the age-period-specific rates for a single cohort being observed multiple times, and (3) assess how much of the variance in observations that is associated with cohorts is explained by differences in the characteristics of cohorts. The authors use empirical data to see how their results compare with other analyses in the literature.

59 citations


Journal ArticleDOI
TL;DR: This article analyzes the mathematical connections between two kinds of inequality: inequality between persons (e.g., income inequality) and inequality between subgroups (e.g., racial inequality).
Abstract: This article analyzes the mathematical connections between two kinds of inequality: inequality between persons (e.g., income inequality) and inequality between subgroups (e.g., racial inequality). ...

Journal ArticleDOI
TL;DR: Results are reported from a methodological experiment in which different types of actors who are party to the data production and research process were asked to solve artificially generated inconsistencies in real survey data.
Abstract: Data editing, a crucial task in the data production process, has received little scientific attention. Consequently, there is no consensus among social scientists about how data should be edited or by whom. While some argue that it should be left to data managers and data users, others claim that it is primarily a task for fieldworkers. The authors review these divergent approaches to editing and evaluate the underlying theoretical arguments. Results are reported from a methodological experiment in which different types of actors who are party to the data production and research process were asked to solve artificially generated inconsistencies in real survey data. Results are informative on two counts. First, the least accurate editors were the researchers with no field experience in the survey sites. Second, when provided with only partial information on which to make editing decisions, fieldworkers edited more accurately than both data managers and data users.


Journal ArticleDOI
TL;DR: The multi-item RRT as discussed by the authors extends the RRT procedure to scales composed of multiple items, which reduces the added variability contributed by the procedure and affords more accurate estimates of parameters than does a single item RRT.
Abstract: The randomized response technique (RRT) attempts to reduce social desirability bias in self-reports by creating a probabilistic relationship between the response given and the question posed. The multi-item RRT extends the RRT procedure to scales composed of multiple items. The multi-item RRT reduces the added variability contributed by the procedure and affords more accurate estimates of parameters than does a single-item RRT. Formulas are presented for correcting the mean, standard deviation, and correlation coefficient for the procedure. Data are presented from a study (Jarman 1996) of male date rape to illustrate the application of the multi-item RRT. Those data show higher reports of rape-supportive attitudes, beliefs, and sexual aggression under RRT conditions.
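The article's correction formulas are not reproduced here, but the core unmasking step behind any RRT estimator can be sketched under Warner's (1965) original design, in which a randomizing device presents the sensitive statement with known probability p (the function name and numbers below are illustrative):

```python
def warner_estimate(lam: float, p: float) -> float:
    """Unmasked prevalence under Warner's randomized response design.

    With probability p the respondent answers the sensitive statement,
    otherwise its negation, so the observed 'yes' rate is
    lam = p*pi + (1 - p)*(1 - pi).  Solving for pi gives the estimator.
    """
    if p == 0.5:
        raise ValueError("p = 0.5 carries no information about pi")
    return (lam + p - 1) / (2 * p - 1)

# If the true prevalence is 0.20 and the device selects the sensitive
# statement 70% of the time, the expected observed 'yes' rate is:
p, pi = 0.7, 0.20
lam = p * pi + (1 - p) * (1 - pi)
print(warner_estimate(lam, p))   # recovers pi (up to float rounding)
```

The randomizing step inflates sampling variance, which is the added variability that averaging over multiple items, as in the multi-item RRT, helps reduce.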

Journal ArticleDOI
TL;DR: This article provides a framework for the successful conduct of gene-environment studies and explains reasons why epidemiological studies that incorporate gene-environment interaction have been unable to demonstrate statistically significant interactions and why conflicting results are reported.
Abstract: Given recent genetic advances, it is not surprising that genetics information is increasingly being used to improve health care. Thousands of conditions caused by single genes (Mendelian diseases) have been identified over the last century. However, Mendelian diseases are rare; thus, few individuals directly benefit from gene identification. In contrast, common complex diseases, such as obesity, breast cancer, and depression, directly affect many more individuals. Common complex diseases are caused by multiple genes, environmental factors, and/or interaction of genetic and environmental factors. This article provides a framework for the successful conduct of gene-environment studies. To accomplish this goal, the basic study designs and procedures of implementation for gene-environment interaction are described. Next, examples of gene-environment interaction in obesity epidemiology are reviewed. Last, the authors review reasons why epidemiological studies that incorporate gene-environment interaction have been unable to demonstrate statistically significant interactions and why conflicting results are reported.

Journal ArticleDOI
TL;DR: ELSI perspectives on the challenges that confront investigators who undertake gene-environment research are introduced, and nine recommendations based on this literature are offered.
Abstract: Sociologists are increasingly involved with the design and execution of studies that examine the interplay between genes and environment, requiring expertise in measurement of both genetic and nongenetic ...

Journal ArticleDOI
TL;DR: A non-technical and intuitive introduction to the basic concepts and techniques that are used to establish statistical connections between genetic variants and human phenotypes is given in this paper, with a focus on basic linkage analysis and association studies, the essential ideas behind the methods, and a limited amount of molecular genetics needed for understanding the ideas.
Abstract: The objective of this article is to provide a nontechnical and intuitive introduction to the basic concepts and techniques that are used to establish statistical connections between genetic variants and human phenotypes. Depressive symptoms and delinquent behaviors that are of interest to sociologists are a subset of such human phenotypes. This article focuses on basic linkage analysis and association studies, the essential ideas behind the methods, and a very limited amount of molecular genetics needed for understanding the ideas. The article is written with those social scientists in mind who are interested in the topic but not yet ready to engage the vast and rapidly developing primary literature (journal articles).

Journal ArticleDOI
TL;DR: In this article, the authors discuss the application of variance components to data on quantitative traits for the estimation of polygenic and environmental variances as well as, for the detection of quantitative trait loci using methods of linkage and association.
Abstract: Variance components methods have been used in various aspects of genetic analysis for many decades. This article discusses their application to data on quantitative traits for the estimation of polygenic and environmental variances, as well as for the detection of quantitative trait loci using methods of linkage and association. Multivariate approaches are discussed, along with regression-based related methods. Additionally, examples of the application of these methods from the literature are presented.
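As one concrete instance of the variance components logic, the classic Falconer decomposition estimates additive genetic, shared environmental, and unique environmental variance from MZ and DZ twin correlations. This is a textbook method, not necessarily the one used in the article, and the correlations below are hypothetical:

```python
def ace_decomposition(r_mz: float, r_dz: float) -> dict:
    """Falconer-style ACE variance components from twin correlations.

    Assumes additive genetics (A), shared environment (C), and unique
    environment (E), with  r_MZ = a2 + c2  and  r_DZ = a2/2 + c2.
    """
    a2 = 2 * (r_mz - r_dz)   # narrow-sense heritability
    c2 = 2 * r_dz - r_mz     # shared environment
    e2 = 1 - r_mz            # unique environment (plus measurement error)
    return {"A": a2, "C": c2, "E": e2}

# Hypothetical twin correlations for a quantitative trait.
print(ace_decomposition(r_mz=0.6, r_dz=0.4))
```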

Journal ArticleDOI
Richard Breen
TL;DR: In this paper, the benefits of defining the interaction parameters of a model to have a one-to-one relationship with the odds ratios of interest are discussed, overcoming the interpretative problems that arise when the usual normalization makes this relationship indirect.
Abstract: In the analysis of cross-classified data, the quantities of interest are frequently odds ratios. Although odds ratios are functions of the interaction parameters in association models, the usual way of normalizing and identifying these parameters means that their relationship with the odds ratios of interest is indirect. This can lead to interpretative confusions. The author points to the benefits of defining the interaction parameters of a model to have a one-to-one relationship with the odds ratios of interest, thus overcoming problems of interpretation. Three examples are presented to illustrate the argument.
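The one-to-one relationship Breen advocates is easiest to see in a 2x2 table: under dummy (corner-point) coding of the saturated log-linear model, the single interaction parameter is exactly the log of the odds ratio. A small sketch with hypothetical counts:

```python
import math

# Hypothetical 2x2 cross-classification (cell counts).
table = [[40, 10],
         [20, 30]]

# The odds ratio of interest.
odds_ratio = (table[0][0] * table[1][1]) / (table[0][1] * table[1][0])

# Saturated log-linear model with dummy (corner-point) coding:
# log mu_ij = u + u1*i + u2*j + u12*i*j.  The interaction parameter u12
# is the log odds ratio, a one-to-one mapping of the kind the article
# recommends making explicit.
u12 = (math.log(table[0][0]) + math.log(table[1][1])
       - math.log(table[0][1]) - math.log(table[1][0]))

print(odds_ratio)      # 6.0
print(math.exp(u12))   # equals the odds ratio (up to float rounding)
```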


Journal ArticleDOI
TL;DR: A new mixture model is developed that allows researchers to address the problem of misclassification of group membership using the data they have and produces reasonably robust estimates and improves the fit compared to the no-adjustment model.
Abstract: Social scientists often rely on survey data to examine group differences. A problem with survey data is the potential misclassification of group membership due to poorly trained interviewers, inconsistent responses, or errors in marking questions. In data containing unequal subsample sizes, the consequences of misclassification can be considerable, especially for groups with small sample sizes. In this study, the authors develop a new mixture model that allows researchers to address the problem using the data they have. By supplying additional information from the data, this two-stage model is estimated using a Bayesian method. The method is illustrated with the Early Childhood Longitudinal Study data. As anticipated, the more information supplied to adjust for group membership, the better the model performs. Even when small amounts of information are supplied, the model produces reasonably robust estimates and improves the fit compared to the no-adjustment model. Sensitivity analyses are conducted on cho ...

Journal ArticleDOI
TL;DR: In this paper, a three-dimensional latent variable (trait) model for analyzing attitudinal scaled data is proposed, which is successfully applied to two examples: one with 12 binary items and the other with 8 items of five categories each.
Abstract: The author proposes a three-dimensional latent variable (trait) model for analyzing attitudinal scaled data. It is successfully applied to two examples: one with 12 binary items and the other with 8 items of five categories each. The models are exploratory instead of confirmatory, and subscales from which data were selected are clearly identified. For binary items, it gives results similar to those of factor analysis. For polytomous items, it can estimate category scores simultaneously with the internal structure. From that, an additional dimension capturing the tendency to take moderate views is extracted. This is because conventional analyses usually fix category scores as numbers, whereas they are free to vary in latent variable models. Computational problems are discussed, and it is expected that more than three dimensions are possible given today's computing power and tailor-made methods such as adaptive quadrature points for numerical integrations.

Journal ArticleDOI
Kenneth C. Land
TL;DR: Exploratory Social Network Analysis With Pajek as discussed by the authors is useful as a textbook on network analysis and as an introduction to Pajek for the experienced networker, though its menus are difficult to learn and not completely coherent.
Abstract: Both vertices and edges can be labeled, colored, and sized according to their attributes. Rudimentary statistical analyses, including descriptive statistics and frequency distributions, are available within the program. However, the menus and their associated tree structures are difficult to learn and not completely coherent. For example, a random network can be created under the "Network" or the "Partition" headings. "Clusters" and "Partitions," which both appear in the menus, are not clearly different. Partitions can be created under the "Net" heading or under "Partition." The lack of logic is irrelevant if one performs the same operation repeatedly (you learn what to do), but it is confusing for the student. A second problem is that it is very difficult to edit data within the program. There ought to be a simple internal spreadsheet, like the one available within UCINET. Third, matrix operations (e.g., multiplication, addition, transpose, and inverse) are not available. This limits the usefulness of the program both for research, when one may want to create one's own measures, and for teaching the relation between graphs and matrices. In summary, Exploratory Social Network Analysis With Pajek is useful as a textbook on network analysis and as an introduction to Pajek for the experienced networker.