
Showing papers in "Organizational Research Methods in 2006"


Journal ArticleDOI
TL;DR: The author argued that the popular position that common method variance automatically affects variables measured with the same method is a distortion and oversimplification of the true state of affairs, reaching the status of urban legend.
Abstract: It has become widely accepted that correlations between variables measured with the same method, usually self-report surveys, are inflated due to the action of common method variance (CMV), despite a number of sources that suggest the problem is overstated. The author argues that the popular position suggesting CMV automatically affects variables measured with the same method is a distortion and oversimplification of the true state of affairs, reaching the status of urban legend. Empirical evidence is discussed casting doubt that the method itself produces systematic variance in observations that inflates correlations to any significant degree. It is suggested that the term common method variance be abandoned in favor of a focus on measurement bias that is the product of the interplay of constructs and methods by which they are assessed. A complex approach to dealing with potential biases involves their identification and control to rule them out as explanations for observed relationships using a variety ...

3,264 citations


Journal ArticleDOI
TL;DR: In this article, the authors trace four widely cited and reported cutoff criteria to their (alleged) original sources to determine whether they really said what they are cited as having said about the cutoff criteria, and if not, what the original sources really said.
Abstract: Everyone can recite methodological “urban legends” that were taught in graduate school, learned over the years through experience publishing, or perhaps just heard through the grapevine. In this article, the authors trace four widely cited and reported cutoff criteria to their (alleged) original sources to determine whether they really said what they are cited as having said about the cutoff criteria, and if not, what the original sources really said. The authors uncover partial truths in tracing the history of each cutoff criterion and in the end endorse a set of 12 specific guidelines for effective academic referencing provided by Harzing that, if adopted, should help prevent the further perpetuation of methodological urban legends.

1,708 citations


Journal ArticleDOI
TL;DR: The structural equation modeling approach to testing for mediation is compared to the Baron and Kenny approach as discussed by the authors, and the approaches are essentially the same when the hypothesis being tested predicts partial mediation.
Abstract: The structural equation modeling approach to testing for mediation is compared to the Baron and Kenny approach. The approaches are essentially the same when the hypothesis being tested predicts par...

877 citations
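The Baron and Kenny procedure the abstract refers to is a sequence of regressions: outcome on predictor, mediator on predictor, then outcome on both. The sketch below implements those steps in plain Python; the data values and the small `ols` helper are illustrative, not from the article.

```python
# Minimal sketch of the Baron & Kenny regression steps for mediation,
# with OLS solved by hand via the normal equations (illustrative data only).

def ols(X, y):
    """Fit y = Xb by solving X'Xb = X'y with Gaussian elimination."""
    n, k = len(X), len(X[0])
    XtX = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    Xty = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    A = [XtX[r][:] + [Xty[r]] for r in range(k)]
    for col in range(k):                       # forward elimination with pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            A[r] = [A[r][c] - f * A[col][c] for c in range(k + 1)]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):             # back substitution
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical data in which X influences M, and M (plus X) influences Y
X = [1, 2, 3, 4, 5, 6, 7, 8]
M = [1.2, 2.1, 2.9, 4.2, 5.1, 5.8, 7.2, 8.1]
Y = [1.0, 2.3, 3.1, 4.0, 5.2, 6.1, 6.8, 8.2]
ones = [1.0] * len(X)

c = ols(list(zip(ones, X)), Y)[1]       # Step 1: total effect of X on Y
a = ols(list(zip(ones, X)), M)[1]       # Step 2: effect of X on M
_, c_prime, b = ols(list(zip(ones, X, M)), Y)   # Step 3: direct effect and M's effect

# Mediation is suggested when c' shrinks relative to c while a and b are nonzero.
print(round(c, 3), round(a, 3), round(c_prime, 3), round(b, 3))
```

The SEM approach estimates the same paths simultaneously as one model rather than as separate regressions, which is the comparison the article draws.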


Journal ArticleDOI
TL;DR: This paper analyzed response rate data from 231 studies that surveyed executives and appeared in top management journals from 1992 to 2003 and found mean response rates to be declining over the period, yielding an overall 32% rate.
Abstract: The authors developed hypotheses about the effectiveness of response rate techniques for organizational researchers surveying executives. Using meta-analytic procedures to test those hypotheses, the authors analyzed response rate data from 231 studies that surveyed executives and appeared in top management journals from 1992 to 2003. They found mean response rates to be declining over the period, yielding an overall 32% rate. Of the various methods suggested to increase response rates in other populations, none were found to be effective for executives. However, topical salience and sponsorship by an organization or person in the executive’s social networks did bring about response rate increases. The authors provide recommendations about what (not) to do when trying to collect original data from members of a firm’s upper echelons.

540 citations


Journal ArticleDOI
TL;DR: Although a kernel of truth or appropriateness underlies each one, numerous methodological and statistical criteria have been stretched beyond that kernel and have evolved into myths and urban legends, as mentioned in this paper.
Abstract: Although a kernel of truth or appropriateness underlies each one, numerous methodological and statistical criteria have been stretched beyond that kernel and have evolved into myths and urban legen...

203 citations


Journal ArticleDOI
TL;DR: The authors examined the construct validity of the Goldberg International Personality Item Pool (IPIP) measure by comparing it to a well-developed measure of the five-factor model, the NEO Five-Fact...
Abstract: This study examines the construct validity of the Goldberg International Personality Item Pool (IPIP) measure by comparing it to a well-developed measure of the five-factor model, the NEO Five-Fact...

168 citations


Journal ArticleDOI
TL;DR: In this article, the authors illustrate how models using parcels as indicator variables erroneously indicate that measurement invariance exists much more often than do models using items as indicators, and show that item-by-item tests are often more informative than tests of the entire parameter matrices.
Abstract: Combining items into parcels in confirmatory factor analysis (CFA) can improve model estimation and fit. Because adequate model fit is imperative for CFA tests of measurement invariance, parcels have frequently been used. However, the use of parcels as indicators in a CFA model can have serious detrimental effects on tests of measurement invariance. Using simulated data with a known lack of invariance, the authors illustrate how models using parcels as indicator variables erroneously indicate that measurement invariance exists much more often than do models using items as indicators. Moreover, item-by-item tests of measurement invariance were often more informative than were tests of the entire parameter matrices.

149 citations
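Parceling, as discussed in the abstract above, simply combines subsets of items into composite indicators, usually by averaging. A minimal sketch with a hypothetical six-item scale split into two three-item parcels (the data and parcel assignment are made up for illustration):

```python
# Forming item parcels for CFA: average subsets of items into composite indicators.

def make_parcels(responses, assignment):
    """responses: per-respondent item score lists; assignment: item-index lists, one per parcel."""
    return [
        [sum(person[i] for i in items) / len(items) for items in assignment]
        for person in responses
    ]

data = [
    [4, 5, 4, 3, 2, 3],
    [2, 2, 3, 4, 5, 4],
    [5, 4, 5, 5, 4, 4],
]
parcels = make_parcels(data, [[0, 1, 2], [3, 4, 5]])
print(parcels)
# Caveat from the article: averaging can mask item-level non-invariance,
# so invariance should also be tested item by item.
```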


Journal ArticleDOI
TL;DR: In this paper, confirmatory factor analysis was applied to the responses of 4,909 employees of a multinational organization with locations in 50 countries to examine the measurement equivalence of otherwise identical Web-based and paper-and-pencil versions of 20 items comprising the transformational leadership component of Bass and Avolio's Multifactor Leadership Questionnaire.
Abstract: Multigroup confirmatory factor analysis was applied to the responses of 4,909 employees of a multinational organization with locations in 50 countries to examine the measurement equivalence of otherwise identical Web-based and paper-and-pencil versions of 20 items comprising the transformational leadership component of Bass and Avolio's Multifactor Leadership Questionnaire. The results supported configural, metric, scalar, measurement error, and relational equivalence across administration modes, indicating that the psychometric properties of the 20 items were similar whether administered as a paper-and-pencil or Web-based measure. Although caution is always advised when considering multiple modes of administration, the results suggest that there are minimal measurement differences for well-developed, psychometrically sound instruments applied using either a paper-and-pencil or an online format. Thus, the results open a methodological door for survey researchers wishing to (a) assess transformational leadership with a Web-based platform and (b) compare or combine responses collected with paper-and-pencil and Web-based applications.

98 citations


Journal ArticleDOI
TL;DR: In this paper, the authors argue that given the embedded nature of organizations, a narrative methodology offers an alternative and complementary approach to developing our understanding in cross-cultural research, using examples of story-driven investigations into cultural differences.
Abstract: The prevailing literature on cross-cultural research in management studies has tended to conceptualize the meaning and the impact of culture on organizations by using distinct categories. This article argues that given the embedded nature of organizations, a narrative methodology offers an alternative and complementary approach to developing our understanding in cross-cultural research. Using examples of story-driven investigations into cultural differences, it explains the potential of this approach. It therefore seeks to offer a contribution to the variety of methods for organizational research on cross-cultural issues.

89 citations


Journal ArticleDOI
TL;DR: In this article, the authors analyze longitudinal data using hierarchical linear modeling to illustrate a random coefficients modeling alternative for examining firm performance, which allows researchers to explicitly model different conceptual approaches to testing change in performance over time and model predictor variables and cross-level interactions at multiple levels of analysis, and also allows for investigation of time series errors.
Abstract: The strategic management literature is unclear about how firm and industry effects influence performance, and the analysis of longitudinal data therein continues to be problematic. The authors analyze longitudinal data using hierarchical linear modeling to illustrate a random coefficients modeling alternative for examining firm performance. This approach allows researchers to explicitly model different conceptual approaches to testing change in performance over time and model predictor variables and cross-level interactions at multiple levels of analysis, and it also allows for investigation of time series errors. The results have implications for the strategic management field's goal of understanding multilevel determinants of firm performance over time.

86 citations


Journal ArticleDOI
TL;DR: In this article, the authors discuss the methodology that can be applied when researching the field of academic research management, in which the adoption of a knowledge-based view (KBV) is especially appropriate.
Abstract: This article addresses the methodology that can be applied when researching the field of academic research management, in which the adoption of a knowledge-based view (KBV) is especially appropriate. In particular, it discusses whether the adoption of a grounded theory approach (GTA) in this type of research is justifiable, given the contested character of the KBV constituents. GTA, so it is argued, is especially useful for investigating such a field because of three interrelated arguments: (a) that KBV and related debates provide insufficient theoretical guidance, (b) that the research managers’ experience and viewpoints should form the basis of theory development and relevancy, and (c) that the concepts of knowledge and management are obscure. Adopting a GTA does not completely remove the KBV perspective from the methodological discussions. Instead, it may be useful for modifying the GTA outcomes, thus engendering theoretical plausibility, applicability, and credibility.

Journal ArticleDOI
TL;DR: The purpose of this article is to examine the role of reliability in familiar statistics and to show how ignoring the consequences of (less than perfect) reliability in common statistical techniques can lead to false conclusions and erroneous interpretation.
Abstract: Measurement error, or reliability, affects many common applications in statistics, such as correlation, partial correlation, analysis of variance, regression, factor analysis, and others. Despite i...
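The attenuation this abstract describes has a classical closed form: Spearman's correction divides an observed correlation by the square root of the product of the two measures' reliabilities. A small sketch with illustrative numbers (the reliabilities and correlation below are made up):

```python
import math

# Spearman's correction for attenuation: measurement error deflates an observed
# correlation roughly as r_observed = r_true * sqrt(rel_x * rel_y).

def disattenuate(r_observed, rel_x, rel_y):
    """Estimate the correlation between true scores from an observed correlation."""
    return r_observed / math.sqrt(rel_x * rel_y)

r_obs = 0.30                 # observed correlation
rel_x, rel_y = 0.70, 0.80    # scale reliabilities (e.g., coefficient alpha)
print(round(disattenuate(r_obs, rel_x, rel_y), 3))
```

This is the simplest illustration of the article's point: the same observed statistic implies a noticeably different underlying relationship once reliability is taken into account.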

Journal ArticleDOI
TL;DR: The findings indicate that RMR may be suitable for only a small number of situations and that repeated-measures ANOVA, multivariate repeated-measures ANOVA, and multilevel modeling may be better suited to analyze multilevel data under most circumstances.
Abstract: The authors assess the suitability of repeated-measures regression (RMR) to analyze multilevel data in four popular multilevel research designs by comparing results of RMR analyses to results of analyses using techniques known to produce correct results in these designs. The findings indicate that RMR may be suitable for only a small number of situations and that repeated-measures ANOVA, multivariate repeated-measures ANOVA, and multilevel modeling may be better suited to analyze multilevel data under most circumstances. The authors conclude by offering recommendations regarding the appropriateness of the different techniques given the different research designs.

Journal ArticleDOI
TL;DR: The concept of fit in contingency theory has been modeled in ways that made it less conducive to detection as discussed by the authors, and each model of fit postulates a specific relationship between the structural, contingency, and outcome variables.
Abstract: The concept of fit in contingency theory has been modeled in ways that made it less conducive to detection. Each model of fit postulates a specific relationship between the structural, contingency, and outcome variables. Operationalization is then derived from this relationship. Conversely, a polynomial regression model would allow for a more generalized notion of fit while capturing the same forms of fit implied by existing models in a simpler, general, robust, and less constrained way.
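The polynomial model of fit described above regresses the outcome on the structural and contingency variables plus their quadratic and product terms; the specific fit models in the literature are special cases obtained by constraining these coefficients. A minimal sketch of the feature construction (the variable names `s` and `c` and the data rows are hypothetical):

```python
# Polynomial regression as a general model of fit: the outcome is regressed on
# structure (s), contingency (c), and the second-order terms s^2, s*c, c^2.

def polynomial_terms(s, c):
    """Quadratic feature set used in polynomial regression tests of fit."""
    return [s, c, s * s, s * c, c * c]

rows = [(1.0, 2.0), (2.0, 2.5), (3.0, 1.5)]
design = [[1.0] + polynomial_terms(s, c) for s, c in rows]   # leading 1.0 = intercept
print(design[0])
```

For example, a pure fit-as-moderation model corresponds to retaining only the `s`, `c`, and `s*c` columns of this design matrix.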

Journal ArticleDOI
TL;DR: The extent of potential equivalent models in strategy research and the possible effect of such models on strategic-management theory are highlighted and a statistical demonstration of the potential effect is provided.
Abstract: The use of structural-equation modeling (SEM) in strategic-management research has grown dramatically during recent years. Although this statistical technique offers researchers a valuable tool for testing hypothesized models, certain challenges accompany the use of SEM. The current article examines one of these challenges, equivalent models, and its prevalence in strategy research. An equivalent model is an alternative model that fits the data equally well, thus producing the same covariance or correlation matrix but often differing significantly in theoretical interpretation. We examined the application of SEM in 109 strategic-management studies and found that equivalent models are a cause for concern in most strategic-management studies. Using a published article, we also provide a statistical demonstration of the potential effect of equivalent models. This article highlights both the extent of potential equivalent models in strategy research and the possible effect of such models on strategic-manageme...

Journal ArticleDOI
TL;DR: In this paper, implicit measurement using latencies is proposed as a complement to conventional measurement to assess organizational constructs (e.g., job satisfaction), to assist in personnel decisions, and to evaluate organizational performance.
Abstract: Implicit measurement using latencies is proposed as a complement to conventional measurement to assess organizational constructs (e.g., job satisfaction), to assist in personnel decisions (e.g., se...

Journal ArticleDOI
TL;DR: The authors used Revelle's (1979) coefficient beta and item clustering to identify the homogeneity and internal dimensional structure of summated scale constructs more clearly than traditional principal components analyses do, making the approach a viable alternative methodology for scale construction in management, organizational, and cross-cultural contexts.
Abstract: Summated scales are widely used in management research to measure constructs such as job satisfaction and organizational commitment. This article suggests that Revelle’s (1979) coefficient beta, implemented in Revelle’s (1978) ICLUST item-clustering procedure, should be used in conjunction with Cronbach’s coefficient alpha measure of internal consistency as criteria for judging the dimensionality and internal homogeneity of summated scales. The approach is demonstrated using ICLUST reanalyses of sample responses to Warr’s (1990) affective well-being scale and O’Brien, Dowling, and Kabanoff’s (1978) job satisfaction scale. Coefficient beta and item clustering are shown to more clearly identify the homogeneity and internal dimensional structure of summated scale constructs than do traditional principal components analyses. Given these benefits, Revelle’s approach is a viable alternative methodology for scale construction in management, organizational, and cross-cultural contexts, especially when researchers...
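Coefficient alpha, the baseline criterion discussed in this abstract, has a simple closed form; coefficient beta (the worst split-half reliability) requires Revelle's ICLUST clustering procedure and is not computed here. A sketch of alpha on made-up item responses:

```python
from statistics import variance

# Cronbach's coefficient alpha for a summated scale:
# alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).

def cronbach_alpha(items):
    """items: list of per-item score lists, each covering the same respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var = sum(variance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

items = [
    [4, 3, 5, 4, 2],   # item 1 across five respondents (hypothetical)
    [4, 2, 5, 3, 2],   # item 2
    [5, 3, 4, 4, 1],   # item 3
]
print(round(cronbach_alpha(items), 3))
```

The article's point is that a high alpha alone does not establish unidimensionality, which is why beta and item clustering are proposed as complementary criteria.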

Journal ArticleDOI
TL;DR: In this article, the authors examined the efficacy of weights derived from either direct estimates (DE) of importance or regression-derived statistical weights from policy-capturing (PC) studies for predicting job offers.
Abstract: When studying applicants' job attribute preferences, researchers have used either direct estimates (DE) of importance or regression-derived statistical weights from policy-capturing (PC) studies. Although each methodology has been criticized, no research has examined the efficacy of weights derived from either method for predicting choices among job offers. In this study, participants were assigned to either a DE or PC condition, and weights for 14 attribute preferences were derived. Three weeks later, the participants made choices among hypothetical job offers. As predicted, PC weights outperformed DE weights when a noncompensatory strategy was assumed, and DE weights outperformed PC weights when a compensatory strategy was assumed. Implications for researchers' choice of methodology when studying attribute preferences are discussed.

Journal ArticleDOI
TL;DR: In this article, the authors investigate how statistics in financial reports and executive metatheatric presentations were used to persuade Wall Street experts to recommend Enron stock, when the writing was on the fourth wall.
Abstract: This article investigates the numeric construction, rhetorical moves, and metatheatre (defined as multiple stages for performing organization stories) pertaining to the widely publicized failure of Enron Corporation. The authors thus examine how statistics in financial reports and executive metatheatric presentations were used to persuade Wall Street experts to recommend Enron stock, when the writing was on the fourth wall. The authors' contribution to ethnostatistics is fourfold. First, they show that financial reports and discourse are a suitable and important topic for ethnostatistical analysis. Second, they extend ethnostatistics beyond how academic professionals tell stories with numbers, to how professional practitioners in organizations tell such stories. Third, they show the important role the rhetorical construction of financial performance measures played in the Enron failure. And fourth, they extend ethnostatistics by integrating ethnostatistics' third moment of rhetoric with theatrical theory ...

Journal ArticleDOI
TL;DR: In this article, three levels of ethnostatistics are identified and explained: constructing statistics, statistics at work, and the rhetoric of statistics, and contributions of these three levels are discussed.
Abstract: Ethnostatistics is the empirical study of how professional scholars construct and use statistics and numerals in scholarly research. This article provides an overview of the objectives, contents, and contributions of the current theme issue. The nature and relevance of ethnostatistics to organizational issues are discussed. Three levels of ethnostatistics are identified and explained— constructing statistics, statistics at work, and the rhetoric of statistics. The contributions the theme issue provides to these three levels of ethnostatistics are discussed. Foundational perspectives that have shaped ethnostatistics are explored to highlight important assumptions of the field and to distinguish ethnostatistics from related fields. The theme issue broadens the field of ethnostatistics to address statistical practices used by business professionals for organizational purposes. The article concludes by arguing that the field of ethnostatistics needs to develop rapidly at this point in time to address the emer...

Journal ArticleDOI
TL;DR: In this paper, the authors combine ethnostatistics with Weick's sense-making framework to explore how and why Canadian business schools and universities use comparative rankings and performance measures to signa...
Abstract: This article combines ethnostatistics with Weick's sensemaking framework to explore how and why Canadian business schools and universities use comparative rankings and performance measures to signa...

Journal ArticleDOI
TL;DR: In this paper, it is shown that it is difficult to detect interactions between continuous variables in field research using moderated multiple regression (MMR) due to multivariate normality, which occurs in field res...
Abstract: It is difficult to detect interactions between continuous variables in field research using moderated multiple regression (MMR). One reason is that multivariate normality, which occurs in field res...
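MMR tests an interaction by entering the product of the two predictors after their first-order terms; mean-centering the predictors before forming the product is a common way to reduce nonessential collinearity. A minimal sketch with hypothetical scores:

```python
from statistics import mean

# Moderated multiple regression: build the centered interaction term x*z
# that is entered after the first-order predictors.

def centered_product(x, z):
    """Return the centered predictors and their product (interaction) term."""
    mx, mz = mean(x), mean(z)
    xc = [xi - mx for xi in x]
    zc = [zi - mz for zi in z]
    return xc, zc, [a * b for a, b in zip(xc, zc)]

x = [1, 2, 3, 4, 5]
z = [2, 1, 4, 3, 5]
xc, zc, xz = centered_product(x, z)
print(xz)
```

The interaction is supported when adding `xz` to a regression already containing `xc` and `zc` yields a significant increment in explained variance; the article's concern is that field data often leave this test underpowered.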

Journal ArticleDOI
TL;DR: In this paper, the authors examined the judgment calls concerning the use of a psychophysics power function, the employment of a time-sensitive model, the imputation and exclusion of data, and the addition of new variables.
Abstract: Four published reanalyses of part of the celebrated medical innovation data focus on network effects on the diffusion of tetracycline. Each reanalysis rejects the findings of the previous analysis. This article shows that in the early 1950s, when the original data were collected, private practice physicians, swamped by demand, but facing threats to their autonomy from the federal government and from university medical centers, struggled to keep up with new therapies and relied on colleagues and friends within the profession for help and advice. This article examines the reanalyses' judgment calls concerning the use of a psychophysics power function, the employment of a time-sensitive model, the imputation and exclusion of data, and the addition of new variables. Given the radical undecidability of numerical evidence in the absence of context, the reanalysis of stand-alone data is likely to produce a continuing series of conflicting results.

Journal ArticleDOI
TL;DR: In this paper, the authors explain how compensation strategies have become fetishes and then use semiotics and ethnostatistics to explore the fetish nature of compensation contracts that award annual s...
Abstract: Research into compensation management continues to draw attention from scholars in many organizational disciplines ranging from accounting and finance to human resources and strategic management. Scholars search for empirical links between various compensation strategies and improved performance. A prime example is the institutionalization of executive compensation contracts that theoretically, at least, should reward executives on the basis of their firms' financial performance. Yet scholars are finding inconsistent evidence of any links. The authors suggest that an ethnostatistical analysis of these studies will yield one consistent finding: obsession with the value of rational data as the “truth” about organizational practices. This reliance on quantitative measures has become an organizational fetish. In this article, the authors explain how compensation strategies have become fetishes and then use semiotics and ethnostatistics to explore the fetish nature of compensation contracts that award annual s...

Journal ArticleDOI
TL;DR: Mplus is actually a very comprehensive set of statistical analysis tools for dealing with a variety of simple and complex latent variable analytical requirements, and its ability to analyze conceptual models that contain both continuous and categorical latent variables is of particular interest to the organizational sciences.
Abstract: As of this writing, I am using Mplus 3.13 (Muthén & Muthén, 1998-2004), which incorporates some added features and minor changes to the original release of 3.0. A “version history” may be found at http://www.statmodel.com/verhistory.shtml. Furthermore, Mplus Version 4.0 has been announced and is slated for release in late 2005 or early 2006. Limited information regarding 4.0 may be found at http://www.statmodel.com/version4.shtml. For those unfamiliar with Mplus, it is in simple terms a commercial structural equation modeling (SEM) software package in the same genre as other packages, such as AMOS (Arbuckle, 2003; Arbuckle & Wothke, 1999; SPSS, Inc., 2005), EQS (Bentler, 1995; Multivariate Software, Inc., 2004), and LISREL (Jöreskog & Sörbom, 1996; Jöreskog, Sörbom, Du Toit, & Du Toit, 2001). However, Mplus is actually a very comprehensive set of statistical analysis tools for dealing with a variety of simple and complex latent variable analytical requirements. Although I am going to limit my specific review to the tools within Mplus that I have actually used, the package includes the following general features. First, Mplus accommodates both continuous and categorical latent variables. With respect to continuous latent variables, the user will find the typical features expected in all SEM programs such as exploratory factor analysis, confirmatory factor analysis, simple structural equation modeling, and even simpler path and regression analysis. Among the more advanced features are complex SEM (i.e., the use of interaction terms or other nonlinear constraints), latent growth modeling (with or without nonlinear terms), discrete time survival analysis (which is actually a tool with both continuous and categorical latent variables), latent means analysis, and many other types of complex analyses. 
In situations involving latent categorical variables, again, the user will find tools to undertake both relatively simple analyses (e.g., logistic regression, loglinear analyses, etc.) and complex ones (e.g., latent class modeling, latent class growth analysis, finite mixture modeling, etc.). Within all those tools, Mplus provides a set of features to adjust to a variety of different data requirements, such as multiple groups, missing data, different estimation procedures (e.g., maximum likelihood, weighted least squares, etc.), bootstrapping, and sample weighting. Of particular interest to the organizational sciences is the ability of Mplus to analyze conceptual models that contain both continuous and categorical latent variables (e.g., predicting turnover from employee indices of workplace morale, using SEC classification codes as a predictor of corporate philanthropic giving, predicting time-varying industrial accident patterns from time-varying levels of job stress, etc.). Included among the latter tools are growth mixture modeling, latent class analysis with random effects, and SEM mixture modeling. Also, given the growing interest over the past 10 years within the organizational sciences in conceptual frameworks using a multilevel perspective, Mplus accommodates both the modeling of intercepts and slopes as latent exogenous or endogenous variables at the higher unit level, and one can test a different structural model at one level versus the other (e.g., Richardson & Vandenberg, 2005). Furthermore, the


Journal ArticleDOI
TL;DR: This is a comprehensive study of experimental ANOVA designs covering simple and complex factorial designs and repeated measures and introduces the logic of hypothesis testing, the F ratio, and the 1 degree-of-freedom contrast.
Abstract: Previous editions of this book, written by the first author, Geoffrey Keppel, have been used over the years to train social scientists in statistics. The book is a classic, and the new edition written with Thomas Wickens also promises to be well received by students and researchers. Having been totally reorganized and rewritten in parts, the new edition has been brought up to date covering recent statistical procedures while keeping its fundamental focus on research design. Overall, this is an excellent book that should be very useful. I recap the book below. The book is a comprehensive study of experimental ANOVA designs covering simple and complex factorial designs and repeated measures. One of the many strengths of the book is that it develops themes early in the discussion of factorial designs that are followed through the complex designs, including power, sample size, effect size, contrasts, and violation of assumptions. Surprisingly, the book does not have a chapter on regression. In my opinion, this is a weakness, especially when the authors address the ANCOVA model later in chapter 15. The book begins with the discussion of control and isolation in research designs and moves quickly to the discussion of the one-way ANOVA model. Here, the authors introduce the logic of hypothesis testing, the F ratio, and the 1 degree-of-freedom contrast. A contrast, they argue, can be used to test a theoretical prediction or unexpected pattern of means. The discussion of contrasts is organized around three types of questions:
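The 1-degree-of-freedom contrast introduced in the book's early chapters is a weighted combination of cell means whose weights sum to zero; the sketch below uses hypothetical group means to test whether group 1 differs from the average of groups 2 and 3:

```python
# A 1-df contrast compares group means with zero-sum weights, e.g.,
# psi = (1)*mean1 + (-1/2)*mean2 + (-1/2)*mean3.

def contrast_value(means, weights):
    """Weighted combination of cell means; weights must sum to zero."""
    assert abs(sum(weights)) < 1e-12, "contrast weights must sum to zero"
    return sum(w * m for w, m in zip(weights, means))

group_means = [5.0, 3.0, 4.0]       # hypothetical cell means
psi = contrast_value(group_means, [1.0, -0.5, -0.5])
print(psi)
```

A nonzero psi, tested against its standard error, answers a focused theoretical question that the omnibus F ratio cannot.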



Journal ArticleDOI
TL;DR: HLM 6 software for multilevel modeling has evolved considerably over the past decade or more; while setting up an analysis using the entry screens, it is possible to go back to the main menu and view/edit the controlling lines of text, which can facilitate troubleshooting.
Abstract: The columnists of my hometown newspaper are pictured every week next to their columns. One columnist, the reviewer of restaurants, is pictured wearing a hat with its large brim turned down so that you cannot see her face. That is how I felt when I made some inquiries with the distributor of the hierarchical linear modeling (HLM) software for multilevel modeling that is the subject of this review (HLM 6 software), knowing that I would be writing this review afterward. One basis for evaluating commercial software is service, so I took note of my encounters with the distributor, Scientific Software International (SSI), when I purchased an upgrade from the previous HLM version. SSI is not Amazon; after Googling “HLM software” to make the purchase, I did not see a way to use a credit card and a secure Web form. But faxing an order form obtained from the SSI Web site seemed OK. When my software package did not turn up in my mailbox, I phoned and reached a human immediately—no interminable hold as you get with local utility companies. The service representative helped me track the order with UPS and such, and his hypothesis was right: The package was kicking around the university. A few weeks later, I thought I had found a bug, so I tried e-mailing the support address with a description. Not only did I receive a response within 24 hours, but the troubleshooter complimented me on the inclusion of screenshots in my description. In reviewer parlance, score four out of four stars for service. The HLM software has evolved a lot over the past decade or more. It started as a DOS program, in which the user input for commands and specifications consisted of lines in a text file, each line stating a parameter followed by its value. As the Windows interface became prominent, SSI created a user interface with now-familiar pull-down menus and information entry screens or windows. 
Based on these entries, HLM apparently generates lines of text (in the form of parameter:value) that are still the basis of program control (telling, for example, where to find the data and the nature of the data). This approach has pros and cons. While setting up an analysis using the entry screens, it is possible to go back to the main menu and view/edit the controlling lines of text. Sometimes, variations on analyses can be set up more quickly by editing this text, and troubleshooting can be facilitated. One downside is that you must close that text file before doing anything else. This can be confusing because in most other Windows applications, you can click around from window to window, essentially multitasking. Not in HLM; if you do not close that window, when you go back to the main HLM screen, the program just hangs and you think you have encountered a bug (at least I did). This quirk is the kind of thing that is soluble in principle with redesign and reimplementation, sweeping away the DOS legacy (e.g., implementing a text editor that is integral to