Book

Reliability and Validity Assessment

TL;DR: The book shows how reliability is assessed by the retest method, alternative-forms procedure, split-halves approach, and internal consistency method.
Abstract: Explains how social scientists can evaluate the reliability and validity of empirical measurements, discussing the three basic types of validity: criterion-related, content, and construct. In addition, the book shows how reliability is assessed by the retest method, alternative-forms procedure, split-halves approach, and internal consistency method.
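
The retest and alternative-forms procedures require repeated administrations, but the split-halves approach and the internal consistency method can both be computed from a single administration. A minimal sketch in Python of those two estimators, using hypothetical Likert-type item scores and the standard textbook formulas (NumPy only; the data and function names are illustrative, not taken from the book):

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal consistency: alpha = k/(k-1) * (1 - sum(item variances) / variance(total))."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def split_half(items: np.ndarray) -> float:
    """Split-halves: correlate odd- and even-item half scores, then apply
    the Spearman-Brown correction for full test length."""
    odd = items[:, 0::2].sum(axis=1)
    even = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd, even)[0, 1]
    return 2 * r / (1 + r)

# 6 respondents x 4 items (hypothetical scores)
scores = np.array([[4, 5, 4, 5],
                   [2, 1, 2, 2],
                   [3, 3, 4, 3],
                   [5, 5, 5, 4],
                   [1, 2, 1, 2],
                   [4, 4, 3, 4]])
print(f"Cronbach's alpha:            {cronbach_alpha(scores):.3f}")
print(f"Split-half (Spearman-Brown): {split_half(scores):.3f}")
```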
Citations
Journal ArticleDOI
TL;DR: In this article, the author analyzes the internal stickiness of knowledge transfer and tests the resulting model using canonical correlation analysis of a data set consisting of 271 observations of 122 best-practice transfers in eight companies.
Abstract: The ability to transfer best practices internally is critical to a firm's ability to build competitive advantage through the appropriation of rents from scarce internal knowledge. Just as a firm's distinctive competencies might be difficult for other firms to imitate, its best practices could be difficult to imitate internally. Yet, little systematic attention has been paid to such internal stickiness. The author analyzes internal stickiness of knowledge transfer and tests the resulting model using canonical correlation analysis of a data set consisting of 271 observations of 122 best-practice transfers in eight companies. Contrary to conventional wisdom that blames primarily motivational factors, the study findings show the major barriers to internal knowledge transfer to be knowledge-related factors such as the recipient's lack of absorptive capacity, causal ambiguity, and an arduous relationship between the source and the recipient. The identification and transfer of best practices is emerging as one of the most important and widespread practical management issues of the latter half of the 1990s. Armed with meaningful, detailed performance data, firms that use fact-based management methods such as TQM, benchmarking, and process reengineering can regularly compare the performance of their units along operational dimensions. Sparse but unequivocal evidence suggests that such comparisons often reveal surprising performance differences between units, indicating a need to improve knowledge utilization within the firm (e.g., Chew, Bresnahan, [...]). Because internal transfers typically are hindered less by confidentiality and legal obstacles than external transfers, they could be faster and initially less complicated, all other things being equal. For those reasons, in an era when continuous organizational learning and relentless performance improvement are needed to remain competitive, companies must increasingly resort to the internal transfer of capabilities. Yet, experience shows that transferring capabilities within a firm is far from easy. General Motors had great difficulty in transferring manufacturing practices between divisions (Kerwin and Woodruff, 1992: 74) and IBM had limited suc[...]
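
Canonical correlation analysis, the technique the study uses, relates one set of variables (the hypothesized barriers) to another set (the transfer-outcome measures) by finding maximally correlated linear combinations of each. A small sketch on simulated data; the variable sets, names, and effect structure are illustrative assumptions, not the study's actual measures:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n = 271  # matching the study's number of observations

# Hypothetical predictor block: e.g., absorptive capacity, causal ambiguity,
# arduousness of the source-recipient relationship, motivation.
X = rng.normal(size=(n, 4))
# Hypothetical outcome block: e.g., two measures of transfer "stickiness",
# simulated here to depend on the predictor block plus noise.
Y = X @ rng.normal(size=(4, 2)) + rng.normal(scale=0.5, size=(n, 2))

cca = CCA(n_components=2)
X_c, Y_c = cca.fit_transform(X, Y)

# Canonical correlations: correlations between paired canonical variates.
for i in range(2):
    r = np.corrcoef(X_c[:, i], Y_c[:, i])[0, 1]
    print(f"canonical correlation {i + 1}: {r:.3f}")
```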

6,805 citations

Journal ArticleDOI
TL;DR: The current paper reviews four recent studies in the strategic management area which use PLS and notes that the technique has been applied inconsistently, and at times inappropriately, and suggests standards for evaluating future PLS applications.
Abstract: Advances in causal modeling techniques have made it possible for researchers to simultaneously examine theory and measures. However, researchers must use these new techniques appropriately. In addition to dealing with the methodological concerns associated with more traditional methods of analysis, researchers using causal modeling approaches must understand their underlying assumptions and limitations. Most researchers are well equipped with a basic understanding of LISREL-type models. In contrast, current familiarity with PLS in the strategic management area is low. The current paper reviews four recent studies in the strategic management area which use PLS. The review notes that the technique has been applied inconsistently, and at times inappropriately, and suggests standards for evaluating future PLS applications. Copyright © 1999 John Wiley & Sons, Ltd.
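
Full PLS path modeling of the kind the paper reviews is not implemented in scikit-learn, but PLS regression illustrates the shared core idea: extracting latent components that maximize covariance between an indicator block and an outcome. A minimal sketch on simulated data, with all variable names and the data-generating process assumed for illustration:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
n = 200
X = rng.normal(size=(n, 6))  # six observed indicators
# Outcome driven by a latent factor underlying the first three indicators.
y = X[:, :3].sum(axis=1) + rng.normal(size=n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)

print("explained outcome variance (R^2):", round(pls.score(X, y), 3))
print("indicator loadings on component 1:", np.round(pls.x_loadings_[:, 0], 2))
```

Evaluating such a model against the kinds of standards the paper calls for would mean inspecting the loadings (measurement quality) as well as the structural estimates, rather than reporting fit alone.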

6,205 citations

Journal ArticleDOI
TL;DR: A new latent variable modeling approach is provided that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships.
Abstract: The ability to detect and accurately estimate the strength of interaction effects is a critical issue that is fundamental to social science research in general and IS research in particular. Within the IS discipline, a significant percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (cf. McKeen et al. 1994, Weill and Olson 1989). In our survey of such studies, the majority failed to either detect or provide an estimate of the effect size. In cases where effect sizes are estimated, the numbers are generally small. These results have led some researchers to question both the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships. The capacity of this approach at recovering true effects in comparison to summated regression is demonstrated in a Monte Carlo study that creates a simulated data set in which the underlying true effects are known. Analysis of a second, empirical data set is included to demonstrate the technique's use within IS theory. In this second analysis, substantial direct and interaction effects of enjoyment on electronic-mail adoption are shown to exist.
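
The attenuation problem the paper targets is easy to reproduce: when latent predictors are observed with error, ordinary regression on the observed scores systematically underestimates the interaction effect, because the product term inherits the unreliability of both components. A Monte Carlo sketch of that baseline (plain OLS, the approach the paper's latent variable method improves on); the effect sizes and reliability value are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(42)
n, reps = 500, 200
beta_x, beta_z, beta_xz = 0.4, 0.4, 0.3  # true latent effects
reliability = 0.7                         # assumed reliability of each observed score

est_xz = []
for _ in range(reps):
    x, z = rng.normal(size=(2, n))        # true latent predictors
    y = beta_x * x + beta_z * z + beta_xz * x * z + rng.normal(size=n)
    # Observed score = latent score + error, scaled to the chosen reliability.
    err_sd = np.sqrt((1 - reliability) / reliability)
    x_obs = x + rng.normal(scale=err_sd, size=n)
    z_obs = z + rng.normal(scale=err_sd, size=n)
    # OLS of y on observed scores and their product (summated-regression analogue).
    X = np.column_stack([np.ones(n), x_obs, z_obs, x_obs * z_obs])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    est_xz.append(coef[3])

print(f"true interaction effect:        {beta_xz:.2f}")
print(f"mean OLS estimate (attenuated): {np.mean(est_xz):.2f}")
```

With per-variable reliability of 0.7, the product term's reliability is roughly the product of the two, so the recovered interaction coefficient lands near half its true value, which is exactly the gap a latent variable approach is meant to close.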

5,639 citations

Posted Content
01 Jan 2001
TL;DR: This paper gives a lightning overview of data mining and its relation to statistics, with particular emphasis on tools for the detection of adverse drug reactions.
Abstract: The growing interest in data mining is motivated by a common problem across disciplines: how does one store, access, model, and ultimately describe and understand very large data sets? Historically, different aspects of data mining have been addressed independently by different disciplines. This is the first truly interdisciplinary text on data mining, blending the contributions of information science, computer science, and statistics. The book consists of three sections. The first, foundations, provides a tutorial overview of the principles underlying data mining algorithms and their application. The presentation emphasizes intuition rather than rigor. The second section, data mining algorithms, shows how algorithms are constructed to solve specific problems in a principled manner. The algorithms covered include trees and rules for classification and regression, association rules, belief networks, classical statistical models, nonlinear models such as neural networks, and local "memory-based" models. The third section shows how all of the preceding analysis fits together when applied to real-world data mining problems. Topics include the role of metadata, how to handle missing data, and data preprocessing.
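
Two of the book's closing topics, handling missing data and fitting classification trees, compose naturally in practice: impute first, then fit. A small sketch under assumed simulated data, using an MCAR (missing completely at random) mechanism of the kind distinguished in the missing-data literature the book draws on:

```python
import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 400
X = rng.normal(size=(n, 3))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Knock out 15% of entries completely at random (MCAR).
mask = rng.random(X.shape) < 0.15
X_missing = X.copy()
X_missing[mask] = np.nan

X_tr, X_te, y_tr, y_te = train_test_split(X_missing, y, random_state=0)

# Mean-impute (fit on training data only), then fit a shallow tree.
imputer = SimpleImputer(strategy="mean").fit(X_tr)
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(imputer.transform(X_tr), y_tr)
print("test accuracy:", round(tree.score(imputer.transform(X_te), y_te), 3))
```

Mean imputation is only defensible here because the missingness is MCAR; under MAR or MNAR mechanisms it can bias both the tree and its evaluation.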

3,765 citations


Cites background from "Reliability and Validity Assessment..."

  • ...Carmines and Zeller (1979) also discuss such issues. A key work on incomplete data and different types of missing data mechanisms is Little and Rubin (1987). The bank loan example of distorted samples is taken from Hand, McConway, and Stanghellini (1997). Goldstein (1995) is a key work on multilevel modeling....

01 Jan 2000
TL;DR: Most of the archaeologically recoverable information about human thought and human behavior is text, the good stuff of social science, and this chapter presents methods for managing and analyzing such qualitative data.
Abstract: This chapter is about methods for managing and analyzing qualitative data. By qualitative data the authors mean text: newspapers, movies, sitcoms, e-mail traffic, folktales, life histories. They also mean narratives--narratives about getting divorced, about being sick, about surviving hand-to-hand combat, about selling sex, about trying to quit smoking. In fact, most of the archaeologically recoverable information about human thought and human behavior is text, the good stuff of social science.
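
One basic qualitative-data-management task the chapter describes is applying a codebook of themes to free text and tallying how often each code occurs. A toy sketch; the codebook, keywords, and narratives are invented for illustration and stand in for the systematic coding a real analysis would require:

```python
from collections import Counter

# Hypothetical codebook: theme -> keywords that trigger the code.
codebook = {
    "health": ["sick", "smoking", "illness"],
    "conflict": ["combat", "divorce", "fight"],
}

# Hypothetical narrative fragments of the kind the chapter discusses.
narratives = [
    "After the divorce I started smoking again.",
    "Surviving hand-to-hand combat changed how I think about illness.",
]

# Count each narrative at most once per code.
counts = Counter()
for text in narratives:
    lowered = text.lower()
    for code, keywords in codebook.items():
        if any(kw in lowered for kw in keywords):
            counts[code] += 1

print(dict(counts))  # e.g. {'health': 2, 'conflict': 2}
```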

3,671 citations