
Showing papers in "Information Systems Research in 2003"


Journal ArticleDOI
TL;DR: A new latent variable modeling approach is provided that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships.
Abstract: The ability to detect and accurately estimate the strength of interaction effects is a critical issue that is fundamental to social science research in general and IS research in particular. Within the IS discipline, a significant percentage of research has been devoted to examining the conditions and contexts under which relationships may vary, often under the general umbrella of contingency theory (cf. McKeen et al. 1994, Weill and Olson 1989). In our survey of such studies, the majority failed to either detect or provide an estimate of the effect size. In cases where effect sizes are estimated, the numbers are generally small. These results have led some researchers to question both the usefulness of contingency theory and the need to detect interaction effects (e.g., Weill and Olson 1989). This paper addresses this issue by providing a new latent variable modeling approach that can give more accurate estimates of interaction effects by accounting for the measurement error that attenuates the estimated relationships. The capacity of this approach to recover true effects, in comparison with summated regression, is demonstrated in a Monte Carlo study that creates a simulated data set in which the underlying true effects are known. Analysis of a second, empirical data set is included to demonstrate the technique's use within IS theory. In this second analysis, substantial direct and interaction effects of enjoyment on electronic-mail adoption are shown to exist.
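The attenuation problem that motivates the approach can be seen in a small simulation. The sketch below is a minimal Monte Carlo illustration, not the authors' latent variable procedure: it assumes a known interaction effect and a scale reliability of 0.7, both illustrative values, and shows how ordinary regression on summated, error-laden scores underestimates the interaction.

# Minimal Monte Carlo sketch of measurement-error attenuation of an
# interaction effect. Parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, reps = 500, 200
true_beta_xz = 0.3        # assumed true interaction effect
reliability = 0.7         # assumed reliability of each summated scale
err_sd = np.sqrt((1 - reliability) / reliability)

estimates = []
for _ in range(reps):
    x = rng.standard_normal(n)                     # latent predictors
    z = rng.standard_normal(n)
    y = 0.5 * x + 0.5 * z + true_beta_xz * x * z + rng.standard_normal(n)
    x_obs = x + err_sd * rng.standard_normal(n)    # observed (summated) scores
    z_obs = z + err_sd * rng.standard_normal(n)
    X = np.column_stack([np.ones(n), x_obs, z_obs, x_obs * z_obs])
    estimates.append(np.linalg.lstsq(X, y, rcond=None)[0][3])

print(f"true interaction effect: {true_beta_xz:.2f}")
print(f"mean summated-regression estimate: {np.mean(estimates):.2f} (attenuated)")

Because the reliability of the product term is roughly the product of the two scale reliabilities, the estimated interaction is biased toward zero; this is the gap the latent variable approach is designed to close.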

5,639 citations


Journal ArticleDOI
TL;DR: The framework indicates ways in which researchers in information systems and other fields may properly lay claim to generalizability, and thereby broader relevance, even when their inquiry falls outside the bounds of sampling-based research.
Abstract: Generalizability is a major concern to those who do, and use, research. Statistical, sampling-based generalizability is well known, but methodologists have long been aware of conceptions of generalizability beyond the statistical. The purpose of this essay is to clarify the concept of generalizability by critically examining its nature, illustrating its use and misuse, and presenting a framework for classifying its different forms. The framework organizes the different forms into four types, which are defined by the distinction between empirical and theoretical kinds of statements. On the one hand, the framework affirms the bounds within which statistical, sampling-based generalizability is legitimate. On the other hand, the framework indicates ways in which researchers in information systems and other fields may properly lay claim to generalizability, and thereby broader relevance, even when their inquiry falls outside the bounds of sampling-based research.

1,570 citations


Journal ArticleDOI
TL;DR: Results support the model, suggesting that the process models used to understand information adoption can be generalized to the field of knowledge management, and that usefulness serves a mediating role between influence processes and information adoption.
Abstract: This research investigates how knowledge workers are influenced to adopt the advice that they receive in mediated contexts. The research integrates the Technology Acceptance Model (Davis 1989) with dual-process models of informational influence (e.g., Petty and Cacioppo 1986, Chaiken and Eagly 1976) to build a theoretical model of information adoption. This model highlights the assessment of information usefulness as a mediator of the information adoption process. Importantly, the model draws on the dual-process models to make predictions about the antecedents of informational usefulness under different processing conditions. The model is investigated qualitatively first, using interviews of a sample of 40 consultants, and then quantitatively on another sample of 63 consultants from the same international consulting organization. Data reflect participants' perceptions of actual e-mails containing advice or recommendations that they received from colleagues. Results support the model, suggesting that the process models used to understand information adoption can be generalized to the field of knowledge management, and that usefulness serves a mediating role between influence processes and information adoption. Organizational knowledge work is becoming increasingly global. This research offers a model for understanding knowledge transfer using computer-mediated communication.

1,184 citations


Journal ArticleDOI
TL;DR: The results indicate that the sample size, data source, and industry in which the study is conducted influence the likelihood of the study finding greater improvements in firm performance, and the choice of the dependent variable also appears to influence the outcome.
Abstract: Payoffs from information technology (IT) continue to generate interest and debate both among academicians and practitioners. The extant literature cites inadequate sample size, lack of process orientation, and analysis methods among the reasons some studies have shown mixed results in establishing a relationship between IT investment and firm performance. In this paper we examine the structural variables that affect IT payoff through a meta-analysis of 66 firm-level empirical studies between 1990 and 2000. Employing logistic regression and discriminant analyses, we present statistical evidence of the characteristics that discriminate between IT payoff studies that observed a positive effect and those that did not. In addition, we conduct ordinary least squares (OLS) regression on a continuous measure of IT payoff to examine the influence of structural variables on the result of IT payoff studies. The results indicate that the sample size, data source (firm-level or secondary), and industry in which the study is conducted influence the likelihood of the study finding greater improvements in firm performance. The choice of the dependent variable(s), the type of statistical analysis conducted, and whether the study adopted a cross-sectional or longitudinal design also appear to influence the outcome (although we did not find support for process-oriented measurement). Finally, we present implications of the findings and recommendations for future research.
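As a rough illustration of the analysis mechanics (not the authors' data or coding scheme), the sketch below codes a few hypothetical study-level characteristics and fits a logistic regression predicting whether a study reported a positive IT payoff; all values are synthetic placeholders.

# Hedged sketch of study-level coding plus logistic regression for a meta-analysis.
# The coded studies below are synthetic, not the 66 studies analyzed in the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n_studies = 66

sample_size  = rng.integers(30, 1000, n_studies)          # firms per study (hypothetical)
secondary    = rng.integers(0, 2, n_studies)               # 1 = secondary data source
longitudinal = rng.integers(0, 2, n_studies)               # 1 = longitudinal design
positive     = (rng.random(n_studies) < 0.5).astype(int)   # 1 = positive payoff reported

X = np.column_stack([np.log(sample_size), secondary, longitudinal])
model = LogisticRegression().fit(X, positive)
print("coefficients (log sample size, secondary source, longitudinal):",
      model.coef_.round(2))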

710 citations


Journal ArticleDOI
TL;DR: It is concluded that many findings from research on control of internal ISD projects apply to the outsourced context as well, but with some interesting differences.
Abstract: This paper examines the evolution of the portfolio of controls over the duration of outsourced information systems development (ISD) projects. Drawing on five cases, it concludes that many findings from research on control of internal ISD projects apply to the outsourced context as well, but with some interesting differences. The portfolios of control in outsourced projects are dominated by outcome controls, especially at the start of the project, although the precision and frequency of these controls vary across projects. Behavior controls are often added later in the project, as are controls aimed at encouraging and enabling vendor self-control. Clan controls were used in only two of the cases--when the client and vendor had shared goals, and when frequent interactions led to shared values. In general, the outsourced projects we studied began with relatively simple controls but often required significant additional controls after experiencing performance problems. Factors influencing the choice and evolution of controls are also examined.

640 citations


Journal ArticleDOI
TL;DR: A new theoretical model of the underlying observational learning processes by which modeling-based training interventions influence computer task performance is developed and tested, which should enable future research to systematically evaluate the effectiveness of a wide range of modeling-based training interventions.
Abstract: Computer skills are key to organizational performance, and past research indicates that behavior modeling is a highly effective form of computer skill training. The present research develops and tests a new theoretical model of the underlying observational learning processes by which modeling-based training interventions influence computer task performance. Observational learning processes are represented as a second-order construct with four dimensions (attention, retention, production, and motivation). New measures for these dimensions were developed and shown to have strong psychometric properties. The proposed model controls for two pretraining individual differences (motivation to learn and self-efficacy) and specifies the relationships among three training outcomes (declarative knowledge, post-training self-efficacy, and task performance). The model was tested using PLS on data from an experiment (N = 95) on computer spreadsheet training. As hypothesized, observational learning processes significantly influenced training outcomes. A representative modeling-based training intervention (retention enhancement) significantly improved task performance through its specific effects on the retention processes dimension of observational learning. The new model provides a more complete theoretical account of the mechanisms by which modeling-based interventions affect training outcomes, which should enable future research to systematically evaluate the effectiveness of a wide range of modeling-based training interventions. Further, the new instruments can be used by practitioners to refine ongoing training programs.

576 citations


Journal ArticleDOI
TL;DR: Recommendations are given as to how organizations can enhance their business managers' IT knowledge and experience to achieve stronger IT leadership from line people.
Abstract: With the increased importance of IT in organizations, business managers are now expected to show stronger leadership in regard to the deployment of IT in their organizations. This requires greater focus on their capability to understand and use IT resources effectively. This paper explores the concept of IT competence of business managers as a contributor to their intention to champion IT within their organizations. Based on the knowledge literature, IT competence is defined as "the set of IT-related knowledge and experience that a business manager possesses." The relationship between IT knowledge, IT experience, and championing IT is tested empirically using Structural Equation Modeling with LISREL. Four hundred and four business managers from two large insurance organizations were surveyed. Specific areas of IT knowledge and IT experience were first identified, and the first half of the data set was utilized to assess the measurement properties of the instrument in a confirmatory analysis. The contribution of IT knowledge and IT experience to their intention to champion IT was assessed using the second half of the data set. The results show that IT knowledge and IT experience together explain 34% of the variance in managers' intentions to champion IT. Recommendations are given as to how organizations can enhance their business managers' IT knowledge and experience to achieve stronger IT leadership from line people.

434 citations


Journal ArticleDOI
TL;DR: The study of 32 five- and six-person groups supports the belief that interpretation underlies information sharing and is necessary for favorable decision outcomes and supports the proposed negative effect of low social presence media on interpretation in terms of depth of information sharing.
Abstract: Research on information sharing has viewed this activity as essential for informing groups on content relevant to a decision. We propose and examine an alternate function of information sharing, i.e., the social construction of meaning. To accomplish this goal, we turn to social construction, social presence, and task closure theories. Drawing from these theories, we hypothesize relationships among the meeting environment, breadth and depth of information shared during a meeting, and decision quality. We explore these relationships in terms of the effects of both the media environment in which the group is situated and the medium that group members choose to utilize for their communication. Our study of 32 five- and six-person groups supports our belief that interpretation underlies information sharing and is necessary for favorable decision outcomes. It also supports the proposed negative effect of low social presence media on interpretation in terms of depth of information sharing; a low social presence medium, however, promotes information sharing breadth. Finally, the findings indicate that when in multimedia environments and faced with a relatively complex task, choosing to utilize an electronic medium facilitates closure and, therefore, favorable outcomes.

415 citations


Journal ArticleDOI
TL;DR: The novel contribution of this work is to show how XML can also be used to describe workflow process schemas to support flexible routing of documents in the Internet environment.
Abstract: The full potential of the Web as a medium for electronic commerce can be realized only when multiple partners in a supply chain can route information among themselves in a seamless way. Commerce on the Internet is still far from being "friction free," because business partners cannot exchange information about their business processes in an automated manner. In this paper, we propose the design for an eXchangeable Routing Language (XRL) using eXtensible Markup Language (XML) syntax. XML is a means for trading partners to exchange business data electronically. The novel contribution of our work is to show how XML can also be used to describe workflow process schemas to support flexible routing of documents in the Internet environment. The design of XRL is grounded in Petri nets, a well-known formalism. By using this formalism, it is possible to analyze the correctness and performance of workflows described in XRL. Architectures to facilitate interoperation through loose and tight integration are also discussed. Examples illustrate how this approach can be used for implementing interorganizational electronic commerce applications. As a proof of concept, we have also developed XRL/flower, a prototype implementation of a workflow management system based on XRL.
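Since the routing semantics of XRL are grounded in Petri nets, a small token-game sketch conveys the underlying execution model. The net below is illustrative only: the task names are invented, and the code is neither actual XRL syntax nor the XRL/flower engine.

# Minimal Petri-net sketch of the execution semantics underlying XRL routing.
from collections import Counter

class PetriNet:
    def __init__(self, marking):
        self.marking = Counter(marking)      # place -> token count
        self.transitions = {}                # name -> (input places, output places)

    def add_transition(self, name, inputs, outputs):
        self.transitions[name] = (inputs, outputs)

    def enabled(self, name):
        inputs, _ = self.transitions[name]
        return all(self.marking[p] >= 1 for p in inputs)

    def fire(self, name):
        inputs, outputs = self.transitions[name]
        assert self.enabled(name), f"{name} is not enabled"
        for p in inputs:
            self.marking[p] -= 1
        for p in outputs:
            self.marking[p] += 1

# Illustrative order-handling route: two parallel checks synchronize before shipping.
net = PetriNet({"start": 1})
net.add_transition("receive_order", ["start"], ["p_credit", "p_stock"])
net.add_transition("check_credit", ["p_credit"], ["credit_ok"])
net.add_transition("check_stock", ["p_stock"], ["stock_ok"])
net.add_transition("ship", ["credit_ok", "stock_ok"], ["done"])

for t in ["receive_order", "check_credit", "check_stock", "ship"]:
    net.fire(t)
print(dict(net.marking))    # a single token ends up in 'done'

Representing routes this way is what makes correctness questions, such as whether every started case eventually reaches the final place, analyzable with standard Petri-net techniques.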

246 citations


Journal ArticleDOI
TL;DR: The overall conclusion is that DQI should be made available to managers without domain-specific experience and incorporated into data warehouses used on an ad hoc basis by managers.
Abstract: Data Quality Information (DQI) is metadata that can be included with data to provide the user with information regarding the quality of that data. As users are increasingly removed from any personal experience with data, knowledge that would be beneficial in judging the appropriateness of the data for the decision to be made has been lost. Data tags could provide this missing information. However, it would be expensive in general to generate and maintain such information. Doing so would be worthwhile only if DQI is used and affects the decision made. This work focuses on how the experience of the decision maker and the available processing time influence the use of DQI in decision making. It also explores other potential issues regarding use of DQI, such as task complexity and demographic characteristics. Our results indicate increasing use of DQI when experience levels progress through the stages from novice to professional. The overall conclusion is that DQI should be made available to managers without domain-specific experience. From this it would follow that DQI should be incorporated into data warehouses used on an ad hoc basis by managers.
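What a data tag might look like in practice can be sketched with a simple record structure; the field names below (source, collection date, accuracy, completeness) are illustrative assumptions rather than a schema proposed in the paper.

# Minimal sketch of a data value carrying DQI metadata. Field names are assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class DQI:
    source: str           # where the value came from
    collected: date       # how current the value is
    accuracy: float       # estimated accuracy, 0..1
    completeness: float   # share of contributing fields that were populated

@dataclass
class TaggedValue:
    value: float
    dqi: DQI

monthly_sales = TaggedValue(
    value=125_400.0,
    dqi=DQI(source="regional ERP extract", collected=date(2003, 6, 30),
            accuracy=0.92, completeness=0.85),
)

# A data warehouse front end could surface the tag alongside the value so that
# managers without domain-specific experience can judge fitness for use.
print(monthly_sales.value, monthly_sales.dqi.accuracy)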

217 citations


Journal ArticleDOI
TL;DR: This work considers how the government should set the fine for copying, the tax on the copying medium, and the subsidy on legitimate purchases, while a monopoly publisher sets price and spending on detection.
Abstract: We consider how the government should set the fine for copying, the tax on the copying medium, and the subsidy on legitimate purchases, while a monopoly publisher sets price and spending on detection. There are two segments of potential software users--ethical users who will not copy, and unethical users who would copy if the benefit outweighs the cost. In deciding on policy, the government must consider how the publisher adjusts price and detection to changes in the fine, tax, and subsidy. Our key welfare result is that increases in detection affect welfare more negatively than price cuts. We also show that the tax is welfare superior to the fine, and that a subsidy is optimal. Generally, government policies that focus on penalties alone will miss the social welfare optimum.

Journal ArticleDOI
TL;DR: Findings show that role overload, the presence of strong ties between manager and contractor, and the lack of prior outsourcing experience increased the persistence of managerial expectations, which had a distinct influence on managerial perceptions of contractor performance.
Abstract: This paper investigates the persistence of managerial expectations in an IT outsourcing context where the traditional relationship between supervisor and subordinate changes to one of client-manager and contractor. A mixed-method approach was used, in which a qualitative methodology preceded a large-scale quantitative survey. Data were collected from 147 survivors of a government IT organization which had undergone IT outsourcing in the previous year. Findings show that role overload, the presence of strong ties between manager and contractor, and the lack of prior outsourcing experience increased the persistence of managerial expectations. In turn, persistence of expectations had a distinct influence on managerial perceptions of contractor performance.

Journal ArticleDOI
TL;DR: A simulation approach that provides a relatively risk-free and cost-effective environment to examine the decision space for both bid takers and bid makers in web-based dynamic price setting processes and finds that hybrid-bidding strategies have the potential of significantly altering bidders' likelihood of winning, as well as their surplus.
Abstract: We present a simulation approach that provides a relatively risk-free and cost-effective environment to examine the decision space for both bid takers and bid makers in web-based dynamic price setting processes. The applicability of the simulation platform is demonstrated for Yankee auctions in particular. We focus on the optimization of bid takers' revenue, as well as on examining the welfare implications of a range of consumer-bidding strategies--some observed, some hypothetical. While these progressive open discriminatory multiunit auctions with discrete bid increments are made feasible by Internet technologies, little is known about their structural characteristics or their allocative efficiency. The multiunit and discrete nature of these mechanisms renders the traditional analytic framework of game theory intractable (Nautz and Wolfstetter 1997). The simulation is based on the theoretical revenue-generating properties of these auctions. We use empirical data from real online auctions to instantiate the simulation's parameters. For example, the bidding strategies of the bidders are specified based on three broad bidding strategies observed in real online auctions. The validity of the simulation model is established, and subsequently the simulation model is configured to change the values of key control factors, such as the bid increment. Our analysis indicates that the auctioneers are, most of the time, far away from the optimal choice of bid increment, resulting in substantial losses in a market with already tight margins. The simulation tool provides a test bed for jointly exploring the combinatorial space of design choices made by the auctioneer and the bidding strategies adopted by the bidders. For instance, a multinomial logit model reveals that endogenous factors, such as the bid increment and the absolute magnitude of the auction, have a statistically significant impact on consumer-bidding strategies. This endogeneity is subsequently modeled into the simulation to investigate whether the effects are significant enough to alter the optimal bid increments or auctioneer revenues. Additionally, we investigate hybrid-bidding strategies, derived as a combination of three broad strategies, such as jump bidding and strategic-at-margin (SAM) bidding. We find that hybrid strategies have the potential of significantly altering bidders' likelihood of winning, as well as their surplus.
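The flavor of such a simulation can be conveyed with a drastically simplified sketch: a multiunit ascending auction with a discrete bid increment, discriminatory (pay-your-bid) pricing, and bidders who always bid the minimum required amount. The valuations, strategy, and parameters are illustrative assumptions, not the paper's empirically calibrated platform.

# Highly simplified sketch of a Yankee-style multiunit ascending auction.
import numpy as np

def simulate(bid_increment, n_units=5, n_bidders=20, seed=0):
    rng = np.random.default_rng(seed)
    valuations = rng.uniform(50, 150, n_bidders)   # assumed private valuations
    bids = np.zeros(n_bidders)                     # each bidder's standing bid
    start_price = 10.0

    changed = True
    while changed:
        changed = False
        winners = set(np.argsort(-bids)[:n_units])   # provisional winners
        for i in range(n_bidders):
            if i in winners and bids[i] > 0:
                continue
            lowest_winning = min(bids[list(winners)])
            required = max(start_price, lowest_winning + bid_increment)
            if valuations[i] >= required and required > bids[i]:
                bids[i] = required                   # bid the minimum required amount
                changed = True
                winners = set(np.argsort(-bids)[:n_units])
    final_winners = np.argsort(-bids)[:n_units]
    return bids[final_winners].sum()                 # pay-your-bid revenue

for inc in (1, 5, 10, 20):
    print(f"bid increment {inc:>2}: simulated revenue ~ {simulate(inc):.0f}")

Sweeping the increment this way is the kind of design-space exploration the platform supports; the paper's version instantiates bidder strategies and parameters from real online auction data rather than assumed distributions.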

Journal ArticleDOI
TL;DR: This research studies the incentive structure in the decentralized organization and designs a market-based coordination system that is incentive aligned, i.e., it gives the participants the incentives to act in a manner that is beneficial to the overall system.
Abstract: Traditional development of large-scale information systems is based on centralized information processing and decision making. With increasing competition, shorter product life cycles, and growing uncertainties in the marketplace, centralized systems are inadequate for processing information that grows at an explosive rate and are unable to respond quickly to real-world situations. Introducing a decentralized information system in an organization is a challenging task. It is often intertwined with other organizational processes. The goal of this research is to outline a new approach to developing a supply chain information system with a decentralized decision-making process. In particular, we study the incentive structure in the decentralized organization and design a market-based coordination system that is incentive aligned, i.e., it gives the participants the incentives to act in a manner that is beneficial to the overall system. We also prove that the system monotonically improves the overall organizational performance and is goal congruent.
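One simple way to picture a market-based coordination mechanism is a coordinator that adjusts an internal price until the quantities requested by self-interested units just fit the shared capacity. The linear demand rule and numbers below are illustrative assumptions, not the incentive-aligned mechanism designed in the paper.

# Minimal sketch of price-based coordination among decentralized units.
def demand(unit_value, price):
    # Quantity a unit requests at a given internal price (illustrative linear rule).
    return max(0.0, unit_value - price)

def coordinate(unit_values, capacity, step=0.01, max_iter=100_000):
    price = 0.0
    for _ in range(max_iter):
        total = sum(demand(v, price) for v in unit_values)
        if total <= capacity:
            break
        price += step          # raise the price while the resource is over-requested
    return price, [demand(v, price) for v in unit_values]

# Three units with different internal valuations share 10 units of capacity.
price, allocation = coordinate(unit_values=[8.0, 6.0, 5.0], capacity=10.0)
print(f"clearing price ~ {price:.2f}, allocation ~ {[round(a, 2) for a in allocation]}")

The incentive-alignment question the paper addresses is whether units are better off reporting their true values to such a mechanism than misreporting them; this toy price-adjustment loop does not establish that property by itself.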

Journal ArticleDOI
TL;DR: Learning mechanisms for improving analysis pattern reuse in conceptual design are developed and the results suggest that the methodology has the potential to benefit practice.
Abstract: Conceptual design is an important, but difficult, phase of systems development. Analysis patterns can greatly benefit this phase because they capture abstractions of situations that occur frequently in conceptual modeling. Naïve approaches to automate conceptual design with reuse of analysis patterns have had limited success because they do not emulate the learning that occurs over time. This research develops learning mechanisms for improving analysis pattern reuse in conceptual design. The learning mechanisms employ supervised learning techniques to support the generic reuse tasks of retrieval, adaptation, and integration, and emulate expert behaviors of analogy making and designing by assembly. They are added to a naïve approach and the augmented methodology implemented as an intelligent assistant to a designer for generating an initial conceptual design that a developer may refine. To assess the potential of the methodology to benefit practice, empirical testing is carried out on multiple domains and tasks of different sizes. The results suggest that the methodology has the potential to benefit practice.
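The retrieval step in pattern reuse can be pictured as matching a new requirements statement against stored analysis patterns. The sketch below uses simple text similarity as a stand-in for the paper's supervised learning mechanisms; the patterns and requirement text are illustrative.

# Hedged sketch of analysis-pattern retrieval via text similarity (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

patterns = {
    "Party-Role": "person organization plays role in a context over time",
    "Resource-Allocation": "resource assigned to task with schedule and capacity",
    "Order-LineItem": "customer places order containing line items for products",
}
requirement = "a customer places an order with line items for several products"

vectorizer = TfidfVectorizer()
matrix = vectorizer.fit_transform(list(patterns.values()) + [requirement])
scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()

best_name, best_score = max(zip(patterns, scores), key=lambda kv: kv[1])
print(f"retrieved pattern: {best_name} (similarity {best_score:.2f})")

In the paper's methodology, retrieval is one of three generic reuse tasks (alongside adaptation and integration), and the learning mechanisms improve it over time rather than relying on a fixed similarity measure.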

Journal ArticleDOI
TL;DR: This work presents an approach to facilitate this type of synthesis and decomposition through formal analysis of process structure using a mathematical structure called a metagraph.
Abstract: Organizations today face increasing pressures to integrate their processes across disparate divisions and functional units, in order to remove inefficiencies as well as to enhance manageability. Process integration involves two major types of changes to process structure: (1) synthesizing processes from separate but interdependent subprocesses, and (2) decomposing aggregate processes into distinct subprocesses that are more manageable. We present an approach to facilitate this type of synthesis and decomposition through formal analysis of process structure using a mathematical structure called a metagraph.
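A metagraph generalizes a directed graph by letting each edge connect a set of elements (the invertex) to a set of elements (the outvertex), which is what makes formal synthesis and decomposition of processes tractable. The sketch below is a minimal illustration with invented process elements; the reachability routine is a simple closure computation, not the paper's full analysis machinery.

# Minimal metagraph sketch: edges map sets of inputs to sets of outputs.
class Metagraph:
    def __init__(self, edges):
        # edges: list of (invertex, outvertex) pairs, each a set of elements
        self.edges = [(frozenset(i), frozenset(o)) for i, o in edges]

    def reachable(self, source):
        # Elements derivable from `source` by repeatedly applying edges.
        known = set(source)
        changed = True
        while changed:
            changed = False
            for invertex, outvertex in self.edges:
                if invertex <= known and not outvertex <= known:
                    known |= outvertex
                    changed = True
        return known

# Synthesis of two subprocesses is modeled here as the union of their edge sets.
procurement = [({"purchase_request"}, {"approved_request"}),
               ({"approved_request", "supplier_quote"}, {"purchase_order"})]
fulfillment = [({"purchase_order"}, {"goods_receipt", "invoice"})]

integrated = Metagraph(procurement + fulfillment)
print(integrated.reachable({"purchase_request", "supplier_quote"}))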

Journal ArticleDOI
TL;DR: This research note suggests that in addition to the convergent and discriminant validity that Salisbury et al. (2002) provided for the consensus on appropriation scale, there may be an opportunity to further refine the measurement of this construct.
Abstract: Measurement is perhaps the most difficult aspect of behavioral research. In a recent edition of ISR, a scale for consensus on appropriation was developed. Consensus on appropriation is one of three global constructs incorporated in adaptive structuration theory (Poole and DeSanctis 1990). The principal components analysis on the initial questionnaire revealed two factors with eigenvalues greater than one. While the methods used to develop the scale were thorough, the weaker factor was excluded from the rest of the analysis with little justification. We suggest that this finding has two possible explanations, multidimensionality or response bias. This research note suggests that in addition to the convergent and discriminant validity that Salisbury et al. (2002) provided for the consensus on appropriation scale, we may have an opportunity to further refine the measurement of this construct. By further exploring this principal component finding, consensus on appropriation may be better understood and measured.

Journal ArticleDOI
TL;DR: The authors provide their thoughts regarding the questions raised by Allport and Kerler (A&K) in their research note (2003) regarding the appropriate balance between theory and data in scale development.
Abstract: We are pleased that our effort has received such interest, and thank the editor for allowing us the opportunity to provide our thoughts regarding the questions raised by Allport and Kerler (A&K) in their research note (2003). A&K's paper has triggered a response on two fronts. The first concerns technical issues specific to our study (Salisbury et al. 2002), and the second is a more general question that would seem implicit in their paper: What is the appropriate balance between theory and data in scale development?