
Showing papers in "Information Systems Research in 1995"


Journal ArticleDOI
TL;DR: The results indicate that the decomposed Theory of Planned Behavior provides a fuller understanding of behavioral intention by focusing on the factors that are likely to influence systems use through the application of both design and implementation strategies.
Abstract: The Technology Acceptance Model and two variations of the Theory of Planned Behavior were compared to assess which model best helps to understand usage of information technology. The models were compared using student data collected from 786 potential users of a computer resource center. Behavior data were based on monitoring 3,780 visits to the resource center over a 12-week period. Weighted least squares estimation revealed that all three models performed well in terms of fit and were roughly equivalent in terms of their ability to explain behavior. Decomposing the belief structures in the Theory of Planned Behavior provided a moderate increase in the explanation of behavioral intention. Overall, the results indicate that the decomposed Theory of Planned Behavior provides a fuller understanding of behavioral intention by focusing on the factors that are likely to influence systems use through the application of both design and implementation strategies.

8,127 citations
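The abstract mentions weighted least squares estimation; the sketch below illustrates the WLS estimator on toy data. The predictor names and data are hypothetical and it does not reproduce the TAM/TPB structural models estimated in the paper.

```python
import numpy as np

# Minimal weighted least squares sketch (illustrative only; the study
# estimated full TAM/TPB structural models, not this toy regression).
rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])  # intercept + 3 hypothetical belief predictors
beta_true = np.array([0.5, 0.6, 0.2, 0.3])
y = X @ beta_true + rng.normal(scale=0.5, size=n)           # toy "behavioral intention"

w = rng.uniform(0.5, 1.5, size=n)                           # observation weights
W = np.diag(w)
# WLS estimator: beta = (X' W X)^{-1} X' W y
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
print(np.round(beta_hat, 3))
```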


Journal ArticleDOI
TL;DR: Self-efficacy exerted a strong influence on performance in both models, and behavior modeling was found to be more effective than the traditional method for training in Lotus 1-2-3, resulting in higher self-efficacy and higher performance.
Abstract: While computer training is widely recognized as an essential contributor to the productive use of computers in organizations, very little research has focused on identifying the processes through which training operates, and the relative effectiveness of different methods for such training. This research examined the training process, and compared a behavior modeling training program, based on Social Cognitive Theory [Bandura, A. 1977. Self-efficacy: Toward a unifying theory of behavioral change. Psych. Rev. 84(2) 191--215; Bandura, A. 1978. Reflections on self-efficacy. Adv. Behavioral Res. Therapy 1 237--269; Bandura, A. 1982. Self-efficacy mechanism in human agency. Amer. Psychologist 37(2) 122--147; Bandura, A. 1986. Social Foundations of Thought and Action. Prentice-Hall, Englewood Cliffs, NJ.], to a more traditional, lecture-based program. According to Social Cognitive Theory, watching others performing a behavior, in this case interacting with a computer system, influences the observers' perceptions of their own ability to perform the behavior, or self-efficacy, and the expected outcomes that they perceive, as well as providing strategies for effective performance. The findings provide only partial support for the research model. Self-efficacy exerted a strong influence on performance in both models. In addition, behavior modeling was found to be more effective than the traditional method for training in Lotus 1-2-3, resulting in higher self-efficacy and higher performance. For WordPerfect, however, modeling did not significantly influence performance. This finding was unexpected, and several possible explanations are explored in the discussion. Of particular surprise were the negative relationships found between outcome expectations and performance. Outcome expectations were expected to positively influence performance, but the results indicated a strong negative effect. Measurement limitations are presented as the most plausible explanation for this result, but further research is necessary to provide conclusive explanations.

1,490 citations


Journal ArticleDOI
TL;DR: This paper proposes and tests a new process-oriented methodology for ex post measurement to audit IT impacts on a strategic business unit (SBU) or profit center's performance and shows significant positive impacts of IT at the intermediate level.
Abstract: An important management question today is whether the anticipated economic benefits of Information Technology (IT) are being realized. In this paper, we consider this problem to be measurement related, and propose and test a new process-oriented methodology for ex post measurement to audit IT impacts on a strategic business unit (SBU) or profit center's performance. The IT impacts on a given SBU are measured relative to a group of SBUs in the industry. The methodology involves a two-stage analysis of intermediate and higher-level output variables that also accounts for industry- and economy-wide exogenous variables for tracing and measuring IT contributions. The data for testing the proposed model were obtained from SBUs in the manufacturing sector. Our results show significant positive impacts of IT at the intermediate level. The theoretical contribution of the study is a methodology that attempts to circumvent some of the measurement problems in this domain. It also provides a practical management tool to address the question of why or why not certain IT impacts occur. Additionally, through its process orientation, the suggested approach highlights key variables that may require managerial attention and subsequent action.

1,265 citations
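The two-stage idea described above can be illustrated with a minimal ordinary-least-squares sketch: intermediate outputs are regressed on IT inputs plus exogenous controls, and SBU performance is then regressed on the intermediate outputs. Variable names and data are hypothetical; this is not the authors' specification.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares coefficients."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

rng = np.random.default_rng(1)
n = 120
it_capital  = rng.normal(size=n)   # hypothetical IT input
exogenous   = rng.normal(size=n)   # hypothetical industry/economy-wide control
intermediate = 0.7 * it_capital + 0.3 * exogenous + rng.normal(scale=0.5, size=n)
sbu_perf     = 0.6 * intermediate + 0.2 * exogenous + rng.normal(scale=0.5, size=n)

ones = np.ones(n)
# Stage 1: intermediate output on IT input plus exogenous control
b1 = ols(np.column_stack([ones, it_capital, exogenous]), intermediate)
# Stage 2: SBU performance on intermediate output plus exogenous control
b2 = ols(np.column_stack([ones, intermediate, exogenous]), sbu_perf)
print(np.round(b1, 3), np.round(b2, 3))
```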


Journal ArticleDOI
TL;DR: The paper contributes to analysis of the development of the IS field as a whole, and provides some conceptual ideas and a reference point for further work in this relatively neglected area of research.
Abstract: This paper investigates aspects of the history and current state of interpretivism in IS research. The emergence of interpretivism is explored through the identification of a network of IS researchers working in the interpretive tradition, through an examination of the role of mainstream and alternative IS journals, and through an analysis of the rhetoric used to support interpretive claims. The paper contributes to analysis of the development of the IS field as a whole, and provides some conceptual ideas and a reference point for further work in this relatively neglected area of research.

1,253 citations


Journal ArticleDOI
TL;DR: An initial contingency framework for OMIS development depending on the organization's environment and its life-cycle stage is proposed, and the relationships between an OMIS and organizational learning and decision making are discussed.
Abstract: Preservation of organizational memory becomes increasingly important to organizations as it is recognized that experiential knowledge is a key to competitiveness. With the development and widespread availability of advanced information technologies (IT), information systems become a vital part of this memory. We analyze existing conceptualizations and task-specific instances of IT-supported organizational memory. We then develop a model for an organizational memory information system (OMIS) that is rooted in the construct of organizational effectiveness. The framework offers four subsystems that support activities leading to organizational effectiveness. These subsystems rest on the foundation of five mnemonic functions that provide for acquisition, retention, maintenance, search, and retrieval of information. We then identify the factors that will limit the success of OMIS implementation, although full treatment of this issue is outside the scope of the paper. To initiate a research agenda on OMIS, we propose an initial contingency framework for OMIS development depending on the organization's environment and its life-cycle stage, and discuss the relationships between an OMIS and organizational learning and decision making.

890 citations


Journal ArticleDOI
TL;DR: The model suggests that residual performance risk, i.e., the difficulty in estimating performance-related outcomes during the later stages of the project, can clarify the relationship between project uncertainty, coordination mechanisms and performance.
Abstract: In this research, a study of the effects of coordination mechanisms and risk drivers such as project uncertainty on the performance of software development projects was conducted. Two types of coordination mechanisms were considered: vertical and horizontal. The former refers to the extent to which coordination between users and IS staff is undertaken by authorized entities such as project managers or steering committees. The latter refers to the extent to which coordination is undertaken through mutual adjustments and communications between users and IS staff. A new research model was developed by synthesizing research using the structural contingency perspective from Organization Theory and the risk-based perspective in Software Engineering. The model suggests that residual performance risk, i.e., the difficulty in estimating performance-related outcomes during the later stages of the project, can clarify the relationship between project uncertainty, coordination mechanisms and performance. Eight hypotheses were derived from the model for empirical testing. Data were collected from 64 software development projects in the banking and other industries. The results provide considerable support for a revised research model. As expected, project uncertainty increases residual performance risk. Both in turn have a direct negative effect on performance. Vertical coordination significantly reduces both project uncertainty and residual performance risk. However, horizontal coordination does not have any significant effect on residual performance risk. Instead, it has a direct positive effect on project performance. Moreover, higher levels of both vertical and horizontal coordination lead to higher levels of overall performance. Their differential impacts on residual performance risk are interesting areas of future research.

574 citations


Journal ArticleDOI
TL;DR: Through analysis of a generic family of environments, procedures are suggested for reducing the negative consequences of this accuracy-timeliness tradeoff.
Abstract: It is well known, of course, that the assessment of this month's economic activity will improve with the passage of time. The same situation exists for many of the inputs to managerial and strategic decision processes. Information regarding some situation or activity at a fixed point in time becomes better with the passage of time. However, as a consequence of the dynamic nature of many environments, the information also becomes less relevant over time. We call this balance between using current but inaccurate information and using accurate but outdated information the accuracy-timeliness tradeoff. Through analysis of a generic family of environments, procedures are suggested for reducing the negative consequences of this tradeoff. In many of these situations, rather general knowledge concerning relative weights and shapes of functions is sufficient to determine optimizing strategies.

216 citations
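A toy illustration of the accuracy-timeliness tradeoff: accuracy improves with delay while relevance decays, and the delay is chosen to maximize their product. The functional forms below are assumptions for illustration only, not the paper's generic family of environments.

```python
import numpy as np

# Hypothetical functional forms: accuracy of the estimate rises with delay t,
# relevance of the information decays with t; choose t to maximize their product.
t = np.linspace(0.0, 10.0, 1001)        # delay before acting
accuracy  = 1.0 - np.exp(-0.8 * t)      # improves as data mature
relevance = np.exp(-0.3 * t)            # decays in a changing environment
value = accuracy * relevance

best = t[np.argmax(value)]
print(f"value-maximizing delay ~ {best:.2f} periods")
```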


Journal ArticleDOI
TL;DR: The evidence supports the use of the 13-item instrument as a measure of overall UIS and of four component factors for explaining the UIS construct, based on tests of alternative models of the underlying factor structure and of the reliability and validity of factors and items.
Abstract: The structure and dimensionality of the user information satisfaction (UIS) construct is an important theoretical issue that has received considerable attention. Building upon the work of Bailey and Pearson [Bailey, J. E., S. W. Pearson. 1983. Development of a tool for measuring and analyzing computer user satisfaction. Management Sci. 29(5), May, 530--545], Ives et al. [Ives, B., M. Olson, J. J. Baroudi. 1983. The measure of user information satisfaction. Comm. ACM 26(10), October, 785--793] conduct an exploratory factor analysis and recommend a 13-item instrument (two indicators per item) for measuring user information satisfaction. Ives et al. also contend that UIS is comprised of three component measures: information product, EDP staff and services, and user knowledge or involvement. In a replication using exploratory techniques, Baroudi and Orlikowski [Baroudi, J. J., W. J. Orlikowski. 1988. A short-form measure of user information satisfaction: A psychometric evaluation and notes on use. J. Management Inform. Systems 4(4), Spring, 44--59] confirm the three-factor structure and support the diagnostic utility of the three-factor model. Other researchers have suggested a need for caution in using the UIS instrument as a single measure of user satisfaction; they contend that the instrument's three components measure quite different dimensions whose antecedents and consequences should be studied separately. The acceptance of UIS as a standardized instrument requires confirmation that it explains and measures the user information satisfaction construct and its components. Based on a sample of 224 respondents, this research uses confirmatory factor analysis (LISREL) to test alternative models of underlying factor structure and assess the reliability and validity of factors and items. The results provide support for a revised UIS model with four first-order factors and one second-order (higher-order) factor. To cross-validate these results, the authors reexamine two data sets, including the original Baroudi and Orlikowski data, to assess the revised UIS model. The results show that the revised model provides better model-data fit in all three data sets. Thus, the evidence supports the use of (1) the 13-item instrument as a measure of an overall UIS and (2) four component factors for explaining the UIS construct.

202 citations
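The revised model's structure (four first-order factors loading on one second-order factor) implies a particular covariance matrix for the 13 items. The sketch below computes that implied covariance for made-up loadings; it is illustrative only and does not reproduce the authors' LISREL estimates or item assignments.

```python
import numpy as np

# Implied covariance of a second-order factor model: 13 items load on four
# first-order factors, which in turn load on one second-order factor.
# All loadings and the item-to-factor assignment are hypothetical.
items_per_factor = [4, 3, 3, 3]                  # assumed split of the 13 items
Lambda = np.zeros((13, 4))
row = 0
for j, k in enumerate(items_per_factor):
    Lambda[row:row + k, j] = 0.8                 # first-order loadings
    row += k

gamma = np.array([[0.7], [0.6], [0.8], [0.5]])   # second-order loadings
psi   = np.diag(1 - (gamma ** 2).ravel())        # first-order factor disturbances
theta = np.eye(13) * 0.36                        # item uniquenesses

cov_factors = gamma @ gamma.T + psi              # covariance of first-order factors
sigma = Lambda @ cov_factors @ Lambda.T + theta  # model-implied item covariance
print(sigma.shape, np.round(sigma[:4, :4], 2))
```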


Journal ArticleDOI
TL;DR: It is argued that it is possible to reconcile these two strategies, despite the clear differences that exist between them, and some possible methods of combining variance and process strategies are examined, the most powerful of which jointly applies these strategies while maintaining their distinct forms.
Abstract: Information systems researchers commonly describe variance and process strategies for studying information system development (ISD) as alternatives that may be difficult to reconcile. In this paper, we argue that it is possible to reconcile these two strategies, despite the clear differences that exist between them. Some possible methods of combining variance and process strategies are examined, the most powerful of which jointly applies these strategies while maintaining their distinct forms. This method is used in this paper, with variance strategy being implemented using levels of participation of key actors and process strategy being implemented using sequences of actions. Based on empirical analysis of 50 ISD projects, five clusters of ISD processes are examined. Results show that projects that are similar based on levels of participation are also similar based on event sequences, thus indicating that variance and process strategies can be reconciled. The insights that variance strategy, process stra...

174 citations


Journal ArticleDOI
TL;DR: Two direct manipulation tools were developed and applied to the treemap to support AHP sensitivity analysis, which dramatically speeds up exploration and provides a better understanding of the relative impact of the component criteria.
Abstract: Treemaps, a visualization method for large hierarchical data spaces, are used to augment the capabilities of the Analytic Hierarchy Process (AHP) for decision-making. Two direct manipulation tools, presented metaphorically as a “pump” and a “hook,” were developed and applied to the treemap to support AHP sensitivity analysis. Users can change the importance of criteria dynamically on the two-dimensional treemap and immediately see the impact on the outcome of the decision. This fluid process dramatically speeds up exploration and provides a better understanding of the relative impact of the component criteria. A usability study with six subjects using a prototype AHP application showed that treemap representation was acceptable from a visualization and data operation standpoint.

88 citations
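For readers unfamiliar with AHP, the sketch below derives criterion weights from a pairwise comparison matrix (principal eigenvector method) and mimics the "pump" idea by increasing one criterion's weight and renormalizing to see the effect on alternative scores. All numbers are hypothetical; this is not the prototype application described in the paper.

```python
import numpy as np

# AHP sketch: weights from a pairwise comparison matrix, weighted-sum scoring
# of alternatives, then a "pump"-style sensitivity change on one criterion.
pairwise = np.array([[1.0, 3.0, 5.0],
                     [1/3, 1.0, 2.0],
                     [1/5, 1/2, 1.0]])
eigvals, eigvecs = np.linalg.eig(pairwise)
w = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
w = w / w.sum()                                   # criterion weights

alternatives = np.array([[0.6, 0.2, 0.9],         # hypothetical scores per criterion
                         [0.4, 0.8, 0.3],
                         [0.7, 0.5, 0.5]])
print("baseline scores:", np.round(alternatives @ w, 3))

w_pumped = w.copy()
w_pumped[1] *= 2.0                                # "pump" criterion 2
w_pumped /= w_pumped.sum()                        # renormalize weights
print("after pumping criterion 2:", np.round(alternatives @ w_pumped, 3))
```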


Journal ArticleDOI
TL;DR: The role of application domain knowledge in the processes used to comprehend computer programs is demonstrated by proposing a key role for knowledge of the application domain under examination and arguing that programmers use more top-down comprehension processes when they are familiar with the application domain.
Abstract: The field of software has, to date, focused almost exclusively on application-independent approaches. In this research, we demonstrate the role of application domain knowledge in the processes used to comprehend computer programs. Our research sought to reconcile two apparently conflicting theories of computer program comprehension by proposing a key role for knowledge of the application domain under examination. We argue that programmers use more top-down comprehension processes when they are familiar with the application domain. When the application domain is unfamiliar, programmers use processes that are more bottom-up in nature. We conducted a protocol analysis study of 24 professional programmers comprehending programs in familiar and unfamiliar application domains. Our findings confirm our thesis.

Journal ArticleDOI
TL;DR: In re-analyses of Doll and Torkzadeh's original covariance measures, it is shown how model fit is extremely dependent on model specification, while still maintaining the same number of constructs and respective measures.
Abstract: In a survey of IS instruments spanning the years 1973 to 1988 by Zmud and Boynton [Zmud, R. W., A. C. Boynton. 1991. Survey measures and instruments in MIS: Inventory and appraisal. K. L. Kraemer, ed. The Information Systems Research Challenge: Survey Research Methods, Vol. 3. Harvard Business School, Boston, 149--180], Doll and Torkzadeh's [Doll, W. J., G. Torkzadeh. 1988. The measurement of end-user computing satisfaction. MIS Quart. June 259--274] 12-item End-User Computing Satisfaction instrument was reported as one of three IS instruments that met conditions to qualify as “well developed.” Recently, Etezadi-Amoli and Farhoomand [Etezadi-Amoli, J., A. F. Farhoomand. 1991. Issues and opinions on end-user computing satisfaction. MIS Quart. March 1--5] questioned the validity of these measures. Part of their critique centered on the poor model fit obtained in a re-analysis using LISREL. While other potentially valid points were raised by Etezadi-Amoli and Farhoomand's critique, this report focuses only on their use of confirmatory factor analysis. In our re-analyses of Doll and Torkzadeh's original covariance measures, we show how model fit is extremely dependent on model specification. While still maintaining the same number of constructs and respective measures, we demonstrate how two alternatives to the original model analyzed by Etezadi-Amoli and Farhoomand can result in models with acceptable fits.

Journal ArticleDOI
TL;DR: This paper models the process of coordination in the construction phase of incrementally developed, modular software systems and shows that more complex systems need a higher level of coordination than simpler ones, and that if the time available for construction is reduced, it is optimal to reduce the level of coordination.
Abstract: Software development projects are typically team efforts, wherein groups of specialists work toward the common goal of building a software system. The individual efforts of team members need to be coordinated to ensure product quality and effectiveness of the team. In this paper we model the process of coordination in the construction phase of incrementally developed, modular software systems. The analytical model proposed here supports macro-level decisions regarding the development team size and the coordination policy, based upon micro-level interactions between the modules in a system. The objective in this model is to minimize the effort spent on coordination activities subject to the requirement that the system must be completed within a specified period. Results from the model are used to examine coordination-related trade-offs. We show that (1) more complex systems need a higher level of coordination than simpler ones, (2) if the time available for construction is reduced, it is optimal to reduce the level of coordination, and (3) marginal productive output is a diminishing function of team size. The sensitivity of the analytical model with respect to its assumptions is studied by constructing a set of simulation experiments where these assumptions are relaxed. The results of these experiments provide support in establishing the robustness of the analytical model.
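The analytical model itself is not given in the abstract; the toy sketch below only illustrates the third result, assuming (as a stand-in of this sketch, not the authors' formulation) that coordination overhead grows with the number of pairwise communication paths, so marginal productive output diminishes with team size.

```python
# Toy illustration: nominal output grows linearly with team size, but
# coordination overhead grows with pairwise communication paths, so the
# marginal gain from each added developer shrinks. Parameters are hypothetical.
def productive_output(team_size: int,
                      unit_output: float = 1.0,
                      coordination_cost: float = 0.05) -> float:
    pairs = team_size * (team_size - 1) / 2
    return unit_output * team_size - coordination_cost * pairs

prev = 0.0
for n in range(1, 11):
    out = productive_output(n)
    print(f"team of {n}: output {out:.2f}, marginal gain {out - prev:.2f}")
    prev = out
```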

Journal ArticleDOI
TL;DR: This paper develops a perspective on modeling team processes by drawing on concepts from team theory and the information-processing and organizational paradigms, and indicates that there are complex relationships between information structure and team performance.
Abstract: This paper develops a perspective on modeling team processes by drawing on concepts from team theory and the information-processing and organizational paradigms. In such a perspective, humans and their interactions in a team are modeled as objects in a computerized environment. The behavior of the objects is specified in terms of executable programs. A simulation testbed is described. Various information structures for team decision making in an example financial domain are examined. Questions regarding the relationship between information structure (who knows what, when, and how the information is used) and team performance are studied for the example. Thus this study can be seen as a step in the translation of behavioral and normative viewpoints of team decision making into a computational framework. The results indicate that there are complex relationships between information structure and team performance. The conventional wisdom relating improved performance to more information is not always true. The experiments demonstrate several situations of team interaction where more information can lead to dysfunctional effects.

Journal ArticleDOI
TL;DR: An explicit algorithm (ID3ecp) is developed that uses a clean training set and an explicit measure of the input noise level; it is compared to a traditional implicit algorithm, ID3p (the ID3 algorithm with the pessimistic pruning procedure), and the implicit algorithm is shown analytically to have the same expected partitioning behavior as the explicit algorithm.
Abstract: Inductive expert systems typically operate with imperfect or noisy input attributes. We study design differences in inductive expert systems arising from implicit versus explicit handling of input noise. Most previous approaches use an implicit approach wherein inductive expert systems are constructed using input data of quality comparable to problems the system will be called upon to solve. We develop an explicit algorithm (ID3ecp) that uses a clean (without input errors) training set and an explicit measure of the input noise level and compare it to a traditional implicit algorithm, ID3p (the ID3 algorithm with the pessimistic pruning procedure). The novel feature of the explicit algorithm is that it injects noise in a controlled rather than random manner in order to reduce the performance variance due to noise. We show analytically that the implicit algorithm has the same expected partitioning behavior as the explicit algorithm. In contrast, however, the partitioning behavior of the explicit algorithm ...
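For illustration, the sketch below shows the ID3 information-gain splitting criterion on toy data together with a controlled (deterministic) attribute-noise injection. It is a simplified stand-in for the idea described above, not the ID3ecp or ID3p implementations.

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(attribute_values, labels):
    """ID3 splitting criterion: entropy reduction from splitting on an attribute."""
    n = len(labels)
    remainder = 0.0
    for v in set(attribute_values):
        subset = [l for a, l in zip(attribute_values, labels) if a == v]
        remainder += len(subset) / n * entropy(subset)
    return entropy(labels) - remainder

def inject_controlled_noise(values, noise_level, domain):
    """Corrupt exactly round(noise_level * n) evenly spaced values
    (controlled rather than random injection, as a sketch of the idea)."""
    n = len(values)
    k = round(noise_level * n)
    noisy = list(values)
    step = n / k if k else 0
    for i in range(k):
        idx = int(i * step)
        noisy[idx] = next(d for d in domain if d != noisy[idx])  # deterministic flip
    return noisy

attribute = ["hi", "hi", "lo", "lo", "hi", "lo", "hi", "lo"]
labels    = ["y",  "y",  "n",  "n",  "y",  "n",  "y",  "y"]
print("gain on clean attribute:", round(information_gain(attribute, labels), 3))

noisy_attr = inject_controlled_noise(attribute, 0.25, domain=["hi", "lo"])
print("gain with 25% controlled noise:", round(information_gain(noisy_attr, labels), 3))
```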

Journal ArticleDOI
TL;DR: The mechanism design approach is shown to be robust with respect to uncertainty on the part of the central management about the degree of incentive conflict with the IS manager, and to be at least as good as a profit center as well as outperforming any other centralized method of control.
Abstract: The control of an information systems (IS) department is studied when its manager has private information about the department's cost and has objectives which may differ from those of the organization. The computing resource is represented by a queueing model, and it is assumed there is no access to external information processing markets by either users or the IS department. A mechanism design approach is used. We derive conditions that the optimal mechanism must satisfy; the first-order conditions of the full-information problem generalize in a clear way with the virtual marginal cost replacing the full-information marginal capacity cost. The consequences of the information asymmetry include reduced capacity, arrival rate and utilization rate, and higher prices and mean waiting time compared to the full-information solution. Thus the organization suffers losses due not only to the IS manager's informational rent, but also to the opportunity cost of jobs not served. The revelation principle guarantees that the resulting mechanism is at least as good as a profit center, as well as outperforming any other centralized method of control. The mechanism design approach is also shown to be robust with respect to uncertainty on the part of the central management about the degree of incentive conflict with the IS manager. An example and numerical results give some feeling for the magnitudes of the effects, and managerial implications are also discussed. The paper also serves to illustrate the application of mechanism design to an IS problem; we briefly discuss other promising IS applications of this important methodology.
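The abstract does not specify the queueing discipline; assuming an M/M/1 queue purely for illustration, the sketch below shows how lower capacity and arrival rate (as under information asymmetry) can coincide with lower utilization and a higher mean waiting time. The numbers are hypothetical and this is not the paper's mechanism-design solution.

```python
# M/M/1 sketch of the capacity/waiting-time effect described in the abstract.
def mm1_metrics(arrival_rate: float, service_rate: float):
    assert arrival_rate < service_rate, "queue must be stable"
    utilization = arrival_rate / service_rate
    mean_time_in_system = 1.0 / (service_rate - arrival_rate)  # standard M/M/1 result
    return utilization, mean_time_in_system

# Hypothetical scenarios: less capacity and fewer admitted jobs under asymmetry.
for label, lam, mu in [("full information", 8.0, 10.0),
                       ("asymmetric information", 5.5, 7.0)]:
    rho, W = mm1_metrics(lam, mu)
    print(f"{label}: utilization {rho:.2f}, mean time in system {W:.2f}")
```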