
Showing papers in "Information Systems Research in 1993"


Journal ArticleDOI
TL;DR: Empirical evidence indicates that, on average, IT investments are zero net present value (NPV) investments; they are worth as much as they cost. Innovative IT investments, however, increase the value of the firm.
Abstract: Determining whether investments in information technology (IT) have an impact on firm performance has been and continues to be a major problem for information systems researchers and practitioners. Financial theory suggests that managers should make investment decisions that maximize the value of the firm. Using event-study methodology, we provide empirical evidence on the effect of announcements of IT investments on the market value of the firm for a sample of 97 IT investments from the finance and manufacturing industries from 1981 to 1988. Over the announcement period, we find no excess returns for either the full sample or for any one of the industry subsamples. However, cross-sectional analysis reveals that the market reacts differently to announcements of innovative IT investments than to follow-up, or noninnovative, investments in IT. Innovative IT investments increase firm value, while noninnovative investments do not. Furthermore, the market's reaction to announcements of innovative and noninnovative IT investments is independent of industry classification. These results indicate that, on average, IT investments are zero net present value (NPV) investments; they are worth as much as they cost. Innovative IT investments, however, increase the value of the firm.

676 citations
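The event-study logic summarized above follows a standard pattern: fit a market model over a pre-announcement estimation window, then measure abnormal (excess) returns over the announcement window. A minimal sketch of that pattern, using made-up return series rather than the authors' sample or exact procedure:

```python
# Illustrative event-study sketch (hypothetical numbers, not the paper's data).
# Market model: R_it = alpha + beta * R_mt + e_it.

def ols_alpha_beta(stock, market):
    """Estimate market-model alpha and beta by ordinary least squares."""
    n = len(market)
    mm = sum(market) / n
    ms = sum(stock) / n
    cov = sum((m - mm) * (s - ms) for m, s in zip(market, stock)) / n
    var = sum((m - mm) ** 2 for m in market) / n
    beta = cov / var
    alpha = ms - beta * mm
    return alpha, beta

# Estimation window: daily returns before the announcement (made up).
market_est = [0.01, -0.02, 0.005, 0.015, -0.01, 0.02]
stock_est = [0.012, -0.018, 0.004, 0.02, -0.008, 0.024]
alpha, beta = ols_alpha_beta(stock_est, market_est)

# Announcement window: abnormal return = actual - market-model prediction.
market_evt = [0.005, -0.003]
stock_evt = [0.03, 0.001]
abnormal = [s - (alpha + beta * m) for s, m in zip(stock_evt, market_evt)]
car = sum(abnormal)  # cumulative abnormal return over the announcement window
print(round(car, 4))
```

A finding of "no excess returns on average" corresponds to CARs that are not significantly different from zero across the sample.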


Journal ArticleDOI
TL;DR: The use of computer-based teaching methods requiring hands-on student use appears to offer an advantage over traditional methods and over computer-based methods not requiring hands-on student use in providing a forum for exploratory analysis during class and for acquiring technical procedural knowledge.
Abstract: Information technology is slowly becoming a part of educational classrooms and corporate training facilities. The current study examines the use and outcomes of computer-based instructional technology in the context of graduate business education. Case study data are gathered to explore how computer technology is used in the university classroom, and how computer-based teaching methods differ from traditional teaching methods in terms of class interaction and in-class learning. The study found that there are many potential computer-based teaching methods and that the methods can have different outcomes. The use of computer-based teaching methods requiring hands-on student use appears to offer an advantage over traditional methods and over computer-based methods not requiring hands-on student use in providing a forum for exploratory analysis during class and for acquiring technical procedural knowledge. A model of in-class learning is developed for future research.

311 citations


Journal ArticleDOI
TL;DR: This research examined the use of electronic messaging (EM) by ongoing management groups performing a cooperative task and proposed that FTF, being highly interactive, is appropriate for building a shared interpretive context among group members, while CMC is more appropriate for communicating within an established context.
Abstract: Management is communication intensive and, therefore, managers may derive benefits from computer-based alternatives to the traditional communication modes of face-to-face (FTF), telephone, and written memo. This research examined the use of electronic messaging (EM) by ongoing management groups performing a cooperative task. By means of an in-depth multimethod case study of the editorial group of two daily newspapers, it examined the fit between the interactivity of the chosen communication mode (FTF vs. EM) and the mode of discourse it was used for (alternation vs. interaction/discussion). Two propositions were derived from this exploratory study. The first proposes that FTF, being highly interactive, is appropriate for building a shared interpretive context among group members, while computer-mediated communication (CMC), being less interactive, is more appropriate for communicating within an established context. Groups exhibiting effective communication will use FTF primarily for interactive discourse and EM for discourse consisting primarily of alternating adjacency pairs. The second proposes that to the extent that the appropriate communication modes are chosen, communication will be more effective.

306 citations


Journal ArticleDOI
TL;DR: An analysis of the manuscripts submitted to the journal Information Systems Research (ISR) during its start-up years, 1987 through 1992, in an effort to provide a foundation for examining the performance of the journal, and to open a window onto the information systems (IS) field during that period.
Abstract: The flow of manuscripts through the editorial offices of an academic journal can provide valuable information both about the performance of the journal as an instrument of its field and about the structure and evolution of the field itself. We undertook an analysis of the manuscripts submitted to the journal Information Systems Research (ISR) during its start-up years, 1987 through 1992, in an effort to provide a foundation for examining the performance of the journal, and to open a window onto the information systems (IS) field during that period. We identified the primary research question for each of 397 submissions to ISR, and then categorized the research questions using an iterative classification procedure. Ambiguities in classification were exploited to identify relationships among the categories, and some overarching themes were exposed in order to reveal levels of structure in the journal's submissions stream. We also examined the distribution of submissions across categories and over the years of the study period, and compared the structures of the submissions stream and the publication stream. We present the results with the goal of broadening the perspectives that individual members of the IS research community have of ISR and to help fuel community discourse about the nature and proper direction of the field. We provide some guidelines to assist readers in this interpretive task, and offer some observations and speculations to help launch the discussion.

191 citations


Journal ArticleDOI
TL;DR: The results confirm that after controlling for other factors such as organizational size, experience with computer technology, current investment in computer technology, procurement practices, and the task environment of the organization, the sector an organization operates within has a major differential effect on adoption of microcomputer technology.
Abstract: Microcomputer and work-station technology is the latest wave in computing technology to influence day-to-day operations in business and government organizations. Does sector affect adoption of this new information technology? If so, how? We examined this proposition using data from a large comparative national survey of data processing organizations. The results confirm that after controlling for other factors such as organizational size, experience with computer technology, current investment in computer technology, procurement practices, and the task environment of the organization, the sector an organization operates within has a major differential effect on adoption of microcomputer technology. Public organizations have more microcomputers per employee, a result that is potentially due to a more information-intensive task environment and the potential use of microcomputer technology as a side payment in lieu of salary. The latter factor derives from the lower wage rates faced by public employees.

119 citations


Journal ArticleDOI
TL;DR: Conclusions are drawn concerning the value of taxonomy in studying information systems, its use in suggesting possible research directions, and the desirability of rationalizing research efforts within the IS discipline.
Abstract: Seventeen major types of information systems are identified and defined by vectors of their attributes and functions. These systems are then classified by numerical methods. The quantitative analysis is interpreted in terms of the development history of information system types. Two major findings are that the numerical classification autonomously follows the chronological appearance of system types and that, along the time line, systems have followed two major paths of development; these have been termed the applied artificial intelligence path and the human interface path. The development of new types of systems is considered within the framework of a theory of technological evolution. It is shown that newer types of systems result from gradual accretion of new technologies on one hand, and loss of older ones on the other. Conclusions are drawn concerning the value of taxonomy in studying information systems, in suggesting possible research directions, and the desirability of rationalizing research efforts within the IS discipline.

111 citations
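The abstract describes classifying system types, defined by attribute vectors, with numerical methods. A toy sketch of one such method, single-linkage agglomerative clustering over binary attribute vectors (the system names, attributes, and vectors below are illustrative inventions, not the paper's data or its seventeen types):

```python
# Hypothetical sketch of numerical classification: system types described by
# binary attribute vectors, grouped by single-linkage agglomerative clustering.
# Names, attributes, and vectors are illustrative, not the paper's data.

def hamming(a, b):
    """Number of attribute positions on which two vectors disagree."""
    return sum(x != y for x, y in zip(a, b))

systems = {
    "TPS": (1, 0, 0, 0),  # transaction processing
    "MIS": (1, 1, 0, 0),  # management reporting
    "DSS": (0, 1, 1, 0),  # interactive modeling
    "ES":  (0, 0, 1, 1),  # rule-based inference
}

def merge_closest(clusters):
    """Merge the two clusters with the smallest pairwise Hamming distance."""
    best = None
    for i, (na, ca) in enumerate(clusters):
        for nb, cb in clusters[i + 1:]:
            d = min(hamming(x, y) for x in ca for y in cb)
            if best is None or d < best[0]:
                best = (d, (na, ca), (nb, cb))
    d, a, b = best
    merged = (a[0] + "+" + b[0], a[1] + b[1])
    return [c for c in clusters if c not in (a, b)] + [merged], d

clusters = [(name, [vec]) for name, vec in systems.items()]
while len(clusters) > 1:
    clusters, d = merge_closest(clusters)
    print([name for name, _ in clusters], "merged at distance", d)
```

The order in which clusters merge yields a dendrogram; the paper's observation that the numerical classification tracks chronological appearance would correspond to merge order mirroring the historical sequence of system types.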


Journal ArticleDOI
TL;DR: How information technology can be designed to mediate feedback communication and deliver feedback that promotes feedback seeking is explained and the effects of information technology and the perceived mood of the feedback giver on the behavior of feedback seekers are examined.
Abstract: A major tenet in organizational behavior literature is that feedback improves performance. If feedback is thought to improve performance, then individuals should actively seek feedback in their work. Yet, surprisingly, individuals seldom seek feedback, perhaps because of the face-loss costs of obtaining feedback face-to-face. Furthermore, in cases where the giver is perceived to be in a bad mood, individuals may be even more reluctant to seek feedback if they believe seeking feedback risks the giver's wrath and a negative evaluation. In this paper, we explain how information technology can be designed to mediate feedback communication and deliver feedback that promotes feedback seeking. In a laboratory experiment, the effects of information technology and the perceived mood of the feedback giver on the behavior of feedback seekers are examined. The results showed that individuals in both the computer-mediated feedback environment and the computer-generated feedback environment sought feedback more frequently than individuals in the face-to-face feedback environment. In addition, individuals sought feedback more frequently from a giver who was perceived to be in a good mood than from a giver who was perceived to be in a bad mood.

88 citations


Journal ArticleDOI
TL;DR: An induction algorithm that seeks to develop inductive expert systems that maximize value is presented, and results from an extensive set of experiments indicate that the algorithm will result in more valuable systems than the ID3 algorithm and the ID3 algorithm with pessimistic pruning.
Abstract: There is a growing interest in the use of induction to develop a special class of expert systems known as inductive expert systems. Existing approaches to develop inductive expert systems do not attempt to maximize system value and may therefore be of limited use to firms. We present an induction algorithm that seeks to develop inductive expert systems that maximize value. The task of developing an inductive expert system is looked upon as one of developing an optimal sequential information acquisition strategy. Information is acquired to reduce uncertainty only if the benefits gained from acquiring the information exceed its cost. Existing approaches ignore the costs and benefits of acquiring information. We compare the systems developed by our algorithm with those developed by the popular ID3 algorithm. In addition, we present results from an extensive set of experiments that indicate that our algorithm will result in more valuable systems than the ID3 algorithm and the ID3 algorithm with pessimistic pruning.

48 citations
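The contrast the abstract draws can be made concrete: ID3 chooses attributes by information gain alone, whereas a value-oriented criterion acquires an attribute only if its expected reduction in misclassification cost exceeds the cost of acquiring it. The sketch below illustrates that contrast on a toy dataset; the data, costs, and criterion are hypothetical stand-ins, not the authors' algorithm:

```python
# Illustrative contrast: ID3's entropy-based information gain vs. a simple
# value-based split criterion that nets attribute cost against the expected
# misclassification-cost saving. Toy data; not the paper's algorithm.
import math

def entropy(labels):
    n = len(labels)
    return -sum((c / n) * math.log2(c / n)
                for c in (labels.count(v) for v in set(labels)) if c)

def info_gain(rows, attr, labels):
    """ID3-style information gain from splitting on attribute index `attr`."""
    n = len(rows)
    gain = entropy(labels)
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        gain -= (len(sub) / n) * entropy(sub)
    return gain

def expected_net_value(rows, attr, labels, attr_cost, error_cost):
    """Value-based criterion: expected misclassification-cost saving from
    acquiring the attribute, minus the cost of acquiring it."""
    n = len(rows)
    def misclass(sub):  # errors when predicting the majority label
        return len(sub) - max(sub.count(v) for v in set(sub))
    split_errors = 0
    for v in set(r[attr] for r in rows):
        sub = [l for r, l in zip(rows, labels) if r[attr] == v]
        split_errors += misclass(sub)
    saving = (misclass(labels) - split_errors) / n * error_cost
    return saving - attr_cost

rows = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = ["no", "no", "yes", "yes"]
# Attribute 0 separates the classes; attribute 1 is uninformative.
print(info_gain(rows, 0, labels), info_gain(rows, 1, labels))
print(expected_net_value(rows, 0, labels, attr_cost=1.0, error_cost=10.0))
```

Under a value-based criterion, even a highly informative attribute is skipped when its acquisition cost exceeds the expected cost saving, which is the behavior the abstract says ID3 ignores.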


Journal ArticleDOI
TL;DR: The analysis indicates that modeling is a synthetic process that relates specific features found in the problem to its mathematical model, and that users of MODFORM build models comparable to those formulated by experts.
Abstract: The value of mathematical modeling and analysis in the decision support context is well recognized. However, the complex and evolutionary nature of the modeling process has limited its widespread use. In this paper, we describe our work on knowledge-based tools which support the formulation and revision of mathematical programming models. In contrast to previous work on this topic, we base our work on an in-depth empirical investigation of experienced modelers and present three results: (a) a model of the modeling process of experienced modelers, derived using concurrent verbal protocol analysis. Our analysis indicates that modeling is a synthetic process that relates specific features found in the problem to its mathematical model. These relationships, which are seldom articulated by modelers, are also used to revise models. (b) An implementation of a modeling support system called MODFORM based on this observationally derived model, and (c) the results of a preliminary experiment which indicates that users of MODFORM build models comparable to those formulated by experts. We use the formulation of mathematical programming models of production planning problems illustratively throughout the paper.

29 citations


Journal ArticleDOI
TL;DR: A modeling environment in which decision-trees are cast as attributed-graphs, and reframing operations on trees are implemented as graph-grammar productions is presented, to illustrate how a general-purpose modeling environment can be used to produce a specialized decision support system for problems that have a strong graphical orientation.
Abstract: One fundamental requirement in the expected utility model is that the preferences of rational persons should be independent of problem description. Yet an extensive body of research in descriptive decision theory indicates precisely the opposite: when the same problem is cast in two different but normatively equivalent "frames," people tend to change their preferences in a systematic and predictable way. In particular, alternative frames of the same decision-tree are likely to invoke different sets of heuristics, biases and risk-attitudes in the user's mind. The paper presents a modeling environment in which decision-trees are cast as attributed-graphs, and reframing operations on trees are implemented as graph-grammar productions. In addition to the basic functions of creating and analyzing decision-trees, the environment offers a natural way to define a host of "debiasing mechanisms" using graphical programming techniques. Some of these mechanisms have appeared in the decision theory literature, whereas others were directly inspired by the novel use of graph-grammars in modeling decision problems. The modeling environment was constructed using NETWORKS, a new model management system based on a graph-grammar formalism. Thus, a second objective of the paper is to illustrate how a general-purpose modeling environment can be used to produce, with relatively little effort, a specialized decision support system for problems that have a strong graphical orientation.

10 citations


Journal ArticleDOI
TL;DR: It is argued that the task of identifying discrepancies between independent bodies of knowledge is an inevitable part of any large knowledge acquisition effort, and the heuristics developed in this work are applicable even when knowledge acquisition is not done by reconciling two complete knowledge bases.
Abstract: One of the major unsolved problems in knowledge acquisition is reconciling knowledge originating from different sources. This paper proposes a technique for reconciling knowledge in two independent knowledge bases, describes a working program built to implement that technique, and discusses an exploratory study for validating the technique. The technique is based on the use of heuristics for identifying and resolving discrepancies between the knowledge bases. Each heuristic developed provides detection and resolution procedures for a distinct variety of discrepancy in the knowledge bases. Sample discrepancies include using synonyms for the same term, conflicting rules, and extra reasoning steps. Discrepancies are detected and resolved through the use of circumstantial evidence available from the knowledge bases themselves and by asking sharply focussed questions to the experts responsible for the knowledge bases. The technique was tested on two independently developed knowledge bases designed to aid novice statisticians in diagnosing problems in linear regression models. The heuristics located a significant number of the discrepancies between the knowledge bases and assisted the experts in creating a consensus knowledge base for diagnosing multicollinearity problems. We argue that the task of identifying discrepancies between independent bodies of knowledge is an inevitable part of any large knowledge acquisition effort. Hence the heuristics developed in this work are applicable even when knowledge acquisition is not done by reconciling two complete knowledge bases. We also suggest that our approach can be extended to other knowledge representations such as frames and database schemas, and speculate about its potential application to other domains involving the reconciliation of knowledge, such as requirements determination, negotiation, and design.

Journal ArticleDOI
TL;DR: A decision-theoretic model is presented that enables the characterization of feasible, efficient, and optimal ICD strategies for a dual-processor DEPS system, and useful heuristic procedures for constructing high-quality efficient ICD strategies are developed.
Abstract: In this paper, we consider the problem of generating effective information-gathering, communication, and decision-making (ICD) strategies for a distributed expert problem-solving (DEPS) system. We focus on the special case of a dual-processor DEPS system and present a decision-theoretic model that enables the characterization of feasible, efficient, and optimal ICD strategies. In view of the tremendous amount of computing needed to generate optimal strategies for problems of practical size, we develop useful heuristic procedures for constructing high-quality efficient ICD strategies. We illustrate the use of the model and the solution procedure through an example.

Journal ArticleDOI
TL;DR: In this article, the authors extend the comparative analysis of the subjective Bayesian (SB) and certainty factors (CF) models to the field of expert systems, where subjective degrees of belief and different elicitation procedures are likely to complicate their analytic similarity and impact their actual validity.
Abstract: Rule-based expert systems deal with inexact reasoning through a variety of quasi-probabilistic methods, including the widely used subjective Bayesian (SB) and certainty factors (CF) models, versions of which are implemented in many commercial expert system shells. Previous research established that under certain independence assumptions, SB and CF are ordinally compatible: when used to compute the beliefs in several hypotheses of interest under the same set of circumstances, the hypothesis that will attain the highest posterior probability will also attain the highest certainty factor, etc. This monotonicity is important in the context of expert systems, where most inference-engines and explanation facilities are designed to utilize relative scales of posterior beliefs, making little or no use of their absolute magnitudes. This research extends the comparative analysis of SB and CF to the field, where subjective degrees of belief and different elicitation procedures are likely to complicate their analytic similarity and impact their actual validity. In particular, we describe an experiment in which CF was shown to dominate SB in terms of several validity criteria, a finding which we attribute to parsimony and robustness considerations. The paper is relevant to (i) practitioners who use belief languages in rule-based systems, and (ii) researchers who seek a methodology to investigate the validity of other belief languages in controlled experiments.
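The two belief languages being compared have standard textbook update rules: MYCIN-style parallel combination for certainty factors, and odds-likelihood updating for the subjective Bayesian model. A sketch of both (generic textbook forms, not the paper's experimental setup):

```python
# Standard textbook forms of the two belief-update rules the abstract compares;
# this is a generic sketch, not the paper's experimental setup.

def combine_cf(cf1, cf2):
    """MYCIN-style parallel combination of two certainty factors in [-1, 1]."""
    if cf1 >= 0 and cf2 >= 0:
        return cf1 + cf2 * (1 - cf1)
    if cf1 <= 0 and cf2 <= 0:
        return cf1 + cf2 * (1 + cf1)
    return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

def bayes_update(prior, likelihood_ratio):
    """Subjective Bayesian update via odds: posterior odds = LR * prior odds."""
    odds = prior / (1 - prior) * likelihood_ratio
    return odds / (1 + odds)

# Two pieces of confirming evidence for the same hypothesis:
print(combine_cf(0.6, 0.5))                        # CF grows toward 1
print(bayes_update(bayes_update(0.3, 4.0), 4.0))   # probability grows toward 1
```

Both rules are monotone in confirming evidence, which is the ordinal compatibility the abstract refers to; the paper's field finding concerns which rule holds up better once beliefs are elicited from human experts.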