
Showing papers in "Information Systems Research in 2000"


Journal Article•DOI•
TL;DR: This work presents and tests an anchoring and adjustment-based theoretical model of the determinants of system-specific perceived ease of use, and proposes control, intrinsic motivation, and emotion as anchors that determine early perceptions about the ease of use of a new system.
Abstract: Much previous research has established that perceived ease of use is an important factor influencing user acceptance and usage behavior of information technologies. However, very little research has been conducted to understand how that perception forms and changes over time. The current work presents and tests an anchoring and adjustment-based theoretical model of the determinants of system-specific perceived ease of use. The model proposes control (internal and external--conceptualized as computer self-efficacy and facilitating conditions, respectively), intrinsic motivation (conceptualized as computer playfulness), and emotion (conceptualized as computer anxiety) as anchors that determine early perceptions about the ease of use of a new system. With increasing experience, it is expected that system-specific perceived ease of use, while still anchored to the general beliefs regarding computers and computer use, will adjust to reflect objective usability, perceptions of external control specific to the new system environment, and system-specific perceived enjoyment. The proposed model was tested in three different organizations among 246 employees using three measurements taken over a three-month period. The proposed model was strongly supported at all points of measurement, and explained up to 60% of the variance in system-specific perceived ease of use--roughly double the variance explained by prior models. Important theoretical and practical implications of these findings are discussed.
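
As a rough illustration of how a determinants model like this is typically estimated, the sketch below regresses perceived ease of use on the anchors and adjustments named in the abstract. The column names, data file, and linear specification are assumptions for illustration, not the authors' instrument or analysis.

```python
# Illustrative only: PEOU regressed on the anchors (self-efficacy, facilitating
# conditions, playfulness, anxiety) and the experience-based adjustments
# (perceived enjoyment, objective usability) described in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("peou_wave3.csv")  # hypothetical file: one row per respondent

model = smf.ols(
    "peou ~ self_efficacy + facilitating_conditions + playfulness"
    " + anxiety + enjoyment + objective_usability",
    data=df,
).fit()

print(model.rsquared)   # the paper reports up to 60% variance explained
print(model.summary())
```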

5,807 citations


Journal Article•DOI•
TL;DR: It is described how two broad types of computer self-efficacy beliefs are constructed across different computing tasks by suggesting that initial general CSE beliefs will strongly predict subsequent specific CSE beliefs, and the emergent patterns of the hypothesized relationships are examined.
Abstract: The concept of computer self-efficacy (CSE) recently has been proposed as important to the study of individual behavior toward information technology. This paper extends current understanding about the concept of self-efficacy in the context of computer software. We describe how two broad types of computer self-efficacy beliefs, general self-efficacy and task-specific self-efficacy, are constructed across different computing tasks by suggesting that initial general CSE beliefs will strongly predict subsequent specific CSE beliefs. The theorized causal relationships illustrate the malleability and development of CSE beliefs over time, within a training environment where individuals are progressively provided with greater opportunity for hands-on experience and practice with different software. Consistent with the findings of prior research, judgments of self-efficacy then serve as key antecedents of the perceived cognitive effort (ease of use) associated with technology usage. Further, we theorize that self-efficacy judgments in the task domain of computing are strongly influenced by the extent to which individuals believe that they are personally innovative with respect to information technology. Panel data were collected using a longitudinal research design within a training context where 186 subjects were taught two software packages in a sequential manner over a 14-week period. The emergent patterns of the hypothesized relationships are examined using structural equation modeling techniques. Results largely support the relationships posited.

810 citations


Journal Article•DOI•
TL;DR: The platform logic is articulated as a conceptual framework both for viewing the organizing of IT management activities and for framing important questions for future research.
Abstract: Prior research has generated considerable knowledge about the design of effective IT organizational architectures. Today, however, increasing signs have accumulated that this wisdom might be inadequate in shaping appropriate insights for contemporary practice. This essay seeks to direct research attention toward the following question: How should firms organize their IT activities in order to manage the imperatives of the business and technological environments in the digital economy? We articulate the platform logic as a conceptual framework both for viewing the organizing of IT management activities and for framing important questions for future research. In articulating this logic, we aim to shift thinking away from the traditional focus on governance structures (i.e., choice of centralized, decentralized, or federal forms) and sourcing structures (i.e., insourcing, outsourcing) and toward more complex structures that are reflective of contemporary practice. These structures are designed around important IT capabilities and network architectures.

423 citations


Journal Article•DOI•
TL;DR: An empirically derived model of the CSE construct proposed by Marakas, Yi, and Johnson (1998) is offered to highlight potential theoretical, methodological, and measurement issues which may have contributed to or exacerbated the unexpected results obtained in the Compeau and Higgins study.
Abstract: Recent empirical work by Compeau and Higgins (1995) investigated the role of behavioral modeling training in the development of computer skills. Their efforts have provided insight into our understanding of the role of computer self-efficacy (CSE) and behavioral modeling (BM) techniques with regard to training effectiveness. Contrary to their expectations, however, several of the hypothesized relationships were not supported, especially those relating to outcome expectancy. In this paper, an empirically derived model of the CSE construct proposed by Marakas, Yi, and Johnson (1998) is offered to highlight potential theoretical, methodological, and measurement issues which may have contributed to or exacerbated the unexpected results obtained in the Compeau and Higgins study. The empirical work contained herein is intended to both replicate and extend the work of Compeau and Higgins and to assist in resolving several key issues left unsettled by their seminal work in this area.

342 citations


Journal Article•DOI•
TL;DR: The results support the network externalities hypothesis that banks in markets that can generate a larger effective network size and a higher level of externalities tend to adopt early, while the size of a bank's own branch network reduces the probability of early adoption.
Abstract: Recent theoretical work suggests that network externalities are a determinant of network adoption. However, few empirical studies have reported the impact of network externalities on the adoption of networks. As a result, little is known about the extent to which network externalities may influence network adoption and diffusion. Using electronic banking as a context and an econometric technique called hazard modeling, this research examines empirically the impact of network externalities and other influences that combine to determine network membership. The results support the network externalities hypothesis. We find that banks in markets that can generate a larger effective network size and a higher level of externalities tend to adopt early, while the size of a bank's own branch network (a proxy for the opportunity cost of adoption) decreases the probability of early adoption.
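
The hazard-modeling approach the abstract mentions can be sketched with a proportional-hazards fit; the library choice (lifelines), column names, and data are assumptions for illustration, not the authors' specification.

```python
# Illustrative Cox proportional-hazards sketch: time until a bank adopts
# electronic banking, as a function of effective network size and the bank's
# own branch network (a proxy for the opportunity cost of adoption).
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("bank_adoption.csv")
# hypothetical columns: years_to_adoption, adopted (1/0),
#                       effective_network_size, own_branch_network

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_adoption", event_col="adopted")
cph.print_summary()
# A positive coefficient on effective_network_size and a negative one on
# own_branch_network would be consistent with the reported findings.
```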

323 citations


Journal Article•DOI•
TL;DR: Analyzing the impact of information technology in a healthcare setting using a longitudinal sample of hospital data from 1976 to 1994 provides evidence that IT contributes positively to the production of services in the healthcare industry.
Abstract: This research paper analyzes the impact of information technology (IT) in a healthcare setting using a longitudinal sample of hospital data from 1976 to 1994. We classify production inputs into labor and capital categories. Capital is classified into three components--medical IT capital, medical capital, and IT capital--and labor is classified into two components, medical labor and IT labor. Results provide evidence that IT contributes positively to the production of services in the healthcare industry.
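
A log-linear (Cobb-Douglas-style) regression is one common way to estimate a production function with these five inputs; the sketch below uses hypothetical variable names and is not the authors' specification.

```python
# Illustrative production-function estimation: ln(output) on the logs of the
# five inputs named in the abstract. Positive, significant coefficients on the
# IT-related inputs would indicate a positive contribution of IT.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("hospital_panel.csv")  # hypothetical hospital panel, 1976-1994
inputs = ["medical_it_capital", "medical_capital", "it_capital",
          "medical_labor", "it_labor"]
for col in ["output"] + inputs:
    df["ln_" + col] = np.log(df[col])

formula = "ln_output ~ " + " + ".join("ln_" + c for c in inputs)
print(smf.ols(formula, data=df).fit().summary())
```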

277 citations


Journal Article•DOI•
TL;DR: In this paper, a cross-sectional survey of 80 specialty retailers found that adoption of the Quick Response (QR) program at a minimal level was associated with higher performance, although there was no performance impact due to higher levels of QR use.
Abstract: The Quick Response (QR) program is a hierarchical suite of information technologies (IT) and applications designed to improve the performance of retailers. Consultants advise retailers to adopt the program wholesale, implying that more and higher levels of technology are better than less technology and lower levels. Academicians, on the other hand, argue that good technology is "appropriate" technology. That is, firms should adopt only those technologies that suit the specific strategic directions pursued by the firm. Who is right? Which approach to investing in IT yields better performance results? Surprisingly, this cross-sectional survey of 80 specialty retailers found more support for the practitioners' claims than for the academicians'. Adoption of the QR program at a minimal level was associated with higher performance, although there was no performance impact due to higher levels of QR use. Firms did appear to match their IT usage to their business strategies, but there was no linkage between strategic alignment and firm performance, and there was surprisingly little variation in business or IT strategy. In short, the findings of our study suggest that both practitioners and academicians need to refine their theories and advice about what makes IT investments pay off.

263 citations


Journal Article•DOI•
TL;DR: A conceptual model of the assessment of user competence is presented to organize and clarify the diverse literature regarding what user competence means and the problems of assessment and to discuss how user competence can be incorporated into the Task-Technology Fit model.
Abstract: Organizations today face great pressure to maximize the benefits from their investments in information technology (IT). They are challenged not just to use IT, but to use it as effectively as possible. Understanding how to assess the competence of users is critical in maximizing the effectiveness of IT use. Yet the user competence construct is largely absent from prominent technology acceptance and IT models, poorly conceptualized, and inconsistently measured. We begin by presenting a conceptual model of the assessment of user competence to organize and clarify the diverse literature regarding what user competence means and the problems of assessment. As an illustrative study, we then report the findings from an experiment involving 66 participants. The experiment was conducted to compare empirically two methods (paper and pencil tests versus self-report questionnaire), across two different types of software, or domains of knowledge (word processing versus spreadsheet packages), and two different conceptualizations of competence (software knowledge versus self-efficacy). The analysis shows statistical significance in all three main effects. How user competence is measured, what is measured, and what measurement context is employed: all influence the measurement outcome. Furthermore, significant interaction effects indicate that different combinations of measurement methods, conceptualization, and knowledge domains produce different results. The concept of frame of reference, and its anchoring effect on subjects' responses, explains a number of these findings. The study demonstrates the need for clarity in both defining what type of competence is being assessed and in drawing conclusions regarding competence, based upon the types of measures used. Since the results suggest that definition and measurement of the user competence construct can change the ability score being captured, the existing information system (IS) models of usage must contain the concept of an ability rating. We conclude by discussing how user competence can be incorporated into the Task-Technology Fit model, as well as additional theoretical and practical implications of our research.
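
The factorial design described above (measurement method by knowledge domain by conceptualization) lends itself to a three-way ANOVA; the sketch below is illustrative, with hypothetical column names rather than the study's data.

```python
# Illustrative 2x2x2 ANOVA on competence scores: main effects for method, domain,
# and conceptualization, plus their interactions.
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.read_csv("competence_experiment.csv")
# hypothetical columns: score, method ("paper_test"/"self_report"),
#                       domain ("word_processing"/"spreadsheet"),
#                       concept ("knowledge"/"self_efficacy")

fit = smf.ols("score ~ C(method) * C(domain) * C(concept)", data=df).fit()
print(anova_lm(fit, typ=2))  # significant main and interaction effects would mirror the findings
```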

243 citations


Journal Article•DOI•
TL;DR: This work identifies and defines important unresolved problems in each of the three major capabilities enabled by such environments: Mobile Computing, Intelligent Agents, and Net-Centric Computing and proposes research strategies to address them.
Abstract: Application-driven, technology-intensive research is critically needed to meet the challenges of globalization, interactivity, high productivity, and rapid adaptation faced by business organizations. Information systems researchers are uniquely positioned to conduct such research, combining computer science, mathematical modeling, systems thinking, management science, cognitive science, and knowledge of organizations and their functions. We present an agenda for addressing these challenges as they affect organizations in heterogeneous and distributed environments. We focus on three major capabilities enabled by such environments: Mobile Computing, Intelligent Agents, and Net-Centric Computing. We identify and define important unresolved problems in each of these areas and propose research strategies to address them.

171 citations


Journal Article•DOI•
TL;DR: It is shown how a mathematical construct called a metagraph can be used to represent workflows, so that such questions can be addressed through formal operations, leading to more effective design of organizational processes.
Abstract: Agile manufacturing, fast-response micromarketing, and the rise of the virtual organization have led managers to focus on cross-functional business processes that link various divisions and organizations. These processes may be realized as one or more workflows, each of which is an instantiation of a process under certain conditions. Because an ability to adapt processes to workflow conditions is essential for organizational responsiveness, identifying and analyzing significant workflows is an important activity for managers, organization designers, and information systems specialists. A variety of software systems have been developed to aid in the structuring and implementation of workflow systems, but they are mostly visualization tools with few analytical capabilities. For example, they do not allow their users to easily determine which information elements are needed to compute other information elements, whether certain tasks depend on other tasks, and how resource availability affects information and tasks. Analyses of this type can be performed by inspection, but this gives rise to the possibility of error, especially in large systems. In this paper, we show how a mathematical construct called a metagraph can be used to represent workflows, so that such questions can be addressed through formal operations, leading to more effective design of organizational processes.
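
To make the metagraph idea concrete, the following toy sketch treats each workflow step as an edge from a set of input elements to a set of output elements and answers one of the questions raised above: which information elements can be computed from a given starting set. This is a simplified illustration, not the paper's formal operations.

```python
# Minimal metagraph-style sketch: each edge maps an invertex (set of inputs) to an
# outvertex (set of outputs). reachable() repeatedly "fires" edges whose inputs are
# available to find everything computable from a set of source elements.
from typing import FrozenSet, List, Set, Tuple

Edge = Tuple[FrozenSet[str], FrozenSet[str]]  # (invertex, outvertex)

def reachable(edges: List[Edge], sources: Set[str]) -> Set[str]:
    available = set(sources)
    changed = True
    while changed:
        changed = False
        for invertex, outvertex in edges:
            if invertex <= available and not outvertex <= available:
                available |= outvertex
                changed = True
    return available

# Toy workflow: an order plus a credit report yield an approval; an approval plus
# inventory data yield a shipment notice.
edges = [
    (frozenset({"order", "credit_report"}), frozenset({"approval"})),
    (frozenset({"approval", "inventory"}), frozenset({"shipment_notice"})),
]
print(reachable(edges, {"order", "credit_report", "inventory"}))
```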

157 citations


Journal Article•DOI•
TL;DR: Findings show how decisional guidance that provides system explanations at breakpoints in group interaction can improve MCDM GDSS usability and support Dhaliwal and Benbasat's (1996) conjecture that system explanations can improve decisional outcomes due to improvement in user understanding of decision models.
Abstract: Intelligent user interfaces, particularly in interactive group settings, can be based on system explanations that guide model building, application, and interpretation. Here we extend Silver's (1990, 1991) conceptualization of decisional guidance and the theory of breakpoints in group interaction to operationalize feedback and feedforward for a complex multicriteria modeling system operating within a group decision support system context. We outline a design approach for providing decisional guidance in GDSS and then test the feasibility of the design in a preliminary laboratory experiment. Findings show how decisional guidance that provides system explanations at breakpoints in group interaction can improve MCDM GDSS usability. Our findings support Dhaliwal and Benbasat's (1996) conjecture that system explanations can improve decisional outcomes due to improvement in user understanding of decision models. Further research on intelligent agents, particularly in interactive group settings, can build on the concepts of decisional guidance outlined in this paper.

Journal Article•DOI•
TL;DR: It is found that structure moderates the relationship between complexity, volatility, and enhancement outcomes, such that higher levels of structure are more advantageous for the more complex and more volatile applications in terms of reduced enhancement costs and errors.
Abstract: The cost of enhancing software applications to accommodate new and evolving user requirements is significant. Many enhancement cost-reduction initiatives have focused on increasing software structure in applications. However, while software structure can decrease enhancement effort by localizing data processing, increased effort is also required to comprehend structure. Thus, it is not clear whether high levels of software structure are economically efficient in all situations. In this study, we develop a model of the relationship between software structure and software enhancement costs and errors. We introduce the notion of software structure as a moderator of the relationship between software volatility, total data complexity, and software enhancement outcomes. We posit that it is efficient to more highly structure the more volatile applications, because increased familiarity with the application structure through frequent enhancement enables localization of maintenance effort. For more complex applications, software structure is more beneficial than for less complex applications because it facilitates the comprehension process where it is most needed. Given the downstream enhancement benefits of structure for more volatile and complex applications, we expect that the optimal level of structure is higher for these applications. We empirically evaluate our model using data collected on the business applications of a major mass merchandiser and a large commercial bank. We find that structure moderates the relationship between complexity, volatility, and enhancement outcomes, such that higher levels of structure are more advantageous for the more complex and more volatile applications in terms of reduced enhancement costs and errors. We also find that more structure is designed in for volatile applications and for applications with higher levels of complexity. Finally, we identify application type as a significant factor in predicting which applications are more volatile and more complex at our research sites. That is, applications with induction-based algorithms such as those that support planning, forecasting, and management decision-making activities are more complex and more volatile than applications with rule-based algorithms that support operational and transaction-processing activities. Our results indicate that high investment in software quality practices such as structured design is not economically efficient in all situations. Our findings also suggest the importance of organizational mechanisms in promoting efficient design choices that lead to reduced enhancement costs and errors.
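
The moderation argument above maps naturally onto a regression with interaction terms; the sketch below is illustrative, with assumed variable names and a linear form rather than the authors' actual model.

```python
# Illustrative moderation model: enhancement cost as a function of complexity,
# volatility, and structure, with structure-by-complexity and structure-by-volatility
# interactions carrying the moderation effect.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("enhancement_projects.csv")  # hypothetical application-level data

fit = smf.ols(
    "enhancement_cost ~ complexity + volatility + structure"
    " + structure:complexity + structure:volatility",
    data=df,
).fit()
print(fit.summary())
# Negative interaction coefficients would indicate that structure pays off most for
# the more complex and more volatile applications, as the paper reports.
```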

Journal Article•DOI•
TL;DR: The results suggest that multimedia presentations, but not text-based presentations, reduce the influence of first impression bias.
Abstract: First impression bias refers to a limitation of human information processing in which people are strongly influenced by the first piece of information that they are exposed to, and are biased in evaluating subsequent information in the direction of the initial influence. The psychology literature has portrayed first impression bias as a virtually "inherent" human bias. Drawing from multimedia literature, this study identifies several characteristics of multimedia presentations that have the potential to alleviate first impression bias. Based on this literature, a set of predictions was generated and tested through a laboratory experiment using a simulated multimedia intranet. Half of the 80 subjects were provided with a biased cue. Subjects were randomly assigned to four groups: (1) text with first impression bias cue, (2) multimedia with first impression bias cue, (3) text without biased cue, and (4) multimedia without biased cue. The experimental task involved conducting a five-year performance appraisal of a department head. The first impression bias cue was designed to provide incomplete and unfavorable information about the department head, but the information provided subsequently was intended to be favorable toward his performance. Results show that the appraisal score of the text with biased cue group was significantly lower than the text only (without biased cue) group. On the other hand, the appraisal score of the multimedia with biased cue group was not significantly different from the multimedia only (without biased cue) group. As a whole, the results suggest that multimedia presentations, but not text-based presentations, reduce the influence of first impression bias.

Journal Article•DOI•
TL;DR: Issues relating to qualitative research and emic versus etic approaches are presented, along with a structured, yet flexible, qualitative research interviewing technique that decreases the potential for bias on the part of the researcher.
Abstract: As more business is being conducted internationally and corporations establish themselves globally, the impact of cross-cultural aspects becomes an important research issue. The need to conduct cross-cultural research is perhaps even more important in the relatively newly emerging and quickly changing information systems (IS) field. This article presents issues relating to qualitative research, emic versus etic approaches, and describes a structured, yet flexible, qualitative research interviewing technique, which decreases the potential for bias on the part of the researcher. The grounded theory technique presented in this article is based on Kelly's Repertory Grid (RepGrid), which concentrates on "laddering," or the further elaboration of elicited constructs, to obtain detailed research participant comments about an aspect within the domain of discourse. The technique provides structure to a "one-to-one" interview. But, at the same time, RepGrids allow sufficient flexibility for the research participants to be able to express their own interpretation about a particular topic. This article includes a brief outline of a series of research projects that employed the RepGrid technique to examine similarities and differences in the way in which "excellent" systems analysts are viewed in two different cultures. Also included is a discussion of the technique's applicability for qualitative research in general and cross-cultural studies specifically. The article concludes by suggesting ways in which the RepGrid technique addresses some of the major methodological issues in cross-cultural research.

Journal Article•DOI•
TL;DR: The results of the experiment revealed that understanding a system represented by multiple diagrams involves a process of searching for related information and of developing hypotheses about the target system, and showed that these perceptual and conceptual integration processes were facilitated by incorporating visual cues and contextual information in the multiple diagrams as representation aids.
Abstract: In order to understand diagrammatic reasoning with multiple diagrams, this study proposes a theoretical framework that focuses on the cognitive processes of perceptual and conceptual integration. The perceptual integration process involves establishing interdependence between relevant system elements that have been dispersed across multiple diagrams, while the conceptual integration process involves generating and refining hypotheses about a system by combining higher-level information inferred from the diagrams. This study applies a diagrammatic reasoning framework of a single diagram to assess the usability of multiple diagrams as an integral part of a system development methodology. Our experiment evaluated the effectiveness and usability of design guidelines to aid problem solving with multiple diagrams. The results of our experiment revealed that understanding a system represented by multiple diagrams involves a process of searching for related information and of developing hypotheses about the target system. The results also showed that these perceptual and conceptual integration processes were facilitated by incorporating visual cues and contextual information in the multiple diagrams as representation aids. Visual cues indicate which elements in a diagram are related to elements in other diagrams; the contextual information indicates how the individual datum in one diagram is related to the overall hypothesis about the entire system.

Journal Article•DOI•
TL;DR: An examination of the unusual rise and fall of the PC-98 shows how victory in a standards competition can be negated by the introduction of a new architectural layer that spans two or more previously incompatible architectures.
Abstract: For more than a decade NEC dominated the Japanese PC market with its PC-98 architecture, which was incompatible both with its major Japanese rivals and the global PC standard. However, NEC was powerless to prevent the introduction of Japanese versions of Windows 3.1 and 95 that ran on its competitors' architectures as well as on the PC-98, unifying the Japanese PC market and creating a common set of application programming interfaces for all Intel-based Japanese PCs. The introduction of Windows rendered obsolete the large DOS-based software library that had provided strong positive externalities for the NEC architecture. Absent those advantages, the market share of the PC-98 standard fell from 60% to 33% in five years, and NEC finally abandoned the PC-98 in favor of the global standard. An examination of the unusual rise and fall of the PC-98 shows how victory in a standards competition can be negated by the introduction of a new architectural layer that spans two or more previously incompatible architectures.

Journal Article•DOI•
TL;DR: This work identifies conditions under which an entrant will launch a next generation product, thereby preventing the incumbent from employing a protection strategy, and shows that the competition may require the launching firm to lose money at the margin on the next generation product.
Abstract: The most difficult challenge facing a market leader is maintaining its leading position. This is especially true in information technology and telecommunications industries, where multiple product generations and rapid technological evolution continually test the ability of the incumbent to stay ahead of potential entrants. In these industries, an incumbent often protects its position by launching prematurely to retain its leadership. Entry, however, happens relatively frequently. We identify conditions under which an entrant will launch a next generation product thereby preventing the incumbent from employing a protection strategy. We define a capabilities advantage as the ability to develop and launch a next generation product at a lower cost than a competitor, and a product with a greater market response is one with greater profit flows. Using these definitions, we find that an incumbent with a capabilities advantage in one next generation product can be overtaken by an entrant with a capabilities advantage in another next generation product only if the entrant's capabilities advantage is in a disruptive technology that yields a product with a greater market response. This can occur even though both next generation products are available to both firms. We also show that the competition may require the launching firm to lose money at the margin on the next generation product.

Journal Article•DOI•
TL;DR: This work integrates the framework of organizational memory with intelligent agent technology to provide a coordination mechanism that enables the structuring of awareness events and gives information about the users' feedback control.
Abstract: In this paper we first present an empirical study of groupware use illustrating problems that users faced with restricted feedback about others' activities. Awareness can aid users in learning interdependencies, and in forming conventions to regulate system use and information-sharing. As a solution to providing awareness, we integrate the framework of organizational memory with intelligent agent technology to provide a coordination mechanism that enables the structuring of awareness events and gives information about the users' feedback control. In the proposed model, feedback control relationships are captured into a multilayered model of organizational memory and transferred to users by agents-facilitators. The approach is grounded in a system dynamics perspective on organizational learning.

Journal Article•DOI•
TL;DR: The application of production theory to the production of information services can yield useful insights from both a theoretical and managerial perspective, and it is concluded that the underlying form of the production function is the same at the level of both the firm and the economy.
Abstract: Previous research has demonstrated that the production of information services can be characterized at the aggregate economy-wide level by the Cobb-Douglas production function. However, the underlying production process at the firm level has not yet been ascertained. The objective of this paper is to determine the form of the production process for information systems services at the firm level by conducting an empirical analysis of IS budget data. The production of information services is modeled using a production function with two inputs, hardware and personnel. We estimate various econometric specifications to determine several characteristics of the provision of information services, including the allocation of the information systems budget to its two largest components--hardware and personnel--and its implications for the form of the production function. After controlling for industry sector, we find that the ratio of personnel to hardware is independent of scale, which indicates a homothetic production function. We also find that the ratio of factor shares is constant with time, consistent with the Cobb-Douglas production function.We conclude that the underlying form of the production function is the same at the level of both the firm and the economy. Our analysis demonstrates how the application of production theory to the production of information services can yield useful insights from both a theoretical and managerial perspective.
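
The two properties reported above can be checked with a simple regression on the log personnel-to-hardware ratio; the sketch below uses hypothetical budget data and column names and is only meant to illustrate the logic.

```python
# Illustrative check: regress ln(personnel/hardware) on a scale measure and a time
# trend, with industry controls. A near-zero coefficient on scale is consistent with
# homotheticity; a near-zero coefficient on time is consistent with the constant
# factor shares implied by a Cobb-Douglas technology.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("is_budgets.csv")  # hypothetical firm-level IS budget data
df["ln_ratio"] = np.log(df["personnel_spend"] / df["hardware_spend"])
df["ln_scale"] = np.log(df["total_is_budget"])

model = smf.ols("ln_ratio ~ ln_scale + year + C(industry)", data=df).fit()
print(model.summary())
```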

Journal Article•DOI•
TL;DR: The simulation results show that the priority pricing mechanism not only maximizes organizational benefits but also outperforms frequently used database scheduling techniques, such as first-come-first-served, earliest deadline first, and least slack first, on all traditional performance measures.
Abstract: We propose priority pricing as an on-line adaptive resource scheduling mechanism to manage real-time databases within organizations. These databases provide timely information for delay sensitive users. The proposed approach allows diverse users to optimize their own objectives while collectively maximizing organizational benefits. We rely on economic principles to derive priority prices by modeling the fixed-capacity real-time database environment as an economic system. Each priority is associated with a price and a delay, and the price is the premium (congestion toll resulting from negative externalities) for accessing the database. At optimality, the prices are equal to the aggregate delay cost imposed on all other users of the database. These priority prices are used to control admission and to schedule user jobs in the database system. The database monitors the arrival processes and the state of the system, and incrementally adjusts the prices to regulate the flow. Because our model ignores the operational intricacies of the real-time databases (e.g., intermediate queues at the CPU and disks, memory size, etc.) to maintain analytical tractability, we evaluate the performance of our pricing approach through simulation. We evaluate the database performance using both the traditional real-time database performance metrics (e.g., the number of jobs serviced on time, average tardiness) and the economic benefits (e.g., benefits to the organization). The simulation results, under various database workload parameters, show that our priority pricing mechanism not only maximizes organizational benefits but also outperforms frequently used database scheduling techniques, such as first-come-first-served, earliest deadline first, and least slack first, on all traditional performance measures.
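
A toy simulation conveys the core mechanism: a job is admitted only if its value covers the congestion toll, i.e., the delay cost it would impose on jobs already queued. Everything below (arrival process, parameters, the toll formula) is a simplified assumption, not the authors' model.

```python
# Toy congestion-toll admission control for a fixed-capacity server.
import random

random.seed(1)
delay_cost_rate = 0.5   # delay cost per time step, per queued job
service_prob = 0.7      # chance the server finishes a job each step (capacity < arrivals)
queue_len = 0
admitted = rejected = 0

for _ in range(10_000):
    job_value = random.uniform(0.0, 5.0)   # the arriving job's value for timely service
    # Toll approximates the externality: each queued job is pushed back by roughly one
    # expected service time (1/service_prob steps), each step costing delay_cost_rate.
    toll = queue_len * (1.0 / service_prob) * delay_cost_rate
    if job_value >= toll:
        admitted += 1
        queue_len += 1
    else:
        rejected += 1
    if queue_len > 0 and random.random() < service_prob:
        queue_len -= 1   # at most one job completes per step

print(f"admitted={admitted} rejected={rejected} final_queue={queue_len}")
# As the queue grows, the toll rises and low-value jobs are turned away, so the
# price itself regulates the flow.
```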

Journal Article•DOI•
TL;DR: This paper examines the aspect of belief revision and develops a generalized algorithm that can be used for the modification of existing data in a probabilistic relational database, and the belief revision scheme is shown to be closed, consistent, and complete.
Abstract: The inherent uncertainty pervasive over the real world often forces business decisions to be made using uncertain data. The conventional relational model does not have the ability to handle uncertain data. In recent years, several approaches have been proposed in the literature for representing uncertain data by extending the relational model, primarily using probability theory. The aspect of database modification, however, has not been addressed in prior research. It is clear that any modification of existing probabilistic data, based on new information, amounts to the revision of one's belief about real-world objects. In this paper, we examine the aspect of belief revision and develop a generalized algorithm that can be used for the modification of existing data in a probabilistic relational database. The belief revision scheme is shown to be closed, consistent, and complete.
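
A generic way to picture revising a probabilistic tuple (offered purely as an illustration; the paper's algorithm, with its closure, consistency, and completeness guarantees, is more general): store alternative attribute values with probabilities and revise them against new evidence expressed as likelihoods.

```python
# Bayesian-style revision of one uncertain attribute in a probabilistic tuple.
from typing import Dict

def revise(beliefs: Dict[str, float], likelihoods: Dict[str, float]) -> Dict[str, float]:
    """Multiply stored probabilities by evidence likelihoods and renormalize."""
    unnormalized = {v: p * likelihoods.get(v, 1.0) for v, p in beliefs.items()}
    total = sum(unnormalized.values())
    if total == 0:
        raise ValueError("evidence is inconsistent with every stored alternative")
    return {v: p / total for v, p in unnormalized.items()}

# A customer's region is uncertain; a new shipping record favors 'east'.
region = {"east": 0.5, "west": 0.3, "north": 0.2}
print(revise(region, {"east": 0.9, "west": 0.3, "north": 0.1}))
# -> {'east': ~0.80, 'west': ~0.16, 'north': ~0.04}
```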

Journal Article•DOI•
TL;DR: A combined mean-risk measure, which has desirable theoretical properties (consistency and separability) and is supported by empirical results on decision making under risk, is developed and incorporated into the Risk-Based induction algorithm.
Abstract: Notably absent in previous research on inductive expert systems is the study of mean-risk trade-offs. Such trade-offs may be significant when there are asymmetries such as unequal classification costs, and uncertainties in classification and information acquisition costs. The objective of this research is to develop models to evaluate mean-risk trade-offs in value-based inductive approaches. We develop a combined mean-risk measure and incorporate it into the Risk-Based induction algorithm. The mean-risk measure has desirable theoretical properties (consistency and separability) and is supported by empirical results on decision making under risk. Simulation results using the Risk-Based algorithm demonstrate: (i) an order of magnitude performance difference between mean-based and risk-based algorithms and (ii) an increase in the performance difference between these algorithms as either risk aversion, uncertainty, or asymmetry increases given modest thresholds of the other two factors.
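
One simple way to operationalize a mean-risk trade-off in induction is to score candidate splits by expected misclassification cost plus a risk-aversion penalty on its spread; the mean-plus-lambda-times-standard-deviation form below is an assumption for illustration, not necessarily the paper's Risk-Based measure.

```python
# Illustrative mean-risk scoring of candidate splits in cost-sensitive induction.
import statistics
from typing import Dict, List

def mean_risk_score(costs: List[float], risk_aversion: float) -> float:
    return statistics.mean(costs) + risk_aversion * statistics.pstdev(costs)

def pick_split(candidates: Dict[str, List[float]], risk_aversion: float) -> str:
    return min(candidates, key=lambda a: mean_risk_score(candidates[a], risk_aversion))

# Two attributes with equal mean cost but different spread: a risk-averse learner
# prefers the lower-variance split.
candidates = {"attr_A": [1.0, 1.0, 1.0, 1.0], "attr_B": [0.0, 0.0, 2.0, 2.0]}
print(pick_split(candidates, risk_aversion=0.0))  # tie on mean -> "attr_A" (first key wins)
print(pick_split(candidates, risk_aversion=0.5))  # -> "attr_A", since attr_B is penalized for spread
```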

Journal Article•DOI•
TL;DR: This work identifies a graphical framework for decomposing systems in a manner that enables modular verification of large knowledge-based systems, and discusses a meta-verification procedure that enables us to check if decompositions under consideration do indeed satisfy the requirements for an ordered polytree structure.
Abstract: We examine the verification of large knowledge-based systems. When knowledge bases are large, the verification process poses several problems that are usually not significant for small systems. We focus on decompositions that allow verification of such systems to be performed in a modular fashion. We identify a graphical framework, which we call an ordered polytree, for decomposing systems in a manner that enables modular verification. We also determine the nature of information that needs to be available for performing local checks to ensure accurate detection of anomalies. We illustrate the modular verification process using examples, and provide a formal proof of its accuracy. Next, we discuss a meta-verification procedure that enables us to check if decompositions under consideration do indeed satisfy the requirements for an ordered polytree structure. Finally, we show how the modular verification algorithm leads to considerable improvements in the computational effort required for verification as compared to the traditional approach.
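
Part of the meta-verification step can be pictured as checking that the undirected skeleton of the module decomposition is acyclic; the sketch below covers only that one condition (the full ordered-polytree requirements in the paper involve more), using made-up module names.

```python
# Cycle check on the undirected skeleton of a module decomposition (union-find).
from typing import Dict, Hashable, List, Tuple

def is_polytree_skeleton(nodes: List[Hashable], edges: List[Tuple[Hashable, Hashable]]) -> bool:
    parent: Dict[Hashable, Hashable] = {n: n for n in nodes}

    def find(x: Hashable) -> Hashable:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for u, v in edges:               # directed links treated as undirected here
        ru, rv = find(u), find(v)
        if ru == rv:                 # linking two already-connected modules forms a cycle
            return False
        parent[ru] = rv
    return True

modules = ["intake", "pricing", "eligibility", "decision"]
links = [("intake", "pricing"), ("intake", "eligibility"),
         ("pricing", "decision"), ("eligibility", "decision")]
print(is_polytree_skeleton(modules, links))  # -> False (undirected cycle present)
```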

Journal Article•DOI•
TL;DR: The modeling in this research report suggests that testing probably needs to be conducted over more than half of the useful life of a system in order to discover even one-third of the total errors in the system.
Abstract: Error search and correction are major contributors to software development cost, yet typically uncover only a small fraction of software errors. Postrelease errors, i.e., those that are only observed after a system is released, threaten a variety of potential failures and consequences, each with low individual probability of occurrence. The combined effect of postrelease errors can and often does result in a significant rate of occurrence of these potential failures, with unpredictable consequences and severity. One particular source of postrelease errors that has received extensive publicity is the year 2000, or Y2K, error. The modeling in this research report suggests that testing probably needs to be conducted over more than half of the useful life of a system in order to discover even one-third of the total errors in the system. It suggests that short product lifecycles, lifetime testing, and effective feedback loops for error reporting are necessary to assure reliable software.
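
To convey the flavor of such modeling, the sketch below uses an exponential error-discovery curve; the functional form and parameter values are assumptions for illustration, not the model estimated in the paper.

```python
# Hypothetical error-discovery model: the expected fraction of total errors exposed
# after testing for time t is f(t) = 1 - exp(-b * t), for a discovery rate b.
import math

def time_to_find(fraction: float, discovery_rate: float) -> float:
    """Testing time needed to expose `fraction` of all errors."""
    return -math.log(1.0 - fraction) / discovery_rate

useful_life = 10.0       # years (hypothetical)
discovery_rate = 0.07    # per year (hypothetical; errors surface slowly)

t = time_to_find(1.0 / 3.0, discovery_rate)
print(f"{t:.1f} years of testing ~ {t / useful_life:.0%} of the useful life")
# With a slow discovery rate like this, exposing even a third of the errors takes
# more than half the useful life, which is the kind of result the report describes.
```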