Showing papers on "Implementation" published in 1995


Book
01 Mar 1995
TL;DR: In this article, the authors present a framework for parallel programming based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively.
Abstract: We present a framework for parallel programming, based on three conceptual classes for understanding parallelism and three programming paradigms for implementing parallel programs. The conceptual classes are result parallelism, which centers on parallel computation of all elements in a data structure; agenda parallelism, which specifies an agenda of tasks for parallel execution; and specialist parallelism, in which specialist agents solve problems cooperatively. The programming paradigms center on live data structures that transform themselves into result data structures; distributed data structures that are accessible to many processes simultaneously; and message passing, in which all data objects are encapsulated within explicitly communicating processes. There is a rough correspondence between the conceptual classes and the programming methods, as we discuss. We begin by outlining the basic conceptual classes and programming paradigms, and by sketching an example solution under each of the three paradigms. The final section develops a simple example in greater detail, presenting and explaining code and discussing its performance on two commercial parallel computers: an 18-node shared-memory multiprocessor and a 64-node distributed-memory hypercube. The middle section bridges the gap between the abstract and the practical by giving an overview of how the basic paradigms are implemented. We focus on the paradigms, not on machine architecture or programming languages: The programming methods we discuss are useful on many kinds of parallel machines, and each can be expressed in several different parallel programming languages. Our programming discussion and the examples use the parallel language C-Linda for several reasons: The main paradigms are all simple to express in Linda; efficient Linda implementations exist on a wide variety of parallel machines; and a wide variety of parallel programs have been written in Linda.

384 citations
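
The agenda-parallelism idea is easiest to see in miniature. Below is a minimal Python sketch in which a master fills a shared bag of tasks and anonymous workers drain it; queues stand in for a Linda tuple space, and all names are illustrative (the paper's own examples are written in C-Linda, not Python):

```python
# A minimal sketch of agenda parallelism: a master fills a bag of tasks,
# anonymous workers drain it. Queues stand in for a Linda tuple space.
import queue
import threading

task_bag = queue.Queue()      # roughly Linda's out("task", n) / in("task", ?n)
result_bag = queue.Queue()    # roughly out("result", n, n*n)

def worker():
    while True:
        task = task_bag.get()
        if task is None:              # poison pill: no more agenda items
            break
        result_bag.put((task, task * task))

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for n in range(10):                   # the agenda of tasks
    task_bag.put(n)
for _ in threads:                     # one poison pill per worker
    task_bag.put(None)
for t in threads:
    t.join()
while not result_bag.empty():
    print(result_bag.get())
```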


01 Jun 1995
TL;DR: This paper describes the NFS version 3 protocol so that people can write compatible implementations.
Abstract: This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations.

239 citations


Proceedings ArticleDOI
15 May 1995
TL;DR: The real-time publisher/subscriber model as discussed by the authors is a variation of group-based programming and anonymous communication techniques, which can address issues of programming ease, portability, scalability and analyzability.
Abstract: Distributed real-time systems are becoming more pervasive in many domains including process control, discrete manufacturing, defense systems, air traffic control, and online monitoring systems in medicine. The construction of such systems, however, is impeded by the lack of simple yet powerful programming models and the lack of efficient, scalable, dependable and analyzable interfaces and their implementations. We argue that these issues need to be resolved with powerful application-level toolkits similar to that provided by ISIS. We consider the inter-process communication requirements which form a fundamental block in the construction of distributed real-time systems. We propose the real-time publisher/subscriber model, a variation of group-based programming and anonymous communication techniques, as a model for distributed real-time inter-process communication which can address issues of programming ease, portability, scalability and analyzability. The model has been used successfully in building a software architecture for building upgradable real-time systems. We provide the programming interface, a detailed design and implementation details of this model along with some preliminary performance benchmarks. The results are encouraging in that the goals we seek look achievable.

174 citations
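
As a rough illustration of the anonymous-communication idea behind the publisher/subscriber model, here is a minimal topic-based bus in Python. The class and method names are invented; a real-time implementation would add the delivery, scalability, and timing guarantees the paper is about:

```python
# Minimal anonymous publish/subscribe sketch (illustrative names only).
from collections import defaultdict
from typing import Callable

class Bus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[object], None]):
        self._subs[topic].append(handler)

    def publish(self, topic: str, message: object):
        # Publishers never name receivers: delivery is by topic only.
        for handler in self._subs[topic]:
            handler(message)

bus = Bus()
bus.subscribe("sensor/temp", lambda m: print("monitor saw", m))
bus.subscribe("sensor/temp", lambda m: print("logger saw", m))
bus.publish("sensor/temp", 21.5)
```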


Proceedings ArticleDOI
Greg Abram, Lloyd A. Treinish
29 Oct 1995
TL;DR: This paper extends the execution model of the IBM Visualization Data Explorer to incorporate a more complete and efficient programming infrastructure while still preserving the virtues of pure "data-flow".
Abstract: Modular visualization environments utilizing a data-flow execution model have become quite popular in recent years, especially those that incorporate visual programming tools. However, simplistic implementations of such an execution model are quite limited when applied to problems of realistic complexity, which negate the intuitive advantage of data-flow systems. This situation can be resolved by extending the execution model to incorporate a more complete and efficient programming infrastructure while still preserving the virtues of "pure data-flow". This approach has been used for the implementation of a general-purpose software package, IBM Visualization Data Explorer.

78 citations
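
The core of any data-flow execution model is demand-driven evaluation of a module graph. The sketch below is a deliberately simplistic Python version of that core, the kind the paper argues must be extended with a richer infrastructure (caching policies, partial re-execution) to handle realistic problems; all names are illustrative:

```python
# Tiny demand-driven data-flow executor sketch (illustrative, not the
# Data Explorer implementation, which adds a far richer infrastructure).
class Node:
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
        self._cache = None

    def value(self):
        if self._cache is None:       # naive caching: each node runs at most once
            args = [n.value() for n in self.inputs]
            self._cache = self.fn(*args)
        return self._cache

src = Node(lambda: [1, 2, 3])
doubled = Node(lambda xs: [2 * x for x in xs], src)
total = Node(sum, doubled)
print(total.value())  # 12
```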


Journal ArticleDOI
01 Jan 1995
TL;DR: An experimental software repository system that provides organization, storage, management, and access facilities for reusable software components and offers facilities for visual presentation of the software objects is presented.
Abstract: We present an experimental software repository system that provides organization, storage, management, and access facilities for reusable software components. The system, intended as part of an applications development environment, supports the representation of information about requirements, designs and implementations of software, and offers facilities for visual presentation of the software objects. This article details the features and architecture of the repository system, the technical challenges and the choices made for the system development along with a usage scenario that illustrates its functionality. The system has been developed and evaluated within the context of the ITHACA project, a technology integration/software engineering project sponsored by the European Communities through the ESPRIT program, aimed at developing an integrated reuse-centered application development and support environment based on object-oriented techniques.

74 citations


Proceedings Article
01 Jan 1995
TL;DR: The W3Object model is described, and it is shown, through a prototype implementation, how the model is used to address the problems of referential integrity and transparent object (resource) migration.
Abstract: In this paper we discuss some of the problems of the current Web and show how the introduction of object-orientation provides flexible and extensible solutions. Web resources become encapsulated as objects, with well-defined interfaces through which all interactions occur. The interfaces and their implementations can be inherited by builders of objects, and methods (operations) can be redefined to better suit the object. New characteristics, such as concurrency control and persistence, can be obtained by inheriting from suitable base classes, without necessarily requiring any changes to users of these resources. We describe the W3Object model which we have developed based upon these ideas, and show, through a prototype implementation, how we have used the model to address the problems of referential integrity and transparent object (resource) migration. We also give indications of future work.

42 citations


Proceedings ArticleDOI
15 Mar 1995
TL;DR: The top-down approach proposed focuses instead on understanding the abstractions represented by the classical data structures without regard to their physical implementation.
Abstract: Programming is traditionally taught using a bottom-up approach, where details of syntax and implementation of data structures are the predominant concepts. The top-down approach proposed focuses instead on understanding the abstractions represented by the classical data structures without regard to their physical implementation. Only after the students are comfortable with the behavior and applications of the major data structures do they learn about their implementations or the basic data types like arrays and pointers that are used. This paper discusses the benefits of such an approach and how it is being used in a Computer Science curriculum.

38 citations
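
In code, the top-down ordering amounts to presenting abstract behavior first and deferring the physical representation. A Python sketch of that separation for a stack (illustrative only; the paper does not prescribe a language):

```python
# Top-down teaching sketch: students first see what a stack *does*...
from abc import ABC, abstractmethod

class Stack(ABC):
    """The abstraction: behavior with no commitment to arrays or pointers."""
    @abstractmethod
    def push(self, item): ...
    @abstractmethod
    def pop(self): ...
    @abstractmethod
    def is_empty(self) -> bool: ...

# ...and only later one possible physical implementation.
class ListStack(Stack):
    def __init__(self):
        self._items = []
    def push(self, item):
        self._items.append(item)
    def pop(self):
        return self._items.pop()
    def is_empty(self) -> bool:
        return not self._items

s = ListStack()
s.push(1); s.push(2)
print(s.pop(), s.pop(), s.is_empty())  # 2 1 True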


04 Dec 1995
TL;DR: Hydra is a set of methods and software tools for digital circuit design using Haskell; its use in teaching shows that a complete computer system design can be presented, at all levels of abstraction, with no details omitted, giving students a genuine understanding of how computers work.
Abstract: Hydra is a set of methods and software tools for carrying out digital circuit design using Haskell. It has been used successfully for three years in the third-year course on Computer Architecture at the University of Glasgow, with plans to extend its use to the advanced fourth-year course. Some of its innovative features are: Signal type classes; support for CMOS and NMOS design; a large family of higher order combining forms; a set of tools for simulation; a language for expressing control algorithms; and automated tools for deriving control circuits from control algorithms. The system contains a rich library of circuits, ranging from low level implementations of logic gates using pass transistors to complete processor designs. The chief benefit of using functional circuit specification to teach computer architecture is that a complete computer system design can be presented, at all levels of abstraction, with no details omitted, giving students a genuine understanding of how computers work.

35 citations
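
Hydra itself is a Haskell system; the Python sketch below only imitates the flavor of functional circuit specification, modelling a signal as a list of bit values and a circuit as an ordinary function over signals. Everything here is an illustrative stand-in, not Hydra's actual combinators:

```python
# Functional circuit specification, imitated in Python: a signal is a
# list of bits over time, and a circuit is just a function on signals.
def and2(a, b):
    return [x & y for x, y in zip(a, b)]

def xor2(a, b):
    return [x ^ y for x, y in zip(a, b)]

def half_adder(a, b):
    """Composing gate functions yields a larger circuit."""
    return xor2(a, b), and2(a, b)   # (sum, carry)

a = [0, 0, 1, 1]
b = [0, 1, 0, 1]
s, c = half_adder(a, b)
print(s, c)   # [0, 1, 1, 0] [0, 0, 0, 1]
```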


Journal ArticleDOI
TL;DR: It is found that top management support and immediate manager acceptance are important, and that demonstrable business benefits and problem urgency affect management support; the study concludes that successful expert systems implementation exhibits properties found in both Management Science and Information Systems.
Abstract: Many expert systems implementations are unsuccessful: the system falls into disuse or is not used to the extent originally envisioned. This paper reports on a study on the factors that lead to successful (or conversely less successful) expert system implementation. Six cases, drawn from three high-technology companies, were investigated: three very successful, three less so. Following analysis of the cases, propositions were developed regarding various success factors and their interrelation. We found that top management support and immediate manager acceptance are important, and that demonstrable business benefits and problem urgency affect management support. At the user level, perception of management support, degree of organizational change, organizational support, and users' personal stake in the system affect operational use. We relate our findings to what is known about implementation in Management Science and Information Systems, concluding that successful expert systems implementation exhibits properties found in both.

29 citations


01 Jan 1995
TL;DR: The research presented in this paper addresses this problem in two ways: by stipulating requirements on representation languages to be used for modelling trade procedures, and by presenting a common graph-based representation language, Documentary Petri Nets, which satisfies these requirements.
Abstract: Organizations engaging in electronic commerce typically are faced with defining detailed bilateral agreements between business partners. This implies that set-up costs for new electronic linkages can be quite high. There is a growing need to model and simulate this form of inter-organizational interaction to lower these costs. The research presented in this paper addresses this problem in two ways: 1) stipulating requirements on representation languages to be used for modelling trade procedures; and 2) presenting a common graph-based representation language, Documentary Petri Nets, which satisfies these requirements. The practical implementation of Documentary Petri Net models is illustrated using a modelling environment, Case/Open-edi, a tool that may be used for the design and analysis of trade procedures. A simplified documentary credit procedure is used to give an example of such a Documentary Petri Net model. Finally, conclusions and directions for research are given. 1. Facilitating electronic commerce. The introduction of EDI can have tremendous benefits for the efficiency of the execution of trade procedures, both among and within organizations. The most obvious benefit is the reduction of time needed for the execution of the transaction. Exchanging documents electronically eliminates delays caused by the postal exchange of paper documents between organizations, and offers opportunities to reduce the processing time of documents within organizations. Using EDI, it is no longer necessary to re-key incoming or outgoing data manually, as the structured form of the messages enables automatic processing by computer systems. It is now possible to replace many paper documents with electronic equivalents, particularly since standards for the structure of the messages have matured. Given these benefits, it could be expected that many organizations would be eager to start with EDI implementation. However, this is not reflected in the current status of EDI diffusion. In reality, successful EDI implementations have been realized mainly in trading relationships that can be characterized as 'electronic hierarchies' in Williamson's terms, i.e. trading relationships with frequent transactions, mostly over a long period of time (Value Adding Partnerships, [14]). In these kinds of relationships, parties can gain extra benefits by closely coordinating each other's actions, thus compensating for the extra start-up costs stemming from detailed trading partner negotiations. However, when the partnership is established for a limited period, covering a few transactions only, EDI linkages are seldom observed since the costs of the necessary negotiations cannot be recovered from EDI efficiency gains. These shorter-term partnerships could be called 'electronic market relationships'. The aim of this research is to decrease the set-up costs for EDI linkages, thereby facilitating the introduction of electronic market relationships.

24 citations
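
The executable core of any Petri-net representation is the token game: a transition fires when its input places hold enough tokens, consuming and producing tokens as it does. A minimal Python sketch with an invented two-step trade procedure (Documentary Petri Nets layer document types, roles, and time on top of this basic mechanism):

```python
# Minimal Petri-net firing sketch (illustrative encoding and names).
marking = {"order_sent": 1, "goods_shipped": 0, "invoice_paid": 0}

transitions = {                        # name: (pre-places, post-places)
    "ship": ({"order_sent": 1}, {"goods_shipped": 1}),
    "pay":  ({"goods_shipped": 1}, {"invoice_paid": 1}),
}

def enabled(name):
    pre, _ = transitions[name]
    return all(marking[p] >= n for p, n in pre.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    pre, post = transitions[name]
    for p, n in pre.items():           # consume input tokens
        marking[p] -= n
    for p, n in post.items():          # produce output tokens
        marking[p] += n

fire("ship")
fire("pay")
print(marking)  # {'order_sent': 0, 'goods_shipped': 0, 'invoice_paid': 1}
```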


20 Nov 1995
TL;DR: This research develops a new, medium-independent model of presentation and shows that a specification-driven presentation system based on this model can form the basis of a software environment supporting multiple presentations and a variety of media.
Abstract: The many different documents produced by a large software project are typically created and maintained by a variety of incompatible software tools, such as programming environments, document processing systems, and specialized editors for non-textual media. The incompatibility of these tools hinders communication within the project by making it difficult to share the documents that record the project's plans, design history, implementations, and experiences. An important factor underlying this incompatibility is the diversity of presentation models that have been adopted. Each system's presentation model is well-suited to the document types and media it supports, but is difficult to adapt to other types and media. This dissertation describes a new model of presentation that is designed to be applied to a diverse array of documents drawn from many different media. The model is based on four simple services: attribute propagation, box layout, tree elaboration, and interface functions. Together, these services are powerful enough to support the creation of many novel and visually rich document presentations. Furthermore, because the model is based on a new understanding of the fundamental parameters defining media, the four services can be adapted for use with all media in common use. The utility of this presentation model has been explored through the design and implementation of Proteus, a system for handling presentation specifications that is part of Ensemble, an environment for developing both software and multimedia documents. Proteus interprets specifications that describe how the four presentation services should be applied to individual documents. Proteus has a medium-independent kernel that provides the specification interpreter and runtime support for the four presentation services. The kernel is adapted to different media via the addition of a shell specifying the medium's parameters. Proteus's adaptability significantly eases the task of extending Ensemble to support new media. Proteus is also an important part of Ensemble's support for multiple, synchronized presentations. In summary, this research develops a new, medium-independent model of presentation and shows that a specification-driven presentation system based on this model can form the basis of a software environment supporting multiple presentations and a variety of media.

Journal ArticleDOI
TL;DR: Two studies encompassing more than 100 developers reveal nine practices that differentiate successful implementations of CASE tools from failed ones.
Abstract: Implementation of a CASE tool is a complex process whose success depends on more than having the right tool with the desired features. Two studies encompassing more than 100 developers reveal nine practices that differentiate successful implementations of CASE tools from failed ones.

Journal ArticleDOI
TL;DR: A generic approach is presented that integrates knowledge-based systems with both a well-known and accepted modeling technique (scoring models) and several decision support techniques (such as the analytic hierarchy process and discriminant analysis) to provide the decision support necessary to evaluate whether or not full-scale development of a candidate product should proceed.
Abstract: Customer-oriented product development has become increasingly necessary for competitive reasons. This paper describes a framework and a methodology for the design, development, and implementation of knowledge-based decision support systems for customer satisfaction assessment. A generic approach is presented that integrates knowledge-based systems with both a well-known and accepted modeling technique (scoring models) and several decision support techniques (such as the analytic hierarchy process and discriminant analysis). In addition to the flexibility and developmental advantages of knowledge-based systems, additional benefits of this approach include reduced information processing and gathering time, improved communications with senior management, and better management of scarce development resources. To simplify the exposition, we illustrate the framework and methodology within the context of a successful system implementation. The resulting system, known as the Customer Satisfaction Assessment System (CSAS), is designed to provide the decision support necessary to evaluate whether or not full-scale development of a candidate product should proceed. The system assesses and estimates the extent to which a potential new product will meet the expectations of the customer. CSAS incorporates market research findings, as well as strategic evaluation factors and their interrelationships. It can function as a stand-alone system or in conjunction with other evaluation systems (e.g., those providing financial, technological, manufacturing, and marketing evaluations) to provide a complete assessment of the product under consideration. Since its implementation, the experts' and other users' expressions of complete satisfaction and commitment to the system have been an indication of its value as an important decision support tool. The paper concludes with a discussion of the lessons learned for future implementations and some important extensions of this research.
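
At its simplest, the scoring-model backbone of such a system is a weighted sum of factor scores compared against a go/no-go threshold. A hedged Python sketch with invented factors, weights, and cutoff (a real system would derive weights via AHP and calibrate the cutoff with discriminant analysis, as the paper describes):

```python
# Weighted scoring-model sketch; every factor, weight, and threshold
# below is invented for illustration.
factors = {                 # name: (weight, score on a 1-10 scale)
    "meets_stated_needs":  (0.4, 8),
    "price_expectation":   (0.3, 6),
    "service_expectation": (0.3, 9),
}

total = sum(w * s for w, s in factors.values())
print(f"weighted satisfaction score: {total:.1f} / 10")   # 7.7

# A toy go/no-go rule on the aggregate score.
print("proceed to full-scale development" if total >= 7 else "hold")
```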

Journal ArticleDOI
TL;DR: The research described in this paper aims to develop a paper-based, workbook-style methodology that companies can use to increase the benefits generated by Concurrent Engineering while reducing implementation costs, risk and time.

Patent
01 May 1995
TL;DR: In this paper, a computer aided system design tool is described that enables the user to define the system functionally by defining functional program requirements (28) and preliminary system requirements (30).
Abstract: A computer aided system design tool which enables the user to define the system functionally by defining functional program requirements (28) and defining preliminary system requirements (30). After the program requirements (28) and preliminary system requirements (30) have been defined, an automated reuse tool (38) selects particular implementations for each of the functions specified by the user when defining the functional program requirements (28). The automated reuse tool (38) also partitions the system constraints defined by the preliminary system requirements (30) over the system by proper selection of particular implementations for each design. The resulting reuse design database (40) may then be simulated (42) to verify proper operation of the system.

Book ChapterDOI
01 Oct 1995
TL;DR: A different approach is proposed consisting of first splitting up the specification into as many specifications as there are subsystems in the initial specification, and the programming language code will then be generated from all these generated specifications.
Abstract: Deriving distributed prototypes and implementations from formal specifications is one of the most important aspects of an implementation-oriented language like Estelle. Some tools already exist allowing the generation of distributed programs to be executed on a computer network. All these tools follow the same approach: code in a programming language is directly generated from the specification, compiled and linked with some communication libraries. In this paper, we propose a different approach consisting of first splitting up the specification into as many specifications as there are subsystems in the initial specification. The programming language code will then be generated from all these generated specifications. We show the advantages of this approach. The new method is being integrated into the Estelle Development Toolset. Runtime measurements and comparisons with another Estelle code generator, Pet/Dingo, show its usefulness.

Book ChapterDOI
01 Jan 1995
TL;DR: Early experiments with automated design and implementation of application-specific communication protocols based on the formal specification of the application using ESTEREL show that the automated approach creates a better integrated implementation with the same level of performance.
Abstract: New concepts such as the Application Level Framing (ALF) have been proposed to make network protocol implementations more efficient and to give the application programmer greater control over the data transmission. This paper describes early experiments with automated design and implementation of application-specific communication protocols based on the formal specification of the application using ESTEREL. A comparison is made between a hand coded JPEG player and its automated equivalent. The results show that the automated approach creates a better integrated implementation with the same level of performance.

Proceedings ArticleDOI
01 Dec 1995
TL;DR: This tutorial surveys the present and future of multimedia computing systems and outlines new challenges for CAD presented by these systems, including the design of VLSI systems-on-chips for multimedia and the successive refinement of an application from software to a high-volume chip using advanced CAD synthesis tools.
Abstract: This tutorial surveys the present and future of multimedia computing systems and outlines new challenges for CAD presented by these systems. Multimedia computing is a challenging domain for several reasons: it requires both high computation rates and memory bandwidth; it is a multirate computing problem; and it requires low-cost implementations for high-volume markets. As a result, the design of multimedia computing systems introduces new challenges for CAD at all levels of abstraction, ranging from layout to system design. After surveying the nature of the multimedia computing problem, we examine two experiences in multimedia computer design from a CAD perspective: the design of VLSI systems-on-chips for multimedia, and the successive refinement of an application from software to a high-volume chip using advanced CAD synthesis tools.

Proceedings ArticleDOI
01 Aug 1995
TL;DR: This work addresses the problem of transforming a behavioral specification so that synthesis of a testable implementation from the new specification requires significantly less area and partial scan cost than synthesis from the original specification.
Abstract: We address the problem of transforming a behavioral specification so that synthesis of a testable implementation from the new specification requires significantly less area and partial scan cost than synthesis from the original specification. A two-stage objective function, that estimates the area and testability of the final implementation, and also captures enabling effects of the transformations, is developed. Optimization is done using a new randomized branch and bound steepest descent algorithm. Application of the transformation algorithm on several examples demonstrates significant simultaneous improvement in both area and testability of the final implementations.

DissertationDOI
20 Nov 1995
TL;DR: This dissertation shows how to execute specifications written at a level of abstraction comparable to that found in specifications written in non-executable specification languages, including an algorithm for evaluating and satisfying first order predicate logic assertions written over abstract model types.
Abstract: Executable specifications can serve as prototypes of the specified system and as oracles for automated testing of implementations, and so are more useful than non-executable specifications. Executable specifications can also be debugged in much the same way as programs, allowing errors to be detected and corrected at the specification level rather than in later stages of software development. However, existing executable specification languages often force the specifier to work at a low level of abstraction, which negates many of the advantages of non-executable specifications. This dissertation shows how to execute specifications written at a level of abstraction comparable to that found in specifications written in non-executable specification languages. The key innovation is an algorithm for evaluating and satisfying first order predicate logic assertions written over abstract model types. This is important because many specification languages use such assertions. Some of the features of this algorithm were inspired by techniques from constraint logic programming.
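
Over finite abstract model types, evaluating the quantified assertions such a specification language uses reduces to bounded iteration. The Python sketch below shows only the evaluation half (the dissertation's algorithm also satisfies assertions, i.e., searches for satisfying bindings; all names here are illustrative):

```python
# Evaluating first-order assertions over finite domains (illustrative).
def forall(domain, pred):
    return all(pred(x) for x in domain)

def exists(domain, pred):
    return any(pred(x) for x in domain)

# Toy spec fragment: every element of the set appears somewhere
# in the sequence, i.e. the sequence enumerates the set.
s = {10, 20, 30}
seq = [10, 20, 30]

ok = forall(s, lambda x:
        exists(range(len(seq)), lambda i: seq[i] == x))
print(ok)  # True
```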

Journal ArticleDOI
Colin H. West, Angelo Tosi
TL;DR: A methodology for testing implementations of communications protocols that uses autonomous test drivers to exercise the protocol service interface; it has proved to be a powerful and cost-effective complement to other testing techniques.
Abstract: We describe our experience with a methodology for testing implementations of communications protocols that uses autonomous test drivers to exercise the protocol service interface. The test drivers generate only random event sequences that the service interface specifies should be accepted by a correct implementation. Event rejection therefore indicates a discrepancy between the service specification and the implementation under test. The methodology has proved to be a powerful and cost-effective complement to other testing techniques.
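
The scheme is easy to state in code: draw random events the service specification marks legal in the current state, and treat any rejection by the implementation as a bug. A toy Python sketch with an invented two-state "protocol" (the real methodology targets full protocol service interfaces):

```python
# Random test-driver sketch: legal events come from the spec; rejection
# of a legal event exposes a spec/implementation discrepancy.
import random

LEGAL = {"closed": ["connect"], "open": ["send", "disconnect"]}

def run_driver(impl, steps=1000, seed=0):
    rng = random.Random(seed)
    state = "closed"
    for _ in range(steps):
        event = rng.choice(LEGAL[state])
        if not impl(state, event):      # a correct implementation must accept
            raise AssertionError(
                f"implementation rejected legal event {event!r} in {state!r}")
        state = ("open" if event == "connect"
                 else "closed" if event == "disconnect"
                 else state)

# A deliberately buggy implementation that rejects 'send' while open:
buggy = lambda state, event: not (state == "open" and event == "send")

try:
    run_driver(buggy)
except AssertionError as err:
    print("discrepancy found:", err)
```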

Journal ArticleDOI
TL;DR: A data-flow programming model which can act simultaneously as a functional representation of algorithms and as a structural description of their corresponding implementations on a target computer built up of 3-D interconnected data-driven processing elements (DDPs).
Abstract: Programming parallel architectures dedicated to real-time image processing (IP) is often a difficult and error-prone task. This mainly results from the fact that IP algorithms typically involve several distinct processing levels and data representations, and that various execution models as well as complex hardware are needed for handling these processing layers under real-time constraints. Our goal is to permit an intuitive but still efficient handling of such an architecture by providing a continuous and readable path from the functional specification of an algorithm to its corresponding hardware implementation. For this, we developed a data-flow programming model which can act simultaneously as a functional representation of algorithms and as a structural description of their corresponding implementations on a target computer built up of 3-D interconnected data-driven processing elements (DDPs). Algorithms are decomposed into functional primitives viewed as top-level nodes of a data-flow graph (DFG). Each node is given a known physical implementation on the target architecture, either as a single DDP or as an encapsulated subgraph of DDPs, making the well-known mapping problem a topological one. The target computer was built at ETCA and embeds 1024 custom data-driven processors and 12 transputers in a 3-D interconnected network. Concurrently with the machine, a complete programming environment has been developed. Relying upon a functional compiler, a large library of IP primitives and automatic place-and-route facilities, it also includes various X-Window based tools aiming at visual and efficient access to all intermediary program representations. In terms of visual languages, we try to share the burden between all the layers of this programming environment. Rather than including some display facilities in an existing software environment, we have taken advantage of the intuitiveness of functional representations, even textual, and of the hardware efficiency that provides immediate results, ultimately supporting hierarchical constructs.

Proceedings ArticleDOI
10 Jul 1995 - CASE
TL;DR: An approach to automating the construction of software systems from components by raising the level of architecture specifications, easing the currently labor-intensive and error-prone process of system integration.

Proceedings ArticleDOI
01 Dec 1995
TL;DR: A new system-level specification transformation called procedure exlining is introduced, which solves the problem of replacing sequences of statements by procedure calls, which is the opposite problem of inlining.
Abstract: We introduce a new system-level specification transformation called procedure exlining. Exlining is the problem of replacing sequences of statements by procedure calls, which is the opposite of the inlining problem. Procedures are used by system synthesis and behavioral synthesis tools to guide exploration of various high-level implementations, so exlining can greatly improve the results of synthesis. We demonstrate the usefulness of exlining on several examples.
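
Stripped of the synthesis context, the core transformation replaces repeated statement sequences with calls. A toy Python sketch over a list of statement strings (real exlining operates on behavioral specifications and chooses which sequences to extract based on synthesis objectives; everything here is illustrative):

```python
# Exlining sketch: replace each occurrence of a repeated statement
# sequence with a call to a new procedure (toy string representation).
body = ["a=1", "b=a+1", "c=b*2", "x=9", "b=a+1", "c=b*2"]

def exline(stmts, seq, name):
    out, i = [], 0
    while i < len(stmts):
        if stmts[i:i + len(seq)] == seq:
            out.append(f"call {name}()")   # the sequence becomes a call
            i += len(seq)
        else:
            out.append(stmts[i])
            i += 1
    return out

print(exline(body, ["b=a+1", "c=b*2"], "p0"))
# ['a=1', 'call p0()', 'x=9', 'call p0()']
```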

Book ChapterDOI
01 Jan 1995
TL;DR: A human-computer collaborative environment called MIDAS is described that defines a new division of labor between human designers and computers that leverages the strengths of both collaborative parties, while compensating for their weaknesses and smoothing the transition from higher level design abstraction to lower level design activities and implementation.
Abstract: In this paper, we describe a human-computer collaborative environment called MIDAS that defines a new division of labor between human designers and computers. The environment leverages the strengths of both collaborative parties, while compensating for their weaknesses and smoothing the transition from higher level design abstraction to lower level design activities and implementation. The environment has the following tangible features: (a) it lets designers explicitly express their conceptual design intentions and helps them map the high-level intentions into interface implementations; (b) it lets human designers control design decisions and handles a pyramid of details for them during design; and (c) it provides flexible work and control flow for opportunistic design.

Dissertation
01 Jan 1995
TL;DR: WBS/Control as mentioned in this paper is an extensible class library based on object-oriented principles and discrete-event simulation for manufacturing planning and control systems, which can be used to model a range of manufacturing systems.
Abstract: Manufacturing planning and control systems are fundamental to the successful operations of a manufacturing organisation. In order to improve their business performance, significant investment is made by companies into planning and control systems; however, not all companies realise the benefits sought. Many companies continue to suffer from high levels of inventory, shortages, obsolete parts, poor resource utilisation and poor delivery performance. This thesis argues that the fit between the planning and control system and the manufacturing organisation is a crucial element of success. The design of appropriate control systems is, therefore, important. The different approaches to the design of manufacturing planning and control systems are investigated. It is concluded that there is no provision within these design methodologies to properly assess the impact of a proposed design on the manufacturing facility. Consequently, an understanding of how a new (or modified) planning and control system will perform in the context of the complete manufacturing system is unlikely to be gained until after the system has been implemented and is running. There are many modelling techniques available; however, discrete-event simulation is unique in its ability to model the complex dynamics inherent in manufacturing systems, of which the planning and control system is an integral component. The existing application of simulation to manufacturing control system issues is limited: although operational issues are addressed, application to the more fundamental design of control systems is rarely, if at all, considered. The lack of a suitable simulation-based modelling tool does not help matters. The requirements of a simulation tool capable of modelling a host of different planning and control systems are presented. It is argued that only through the application of object-oriented principles can these extensive requirements be achieved. This thesis reports on the development of an extensible class library called WBS/Control, which is based on object-oriented principles and discrete-event simulation. The functionality, both current and future, offered by WBS/Control means that different planning and control systems can be modelled: not only the more standard implementations but also hybrid systems and new designs. The flexibility implicit in the development of WBS/Control supports its application to design and operational issues. WBS/Control wholly integrates with an existing manufacturing simulator to provide a more complete modelling environment.
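
The discrete-event engine underneath such a library is small: a simulation clock and a time-ordered event list. A minimal Python sketch with invented events (WBS/Control wraps this basic mechanism in an extensible class hierarchy for planning and control logic):

```python
# Minimal discrete-event simulation loop (illustrative event names).
import heapq

events = []   # (time, sequence, description) kept in time order

def schedule(time, seq, what):
    heapq.heappush(events, (time, seq, what))

schedule(0.0, 0, "order released to shop floor")
schedule(4.0, 1, "machining complete")
schedule(9.5, 2, "order delivered")

clock = 0.0
while events:
    clock, _, what = heapq.heappop(events)   # jump to the next event time
    print(f"t={clock:4.1f}  {what}")
```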

Book ChapterDOI
01 Jan 1995
TL;DR: The influence of nursing on the decision-making process of selecting healthcare information systems (HISs) is increasing and systems planning and implementation are based not only on the capabilities of information technology but also on forces in the environment that impact health care.
Abstract: The influence of nursing on the decision-making process of selecting healthcare information systems (HISs) is increasing. Systems planning and implementation are based not only on the capabilities of information technology but also on forces in the environment that impact health care. Computer decisions in the 1970s were primarily financially oriented. Hospitals saw the need to automate their patient billing process and other key systems, such as general ledger and accounts payable. Advanced technology led to increased sophistication of hardware and software capabilities, and concurrently the information needs of the hospital increased. More extensive reporting requirements, as well as the need for management reporting on productivity, became the norm.

Journal ArticleDOI
01 Sep 1995
TL;DR: In this paper interdependencies within the organizational decision making activity are used to identify some generic categories of support required to maintain consistency and quality in end-user model construction.
Abstract: Encouraging individuals to use corporate data and build computer-based decision models locally, while simultaneously ensuring that the modelling activity is consistent with corporate policies and guidelines poses a challenge to many organizations. Although it is desirable to encourage user autonomy in decision making, it is equally imperative to assure appropriate quality of the decisions made. In this paper interdependencies within the organizational decision making activity are used to identify some generic categories of support required to maintain consistency and quality in end-user model construction. Five distinct cases of model building activity in an end-user computing environment are described and support for ensuring consistency in user constructed models for two of these cases is discussed. An object-oriented knowledge-based system that provides such support has been developed; the architecture of this system is described. System implementation and interaction is illustrated with the aid of a financial budgeting application.

Journal ArticleDOI
TL;DR: The paper presents the design and implementation of a CSP-based object-oriented system consisting of a specification model, Communicating-object, and a prototype system, C-OBJECT, supporting the model.
Abstract: The paper presents the design and implementation of a CSP-based object-oriented system. The system consists of a specification model, Communicating-object, and a prototype system, C-OBJECT, supporting the model. The objects execute in a set of parallel processes called actions. The dynamic communicating objects exchange messages by both data transmissions and function invocations. The C-OBJECT prototype is constructed in a MIMD architecture (32-node transputer) with C++ and is composed of two parts: network configuration and a Communicating-object service subsystem (library) providing various levels of message-passing primitives. The initial prototype performs well and has demonstrated its applicability to C and C++ programming. The integrated system supports application software development with tools for specification, design and implementation.
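
The communicating-object style can be approximated in a few lines: each object is a process that blocks on a channel for requests and replies by message. A Python sketch using threads and queues in place of the transputer-based C-OBJECT runtime (all names are illustrative):

```python
# CSP-flavored sketch: an object as a process communicating over channels.
import threading, queue

def squarer(requests: queue.Queue, replies: queue.Queue):
    while True:
        msg = requests.get()          # blocking receive, as in CSP
        if msg is None:               # sentinel terminates the process
            break
        replies.put(msg * msg)        # reply by message, not shared state

req, rep = queue.Queue(), queue.Queue()
t = threading.Thread(target=squarer, args=(req, rep))
t.start()
for n in (2, 3, 4):
    req.put(n)                        # "function invocation" as a message
    print(rep.get())                  # 4, 9, 16
req.put(None)
t.join()
```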

01 Sep 1995
TL;DR: Experience with the file system shows that it is possible to design flexible system software which meets the threefold requirements of incrementality, scope control and low overhead.
Abstract: Conventional operating system design makes decisions based on assumptions about applications' usage of hardware and software resources. When the assumptions do not hold, these decisions may create a mismatch between what an application wants and what the implementation provides. This dissertation proposes a design approach called Pi which reduces the potential mismatch by enhancing the flexibility of system software. A system built using the Pi approach allows clients to participate in implementation decisions at run-time through dual interfaces: the first for using the basic functionality, and the second for changing the implementation. The Pi approach uses a reflective architecture for flexibility. It utilizes a self-representation of a subsystem created using resource objects and contracts, which decide the subsystem's semantics. Contract implementations can be changed by clients through the second interface. The visibility of a change is restricted to a designated scope using a mechanism called scope-based dispatch. The Pi approach has been demonstrated by designing the Pi File System (PFS) architecture and constructing its prototype implementation. The Pi approach has enabled clients to control separate components of the implementation of the file system. For example, naming and caching in the PFS implementation can be tailored by clients to their own needs. The approach has allowed clients to make incremental changes to the implementation and has ensured that the effects of changes are limited to a client-specified scope like a process or a file. Also, the overhead of Pi flexibility mechanisms is limited to a few indirections, and hence the performance penalty is small. Experience with the file system shows that it is possible to design flexible system software which meets the threefold requirements of incrementality, scope control and low overhead.