
Showing papers on "Application software published in 1991"


Journal ArticleDOI
TL;DR: The authors' experience implementing and evaluating several protocols in the x-Kernel shows that this architecture is general enough to accommodate a wide range of protocols, yet efficient enough to perform competitively with less-structured operating systems.
Abstract: A description is given of an operating system kernel, called the x-Kernel, that provides an explicit architecture for constructing and composing network protocols. The authors' experience implementing and evaluating several protocols in the x-Kernel shows that this architecture is general enough to accommodate a wide range of protocols, yet efficient enough to perform competitively with less-structured operating systems. Experimental results demonstrating the architecture's generality and efficiency are provided. The explicit structure provided by the x-Kernel has the following advantages. First, the architecture simplifies the process of implementing protocols in the kernel, making it easier to build and test novel protocols. Second, the uniformity of the interface between protocols avoids the significant cost of changing abstractions and makes protocol performance predictable. Third, it is possible to write efficient protocols by tuning the underlying architecture rather than heavily optimizing protocols themselves.

853 citations
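The x-Kernel's key idea, a uniform interface between composable protocol layers, can be illustrated with a toy sketch. The `push`/`pop` names echo the paper's vocabulary, but the classes and the bracket-framed "headers" below are invented for illustration, not the actual x-Kernel API:

```python
class Layer:
    """One protocol in a composable stack, with a uniform interface."""
    def __init__(self, name):
        self.name = name

    def push(self, msg):      # add this layer's header on the way down
        return f"[{self.name}]" + msg

    def pop(self, msg):       # strip this layer's header on the way up
        hdr = f"[{self.name}]"
        assert msg.startswith(hdr), f"bad frame for {self.name}"
        return msg[len(hdr):]

def send(stack, msg):
    """Stack is listed top (application) to bottom (wire)."""
    for layer in stack:
        msg = layer.push(msg)
    return msg

def recv(stack, msg):
    for layer in reversed(stack):
        msg = layer.pop(msg)
    return msg

stack = [Layer("rpc"), Layer("ip"), Layer("eth")]
wire = send(stack, "hello")   # "[eth][ip][rpc]hello"
```

Because every layer exposes the same two operations, stacks can be recomposed freely, which is the uniformity the abstract credits for predictable performance.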


Proceedings ArticleDOI
01 Mar 1991
TL;DR: The process by which users decide to customize is described, and the factors that influence when and how users make those decisions are examined; the findings have implications for both the design of software and the integration of new software into an organization.
Abstract: One of the properties of a user interface is that it both guides and constrains the patterns of interaction between the user and the software application. Application software is increasingly designed to be “customizable” by the end user, providing specific mechanisms by which users may specify individual preferences about the software and how they will interact with it over multiple sessions. Users may thus encode and preserve their preferred patterns of use. These customizations, together with choices about which applications to use, make up the unique “software environment” for each individual. While it is theoretically possible for each user to carefully evaluate and optimize each possible customization option, this study suggests that most people do not. In fact, since time spent customizing is time spent not working, many people do not take advantage of the customization features at all. I studied the customization behavior of 51 users of a Unix software environment, over a period of four months. This paper describes the process by which users decide to customize and examines the factors that influence when and how users make those decisions. These findings have implications for both the design of software and the integration of new software into an organization.

338 citations


Patent
28 Jun 1991
TL;DR: In this article, the authors define a workflow by providing a template of business activities that expresses the manner in which these activities relate to one another, and then the system orchestrates performance of the tasks in accordance with the template.
Abstract: Methods and apparatus for defining, executing, monitoring and controlling the flow of business operations. A designer first defines a workflow by providing a template of business activities that expresses the manner in which these activities relate to one another. The system orchestrates performance of the tasks in accordance with the template; in so doing, it integrates various types of application software, and partitions tasks among various users and computers.

291 citations
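The patent's core mechanism, executing tasks in an order dictated by a template of activity dependencies, amounts to topological ordering of a task graph. A minimal sketch, with an invented example template (the activity names and structure below are assumptions, not from the patent):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow template: each business activity maps to the set
# of activities it depends on.
template = {
    "draft":   set(),
    "review":  {"draft"},
    "approve": {"review"},
    "archive": {"approve"},
}

def orchestrate(template):
    """Return an execution order consistent with the template."""
    return list(TopologicalSorter(template).static_order())

order = orchestrate(template)   # dependencies always run first
```

A real engine would additionally dispatch each ready task to the right user or computer, as the abstract describes; the ordering step above is only the skeleton.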


Journal ArticleDOI
TL;DR: Research on improving the user interfaces of touch screen applications is described; the advantages of touch screens are discussed, their current capabilities are examined, and possible future developments are considered.
Abstract: Research on improving the user interfaces of touch screen applications is described. The advantages of touch screens are discussed, their current capabilities are examined, and possible future developments are considered.

260 citations


Journal ArticleDOI
TL;DR: The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is discussed and the effectiveness of multiversion software is studied by comparing estimates of the failure probability of these systems with the failure probabilities of single versions.
Abstract: The strategy of using multiple versions of independently developed software as a means to tolerate residual software design faults is discussed. The effectiveness of multiversion software is studied by comparing estimates of the failure probabilities of these systems with the failure probabilities of single versions. The estimates are obtained under a model of dependent failures and compared with estimates obtained when failures are assumed to be independent. The experimental results are based on 20 versions of an aerospace application developed and independently validated by 60 programmers from 4 universities. Descriptions of the application and development process are given, together with an analysis of the 20 versions.

188 citations
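The independence baseline that the paper compares against is easy to compute: under majority voting with independent, identically reliable versions, the system fails only when a majority of versions fail. A sketch of that baseline (the paper's contribution is precisely that real failures are *not* independent, so this is the comparison point, not their model):

```python
from math import comb

def majority_failure_prob(n, p):
    """P(a majority of n independent versions fail), each failing with prob p."""
    k_min = n // 2 + 1
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(k_min, n + 1))

# Three versions, each failing 10% of the time:
# 3 * 0.1**2 * 0.9 + 0.1**3 = 0.028, far below any single version's 0.1
p3 = majority_failure_prob(3, 0.1)
```

Dependent failures erode exactly this gap, which is why the paper estimates failure probabilities under a dependent-failure model.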


Patent
28 Feb 1991
TL;DR: In this article, a method for evaluating application software used with a computer system having a graphic user interface is presented, where the program continually checks a system-provided event record to determine if a user-initiated event has occurred.
Abstract: A method for evaluating application software used with a computer system having a graphic user interface. The method is implemented as a computer program that runs simultaneously with the application software. The program continually checks a system-provided event record to determine if a user-initiated event has occurred. If so, the program relates the event to an on-screen object of the graphic user interface and to the time at which it occurred. The program outputs an event capture log, which may be used for subsequent analysis.

103 citations
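The patent's loop, check the event record, relate each event to an on-screen object and a timestamp, append to a capture log, can be sketched as follows. The data shapes and the `<background>` fallback are invented for illustration; a real implementation would poll the windowing system's event queue:

```python
def capture(event_record, screen_objects):
    """Relate each user-initiated event to a GUI object and its time."""
    log = []
    for event in event_record:            # stands in for polling the system record
        obj = screen_objects.get(event["pos"], "<background>")
        log.append((event["type"], obj, event["t"]))
    return log

record = [{"type": "click", "pos": (10, 20), "t": 0.5},
          {"type": "click", "pos": (99, 99), "t": 1.2}]
objects = {(10, 20): "OK button"}
log = capture(record, objects)
```

The resulting log is exactly the kind of event-capture output the patent says "may be used for subsequent analysis."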


Proceedings ArticleDOI
01 May 1991
TL;DR: The authors summarize the goals of metric-driven analysis and feedback systems and describe a prototype system, Amadeus, which defines abstract interfaces and embodies architectural principles for these types of systems and provides an extensible framework for adding new empirically based analysis techniques.
Abstract: The authors summarize the goals of metric-driven analysis and feedback systems and describe a prototype system, Amadeus, which defines abstract interfaces and embodies architectural principles for these types of systems. Metric-driven analysis and feedback systems enable developers to define empirically guided processes for software development and maintenance. The authors provide an overview of the Amadeus system operation, including an example of the empirically guided process, a description of the system characteristics, an explanation of the system conceptual operation, and a summary of the users' view of the system. The centerpiece of the system is a pro-active server, which interprets scripts and coordinates event monitoring and agent activation. Amadeus provides an extensible framework for adding new empirically based analysis techniques.

100 citations


Proceedings ArticleDOI
18 Oct 1991
TL;DR: The environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways and allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph.
Abstract: As parallel systems expand in size and complexity, the absence of performance tools for these parallel systems exacerbates the already difficult problems of application program and system software performance tuning. Moreover, given the pace of technological change, we can no longer afford to develop ad hoc, one-of-a-kind performance instrumentation software; we need scalable, portable performance analysis tools. We describe an environment prototype based on the lessons learned from two previous generations of performance data analysis software. Our environment prototype contains a set of performance data transformation modules that can be interconnected in user-specified ways. It is the responsibility of the environment infrastructure to hide details of module interconnection and data sharing. The environment is written in C++ with the graphical displays based on X windows and the Motif toolkit. It allows users to interconnect and configure modules graphically to form an acyclic, directed data analysis graph. Performance trace data are represented in a self-documenting stream format that includes internal definitions of data types, sizes, and names. The environment prototype supports the use of head-mounted displays and sonic data presentation in addition to the traditional use of visual techniques.

92 citations


Proceedings ArticleDOI
D. Jewett1
25 Jun 1991
TL;DR: The goals for this machine, the system architecture, its implementation and resulting performance, and the hardware and software techniques incorporated to achieve fault tolerance are discussed.
Abstract: A description is given of Integrity S2, a fault-tolerant, Unix-based computing system designed and implemented to provide a highly available, fault-tolerant computing platform for Unix-based applications. Unlike some other fault tolerant computing systems, no additional coding at the user level is required to take advantage of the fault-tolerant capabilities inherent in the platform. The hardware is a RISC-based triple-modular-redundant processing core, with duplexed global memory and I/O subsystems. The goals for this machine, the system architecture, its implementation and resulting performance, and the hardware and software techniques incorporated to achieve fault tolerance are discussed. Fault tolerance has been accomplished without compromising the programmatic interface, operating system or system performance.

88 citations


Proceedings ArticleDOI
S.L. Lillevik1
28 Apr 1991
TL;DR: DELTA is the third of four major Touchstone Program prototype systems; it provides aggregate peak performance in excess of 30 GFLOPS and contains a new interconnect network based on a Caltech-designed router device.
Abstract: In September 1990, the Intel Corporation demonstrated the third of four major Touchstone Program prototype systems. Denoted DELTA, the prototype scales to over 500 nodes, provides aggregate peak performance in excess of 30 GFLOPS, and contains a new interconnect network based on a Caltech-designed router device. DELTA contains four heterogeneous node types for numeric, service, input/output, and network functions. The operating system supports message-passing paradigms and interfaces with a Concurrent File System. Users access DELTA across a local area network and may select either the C or FORTRAN programming languages. An interactive parallel debugger assists in application development and performance tuning.

84 citations


Proceedings ArticleDOI
13 May 1991
TL;DR: It is argued that multiuser distributed memory multiprocessors with dynamic mapping of the application onto the hardware structure are needed to make available the advantages of this type of architecture to a wider user community.
Abstract: It is argued that multiuser distributed memory multiprocessors with dynamic mapping of the application onto the hardware structure are needed to make available the advantages of this type of architecture to a wider user community. It is shown, based on an abstract model, that such architectures may be used efficiently. It is also shown that future developments in interconnection hardware will allow the fulfillment of the assumptions made in the model. Since a dynamic load balancing procedure will be one of the most important components in the systems software, the elements of its implementation are discussed and first results based on a testbed implementation are presented.

Patent
28 Feb 1991
TL;DR: In this paper, a method for evaluating application software used with a computer system having a graphic user interface is presented, where the program continually checks a system-provided event record to determine if a user-initiated event has occurred.
Abstract: A method for evaluating application software used with a computer system having a graphic user interface. The method is implemented as a computer program that runs simultaneously with the application software. The program continually checks a system-provided event record to determine if a user-initiated event has occurred. If so, the program relates the event to an on-screen object of the graphic user interface and to the time at which it occurred. Events and objects and their attributes are associated with identifiers so that the invention can be programmed to select only certain event data. The program outputs an event capture log, which may be used for subsequent analysis.

Proceedings ArticleDOI
01 Dec 1991
TL;DR: The author first presents a classification of simulation modeling tools, then a collection of features of simulation software, and a technique to reduce the vast number of simulation modeling tools to a manageable few.
Abstract: It is noted that the selection of appropriate simulation software from the vast number of packages available is a difficult task. The author first presents a classification of simulation modeling tools. Then, a collection of features of simulation software is discussed. Next, guidance is provided for selecting a simulation modeling tool. Lastly, a technique to reduce the vast number of simulation modeling tools to a manageable few is described. The selection of simulation software depends on the problems to be solved as much as on the characteristics of the various modeling tools.
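One common way to reduce a large tool list to "a manageable few" is a two-step narrowing: filter on must-have features, then rank survivors by a weighted score. The paper does not specify its technique, so the filter-then-rank scheme, the tool names, and the weights below are all illustrative assumptions:

```python
def shortlist(tools, must_have, weights):
    """Drop tools missing any must-have feature, then rank by weighted score."""
    survivors = [t for t in tools if must_have <= t["features"]]
    return sorted(survivors,
                  key=lambda t: -sum(weights.get(f, 0) for f in t["features"]))

tools = [
    {"name": "SimA", "features": {"animation", "stats", "optimization"}},
    {"name": "SimB", "features": {"stats"}},
    {"name": "SimC", "features": {"animation", "stats"}},
]
must = {"stats", "animation"}
weights = {"animation": 2, "stats": 1, "optimization": 3}
ranked = shortlist(tools, must, weights)   # SimB is filtered out
```

The point the abstract makes survives the sketch: the outcome depends as much on the problem (which features are must-haves, how they are weighted) as on the tools themselves.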

Journal ArticleDOI
TL;DR: A technique and an environment-supporting specialization of generalized software components are described, based on symbolic execution, that allows one to transform a generalized software component into a more specific and more efficient component.
Abstract: A technique and an environment-supporting specialization of generalized software components are described. The technique is based on symbolic execution. It allows one to transform a generalized software component into a more specific and more efficient component. Specialization is proposed as a technique that improves software reuse. The idea is that a library of generalized components exists and the environment supports a designer in customizing a generalized component when the need arises for reusing it under more restricted conditions. It is also justified as a reengineering technique that helps optimize a program during maintenance. Specialization is supported by an interactive environment that provides several transformation tools: a symbolic executor/simplifier, an optimizer, and a loop refolder. The conceptual basis for these transformation techniques is described, examples of their application are given, and how they cooperate in a prototype environment for the Ada programming language is outlined.
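The flavor of specialization can be shown with a toy analogue: given a generalized routine and a value known at specialization time, emit a specialized, loop-free version. The paper does this via symbolic execution for Ada components; the code-generation trick and the `power` example below are invented stand-ins for that machinery:

```python
def specialize_power(n):
    """Emit a specialized power function for a fixed exponent n >= 0."""
    body = " * ".join(["x"] * n) if n > 0 else "1"
    src = f"def power_{n}(x):\n    return {body}\n"
    namespace = {}
    exec(src, namespace)          # compile the specialized component
    return namespace[f"power_{n}"], src

cube, src = specialize_power(3)   # generated body: return x * x * x
```

The specialized component does less work than the general loop, which is the efficiency gain the abstract claims for reuse "under more restricted conditions."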

Proceedings ArticleDOI
17 May 1991
TL;DR: A case study is given of an application of the Akaike Information Criterion used to select the best model for a system and then that model was used to predict the number of remaining errors.
Abstract: Predicting the remaining errors in a software system historically has been difficult to do with accuracy. The models used to predict future events have often worked well on one system or collection of data, and not at all well on another. Much of the recent work in the software reliability field has been on model selection and identifying which model would work well with which software system. The Akaike Information Criterion can be used to select the best model from among several models. A case study is given of an application of this technique to an ongoing software project. The Akaike Information Criterion was used to select the best model for a system and then that model was used to predict the number of remaining errors.
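The selection rule itself is compact: AIC = 2k − 2 ln L, where k is the number of model parameters and ln L the maximized log-likelihood, and the model with the lowest AIC wins. The candidate model names and fitted log-likelihoods below are invented for illustration, not the case study's data:

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * log_likelihood

# Hypothetical reliability models: (parameter count, fitted log-likelihood)
models = {
    "geometric":          (2, -120.4),
    "jelinski-moranda":   (2, -118.9),
    "littlewood-verrall": (3, -118.1),
}
best = min(models, key=lambda m: aic(*models[m]))
```

Note how the criterion penalizes the extra parameter: the three-parameter model fits slightly better but does not win, which is AIC's guard against overfitting.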

Journal ArticleDOI
01 Sep 1991
TL;DR: A method is presented for static allocation of modules to processors, under the constraints of minimising interprocessor communication (IPC) cost and balancing load; the algorithm provides a near-optimal solution.
Abstract: In distributed computing systems, partitioning of application software into modules and proper allocation of modules among processors are important factors for efficient utilisation of resources. A method for static allocation of modules to processors, with the constraints of minimising interprocessor communication (IPC) cost and load balancing is presented. The heuristic approach forms module clusters around maximally linked modules or attached modules, and restricts the cluster size to the average load to be assigned to each processor. While modules are being allocated, specific capabilities of the processors can also be taken into consideration. The module allocation with the above constraints is carried out in a single phase, and the algorithm provides a near optimal solution.
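The clustering idea, grow a cluster around a maximally linked module, capped at the average load per processor, can be sketched greedily. This is a hedged reconstruction of the heuristic's outline, not the paper's algorithm; the data, tie-breaking, and single-pass structure are assumptions:

```python
def cluster_modules(loads, ipc, n_procs):
    """Greedy clustering: seed on the most-linked module, grow under a load cap."""
    cap = sum(loads.values()) / n_procs        # average load per processor
    unassigned = set(loads)
    clusters = []
    while unassigned:
        # seed with the module having the largest total IPC to the rest
        seed = max(sorted(unassigned),
                   key=lambda m: sum(ipc.get(frozenset((m, o)), 0)
                                     for o in unassigned))
        cluster, size = {seed}, loads[seed]
        unassigned.remove(seed)
        while True:
            # candidates: unassigned modules linked to the cluster, fitting the cap
            cands = [(ipc.get(frozenset((m, o)), 0), o)
                     for m in cluster for o in unassigned]
            cands = [c for c in cands if c[0] > 0 and size + loads[c[1]] <= cap]
            if not cands:
                break
            _, pick = max(cands)               # attach the most strongly linked
            cluster.add(pick)
            size += loads[pick]
            unassigned.remove(pick)
        clusters.append(cluster)
    return clusters

loads = {"a": 1, "b": 1, "c": 1, "d": 1}
ipc = {frozenset(("a", "b")): 5, frozenset(("c", "d")): 4}
parts = cluster_modules(loads, ipc, n_procs=2)
```

Heavily communicating pairs end up co-located, so their IPC cost disappears, while the cap keeps the two processors' loads equal, the two constraints the abstract names.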

Proceedings ArticleDOI
14 Oct 1991
TL;DR: A method is presented for improving the performance of many computationally intensive tasks by extracting information at compile-time to synthesize new operations that augment the functionality of a core processor.
Abstract: Substantial gains can be achieved by allowing the configuration and fundamental operations of a processor to adapt to a user's program. A method is presented for improving the performance of many computationally intensive tasks by extracting information at compile-time to synthesize new operations that augment the functionality of a core processor. The newly synthesized operations are targeted to RAM-based reconfigurable logic located within the processor. A proof-of-concept system called PLADO, consisting of a C configuration compiler and a hardware platform, is presented. Computation and performance results confirm the concept viability, and demonstrate significant speed-up.

Patent
14 Feb 1991
TL;DR: In this article, a microcontroller, separate from the main processor, is used for power-management functions in a personal computer; the microcontroller can take control of the system bus under certain conditions, permitting power management to be performed without placing any burden or constraints on the user's choice of operating system or application software.
Abstract: A personal computer in which a microcontroller, separate from the main processor, is used for power-management functions. Under certain conditions, this power-management microcontroller can take control of the system bus. This provides BIOS-independent power management, and permits sophisticated power management to be performed without placing any burden or constraints on the user's choice of operating system or application software.

Patent
27 Nov 1991
TL;DR: In this paper, the authors present an enhanced virtual software machine that provides a virtual execution environment in a target computer for application software programs having execution dependencies incompatible with a software execution environment on the target computer.
Abstract: An enhanced virtual software machine that provides a virtual execution environment in a target computer for application software programs having execution dependencies incompatible with a software execution environment on the target computer. The machine comprises a plurality of independent processes, a management interface for generating requests for execution to the plurality of independent processes and receiving results of such processing, and a preprocessor for generating a set of native executable program modules. According to one embodiment, the virtual software machine binds a task manager control module into a single address space of the target computer operating system for each user that attaches to the system. Upon receipt of a transaction request, a dynamic binding facility dynamically binds one or more of the program modules into the single address space for scheduling and execution under the control of the task manager control module. At least one of the program modules calls the management interface upon encountering an execution dependency in the program module and effects the required functionality using an independent process. A task management library is also bound in the single address space and functions to preserve, release and/or restore a context of each of the one or more program modules loading into the single address space during execution of the program modules by the task manager control module.

Proceedings ArticleDOI
17 May 1991
TL;DR: The central theme of the study is the creation of a suitable complexity measure for use in software reliability models by employing factor analytic techniques to reduce the dimensionality of the complexity problem space to produce a set of reduced metrics.
Abstract: The central theme of the study is the creation of a suitable complexity measure for use in software reliability models. Factor analytic techniques are employed to reduce the dimensionality of the complexity problem space to produce a set of reduced metrics. The reduced metrics are subsequently combined into a single relative complexity measure. Program complexity varies dynamically as a function of inputs to the system. Hence, the notion of relative complexity is extended to a dynamic or functional complexity metric for use in proposed modifications to existing reliability models.

Journal ArticleDOI
TL;DR: The integrated environment contains extensive color graphics representations of power systems, interface with external power system simulation programs, dynamic display of the results on the graphical representation of the network, and straightforward communication among the external and the internal databases.
Abstract: An integrated graphics environment for power system analysis and design (PSADE) is presented. The environment runs on a personal computer with standard memory and graphics hardware requirements (VGA or EGA). PSADE is based on a general purpose graphics development software which provides numerous tools to facilitate the creation and modification of particular power system analysis applications. The integrated environment contains extensive color graphics representations of power systems, interface with external power system simulation programs, dynamic display of the results on the graphical representation of the network, and straightforward communication among the external and the internal databases. The use of this environment is illustrated by several particular applications. PSADE is specially useful to universities and industry for teaching and training purposes, as well as being a valid research tool for preliminary study of planning or operational schemes.

Patent
01 Mar 1991
TL;DR: In this article, a method of operating a computer causes information to be presented through visual displays using alternate modes of expression; the method is practiced by application software which operates on a general purpose computer.
Abstract: A method of operating a computer causes information to be presented through visual displays using alternate modes of expression. The method is practiced by application software which operates on a general purpose computer. The application software separates concept data from expression data. Concept data identify information which the software intends to present in a visual display, without specifying any particular one of a plurality of forms of expression that might be used to express the concept data. Expression data cause the visual display to form images in accordance with a specific mode of expression. A user specifies the expression modes to utilize while operating the application program. For example, linguistic information may be presented in English, Spanish, German, French, or other languages, and graphic information may be presented in color or in black and white. Expression tables which translate concept data into expression data are loaded into primary storage. The user may change an expression mode while the application software is executing. To change an expression mode, new expression data tables are overlaid over old expression data tables without altering the executable program, concept data tables, or other permanent data.
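The concept/expression split the patent describes is, in modern terms, table-driven localization: code emits concept identifiers, a swappable expression table turns them into concrete output. A minimal sketch (the table contents and class names are invented for illustration):

```python
# Expression tables translate concept data into expression data; swapping
# tables changes the mode without touching the "executable program".
EXPRESSION_TABLES = {
    "en": {"GREETING": "Hello", "FAREWELL": "Goodbye"},
    "de": {"GREETING": "Hallo", "FAREWELL": "Auf Wiedersehen"},
}

class Display:
    def __init__(self, mode):
        self.table = EXPRESSION_TABLES[mode]

    def set_mode(self, mode):
        """Overlay a new expression table while the application runs."""
        self.table = EXPRESSION_TABLES[mode]

    def render(self, concept):
        return self.table[concept]
```

Because application logic only ever names concepts, adding a new language means adding a table, exactly the property the abstract claims for changing modes at run time.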

Proceedings ArticleDOI
22 Sep 1991
TL;DR: This paper presents a knowledge-based approach to supporting understanding-intensive tasks in software maintenance and re-engineering that uses programming language knowledge to parse the source code and analyze its semantics.
Abstract: Software understanding is the process of recovering high-level, functionality-oriented information from the source code. This paper presents a knowledge-based approach to supporting understanding-intensive tasks in software maintenance and re-engineering. The approach uses programming language knowledge to parse the source code and analyze its semantics. It uses general programming and application domain knowledge to automate the recognition of functional concepts. Also, a set of presentation, focusing, and editing tools is provided for the user to view and modify the source code and to extract reusable components from it. Two workbench environments that we have recently developed based on this approach are described.

Proceedings ArticleDOI
17 May 1991
TL;DR: Software reliability methods are applied to a major subset of BNR's software to determine if the total number of customer-perceived failures and actual software faults can be predicted before or soon after a new release of such a system.
Abstract: BNR, the R&D subsidiary of Northern Telecom and Bell Canada, has one of the largest software systems in the world, with code libraries exceeding 8 million source lines of a high level language. This software is used in the high-end digital switching systems that Northern Telecom markets. Software reliability methods are applied to a major subset of this software to determine if the total number of customer-perceived failures and actual software faults can be predicted before or soon after a new release of such a system. These predictions are based on pre-customer testing (alpha and beta) and small field trials. Many of the existing reliability models and methods of parameter estimation currently demonstrated in the literature are compared.

Proceedings ArticleDOI
17 May 1991
TL;DR: The evaluation results indicate that the proposed ELC Model not only performs better than all the other models by a wide margin, but also enjoys favorable properties that practitioners would like to see in their software reliability modeling practices.
Abstract: A heuristic approach is given to addressing the software reliability modeling problem. The heuristic approach is based on a linear combination of three popular software reliability models. A simple, predetermined combination is suggested by assigning equal weights to each component model for the final delivery of the software reliability prediction. In a preliminary examination, this equally-weighted linear combination (ELC) model is judged to perform well when applied to three published software failure data sets. The authors further present five other sets of software failure data taken from major projects at the Jet Propulsion Laboratory, and apply the ELC model as well as six other popular models for a detailed comparison and evaluation. The evaluation results indicate that the proposed ELC model not only performs better than all the other models by a wide margin, but also enjoys favorable properties that practitioners would like to see in their software reliability modeling practices. These properties include: simplicity, low risk, ease of application, and insensitivity to data noise.
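The ELC combination itself is as simple as the abstract suggests: with equal weights over three component models, the final prediction is their mean. The component predictions below are invented numbers for illustration:

```python
def elc(predictions):
    """Equally-weighted linear combination of three model predictions."""
    assert len(predictions) == 3, "ELC combines exactly three component models"
    return sum(predictions) / 3

# e.g. three reliability models predict 12, 15 and 18 remaining failures
combined = elc([12, 15, 18])
```

The appeal is exactly what the paper lists: no weights to fit (simplicity, low risk) and averaging damps any one model's noise (insensitivity to data noise).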

Patent
27 Nov 1991
TL;DR: In this paper, a first-language basic input/output control program is linked with a first language OS for executing first language application software in object code form, an emulator linked with second-language application software, and a way of activating and terminating the emulator.
Abstract: The present invention includes a first-language basic input/output control program, linked with a first-language OS on a computer and an input/output device, for executing first-language application software in object code form; an emulator, linked with second-language application software, the first-language basic input/output control program and the first-language OS, for executing the second-language application software in object code form; and a means of activating and terminating the emulator. Thereby, there is provided a computer system which permits the first-language application software and the second-language application software to be operated using the identical computer, input/output device and first-language OS.

Journal ArticleDOI
TL;DR: This work discusses the experiences using one such set of software development tools available on the NeXT workstation and describes the effort required to port the MidasPlus molecular modeling package to the NeXT workstation.

Proceedings ArticleDOI
01 Jan 1991
TL;DR: An adaptive multiple grid generation method and relaxation schemes were implemented on the iPSC/2 hypercube, based on a general software tool that takes care of the data exchanges needed and controls the load-balanced execution of the application by redistributing the data among the processors of the distributed system.
Abstract: The solution of partial differential equations on two-dimensional domains can benefit from the use of irregular grids. In the case of multigrid solution techniques, a hierarchy of nested grids is used. We implemented an adaptive multiple grid generation method and relaxation schemes on the iPSC/2 hypercube, based on a general software tool for the manipulation of data-parallel applications characterized by an arbitrary and varying topology. This tool takes care of the data exchanges needed to ensure the consistency of the distributed data structure and controls the load-balanced execution of the application by redistributing the data among the processors of the distributed memory parallel computer. Remapping is done after each grid refinement. In a multigrid context, an appropriate choice of data structures allows coordination of the migration of pieces of the grids on different levels, so as to limit inter-grid communication.

Proceedings ArticleDOI
08 Jan 1991
TL;DR: This paper begins by justifying the importance of graphical user interfaces (GUIs) and the need for proper validation and concludes with a strategy for validation, based on derivation of test cases from a formal specification.
Abstract: This paper begins by justifying the importance of graphical user interfaces (GUIs) and the need for proper validation. The various problems in GUI validation are classified into three categories: functional, structural and environmental issues. The functional aspects of GUI are examined from the mapping of display objects on screen, interaction functions, to basic interaction components and window management functions. The largest functional issue identified is the lack of a formal specification suitable for deriving test cases. The main structural problem is in deciding on which of the software levels (i.e. window systems, toolkits, UIMS and applications) to target tests. The environmental issues concern human testers, automation, input synthesis and output visual verification. At the heart of all software testing activities, whether GUI or conventional, lies the problem of test case selection as testing budgets are finite. This paper concludes with a strategy for validation, based on derivation of test cases from a formal specification.

Proceedings ArticleDOI
25 Oct 1991
TL;DR: The paper presents HDM-hypertext design model, a first attempt toward developing high level design primitives for describing static and dynamic aspects of hypertext applications from the authoring-in-the-large point of view.
Abstract: This paper discusses a structured approach to hypertext application design which we have called "authoring-in-the-large". This approach is based on the belief that in order to get a consistent, expressive, usable hypertext, the application should be first designed at conceptual level in a system independent manner. The author should try to describe global properties of an application - its representation structures, navigational patterns, operational semantics, overall visualization and display aspects, before actually creating and filling in the nodes' content. In addition, this paper presents HDM - Hypertext Design Model, a first attempt toward developing high level design primitives for describing static and dynamic aspects of hypertext applications from the authoring-in-the-large point of view.