
Showing papers on "Application software published in 1989"


Patent
21 Jul 1989
TL;DR: In this article, a multimedia interface (150) presents information and receives user commands for a computer system (160), which operates in parallel with another application software module (52, 54), such as an expert system.
Abstract: A multimedia interface (150) presents information and receives user commands for a computer system (160). The multimedia interface (150) operates in parallel with another application software module (52, 54), such as an expert system. To add multimedia features to the application software module (52, 54), the module is modified so as to generate multimedia commands at the same time as it displays text on a text monitor. The multimedia commands, which are held in a queue (74), provide additional information in the form of video images and generated speech corresponding to the displayed text. In addition, the multimedia commands are split into at least two sets: one set which is dispatched to the user substantially immediately after displaying the corresponding text, and one set which is dispatched only upon request by the user. In the preferred embodiment, the multimedia interface (150) presents information to the user through text, graphics, video, speech production, and printed output. User inputs are made through a special-function keypad (56) and voice recognition (62). The preferred embodiment is a portable expert system which fits in a single portable suitcase-sized package (200).
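The patent's split between immediately dispatched and on-request multimedia commands can be sketched as a two-queue dispatcher. This is a hypothetical illustration; the class, method, and command names are invented, not taken from the patent:

```python
from collections import deque

class MultimediaQueue:
    """Two-set command queue: 'immediate' commands go out with the text they
    accompany; 'on-request' commands wait for an explicit user request."""

    def __init__(self):
        self.immediate = deque()
        self.on_request = deque()

    def enqueue(self, command, deferred=False):
        (self.on_request if deferred else self.immediate).append(command)

    def display_text(self, text):
        # Dispatch the text plus every queued immediate command.
        out = [text]
        while self.immediate:
            out.append(self.immediate.popleft())
        return out

    def user_requests_more(self):
        out = list(self.on_request)
        self.on_request.clear()
        return out

mq = MultimediaQueue()
mq.enqueue("play video of valve assembly")               # immediate set
mq.enqueue("speak extended explanation", deferred=True)  # on-request set
first = mq.display_text("Inspect valve assembly")
more = mq.user_requests_more()
```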

424 citations


Journal ArticleDOI
TL;DR: Computer simulation studies are presented which demonstrate significantly improved reconstructed images achieved by an ART algorithm as compared to IRR methods.
Abstract: The author presents an algebraic reconstruction technique (ART) as a viable alternative in computerized tomography (CT) from limited views. Recently, algorithms of iterative reconstruction-reprojection (IRR) based on the method of convolution-backprojection have been proposed for application in limited-view CT. Reprojection was used in an iterative fashion alternating with backprojection as a means of estimating projection values within the sector of missing views. In algebraic methods of reconstruction for CT, only those projections corresponding to known data are required. Reprojection along missing views would merely serve to introduce redundant equations. Computer simulation studies are presented which demonstrate significantly improved reconstructed images achieved by an ART algorithm as compared to IRR methods.
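The ART family the author builds on is commonly realized as a Kaczmarz-style row-action iteration over only the measured projection equations — exactly the property the abstract highlights, since missing views simply contribute no equations. A minimal pure-Python sketch of the generic textbook update, not the paper's specific algorithm:

```python
def art_reconstruct(rows, b, n_vox, n_iters=50, relax=1.0):
    """Kaczmarz-style ART: cycle through the measured projection equations
    a_i . x = b_i, projecting the estimate onto each hyperplane in turn.
    Missing views simply contribute no equations - no reprojection needed."""
    x = [0.0] * n_vox
    for _ in range(n_iters):
        for a, bi in zip(rows, b):
            dot = sum(aj * xj for aj, xj in zip(a, x))
            denom = sum(aj * aj for aj in a)
            if denom:
                c = relax * (bi - dot) / denom
                x = [xj + c * aj for xj, aj in zip(x, a)]
    return x

# Toy 2x2 "image" observed through its row sums and column sums only
rows = [[1, 1, 0, 0],
        [0, 0, 1, 1],
        [1, 0, 1, 0],
        [0, 1, 0, 1]]
x_true = [1.0, 2.0, 3.0, 4.0]
b = [sum(a * v for a, v in zip(row, x_true)) for row in rows]
x = art_reconstruct(rows, b, 4)
```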

290 citations


Journal ArticleDOI
TL;DR: The concept of distributed execution of recovery blocks is examined as an approach for uniform treatment of hardware and software faults and a specific formulation of the approach aimed at minimizing the recovery time is presented, called the distributed recovery blocks scheme.
Abstract: The concept of distributed execution of recovery blocks is examined as an approach for uniform treatment of hardware and software faults. A useful characteristic of the approach is the relatively small time cost it requires. The approach is thus suitable for incorporation into real-time computer systems. A specific formulation of the approach that is aimed at minimizing the recovery time is presented, called the distributed recovery blocks (DRB) scheme. The DRB scheme is capable of effecting forward recovery while handling both hardware and software faults in a uniform manner. An approach to incorporating the capability for distributed execution of recovery blocks into a load-sharing multiprocessing scheme is also discussed. Two experiments aimed at testing the execution efficiency of the scheme in real-time applications have been conducted on two different multimicrocomputer networks. The results clearly indicate the feasibility of achieving tolerance of hardware and software faults.
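A recovery block pairs a primary routine with alternates and an acceptance test; the distributed scheme runs the primary and an alternate concurrently on separate nodes so a backup result is ready without delay. A sequential sketch of the underlying construct — the example functions are hypothetical, and the distributed node placement is only noted in the docstring:

```python
def recovery_block(primary, alternates, acceptance_test, x):
    """Classic recovery block: run the primary, check its result with the
    acceptance test, and fall back to alternates on failure. In the DRB
    scheme the primary and the first alternate execute concurrently on
    separate nodes, so the backup result is available without delay."""
    for variant in [primary] + alternates:
        try:
            result = variant(x)
        except Exception:
            continue                    # treat a crash like a failed test
        if acceptance_test(x, result):
            return result
    raise RuntimeError("all variants failed the acceptance test")

# Hypothetical example: a primary sqrt that is wrong for inputs below 1
buggy_sqrt = lambda x: x ** 0.5 if x >= 1 else -1.0
safe_sqrt = lambda x: abs(x) ** 0.5
accept = lambda x, r: r >= 0 and abs(r * r - abs(x)) < 1e-9
```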

222 citations


Proceedings ArticleDOI
14 May 1989
TL;DR: Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from a six-axis teleoperator device, from an autonomous planner, or from a combination of the two.
Abstract: Preliminary results are reported from efforts to design and develop a robotic system that will accept and execute commands from a six-axis teleoperator device, from an autonomous planner, or from a combination of the two. Such a system should have both traded and shared control capability. A sharing strategy is presented whereby the overall system retains the positive features of teleoperated and autonomous operation while shedding their individual negative features. A two-tiered shared control architecture is considered here, consisting of a task level and a servo level. Also presented is a computer architecture for the implementation of this system, including a description of the hardware and software.

143 citations


Proceedings ArticleDOI
21 Jun 1989
TL;DR: The authors address the dependability validation of fault-tolerant computing systems, and more specifically the validation of the fault-tolerance mechanisms, through the realization of a general physical-fault injection tool (MESSALINE).
Abstract: The authors address the dependability validation of fault-tolerant computing systems and more specifically the validation of the fault-tolerance mechanisms. Their approach is based on the use of fault injection at the physical level on a hardware/software prototype of the system considered. The place of this approach in a validation-directed design process, as well as its place with respect to related work on fault injection, is identified. The major requirements and problems related to the development and application of a validation methodology based on fault injection are presented and discussed. The proposed methodology has been implemented through the realization of a general physical-fault injection tool (MESSALINE), whose usefulness is demonstrated by its application to the experimental validation of a subsystem of a computerized interlocking system for railway control applications.

137 citations


Proceedings ArticleDOI
01 May 1989
TL;DR: This paper focuses on GURU's indexing component, which extracts conceptual attributes from natural-language documentation based on word co-occurrences and goes further than keyword-based tools in the understanding of a document without the brittleness of knowledge-based tools.
Abstract: In contrast to other kinds of libraries, software libraries need to be conceptually organized. When looking for a component, the main concern of users is the functionality of the desired component; implementation details are secondary. Software reuse would be enhanced by conceptually organized large libraries of software components. In this paper, we present GURU, a tool that automatically builds such large software libraries from documented software components. We focus here on GURU's indexing component, which extracts conceptual attributes from natural-language documentation. This indexing method is based on word co-occurrences. It first uses EXTRACT, a co-occurrence knowledge compiler, for extracting potential attributes from textual documents. Conceptually relevant collocations are then selected according to their resolving power, which scales down the noise due to context words. This fully automated indexing tool thus goes further than keyword-based tools in the understanding of a document, without the brittleness of knowledge-based tools. The indexing component of GURU is fully implemented, and some results are given in the paper.
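Co-occurrence-based indexing of the kind GURU performs can be approximated in a few lines: collect word pairs that fall within a small window and keep those whose observed frequency clearly exceeds chance. The score below is a generic PMI-style stand-in, not GURU's actual resolving-power measure, and the thresholds are illustrative:

```python
from collections import Counter
import math

def cooccurrence_attributes(docs, window=5, min_count=2):
    """Collect word pairs co-occurring within a window and rank them by a
    PMI-style score: log of observed pair count over the count expected
    from the individual word frequencies. High-scoring pairs become
    candidate index attributes."""
    word_freq = Counter()
    pair_freq = Counter()
    total = 0
    for doc in docs:
        words = doc.lower().split()
        word_freq.update(words)
        total += len(words)
        for i in range(len(words)):
            for j in range(i + 1, min(i + window, len(words))):
                pair_freq[tuple(sorted((words[i], words[j])))] += 1
    scored = {}
    for (w1, w2), c in pair_freq.items():
        if c >= min_count and w1 != w2:
            expected = word_freq[w1] * word_freq[w2] / total
            scored[(w1, w2)] = math.log(c / expected)
    return scored

docs = ["open the file and read the file header",
        "read the file then close the file"]
attrs = cooccurrence_attributes(docs, window=4)
```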

94 citations


Proceedings ArticleDOI
06 Sep 1989
TL;DR: Existing design rules for carry-skip adders yield a nonoptimal distribution of groups and subgroups of the carry-skip circuits, degrading the adder's worst-case delay; a new algorithm is presented for determining the optimal distribution with no restriction on the number of skip levels.
Abstract: The carry-skip adder, because of its greater topological regularity and layout simplicity, is considered a good compromise in terms of area and performance. Some general rules have been suggested for its design, but they tend to overlook many important implementation details and cannot be applied to carry-skip adders with more than two levels of carry-skip or with different delays in the carry paths. The result is a nonoptimal distribution of groups and subgroups of the carry-skip circuits, degrading the worst-case delay of the adder. A new algorithm for determining the optimal distribution with no restriction on the number of skip levels is presented. Some results and conclusions regarding the realization of such an adder in bipolar ECL technology are also presented.
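Functionally, a one-level carry-skip adder ripples carries within each group but lets the incoming carry bypass any group whose bits all propagate. A bit-level Python model of that behavior — the fixed group sizes here are illustrative, not the optimized distribution the paper's algorithm computes:

```python
def carry_skip_add(a, b, width=16, groups=(4, 4, 4, 4)):
    """Bit-level model of a one-level carry-skip adder. Within each group a
    plain ripple carry is used; if every bit of the group propagates
    (p_i = a_i XOR b_i = 1), the incoming carry takes the AND-OR bypass.
    The bypass yields the same carry value as the ripple chain - in
    hardware it just arrives much sooner."""
    assert sum(groups) == width
    s, carry, bit = 0, 0, 0
    for g in groups:
        cin = carry
        group_propagate = True
        for _ in range(g):
            ai = (a >> bit) & 1
            bi = (b >> bit) & 1
            p = ai ^ bi
            group_propagate &= bool(p)
            s |= (p ^ carry) << bit          # sum bit uses the carry-in
            carry = (ai & bi) | (p & carry)  # ripple carry within the group
            bit += 1
        if group_propagate:
            carry = cin                      # carry skips the whole group
    return s & ((1 << width) - 1), carry

s, cout = carry_skip_add(12345, 54321)
```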

90 citations


Patent
18 Jul 1989
TL;DR: An interactive video-audio-computer open architecture system and method with dynamically reconfigurable software providing a virtual device interface buffering applications from hardware and with extended flexibility and universal compatibility with the myriad of industry hardware products and standards (videodisc players, graphics, microprocessors, computers, video and audio sources, etc.).
Abstract: An interactive video-audio-computer open architecture system and method with dynamically reconfigurable software providing a virtual device interface buffering applications from hardware and with extended flexibility and universal compatibility with the myriad of industry hardware products and standards (videodisc players, graphics, microprocessors, computers, video and audio sources, etc.), all with transparency to the user and without the requirement of modification of application software irrespective of which hardware product is connected to the system.

62 citations


Journal ArticleDOI
01 Apr 1989
TL;DR: A practical approach to improving software maintenance through measurement is presented, based on general models for measurement and improvement and illustrated with applications to real-world maintenance environments.
Abstract: A practical approach to improving software maintenance through measurement is presented. This approach is based on general models for measurement and improvement. Both models, their integration, and practical guidelines for transferring them into industrial maintenance settings are presented. Several examples of applications of the approach to real-world maintenance environments are discussed.

54 citations


Patent
10 Aug 1989
TL;DR: In this paper, the authors present a system and method for providing application program portability and consistency across a number of different hardware, database, transaction processing and operating system environments, which includes a plurality of processes for performing one or more tasks required by the application software.
Abstract: A system and method for providing application program portability and consistency across a number of different hardware, database, transaction processing and operating system environments. In the preferred embodiment, the system includes a plurality of processes for performing one or more tasks required by the application software in one or more distributed processors of a heterogeneous or "target" computer. In a run-time mode, program code of the application software is pre-processed, compiled and linked with system interface modules to create code executable by an operating system of the target computer. The executable code, which includes a number of functional calls to the processes, is run by the operating system to enable the processes to perform the tasks required by the application software. Communications to and from the processes are routed by a blackboard switch logic through a partitioned storage area or "blackboard".

50 citations


Proceedings ArticleDOI
01 Oct 1989
TL;DR: A set of features which should be considered when evaluating simulation software, and also a four-step selection strategy, are presented.
Abstract: The number of simulation packages available for performing manufacturing analyses has grown tremendously during the past five years, making it increasingly more difficult for an analyst to choose simulation software for a particular application. In this paper, we present a set of features which should be considered when evaluating simulation software, and also a four-step selection strategy.

Proceedings ArticleDOI
06 Feb 1989
TL;DR: The use of set-oriented disk access whereby a variable-sized set of pages can be fetched or flushed to disk in a single call to the I/O system is proposed, providing fast access to variable-length complex objects, yet retains the advantages of a page-structured buffer pool with a conventional frame size.
Abstract: The use of set-oriented disk access, whereby a variable-sized set of pages can be fetched or flushed to disk in a single call to the I/O system, is proposed. This solution provides fast access to variable-length complex objects, yet retains the advantages of a page-structured buffer pool with a conventional frame size. A set-oriented I/O manager has been implemented in the Darmstadt database kernel system using the data-chained I/O method. Performance measurements indicate considerable enhancement of throughput as well as response time. In the experiments, set-oriented disk access for very large objects performed up to 25 times faster than conventional I/O.
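The core idea — one I/O call for a whole set of pages — can be sketched by coalescing requested page numbers into contiguous runs and issuing one positioned read per run. This is a user-level simplification of data-chained I/O, which would hand the device a scatter/gather list; the `PAGE` size and class name are invented for illustration:

```python
import os
import tempfile

PAGE = 4096

class SetOrientedFile:
    """Fetch a set of pages with one positioned read per contiguous run,
    rather than one call per page. Adjacent page numbers are merged so a
    run of pages costs a single os.pread."""

    def __init__(self, path):
        self.fd = os.open(path, os.O_RDONLY)

    def read_pages(self, page_nos):
        pages = {}
        nos = sorted(set(page_nos))
        i = 0
        while i < len(nos):
            j = i
            while j + 1 < len(nos) and nos[j + 1] == nos[j] + 1:
                j += 1                      # extend the contiguous run
            run = os.pread(self.fd, (j - i + 1) * PAGE, nos[i] * PAGE)
            for k, n in enumerate(nos[i:j + 1]):
                pages[n] = run[k * PAGE:(k + 1) * PAGE]
            i = j + 1
        return pages

# Demo: a 4-page file; pages 0, 1 and 3 are fetched in two calls (runs 0-1, 3)
tmp = tempfile.NamedTemporaryFile(delete=False)
for n in range(4):
    tmp.write(bytes([n]) * PAGE)
tmp.close()
pages = SetOrientedFile(tmp.name).read_pages([0, 1, 3])
```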

Journal ArticleDOI
TL;DR: Algorithmic adaptability, which supports techniques for switching between classes of schedulers in distributed transaction systems, is modeled and an experimental system implemented to support experimentation in adaptability is discussed.
Abstract: Adaptability is an essential tool for managing escalating software costs and for building high-reliability, high-performance systems. Algorithmic adaptability, which supports techniques for switching between classes of schedulers in distributed transaction systems, is modeled. RAID, an experimental system implemented to support experimentation in adaptability, is discussed. Adaptability features in RAID, including algorithmic adaptability, fault tolerance, and implementation techniques for an adaptable server-based design, are modeled.

Patent
02 Oct 1989
TL;DR: In this paper, a data processor, such as a digital signal processor, is placed under the control of a group of abstract object-oriented modules arranged with an underlying operational nucleus that includes a real-time kernel.
Abstract: A data processor, such as a digital signal processor, that has augmented memory, I/O and math units for real-time performance of complex functions, is placed under the control of a group of abstract object-oriented modules arranged with an underlying operational nucleus that includes a real-time kernel. The modules are hierarchically configured, with the lowest being an array object type that references memory allocations. A stream object type, based on the arrays, defines channels between application software and data devices. A vector object type, also based on the arrays, establishes structure within allocated blocks and also enables vector math functions to be undertaken by the vector module. Matrix and filter object types utilize the arrays and vectors in sequences controlled by the corresponding matrix and vector modules. The system provides a model of DSP functionality that is hardware independent, and an interface between high level language calls and highly efficient routines executed in assembly language. With this arrangement a large library of math functions is held available for use in real-time operations of complex nature.

Journal ArticleDOI
22 May 1989
TL;DR: In this paper, an approach to electrical analysis of VLSI packaging interconnections using computer simulation is discussed, and the current status of work is discussed and directions of future research are delineated.
Abstract: An approach to electrical analysis of VLSI packaging interconnections using computer simulation is discussed. Corresponding simulation software developed during the course of research on VLSI interconnections is described. Examples of application to prototypical interconnections (two-transmission-line systems joined by a lumped-parameter network and a transmission line terminated by a network of bipolar and MOS transistors) are provided. Simulation results for the above examples are presented and analyzed. The current status of work is discussed, and directions of future research are delineated.

Journal ArticleDOI
Jan P. Kruys1
TL;DR: A functional decomposition of the security of distributed systems is described, thus providing building blocks which can be specified and analysed in detail and can be used to establish a secure distributed processing environment for both private application software and public communication services such as electronic mail.

Journal ArticleDOI
TL;DR: A tri-module redundant (TMR) multiprocessor system for increased availability in a real-time application is presented; the system has been used to drive a mobile trolley.
Abstract: A tri-module redundant (TMR) multiprocessor system for increased availability in a real-time application is presented. The system incorporates three homogeneous Z-80 based microcomputers, each with necessary analog/digital I/O facilities and global communication hardware. The software design is modular in nature and is, therefore, cost effective and adaptable for expansion to the N-module redundant (NMR) system. The retry mechanism has been employed for recovery from transient faults. The number of retries is programmable, which makes the system adaptable to an application environment. The system has been used to drive a mobile trolley.
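The TMR voting-plus-retry behavior the abstract describes reduces to a 2-of-3 majority vote with a programmable retry budget for transient disagreement. A sketch — the module functions below are hypothetical placeholders for the three microcomputers' outputs:

```python
def tmr_vote(readings):
    """2-of-3 majority vote; None signals total disagreement."""
    a, b, c = readings
    if a == b or a == c:
        return a
    if b == c:
        return b
    return None

def tmr_with_retry(modules, max_retries=3):
    """Run all three modules and vote; on total disagreement, retry up to
    max_retries times, since transient faults often clear on a rerun. The
    retry count is a parameter, mirroring the paper's programmable retries."""
    for _ in range(max_retries + 1):
        result = tmr_vote([m() for m in modules])
        if result is not None:
            return result
    raise RuntimeError("disagreement persisted past the retry budget")

# Hypothetical modules: two healthy replicas and one faulty one
healthy = lambda: 42
faulty = lambda: 7
```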

Patent
21 Feb 1989
TL;DR: In this article, a computer system is provided which is compatible with existing programmable option select systems and which provides optional enhanced system setup capabilities, allowing application software to access and utilize an expanded set of system setup configuration registers.
Abstract: A computer system is provided which is compatible with existing programmable option select systems and which provides optional enhanced system setup capabilities. The enhanced system permits use of application software designed specifically for existing programmable option select systems which utilize limited system configuration data registers but further provides an optional mode accessed during system setup procedures wherein application software can access and utilize an expanded set of system setup configuration registers to enhance the performance of the computer system.

Proceedings ArticleDOI
15 May 1989
TL;DR: A model of the software development process is presented and the Bauhaus is described, a knowledge-based user interface to Ada software libraries that reduces the amount of manual effort expended in the development of software reuse systems.
Abstract: A number of systems have been developed that demonstrate the utility of knowledge-based techniques in the construction of software reuse systems. The amount of manual effort expended in the development of such systems can be reduced through the use of a knowledge acquisition environment tailored for a reuse-oriented model of the software development process. We present a model of the software development process and then describe the Bauhaus, a knowledge-based user interface to Ada software libraries.

Journal ArticleDOI
TL;DR: The objective of this paper is to design a methodology for the introduction, development and maintenance of computer security within major organizations.

Proceedings ArticleDOI
K. Kawano1, M. Orimo1, Kinji Mori1
20 Sep 1989
TL;DR: An online system test technique that does not interrupt system operation is proposed, based on an autonomous decentralized concept, in which each subsystem has autonomy to control itself and coordinate with the other subsystems.
Abstract: An online system test technique that does not interrupt system operation is proposed. It is based on an autonomous decentralized concept, in which each subsystem has the autonomy to control itself and coordinate with the other subsystems. In the autonomous decentralized system architecture, each software module is connected only to the data field (DF), where the data are circulating, and selects whether or not to receive the data solely on the basis of the content code of each datum. This means that there exists only one interface between software modules: the content-code message interface. In this system, both real current data and test data can be circulated in the DF. Hence, a software module can autonomously judge whether to run or test itself on the basis of the received data, while the other software modules are operating. Each module can diagnose the other modules according to their test result data in the DF. The effectiveness of this system test mechanism is shown by applications to real-time control systems.
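The single content-code interface to the data field can be sketched as a broadcast bus where every message carries a content code and a test flag, and each subscribing module decides on its own how to treat what it receives. Class, code, and handler names here are illustrative, not from the paper:

```python
class DataField:
    """Broadcast 'data field': every message carries a content code and a
    test flag. Modules subscribe by content code only - the single
    interface the paper describes - and each module alone decides whether
    a message is live data to act on or test data to check itself against."""

    def __init__(self):
        self.subscribers = []

    def subscribe(self, content_code, handler):
        self.subscribers.append((content_code, handler))

    def publish(self, content_code, payload, is_test=False):
        for code, handler in self.subscribers:
            if code == content_code:
                handler(payload, is_test)

df = DataField()
live, tested = [], []

def temperature_module(payload, is_test):
    # The module itself routes test data to self-check, live data to control.
    (tested if is_test else live).append(payload)

df.subscribe("TEMP", temperature_module)
df.publish("TEMP", 21.5)                  # real current data
df.publish("TEMP", 99.9, is_test=True)    # test data over the same interface
df.publish("PRESSURE", 1.0)               # ignored: wrong content code
```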

Proceedings ArticleDOI
03 Jan 1989
TL;DR: A first prototype software process specification language is presented, its application is demonstrated, and software-engineering-related requirements for a supporting information base are derived.
Abstract: General requirements for software process specification languages are discussed. A first prototype software process specification language is presented, its application is demonstrated, and software-engineering-related requirements for a supporting information base are derived. Efforts aimed at implementing the information-base requirements are briefly mentioned. This work is part of the Meta Information Base project at the University of Maryland.

Journal ArticleDOI
TL;DR: A set of high-level communication primitives has been designed and implemented to provide the programmer with an interface independent of the operating system and of the underlying interprocess communications facilities.
Abstract: Epsilon is a testbed for monitoring distributed applications involving heterogeneous computers, including microcomputers, interconnected by a local area network. Such a hardware configuration is usual but raises difficulties for the programmer. First, the interprocess communication mechanisms provided by the operating systems are rather cumbersome to use. Second, they are different from one system to another. Third, the programmer of distributed applications should not worry about system and/or network aspects that are not relevant for the application level. The authors present the solution chosen in Epsilon. A set of high-level communication primitives has been designed and implemented to provide the programmer with an interface independent of the operating system and of the underlying interprocess communications facilities. A program participating in a distributed application can be executed on any host without any change in the source code except for host names.

Proceedings ArticleDOI
01 Jun 1989
TL;DR: The focus of this paper is on the Core Environment's organization and its use in application tool development.
Abstract: Many aspects of design automation software have similar requirements for representing, manipulating, and storing design information. The recognition of these common requirements in CAD tools allows the Flexible Architecture Compilation Environment's (FACE) Core Environment to provide a suite of high-level tools for the CAD developer. The Core Environment software has been developed using object-oriented software technology and may be readily adapted to specific applications. The focus of the Core Environment is to improve the productivity of CAD tool developers through better tool integration and a state-of-the-art software development environment. This Core Environment software has been used in the development of an integrated tool set covering algorithm specification, structural synthesis, and physical assembly of digital hardware systems. The focus of this paper is on the Core Environment's organization and its use in application tool development.

Proceedings ArticleDOI
16 Dec 1989
TL;DR: The user interface (UI) of the MEAD computer-aided control engineering (CACE) program is presented, and the unifying philosophy behind the MEAD UI is discussed; its main features include point-and-click-style interaction, a grouping of similar functionality, and a graphical interface to its CACE database management system.
Abstract: The user interface (UI) of the MEAD computer-aided control engineering (CACE) program is presented. After a brief presentation of the MEAD computer program, the unifying philosophy behind the MEAD UI is discussed. Main features of this UI include a point-and-click-style interaction, a unifying grouping of similar functionality, and a graphical interface to its CACE database management system. A typical modeling, analysis, and design scenario is given to illustrate the interface. The use of a user interface management system (UIMS) in the design and implementation of the MEAD UI is discussed. The MEAD user interface is implemented using an experimental UIMS developed at GE which supports both a Tektronix terminal and X-window-based window systems. Some implementational features of the MEAD user interface are that it is implemented using an object-oriented database system; that it uses a state tree to specify the dialog control; and that it is created entirely by means of a graphical editor, thereby avoiding conventional programming. The architecture of the UIMS is described and the implications of creating a user interface with this UIMS are discussed.

Proceedings ArticleDOI
27 Nov 1989
TL;DR: The authors describe the prototype application of selected techniques from artificial intelligence to selected network management problems and discuss possible applications of neural network techniques to switch message diagnosis.
Abstract: The authors describe the prototype application of selected techniques from artificial intelligence to selected network management problems. One software prototype supports tactical planning activities using knowledge-based system techniques. A second software prototype applies heuristic search techniques to a network design problem. The authors discuss possible applications of neural network techniques to switch message diagnosis.

Journal ArticleDOI
G. Chroust1
TL;DR: The Process Model ( ADPS/M) and the Process Mechanism (ADPS/P) are put into the broader context of current software engineering concepts and principles and reasons for the architecture of ADPS are explained.
Abstract: ADPS (Application Development Project Support), developed in the IBM Vienna Software Development Laboratory, is an environment for the industrial development of application software. A crucial prerequisite for such an environment is the definition of a detailed process of how to proceed (a Process Model) and an appropriate instrumentation via computer support (a Process Mechanism), which not only helps the users follow the established process but also provides them with various support functions. This paper puts the Process Model (ADPS/M) and the Process Mechanism (ADPS/P) into the broader context of current software engineering concepts. It explains the principles and reasons behind the architecture of ADPS.

Proceedings ArticleDOI
03 Jan 1989
TL;DR: Experimental results suggest that the techniques used by OSU can be used to develop 50-90% of an application without explicit programming, yielding productivity improvements of 2 to 10 times.
Abstract: The theory of prototyping is presented. A description is then given of Oregon Speedcode Universe (OSU), a software development system using on-screen editing of standard graphical user interface objects, prototyping, program generation, and software accelerators, which are typically used to accelerate the production of running applications. A programmer uses OSU to design and implement all user interface objects such as menus, windows, dialogs, and icons. These objects are then incorporated into an application-specific sequence that mimics the application during program development and performs the desired operations of the application during program execution. Experimental results suggest that the techniques used by OSU can be used to develop 50-90% of an application without explicit programming, yielding productivity improvements of 2 to 10 times.

Proceedings ArticleDOI
03 Jan 1989
TL;DR: The authors describe research carried out to develop a software process and product specification language that captures all the information necessary to understand, control, and improve any given software engineering process, and a meta-information-base schema that automatically generates an information-base structure from a process specification.
Abstract: The authors describe research carried out to: (i) develop a software process and product specification language that captures all the information necessary to understand, control, and improve any given software engineering process; (ii) develop a meta-information-base schema that automatically generates an information-base structure given a software process and product specification; and (iii) develop a mapping between the software-engineering-oriented and information-base-oriented models. Their generator approach addresses the fact that software engineering changes not only from environment to environment, but also from project to project.

Proceedings ArticleDOI
01 Aug 1989
TL;DR: Some estimations of the computing power required, the necessary interconnection bandwidth, and the requisite memory size are presented and the hardware architecture of the NERV multiprocessor system is derived that fulfills these requirements.
Abstract: A general-purpose simulation system for neural networks is computationally very demanding. This paper presents some estimations of the required computing power, the necessary interconnection bandwidth, and the requisite memory size. Next, the hardware architecture of the NERV multiprocessor system, which fulfills these requirements, is derived. Up to 320 68020 processors are used in a single VME crate together with a Macintosh II as the host computer. This set-up provides a computing power of 1300 MIPS together with a friendly graphical user interface. To support the simulation of arbitrarily interconnected networks and of asynchronous update models, the VME bus has been extended by a broadcast feature and a global max finder. The software architecture, consisting of system software, utilities, and application software, is outlined.