
Showing papers on "Application software" published in 1982


Journal ArticleDOI
Kung
TL;DR: The basic principle of systolic architectures is reviewed and it is explained why they should result in cost-effective, high-performance special-purpose systems for a wide range of problems.
Abstract: High-performance, special-purpose computer systems are typically used to meet specific application requirements or to off-load computations that are especially taxing to general-purpose computers. As hardware cost and size continue to drop and processing requirements become well understood in areas such as signal and image processing, more special-purpose systems are being constructed. However, since most of these systems are built on an ad hoc basis for specific tasks, methodological work in this area is rare. Because the knowledge gained from individual experiences is neither accumulated nor properly organized, the same errors are repeated. I/O and computation imbalance is a notable example: often, the fact that I/O interfaces cannot keep up with device speed is discovered only after constructing a high-speed, special-purpose device. We intend to help correct this ad hoc approach by providing a general guideline, specifically the concept of systolic architecture, a general methodology for mapping high-level computations into hardware structures. In a systolic system, data flows from the computer memory in a rhythmic fashion, passing through many processing elements before it returns to memory, much as blood circulates to and from the heart. The system works like an automobile assembly line where different people work on the same car at different times and many cars are assembled simultaneously. An assembly line is always linear, however, and systolic systems are sometimes two-dimensional. They can be rectangular, triangular, or hexagonal to make use of higher degrees of parallelism. Moreover, to implement a variety of computations, data flow in a systolic system may be at multiple speeds in multiple directions: both inputs and (partial) results flow, whereas only results flow in classical pipelined systems. Generally speaking, a systolic system is easy to implement because of its regularity and easy to reconfigure (to meet various outside constraints) because of its modularity. The systolic architectural concept was developed at Carnegie-Mellon University, and versions of systolic processors are being designed and built by several industrial and governmental organizations. This article reviews the basic principle of systolic architectures and explains why they should result in cost-effective, high-performance special-purpose systems for a wide range of problems.
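
As a concrete companion to the assembly-line picture above, the following cycle-level simulation sketches the classic one-dimensional systolic array for the FIR convolution y[i] = w[0]x[i] + ... + w[K-1]x[i+K-1], the running example in Kung's article: weights stay resident in the cells, x values flow rightward and partial results flow leftward, with new inputs injected every other tick so each result meets every input exactly once. The code is an illustrative model written for this summary, not code from the paper.

```python
def systolic_fir(x, w):
    """Simulate a linear systolic array computing the valid convolution of x with w."""
    K = len(w)
    n_out = len(x) - K + 1
    cell_w = list(reversed(w))   # cell j holds weight w[K-1-j]
    x_reg = [None] * K           # x value currently resident in each cell
    y_reg = [None] * K           # partial result currently resident in each cell
    out, t = [], 0
    while len(out) < n_out:
        # inject: a new x every other tick at the left end, a fresh
        # zero-valued result every other tick at the right end
        if t % 2 == 0 and t // 2 < len(x):
            x_reg[0] = x[t // 2]
        if t >= K - 1 and (t - K + 1) % 2 == 0 and (t - K + 1) // 2 < n_out:
            y_reg[K - 1] = 0
        # compute: every cell holding both an input and a partial result fires
        for j in range(K):
            if x_reg[j] is not None and y_reg[j] is not None:
                y_reg[j] += cell_w[j] * x_reg[j]
        # communicate: inputs march right, partial results march left
        finished = y_reg[0]
        y_reg = y_reg[1:] + [None]
        x_reg = [None] + x_reg[:-1]
        if finished is not None:
            out.append(finished)
        t += 1
    return out

if __name__ == "__main__":
    x, w = [1, 2, 3, 4, 5, 6], [1, 0, -1]
    assert systolic_fir(x, w) == [x[i] - x[i + 2] for i in range(4)]
    print(systolic_fir(x, w))    # [-2, -2, -2, -2]
```

Note the hallmark of the approach: each data item does useful work in every cell it passes through, so memory is touched only at the boundary of the array.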

2,319 citations


Journal ArticleDOI
TL;DR: Methods of determining the design correctness of systems as applied to computer programs are surveyed.
Abstract: It is essential to assess the reliability of digital computer systems used for critical real-time control applications (e.g., nuclear power plant safety control systems). This involves the assessment of the design correctness of the combined hardware/software system as well as the reliability of the hardware. In this paper we survey methods of determining the design correctness of systems as applied to computer programs.

295 citations


Journal ArticleDOI
TL;DR: An integrated language/system is discussed whose goal is to support the construction of robust software that survives node, network, and media failures.
Abstract: Technological advances have made it possible to construct systems from collections of computers connected by a network. At present, however, there is little support for the construction and execution of software to run on such a system. Our research concerns the development of an integrated language/system whose goal is to provide the needed support. This paper discusses a number of issues that must be addressed in such a language. The major focus of our work and this paper is support for the construction of robust software that survives node, network, and media failures.

93 citations


Journal ArticleDOI
TL;DR: The subdivision of functions discussed below can be viewed as a practical (albeit limited) approach for implementing state-of-the-art computer vision systems, given the level of understanding and the analytical tools currently available in this field.
Abstract: Because robots that "see" and "feel" can perform more complex tasks, industry has employed various computer vision techniques to enhance the abilities of intelligent machines. The recent widespread interest in robotics and automation in the US originates from American industry's most fundamental problem: a staggering drop in productivity. From 1947 to 1965, US productivity increased at an average rate of 3.4 percent a year. The growth rate decreased to 2.3 percent in the following decade, then dropped to below one percent in the late 1970's and down to -0.9 percent in 1980. Japan's productivity growth, the contrasting example most often cited in the literature, has been climbing at an average annual rate of about 7.3 percent. Although there are many ways to influence manufacturing productivity and product quality (regulatory, fiscal, and social), the emphasis in the following discussion is technological. In particular, we are interested in the computer vision aspects of industrial inspection and robot control. The principal motivation behind computer vision is increased flexibility and lower cost. The use of sensing technology to endow a machine with a greater degree of "intelligence" in dealing with its environment is receiving increased attention. A robot that can "see" and "feel" should be easier to train in the performance of complex tasks while at the same time requiring less stringent control mechanisms than preprogrammed machines. A sensory, trainable system is also adaptable to a much larger variety of tasks, thus achieving a degree of universality that ultimately translates into lower production and maintenance costs. The computer vision process can be divided into five principal areas: sensing, segmentation, description, recognition, and interpretation. These categories are suggested to a large extent by the way computer vision systems are generally implemented. It is not implied that human vision and reasoning can be so neatly subdivided nor that these processes are carried out independently of each other. For instance, we can logically assume that recognition and interpretation are highly interrelated functions in a human. These relationships, however, are not yet understood to the point where they can be modeled analytically. Thus, the subdivision of functions discussed below can be viewed as a practical (albeit limited) approach for implementing state-of-the-art computer vision systems, given our level of understanding and the analytical tools currently available in this field. Visual sensing. Imaging devices. Visual information is converted to electrical signals by visual sensors. The most commonly used visual sensors are vidicon cameras …
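
To make the five-way subdivision concrete, here is a purely structural sketch of the pipeline it implies; each stage is a trivial stand-in (not a real vision algorithm), and the threshold, feature, and label choices are invented for illustration.

```python
def sense():
    """Sensing: acquire an image (here a tiny hard-coded gray-level array)."""
    return [[0, 0, 9, 9],
            [0, 0, 9, 9]]

def segment(image, threshold=5):
    """Segmentation: separate object pixels from background by thresholding."""
    return [[(r, c) for r, row in enumerate(image)
                    for c, v in enumerate(row) if v > threshold]]

def describe(regions):
    """Description: extract simple features (here just the area) per region."""
    return [{"area": len(region)} for region in regions]

def recognize(descriptions):
    """Recognition: label each region from its features."""
    return ["widget" if d["area"] >= 4 else "unknown" for d in descriptions]

def interpret(labels):
    """Interpretation: attach task-level meaning to the labels."""
    return "scene contains: " + ", ".join(labels)

if __name__ == "__main__":
    print(interpret(recognize(describe(segment(sense())))))   # scene contains: widget
```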

81 citations


Journal ArticleDOI
Agerwala, Arvind
TL;DR: This special issue of Computer features articles by leading researchers in the field of data flow languages and graphs, and dynamic architectures, and outlines the major problems with data flow systems from the point of view of experts who are not directly active in data flow research.
Abstract: Some 15 years ago, J. Rodriguez at MIT and D. Adams at Stanford began to work on research that eventually led to the development of concepts still in use today in data flow systems. Important advances have been made since that time, and many researchers are now investigating data flow concepts as an alternative to von Neumann machines and languages. Nevertheless, until this special issue of Computer, no attempt had been made to bring together a body of work on data flow systems and closely examine just how far research into this technology has progressed. In the pages that follow, the reader will be presented with an overview of the field, especially as it relates to high-speed computing. Included are articles by leading researchers in the field of data flow languages and graphs, and dynamic architectures. There is also an article that outlines the major problems with data flow systems from the point of view of experts who are not directly active in data flow research. Together, we hope these articles will stimulate further investigation into the practicality of data flow systems. Anyone with some background in computer languages and architecture, and with a rudimentary knowledge of compilers, should find the material in this issue most interesting.

80 citations


Journal ArticleDOI
TL;DR: This method provides both a basic approach and some powerful new tools to aid in dialogue programming.
Abstract: How should one approach the design of an interactive interface? This method provides both a basic approach and some powerful new tools to aid in dialogue programming.

42 citations


Journal ArticleDOI
01 Jul 1982

40 citations


Journal ArticleDOI
01 Mar 1982
TL;DR: Results indicate that the proposed techniques have a high potential for improving retrieval system utility, especially for inexperienced users, and analysis suggests that the appropriateness of different assistance techniques is dependent on context, e.g., type of application and user.
Abstract: Users of interactive bibliographic retrieval systems are hampered by the problems of system complexity and heterogeneity. To alleviate these problems, especially for computer-inexperienced end users, the concept of a translating computer intermediary has been investigated. The intermediary simplifies system operation by conversing with users in an easy-to-use, common language; user requests are translated into the language of the appropriate retrieval system and, after suitable network connections have been established, sent to that system. System responses are then forwarded to the user after conversion to a more uniform format. The design principles for such an intermediary system include a modularized command/argument language augmented by considerable on-line instruction, emphasizing basic functions for neophyte users and including tutorial and automated aids to search-strategy formulation. An experimental intermediary system named CONIT (connector for networked information transfer) was constructed and tested with bona fide users. Results indicate that the proposed techniques have a high potential for improving retrieval system utility, especially for inexperienced users. Analysis of the experiments also suggests that the appropriateness of different assistance techniques is dependent on context, e.g., type of application and user.
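
The core of such an intermediary is a translation step in each direction: common-language requests out to each system's native syntax, native replies back into one uniform format. The sketch below shows that shape only; the command names and the two target syntaxes are invented for illustration and are not CONIT's actual tables.

```python
# hypothetical native command syntaxes for two back-end retrieval systems
TARGET_SYNTAX = {
    "system_a": {"find": "SS {terms}",     "show": "TYPE {set}/{fmt}"},
    "system_b": {"find": "SEARCH {terms}", "show": "PRINT {set} FORMAT {fmt}"},
}

def translate(system, command, **args):
    """Rewrite a common-language command into one back end's native form."""
    return TARGET_SYNTAX[system][command].format(**args)

def normalize(system, raw_reply):
    """Convert a back end's reply into a single uniform record format.
    A real intermediary would parse each system's own output grammar here."""
    return {"system": system, "text": raw_reply.strip()}

if __name__ == "__main__":
    print(translate("system_a", "find", terms="SOLAR AND ENERGY"))
    print(translate("system_b", "find", terms="SOLAR AND ENERGY"))
    print(normalize("system_a", "  1 234 SOLAR AND ENERGY\n"))
```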

33 citations


Journal ArticleDOI
01 Apr 1982

33 citations


Journal ArticleDOI
TL;DR: A design approach that is oriented toward machinery control applications is developed for distributed control systems and principles are identified for specifying system requirements, partitioning a system into processes, allocating processes to processors, and choosing a communication network.
Abstract: The design and implementation of a control system consisting of a number of cooperating processes running concurrently on a group of communicating processors is a complex task. There is general agreement that the high level of complexity associated with these systems dictates a "top-down" design approach to both hardware and software. In this paper, a design approach that is oriented toward machinery control applications is developed for distributed control systems. Principles are identified for specifying system requirements, partitioning a system into processes, allocating processes to processors, and choosing a communication network. A design example is discussed.
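
Of the principles listed, allocating processes to processors is the easiest to illustrate in a few lines. The greedy least-loaded heuristic and the example load figures below are mine, not the paper's; they only convey the flavor of that design step.

```python
def allocate(process_loads, n_processors):
    """Assign processes (largest first) to the currently least-loaded processor."""
    loads = [0.0] * n_processors
    assignment = [[] for _ in range(n_processors)]
    for name, load in sorted(process_loads.items(), key=lambda kv: -kv[1]):
        p = loads.index(min(loads))          # pick the least-loaded processor
        assignment[p].append(name)
        loads[p] += load
    return assignment, loads

if __name__ == "__main__":
    # hypothetical machinery-control processes with estimated CPU fractions
    procs = {"sensor_scan": 0.30, "control_law": 0.45, "actuator_out": 0.20,
             "operator_ui": 0.25, "logging": 0.10}
    assignment, loads = allocate(procs, n_processors=2)
    for p, (names, load) in enumerate(zip(assignment, loads)):
        print(f"processor {p}: {names} (estimated load {load:.2f})")
```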

24 citations



Journal ArticleDOI
TL;DR: This issue of Computer presents promising new techniques for using very large numbers of computing elements to solve important problems in meteorology, cryptography, image processing, and sonar and radar surveillance.
Abstract: This issue of Computer presents promising new techniques for using very large numbers of computing elements to solve important problems. In many areas of computer application, such as meteorology, cryptography, image processing, and sonar and radar surveillance, the quality of the answer the computer returns is proportional to the amount of computation performed. Despite the impressive speed of many recent computers, their architecture limits them to a mostly serial approach to computation, and therefore limits their usefulness for these computationally intensive problems. Advances in the design and fabrication of VLSI circuits will soon make it feasible to implement computers consisting of tens or even hundreds of thousands of computing elements. Synergistic advances in numerical analysis and software engineering have made it possible for these highly parallel computing elements to work cooperatively on the solution of a single problem. Highly parallel structures can be either general or special purpose. Either way, they promise tremendous speed improvements over the fastest conventional machines. Solutions to problems that were computationally intractable only a few years ago now fall within the bounds of this new technology. The six articles in this special issue of Computer examine some of the important recent developments in

Journal ArticleDOI
Bal, Kaminker, Lavi, Menachem, Soha
TL;DR: The design challenges in creating the NS 16000 microprocessor family were met only after thoroughly considering market requirements and LSI technology limitations, and the design allows for a smaller die size, leading to a reduction in chip cost.
Abstract: and slave processors, this group of microprocessors addresses a wide range of system applications. When LSI/MOS chips were first developed, it was possible for designers to place approximately 1000 active elements on a single chip. Now, ten years later, the number of active elements per chip has risen to over 100,000. As we enter the second decade of LSI/MOS technology, applications for its use are continually expanding as the computational power of newly developed 16- and 32-bit microprocessors approaches that of mainframe computers. In short, microprocessor designers have their work cut out for them. Currently, software development efforts are becoming responsible for ever larger shares of product development costs. To offset these costs, microcomputer designers are shifting toward high-level language programming. Increasingly, users expect microprocessors to provide a cost-effective solution for HLL support with minimal degradation in overall system performance; this sets tougher requirements for microprocessor designers. Sophisticated future systems will require a combination of capabilities. Anticipating these needs, National Semiconductor has developed the NS16000 microprocessor family to incorporate various architectural features into a new generation of devices. Utilizing National Semiconductor's XMOS technology, the design of the NS16000 family is implemented with 3.5-micron gate technology. This allows for a smaller die size, leading to a reduction in chip cost. The design challenges in creating this new family were met only after thoroughly considering market requirements and LSI technology limitations. This article describes some of the capabilities provided by the NS16000 architecture.

Journal ArticleDOI
TL;DR: Making text editors more like computer games may seem ridiculous on the surface, but these "games" use basic motivational techniques, something designers of application systems have overlooked.
Abstract: Using Handwriting Action to Construct Models of Engineering Objects, by Mamoru Hosaka and Fumihiko Kimura: with this symbol recognition technique, handwritten engineering drawings can be used as direct computer input for generating machine models of design objects. The Adventure of Getting to Know a Computer, by John M. Carroll: making text editors more like computer games may seem ridiculous on the surface, but these "games" use basic motivational techniques, something designers of application systems have overlooked.

Journal ArticleDOI
TL;DR: The predominant existing software specification and implementation techniques for sequential control are not adequate for the creation of correct software of the complexity required for redundant systems.
Abstract: Redundant control systems require more than a single redundant construct to serve the six basic functions of fault tolerance: test, detection, diagnosis, masking, reconfiguration, and recovery. Software usually constitutes or supports one or more such constructs. Additionally, software must be correct, since it is seldom, if ever, protected by redundancy. A redundant sequential control system requires intricate software constructs. The predominant existing software specification and implementation techniques for sequential control are not adequate for the creation of correct software of the complexity required for redundant systems. This complexity is illustrated by an example.
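
Of the six functions named above, masking is the easiest to show in isolation. The triple-modular-redundancy voter below is a standard textbook illustration of that single function, not code from the paper, and a real redundant control system would still need the other five functions around it.

```python
from collections import Counter

def tmr_vote(a, b, c):
    """Return the majority value of three redundant channels.
    If all three disagree, the fault is detectable but cannot be masked by voting."""
    value, votes = Counter([a, b, c]).most_common(1)[0]
    if votes < 2:
        raise RuntimeError("no majority: fault detected but not maskable")
    return value

if __name__ == "__main__":
    print(tmr_vote(42, 42, 17))   # one faulty channel is masked -> 42
```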

Proceedings ArticleDOI
13 Sep 1982
TL;DR: It is shown that constants as well as rules and regulations typically found in business applications should be factored out and stored separately from the application programs in a data base.
Abstract: This paper describes a methodology for application software development, the objectives being a reduction in the volume of code and ease of maintenance. It is shown that constants, as well as rules and regulations typically found in business applications, should be factored out and stored separately from the application programs in a data base. Definitional equations are proposed as a method for specifying such rules and regulations. The equations can be used as parameters to various types of interpreters to be used by application programs. As an illustration of the methodology, one such interpreter has been implemented. This paper shows its application to a screen handling program; other uses are discussed. The interpreter and its implementation are outlined.
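
A minimal sketch of the idea: the definitional equations live as data in a "data base" separate from the program, and a small interpreter resolves a requested quantity by evaluating its defining equation recursively. The equation syntax, the dictionary storage, and the payroll example are all invented here for illustration.

```python
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

# "data base" of definitional equations: quantity name -> defining expression
EQUATIONS = {
    "gross_pay": "hours * rate",
    "tax":       "gross_pay * 0.2",
    "net_pay":   "gross_pay - tax",
}

def evaluate(name, facts, equations=EQUATIONS):
    """Resolve a quantity from known facts by interpreting its defining equation."""
    if name in facts:
        return facts[name]
    tree = ast.parse(equations[name], mode="eval").body

    def walk(node):
        if isinstance(node, ast.BinOp):
            return OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.Constant):
            return node.value
        if isinstance(node, ast.Name):
            return evaluate(node.id, facts, equations)
        raise ValueError(f"unsupported expression: {ast.dump(node)}")

    return walk(tree)

if __name__ == "__main__":
    print(evaluate("net_pay", {"hours": 40, "rate": 12.5}))   # 400.0
```

Changing a rule (say, the tax rate) then means editing one entry in the equation store rather than every program that uses it, which is the maintenance benefit the paper is after.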


Journal ArticleDOI
TL;DR: Basic software techniques are elaborated for distributing the hardware resources among user programs computed by a multicomputer system with a dynamic architecture, which maximizes the number of parallel program streams computed by the same set of resources.
Abstract: This paper elaborates on basic software techniques for distributing the hardware resource among user programs computed by a multicomputer system with a dynamic architecture. The system can switch the hardware resources into the minimal sized computers required for computations and initiate the unused resources into execution of additional program streams. Thus it maximizes the number of parallel program streams computed by the same set of resources.
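
A toy illustration of the resource-distribution goal (not taken from the paper): given a pool of identical processor modules, admit waiting programs into minimal-sized computers, smallest requests first, so that as many program streams as possible run concurrently on the same resources.

```python
def admit_streams(free_modules, requests):
    """requests maps each program to the minimal number of modules it needs.
    Greedily admit the smallest requests first to maximize concurrent streams."""
    running = []
    for prog, need in sorted(requests.items(), key=lambda kv: kv[1]):
        if need <= free_modules:
            running.append((prog, need))
            free_modules -= need
    return running, free_modules

if __name__ == "__main__":
    requests = {"fft": 2, "payroll": 1, "simulation": 4, "editor": 1}
    running, left = admit_streams(free_modules=6, requests=requests)
    print(running)   # three streams admitted; 'simulation' must wait
    print(left)      # 2 modules remain for the next arrival
```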

Journal ArticleDOI
TL;DR: A methodology is suggested for estimating the execution-time bound and storage requirements of control algorithms coded in Pascal when they are run on a specified microprocessor and a set of recently announced 16-bit microprocessors in some control applications is evaluated.
Abstract: One of the basic problems in the area of microprocessor-based process control is the estimation of the execution time and storage required to implement a given control algorithm on a specified microprocessor. With improvements in the capability of microprocessors and increases in the sophistication and cost of control software, higher-level languages, particularly Pascal, are becoming popular in control applications. In this paper, a methodology is suggested for estimating the execution-time bound and storage requirements of control algorithms coded in Pascal when they are run on a specified microprocessor. The method consists of some modifications of the Pascal P-compiler, which yields the P-code and, together with it, a subsidiary sequential file of records to facilitate timing and storage calculation. This sequential file, together with the parameters used in program loops in the Pascal application program and the P-code instruction execution time and code size for a specified microprocessor, yields the execution-time bound and memory requirements. Using this methodology, the performance of a set of recently announced 16-bit microprocessors in some control applications is evaluated.
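
The estimation step itself reduces to combining an instruction profile of the generated intermediate code with a per-instruction cost table and worst-case loop bounds. The sketch below shows that arithmetic only; the opcode names, timing and size figures, and loop-bound handling are illustrative assumptions, not the paper's actual P-code tables.

```python
# hypothetical (cycles, bytes) cost per P-code opcode for one target microprocessor
COST = {"LOD": (4, 2), "STO": (4, 2), "ADD": (2, 1), "MUL": (10, 1), "JMP": (3, 2)}

def estimate(blocks, clock_mhz):
    """blocks: list of (opcode-count dict, worst-case repetition count).
    Returns (execution-time bound in microseconds, code size in bytes)."""
    cycles = sum(reps * sum(n * COST[op][0] for op, n in counts.items())
                 for counts, reps in blocks)
    size = sum(sum(n * COST[op][1] for op, n in counts.items())
               for counts, _ in blocks)
    return cycles / clock_mhz, size

if __name__ == "__main__":
    # straight-line setup executed once, plus a control loop bounded at 50 iterations
    program = [({"LOD": 3, "STO": 1}, 1),
               ({"LOD": 2, "MUL": 1, "ADD": 1, "STO": 1, "JMP": 1}, 50)]
    bound_us, code_bytes = estimate(program, clock_mhz=8.0)
    print(f"execution-time bound: {bound_us:.1f} us, code size: {code_bytes} bytes")
```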

Journal ArticleDOI
TL;DR: Pertinent software aspects of the POLO-FINITE system are described to provide an example of the software virtual machine approach to solve database, memory management, and processing module integration problems.
Abstract: Approaches to the development of scientific application software have matured over the past two decades and now constitute identifiable methodologies. Three software development methodologies are described, compared, and contrasted from the viewpoint of development effort, continued maintenance, and subsequent extension. Pertinent software aspects of the POLO-FINITE system are described to provide an example of the software virtual machine approach to solve database, memory management, and processing module integration problems. Finally, possible extensions of the virtual machine concept are discussed as a means to further advance software development methodologies.

Proceedings ArticleDOI
Lawrence A. O'Neill
01 Jan 1982
TL;DR: The authors describe their experience in using various techniques and their conclusions about their value, illustrating the effect of using a consistent methodology.
Abstract: We have observed the effect that software engineering can have on design automation throughout the four years of the Designer's Workbench (DWB) project. DWB is a design aids delivery system that interfaces the user to a variety of applications programs. This paper describes our experience in using various techniques and our conclusions about their value. The improvements that occurred in the second design iteration illustrate the effect of using a consistent methodology. The introduction of table-driven, finite state machines and software utilities provided an unusually adaptable and flexible environment for adding new applications. The resultant design aids delivery system is able to respond to the rapid changes that occur in the supported technologies and provide tools when needed rather than after the customers have completed their project.
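
The table-driven finite state machine mentioned above is a general technique worth spelling out: control logic is reduced to a transition table, so supporting a new application means adding rows rather than rewriting code. The states, events, and actions below are invented placeholders, not DWB's actual dialogue tables.

```python
# transition table: (state, event) -> (next_state, action)
TABLE = {
    ("idle",    "open"): ("editing", lambda: print("load design file")),
    ("editing", "run"):  ("running", lambda: print("invoke application tool")),
    ("running", "done"): ("editing", lambda: print("collect results")),
    ("editing", "quit"): ("idle",    lambda: print("save and close")),
}

def run_fsm(events, state="idle", table=TABLE):
    """Drive the machine with a sequence of events; unknown events are ignored."""
    for ev in events:
        state, action = table.get((state, ev), (state, lambda: None))
        action()
    return state

if __name__ == "__main__":
    print("final state:", run_fsm(["open", "run", "done", "quit"]))
```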

Journal ArticleDOI
TL;DR: Some power system applications are examined and it is argued that their present form is not suitable for systems that include special purpose processors.
Abstract: Several uses for special purpose numerical hardware for scientific applications have recently been proposed [1, 3, 6]. However, few of those innovations have yet been implemented. We discuss some of the characteristics essential for an application to effectively use a special purpose processor. We examine some power system applications and argue that their present form is not suitable for systems that include special purpose processors.

Journal ArticleDOI
TL;DR: The history of graphics software mirrors that of computing in general: we work out basic techniques; we develop algorithms; we begin to search for standards.
Abstract: The history of graphics software mirrors that of computing in general: we work out basic techniques; we develop algorithms; we begin to search for standards.

Journal ArticleDOI
TL;DR: A practical architecture based on local-area network technology that supports the incorporation of redundancy is proposed, along with a set of classification criteria for interprocess communication primitives that permits a unified treatment of message transfer protocols and transparent redundancy in the application software.

Journal ArticleDOI
B.R. Myers
01 Jul 1982

Journal ArticleDOI
D. Botsch, H. Eberding
TL;DR: The wide range of applications and requirements and the increasing software size in communication systems encourage the use of high-level programming languages, and CCITT recommends the language CHILL, developed by Siemens for the world market.
Abstract: The wide range of applications and requirements and the increasing software size in communication systems encourage the use of high-level programming languages. For SPC telephone switching systems, CCITT recommends the language CHILL. One of the first public switching systems to use CHILL is EWSD, developed by Siemens for the world market. The hardware architecture of EWSD, being the basis for resolving the real-time requirements, is described as a two-level processing approach. The software is structured according to the concept of various layers and virtual machines. In order to support the development and test of CHILL software, a number of tools known as support software had to be developed.

Journal ArticleDOI
TL;DR: The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications and emphasis is put on the modularity, processor communication and interfacing.
Abstract: The architecture of a multiprocessor system is described that will be used for on-line filter and second stage trigger applications. The system is based on the MC68000 microprocessor from Motorola. For the hardware, emphasis is put on modularity, processor communication, and interfacing. In the discussion of the software, special attention is given to the operating system software, in particular the communication between the supervisor and slave processing cells. The interaction between time-critical user programs and the operating system is also discussed.
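
The supervisor/slave exchange described above follows a familiar mailbox pattern. The threaded sketch below (written for this summary, not derived from the paper's MC68000 code) only illustrates that pattern: the supervisor posts event fragments to each processing cell's queue and gathers the filtered replies.

```python
import queue
import threading

def slave_cell(cell_id, inbox, outbox):
    """One processing cell: take an event fragment, 'filter' it, send back a reply."""
    while True:
        fragment = inbox.get()
        if fragment is None:                  # shutdown message from the supervisor
            return
        outbox.put((cell_id, sum(fragment)))  # stand-in for the real filter algorithm

if __name__ == "__main__":
    inboxes = [queue.Queue() for _ in range(3)]
    results = queue.Queue()
    cells = [threading.Thread(target=slave_cell, args=(i, q, results))
             for i, q in enumerate(inboxes)]
    for cell in cells:
        cell.start()
    for i, q in enumerate(inboxes):           # supervisor distributes fragments
        q.put([i, i + 1, i + 2])
    for _ in cells:                           # supervisor gathers filtered results
        print(results.get())
    for q in inboxes:                         # tell every cell to stop
        q.put(None)
    for cell in cells:
        cell.join()
```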

Proceedings ArticleDOI
01 May 1982
TL;DR: Here, concepts for what has come to be called "Automatic Application Generation" (AAG), based upon several years of experience, are presented; simplicity to the end user, capability to produce complex application structures, and modularity are stressed.
Abstract: A major drawback of commercially available DUR systems is the inability of the user to easily develop and modify his applications for these devices. For example, if he wants to change the use of the recognizer from a quality control task to an inventory management application, he must either go back to the manufacturer or substantially modify all special application code. Here, concepts for what has come to be called "Automatic Application Generation" (AAG), based upon several years of experience, are presented. Simplicity to the end user, capability to produce complex application structures, and modularity are stressed. Examples are presented, and implementational considerations are discussed.

Proceedings ArticleDOI
06 Oct 1982
TL;DR: This paper investigates the integration of application software systems into an Ada environment using the model given in the Stoneman requirements, and considers especially the aspect of making available the user functions of such systems to an APSE end user.
Abstract: In this paper we investigate the integration of application software systems into an Ada environment using the model given in the Stoneman requirements. We consider especially the aspect of making available the user functions of such systems to an APSE end user. We outline that within the Stoneman model this can only be achieved by integrating the application software system interfaces into the MAPSE level. We suppose that an application system can be described by an abstract data type: the package facility of Ada is used to realize such an abstract data type. The problems are discussed in detail by showing how to integrate an existing data base system.
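
The paper realizes the abstract data type with Ada's package facility; to keep the examples in this summary in one language, the same shape is sketched below as a Python class, with invented operation names standing in for whatever user functions the integrated data base system actually exposes.

```python
class DatabaseSystem:
    """Abstract-data-type view of an existing data base system:
    the representation is hidden and only the exported operations are visible."""

    def __init__(self):
        self._tables = {}                     # hidden internal state

    def create(self, table):
        self._tables.setdefault(table, [])

    def insert(self, table, record):
        self._tables[table].append(record)

    def query(self, table, predicate):
        return [r for r in self._tables.get(table, []) if predicate(r)]

if __name__ == "__main__":
    db = DatabaseSystem()
    db.create("parts")
    db.insert("parts", {"id": 1, "name": "bolt"})
    print(db.query("parts", lambda r: r["name"] == "bolt"))
```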

01 Jan 1982
TL;DR: This investigation resulted in a proposed configuration of the distributed system, the set of new operating system functions that together with an existing Kernel make up the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms for communication, synchronization, and effective utilization of distributed system resources.
Abstract: The main goal of this research was to design and evaluate decentralized operating system concepts that will support real-time BMD (Ballistic Missile Defense) applications executing on distributed hardware with local and shared memories. The objective was to develop real-time operating system functions that would perform efficient integration of distributed resources and support execution of BMD application software with high levels of performance, reliability, and continuous operation. Results of current research efforts in the field of distributed hardware architecture, operating systems design, and distributed programming languages are studied in order to identify major issues and evaluate proposed solutions. This investigation resulted in a proposed configuration of the distributed system, the set of new operating system functions that together with an existing Kernel (Cohe81) make up the Basic Real-Time Operating System, and the set of new EPL language primitives that provide BMD application processes with efficient mechanisms for communication, synchronization, and effective utilization of distributed system resources.