
Showing papers in "IEEE Computer in 1985"


Journal ArticleDOI
TL;DR: Interval temporal logic offers a natural basis for the specification of devices and digital signals and is suitable for hardware description languages based on formalisms suited to temporal reasoning.
Abstract: Because digital systems operate over time, hardware descriptions should be based on formalisms suited to temporal reasoning. One such notation, interval temporal logic, offers a natural basis for the specification of devices and digital signals. As computer systems continue to grow in complexity, the distinction between hardware and software is becoming increasingly blurred. This situation has produced an increasing awareness of the need for behavioral models suited to specifying and reasoning about both digital devices and programs. Contemporary hardware description languages (for example, Barbacci, Parker and Wallace,2 and Su et al. 3) are not sufficient because of various limitations:
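
The abstract stops short of showing the notation itself. As a hedged illustration in the general style of interval temporal logic (the unit-delay example and the symbols below are assumptions, not taken from this article), a delay element relating input X to output Y might be specified as:

    % Hedged ITL-style sketch: in every subinterval with at least two
    % states ("more"), the next value of Y equals the current value of X.
    \[
      Del(X, Y) \;\equiv\; \Box\,\bigl( more \;\rightarrow\; (\bigcirc Y) = X \bigr)
    \]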

356 citations


Journal ArticleDOI
TL;DR: The article describes four document systems developed or under development at Brown University that illustrate many of the different necessary functions of electronic document systems.

293 citations


Journal ArticleDOI
TL;DR: Research on knowledge representation in artificial intelligence provides a wealth of relevant techniques that can be incorporated into specification languages.
Abstract: Specification of many kinds of knowledge about the world is essential to requirements engineering. Research on knowledge representation in artificial intelligence provides a wealth of relevant techniques that can be incorporated into specification languages.

274 citations


Journal ArticleDOI
TL;DR: A discussion of various classification criteria for existing requirements specification techniques follows a brief review of requirements specification contents and concerns.
Abstract: The purpose of this article is to increase awareness of several requirements specifications issues: (1) the role they play in the full system development life cycle, (2) the diversity of forms they assume, and (3) the problems we continue to face. The article concentrates on ways of expressing requirements rather than ways of generating them. A discussion of various classification criteria for existing requirements specification techniques follows a brief review of requirements specification contents and concerns.

271 citations


Journal ArticleDOI
Ross1
TL;DR: SADT broke new ground in the areas of problem analysis, requirements definition, and functional specification because it allowed rigorous expression of high-level ideas that previously had seemed too nebulous to treat technically.
Abstract: Embodying an organized discipline of thought and action, the SADT methodology has been successful in many applications previously thought too nebulous for technical treatment. At the time the definitive papers on SADT, SofTech's Structured Analysis and Design Technique, were published in 1977,1-3 the methodology had been in extensive development and use for several years. Originally introduced as a "system-blueprinting" method for documenting the architecture of large and complex systems,4 SADT had become a full-scale methodology for coping with complexity through a team-oriented, organized discipline of thought and action, accompanied by concise, complete, and readable word and picture documentation. SADT broke new ground in the areas of problem analysis, requirements definition, and functional specification because it allowed rigorous expression of high-level ideas that previously had seemed too nebulous to treat technically. In the seven years since its introduction, SADT and its derivatives and extensions have been successfully applied to hundreds of major projects in a very broad range of application areas. An overview of this application experience and a look toward the future development of SADT are the subjects of this article. What is SADT? SADT consists of two principal parts: (1) the box-and-arrow diagramming language of structured analysis (SA) and (2) the design technique (DT), the discipline of thought and action that must be learned and practiced if the language is to be used effectively. (SADT is a trademark of SofTech, Inc.) Both parts are intimately related. Without the simplicity, generality, readability, and rigor of the SA diagramming language, the important ideas of the DT methodology would not be sufficiently visible and tangible to be of any coherent use. But without the DT discipline, those same language features of SA would make it almost useless (so much so that in 1977 SofTech laid proprietary claim to its methodology, even while placing all the specifics of the SA language in the public domain). The situation would be much like knowing the rules and notation of algebra (capable of extensive calculation in almost any domain) without having any guidance or experience in translating word problems into properly formulated expressions. Neither SA nor SADT solves problems. Both are tools that allow people to express, understand, manipulate, and check problem elements in ways previously not possible. All of SADT stems from a single premise: the human mind can accommodate any amount of complexity as long as it is presented in easy-to-grasp chunks that together …

269 citations


Journal ArticleDOI
TL;DR: In a real-time conference, each participant can be seated in his own office at a workstation that might include a high-resolution screen for computer output, a keyboard and a pointing device, a microphone and a speaker, and possibly a camera and video monitor as discussed by the authors.
Abstract: Computer support for group work, including electronic mail, conferencing, form management, and coordination support, primarily addresses asynchronous interaction among users. Such systems are most useful when each user can work at times of his own choosing. However, although relatively little work has been done on computer support for people working together simultaneously, for certain group tasks, such as crisis handling, simultaneous (or real-time) interaction is essential. In a real-time conference, for example, each participant can be seated in his own office at a workstation that might include a high-resolution screen for computer output, a keyboard and a pointing device, a microphone and a speaker, and possibly a camera and video monitor. Parts of each participant's screen can be dedicated to displaying a shared space in which everyone sees the same information. The voice communication equipment can be used by the conference participants for discussion and negotiation; video communication can add an illusion of physical presence by simulating a face-to-face meeting; and conversational references ("this number" or "that sentence") can be clarified by pointing at displayed information. The displayed information can be dynamically edited and processed, permanent records can be saved, and new information that is relevant to the discussion can be retrieved for display at any time. Participants can, in addition, have private spaces on their screens that allow them to view relevant private data or to compose and review information before submitting it to the shared space. Systems that provide some of the above features already exist. As early as 1968, the NLS system5 provided a shared-screen mode for simultaneous collaborative authoring of structured documents. This facility, which can be used to access any interactive program from multiple terminals, is now available in many time-shared operating systems in the form of terminal linking. Terminal linking on most systems does not work correctly unless all linked terminals are of the same type. A notable exception to this is Tymshare's Augment6 system (the commercial successor to NLS), which supports "virtual" terminal linking across dissimilar terminal types. Real-time conferencing can be used to support joint work in many different applications.
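
As a hedged sketch of the shared-space/private-space split the abstract describes (all class and method names below are invented for illustration; the article gives no code), a conference can broadcast each submitted update so that every participant's display stays consistent:

    # Minimal sketch of the shared-space idea; names are invented.
    class Conference:
        def __init__(self):
            self.shared = {}                 # everyone sees the same data
            self.participants = []

        def join(self, p):
            self.participants.append(p)
            p.conference = self

        def submit(self, key, value):
            self.shared[key] = value         # update the shared space...
            for p in self.participants:     # ...and refresh every screen
                p.refresh()

    class Participant:
        def __init__(self, name):
            self.name, self.private, self.conference = name, {}, None

        def compose(self, key, value):
            self.private[key] = value        # visible only on this screen

        def publish(self, key):
            self.conference.submit(key, self.private.pop(key))

        def refresh(self):
            print(f"{self.name} sees shared space: {self.conference.shared}")

    conf, alice, bob = Conference(), Participant("Alice"), Participant("Bob")
    conf.join(alice); conf.join(bob)
    alice.compose("figure-3", "draft annotation")   # private until submitted
    alice.publish("figure-3")                       # now both screens agree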

260 citations


Journal ArticleDOI
TL;DR: The research that led to SREM has been extended to address the problems of specifying system-level requirements and defining a distributed or uniprocessor design that satisfies requirements generated with SREM.
Abstract: SREM has been extended. Now a requirements-driven model for integrated software engineering environments, SREM has new tools to support its application, the Distributed Computing Design System. Since its development and initial presentation in 1976, Software Requirements Engineering Methodology, or SREM, technology has evolved in both application and research.1 The methodology and tools have matured to a point where they are being used on a variety of projects at a number of different locations; the research that led to SREM has been extended to address the problems of specifying system-level requirements and defining a distributed or uniprocessor design that satisfies requirements generated with SREM. This article overviews developments in these areas and summarizes their results. It also describes the origins of SREM, its basic model of requirements, and how the model is implemented in RSL, the Requirements Statement Language, and supporting software.

176 citations


Journal ArticleDOI
TL;DR: A pictorial system is the best vehicle for this, since it offers complete freedom in representing program navigation and can be used to examine a piece of abstract data by enlarging its picture with a zoom effect, thus showing internal detail.
Abstract: Abstraction. A programming system should treat software as a hierarchical structure of neat abstractions. Conventional languages support this view syntactically by enforcing certain access rights within the program text. With standard editors and program listings, however, the programmer navigates through the program text in a sequential fashion. If the language were integrated into the programming system, though, the access path and the abstraction path would be unified and the system would embody the desired structure rather than merely support it. A pictorial system is the best vehicle for this, since it offers complete freedom in representing program navigation. For example, the depth dimension can be used to examine a piece of abstract data by enlarging its picture with a zoom effect, thus showing internal detail.

176 citations


Journal ArticleDOI
TL;DR: The diagrams, flowcharts, and other iconic representations the authors have long employed to communicate with other people can now be used directly to describe algorithms to computers, and the availability of graphics-based, personal workstations can eliminate the need to convert algorithms to the linear strings of symbols traditionally required by most computers.
Abstract: … objects have been developed. The diagrams, flowcharts, and other iconic representations we have long employed to communicate with other people can now be used directly to describe algorithms to computers. With the availability of graphics-based, personal workstations, these visual modes can eliminate the need to convert algorithms to the linear strings of symbols traditionally required by most computers. Linear, symbolic computer languages have been studied and refined extensively over the past 30 years, but computer language designers now face a new challenge: to provide convenient and natural visual programming languages.

142 citations


Journal ArticleDOI
TL;DR: A classification scheme for comparing new supercomputer architectures is proposed, and four schools of thought on the most important factor for enhancing computer performance are critically evaluated.
Abstract: During the past several years, a great number of proposals have been made with the objective of increasing supercomputer performance by an order of magnitude through the use of new computer architectures. The present paper is concerned with a suitable classification scheme for comparing these architectures. It is pointed out that there are basically four schools of thought as to the most important factor for an enhancement of computer performance. According to one school, the development of faster circuits will make it possible to retain present architectures, except, possibly, for a mechanism providing synchronization of parallel processes. A second school assigns priority to the optimization and vectorization of compilers, which will detect parallelism and help users to write better parallel programs. A third school believes in the predominant importance of new parallel algorithms, while the fourth school supports new models of computation. The merits of the four approaches are critically evaluated. 50 references.

127 citations



Journal ArticleDOI
Horak1
TL;DR: The architectural model, the underlying processing model, and the principles of the interchange formats of the ECMA 101 and ISO drafts are introduced, and possibilities of further development indicated.
Abstract: Like language, the document plays a central role in the flow of information. Increasingly, software tools at personal workstations facilitate the handling of electronic documents in the office. Office work usually consists of a sequence of processes involving a number of tools distributed over several workstations. The interchange of documents between cooperating tools in office systems necessitates a fundamental, common understanding of the structure of documents. For this reason, international standards committees are currently making great efforts to draw up standards that will enable the interchange of documents among open systems. Specifically, the CCITT (the International Telegraph and Telephone Consultative Committee), the ISO (the International Organization for Standardization), and the ECMA (the European Computer Manufacturers Association) have been, or still are, working on the standards shown in the sidebar. The results obtained by ISO and ECMA with regard to the topics of office document architecture (ODA) and office document interchange formats (ODIF) are essentially the same in both organizations. In this article, the architectural model, the underlying processing model, and the principles of the interchange formats of the ECMA 101 and ISO drafts are introduced, and possibilities of further development indicated. In its details, the discussion that follows is based on ECMA 101. Document architecture model. Document, text, and content. Within the scope of the ODA/ODIF standards, a document is a structured amount of text that can be interchanged as a unit between an originator and a recipient. A document can be interchanged either in image form, to permit its being printed and displayed as intended by the originator, or in processible form, to permit document editing and layout revision by the recipient. Text is a representation of information for human perception that can be reproduced in two-dimensional form. Text consists of graphic elements such as graphic characters, geometric elements…
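
As a hedged illustration of the standard's central distinction between a document's processible form (structure preserved for editing) and its image form (a fixed rendering), here is a minimal sketch; the classes are invented, and the real ODA model of logical and layout structures is far richer:

    # Hedged sketch of a document as a structured amount of text.
    class Block:
        def __init__(self, kind, children=None, content=""):
            self.kind, self.children, self.content = kind, children or [], content

        def image_form(self, indent=0):
            """Flatten to a fixed rendering, as the originator intends it."""
            lines = [" " * indent + self.content] if self.content else []
            for child in self.children:
                lines += child.image_form(indent + 2)
            return lines

    doc = Block("document", [              # processible form: structure kept
        Block("title", content="Office Document Architecture"),
        Block("body", [
            Block("paragraph", content="A document is a structured amount of text."),
            Block("paragraph", content="It can be interchanged as a unit."),
        ]),
    ])
    print("\n".join(doc.image_form()))     # image form: layout only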

Journal ArticleDOI
London1, Duisberg
TL;DR: This animation system will provide pictorial representations, at the proper level of abstraction, of the internal data structures used by a program, and will give users visual feedback as the program and its parts are being executed.
Abstract: An animation kit can be used to explain how a program works by creating graphical snapshots and movies correlated with the program's actions. Such a facility could play an important role in program design, development, and testing. The availability of today's powerful personal workstations with high-resolution bit-map displays and pointing devices makes possible the creation and display of drawings containing a wide assortment of characters, fonts, icons, and figures, all of which can be continuously moved for realistic animation. We are currently involved in using such animation to visualize programs and algorithms by creating graphical snapshots and movies correlated with the programs' actions. Such a facility, we hope, will provide programmers, or computer users in general, with an understanding of what the programs do, how they work, and why they work. It will also give users visual feedback as a program and its parts are being executed. This animation system will provide pictorial representations, at the proper level of abstraction, of the data structures used by a program. Standard representations of internal data structures, such as linked lists or arrays with separate index variables, are often insufficient because the viewer must mentally transcribe such representations to the abstractions involved in the use of those structures. We use the type of diagrams or sketches a programmer draws at a desk or wallboard, or the kinds of schematic figures found in a programming or data structures text; fortunately, we do not need pictures with exquisite shadings that re-create photographs. Such figures change to reflect the changes during the execution of the program. The system exploits people's apparent tendency to understand by spatially visualizing the abstractions that constitute the intention, or "meaning," of a program. For example, one visualizes trees or matrices manipulated by a program in two dimensions, whereas the code is always linear and sequential.
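
The correlation of snapshots with a program's actions can be miniaturized as instrumentation callbacks; this is a hedged sketch with invented names (the actual kit drew bit-map animations, not text):

    # The data structure reports each action to a renderer, which emits
    # one (here, textual) frame per event.
    class AnimatedStack:
        def __init__(self, on_change):
            self.items, self.on_change = [], on_change

        def push(self, x):
            self.items.append(x)
            self.on_change("push", self.items)

        def pop(self):
            x = self.items.pop()
            self.on_change("pop", self.items)
            return x

    def draw_frame(action, items):          # stand-in for a movie frame
        print(f"{action:>4}: " + " ".join(f"[{i}]" for i in items))

    s = AnimatedStack(draw_frame)
    for v in (3, 1, 4):
        s.push(v)
    s.pop()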

Journal ArticleDOI
Brown, Carling, Herot, Kramlich, Souza 
TL;DR: An overview of the PV environment is provided, along with a detailed discussion of the technique used to instrument programs, to provide designers and programmers with both static and dynamic views of systems.
Abstract: This article describes a prototype program visualization (PV) environment that we have developed. The prototype is an "umbrella" in the sense that it is not targeted to support any one software development methodology. Rather, it provides basic PV tools that can be used in the service of the programmer's chosen methodology. Our system was designed to support programming in C, although large portions of it are independent of the software development language. The prototype itself is implemented in C and runs on a VAX 11/780 under UCB Unix Version 4.2.

Journal ArticleDOI
Melamed1, Morris
TL;DR: Low-cost, high-powered intelligent terminals and workstations equipped with high-resolution displays have recently made practical a new approach to simulation modeling, visual simulation, and the projected AT&T Performance Analysis Workstation combines visual and interactive features in a novel way.
Abstract: Low-cost, high-powered intelligent terminals and workstations equipped with high-resolution displays have recently made practical a new approach to simulation modeling: visual simulation. Traditional programming is no longer the mainstay of model specification: users can employ extensive graphics to draw a model on a CRT screen and observe its behavior through animation and dynamically evolving statistics. Visual input and output serve to overcome many of the barriers that have prevented simulation from taking its rightful place as a convenient tool for system design. These capabilities are illustrated by the projected AT&T Performance Analysis Workstation, which combines visual and interactive features in a novel way.

Journal ArticleDOI
TL;DR: This chapter presents three paradigms of representations for combinatorial search problems, and develops theoretical bounds, efficient algorithms, and functional requirements of multiprocessing architectures for supporting efficient evaluation of combinatorial search problems.
Abstract: This chapter presents three paradigms of representations for combinatorial search problems. Depending on the functions of the nonterminal nodes in the graphical representation, a search problem can be represented as an AND-tree, an OR-tree, and an AND/OR graph. This classification facilitates the design of unique computer architectures for supporting efficient evaluation of combinatorial search problems. For each representation, we develop theoretical bounds, efficient algorithms, and functional requirements of multiprocessing architectures, and illustrate these results by examples.
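
As a generic illustration of these representations (not the chapter's own algorithms), deciding whether a problem is solvable reduces to a recursive AND/OR combination over the tree:

    # Minimal AND/OR tree evaluation: nonterminal nodes combine their
    # children by AND or OR; terminals carry True/False.
    def solvable(node):
        kind, children = node
        if kind == "leaf":
            return children                 # a boolean at terminals
        results = (solvable(c) for c in children)
        return all(results) if kind == "and" else any(results)

    leaf = lambda v: ("leaf", v)
    tree = ("or", [
        ("and", [leaf(True), leaf(False)]),   # this subgoal fails...
        ("and", [leaf(True), leaf(True)]),    # ...but this one succeeds
    ])
    print(solvable(tree))                     # -> True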

Journal ArticleDOI
Poggio1, Garcia Luna Aceves1, Craighill1, Moran1, Aguilar1, Worthington1, Hight1 
TL;DR: Today's microprocessor-based personal computers provide comprehensive support for text editors, formatters, databases, mail systems, and printers, but people naturally create and exchange information by means of a wide variety of media.
Abstract: The overriding motivation is to increase the productivity of people through the use of microcomputers for the automation of routine tasks, the acquisition and communication of information, and the intelligent support of decision making. For this to happen, microcomputers must be able to process and distribute the information that people normally handle and must offer information-processing and communication services similar to those people have used manually. Today's microprocessor-based personal computers provide comprehensive support for text as a medium. There are a wide variety of text editors, formatters, databases, mail systems, and printers. These capabilities are available at sufficiently low cost that small businesses and even many households are able to purchase them. However, people naturally create and exchange information by means of a wide variety of media. They talk to each other, draw pictures, label diagrams, write notes, and point to things. Thus, in addition to text, they seek computer-based support for graphics, voice, and combinations of these media. The SRI Command and Control Workstation project, which is supported by the United States Navy, has as one of its major goals the creation and application of computer-based, multimedia information systems to support naval command and control.

Journal ArticleDOI
TL;DR: It was found that the VP-200 can be two to three times as fast as the X-MP/2 in vector mode for large vector lengths, and three reasons are suggested for the parity.
Abstract: The authors report on their opportunity to benchmark the VP-200 (benchmarked at the Fujitsu plant in Numazu, Japan in May 1984 and more recently in May 1985 at Amdahl); the Hitachi S810/20 (made available at the Large Scale Computation Center of Tokyo University in May 1984); and Cray Research provided time on an X-MP running COS and version 1.13 of their CFT compiler. Other X-MP benchmarks were conducted at Los Alamos. For the most part, the codes that were executed on the machines came from the Los Alamos benchmark set. The set is composed of programs that typify the Los Alamos workload. It was found that the VP-200 can be two to three times as fast as the X-MP/2 in vector mode for large vector lengths. Results from highly vectorized codes (rudimentary matrix operations and a linear equations solver) as well as the timings from basic vector operations support this conclusion. On the codes that are more indicative of the Los Alamos workload, the VP-200 and X-MP/2 are comparable. The times for BMK5 (0% vectorized) were virtually equivalent; BMK21 (0% vectorized) was executed on the VP-200 18% faster than on the X-MP; BMK11b (62% vectorized) and BMK21a (18% vectorized) favored the VP-200 by 17% and 24%, respectively; and SIMPLE (93% vectorized) favored the X-MP by 25%. The authors suggest three reasons for the parity: (1) from the standpoint of Amdahl's Law, many of the codes are dominated by the equivalent scalar performance of the machines, (2) even in the cases of higher degrees of vectorization, vector lengths less than 100 perform equivalently, and (3) function calls prevented full vectorization on the VP-200 in some cases (for example, BMK11a). The Hitachi S810/20 does not perform as well as the other two on the benchmark codes, probably because its scalar performance and vector processor clock period are slower. Additionally, the benchmark codes could not make use of the large number of functional units in the S810/20.
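
The first of those reasons is just Amdahl's-Law arithmetic. A hedged worked example (the 10x vector speedup and the vectorization fractions below are illustrative numbers, not benchmark data) shows how low vectorization caps the overall gain:

    # overall speedup = 1 / ((1 - f) + f / s), with f the vectorized
    # fraction of runtime and s the vector-unit speedup
    def effective_speedup(vector_fraction, vector_speedup):
        return 1.0 / ((1.0 - vector_fraction) + vector_fraction / vector_speedup)

    for f in (0.0, 0.18, 0.62, 0.93):
        print(f"{f:4.0%} vectorized -> {effective_speedup(f, 10.0):.2f}x "
              "overall, even with 10x vector hardware")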

PatentDOI
Camarata Joseph Michael1
TL;DR: The disclosed technique provides a completely general LAN architecture with any access protocol and is well suited to CSMA/CD (carrier sense, multiple access with collision detection).
Abstract: A Local Area Network (LAN) is disclosed which enables a generalized data communication facility to be established over ordinary telephone wiring. The disclosed technique provides a completely general LAN architecture with any access protocol and is well suited to CSMA/CD (carrier sense, multiple access with collision detection). The invention consists of architectural and circuit techniques which enable the construction of a local area network (LAN) over ordinary wiring within a building or between buildings within a short distance of each other (often called a campus). The LAN is implemented without disturbing the normal voice function of the telephone circuits and without the need to string additional wiring. Implementing a LAN in accordance with this invention requires only three types of hardware functional elements (in the appropriate quantities for the application): a) a Node Unit (NU), b) a Repeater Unit (RU), and c) depending on the size of the network, possibly a Pulse Regenerator Unit (PRU).
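
The patent names CSMA/CD as the well-suited access protocol. As a textbook-level sketch of that protocol's retry behavior (not the patented circuitry; the scripted channel below is invented for the demonstration), a node's transmit loop with binary exponential backoff looks roughly like this:

    import random

    def csma_cd_send(channel_collides, max_attempts=16, rng=random.Random(7)):
        """One station's transmit loop: send, detect collision, back off, retry."""
        for attempt in range(max_attempts):
            if not channel_collides():           # transmission got through
                return attempt
            window = 2 ** min(attempt + 1, 10)   # binary exponential backoff
            rng.randrange(window)                # slots a real NU would wait
        raise RuntimeError("excessive collisions; abandoning frame")

    # Scripted channel: the first two attempts collide, the third succeeds.
    outcomes = iter([True, True, False])
    print("collisions before success:", csma_cd_send(lambda: next(outcomes)))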

Journal ArticleDOI
TL;DR: The IPC provides the communication primitives and mechanisms for delivering operations to objects and delivering the results, if any, of operations back to invoking clients.
Abstract: The system uses the object model as the basic system organizing principle. With this model, all system activity can be thought of as operations on a collection of objects managed by the system and organized into classes called types. Examples of object types are documents, processes, and folders. In its simplest terms, the IPC provides the communication primitives and mechanisms for delivering operations to objects and delivering the results, if any, of operations back to invoking clients. Some processes, called object managers, play a special role in implementing objects. The Document Manager and Authentication Manager are examples of managers. Generally, when an operation is invoked on an object, it is delivered to a manager for the object that performs the operation. Every object has a Unique Identifier (UID) that is a fixed-length, structured bit string. It includes fields that specify the object type and the host upon which the object was created, as well as fields that serve to ensure its uniqueness. Although, ultimately, all references to objects are through UIDs, some managers also support symbolic naming. For example, the folder hierarchy of the Document Store implements a symbolic name space by providing a mapping between user-defined symbolic names and object UIDs in order to facilitate user references to objects. The IPC is message oriented, and it supports object-oriented addressing. Operations invoked on objects are sent as messages addressed to the objects. That is, addresses for messages are object UIDs. The object addressed is the operand, and the message data contains the operation and any additional parameters necessary to specify the operation. The role of the IPC is to deliver the message to the manager for the object (a process), which can perform the operation requested. Responses are sent as messages from object managers to requesting clients. When sending a message, a process need not specify the host where the addressed object resides. To deliver the message, the IPC must determine the host. Certain object types are such that …
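
A minimal sketch of this addressing scheme follows; the field layout, names, and the manager-lookup rule are invented here for illustration, since the article gives no code:

    # Messages are addressed to an object's UID; delivery routes to the
    # manager registered for that object's type.
    from dataclasses import dataclass
    import itertools

    _serial = itertools.count(1)

    @dataclass(frozen=True)
    class UID:
        obj_type: str      # e.g. "document", "folder"
        host: str          # host where the object was created
        serial: int        # ensures uniqueness

    managers = {}          # obj_type -> manager callable

    def register_manager(obj_type, handler):
        managers[obj_type] = handler

    def send(message_to, operation, **params):
        """IPC delivery: the sender never names a host or a process."""
        return managers[message_to.obj_type](message_to, operation, params)

    def document_manager(uid, operation, params):
        return f"document {uid.serial}: performed '{operation}' with {params}"

    register_manager("document", document_manager)
    doc = UID("document", "hostA", next(_serial))
    print(send(doc, "append", text="hello"))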


Journal ArticleDOI
TL;DR: The intent is to identify major research developments and to examine their application to engineering data management.
Abstract: … (2) abstraction and property inheritance, (3) object representation, (4) nontraditional data, and (5) knowledge-base management. We concentrate upon facilities for conceptual modeling and information representation. Our intent is to identify major research developments and to examine their application to engineering data management. [Figure 1. Data model classification.] … calculus, the current model endpoint is most often taken as the relational data model. The composition of this model is well understood.2 The knowledge base endpoint is at this time only a set of capabilities, including representing both extensional values as well as more abstract infor…
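
As a hedged sketch of the abstraction-and-property-inheritance facility the survey highlights (the schema and all names below are invented, not the article's notation), a subtype in an engineering data model can inherit the properties of its supertypes along an is_a chain:

    # Each type lists its own properties and its supertype; a subtype's
    # effective properties are collected up the is_a chain.
    schema = {
        "part":     {"is_a": None,       "props": {"part_no": int, "weight": float}},
        "fastener": {"is_a": "part",     "props": {"thread": str}},
        "hex_bolt": {"is_a": "fastener", "props": {"head_size_mm": float}},
    }

    def properties(type_name):
        """Collect properties along the is_a chain, subtype overriding."""
        props = {}
        while type_name is not None:
            node = schema[type_name]
            props = {**node["props"], **props}
            type_name = node["is_a"]
        return props

    print(sorted(properties("hex_bolt")))  # inherits part_no, weight, thread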

Journal Article
Maruyama1
TL;DR: In this paper, the authors present two implementations of exclusive OR that can be successfully compared with the specification, "exclusive OR," by reductio ad absurdum.
Abstract: Verification need not be costly or time-consuming. With the powerful features of Prolog and the use of temporal logic, verification can be cut to several minutes on a mainframe. As more gates are being squeezed into single LSI chips, the accuracy of design is becoming increasingly significant. A chip design error may result in the repetition of a costly manufacturing process to make a new chip. To avoid such expenses, reliable methodologies must be developed to check the total design process. With a complete hardware synthesis system, we would not have to worry about checking designs, yet not one system is available for practical application. Simulation is the most widely used technique for checking hardware designs. In the early stages of design, simulation enables the designer to find and fix errors. In the final stages, however, simulation is not as effective, and some errors can remain hidden. The most serious problem is that simulation does not definitely ensure the conformance of design to specifications. This handicap is the reason we need formal verification. In formal verification, logic is used to precisely describe a logic circuit. Once specifications are described in logic, a theorem prover does the rest of the work. The first step is to compare a combinational circuit with its specifications, a process that is easily translated to a logical expression. Figure 1 shows two implementations of exclusive OR, which can be successfully compared with the specification, "exclusive OR." T. J. Wagner took a further step, reporting on hardware verification by the FOL proof-checker developed at Stanford University.1 His proof of an eight-bit multiplier with 260 steps is excellent, but the designer must still construct a verification with the proof checker, in the same way a proof is done in mathematics. Our goal is automated verification, which up to now has been limited to several special circuits (adder, shifter).2 The idea of automated verification leads us to automated proof by reductio ad absurdum. Suppose a certain condition is represented by proposition P. If we want to verify that this condition always holds for the design, we must prove that no counterexample ever occurs; that is,

    ¬P = false.    (1)

If we can infer condition (1) directly through analysis of logical formulas, verification is successful. If we cannot, another technique is necessary. Tracing causality is a key concept in the DDL Verifier.3-6 Starting from the negation of a proposition, …
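
As a hedged sketch of the refutation idea on the XOR example (the two implementations below are generic gate-level forms, not necessarily the paper's Figure 1), verification amounts to showing that the search for a counterexample to P fails:

    # P: "implementation agrees with the specification". Verification
    # succeeds exactly when no counterexample to P exists.
    from itertools import product

    spec = lambda a, b: a != b                    # exclusive OR

    def impl1(a, b):                              # AND/OR/NOT form
        return (a and not b) or (not a and b)

    def impl2(a, b):                              # four-NAND form
        nand = lambda x, y: not (x and y)
        t = nand(a, b)
        return nand(nand(a, t), nand(b, t))

    def verify(impl):
        counterexamples = [ab for ab in product([False, True], repeat=2)
                           if impl(*ab) != spec(*ab)]
        return "verified" if not counterexamples else f"fails on {counterexamples}"

    print("impl1:", verify(impl1))
    print("impl2:", verify(impl2))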


Journal ArticleDOI
Shahdad1, Lipsett, Marschner, Sheehan, Cohen 
TL;DR: The authors describe the concept of design entity, the language's primary abstraction mechanism, present a time-based execution model, and describe VHDL's features, using a coded four-bit adder to illustrate the use of the most significant ones.
Abstract: In March 1980, the US Department of Defense (DoD) launched the Very High Speed Integrated Circuits program to advance the state of the art in high-speed integrated circuit technology, specifically for defense systems. In 1981, the Institute for Defense Analyses (IDA) arranged a workshop to define the requirements for a standard hardware description language. The DoD used the final report of the IDA workshop as a basis for defining a set of language requirements for the VHSIC Hardware Description Language (VHDL), issuing a request for proposal for a two-phase procurement of VHDL and its support environment. VHDL supports the design, documentation, and efficient simulation of hardware from the digital system level to the gate level. While designed to be independent of any underlying technology, design methodology, or environment tool, the language is also extendable toward various hardware technologies, design methodologies, and the varying information needs of design automation tools. The authors begin their discussion of VHDL by describing the concept of design entity, the language's primary abstraction mechanism. They then present a time-based execution model and describe VHDL's features, using a coded four-bit adder to illustrate the use of the most significant ones. Various figures are presented that contain the block diagrams and the code for this example.
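
The article's running example is a coded four-bit adder. A behavioral stand-in in Python (not the article's VHDL, which is not reproduced here) makes the intended behavior concrete:

    # Ripple-carry four-bit adder built from full adders; bits LSB-first.
    def full_adder(a, b, cin):
        s = a ^ b ^ cin
        cout = (a & b) | (cin & (a ^ b))
        return s, cout

    def four_bit_adder(a_bits, b_bits):
        """Returns the four sum bits plus the final carry out."""
        carry, out = 0, []
        for a, b in zip(a_bits, b_bits):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out, carry

    to_bits = lambda n: [(n >> i) & 1 for i in range(4)]
    s, c = four_bit_adder(to_bits(9), to_bits(5))
    print(s, c)   # 9 + 5 = 14 -> bits [0, 1, 1, 1], carry 0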

Journal ArticleDOI
TL;DR: In this design model, software transformations are first applied to put the algorithm to be implemented into a regular form conducive to systolic implementation, and the steps of allocating operations to hardware, scheduling their execution, and optimizing the design are performed bottom-up, starting with the innermost blocks of the algorithm.
Abstract: The major contribution of this work is a transformational model of systolic design. In this design model, software transformations are first applied to put the algorithm to be implemented into a regular form conducive to systolic implementation. The steps of allocating operations to hardware, scheduling their execution, and optimizing the design are then performed bottom-up, starting with the innermost blocks of the algorithm. We have successfully used this model to rederive several published designs, and it appears suitable for designing complex systolic arrays. This model may help guide manual design, explain systolic algorithms, or capture the design process in the machine where it can benefit from effective automated support.
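
A hedged sketch of the kind of regular structure systolic design targets (a generic one-dimensional FIR array, not one of the paper's rederived designs): weights stay resident in cells while inputs are broadcast and partial sums march one cell per clock tick.

    def systolic_fir(w, x):
        """Weights stay in cells; each input is broadcast; partial sums
        advance one cell to the right per tick."""
        K, regs, out = len(w), [0] * len(w), []
        for t, xt in enumerate(x):
            # Each cell adds w[k]*x to the partial sum its left neighbour
            # produced on the previous tick (regs still holds old values).
            regs = [(regs[k - 1] if k else 0) + w[k] * xt for k in range(K)]
            if t >= K - 1:                 # pipeline full: a result per tick
                out.append(regs[-1])
        return out

    def direct(w, x):                      # the same sums, computed directly
        K = len(w)
        return [sum(w[k] * x[t - (K - 1 - k)] for k in range(K))
                for t in range(K - 1, len(x))]

    w, x = [2, -1, 3], [1, 4, 0, 2, 5]
    assert systolic_fir(w, x) == direct(w, x)
    print(systolic_fir(w, x))              # [-2, 14, 13]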

Journal ArticleDOI
Moriconi1, Hare
TL;DR: This article is an introduction to many of the interesting features of PegaSys, an experimental system that encourages and facilitates extensive use of graphical images as formal, machine- processable documentation.
Abstract: This article is an introduction to many of the interesting features of PegaSys, an experimental system that encourages and facilitates extensive use of graphical images as formal, machine- processable documentation. Unlike most other systems that use graphics to describe programs, the main purpose of PegaSys is to facilitate the explanation of program designs. What is particularly interesting about PegaSys is its ability to: (1) check whether pictures are syntactically meaningful, (2) enforce design rules throughout the hierarchical decomposition of a design, and (3) determine whether a program meets its pictorial documentation. Much of the power of PegaSys stems from its ability to represent and reason about different kinds of pictures within a single logical framework. Excerpts from a working session with PegaSys are used to illustrate the basic style of interaction as well as the three PegaSys capabilities.
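
Of the three capabilities, the design-rule enforcement is the easiest to miniaturize. In this hedged sketch (the component names and the licensing rule are invented, not PegaSys's logical framework), every dependency in a refinement must be licensed by a dependency between the parent components one level up:

    # Top-level picture: which high-level components may interact.
    allowed = {("client", "server")}
    parent = {"ui": "client", "cache": "client",
              "api": "server", "db": "server"}
    refined = [("ui", "api"), ("cache", "api"), ("ui", "cache"), ("api", "ui")]

    def licensed(edge):
        a, b = parent[edge[0]], parent[edge[1]]
        return a == b or (a, b) in allowed   # siblings may always interact

    for e in refined:
        print(e, "ok" if licensed(e) else "violates design rules")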

Journal ArticleDOI
TL;DR: It is argued that designers of the interface between users and computer systems need a toolkit of abstractions that embody human factors knowledge while automatically executing the low-level details of the interaction.
Abstract: There is a consensus among designers on the need for a rigorous separation of the functionality of a computer system from its user interface. The principle remains sterile unless some way is provided to put it into practice. It is argued that designers of the interface between users and computer systems need a toolkit of abstractions that embody human factors knowledge while automatically executing the low-level details of the interaction. We believe that the notion of a user interface toolkit constitutes a reasonable way to enforce this separation. Given this approach, the next goal is the definition of a toolkit that is useful. The author points at some general benefits resulting from the toolkit abstractions. In particular, the user can avoid "communication deadlocks" by running several applications simultaneously; or he can obtain distinct views of an object through the external view mechanism; or, as a last example, he can interact by means of the dialog socket with the various applications on the workstation in a consistent way through a unique (refinable) dialog-handler (or a dialog-handler of his choice). Therefore, the proposed abstractions improve the quality of user interfaces when viewed in the large. Conversely, when viewed in the small, these abstractions cannot be guaranteed 100% "user-friendly": each class of users and each class of tasks have specific requirements that are to be satisfied on a case-by-case basis.
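
A hedged sketch of the separation principle (invented names; the article's toolkit, dialog socket, and external-view mechanism are far richer): the functional core knows nothing about presentation, and distinct, interchangeable views render the same object.

    class Counter:                      # pure functionality, no UI knowledge
        def __init__(self): self.value = 0
        def increment(self): self.value += 1

    class TextView:                     # one interchangeable UI abstraction
        def render(self, model): print(f"count = {model.value}")

    class BarView:                      # a distinct view of the same object
        def render(self, model): print("#" * model.value)

    model, views = Counter(), [TextView(), BarView()]
    for _ in range(3):
        model.increment()
    for v in views:                     # the "external view mechanism" idea
        v.render(model)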

Journal ArticleDOI
Reynolds1, Postel1, Katz1, Finn1, DeSchon1 
TL;DR: With this system users can create messages containing text, image, and voice data, and send such messages to other users in the ARPA Internet; the combination of text, speech, graphics, and facsimile into a common data structure may have substantial impact on other applications as well.
Abstract: The potential for multimedia communication in a computer-assisted environment is great. The ability to annotate diagrams or maps and talk about them increases the effectiveness of remote communication tremendously. The combination of text, speech, graphics, and facsimile into a common data structure may have substantial impact on other applications as well. This article describes the development, implementation, and use of an experimental multimedia mail system developed under the sponsorship of the Defense Advanced Research Projects Agency. About 40 researchers in 10 organizations have contributed to the experiment. With this system users can create messages containing text, image, and voice data, and send such messages to other users in the ARPA Internet.
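
A hedged sketch of the message structure such a system implies (the field names and addresses are invented here; the experiment's actual formats were defined by ARPA Internet protocol documents):

    # One message carries typed parts: text, image, and voice data.
    from dataclasses import dataclass, field

    @dataclass
    class Part:
        media: str        # "text", "image", or "voice"
        data: bytes

    @dataclass
    class Message:
        sender: str
        recipients: list
        parts: list = field(default_factory=list)

        def attach(self, media, data):
            self.parts.append(Part(media, data))

    msg = Message("researcher@hostA", ["colleague@hostB"])
    msg.attach("text", b"See the annotated map.")
    msg.attach("image", b"<bitmap bytes>")
    msg.attach("voice", b"<pcm samples>")
    print([p.media for p in msg.parts])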

Journal ArticleDOI
TL;DR: The present paper provides an evaluation of the state of the art regarding multiprocessing and indicates some major research problems which must be solved before VLSI multipROcessors and parallel processors will become commonplace.
Abstract: New developments related partly to the increasing availability of high-performance, 32-bit microprocessors favor the design of high-performance computing systems made of high-volume, low-cost components. The present paper provides an evaluation of the state of the art regarding multiprocessing and indicates some major research problems that must be solved before VLSI multiprocessors and parallel processors become commonplace. Attention is given to throughput-oriented multiprocessing, high-availability multiprocessing, response-oriented multiprocessing, parallel processing research problems, and future directions. Intrinsically parallel applications are considered along with parallel algorithms, parallel models of computation, parallel programming languages, and novel programming languages. 23 references.