
Showing papers on "Software published in 1990"


Journal ArticleDOI
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this role, and results are given quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Abstract: The success of the von Neumann model of sequential computation is attributable to the fact that it is an efficient bridge between software and hardware: high-level languages can be efficiently compiled on to this model; yet it can be efficiently implemented in hardware. The author argues that an analogous bridge between software and hardware is required for parallel computation if that is to become as widely used. This article introduces the bulk-synchronous parallel (BSP) model as a candidate for this role, and gives results quantifying its efficiency both in implementing high-level language features and algorithms, as well as in being implemented in hardware.

3,885 citations
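
A minimal sketch of the kind of cost accounting the BSP model enables, assuming the commonly used parameters (per-processor work, an h-relation communication pattern, a communication cost g, and a synchronization cost l); the function names and example numbers are illustrative, not taken from the paper:

# Hedged sketch of a BSP-style cost estimate: each superstep is charged its
# maximum local work, plus g times the largest number of messages any
# processor sends or receives (h), plus the barrier synchronization cost l.
def superstep_cost(work_per_proc, h_per_proc, g, l):
    return max(work_per_proc) + g * max(h_per_proc) + l

def program_cost(supersteps, g, l):
    # supersteps: list of (work_per_proc, h_per_proc) pairs
    return sum(superstep_cost(w, h, g, l) for w, h in supersteps)

# Example: two supersteps on four processors with assumed machine parameters.
steps = [([100, 120, 90, 110], [8, 6, 7, 5]),
         ([200, 180, 190, 210], [4, 4, 4, 4])]
print(program_cost(steps, g=2.0, l=50.0))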


Book
01 May 1990
TL;DR: This book aims to explode the notion of the interface as a discrete and tangible thing that the authors can map, draw, design, implement, and attach to an existing bundle of functionality.
Abstract: From the Publisher: "When the concept of the interface first began to emerge, it was commonly understood as the hardware and software through which a human and a computer could communicate. As it has evolved, the concept has come to include the cognitive and emotional aspects of the user's experience as well... The noun, interface, is taken to be a discrete and tangible thing that we can map, draw, design, implement, and attach to an existing bundle of functionality. One of the goals of this book is to explode that notion and replace it with one that can guide our work in the right direction." - From the Introduction

1,170 citations


Book
01 Jan 1990

867 citations


Book
01 Jun 1990
TL;DR: This paper shows that two features of functional languages in particular, higher-order functions and lazy evaluation, can contribute significantly to modularity.
Abstract: As software becomes more and more complex, it is more and more important to structure it well. Well-structured software is easy to write and to debug, and provides a collection of modules that can be reused to reduce future programming costs. In this paper we show that two features of functional languages in particular, higher-order functions and lazy evaluation, can contribute significantly to modularity. As examples, we manipulate lists and trees, program several numerical algorithms, and implement the alpha-beta heuristic (an algorithm from Artificial Intelligence used in game-playing programs). We conclude that since modularity is the key to successful programming, functional programming offers important advantages for software development.

755 citations
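
The role of lazy evaluation and higher-order functions as "glue" between modules can be illustrated with a small sketch; the paper's own examples are written in a lazy functional language, so Python generators stand in for lazy lists here, and the names iterate, within, and newton_sqrt belong to this sketch, not the paper:

# One module produces a lazy, conceptually infinite stream of approximations;
# a separate, reusable module decides when to stop. The higher-order argument
# supplies the iteration step; laziness keeps the two concerns independent.
def iterate(f, x):
    while True:          # infinite stream: x, f(x), f(f(x)), ...
        yield x
        x = f(x)

def within(eps, stream):
    a = next(stream)
    for b in stream:
        if abs(a - b) <= eps:
            return b
        a = b

def newton_sqrt(n, eps=1e-12):
    return within(eps, iterate(lambda x: (x + n / x) / 2, max(n, 1.0)))

print(newton_sqrt(2.0))   # approximately 1.4142135623730951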


Journal ArticleDOI
TL;DR: The actor model as a framework for concurrent systems and some concepts which are useful in building actor systems are discussed and some common patterns of concurrent problem solving are outlined.
Abstract: Three significant trends have underscored the central role of concurrency in computing. First, there is increased use of interacting processes by individual users, for example, application programs running on X windows. Second, workstation networks have become a cost-effective mechanism for resource sharing and distributed problem solving. For example, loosely coupled problems, such as finding all the factors of large prime numbers, have been solved by utilizing idle cycles on networks of hundreds of workstations. A loosely coupled problem is one which can be easily partitioned into many smaller subproblems so that interactions between the subproblems are quite limited. Finally, multiprocessor technology has advanced to the point of providing supercomputing power at a fraction of the traditional cost. At the same time, software engineering considerations such as the need for data abstraction to promote program modularity underlie the rapid acceptance of object-oriented programming methodology. By separating the specification of what is done (the abstraction) from how it is done (the implementation), the concept of objects provides the modularity necessary for programming in the large. It turns out that concurrency is a natural consequence of the concept of objects. In fact Simula, the first object-oriented language, simulated a simple form of concurrency using coroutines on conventional architectures. Current development of concurrent object-oriented programming (COOP) is providing a solid software foundation for concurrent computing on multiprocessors. Future generation computing systems are likely to be based on the foundations being developed by this emerging software technology. The goal of this article is to discuss the foundations and methodology of COOP. Concurrency refers to the potentially parallel execution of parts of a computation. In a concurrent computation, the components of a program may be executed sequentially, or they may be executed in parallel. Concurrency provides us with the flexibility to interleave the execution of components of a program on a single processor, or to distribute it among several processors. Concurrency abstracts away some of the details in an execution, allowing us to concentrate on conceptual issues without having to be concerned with a particular order of execution which may result from the quirks of a given system. Objects can be defined as entities which encapsulate data and operations into a single computational unit. Object models differ in how the internal behavior of objects is specified. Further, models of concurrent computation based on objects must specify how the objects interact, and different design concerns have led to different models of communication between objects. Object-oriented programming builds on the concepts of objects by supporting patterns of reuse and classification, for example, through the use of inheritance which allows all instances of a particular class to share the same method. In the following section, we outline some common patterns of concurrent problem solving. These patterns can be easily expressed in terms of the rich variety of structures provided by COOP. In particular, we discuss the actor model as a framework for concurrent systems and some concepts which are useful in building actor systems. We will then describe some other models of objects and their relation to the actor model along with novel techniques for supporting reusability and modularity in concurrent object-oriented programming.
The last section briefly outlines some major on-going projects in COOP. It is important to note that the actor languages give special emphasis to developing flexible program structures which simplify reasoning about programs. By reasoning we do not narrowly restrict ourselves to the problem of program verification—an important program of research whose direct practical utility has yet to be established. Rather our interest is in the ability to understand the properties of software because of clarity in the structure of the code. Such an understanding may be gained by reasoning either informally or formally about programs. The ease with which we can carry out such reasoning is aided by two factors: by modularity in code which is the result of the ability to separate design concerns, and by the ability to abstract program structures which occur repeatedly. Because of their flexible structure, actor languages are particularly well-suited to rapid prototyping applications.

638 citations
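
A minimal sketch of the actor ideas discussed above (encapsulated state, a mailbox, asynchronous sends, one message processed at a time); this is an illustration using Python threads and queues, not the syntax of any actor language mentioned in the article:

# Minimal actor sketch: each actor has a mailbox and a thread that processes
# one message at a time; send() is asynchronous and buffered.
import threading, queue

class Actor:
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self._mailbox.put(msg)          # asynchronous message passing

    def _run(self):
        while True:
            self.receive(self._mailbox.get())

    def receive(self, msg):             # override to define the actor's behavior
        raise NotImplementedError

class Counter(Actor):
    def __init__(self):
        self.count = 0                  # encapsulated state, touched only by receive()
        super().__init__()

    def receive(self, msg):
        kind, reply_to = msg
        if kind == "inc":
            self.count += 1
        elif kind == "get":
            reply_to.put(self.count)    # reply via a channel supplied in the message

c = Counter()
for _ in range(3):
    c.send(("inc", None))
ans = queue.Queue()
c.send(("get", ans))
print(ans.get())                        # 3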


Patent
24 Apr 1990
TL;DR: In this article, the authors describe a system for renting computer software which derives use and billing information, prevents unauthorized use, maintains integrity of the software and controls related intercomputer communications.
Abstract: Remote control of the use of computer data is described in a system for renting computer software which derives use and billing information, prevents unauthorized use, maintains integrity of the software and controls related intercomputer communications. A user at a target computer "downloads" programs or data, via a telephone line and remote control modules, from a host computer. Usage of the programs or data by the target computer or other accounting data are recorded and stored and, at predetermined times, the host computer "uploads" the usage data for processing. Other features include: (1) software and usage security for rental programs; (2) a polynomial generator/checker for generating block check characters for assuring integrity of data transmitted and received; (3) a voice-data switch for switching between data communication and normal telephone communication; and (4) an audio amplifier and speaker for monitoring of activity on the communication line during data transfers.

564 citations
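
The abstract's "polynomial generator/checker for generating block check characters" suggests a cyclic-redundancy-style check. Below is a generic hedged sketch; the polynomial (0x07), the 8-bit width, and the block layout are assumptions made for illustration, since the patent abstract does not specify them:

# Generic MSB-first CRC-8: the sender appends the block check character and the
# receiver recomputes it over the same block to verify integrity.
def crc8(data: bytes, poly: int = 0x07, init: int = 0x00) -> int:
    reg = init
    for byte in data:
        reg ^= byte
        for _ in range(8):
            reg = ((reg << 1) ^ poly) if reg & 0x80 else (reg << 1)
            reg &= 0xFF
    return reg

block = b"downloaded rental program data"
bcc = crc8(block)            # block check character transmitted with the block
assert crc8(block) == bcc    # receiver's check passes when the block is intact
print(hex(bcc))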


Patent
08 Nov 1990
TL;DR: A telephone configured as a programmable microcomputer (2) (telephone-computer) which operates in most circumstances through a standard telephone 12-key keypad input is described in this paper.
Abstract: A telephone configured as a programmable microcomputer (2) (telephone-computer) which operates in most circumstances through a standard telephone 12-key keypad input (3). The telephone-computer (2) has the overall appearance of a telephone and includes telephone electronics and a microprocessor unit operated in conjunction with other computer elements, including memory devices, a programmable gate array (PGA) chip which can be initially programmed and then fixed, and enhanced integrity features. The PGA has the capability of being configured to accommodate various types of software which require different hardware configurations, but without actually reconfiguring the hardware. The telephone-computer (2) delivers data processing capabilities and services through an ordinary telephone instrument via conventional telephone lines (78) with a network host computer (68) which communicates with a vast panoply of service bureaus (80a-80d). Specifically, operating software is downloaded to the telephone-computer (2) by the network host computer (68) to format the microcomputer to conform to the software format used by the service bureaus (80a-80d).

388 citations


Book
03 Jan 1990
TL;DR: The first book ever to cover neural network tools at the PC level, it provides background information on available network architectures, offers schemes for deciding if a particular problem is appropriate for neural networks, and gives practical guidelines for implementing a neural network in hardware and software at thePC level.
Abstract: This book is written for engineers and computer scientists interested in solving practical problems with neural network tools. The first book ever to cover neural network tools at the PC level, it provides background information on available network architectures, offers schemes for deciding if a particular problem is appropriate for neural networks, and gives practical guidelines for implementing a neural network in hardware and software at the PC level.

372 citations


Journal ArticleDOI
TL;DR: The reliability and trustworthiness of software remain among the most controversial issues facing this age of high technology, as discussed by the authors, who present some of the crucial questions faced by software programmers and eventual users.
Abstract: Methods and approaches for testing the reliability and trustworthiness of software remain among the most controversial issues facing this age of high technology. The authors present some of the crucial questions faced by software programmers and eventual users.

340 citations


Journal ArticleDOI
TL;DR: The software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is WHAT does the system do?
Abstract: In software engineering, the traditional description of the software life cycle is based on an underlying model, commonly referred to as the “waterfall” model (e.g., [4]). This model initially attempts to discretize the identifiable activities within the software development process as a linear series of actions, each of which must be completed before the next is commenced. Further refinements to this model appreciate that such completion is seldom absolute and that iteration back to a previous stage is likely. Various authors' descriptions of this model relate to the detailed level at which the software building process is viewed. At the most general level, three phases to the life cycle are generally agreed upon: 1) analysis, 2) design and 3) construction/implementation (e.g., [36], p. 262; [42]) (Figure 1(a)). The analysis phase covers from the initiation of the project, through to users-needs analysis and feasibility study (cf. [15]); the design phase covers the various concepts of system design, broad design, logical design, detailed design, program design and physical design. Following from the design stage(s), the computer program is written, the program tested, in terms of verification, validation and sensitivity testing, and when found acceptable, put into use and then maintained well into the future.In the more detailed description of the life cycle a number of subdivisions are identified (Figure 1(b)). The number of these subdivisions varies between authors. In general, the problem is first defined and an analysis of the requirements of current and future users undertaken, usually by direct and indirect questioning and iterative discussion. Included in this stage should be a feasibility study. Following this a user requirements definition and a software requirements specification, (SRS) [15], are written. The users requirements definition is in the language of the users so that this can be agreed upon by both the software engineer and the software user. The software requirements specification is written in the language of the programmer and details the precise requirements of the system. These two stages comprise an answer to the question of WHAT? (viz. problem definition). The user-needs analysis stage and examination of the solution space are still within the overall phase of analysis but are beginning to move toward not only problem decomposition, but also highlighting concepts which are likely to be of use in the subsequent system design; thus beginning to answer the question HOW? On the other hand, Davis [15] notes that this division into “what” and “how” can be subject to individual perception, giving six different what/how interpretations of an example telephone system. At this requirements stage, however, the domain of interest is still very much that of the problem space. Not until we move from (real-world) systems analysis to (software) systems design do we move from the problem space to the solution space (Figure 2). It is important to observe the occurrence and location of this interface. As noted by Booth [6], this provides a useful framework in object-oriented analysis and design.The design stage is perhaps the most loosely defined since it is a phase of progressive decomposition toward more and more detail (e.g., [41]) and is essentially a creative, not a mechanistic, process [42]. Consequently, systems design may also be referred to as “broad design” and program design as “detailed design” [20]. Brookes et al. 
[9] refer to these phases as “logical design” and “physical design.” In the traditional life cycle these two design stages can become both blurred and iterative; but in the object-oriented life cycle the boundary becomes even more indistinct. The software life cycle, as described above, is frequently implemented based on a view of the world interpreted in terms of a functional decomposition; that is, the primary question addressed by the systems analysis and design is WHAT does the system do, viz. what is its function? Functional design, and the functional decomposition techniques used to achieve this, is based on the interpretation of the problem space and its translation to solution space as an interdependent set of functions or procedures. The final system is seen as a set of procedures which, apparently secondarily, operate on data. Functional decomposition is also a top-down analysis and design methodology. Although the two are not synonymous, most of the recently published systems analysis and design methods exhibit both characteristics (e.g., [14, 17]) and some also add a real-time component (e.g., [44]). Top-down design does impose some discipline on the systems analyst and program designer; yet it can be criticized as being too restrictive to support contemporary software engineering designs. Meyer [29] summarizes the flaws in top-down system design as follows: 1. top-down design takes no account of evolutionary changes; 2. in top-down design, the system is characterized by a single function—a questionable concept; 3. top-down design is based on a functional mindset, and consequently the data structure aspect is often completely neglected; 4. top-down design does not encourage reusability. (See also discussion in [41], p. 352 et seq.)

311 citations


Proceedings ArticleDOI
17 Jun 1990
TL;DR: Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines.
Abstract: The motivation for the X1 architecture described was to develop inexpensive commercial hardware suitable for solving large, real-world problems. Such an architecture must be systems oriented and flexible enough to execute any neural network algorithm and work cooperatively with existing hardware and software. The early application of neural networks must proceed in conjunction with existing technologies, both hardware and software. Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines. The author has aimed at a particular set of applications and has made cost-performance tradeoffs accordingly. The goal is an architecture that could be considered a general-purpose microprocessor for neurocomputing.

Patent
07 Sep 1990
TL;DR: In this article, the authors describe a remote diagnostic and monitoring system and method for use with an operation system having a multi-tasking interface to a real-time control system, comprising a remote diagnostic task operating in an interface computer operatively connected to the operation system, a communication link, and a remote service computer coupled to the interface computer via the communication link.
Abstract: A remote diagnostic and monitoring system and method for use with an operation system having a multi-tasking interface to a real-time control system, the remote diagnostic and monitoring system and method comprising a remote diagnostic task operating in an interface computer operatively connected to the operation system, a communication link, and a remote service computer coupled to the interface computer via the communication link. The remote diagnostic task operating on the interface computer can initiate and monitor from a remote location any of the procedures and controls run on the interface computer for controlling and monitoring the operation system. The remote diagnostic task receives instructions from a person at the remote service computer through the communications link and injects these instructions into the operating software of the operation system in a manner such that the remote source, i.e. the remote service computer, is indistinguishable from a local source. The remote diagnostic task is grouped or included with functionally partitioned application tasks in the interface computer and uses the multitasking operating system of the interface computer in the same manner as the application tasks.

Patent
29 Jun 1990
TL;DR: In this article, a low power management system including both hardware and software is provided for a battery powered portable computer, which includes the capability to turn off clock signals (not shown) to various sections of the computer based upon demand.
Abstract: A low power management system including both hardware (Figure 2) and software (Figure 5) is provided for a battery powered portable computer (not shown). The low power management system powers down various sections (10, 12, 16, 21, 24) of the computer (DMA, VCO, DISPLAY, UART) when they are not used. The low power management system is controlled by a control program directing the microprocessor (40) (Figure 2) of the computer, and includes the capability to turn off clock signals (not shown) to the various sections of the computer based upon demand. Also included is the capability to turn on clock signals based upon demand. The low power management system also includes the capability to turn on the computer upon a press of a key on the computer keyboard. The low power management system monitors software application programs for keyboard activity so as to turn off the microprocessor in the computer in response to a loop (10, 16) looking for a keypress and certain other loops which can be monitored without use of the microprocessor.
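
A rough software-only illustration of the demand-based clock gating the abstract describes; the section names follow the abstract (DMA, VCO, DISPLAY, UART), but the class and its policy are assumptions of this sketch, not the patent's control program:

# Illustrative simulation of demand-based clock gating: each section's clock is
# enabled only while something declares a demand for it, and gated when idle.
class PowerManager:
    SECTIONS = ("DMA", "VCO", "DISPLAY", "UART")

    def __init__(self):
        self.demand = {s: 0 for s in self.SECTIONS}
        self.clock_on = {s: False for s in self.SECTIONS}

    def request(self, section):
        self.demand[section] += 1
        self.clock_on[section] = True       # turn the clock on upon demand

    def release(self, section):
        self.demand[section] = max(0, self.demand[section] - 1)
        if self.demand[section] == 0:
            self.clock_on[section] = False  # gate the clock when no longer needed

pm = PowerManager()
pm.request("UART"); print(pm.clock_on["UART"])   # True while a transfer is active
pm.release("UART"); print(pm.clock_on["UART"])   # False once the section is idle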

Patent
30 Jan 1990
TL;DR: The data-flow architecture and software environment for high-performance signal and data processing described in this paper is based on a data-flow processor whose multiple processing elements are connected in a three-dimensional bussed packet routing network.
Abstract: A data-flow architecture and software environment for high-performance signal and data processing. The programming environment allows applications coding in a functional high-level language 20 which a compiler 30 converts to a data-flow graph form 40 which a global allocator 50 then automatically partitions and distributes to multiple processing elements 80, or in the case of smaller problems, coding in a data-flow graph assembly language so that an assembler 15 operates directly on an input data-flow graph file 13 and produces an output which is then sent to a local allocator 17 for partitioning and distribution. In the former case a data-flow processor description file 45 is read into the global allocator 50, and in the latter case a data-flow processor description file 14 is read into the assembler 15. The data-flow processor 70 consists of multiple processing elements 80 connected in a three-dimensional bussed packet routing network. Data enters and leaves the processor 70 via input/output devices 90 connected to the processor. The processing elements are designed for implementation in VLSI (very large scale integration) to provide real-time processing with very large throughput. The modular nature of the computer allows adding more processing elements to meet a range of throughput and reliability requirements. Simulation results have demonstrated high-performance operation, with over 64 million operations per second being attainable using only 64 processing elements.
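
The pipeline the abstract describes (high-level code compiled to a data-flow graph, partitioned across processing elements, with nodes firing when their inputs arrive) can be sketched under assumed data structures; the Node class, the round-robin allocation policy, and the example graph are this sketch's assumptions, not the patent's format:

# Hedged sketch of a data-flow graph whose nodes fire when all inputs are
# present, plus a trivial allocator that assigns nodes to processing elements.
import operator

class Node:
    def __init__(self, name, fn, inputs):
        self.name, self.fn, self.inputs = name, fn, inputs   # inputs: upstream node names

def allocate(nodes, num_pes):
    # stand-in for the global allocator: round-robin partitioning across PEs
    return {n.name: i % num_pes for i, n in enumerate(nodes)}

def execute(nodes, sources):
    # data-driven execution: a node fires once all of its input tokens exist
    values, pending = dict(sources), {n.name: n for n in nodes}
    while pending:
        for name, n in list(pending.items()):
            if all(i in values for i in n.inputs):
                values[name] = n.fn(*(values[i] for i in n.inputs))
                del pending[name]
    return values

graph = [Node("sum", operator.add, ["a", "b"]),
         Node("scaled", operator.mul, ["sum", "gain"])]
print(allocate(graph, num_pes=4))                                    # node -> PE map
print(execute(graph, {"a": 1.0, "b": 2.0, "gain": 10.0})["scaled"])  # 30.0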

Journal ArticleDOI
TL;DR: This work has developed radiograph reconstruction software that takes full advantage of the contrast and spatial detail inherent in the original CT data by using a ray casting algorithm which explicitly takes into account every intersected voxel, and a heuristic approach for approximating the images that would result from purely photoelectric or Compton interactions.
Abstract: The increasing use of 3-dimensional radiotherapy treatment design has created greater reliance on methods for computing images from CT data which correspond to the conventional simulation film. These images, known as computed or digitally reconstructed radiographs, serve as reference images for verification of computer-designed treatments. Used with software that registers graphic overlays of target and anatomic structures, digitally reconstructed radiographs are also valuable tools for designing portal shape. We have developed radiograph reconstruction software that takes full advantage of the contrast and spatial detail inherent in the original CT data. This goal has been achieved by using a ray casting algorithm which explicitly takes into account every intersected voxel, and a heuristic approach for approximating the images that would result from purely photoelectric or Compton interactions. The software also offers utilities to superimpose outlines of anatomic structures, field edges, beam crosshairs, and linear scales on digitally reconstructed radiographs. The pixel size of the computed image can be controlled, and several methods of interslice interpolation are offered. The software is written in modular format in the C language, and can stand alone or interface with other treatment planning software.
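
A minimal sketch of the ray-casting idea described above: accumulate CT values along a ray through the volume to form one pixel of a digitally reconstructed radiograph. It uses uniform sampling and a toy random volume for illustration, not the authors' exact per-voxel traversal or their photoelectric/Compton weighting:

# Hedged sketch: sum CT values along a ray through a voxel volume; one call
# per image pixel would build up a digitally reconstructed radiograph.
import numpy as np

def cast_ray(volume, origin, direction, step=0.5, n_steps=400):
    direction = np.asarray(direction, float)
    direction /= np.linalg.norm(direction)
    pos = np.asarray(origin, float)
    total = 0.0
    for _ in range(n_steps):
        i, j, k = np.floor(pos).astype(int)
        if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]):
            total += volume[i, j, k] * step   # attenuation contribution of this sample
        pos = pos + direction * step
    return total

# Toy CT volume and a single ray from a virtual source through it.
ct = np.random.default_rng(0).uniform(0.0, 1.0, size=(64, 64, 64))
print(cast_ray(ct, origin=(-10.0, 32.0, 32.0), direction=(1.0, 0.0, 0.0)))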

Book
01 Oct 1990
TL;DR: The concept of PAM, Programmable Active Memory, is introduced and results obtained with the Perle-0 prototype board are presented, featuring a software silicon foundry for a 50K gate array with a 50 millisecond turn-around time.
Abstract: We introduce the concept of PAM, Programmable Active Memory, and present results obtained with our Perle-0 prototype board, featuring: a software silicon foundry for a 50K gate array, with a 50 millisecond turn-around time; a universal machine of 3000 one-bit processors with an arbitrary interconnect structure specified by 400K bits of nano-code; and a programmable hardware co-processor with an initial library including a long multiplier, an image convolver, a data compressor, etc. Each of these hardware designs speeds up the corresponding software application by at least an order of magnitude.

Dissertation
01 Jan 1990
TL;DR: This dissertation extends her analysis by examining the co-adaptive relationship between users and user-customizable software: users both adapt to the available technology and appropriate the technology, adapting it over time.
Abstract: Co-adaptive phenomena are defined as those in which the environment affects human behavior and at the same time, human behavior affects the environment. Such phenomena pose theoretical and methodological challenges and are difficult to study in traditional ways. However, some aspects of the interaction between people and technology only make sense when such phenomena are taken into account. In this dissertation, I postulate that the use of information technology is a coadaptive phenomenon. I also argue that customizable software provides a particularly good testbed for studying co-adaptation because individual patterns of use are encoded and continue to influence user behavior over time. The possible customizations are constrained by the design of the software but may also be modified by users in unanticipated ways, as they appropriate the software for their own purposes. Because customization patterns are recorded in files that can be shared among users, these customizations may act to informally establish and perpetuate group norms of behavior. They also provide a mechanism by which individual behavior can influence global institutional properties and future implementations of the technology. The presence of these sharable artifacts makes it easier to study customization than related co-adaptive phenomena such as learning and user innovation. Because some mechanisms may be the same for all co-adaptive phenomena, findings about use of customizable software may also shed light on user's choices about when to learn new software and when to innovate. Current research models do not provide useful ways of exploring co-adaptive phenomena, thus requiring new research models and methods. Research on technology and organizations commonly follows one of two theoretical models, each of which is inadequate to account for how users customize software. One treats technology as a static, independent variable, which influences the behavior of the people in the organization. The other treats the organization as the independent variable, in which decision-makers in an organization make strategic choices about technology and appropriate it for their own purposes. The structurational model proposed by Orlikowski (1989) takes both perspectives into account and incorporates an active role by individuals in the organization. This dissertation extends her analysis by examining the co-adaptive relationship between users and user-customizable software: users both adapt to the available technology and appropriate the technology, adapting it over time. These appropriations may take the form of user innovations which may change both the technology itself and the characteristics of the organization, such as who communicates with whom and how coordination of work processes is handled. The theoretical model and evidence for co-adaptation is first illustrated with data from a two-year study of the Information Lens, a software application that allows users to customize the process of managing their electronic mail. I describe the development of the Information Lens and identify the interactions between the technology and individual users in the context of the organization. I also examine the individual patterns of use of Lens rules and trace patterns of sharing of rules among members of the organization. I then examine user customization of software in greater detail, in a study of Unix users at MIT's Project Athena. 
The data consist of interviews and records of customization files of 51 members of the Project Athena staff. The data are presented from the perspective of the structurational model, with a micro-level analysis of the customization decisions by individual users. The key findings include:
1. The specific identification of the interaction between users and customizable software as a co-adaptive phenomenon, supported by field data.
2. The theoretical linking of co-adaptive phenomena and the structurational model and evidence for a mechanism by which individual interactions with technology affect the organization.
3. The discovery of common patterns of customization:
a. Users are most likely to customize when they first join an organization, which is when they know the least about the technology and their eventual use of it.
b. Customization activities are often conducted as a way to explore a new software environment.
c. Users attempt to incorporate their current work context into their customizations.
d. Over time, most users make fewer and fewer customizations, regardless of level of technical expertise.
e. Some external events, especially those that cause users to reflect upon their use of the software, increase the probability that users will customize.
f. Users who customize like to maintain the same environment, even when the software changes. They will either retrofit the new software to be like the old or refuse to use it at all.
g. The most common on-going customization occurs when the user becomes aware of a commonly-repeated pattern of behavior and encodes it as a customization.
4. Customization cannot be considered a primarily individual activity. The following patterns of sharing occurred:
a. Users are most likely to borrow customization files when they first join the organization. These files are rarely evaluated for effectiveness and may have been created many years ago.
b. A small group of highly technical individuals act as lead users of new technology. They are the first to explore new software and create a set of customization files that other people then borrow. However, the authors of these files receive little or no feedback as to the effectiveness or use of these files.
c. Less technical individuals take on the role of translators for other members in their groups. They interpret individual users' needs and create sets of customizations organized to meet those needs.
I conclude with a discussion of the theoretical implications, including support for and elaboration of the structurational model and the beginning of a theory of the use of customizable software. I propose changes in the software development process (to include observation of use in the field as an important input to future development), in software design (to include mechanisms that support reflection about use of the software and mechanisms for sharing of customizations), and for managers (to support periodic "maintenance" of skills and to support translators and help them provide more effective customizations for others in the organization).

01 Jan 1990
TL;DR: In this article, the authors propose a new approach to software development which explicitly avoids the use of a single representation scheme or common schema, instead, multiple ViewPoints are utilised to partition the domain information, the development method and the formal representations used to express software specifications.
Abstract: In this paper we propose a new approach to software development which explicitly avoids the use of a single representation scheme or common schema. Instead, multiple ViewPoints are utilised to partition the domain information, the development method and the formal representations used to express software specifications. System specifications and methods are then described as configurations of related ViewPoints. This partitioning of knowledge facilitates distributed development, the use of multiple representation schemes and scalability. Furthermore, the approach is general, covering all phases of the software process from requirements to evolution. This paper motivates and systematically characterises the concept of a "ViewPoint", illustrating the concepts using a simplified example.
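
A hypothetical sketch of the partitioning the paper proposes: the system specification is held as a configuration of ViewPoints, each bundling its own representation scheme, its slice of the domain, and a partial specification. The field names below are assumptions for illustration, not the paper's exact ViewPoint template:

# Hedged sketch: a system specification as a configuration of ViewPoints rather
# than a single common schema. Field names are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class ViewPoint:
    owner: str                  # the development participant responsible
    style: str                  # representation scheme used (e.g. "dataflow", "statechart")
    domain: str                 # the slice of the problem this ViewPoint covers
    specification: list = field(default_factory=list)   # statements in that style
    relationships: list = field(default_factory=list)   # links to other ViewPoints

spec = [
    ViewPoint("analyst-1", "dataflow", "billing",
              specification=["invoice -> validate -> post"]),
    ViewPoint("analyst-2", "statechart", "account lifecycle",
              specification=["open -> active -> closed"]),
]
# The overall system description is the configuration of related ViewPoints,
# not a merged model in one notation.
print([(v.owner, v.style, v.domain) for v in spec])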

Book ChapterDOI
01 Jan 1990
TL;DR: Currently, there are a number of software programs designed to provide three-dimensional reformatting of computed tomography (CT) and to operate on computer equipment supplied with CT acquisition systems, but these programs are subordinate to the reconstruction programs necessary for slice display.
Abstract: Currently, there are a number of software programs designed to provide three-dimensional reformatting of computed tomography (CT) and to operate on computer equipment supplied with CT acquisition systems. These programs, unless operating on an independent work station, are subordinate to the reconstruction programs necessary for slice display. This method facilitates handling of the great volume of data to be processed. Consequently, when reference is made to a work station, it is a dedicated work station, currently the IIS (Dimensional Medicine, Inc., Minnetonka, Minnesota).

Book ChapterDOI
Richard A. Robb
01 Jan 1990
TL;DR: A comprehensive software system which permits detailed investigation and structured evaluation of 3-D and 4-D biomedical images, and the inclusion of a variety of semiautomatic segmentation, quantitative mensuration, and process design tools (macros) significantly extends the usefulness of the software.
Abstract: A comprehensive software system called ANALYZE has been developed which permits detailed investigation and structured evaluation of 3-D and 4-D biomedical images. This software system can be used with any 3-D imaging modality, including x-ray computed tomography, radionuclide emission tomography, ultrasound tomography, magnetic resonance imaging, and both light and electron microscopy. The system is a synergistic integration of fully interactive modules for direct display, manipulation and measurement of multidimensional image data. Several original algorithms have been developed to optimize the tradeoffs between image display efficiency and quality. One of these is a versatile, interactive volume rendering algorithm. The inclusion of a variety of semiautomatic segmentation, quantitative mensuration, and process design tools (macros) significantly extends the usefulness of the software. It can be used as a “visualization workshop” to prototype custom applications. ANALYZE runs on standard UNIX computers without special-purpose hardware, which has facilitated its implementation on a variety of popular workstations, in both standalone and distributed network configurations.

Journal ArticleDOI
TL;DR: A software package that provides an interactive and graphical environment for surface acoustic wave (SAW) and plate-mode propagation studies in arbitrarily oriented anisotropic and piezoelectric multilayers is described.
Abstract: A software package that provides an interactive and graphical environment for surface acoustic wave (SAW) and plate-mode propagation studies in arbitrarily oriented anisotropic and piezoelectric multilayers is described. The software, which runs on an IBM PC with math coprocessor, is based on a transfer-matrix formulation for calculating the characteristics of SAW propagation in multilayers that was originally written for a mainframe computer. The menu-driven software will calculate wave velocities and field variable variations with depth for any desired propagation direction: the graphics capability provides a simultaneous display of slowness or velocity and of SAW Δv/v coupling constant curves, and their corresponding field profiles in either polar or Cartesian coordinates, for propagation in a selected plane or as a function of one of the Euler angles. The program generates a numerical data file containing the calculated velocities and field profile data. Examples illustrating the usefulness of the software in the study of various SAW and plate structures are presented.

Journal ArticleDOI
TL;DR: A prototype system, the Integrated Building Design Environment, has been implemented to act as a testbed for a number of issues that arise from the need to integrate computer aids for the design and construction of buildings.
Abstract: A prototype system, the Integrated Building Design Environment (IBDE), has been implemented to act as a testbed for a number of issues that arise from the need to integrate computer aids for the design and construction of buildings. The seven programs that comprise IBDE are briefly described. Issues such as knowledge representation, data organization, intercommunication, implementation and control are discussed.

Journal ArticleDOI
TL;DR: In this article, an architecture for hard-wired data-flow algorithms which is based on the transmission of arithmetic data one digit at a time serially, and performance of operations digit-serially on that data is presented.
Abstract: An architecture is presented for hard-wired data-flow algorithms which is based on the transmission of arithmetic data one digit at a time serially, and performance of operations digit-serially on that data. It is shown that digit-serial computation gives rise to particularly efficient chip designs, and that choice of digit-size allows the user to match throughput requirements to specifications. Details of the implementation of the individual operators as a cell-library of silicon CMOS circuits are given and mention is made of the software environment (silicon compiler) which allows the rapid translation of algorithms to integrated circuits.
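
Digit-serial operation (one digit of each operand per clock, least-significant digit first, with only the carry held between cycles) can be modelled in software as below; the radix-16 digit size is an assumed example of the digit-size choice the abstract mentions:

# Hedged software model of a digit-serial adder: operands arrive one digit per
# "clock", least-significant digit first; the carry is the only state kept
# between cycles. The digit size (radix) is a design parameter.
def digit_serial_add(a_digits, b_digits, radix=16):
    carry = 0
    for a, b in zip(a_digits, b_digits):      # one pair of digits per cycle
        s = a + b + carry
        yield s % radix                       # digit emitted this cycle
        carry = s // radix
    yield carry                               # final carry out

def to_digits(x, n):
    return [(x >> (4 * i)) & 0xF for i in range(n)]   # radix-16 digits, LSD first

a, b = 0xBEEF, 0x1234
digits = list(digit_serial_add(to_digits(a, 4), to_digits(b, 4)))
print(hex(sum(d << (4 * i) for i, d in enumerate(digits))), hex(a + b))  # both 0xd123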

Patent
19 Jan 1990
TL;DR: A post production offline editing system for storing unedited video takes in a random access memory (which is preferably a set of laser video disk players), displaying selected takes (or individual frames from selected takes), and generating an edit list which defines an edited video program.
Abstract: A post production offline editing system for storing unedited video takes in a random access memory (which is preferably a set of laser video disk players), displaying selected takes (or individual frames from selected takes), and generating an edit list which defines an edited video program. The system includes a computer programmed with software providing an integrated software environment which enables a user conveniently to log unedited takes into the system, and to generate an edit list suitable for use in a subsequent online editing operation. The system software provides global access to a variety of video post production environments at any point during an offline editing operation. The system software presents menus to the user including icons or mnemonic text in windows which may be conveniently selected by the user using a mouse. In a preferred embodiment, the invention includes a video special effects unit capable of processing the stored takes to simulate various video transitions between scenes (such as dissolves, fades, and wipes), to enable the user to view a show defined by an edit list which specifies such transitions. The user interface includes a convenient means for jogging (and shuttling) the laser disk players using a mouse.
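
The edit list the system generates can be pictured, in hedged form, as an ordered set of events naming a logged take, its in and out points, and the transition into the event; the field names below are assumptions for illustration, not the patent's actual edit-list format:

# Hedged sketch of an offline edit-list entry; the offline system records these
# decisions, and a later online session conforms the program from the originals.
from dataclasses import dataclass

@dataclass
class EditEvent:
    take_id: str              # which logged, unedited take is used
    source_in: int            # first frame of the take used
    source_out: int           # last frame of the take used
    transition: str = "cut"   # "cut", "dissolve", "fade", "wipe", ...
    transition_frames: int = 0

edit_list = [
    EditEvent("take_012", 100, 340),
    EditEvent("take_047", 20, 180, transition="dissolve", transition_frames=30),
]
program_length = sum(e.source_out - e.source_in + 1 for e in edit_list)
print(program_length)   # frames in the edited program (ignoring transition overlap)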

Journal ArticleDOI
TL;DR: The hypothesis is that the successful dissemination and reuse of classes requires a well-organized community of developers who are ready to share ideas, methods, tools and code, and these communities should be supported by software information systems which manage and provide access to class collections.
Abstract: Object-oriented programming may engender an approach to software development characterized by the large-scale reuse of object classes. Large-scale reuse is the use of a class not just by its original developers, but by other developers who may be from other organizations, and may use the classes over a long period of time. Our hypothesis is that the successful dissemination and reuse of classes requires a well-organized community of developers who are ready to share ideas, methods, tools and code. Furthermore, these communities should be supported by software information systems which manage and provide access to class collections. In the following sections we motivate the need for software communities and software information systems. The bulk of this article discusses various issues associated with managing the very large class collections produced and used by these communities.

Journal ArticleDOI
Zvi Har'el, Robert P. Kurshan
TL;DR: A way to develop and implement communications protocols so they are logically sound and meet stated requirements is described, and the experience of applying this methodology to develop a new session protocol at an interface of an AT&T product called the Trunk Operations Provisioning Administration System (TOPAS) is recounted.
Abstract: We describe a way to develop and implement communications protocols so they are logically sound and meet stated requirements. Our methodology employs a software system called the coordination-specification analyzer (COSPAN) to facilitate logical testing (in contrast to simulation or system execution testing). Logical testing of a communications protocol is carried out on a succession of models of the protocol. Starting with a high-level model (e.g., a formal abstraction of a protocol standard), successively more refined (detailed) models are created. This succession ends with a low-level model which is in fact the code that runs the ultimate implementation of the protocol. Tests of successive models are defined not by test vectors, but by user-defined behavioral requirements appropriate to the given level of abstraction. Testing a high-level design permits early detection and correction of design errors. Successive refinement is carried out in a fashion that guarantees properties proved at one level of abstraction hold in all successive levels of abstraction. We recount the experience of an application of this methodology, employing COSPAN, to develop (analyze and implement in software) a new session protocol at an interface of an AT&T product called the Trunk Operations Provisioning Administration System (TOPAS).
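
The flavor of logical testing described above (checking a user-defined behavioral requirement against a model of the protocol rather than running test vectors) can be sketched as an exhaustive search over a toy state machine; this is an illustration of the idea only, not COSPAN's algorithm, its input language, or the TOPAS protocol:

# Hedged sketch: exhaustive exploration of a toy session-protocol model against
# a behavioral requirement ("from every reachable state, the session can still
# reach 'established'").
from collections import deque

TRANSITIONS = {                      # toy protocol model
    "idle":        ["requesting"],
    "requesting":  ["established", "refused"],
    "refused":     ["idle"],
    "established": ["closing"],
    "closing":     ["idle"],
}

def reachable(start):
    seen, work = {start}, deque([start])
    while work:
        s = work.popleft()
        for t in TRANSITIONS.get(s, []):
            if t not in seen:
                seen.add(t)
                work.append(t)
    return seen

ok = all("established" in reachable(s) for s in reachable("idle"))
print("requirement holds" if ok else "requirement violated")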

Proceedings ArticleDOI
TL;DR: The various roles and responsibilities of the view-sharing software that must be considered during its design and evaluation are discussed: view management, floor control, conference registration by participants, and handling of meta-level communications.
Abstract: Although work is frequently collaborative, most computer-based activities revolve around software packages designed to be used by one person at a time. To get around this, people working together often talk and gesture around a computer screen, perhaps taking turns interacting with the running “single-user” application by passing the keyboard around. However, it is technically possible to share these unaltered applications—even though they were originally designed for a single user only—across physically different workstations through special view-sharing software. Each person sees the same image of the running application on their own screen, and has an opportunity to interact with it by taking turns. This paper discusses the various roles and responsibilities of the view-sharing software that must be considered during its design and evaluation: view management, floor control, conference registration by participants, and handling of meta-level communications. A brief survey of existing shared view systems is provided for background.
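
Floor control, one of the responsibilities listed above, decides whose input reaches the shared single-user application at any moment. Below is a minimal sketch with an assumed explicit-release, first-come queue policy, not the protocol of any surveyed system:

# Hedged sketch of explicit-release floor control: only the floor holder's
# keyboard/mouse events are forwarded to the shared single-user application;
# other participants queue until the floor is released.
from collections import deque

class FloorControl:
    def __init__(self):
        self.holder = None
        self.waiting = deque()

    def request(self, participant):
        if self.holder is None:
            self.holder = participant
        elif participant != self.holder and participant not in self.waiting:
            self.waiting.append(participant)

    def release(self, participant):
        if participant == self.holder:
            self.holder = self.waiting.popleft() if self.waiting else None

    def accepts_input_from(self, participant):
        return participant == self.holder

fc = FloorControl()
fc.request("alice"); fc.request("bob")
print(fc.accepts_input_from("bob"))    # False: alice holds the floor
fc.release("alice")
print(fc.accepts_input_from("bob"))    # True: the floor passed to bob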

Patent
20 Jul 1990
TL;DR: In this paper, hardware error processing is undertaken to analyze the source of the error and to preserve sufficient information to allow later software error processing and, for certain errors, complete recovery without interruption of the sequence of instruction execution.
Abstract: Hardware error processing is undertaken to analyze the source of the error and to preserve sufficient information to allow later software error processing. The hardware error processing also allows, for certain errors, complete recovery without interruption of the sequence of instruction execution.

Patent
17 Aug 1990
TL;DR: In this article, the creation and ordering of custom business forms is simplified for short-run orders, allowing the end user of the business form to design and transmit the order on a personal computer.
Abstract: The creation and ordering of custom business forms is simplified for short-run orders, allowing the end user of the business form to design and transmit the order on a personal computer. Using appropriate design software, the business form is first designed, and then electronically transferred into a second software program which allows for the selection of a number of business form parameters, and also includes order parameters including quantity and delivery options. The business form parameters from the design software are automatically transferred to the ordering software, and any remaining parameters that must be selected are then selected in a sequence, the order quantity and delivery information is inputted, and the price is calculated, and then the custom business form and order are transmitted in machine-readable form to a second computer remote from the first computer. At the second computer a confirmation of the order is produced, and the order is evaluated to determine which facility is best suited to print it. The order is then electronically transferred to the printing location, and after printing it is shipped. The particular manner of highlighting and/or pictorially illustrating the selectable options, and a number of the particular parameters to be selected--such as the edge along which multipart forms are to be attached--are controlled for optimum utility.

Journal ArticleDOI
TL;DR: Object-oriented program development leads to modular programs and a substantial reuse of code for the two problems: sorting activities in a schedule and ordering nodes and elements in a finite element mesh.
Abstract: The representation of engineering systems in a manner suitable for computer processing is an important aspect of software development for computer aided engineering. The process of abstraction is a well-known technique for developing data representations. Objects are a mechanism for representing data using abstraction, and object-oriented languages are languages for writing programs to manipulate objects. The paper shows through examples the advantages of object-oriented programming for developing engineering software. Mathematical graphs are used as an abstraction for two problems: (1) sorting activities in a schedule and (2) ordering nodes and elements in a finite element mesh. Classes of objects are developed for generic graphs, activity precedence graphs, and graphs of element meshes. Object-oriented program development leads to modular programs and a substantial reuse of code for the two problems.
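
The reuse reported above (one generic graph abstraction serving both activity scheduling and mesh node/element ordering) can be sketched as follows; the class and method names are this sketch's assumptions, and a topological sort stands in for the ordering computations:

# Hedged sketch: a generic directed-graph class with a topological sort, reused
# for an activity precedence graph and, unchanged, for ordering mesh entities.
from collections import defaultdict, deque

class Graph:
    def __init__(self):
        self.succ = defaultdict(list)
        self.indeg = defaultdict(int)

    def add_edge(self, a, b):          # a must precede b
        self.succ[a].append(b)
        self.indeg[b] += 1
        self.indeg.setdefault(a, 0)

    def topological_order(self):
        indeg = dict(self.indeg)
        ready = deque(n for n, d in indeg.items() if d == 0)
        order = []
        while ready:
            n = ready.popleft()
            order.append(n)
            for m in self.succ[n]:
                indeg[m] -= 1
                if indeg[m] == 0:
                    ready.append(m)
        return order

schedule = Graph()                     # activity precedence graph
for a, b in [("excavate", "foundation"), ("foundation", "frame"), ("frame", "roof")]:
    schedule.add_edge(a, b)
print(schedule.topological_order())    # a feasible activity sequence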