Showing papers on "Application software" published in 1990


Journal ArticleDOI
TL;DR: The authors address the problem of validating the dependability of fault-tolerant computing systems, in particular, the validation of the fault-tolerance mechanisms through the use of fault injection at the physical level on a hardware/software prototype of the system considered.
Abstract: The authors address the problem of validating the dependability of fault-tolerant computing systems, in particular, the validation of the fault-tolerance mechanisms. The proposed approach is based on the use of fault injection at the physical level on a hardware/software prototype of the system considered. The place of this approach in a validation-directed design process and with respect to related work on fault injection is clearly identified. The major requirements and problems related to the development and application of a validation methodology based on fault injection are presented and discussed. Emphasis is put on the definition, analysis, and use of the experimental dependability measures that can be obtained. The proposed methodology has been implemented through the realization of a general pin-level fault injection tool (MESSALINE), and its usefulness is demonstrated by the application of MESSALINE to the experimental validation of two systems: a subsystem of a centralized computerized interlocking system for railway control applications and a distributed system corresponding to the current implementation of the dependable communication system of the ESPRIT Delta-4 Project. >
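
A minimal sketch of how such an injection campaign is reduced to experimental dependability measures. This is not the MESSALINE tool itself; the detection probability and latency distribution below are assumed values for illustration only.

import random
import statistics

def run_injection_campaign(num_faults=1000, detection_prob=0.95, seed=0):
    # Each injected fault is either detected by the fault-tolerance mechanisms
    # (after some latency) or escapes detection.  The campaign yields the two
    # classic experimental measures: coverage and mean detection latency.
    rng = random.Random(seed)
    detected, latencies_ms = 0, []
    for _ in range(num_faults):
        if rng.random() < detection_prob:              # fault caught by the mechanisms
            detected += 1
            latencies_ms.append(rng.expovariate(1 / 5.0))   # assumed 5 ms mean latency
    coverage = detected / num_faults
    mean_latency = statistics.mean(latencies_ms) if latencies_ms else float("nan")
    return coverage, mean_latency

coverage, latency = run_injection_campaign()
print(f"estimated coverage: {coverage:.3f}, mean detection latency: {latency:.2f} ms")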

584 citations


Patent
16 Oct 1990
TL;DR: In this article, multi-windowing agent computer terminals that answer telemarketing calls can receive call-related information from a communication system for automatic display in a "telephony" window, automatically send such information to host computer application software, and retrieve caller-desired information based on the calling or called number for display in a "host application" window, without requiring the caller to provide verbal information.
Abstract: The present invention enables multi-windowing agent computer terminals, that answer a variety of, e.g., telemarketing calls, to (i) receive call-related information from a communication system at an agent terminal for automatic display in a "telephony" window, and (ii) automatically send such information to a host computer system application software and retrieve caller desired information based on the calling or called number for display in a "host application" window without requiring a caller to provide verbal information. The present invention also enables an agent terminal to automatically transfer caller-related information from one window to another window via programmable function key commands which can be programmed by a user/system administrator regarding what information is to be copied and where and when to copy it. This allows the user/system administrator to program the system to (1) retrieve information using the received call-related data without routing caller data to the host computer application software prior to its delivery to the agent terminal, and (2) automatically transmit collected data to the host database at a call's conclusion.

214 citations


Proceedings ArticleDOI
07 May 1990
TL;DR: The development of a virtual-machine monitor (VMM) security kernel for the VAX architecture is described, focusing on how the system's hardware, microcode, and software are aimed at meeting A1-level security requirements while maintaining the standard interfaces and applications of the VMS and ULTRIX-32 operating systems.
Abstract: The development of a virtual-machine monitor (VMM) security kernel for the VAX architecture is described. Particular focus is on how the system's hardware, microcode, and software are aimed at meeting A1-level security requirements while maintaining the standard interfaces and applications of the VMS and ULTRIX-32 operating systems. The VAX security kernel supports multiple concurrent virtual machines on a single VAX system, providing isolation and controlled sharing of sensitive data. Rigorous engineering standards were applied during development to comply with the assurance requirements for verification and configuration management. The VAX security kernel was developed with a heavy emphasis on performance and on system management tools. The kernel performs sufficiently well that all of its development can now be carried out in virtual machines running on the kernel itself, rather than in a conventional time-sharing system.

146 citations


Journal ArticleDOI
S.R. White, L. Comerford
TL;DR: A novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication, and guidelines to its solution are offered.
Abstract: ABYSS (a basic Yorktown security system) is an architecture for protecting the execution of application software. It supports a uniform security service across the range of computing systems. The use of ABYSS in solving the software protection problem, especially in the lower end of the market, is discussed. Both current and planned software distribution channels are supportable by the architecture, and the system is nearly transparent to legitimate users. A novel use-once authorization mechanism, called a token, is introduced as a solution to the problem of providing authorizations without direct communication. Software vendors may use the system to obtain technical enforcement of virtually any terms and conditions of the sale of their software, including such things as rental software. Software may be transferred between systems, and backed up to guard against loss in case of failure. The problem of protecting software on these systems is discussed, and guidelines to its solution are offered. >
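
A hedged sketch of the use-once token idea, not the ABYSS protocol itself: in the real architecture tokens are protected cryptographically and in tamper-resistant hardware, whereas here the check is only simulated in software and the class name is invented.

import secrets

class SecureCoprocessor:
    # A stand-in for the protected processor: it remembers which tokens it has
    # already consumed, so a token can authorize exactly one installation and
    # no online check with the vendor is needed at install time.
    def __init__(self):
        self._consumed = set()

    def redeem(self, token):
        if token in self._consumed:
            return False
        self._consumed.add(token)
        return True

vendor_token = secrets.token_hex(16)     # shipped with the software package
cpu = SecureCoprocessor()
print(cpu.redeem(vendor_token))          # True  - first installation succeeds
print(cpu.redeem(vendor_token))          # False - the token is use-once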

108 citations


Proceedings ArticleDOI
05 Feb 1990
TL;DR: An extended transaction model that meets the special requirements of software engineering projects is presented, possible implementation techniques are described, and a number of issues regarding the incorporation of such a model into multiuser software development environments are discussed.
Abstract: It is generally recognized that the classical transaction model, providing atomicity and serializability, is too strong for certain application areas since it unnecessarily restricts concurrency. The author is concerned with supporting cooperative work in multiuser design environments, particularly teams of programmers cooperating to develop and maintain software systems. An extended transaction model that meets the special requirements of software engineering projects is presented, possible implementation techniques are described, and a number of issues regarding the incorporation of such a model into multiuser software development environments are discussed. >

71 citations


Proceedings ArticleDOI
John R. Koza
06 Nov 1990
TL;DR: The authors describe the genetic programming paradigm, which genetically breeds populations of computer programs to solve problems, where the individuals in the population are hierarchical computer programs of various sizes and shapes.
Abstract: The authors describe the genetic programming paradigm, which genetically breeds populations of computer programs to solve problems. In genetic programming, the individuals in the population are hierarchical computer programs of various sizes and shapes. Applications to three problems in artificial intelligence are presented. The first problem involves genetically breeding a population of computer programs to allow an 'artificial ant' to traverse an irregular trail. The second problem involves genetically breeding a minimax control strategy in a differential game with an independently acting pursuer and evader. The third problem involves genetically breeding a minimax strategy for a player of a simple discrete two-person game represented by a game tree in extensive form.
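
A compact sketch of the paradigm for readers who want something concrete: expression trees bred by fitness-based selection and subtree crossover toward a target function. The function set, fitness cases, and parameters here are illustrative assumptions, not Koza's actual experimental setup.

import random, operator

FUNCS = {'+': operator.add, '-': operator.sub, '*': operator.mul}
TERMS = ['x', 1.0]
rng = random.Random(1)

def random_tree(depth=3):
    # Grow a random expression tree: either a terminal or an operator node.
    if depth == 0 or rng.random() < 0.3:
        return rng.choice(TERMS)
    return (rng.choice(list(FUNCS)), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    return FUNCS[op](evaluate(left, x), evaluate(right, x))

def fitness(tree):
    # Sum of absolute errors against the target x**2 + x on a few fitness cases.
    return sum(abs(evaluate(tree, x) - (x * x + x)) for x in range(-5, 6))

def random_subtree(t):
    while isinstance(t, tuple) and rng.random() < 0.7:
        t = rng.choice(t[1:])
    return t

def crossover(a, b):
    # Replace a randomly chosen subtree of a with a randomly chosen subtree of b.
    if not isinstance(a, tuple) or rng.random() < 0.3:
        return random_subtree(b)
    op, left, right = a
    if rng.random() < 0.5:
        return (op, crossover(left, b), right)
    return (op, left, crossover(right, b))

population = [random_tree() for _ in range(200)]
for generation in range(30):
    population.sort(key=fitness)
    best = population[0]
    if fitness(best) == 0:
        break
    parents = population[:50]                  # truncation selection
    population = parents + [crossover(rng.choice(parents), rng.choice(parents))
                            for _ in range(150)]
print('best program:', best, 'error:', fitness(best))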

70 citations


Patent
01 Oct 1990
TL;DR: In this paper, a virtual software machine is described that provides a virtual execution environment in a target computer for an application software program having one or more execution dependencies that are incompatible with the software execution environment on the target computer.
Abstract: The present invention relates to a virtual software machine for providing a virtual execution environment in a target computer for an application software program having one or more execution dependencies that are incompatible with a software execution environment on the target computer. The machine comprises a plurality of independent processes, and a virtual control mechanism having a virtual management interface (VMI) for generating requests for execution to the plurality of independent processes and receiving results of such processing. The requests for execution and the results are communicated via a message exchange mechanism. The machine also includes a pre-processor for generating a pre-processed application program in which the execution dependencies are masked. A compiler/linker receives the pre-processed application program and the virtual control mechanism and generates executable code for the operating system of the target computer. A run-time module of the machine is run by the operating system of the target computer for executing the application software program in the target computer despite the execution dependency that is incompatible with the target computer system software execution environment.

69 citations


Proceedings ArticleDOI
26 Jun 1990
TL;DR: A DSD project that consists of the implementation of a distributed self-diagnosis algorithm and its application to distributed computer networks is presented; the EVENT-SELF algorithm combines the rigor associated with theoretical results with the resource limitations associated with actual systems.
Abstract: A DSD (distributed self-diagnosing) project that consists of the implementation of a distributed self-diagnosis algorithm and its application to distributed computer networks is presented. The EVENT-SELF algorithm presented combines the rigor associated with theoretical results with the resource limitations associated with actual systems. Resource limitations identified in real systems include available message capacity for the communication network and limited processor execution speed. The EVENT-SELF algorithm differs from previously published algorithms by adopting an event-driven approach to self-diagnosability. Algorithm messages are reduced to those messages required to indicate changes in system state. Practical issues regarding the CMU-ECE DSD implementation are considered. These issues include the reconfiguration of the testing subnetwork for environments in which processors can be added and removed. One of the goals of this work is to utilize the developed CMU-ECE DSD system as an experimental test-bed environment for distributed applications.
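
A small sketch of the event-driven idea, not the published EVENT-SELF algorithm: a node generates diagnosis messages only when a neighbor's test result changes, rather than every testing round. The test interface and message format are invented for illustration.

class Tester:
    def __init__(self, neighbors):
        self.last_result = {n: None for n in neighbors}   # None = not yet tested

    def run_tests(self, test_outcomes):
        # test_outcomes maps neighbor id -> 'ok' or 'faulty' for this round.
        # Only *changes* in state generate diagnosis messages.
        messages = []
        for node, result in test_outcomes.items():
            if result != self.last_result[node]:
                messages.append((node, result))
                self.last_result[node] = result
        return messages

t = Tester(neighbors=[1, 2, 3])
print(t.run_tests({1: 'ok', 2: 'ok', 3: 'ok'}))        # first round: three messages
print(t.run_tests({1: 'ok', 2: 'faulty', 3: 'ok'}))    # later round: only node 2 changed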

67 citations


Journal ArticleDOI
TL;DR: A taxonomy of fault tolerance in commercial computers is set forth, organized around three orthogonal axes: the sources of errors the computer tolerates, thecomputer's approach to tolerating errors, and the computer's structure.
Abstract: A taxonomy of fault tolerance in commercial computers is set forth. It is organized around three orthogonal axes: the sources of errors the computer tolerates, the computer's approach to tolerating errors, and the computer's structure. Each of these is briefly discussed. An example of each class in the taxonomy is presented, as well as its approach to answering the following questions: (1) Is the system to be highly reliable or highly available? (2) Do all outputs have to be correct, or only data committed to long-term storage? (3) How familiar must the user be with the architecture and software redundancy? (4) Is the system dedicated so that attributes of the application can be used to simplify fault tolerance techniques? (5) Is the system constrained to use existing components? (6) Even if the design is new, what cost and/or performance penalty does it impose on the user who does not require fault tolerance? (7) Is the system stand-alone, or can other processors be called upon to assist in times of failure? The computers covered are the VAX 8600 and IBM 3090 uniprocessors, the Tandem, Stratus, and VAXft 3000 multicomputers, and the Teradata and Sequoia multiprocessors. >

61 citations


Journal ArticleDOI
01 May 1990
TL;DR: This paper presents the performance evaluation, workload characterization and trace driven simulation of a hypercube multi-computer running realistic workloads, and investigates both the computation and communication behavior of these parallel programs.
Abstract: This paper presents the performance evaluation, workload characterization and trace driven simulation of a hypercube multi-computer running realistic workloads. Six representative parallel applications were selected as benchmarks. Software monitoring techniques were then used to collect execution traces. Based on the measurement results, we investigated both the computation and communication behavior of these parallel programs, including CPU utilization, computation task granularity, message interarrival distribution, the distribution of waiting times in receiving messages, and message length and destination distributions. The localities in communication were also studied. A trace driven simulation environment was developed to study the behavior of the communication hardware under real workload. Simulation results on DMA and link utilizations are reported.
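
A brief sketch of the kind of post-processing such a study performs on monitored traces; the trace format and values are invented for illustration, not taken from the paper's benchmarks.

from collections import Counter
import statistics

trace = [  # (timestamp_us, message_length_bytes, destination_node) -- invented records
    (10, 128, 3), (55, 64, 1), (90, 128, 3), (300, 1024, 2), (420, 128, 3),
]

timestamps = [t for t, _, _ in trace]
interarrivals = [b - a for a, b in zip(timestamps, timestamps[1:])]
length_histogram = Counter(length for _, length, _ in trace)
destination_histogram = Counter(dest for _, _, dest in trace)

print("mean message interarrival (us):", statistics.mean(interarrivals))
print("message length distribution:", dict(length_histogram))
print("destination distribution:", dict(destination_histogram))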

53 citations


Journal ArticleDOI
C.E. Houstis
TL;DR: An allocation model for mapping a real-time application to certain k-processor multiprocessor systems is developed and analyzed and experience suggests that it can be used effectively for the performance evaluation of application-distributed system pairs.
Abstract: An allocation model for mapping a real-time application to certain k-processor multiprocessor systems is developed and analyzed. Its objective is minimizing the total processing time of the application by exploiting the parallelism of the application-architecture pair. The model is formulated in terms of the performance characteristics of the system and the resource requirements of the computation involved. Experience with the model suggests that it can be used effectively for the performance evaluation of application-distributed system pairs. >
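
To make the allocation objective concrete, here is a hedged brute-force miniature: assign a handful of task costs to k processors so that the completion time of the most heavily loaded processor is minimized. The actual model also folds in communication costs and architecture characteristics, which are omitted here; all numbers are assumed.

from itertools import product

def best_allocation(task_costs, k):
    # Exhaustively try every task-to-processor mapping and keep the one whose
    # most heavily loaded processor finishes earliest (the completion time).
    best_time, best_map = float("inf"), None
    for assignment in product(range(k), repeat=len(task_costs)):
        loads = [0.0] * k
        for task, proc in enumerate(assignment):
            loads[proc] += task_costs[task]
        if max(loads) < best_time:
            best_time, best_map = max(loads), assignment
    return best_time, best_map

costs = [4.0, 2.0, 3.0, 1.0, 2.5]          # per-task processing times (assumed)
time, mapping = best_allocation(costs, k=2)
print("completion time:", time, "task -> processor:", mapping)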

Proceedings ArticleDOI
P. Gopinath, Rajiv Gupta
05 Dec 1990
TL;DR: A description is presented of compiler-based techniques that classify the application code on the basis of predictability and monotonicity, introduce measurement code fragments at selected points in the application code, and use the results of run-time measurements to dynamically adapt worst-case schedules.
Abstract: Worst-case scheduling techniques for real-time applications often result in severe underutilization of the processor resources since most tasks finish in much less time than their anticipated worst-case execution times. A description is presented of compiler-based techniques that classify the application code on the basis of predictability and monotonicity, introduce measurement code fragments at selected points in the application code, and use the results of run-time measurements to dynamically adapt worst-case schedules. This results in better utilization of the system and early failure detection and recovery.
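
An illustrative sketch of the run-time side of this idea: compare each task's measured execution time against its worst-case bound and report the reclaimed slack that a dispatcher could use to adapt the schedule. The task set, worst-case times, and measurement method are assumptions, not the paper's compiler-inserted instrumentation.

import random

tasks = [  # (task name, worst-case execution time in ms) -- assumed values
    ("sensor_read", 10.0), ("filter", 25.0), ("control_law", 40.0),
]

rng = random.Random(42)
clock_ms = 0.0
for name, wcet in tasks:
    actual = wcet * rng.uniform(0.3, 0.9)    # most tasks finish well under their WCET
    slack = wcet - actual
    clock_ms += actual
    # A real system would hand the slack to the dispatcher, e.g. to start the
    # next task early or to run optional work; here it is only reported.
    print(f"{name}: wcet={wcet:.1f} ms, measured={actual:.1f} ms, reclaimed slack={slack:.1f} ms")

print(f"frame finished at {clock_ms:.1f} ms instead of the "
      f"{sum(w for _, w in tasks):.1f} ms worst case")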

Journal ArticleDOI
TL;DR: A system architecture called the recovery metaprogram (RMP) is proposed, which separates the application from the recovery software, giving programmers a single environment that lets them use the most appropriate fault-tolerance scheme.
Abstract: A system architecture called the recovery metaprogram (RMP) is proposed. It separates the application from the recovery software, giving programmers a single environment that lets them use the most appropriate fault-tolerance scheme. To simplify the presentation of the RMP approach, it is assumed that the fault model is limited to faults originating in the application software, and that the hardware and kernel layers can mask their own faults from the RMP. Also, relationships between backward and forward error recovery are not considered. Some RMP examples are given, and a particular RMP implementation is described. >

Journal ArticleDOI
TL;DR: A power converter design using MOSFET and bipolar-junction-transistor (BJT) switches is shown to illustrate the power of optimization routines in power electronics.
Abstract: A computer-aided-design approach for power converter components is described. A designer with a minimum of programming and optimization experience can interface with nonlinear optimization routines to rapidly perform design trade-offs that would be impossible by hand. A power converter design using MOSFET and bipolar-junction-transistor (BJT) switches is shown to illustrate the power of optimization routines in power electronics. Realistic design values and available vendor components can be incorporated in a design without using an extensive database program structure. A practical example is given with experimental data to verify the accuracy and usefulness of optimization software. >
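
A minimal illustration of the design-by-optimization workflow, written with SciPy (which of course postdates the paper); the two design variables and the toy loss model are invented placeholders, not a real converter model or the paper's routines.

import numpy as np
from scipy.optimize import minimize

def total_loss(x):
    # Toy loss model: the optimizer trades switching loss against ripple and
    # copper loss.  Real design models would come from device data sheets.
    f_sw_khz, l_uh = x
    switching_loss = 0.02 * f_sw_khz           # grows with switching frequency
    ripple_loss = 50.0 / (f_sw_khz * l_uh)     # shrinks with frequency and inductance
    copper_loss = 0.001 * l_uh                 # larger inductor -> more winding loss
    return switching_loss + ripple_loss + copper_loss

result = minimize(total_loss, x0=np.array([50.0, 100.0]),
                  bounds=[(20.0, 500.0), (10.0, 1000.0)])
print("optimal [f_sw (kHz), L (uH)]:", result.x, "loss per unit:", result.fun)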

Proceedings ArticleDOI
11 Oct 1990
TL;DR: The distribution and real-time characteristics of Alpha's application context are summarized; some of Alpha's approaches to dealing with distribution are discussed.
Abstract: Alpha is a novel nonproprietary operating system for large, complex, distributed real-time systems. Examples include combat platform and battle management, factory automation, and telecommunications. Such systems run distributed applications and need global (transnode) resource management. They are inherently asynchronous, dynamic, and stochastic, and yet must be highly dependable. Alpha includes support for maintaining application-specific correctness of distributed execution and consistency of distributed data, and for best-effort management of all resources driven directly by actual application time constraints. Certain features of Alpha are briefly described. First, the distribution and real-time characteristics of Alpha's application context are summarized; some of Alpha's approaches to dealing with distribution are discussed. An overview of the project history and status is provided.

Proceedings ArticleDOI
01 Mar 1990
TL;DR: An approach to the problem of systematic development of applications requiring access to multiple and heterogeneous hardware and software systems is presented, based on a common communication and data exchange protocol that uses local access managers to protect the autonomy of member software systems.
Abstract: An approach to the problem of systematic development of applications requiring access to multiple and heterogeneous hardware and software systems is presented. The approach is based on a common communication and data exchange protocol that uses local access managers to protect the autonomy of member software systems. The solution is modular and can be implemented in a heterogeneous hardware and software environment using different operating systems and different network protocols. The design of the system, its major components, and its prototype implementation are described. Particular emphasis is placed on the distributed operation language (DOL), used to specify invocation, synchronization, and data exchange between various software and hardware components of a distributed system. >

Proceedings ArticleDOI
01 Mar 1990
TL;DR: The approach helps to deal with three aspects of software standards that affect systems integration: periodic revision, missing features that result in the use of proprietary system services, and imprecise, natural language specification.
Abstract: A set of standards for an open systems environment is presented, and an approach to the use of these standards in systems integration is defined. In particular, the approach helps to deal with three aspects of software standards that affect systems integration: periodic revision, missing features that result in the use of proprietary system services, and imprecise, natural language specification. The architectural approach is consistent with the toolkit model of systems development that has been popularized by window systems, and takes advantage of the features provided by many window systems for building user-defined components. >

Proceedings ArticleDOI
15 Oct 1990
TL;DR: The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described and a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated.
Abstract: The development of DEPEND, an integrated simulation environment for the design and dependability analysis of fault-tolerant systems, is described. DEPEND models both hardware and software components at a functional level, and allows automatic failure injection to assess system performance and reliability. It relieves the user of the work needed to inject failures, maintain statistics, and output reports. The automatic failure injection scheme is geared toward evaluating a system under high stress (workload) conditions. The failures that are injected can affect both hardware and software components. To illustrate the capability of the simulator, a distributed system which employs a prediction-based, dynamic load-balancing heuristic is evaluated. Experiments were conducted to determine the impact of failures on system performance and to identify the failures to which the system is especially susceptible. >
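
A toy illustration of the simulator's core loop, assuming invented failure and repair parameters (this is not DEPEND itself): run a functional model under load, inject failures automatically, and accumulate the statistics a user would otherwise collect by hand.

import random

rng = random.Random(7)
NODES, STEPS, FAIL_PROB, REPAIR_STEPS = 4, 1000, 0.002, 20
up = [True] * NODES
repair_timer = [0] * NODES
completed, lost = 0, 0

for _ in range(STEPS):
    # automatic failure injection and repair
    for n in range(NODES):
        if up[n] and rng.random() < FAIL_PROB:
            up[n], repair_timer[n] = False, REPAIR_STEPS
        elif not up[n]:
            repair_timer[n] -= 1
            up[n] = repair_timer[n] <= 0
    # a prediction-based balancer would pick the least-loaded live node;
    # here each step simply submits one job to any live node, if one exists
    if any(up):
        completed += 1
    else:
        lost += 1

print(f"jobs completed: {completed}, jobs lost while all nodes were down: {lost}")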

01 Jan 1990
TL;DR: The scope of this article is to discuss the software testing process and test activities in a large system project following the SDLC approach; its principles can be applied to a project of any size using any system development approach.

Abstract: Computer software is a major component of an information system (IS) whose reliability is critical to the performance of an organization. The reliability of an IS has four facets: people, hardware, software, and data. Among these four facets, the reliability of software is a joint responsibility of computer science and information system professionals; the former handle technical software and the latter application software. Regardless of the types of software involved, the reliability of software must be achieved through a continuous effort of planning, analysis, design, programming, testing, installation, and maintenance. To quote an old sage, "Quality is built in, not added on." Studies have shown that most of the system errors detected during the testing phase originate in the early analysis phase. Therefore, software testing should start early in the system development process. A system development process today may come in many different forms. Depending on the structure of the intended system, a system development life cycle (SDLC) process, a prototyping process [2, 7, 13], or a mixture of both can be followed. An SDLC process typically is applied to a system with clear requirements definitions, well-structured processing and reporting, and a long and stable life expectancy. On the other hand, a prototyping process is suitable for a system with ambiguous or incomplete requirements and ad hoc reporting capability, or with a transient life span and ever-changing requirements. It may also be used along with an SDLC process to shorten the development time required by a project adopting the SDLC approach. The scope of this article is to discuss the software testing process and test activities in a large system project following the SDLC approach. The test process and activities discussed herein are complete and rigorous. Their principles can be applied to a project of any size using any system development approach, be it an SDLC, a prototyping, a software acquisition, or a spiral development and enhancement approach.

Proceedings ArticleDOI
09 Oct 1990
TL;DR: ROSE, a modular distributed operating system that provides support for building reliable applications, is designed and implemented and a resistant process (RP) abstraction allows user processes to survive hardware failures with minimal interruption.
Abstract: ROSE, a modular distributed operating system that provides support for building reliable applications, is designed and implemented. Failure detection capabilities are provided by a failure detection server. Configuration objects can be used to capture the relationship among multiple processes that cooperate to replicate certain resources. Replicated address space (RAS) objects, whose content is accessible with a high probability despite hardware failures, can be used to increase data availability. Finally, a resistant process (RP) abstraction allows user processes to survive hardware failures with minimal interruption. Two different implementations of RP are provided: one checkpoints the information about its state in an RAS object periodically; the other uses replicated execution by executing the same code in different nodes at the same time. >
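
A hedged sketch of the checkpointing flavour of resistant processes (the class and method names are invented, not the ROSE interfaces): state is periodically saved to storage assumed to survive node failures, and a crashed run is resumed from the last checkpoint.

import copy

class ReplicatedAddressSpace:
    """Stands in for an RAS object: storage assumed to survive node failures."""
    def __init__(self):
        self._snapshot = None
    def save(self, state):
        self._snapshot = copy.deepcopy(state)
    def load(self):
        return copy.deepcopy(self._snapshot)

def run(ras, state, crash_at=None):
    state = copy.deepcopy(state)
    for step in range(state["progress"] + 1, 11):
        if step == crash_at:
            return None                        # node fails; volatile state is lost
        state["progress"] = step
        if step % 3 == 0:                      # periodic checkpoint into the RAS
            ras.save(state)
    return state

ras = ReplicatedAddressSpace()
if run(ras, {"progress": 0}, crash_at=8) is None:
    recovered = ras.load()                     # restart on another node from the RAS
    print("restarting from checkpointed progress:", recovered["progress"])   # 6
    print("final state after recovery:", run(ras, recovered))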

Journal ArticleDOI
TL;DR: Criteria that have led to simple, elegant interfaces are presented in detail and have been developed and refined through repeated practical application.
Abstract: While the benefits of modular software development are widely acknowledged, there is little agreement as to what constitutes a good module interface. Computational complexity techniques allow evaluation of algorithm time and space costs but offer no guidance in the design of the interface to an implementation. Yet, interface design decisions often have a critical effect on the development and maintenance costs of large software systems. Criteria that have led to simple, elegant interfaces are presented in detail. These criteria have been developed and refined through repeated practical application. The use of the criteria is illustrated with concrete examples. >

Journal ArticleDOI
TL;DR: An environment for creating user interfaces for embedded systems, called the graphical specification system (GSS), is presented, which combines graphical and minimal low-level textual specification with a prototyping capability for rapid user-interface design and evaluation.
Abstract: An environment for creating user interfaces for embedded systems, called the graphical specification system (GSS), is presented. GSS combines graphical and minimal low-level textual specification with a prototyping capability for rapid user-interface design and evaluation. It is part of a larger embedded systems project at Lockheed, called Express. The user interface components, display components, user-machine interaction, interface-application interaction, and executive component are discussed. Two scenarios, developed with GSS tool prototypes, demonstrate how some GSS tools function. One is the construction of a display with two pairs of gauges, one Cartesian and one polar. The other is the design of a display for submarine tracking. >

Journal ArticleDOI
13 Feb 1990
TL;DR: This software, coupled to a good commercial data acquisition system, features high-precision measurement and a high level of friendliness and can be useful for lab tests and educational purposes.
Abstract: When a personal computer (PC) is used as the computing system of an intelligent instrument, software devoted to measurement process control and to measurement process outputting can be specially developed to assist the operator throughout the measurement process in a friendly way. When these conditions are met, the intelligent instrument is called a personal instrument (PI). The main features of a PI are discussed, and the requirements for PI software are given. The performance of an original software package for PI is illustrated, showing how all the requirements are satisfied. This software, coupled to a good commercial data acquisition system, features high-precision measurement and a high level of friendliness and can be useful for lab tests and educational purposes. For lab tests, this software is a good basis for an automatic test station with extended help facilities. It ensures better performance than a digital scope because it allows dedicated measurement routines to be developed and executed. The software is effective for educational applications, since it allows direct application of all capabilities offered by computers in instrumentation, when associated with suitable A/D (analog/digital) conversion hardware.

Proceedings ArticleDOI
11 Oct 1990
TL;DR: The MARUTI operating system is designed to support hard real-time applications on distributed computer systems while providing a fault-tolerant operation and has been implemented as a prototype running on a Unix platform.
Abstract: The MARUTI operating system is designed to support hard real-time applications on distributed computer systems while providing a fault-tolerant operation. Its design is object oriented, and the communication mechanism allows transparent use of the resources of a distributed system. Fault tolerance is provided through a consistent set of mechanisms that support a number of policies. Most important, MARUTI supports guaranteed-service scheduling, by which jobs that are accepted by the system are guaranteed to meet the time constraints of the computation requests with a specified degree of fault tolerance. As a consequence, MARUTI applications can be executed in a predictable fashion. The development of current hard real-time applications requires that the analyst estimate the resource requirements for all parts of the computation and then make sure that the resources are available to meet the time constraints, which tends to be a cumbersome process. As a part of the MARUTI system, a set of tools which support hard real-time applications during various phases of their life cycle has been developed. The present version of MARUTI has been implemented as a prototype running on a Unix platform. Experiences with the development of this prototype are also presented.
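
To illustrate the guarantee-oriented admission idea in miniature: a request is accepted only if, together with the jobs already guaranteed, every deadline can still be met. This is a generic EDF feasibility check on one processor, not MARUTI's actual calendar-based scheduler, and the request values are assumed.

def edf_feasible(jobs):
    """jobs: list of (computation_time, deadline), all released at t = 0."""
    time = 0.0
    for comp, deadline in sorted(jobs, key=lambda j: j[1]):   # earliest deadline first
        time += comp
        if time > deadline:
            return False
    return True

accepted = []
for request in [(2.0, 5.0), (1.0, 4.0), (3.0, 7.0), (2.0, 7.5)]:
    if edf_feasible(accepted + [request]):
        accepted.append(request)     # guarantee given: the job will meet its deadline
        print("accepted", request)
    else:
        print("rejected", request)   # the system refuses rather than risk a missed deadline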

Proceedings ArticleDOI
02 Jan 1990
TL;DR: A user modelling approach for computer-based critics is described and the critiquing model approach to instantiating the cooperative problem-solving paradigm is considered.
Abstract: A user modelling approach for computer-based critics is described. The critiquing model approach to instantiating the cooperative problem-solving paradigm is considered. The theoretical background for cooperative problem-solving and the motivation for developing a user modelling approach in this domain are based on a need to provide systems that operate independently of explicit user direction. How to represent, acquire, and maintain consistency of the user model in a critiquing system for LISP programming called LISP-Critic is the fundamental issue addressed. A theoretical model of user domain knowledge, an analysis of the application domain, LISP, and research on the generation of explanations prescribe the contents of the user model. LISP-Critic has been extended to include a user modelling component which includes a database of information about the user and a modelling agent. The modelling agent encapsulates the access and update methods for the user model.

Proceedings ArticleDOI
16 Apr 1990
TL;DR: A concept is developed in accordance with the objectives of distribution transparency with node autonomy, presentation transparency, device independence, and the handling of resource allocation by the underlying system for multimedia applications in a distributed and heterogeneous environment.
Abstract: The question of how to build a convenient application programming interface for multimedia applications in a distributed and heterogeneous environment is addressed. In accordance with the objectives of distribution transparency with node autonomy, presentation transparency, device independence, and the handling of resource allocation by the underlying system, a concept is developed for this new applications area. The model comprises resources operating as sources and sinks of transient and persistent information. These resources and their interactions appear as capabilities at the application programming interface. Applications do not distinguish between local and remote operations and resources. Authorization is integrated as the protection attribute of the capabilities. The model supports a level of abstraction at which unnecessary details, such as device dependencies, are made transparent to application programs.

Proceedings ArticleDOI
16 Jun 1990
TL;DR: The IKS provides a description foundation for iconic processing and serves as a concrete organizational framework for implementation and application of image-processing tasks.
Abstract: To provide for the realization of machine-independent image-processing systems which support problem-oriented procedure description and development as well as the generation of reusable software, an iconic kernel system (IKS) was developed. The IKS provides a description foundation for iconic processing and serves as a concrete organizational framework for implementation and application of image-processing tasks. This is achieved using a uniform and comprehensive fundamental operation model for iconic operations and an object-oriented approach for system architecture. >

Proceedings ArticleDOI
12 Mar 1990
TL;DR: Durra is a language designed to support the development of distributed applications consisting of multiple, concurrent, large-grained tasks executing in a heterogeneous network.
Abstract: Durra is a language designed to support the development of distributed applications consisting of multiple, concurrent, large-grained tasks executing in a heterogeneous network. An application-level program is written in Durra as a set of task descriptions that prescribes a way to manage the resources of a heterogeneous machine network. The application describes the tasks to be instantiated and executed as concurrent processes, the intermediate queues required to store the messages as they move from producer to consumer processes, and the possible dynamic reconfigurations of the application. The application-level programming paradigm fits a top-down, incremental method of software development very naturally. It is suggested that a language like Durra would be of great value in the development of large, distributed systems. >

Proceedings ArticleDOI
05 Dec 1990
TL;DR: A software design tool for prediction of performance and resource requirements is described and is used to evaluate the performance of a space surveillance algorithm.
Abstract: Consideration is given to the development of strategies for predictable performance in homogeneous multicomputer data-flow architectures operating in real-time. Algorithms are restricted to the class of large-grained, decision-free algorithms. The mapping of such algorithms onto the specified class of data-flow architectures is realized by a new marked graph model called ATAMM (algorithm to architecture mapping model). Algorithm performance and resource needs are determined for predictable periodic execution of algorithms, which is achieved by algorithm modification and input data injection control. Performance is gracefully degraded to adapt to decreasing numbers of resources. The realization of the ATAMM model on a VHSIC four processor testbed is described. A software design tool for prediction of performance and resource requirements is described and is used to evaluate the performance of a space surveillance algorithm. >
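
A small sketch of periodic, data-flow style execution in the spirit of a marked-graph model (this is not the ATAMM model or tool): a node fires when every input edge carries a token, consuming one token per input and producing one per output, while injection control paces the input data. The graph and timing below are invented.

from collections import deque

edges = {name: deque() for name in
         ["in->A", "A->B", "A->C", "B->D", "C->D"]}       # edge -> token queue
nodes = {                                                  # node -> (inputs, outputs)
    "A": (["in->A"], ["A->B", "A->C"]),
    "B": (["A->B"], ["B->D"]),
    "C": (["A->C"], ["C->D"]),
    "D": (["B->D", "C->D"], []),
}

def fire_ready_nodes(step):
    # A node fires when every input edge holds a token; firing consumes one
    # token per input edge and produces one token on each output edge.
    for name, (ins, outs) in nodes.items():
        if all(edges[e] for e in ins):
            for e in ins:
                edges[e].popleft()
            for e in outs:
                edges[e].append(step)
            print(f"t={step}: node {name} fired")

for step in range(9):
    if step % 3 == 0:            # input data injection control: one datum every 3 steps
        edges["in->A"].append(step)
    fire_ready_nodes(step)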

Proceedings ArticleDOI
16 Jun 1990
TL;DR: HYBRID, an experimental hybrid system consisting of specialized Datacube-compatible processors and a transputer network, has been developed in a Sun-3 environment and the VLSI implementation of an edge-preserving smoothing operator for the low-level vision system is described.
Abstract: A hybrid computer architecture for machine vision which combines the useful properties of different types of architectures is introduced. HYBRID, an experimental hybrid system consisting of specialized Datacube-compatible processors and a transputer network, has been developed in a Sun-3 environment. The VLSI implementation of an edge-preserving smoothing operator for the low-level vision system is described, and the performance of transputer-based systems for higher-level vision is evaluated. Methods for analyzing and optimizing the performance of a hybrid architecture are discussed. >