
Showing papers on "Database-centric architecture published in 1990"


Proceedings ArticleDOI
01 Sep 1990
TL;DR: As a first test of the complete Rendezvous architecture, a multi-user card game exercising both the run-time and start-up architectures will be implemented by the end of the summer.
Abstract: Rendezvous is an architecture for creating synchronous multi-user applications. It consists of two parts: a run-time architecture for managing the multi-user session and a start-up architecture for managing the network connectivity. The run-time architecture is based on a User Interface Management System called MEL, which is a language extension to Common Lisp providing support for graphics operations, object-oriented programming, and constraints. Constraints are used to manage three dimensions of sharing: sharing of underlying information, sharing of views, and sharing of access. The start-up architecture decouples invoking and joining an application so that not all users need be known when the application is started. At present, the run-time architecture is completed and running test applications. As a first test of the complete Rendezvous architecture, we will implement a multi-user card game by the end of the summer.

338 citations
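The abstract's idea of constraint-managed sharing can be illustrated in miniature. Below is a hypothetical Python sketch (the actual system is MEL, a Common Lisp extension, and its constraint system is far richer): one underlying value is shared, while each user keeps a private view that a simple one-way constraint keeps consistent. All class and function names here are invented for illustration.

```python
class SharedValue:
    """Toy sketch of constraint-maintained sharing: one underlying value
    ('sharing of underlying information') with several per-user views
    ('sharing of views') kept consistent by a one-way constraint."""

    def __init__(self, value):
        self._value = value
        self._views = []

    def attach(self, render):
        """Register a per-user view: a function from value to display text."""
        view = {"render": render, "display": render(self._value)}
        self._views.append(view)
        return view

    def set(self, value):
        """Update the shared value; the constraint refreshes every view."""
        self._value = value
        for v in self._views:
            v["display"] = v["render"](self._value)
```

A usage sketch: two users attach differently formatted views of the same card count, and a single update propagates to both.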


Proceedings ArticleDOI
17 Jun 1990
TL;DR: Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines.
Abstract: The motivation for the X1 architecture described here was to develop inexpensive commercial hardware suitable for solving large, real-world problems. Such an architecture must be systems oriented and flexible enough to execute any neural network algorithm and work cooperatively with existing hardware and software. The early application of neural networks must proceed in conjunction with existing technologies, both hardware and software. Using state-of-the-art technology and innovative architectural techniques, the author's architecture approaches the speed and cost of analog systems while retaining much of the flexibility of large, general-purpose parallel machines. The author has aimed at a particular set of applications and has made cost-performance tradeoffs accordingly. The goal is an architecture that could be considered a general-purpose microprocessor for neurocomputing.

260 citations


Proceedings Article
01 Oct 1990
TL;DR: A multi-network, or modular, connectionist architecture is described that captures the fact that many tasks have structure at a level of granularity intermediate to that assumed by local and global function approximation schemes.
Abstract: We describe a multi-network, or modular, connectionist architecture that captures the fact that many tasks have structure at a level of granularity intermediate to that assumed by local and global function approximation schemes. The main innovation of the architecture is that it combines associative and competitive learning in order to learn task decompositions. A task decomposition is discovered by forcing the networks comprising the architecture to compete to learn the training patterns. As a result of the competition, different networks learn different training patterns and, thus, learn to partition the input space. The performance of the architecture on a "what" and "where" vision task and on a multi-payload robotics task is presented.

147 citations
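The competitive decomposition described in the abstract can be illustrated with a toy winner-take-all variant. This is a deliberately simplified sketch: the experts here are single linear units and only the best-fitting expert learns on each pattern, whereas the paper's networks and credit assignment are considerably richer. All names and hyperparameters are illustrative assumptions.

```python
def train_winner_take_all(patterns, experts, lr=0.05, epochs=2000):
    """Toy sketch of competitive task decomposition: each 'expert' is a
    linear unit y = w*x + b.  For each training pattern, only the expert
    with the smallest error is updated, so the experts come to partition
    the input space by specializing on different patterns."""
    for _ in range(epochs):
        for x, t in patterns:
            errors = [(t - (w * x + b)) ** 2 for w, b in experts]
            i = errors.index(min(errors))        # competition: best expert wins
            w, b = experts[i]
            residual = t - (w * x + b)
            experts[i][0] += lr * residual * x   # only the winner learns
            experts[i][1] += lr * residual
    return experts
```

With patterns drawn from two incompatible linear sub-tasks (t = 2x and t = -2x), no single linear unit can fit all of them, but two competing experts partition the patterns and each fits its own sub-task.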


Proceedings ArticleDOI
07 Mar 1990
TL;DR: An architecture for providing weak-consistency replication for databases in an internetwork is presented, designed to make the databases highly available and to operate reliably under difficult conditions, such as unreliable communication, low-bandwidth communication, network partitions, and host failures.
Abstract: An architecture for providing weak-consistency replication for databases in an internetwork is presented. It is designed to make the databases highly available and to operate reliably under difficult conditions, such as unreliable communication, low-bandwidth communication, network partitions, and host failures. Updates are stored in logs until they have been propagated to all database sites and properly delivered to the databases. A novel approach called mediation is used to provide integrated support for reliable replication and log purging. Other interesting features include requiring minimal support from database management systems, support of multiple weak-consistency methods, and easy tuning of the architecture's basic algorithms to particular environments.

20 citations
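The log-then-propagate-then-purge scheme can be sketched minimally. This is an illustrative rendering under strong simplifying assumptions: updates never conflict, only the originating site tracks acknowledgements, and the paper's mediation mechanism is not modeled. All class and method names are hypothetical.

```python
class ReplicaSite:
    """Toy sketch of log-based weak-consistency replication: updates are
    appended to a log, pushed to peer sites, and purged from the log only
    once every site in the group has acknowledged them."""

    def __init__(self, name, peers):
        self.name = name
        self.peers = set(peers)   # names of all sites in the group, incl. self
        self.db = {}              # the replicated key-value database
        self.log = []             # entries: [uid, key, value, acked_by_set]
        self.seq = 0

    def update(self, key, value):
        """Apply a local update and log it for later propagation."""
        self.seq += 1
        self.db[key] = value
        self.log.append([(self.name, self.seq), key, value, {self.name}])

    def propagate_to(self, other):
        """Push any log entries the peer has not acknowledged yet."""
        for uid, key, value, acked in self.log:
            if other.name not in acked:
                other.receive(uid, key, value)
                acked.add(other.name)

    def receive(self, uid, key, value):
        """Deliver a remote update, ignoring duplicates by unique id."""
        if all(uid != entry[0] for entry in self.log):
            self.db[key] = value
            self.log.append([uid, key, value, {self.name}])

    def purge(self):
        """Log purging: drop entries every site has acknowledged."""
        self.log = [e for e in self.log if e[3] != self.peers]
```

After propagation reaches every peer, the originator can safely purge the entry from its log; a site that has not yet confirmed full delivery keeps its copy.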


Journal ArticleDOI
TL;DR: An overview is given of the SPAN project, which pooled the resources of numerous researchers in several countries to integrate symbolic and numeric computing on parallel systems; the resulting Kernel System architecture provided a central model for which two programming languages and two parallel-system architectures were developed.
Abstract: An overview is given of the SPAN project, which pooled the resources of numerous researchers in several countries to integrate symbolic and numeric computing on parallel systems. The resulting Kernel System architecture provided a central model for which two programming languages and two parallel-system architectures were developed. The Kernel System architecture, the Parle high-level procedural language, Virtual Machine Code, the Sprint processor architecture, and the DICE distributed memory architecture are examined.

10 citations


Proceedings ArticleDOI
06 Jun 1990
TL;DR: The Bellcore OSCA architecture consists of three logical layers: a corporate data layer, a processing layer, and a user layer, which communicate through specified interface mechanisms called contracts.
Abstract: The Bellcore OSCA architecture consists of three logical layers: a corporate data layer, a processing layer, and a user layer. Each layer is made up of one or more well-defined deliverable, functional units called building blocks, which communicate through specified interface mechanisms called contracts. A communications software fabric knits these building blocks together. The OSCA architecture was developed to promote the interoperability of large-scale software products consisting of large numbers of programs, transactions, and databases. Interoperability, defined as the ability for products to communicate regardless of implementation dissimilarities, enables independent software maintenance and development in a heterogeneous environment. The three layers and the communications software fabric defined by the OSCA architecture are described. A building block is defined, and the principles to which an OSCA building block adheres are given. The concept of the contract that specifies the interfaces among building blocks is discussed. Finally, the steps taken to converge on the architecture are described.

7 citations
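The building-block/contract idea can be sketched as follows. This is a loose illustrative rendering, not Bellcore's actual interface mechanism; all class and operation names are invented. The point it demonstrates: a building block exposes only the operations named in its contract, and other blocks must go through that contract rather than reaching into the block's internals.

```python
class Contract:
    """Sketch of a 'contract': the named operations a building block
    explicitly exposes to other blocks.  Anything not registered here
    is simply unreachable from outside the block."""

    def __init__(self, name, operations):
        self.name = name
        self.operations = dict(operations)   # operation name -> callable

    def invoke(self, op, *args):
        if op not in self.operations:
            raise ValueError(f"operation {op!r} not in contract {self.name!r}")
        return self.operations[op](*args)


class DataLayerBlock:
    """A 'corporate data layer' building block offering read access only."""

    def __init__(self):
        self._store = {"customer:1": "Ada"}
        self.contract = Contract("data-read", {"get": self._store.get})


class ProcessingBlock:
    """A processing-layer block that reaches the data layer only via its
    contract, never via the data block's internal store."""

    def __init__(self, data_contract):
        self.data = data_contract

    def greet(self, customer_id):
        return f"Hello, {self.data.invoke('get', customer_id)}"
```

Because the data layer's contract registers only a read operation, an attempt to invoke anything else (say, a delete) fails at the contract boundary, which is what lets the blocks evolve independently.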


01 Dec 1990
TL;DR: This report provides a basic understanding of the services, architectures and technologies that are the foundation of advanced telecommunications networks.
Abstract: Telecommunications networks are shown to exhibit three attributes that distinguish them from each other, namely, the service offered, the functional architecture necessary to provide this service, and the hardware and software that implements this architecture. For each service there are many possible architectures and for each architecture there are many possible implementations. This report provides a basic understanding of the services, architectures and technologies that are the foundation of advanced telecommunications networks.

4 citations


Proceedings ArticleDOI
07 Mar 1990
TL;DR: The characteristics of distributed database architectures are discussed; the advent of powerful modeling facilities, such as object-oriented and knowledge-based data models, has paved the way for a complete reconsideration of several long-standing assumptions and perspectives in the field.
Abstract: The characteristics of distributed database architectures are discussed. It is noted that the advent of powerful modeling facilities, such as object-oriented and knowledge-based data models, has paved the way for a complete reconsideration of several of the long-standing assumptions and perspectives pervading the field of distributed database architectures. It is now possible to propose novel architectures based on high-level data modeling facilities. The advocated architecture, called the semidecentralized or clustered architecture, combines the positive aspects of both logically centralized and federated databases. It provides the substrate that automatically classifies the activities each preexisting database management system may undertake and can describe the entire volume of information that each individual database system in the network can supply.

3 citations


Book
01 Jan 1990
TL;DR: This is the first guide to designing network system architectures, written as an accessible, practical handbook for professionals who research, design, and develop information networks and systems.
Abstract: From the Publisher: Here is the first guide to designing network system architectures, written as an accessible, practical handbook. Network System Architecture is designed expressly for professionals who research, design, and develop information networks and systems.

2 citations


Book
01 Apr 1990
TL;DR: In this paper, the authors discuss document content architecture, data streams architecture, mixed object document content architectures, graphic and text object content architecture and system network architecture for office system architectures.
Abstract: As vendors rush to the market with products that are intended to complement IBM's products, office system architectures are becoming increasingly important. Topics covered in this book include: document content architecture, data streams architecture, mixed object document content architecture, graphic and text object content architecture, document interchange architecture, and system network architecture.

2 citations


Proceedings ArticleDOI
28 May 1990
TL;DR: The term structure-oriented computer architectures is introduced for a class of compound architectures built up from two rival architecture classes and some effects of these architectures on software quality are discussed.
Abstract: Structure-oriented computer architecture is a research direction that tries to join the parallel processing facilities and decentralized control of multiprocessors or data-flow architectures with the efficient memory access and pipelining techniques of data structure architectures. The main concepts and the motivation for introducing this term are shown. Data structure architectures are compared with the successful concept of multiprocessors, and some limits of the latter are shown. The term structure-oriented computer architectures is introduced for a class of compound architectures built up from two rival architecture classes. Some effects of these architectures on software quality are discussed. An example of a proposed structure-oriented computer architecture is presented.

Book ChapterDOI
01 Jul 1990
TL;DR: Large computer-assisted systems generally have shortcomings: their functionality may differ from their specification, performance can be less than desirable, system flexibility is often unacceptably low, fault tolerance is all but absent, etc.
Abstract: Large computer-assisted systems generally have shortcomings of one or more kinds: their functionality may differ from their specification, performance can be less than desirable, system flexibility is often unacceptably low, fault tolerance is all but absent, etc. Particularly worrying is the almost total lack of confidence in a system’s correctness.

Proceedings ArticleDOI
08 May 1990
TL;DR: The Architect's Workbench is a set of software tools whose sole purpose is to compare architectures and evaluate architecture features on a fair basis; it has been used extensively to evaluate features such as instruction encoding, register allocation, and register sets.
Abstract: When comparing processor and memory architectures, when evaluating paper architectures, or when evaluating individual architecture features, there are three fundamental issues that affect the validity of results: first, the influence of differences in architectural features not under investigation; second, the influence of quality differences between compile-time trajectories; and third, the influence of the choice of benchmarks (representing a specific application or application area) driving the comparisons. An attempt to alleviate these problems without sacrificing flexibility and evaluation speed is embodied in the Architect's Workbench (AWB). The AWB is a set of software tools with the sole purpose of performing architecture comparison and evaluation of architecture features on a fair basis. The AWB has been used extensively for the evaluation of architecture features such as instruction encoding, register allocation, and register sets, as well as memory components like instruction and data caches.