
Showing papers on "Database-centric architecture" published in 1983


Proceedings Article
22 Aug 1983
TL;DR: The key feature of the meta-level architecture is a declarative control language for writing partial specifications of program behavior, which facilitates incremental system development and the integration of disparate architectures like demons, object-oriented programming, and controlled deduction.
Abstract: One of the biggest problems in AI programming is the difficulty of specifying control. Meta-level architecture is a knowledge engineering approach to coping with this difficulty. The key feature of the architecture is a declarative control language that allows one to write partial specifications of program behavior. This flexibility facilitates incremental system development and the integration of disparate architectures like demons, object-oriented programming, and controlled deduction. This paper presents the language, describes an appropriate architecture, and discusses the issues of compilation. It illustrates the architecture with a variety of examples and reports some experience in using the architecture in building expert systems.
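
To make the idea concrete, here is a minimal Python sketch (not code from the paper) of declarative meta-level control: the hypothetical control rules below are data that rank pending tasks, and a generic interpreter consults them instead of hard-coding the agenda order.

```python
# Hypothetical sketch of declarative control: control knowledge is data
# (preference rules over pending tasks), not hard-coded procedure order.

# Each pending task is a (name, properties) pair.
tasks = [
    ("deduce",   {"kind": "deduction",   "cost": 5}),
    ("demon-1",  {"kind": "demon",       "cost": 1}),
    ("ask-user", {"kind": "interaction", "cost": 9}),
]

# Partial specifications of behavior: each rule states only what it cares
# about, returning a numeric preference (higher = run earlier) or None.
control_rules = [
    lambda name, p: 10 if p["kind"] == "demon" else None,   # demons fire first
    lambda name, p: -p["cost"],                              # otherwise prefer cheap tasks
]

def priority(task):
    name, props = task
    # Combine whatever opinions the rules offer; silence counts as zero.
    votes = [rule(name, props) for rule in control_rules]
    return sum(v for v in votes if v is not None)

def run(tasks):
    # The generic interpreter: repeatedly pick the best task under the rules.
    for name, _ in sorted(tasks, key=priority, reverse=True):
        print("executing", name)

run(tasks)
```

Because each rule is partial (it returns None for cases it has no opinion on), new control knowledge can be added incrementally without rewriting the interpreter, which is the flexibility the abstract describes.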

101 citations


01 Nov 1983
TL;DR: Some of the fundamental design issues in parallel architecture for Artificial Intelligence are laid out, limitations of previous parallel architectures are delineated, and a new approach that the authors are pursuing is outlined.
Abstract: Development of highly intelligent computers requires a conceptual foundation that will overcome the limitations of the von Neumann architecture. Architectures for such a foundation should meet the following design goals: Address the fundamental organizational issues of large-scale parallelism and sharing in a fully integrated way. This means attention to organizational principles, as well as hardware and software. Serve as an experimental apparatus for testing large-scale artificial intelligence systems. Explore the feasibility of an architecture based on abstractions, which serve as natural computational primitives for parallel processing. Such abstractions should be logically independent of their software and hardware host implementations. In this paper we lay out some of the fundamental design issues in parallel architecture for Artificial Intelligence, delineate limitations of previous parallel architectures, and outline a new approach that we are pursuing.
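
As a rough illustration only, the Python sketch below assumes actor-like message passing as the kind of abstraction meant here (the abstract does not name concrete primitives): behaviors see only their own state and mailbox, so the same program could be hosted on one processor or many.

```python
# Toy sketch (assumption: actor-like abstractions) of computation built from
# message-passing primitives that are independent of the hardware host:
# each behavior sees only its own state and its incoming messages.

from collections import deque

mailboxes = {}   # actor name -> queue of pending messages
behaviors = {}   # actor name -> function(state, message) -> new state
states = {}      # actor name -> private state

def spawn(name, behavior, state):
    mailboxes[name] = deque()
    behaviors[name] = behavior
    states[name] = state

def send(name, message):
    mailboxes[name].append(message)

def scheduler(steps):
    # A sequential scheduler stands in for many parallel processors here;
    # the sketch's logic does not depend on how actors are hosted.
    for _ in range(steps):
        for name, box in mailboxes.items():
            if box:
                states[name] = behaviors[name](states[name], box.popleft())

def counter(state, message):
    if message == "inc":
        state += 1
    elif message == "report":
        print("count =", state)
    return state

spawn("c", counter, 0)
for _ in range(3):
    send("c", "inc")
send("c", "report")
scheduler(5)   # prints: count = 3
```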

36 citations



Journal ArticleDOI
TL;DR: The evolution of formal descriptive methods that provide precise, complete definitions of the architecture has culminated in the development of a programming language, Format and Protocol Language (FAPL), tailored for programming a reference model or meta-implementation of an SNA node.
Abstract: Systems Network Architecture (SNA) provides a framework for constructing networks of distributed processors and terminals. This paper discusses some of the fundamental properties of network architectures such as SNA, and the evolution of formal descriptive methods that provide precise, complete definitions of the architecture. This has culminated in the development of a programming language, Format and Protocol Language (FAPL), tailored for programming a reference model or meta-implementation of an SNA node. In this form, the architecture specification is itself machine-executable. This property has led to new software technologies that improve quality and productivity in the processes for developing a network architecture and the product implementations derived from it. Automated protocol validation provides the tool necessary to ensure a correct and internally consistent definition of the architecture. This definition can then be used as a standard for testing products to determine compliance with the architecture. Direct implementation of network software by compiling the meta-implementation program is another emerging technology. This paper reviews the current state of work in these areas.
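
A small Python sketch (hypothetical states and commands, not FAPL or actual SNA formats) of what a machine-executable architecture definition enables: a single state-machine description drives both automated validation and compliance testing of a product implementation.

```python
# Sketch of a machine-executable protocol definition (hypothetical, not FAPL):
# one state-machine description serves both for automated validation and as
# the oracle when testing a product implementation for compliance.

SPEC = {
    # (current state, command) -> next state; absent pairs are not allowed
    ("closed", "BIND"):   "bound",
    ("bound",  "DATA"):   "bound",
    ("bound",  "UNBIND"): "closed",
}

def reachable_from(start):
    seen, frontier = {start}, [start]
    while frontier:
        s = frontier.pop()
        for (state, _), nxt in SPEC.items():
            if state == s and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

def validate():
    # Automated validation: from every reachable state the session can
    # still return to "closed" (no deadlocked protocol state).
    for state in reachable_from("closed"):
        assert "closed" in reachable_from(state), f"stuck in {state}"

def compliant(product_step):
    # Compliance testing: replay every defined transition against a product
    # implementation and require identical behavior.
    return all(product_step(s, c) == nxt for (s, c), nxt in SPEC.items())

validate()
print(compliant(lambda s, c: SPEC.get((s, c))))   # a faithful product: True
```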

13 citations


Proceedings ArticleDOI
13 Jun 1983
TL;DR: This paper describes the formalization of S*A and its application to formal proofs of correctness of architecture designs; the language is intended for specifying the outer and inner architectures of general-purpose von Neumann style computers.
Abstract: In a previous paper [8], we had presented the notion of a family of languages for the multilevel design and description of computer architectures. Details of a particular language family, currently under development, were also described. One of the constituent members of this family is S*A.
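
The toy Python check below (not S*A) illustrates the proof obligation involved: an inner, implementation-level description must be shown to realize the outer, programmer-visible one, here by exhaustive comparison over a deliberately tiny state space.

```python
# Toy illustration (not S*A) of verifying an inner architecture against an
# outer one: the register-transfer implementation of an ADD instruction must
# produce exactly the state the programmer-visible specification promises.

WIDTH = 4                       # a 4-bit machine keeps the state space tiny
MASK = (1 << WIDTH) - 1

def outer_add(a, b):
    # Outer architecture: what the instruction means to the programmer.
    return (a + b) & MASK

def inner_add(a, b):
    # Inner architecture: a ripple-carry datapath, computed bit by bit.
    result, carry = 0, 0
    for i in range(WIDTH):
        x, y = (a >> i) & 1, (b >> i) & 1
        s = x ^ y ^ carry
        carry = (x & y) | (carry & (x ^ y))
        result |= s << i
    return result

# Exhaustive check over the whole (small) state space.
assert all(inner_add(a, b) == outer_add(a, b)
           for a in range(1 << WIDTH) for b in range(1 << WIDTH))
print("inner architecture implements the outer specification for ADD")
```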

9 citations


Journal ArticleDOI
TL;DR: This note suggests the concept of a new kind of interrupt, called GAP, to be used in filling gaps in the architecture of a computing system.
Abstract: This note suggests the concept of a new kind of interrupt, called GAP, to be used in filling gaps in the architecture of a computing system.
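
The note gives no mechanism, so the following Python sketch is only one plausible reading: an opcode the hardware does not implement raises a GAP-style interrupt, and a software handler fills the architectural gap by emulating the instruction.

```python
# One plausible reading (an assumption, not the note's design) of a GAP
# interrupt: an opcode the hardware does not implement traps to a software
# handler that fills the architectural gap by emulating the instruction.

class GapInterrupt(Exception):
    def __init__(self, opcode, operands):
        self.opcode, self.operands = opcode, operands

HARDWARE = {
    "ADD": lambda a, b: a + b,
    "SUB": lambda a, b: a - b,
}

GAP_HANDLERS = {
    # Software completes what the hardware lacks.
    "MUL": lambda a, b: sum([a] * b) if b >= 0 else -sum([a] * -b),
}

def execute(opcode, *operands):
    if opcode in HARDWARE:
        return HARDWARE[opcode](*operands)
    raise GapInterrupt(opcode, operands)        # the GAP interrupt fires

def run(program):
    for opcode, *operands in program:
        try:
            print(opcode, operands, "->", execute(opcode, *operands))
        except GapInterrupt as gap:
            handler = GAP_HANDLERS[gap.opcode]
            print(opcode, operands, "-> (via GAP handler)", handler(*gap.operands))

run([("ADD", 2, 3), ("MUL", 4, 5), ("SUB", 9, 1)])
```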

2 citations


Journal ArticleDOI
TL;DR: This paper describes two architectures that are especially well suited for large-scale integration because of their concurrent structure and their use of primarily local data flows.
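
The two architectures are not named in this summary; the Python sketch below assumes a systolic-style one-dimensional array, a common pattern of that era, simply to show what "primarily local data flows" means: each cell holds one weight and exchanges data only with its neighbour.

```python
# Assumption: a systolic-style 1-D array (the paper's two architectures are
# not named here). Each cell holds one fixed weight, receives a sample from
# its left neighbour, and passes a partial sum along the same local path.

WEIGHTS = [1, 2, 3]                 # one weight per cell, loaded once

def systolic_fir(samples):
    held = [0] * len(WEIGHTS)       # sample currently held by each cell
    outputs = []
    for x in samples + [0] * (len(WEIGHTS) - 1):   # extra zeros flush the pipe
        held = [x] + held[:-1]      # samples shift right, neighbour to neighbour
        acc = 0
        for w, s in zip(WEIGHTS, held):
            acc = acc + w * s       # partial sum handed from cell to cell
        outputs.append(acc)
    return outputs

print(systolic_fir([1, 0, 0, 1]))   # [1, 2, 3, 1, 2, 3]
```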

2 citations



27 Jan 1983
TL;DR: The author looks at the architecture of the Synapse expansion general-purpose computer, which is tolerant of component failures, can be expanded economically in small increments, and is not tied to any one microprocessor instruction set.
Abstract: The availability of fast, low-cost 16- and 32-bit microprocessors makes it possible at last to build a truly cost-effective generation of fault-tolerant computers. One such system employs a multiprocessor architecture optimized for transaction-oriented applications. Called the Synapse expansion architecture, it is tolerant of component failures, can be expanded economically in small increments, and is not tied to any one microprocessor instruction set. Yet thanks to the specially developed operating software, neither operators nor programmers are aware of the architecture's uniqueness. The author looks at the architecture of the Synapse expansion general-purpose computer.
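
The article's software is not detailed here, so the Python sketch below assumes one simple mechanism consistent with the abstract: if the processor running a transaction fails, another processor re-runs it, and the caller never sees which board did the work.

```python
# Schematic sketch (assumed mechanism, not Synapse's actual software) of
# transparent fault tolerance for transactions: a failed processor is simply
# skipped and the transaction is retried on the next available board.

class ProcessorFailure(Exception):
    pass

def make_processor(name, healthy):
    def run(transaction, data):
        if not healthy:
            raise ProcessorFailure(name)         # simulated component failure
        return transaction(data)
    return run

# Adding a board = appending a processor; capacity grows in small increments.
PROCESSORS = [
    make_processor("cpu0", healthy=False),       # a failed board
    make_processor("cpu1", healthy=True),
    make_processor("cpu2", healthy=True),
]

def submit(transaction, data):
    # The operating software hides the failure: try boards until one succeeds,
    # so neither operators nor programmers see which processor did the work.
    for processor in PROCESSORS:
        try:
            return processor(transaction, data)
        except ProcessorFailure:
            continue                              # fail over to the next board
    raise RuntimeError("all processors failed")

print("committed:", submit(lambda amount: 100 + amount, 25))   # committed: 125
```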

1 citation