
Showing papers in "Lecture Notes in Computer Science in 1990"


Book ChapterDOI
TL;DR: A spatial data model is proposed which is based upon the mathematical theory of simplices and simplicial complexes from combinatorial topology and introduces completeness of incidence and completeness of inclusion as an extension to the closed world assumption.
Abstract: There is a growing demand for engineering applications which need a sophisticated treatment of geometric properties. Implementations of Euclidean geometry, commonly used in current commercial Geographic Information Systems and CAD/CAM, are impeded by the finiteness of computers and their number systems. To overcome these deficiencies a spatial data model is proposed which is based upon the mathematical theory of simplices and simplicial complexes from combinatorial topology and introduces completeness of incidence and completeness of inclusion as an extension to the closed world assumption. It guarantees the preservation of topology under affine transformations. This model leads to straightforward algorithms which are described. The implementation as a general spatial framework on top of an object-oriented database management system is discussed.
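
A minimal sketch of the combinatorial backbone of such a model, assuming nothing beyond the textbook definition of an abstract simplicial complex (the paper's completeness-of-incidence and completeness-of-inclusion conditions are stronger and are not modelled here); all names are illustrative:

from itertools import combinations

def faces(simplex):
    """All nonempty faces of a simplex given as a tuple of vertex ids."""
    verts = tuple(sorted(simplex))
    return {tuple(c) for r in range(1, len(verts) + 1) for c in combinations(verts, r)}

def is_complex(simplices):
    """Check the defining closure property of an abstract simplicial complex:
    every face of every simplex is itself a member of the collection."""
    complex_ = {tuple(sorted(s)) for s in simplices}
    return all(face in complex_ for s in complex_ for face in faces(s))

# A filled triangle together with all its edges and vertices is a valid complex.
triangle = [(1,), (2,), (3,), (1, 2), (1, 3), (2, 3), (1, 2, 3)]
print(is_complex(triangle))        # True
print(is_complex([(1, 2, 3)]))     # False: the edges and vertices are missing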

166 citations


Book ChapterDOI
TL;DR: Backpropagation converges slowly, even for medium-sized network problems, because of the usually large dimension of the weight space and the particular shape of the error surface at each iteration point.
Abstract: Like other gradient descent techniques, backpropagation converges slowly, even for medium-sized network problems. This results from the usually large dimension of the weight space and from the particular shape of the error surface at each iteration point. Oscillation between the sides of deep and narrow valleys, for example, is a well-known case where gradient descent provides poor convergence rates.
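
A minimal numeric sketch of the valley effect described above (a generic illustration, not taken from the paper): plain gradient descent on a quadratic error surface whose curvature differs sharply between two directions oscillates across the steep direction while creeping along the flat one.

import numpy as np

def gradient_descent(w0, curvatures, lr, steps):
    """Plain gradient descent on E(w) = 0.5 * sum(c_i * w_i**2)."""
    w = np.array(w0, dtype=float)
    c = np.array(curvatures, dtype=float)
    trace = [w.copy()]
    for _ in range(steps):
        w -= lr * c * w          # the gradient of E is c * w
        trace.append(w.copy())
    return np.array(trace)

# A deep, narrow valley: curvature 100 across the valley, 1 along it.
trace = gradient_descent(w0=[1.0, 1.0], curvatures=[100.0, 1.0], lr=0.019, steps=10)
print(trace[:, 0])   # alternates in sign: each step overshoots the steep direction
print(trace[:, 1])   # decays only slowly along the flat direction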

163 citations



Book ChapterDOI
TL;DR: The effect of the hint, whether applied to back-propagation learning or to more general types of pattern associators, is to reduce training time and improve generalization performance.
Abstract: Neural networks can be given “hints” by increasing the number of parameters learned to include parameters related to the original relationship. The effect of this hint, whether applied to back-propagation learning or to more general types of pattern associators, is to reduce training time and improve generalization performance. A detailed vector field analysis of a hinted back-propagation network solving the XOR problem shows that the hint is capable of eliminating pathological local minima. A set-theory/functional entropy analysis shows that the hint can be applied to any learning mechanism that has an internal (“hidden”) layer of processing. These analyses and tests, conducted on a variety of problems using different types of networks, demonstrate the potential of the hint as a method of controlling training in order to predictably train systems to effectively model data.
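
A rough sketch of the idea, assuming the hint is simply an extra, easier output learned jointly with the task (here the hypothetical hint is the logical OR of the inputs; the paper's hints are more general): the hint's error signal also shapes the hidden layer.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_xor_with_hint(hidden=3, lr=1.0, epochs=20000, seed=0):
    """Tiny two-layer network trained with plain backpropagation.
    Output 0 is XOR; output 1 is a hypothetical hint output (logical OR),
    learned jointly so that its error signal also trains the hidden layer."""
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    T = np.array([[0, 0], [1, 1], [1, 1], [0, 1]], dtype=float)  # columns: [XOR, OR hint]
    W1 = rng.normal(0.0, 1.0, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0.0, 1.0, (hidden, 2)); b2 = np.zeros(2)
    for _ in range(epochs):
        H = sigmoid(X @ W1 + b1)            # forward pass
        Y = sigmoid(H @ W2 + b2)
        dY = (Y - T) * Y * (1 - Y)          # backprop through the output sigmoid
        dH = (dY @ W2.T) * H * (1 - H)      # backprop into the hidden layer
        W2 -= lr * (H.T @ dY); b2 -= lr * dY.sum(axis=0)
        W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(axis=0)
    return Y

print(np.round(train_xor_with_hint(), 2))   # column 0 should approach [0, 1, 1, 0]; exact values depend on the run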

84 citations


Book ChapterDOI
TL;DR: A performance comparison of multidimensional point access methods under arbitrary data distributions and various types of queries reveals a clear winner, the BUDDY hash tree, which exhibits at least 20% better average performance than its competitors and is robust under ugly data and queries.
Abstract: In the past few years a large number of multidimensional point access methods, also called multiattribute index structures, has been suggested, all of them claiming good performance. Since no performance comparison of these structures under arbitrary (strongly correlated, nonuniform, in short "ugly") data distributions and under various types of queries had been performed, database researchers and designers were hesitant to use any of these new point access methods. As shown in a recent paper, such point access methods are not only important in traditional database applications. In new applications such as CAD/CIM and geographic or environmental information systems, access methods for spatial objects are needed. As recently shown, such access methods are based on point access methods in terms of functionality and performance. Our performance comparison naturally consists of two parts. In part I we compare multidimensional point access methods, whereas in part II spatial access methods for rectangles are compared. In part I we present a survey and classification of existing point access methods. Then we carefully select the following four methods for implementation and performance comparison under seven different data files (distributions) and various types of queries: the 2-level grid file, the BANG file, the hB-tree and a new scheme, called the BUDDY hash tree. We were surprised to find one clear winner, the BUDDY hash tree. It exhibits at least 20% better average performance than its competitors and is robust under ugly data and queries. In part II we compare spatial access methods for rectangles. After presenting a survey and classification of existing spatial access methods, we carefully selected the following four methods for implementation and performance comparison under six different data files (distributions) and various types of queries: the R-tree, the BANG file, PLOP hashing and the BUDDY hash tree. The result presented two winners: the BANG file and the BUDDY hash tree. This comparison is a first step towards a standardized testbed or benchmark. We offer our data and query files to each designer of a new point or spatial access method so that they can run their implementation in our testbed.

77 citations



Book ChapterDOI
TL;DR: The performance of the back-propagation (BP) algorithm is investigated under overtraining for three different tasks; interpolation performance is shown to decrease with overtraining and with the size of the sample space.
Abstract: The performance of the back-propagation (BP) algorithm is investigated under overtraining for three different tasks. In a first case study, a network was trained to map a function composed of two discontinuous intervals. Interpolation performance is shown to decrease with overtraining and with the size of the sample space. In a second case study, a network was trained to map a continuous and continuously differentiable function known to produce the Runge effect (i.e., complete deterioration of polynomial interpolation performance despite an adequate number of degrees of freedom). Simulation results suggested that a minimal network strategy would solve the observed overfitting effect. Constraints added to the BP least-mean-square error term were used to reduce the size of the network on-line during training. For a speech labeling task, this method eliminated the overfitting effects after overtraining. Interpretations of the results are given in terms of the properties of the back-propagation algorithm in relation to the data being learned.
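
The Runge effect itself is easy to reproduce (a generic illustration, not the paper's experiment): fitting polynomials of increasing degree to f(x) = 1/(1 + 25x^2) on equally spaced points in [-1, 1] makes the maximum interpolation error grow with the degree.

import numpy as np

def runge_interpolation_error(degree):
    """Fit a degree-d polynomial through d+1 equally spaced samples of
    Runge's function and report the worst error on a fine grid."""
    f = lambda x: 1.0 / (1.0 + 25.0 * x ** 2)
    x_train = np.linspace(-1.0, 1.0, degree + 1)
    coeffs = np.polyfit(x_train, f(x_train), degree)
    x_test = np.linspace(-1.0, 1.0, 1001)
    return np.max(np.abs(np.polyval(coeffs, x_test) - f(x_test)))

for d in (4, 8, 12, 16):
    print(d, runge_interpolation_error(d))   # the maximum error grows with the degree, worst near the interval ends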

55 citations


Book ChapterDOI
TL;DR: It is shown that in the Markovian case, this model has product form, i.e. the steady-state probability distribution of its potential vector is the product of the marginal probabilities of the potential at each neuron.
Abstract: In a recent paper [1] we have introduced a new neural network model, called the Random Network, in which "negative" or "positive" signals circulate, modelling inhibitory and excitatory signals. They are summed at the input of each neuron and constitute its signal potential. The state of each neuron in this model is its signal potential, while the network state is the vector of signal potentials at each neuron. If its potential is positive, a neuron fires, and sends out signals to the other neurons of the network or to the outside world. As it does so its signal potential is depleted. We have shown that in the Markovian case, this model has product form, i.e. the steady-state probability distribution of its potential vector is the product of the marginal probabilities of the potential at each neuron. The signal flow equations of the network, which describe the rate at which positive or negative signals arrive at each neuron, are non-linear, so that the existence and uniqueness of their solutions are not easily established except for the case of feedforward (or backpropagation) networks [1]. We examine two sub-classes of networks, balanced and damped networks, and obtain stability conditions in each case. A hardware implementation of these networks is also suggested.
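
For readers unfamiliar with the model, the product-form result referred to above has the following shape in the random network (standard notation for this model, which may differ slightly from the paper's):

P(k_1, \ldots, k_n) \;=\; \prod_{i=1}^{n} (1 - q_i)\, q_i^{k_i}, \qquad q_i \;=\; \frac{\lambda_i^{+}}{r_i + \lambda_i^{-}},

where r_i is the firing rate of neuron i and the rates \lambda_i^{+}, \lambda_i^{-} of positive and negative signals arriving at neuron i satisfy the nonlinear signal flow equations

\lambda_i^{+} = \Lambda_i + \sum_j q_j\, r_j\, p_{ji}^{+}, \qquad \lambda_i^{-} = \lambda_i + \sum_j q_j\, r_j\, p_{ji}^{-},

with \Lambda_i, \lambda_i the external arrival rates and p_{ji}^{\pm} the routing probabilities for signals emitted by neuron j.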

52 citations


Journal Article
TL;DR: Two theories, update semantics and dynamic predicate logic, are compared, and a general characterization of the idea of a dynamic semantics for natural language is given which subsumes both.
Abstract: The dynamic view on the semantics of natural language, though stemming already from the seventies, has developed into a widely studied subject in the second half of the eighties. At present, the unification of various dynamic theories constitutes an important issue. In this paper, two theories are compared, viz. update semantics and dynamic predicate logic. In section 1 a general characterization of the idea of a dynamic semantics for natural language is given which subsumes these two theories. Sections 2 and 3 are devoted to short expositions of each of them. In the final section 4 a comparison is made.

42 citations


Book ChapterDOI
TL;DR: The Fieldtree, a data structure providing an access method for spatial databases, is described; it has been designed for GIS and similar applications, where range queries are predominant and spatial nesting and overlapping of objects are common.
Abstract: Efficient access methods, such as indices, are indispensable for quick answers to database queries. In spatial databases the selection of an appropriate access method is particularly critical, since different types of queries pose distinct requirements and no known data structure outperforms all others for all types of queries. Thus, spatial access methods must be designed to excel at a particular kind of inquiry while performing reasonably in the other ones. This article describes the Fieldtree, a data structure that provides one such access method. The Fieldtree has been designed for GIS and similar applications, where range queries are predominant and spatial nesting and overlapping of objects are common. Besides their hierarchical organization of space, Fieldtrees are characterized by three other features: (i) they subdivide space regularly, (ii) spatial objects are never fragmented, and (iii) semantic information can be used to assign the location of a certain object in the tree. Besides describing the Fieldtree, this work presents analytical results on several implementations of its variants, and compares them to published results on the R-tree and the R+-tree.
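
A much-simplified sketch of the never-fragment placement rule, assuming only a regular recursive subdivision of space (the actual Fieldtree uses shifted, overlapping grids of fields and semantic placement, none of which is modelled here): each object is stored at the deepest regular cell that still contains it completely, instead of being split across cells.

def smallest_enclosing_cell(rect, world=(0.0, 0.0, 1.0, 1.0), max_depth=8):
    """Descend a regular quadtree-like subdivision of 'world' and return the
    deepest cell (x0, y0, x1, y1) that still fully contains the rectangle."""
    x0, y0, x1, y1 = world
    rx0, ry0, rx1, ry1 = rect
    for _ in range(max_depth):
        mx, my = (x0 + x1) / 2.0, (y0 + y1) / 2.0
        if rx1 <= mx:   nx0, nx1 = x0, mx
        elif rx0 >= mx: nx0, nx1 = mx, x1
        else:           break                 # straddles the vertical split line: stop here
        if ry1 <= my:   ny0, ny1 = y0, my
        elif ry0 >= my: ny0, ny1 = my, y1
        else:           break                 # straddles the horizontal split line: stop here
        x0, y0, x1, y1 = nx0, ny0, nx1, ny1
    return (x0, y0, x1, y1)

print(smallest_enclosing_cell((0.10, 0.10, 0.20, 0.20)))  # descends to (0, 0, 0.25, 0.25)
print(smallest_enclosing_cell((0.45, 0.45, 0.55, 0.55)))  # stays at the root: it straddles both midlines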

40 citations


Book ChapterDOI
TL;DR: A revised version of the simulated annealing method is proposed which produces better solutions and can reduce the computation time; it is also used to improve the performance of the Boltzmann machine.
Abstract: By separating the search control and the solution updating of the commonly used simulated annealing technique, we propose a revised version of the simulated annealing method which produces better solutions and can reduce the computation time. We also use it to improve the performance of the Boltzmann machine. Furthermore, we present a simple combinatorial optimization model for solving the attributed graph matching problem arising, e.g., in computer vision, and give two algorithms to solve the model, one using our improved simulated annealing method directly, the other using it via the Boltzmann machine. Computer simulations have been conducted on the model using both the revised and the original simulated annealing and the Boltzmann machine. The advantages of our revised methods are shown by the results.
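
For reference, the commonly used simulated annealing loop that the paper revises looks roughly as follows (a generic sketch; the paper's separation of search control from solution updating, and the Boltzmann machine variant, are not reproduced here):

import math, random

def simulated_annealing(initial, cost, neighbor, t0=1.0, alpha=0.95, steps=5000, seed=0):
    """Standard simulated annealing: accept worse neighbors with probability
    exp(-delta / T) and cool the temperature geometrically."""
    rng = random.Random(seed)
    current = best = initial
    t = t0
    for _ in range(steps):
        candidate = neighbor(current, rng)
        delta = cost(candidate) - cost(current)
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= alpha
    return best

# Toy usage: minimize a one-dimensional cost with several local minima.
cost = lambda x: (x - 3.0) ** 2 + 2.0 * math.sin(5.0 * x)
neighbor = lambda x, rng: x + rng.uniform(-0.5, 0.5)
print(round(simulated_annealing(0.0, cost, neighbor), 3))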

Book ChapterDOI
TL;DR: It is argued that the right network architecture is fundamental for a good solution to exist and that the class of network architectures forms a basis for a complexity theory of classification problems; basic results on this measure of complexity are presented.
Abstract: Multilayered, feedforward neural network techniques have been proposed for a variety of classification and recognition problems ranging from speech to sonar signal processing problems. It is generally assumed that the underlying application does not need to be modeled very much and that an artificial neural network solution can be obtained instead by training from empirical data with little or no a priori information about the application. We argue that the right network architecture is fundamental for a good solution to exist and the class of network architectures forms a basis for a complexity theory of classification problems. An abstraction of this notion of complexity leads to ideas similar to Kolmogorov's minimum description length criterion, entropy and k-widths. We will present some basic results on this measure of complexity. From this point of view, artificial neural network solutions to real engineering problems may not ameliorate the difficulties of classification problems, but rather obscure and postpone them. In particular, we doubt that the design of neural networks for solving interesting nontrivial engineering problems will be any easier than other large scale engineering design problems (such as in aerodynamics and semiconductor device modeling).



Journal Article
TL;DR: The object 0 acts as a zero for both sum and multiplication in process algebra, whereas the constant δ, representing deadlock or inaction, is only a left zero for multiplication.
Abstract: The object 0 acts as a zero for both sum and multiplication in process algebra. The constant δ, representing deadlock or inaction, is only a left zero for multiplication. We will call 0 predictable failure.

Journal Article
TL;DR: A novel process algebra is presented in which algebraic expressions specify delay-insensitive circuits in terms of voltage-level transitions on wires; the algebraic laws make it possible to specify circuits concisely and facilitate the verification of designs.
Abstract: A novel process algebra is presented; algebraic expressions specify delay-insensitive circuits in terms of voltage-level transitions on wires. The algebraic laws make it possible to specify circuits concisely and facilitate the verification of designs. Individual components can be composed into circuits in which signals along internal wires are hidden from the environment.


Journal Article
TL;DR: A main result of the paper is that a specification formalism must be at least as expressive as Hennessy-Milner Logic in order to be decomposable.

Journal Article
TL;DR: The volume collects contributions on the stepwise refinement and compositional verification of distributed systems, including MetateM, a framework for programming in temporal logic, the assumption-commitment framework for compositional verification of distributed programs, and the verification of AADL modules using model checking.
Abstract: Contents: Composing specifications; Refinement calculus, part I: Sequential nondeterministic programs; Refinement calculus, part II: Parallel and reactive programs; MetateM: A framework for programming in temporal logic; Constraint-oriented specification in a constructive formal description technique; Functional specification of time sensitive communicating systems; Modular verification of Petri Nets; Abadi & Lamport and Stark: Towards a proof theory for stuttering, dense domains and refinement mappings; Algebraic implementation of objects over objects; Refinement of actions in causality based models; Transformation of combined data type and process specifications using projection algebras; Various simulations and refinements; On decomposing and refining specifications of distributed systems; Verifying the correctness of AADL modules using model checking; Specialization in logic programming: From Horn clause logic to Prolog and Concurrent Prolog; Analysis of discrete event coordination; Refinement and projection of relational specifications; Compositional theories based on an operational semantics of contexts; Multivalued possibilities mappings; Completeness theorems for automata; Formal verification of data type refinement - Theory and practice; From trace specifications to process terms; Some comments on the assumption-commitment framework for compositional verification of distributed programs; Refinement of concurrent systems based on local state transformations; Construction of network protocols by stepwise refinement; A derivation of a broadcasting protocol using sequentially phased reasoning; Verifying atomic data types; Predicates, predicate transformers and refinement; Foundations of compositional program refinement.

Journal Article
TL;DR: The k-Gabriel graphs are used to improve the running time of solving the Euclidean bottleneck biconnected edge subgraph problem from O(n^2) to O(n log n), and that of solving the Euclidean bottleneck matching problem from O(n^2) to O(n^1.5 log^0.5 n); an O(k^2 n log n) algorithm constructs the k-Gabriel graph of a point set.
Abstract: In this paper, we define and investigate the properties of k-Gabriel graphs and also propose an algorithm to construct the k-Gabriel graph of a point set in O(k^2 n log n) time. The k-Gabriel graphs are also used to improve the running time of solving the Euclidean bottleneck biconnected edge subgraph problem from O(n^2) to O(n log n), and that of solving the Euclidean bottleneck matching problem from O(n^2) to O(n^1.5 log^0.5 n).
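
A brute-force construction, assuming the usual definition that an edge {p, q} belongs to the k-Gabriel graph when the disc with diameter pq contains fewer than k other input points; it runs in roughly O(n^3) time, far from the paper's O(k^2 n log n) algorithm, but makes the definition concrete.

from itertools import combinations

def k_gabriel_graph(points, k=1):
    """Keep edge (i, j) when the open disc whose diameter is the segment
    p_i p_j contains fewer than k of the remaining points."""
    edges = []
    for i, j in combinations(range(len(points)), 2):
        (xi, yi), (xj, yj) = points[i], points[j]
        cx, cy = (xi + xj) / 2.0, (yi + yj) / 2.0
        r2 = ((xi - xj) ** 2 + (yi - yj) ** 2) / 4.0
        inside = sum(
            (px - cx) ** 2 + (py - cy) ** 2 < r2
            for m, (px, py) in enumerate(points) if m not in (i, j)
        )
        if inside < k:
            edges.append((i, j))
    return edges

pts = [(0.0, 0.0), (1.0, 0.0), (0.5, 0.1), (2.0, 2.0)]
print(k_gabriel_graph(pts, k=1))   # the point (0.5, 0.1) blocks the edge 0-1
print(k_gabriel_graph(pts, k=2))   # with k = 2 that edge is allowed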

Book ChapterDOI
TL;DR: A significant theorem is established within this theory that describes the relationship between specifications and iterative implementations of arithmetic functions and thus provides a general approach to the synthesis and formal verification of arithmetic circuits.
Abstract: A specification language, veritas+, based on type theory, is proposed as being an ideal notation for specifying and reasoning about digital systems. The development, within veritas+, of a formal theory of arithmetic and numerals is outlined and its application to the specification, at differing levels of abstraction, of arithmetic devices is illustrated. A significant theorem is established within this theory that describes the relationship between specifications and iterative implementations of arithmetic functions and thus provides a general approach to the synthesis and formal verification of arithmetic circuits.


Journal Article
TL;DR: This paper addresses the synthesis of a circuit structure from a sequential behavioral specification as a sequence of behavior-preserving transformations of a data- and control-flow graph, giving results down to the logic level.
Abstract: This paper addresses the synthesis of a circuit structure from a sequential behavioral specification. The problem is formally stated as a sequence of behavior-preserving transformations of a data- and control-flow graph. Behavior equivalence is defined strongly, so that it implies equal output sequences for equal input sequences and equal initial state. The transformations introduce the minimum number of control steps. The resulting structure includes both control and data-path. The combinational logic in this structure is passed to logic synthesis for further optimization. Several examples illustrate these techniques, giving results down to the logic level.

Journal Article
TL;DR: Both the tangential and normal components of the flow can be computed reliably where the image Hessian is well-conditioned, and a fast algorithm is proposed to propagate flow along contours from such locations.
Abstract: Both the tangential and normal components of the flow can be computed reliably where the image Hessian is well-conditioned. A fast algorithm to propagate flow along contours from such locations is proposed. Experimental results for an intrinsically parallel algorithm for computing the flow along zero-crossing contours are presented.
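
As background for the summary above (standard definitions, not necessarily the paper's exact formulation), the brightness-constancy constraint at a single point determines only the normal component of the flow, while the image Hessian collects the second derivatives whose conditioning is exploited:

I_x u + I_y v + I_t = 0, \qquad v_{\perp} = -\frac{I_t}{\sqrt{I_x^2 + I_y^2}}, \qquad H = \begin{pmatrix} I_{xx} & I_{xy} \\ I_{xy} & I_{yy} \end{pmatrix}.

Where H is well-conditioned (its two eigenvalues are comparable and well away from zero), both components of the flow can be recovered reliably and then propagated along zero-crossing contours.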

Book ChapterDOI
TL;DR: The first delay-insensitive microprocessor is designed, which is a 16-bit, RISC-like architecture and the chips were found functional on “first silicon.”
Abstract: We have designed the first delay-insensitive microprocessor. It is a 16-bit, RISC-like architecture. The version implemented in 1.6 micron SCMOS runs at 18 MIPS. The chips were found functional on “first silicon.”

Book ChapterDOI
TL;DR: In an approach to digital synthesis based on functional algebra, logical organization is developed using transformations called system factorizations; the paper outlines the foundations of this algebra and presents several examples, ranging from the refinement of actual architecture to the synthesis of certain kinds of verification conditions.
Abstract: Logical organization refers to a system's decomposition into functional units, processes, and so on; it is sometimes called the ‘structural’ aspect of system description. In an approach to digital synthesis based on functional algebra, logical organization is developed using a class of transformations called system factorizations. These transformations isolate subsystems and encapsulate them as applicative combinators. Factorizations have a variety of uses, ranging from the refinement of actual architecture to the synthesis of certain kinds of verification conditions. This paper outlines the foundations for this algebra and presents several examples.

Book ChapterDOI
TL;DR: The concept of Design for Verifiability is introduced as a means of attacking the complexity problem encountered when verifying the correctness of hardware designs using mathematical proof techniques.
Abstract: The concept of Design for Verifiability is introduced as a means of attacking the complexity problem encountered when verifying the correctness of hardware designs using mathematical proof techniques. The inherent complexity of systems implemented as integrated circuits results in a comparable descriptive complexity when modelling them in any framework which supports formal verification. Performing formal verification then rapidly becomes intractable as a consequence of this descriptive complexity. In this paper we propose a strategy for dealing, at least in part, with this problem. We advocate the use of a particular design strategy involving structural design rules which constrain the behaviour of a design, resulting in a less complex design verification. The term Design for Verifiability is used to capture this concept, in an analogous way to the term Design for Testability.

Journal Article
TL;DR: In this article, a mathematical model for object identity in a framework of typed pure functional languages is presented, where the properties of object identity are accurately captured by references as they are implemented in Standard ML.
Abstract: One of the central concepts in the field of object-oriented databases is object identity, which nicely captures mutability, sharing and cyclic structures. Although the concept is intuitively clear, its precise semantics has not yet been well established. This seems to be a major obstacle to achieving a clean integration of object-oriented databases and other paradigms of database programming in a modern type system of a programming language. This paper attempts to establish a mathematical model for object identity in a framework of typed pure functional languages. We argue that the properties of object identity are accurately captured by references as they are implemented in Standard ML. We then present a method to interpret an impure higher-order functional language with references in a typed pure functional language using Moggi's recent result on the categorical structure of monads. This establishes a precise semantics for the primitive operations on references and allows us to analyze various properties of object identity. We investigate the interaction between set data types and object identity. Since the interpretation is shown to preserve all the properties of the existing data structures for databases, it enables us to integrate object identity with various existing data models within a type system of a programming language. We show that object identity and a generalized relational model can be uniformly integrated in a programming language.
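
The properties the paper sets out to capture, mutability, sharing and cyclic structures, can be observed in any language with first-class references; a small illustration in Python (unrelated to the paper's typed, purely functional encoding via Standard-ML-style references and monads):

# Two variables can denote the same mutable object (sharing) ...
a = {"balance": 100}
b = a
b["balance"] = 50
print(a["balance"], a is b)   # 50 True: the update made through b is visible through a

# ... and identity also allows cyclic structures.
node = {"label": "root", "next": None}
node["next"] = node           # the object refers to itself
print(node["next"] is node)   # True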


Journal Article
TL;DR: A general framework is established to classify Boolean functions according to their propagation characteristics when a number of input bits are kept constant, using the relation between the Walsh-Hadamard transform and the autocorrelation function of Boolean functions.
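
For reference, the two tools named in the summary are usually defined as follows for a Boolean function f on n-bit inputs (standard definitions; the paper's normalization may differ):

\hat{F}(w) = \sum_{x \in \{0,1\}^n} (-1)^{f(x) \oplus w \cdot x} \quad \text{(Walsh-Hadamard transform)}, \qquad \hat{r}(s) = \sum_{x \in \{0,1\}^n} (-1)^{f(x) \oplus f(x \oplus s)} \quad \text{(autocorrelation function)}.

In this standard setting, f satisfies the propagation criterion of degree k when f(x) \oplus f(x \oplus s) is balanced, i.e. \hat{r}(s) = 0, for every nonzero s of Hamming weight at most k.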