Book

Computational Aspects of VLSI

01 Jan 1984
About: This book was published on 1984-01-01 and is currently open access. It has received 862 citations to date. It focuses on the topic: very-large-scale integration (VLSI).
Citations
Proceedings ArticleDOI
06 Jun 1994
TL;DR: Algorithms for variable ordering for BDD representation of a system of interacting finite state machines are implemented in HSIS, a hierarchical synthesis and verification tool currently under development at Berkeley.
Abstract: We address the problem of obtaining good variable orderings for the BDD representation of a system of interacting finite state machines (FSMs). Orderings are derived from the communication structure of the system. Communication complexity arguments are used to prove upper bounds on the size of the BDD for the transition relation of the product machine in terms of the communication graph, and optimal orderings are exhibited for a variety of regular systems. Based on the bounds we formulate algorithms for variable ordering. We perform reached state analysis on a number of standard verification benchmarks to test the effectiveness of our ordering strategy; experimental results demonstrate the efficacy of our approach. The algorithms described in this paper have been implemented in HSIS, a hierarchical synthesis and verification tool currently under development at Berkeley.
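The paper derives its orderings from the communication structure of the interacting FSMs; as a self-contained illustration of why variable ordering governs BDD size (a sketch of the underlying phenomenon, not the paper's HSIS algorithm), the following counts the distinct subfunctions at each cut of an ordering — the level widths of a quasi-reduced BDD — for a function whose two natural orderings differ exponentially:

```python
from itertools import product

def level_widths(f, order):
    """For each cut of the variable order, count the distinct
    subfunctions obtained by fixing the prefix variables; these
    counts are the level widths of a quasi-reduced BDD for f."""
    n = len(order)
    widths = []
    for k in range(1, n):
        subs = set()
        for prefix in product([0, 1], repeat=k):
            env = dict(zip(order[:k], prefix))
            # truth table of the subfunction over the remaining variables
            tbl = tuple(
                f({**env, **dict(zip(order[k:], rest))})
                for rest in product([0, 1], repeat=n - k)
            )
            subs.add(tbl)
        widths.append(len(subs))
    return widths

# f = a1*b1 + a2*b2 + a3*b3: a classic ordering-sensitive function
def f(v):
    return (v["a1"] & v["b1"]) | (v["a2"] & v["b2"]) | (v["a3"] & v["b3"])

interleaved = ["a1", "b1", "a2", "b2", "a3", "b3"]
separated   = ["a1", "a2", "a3", "b1", "b2", "b3"]
print(max(level_widths(f, interleaved)))  # 3: bounded width
print(max(level_widths(f, separated)))    # 8: width doubles per extra pair
```

Keeping communicating variables adjacent (the interleaved order) bounds the cut width by the communication between the two halves, which is the intuition behind the paper's communication-complexity upper bounds.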

67 citations


Cites methods from "Computational Aspects of Vlsi"

  • ...Using a communication complexity approach [Ull][McMil] we prove upper bounds on the size of BDDs for specified orderings....


Proceedings ArticleDOI
08 Nov 1992
TL;DR: This paper demonstrates how, for the large class of linear computations, the maximally fast implementation with respect to five important and powerful transformations can be efficiently derived, and shows how an arbitrarily fast, asymptotically optimal implementation of a general linear computation can be produced by combining those five transformations with retiming and loop unfolding.
Abstract: Linear systems are the most often used type of systems in many engineering and scientific areas. By establishing a relationship between the basic properties of linear computations and several optimizing transformations, it is possible to optimally speed up linear computations with respect to those transformations while keeping the latency fixed. Furthermore, arbitrarily fast, asymptotically optimal implementations can be obtained by adding retiming and loop unrolling to the transformation set and trading latency for throughput. The proposed techniques have yielded results superior to the best published previously on all benchmark examples. Finally, the presented approach is also applicable to general (non-linear) computations. 1.0 Motivation and Prior Art The major goal of this paper is to demonstrate how, for the large class of linear computations, the maximally fast implementation with respect to five important and powerful transformations (associativity, distributivity, commutativity, common subexpression elimination, and constant propagation) can be efficiently derived, and to show how an arbitrarily fast, asymptotically optimal (with respect to the hardware cost) implementation of a general linear computation can be produced by combining those five transformations with retiming and loop unfolding. Transformations alter the organization of a computation in such a way that the user-specified input/output relationship is maintained. They are often used as an effective approach for improving the implementation of computations. Their use in compilers (1,5), theoretical computer science (2), and high-level synthesis (3,7,9,11) is surveyed in (10).
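As a toy sketch of the idea (under my own naming, not the paper's formulation), two-way loop unfolding plus associativity, distributivity, and constant propagation turns the first-order linear recurrence y[n] = a·y[n-1] + x[n] into a look-ahead form whose loop-carried dependence spans two outputs, halving the critical path per output:

```python
def iir_direct(a, x, y0=0.0):
    """y[n] = a*y[n-1] + x[n]: one multiply-add on the critical cycle per output."""
    y, out = y0, []
    for xn in x:
        y = a * y + xn
        out.append(y)
    return out

def iir_unfolded(a, x, y0=0.0):
    """Two-way unfolding with look-ahead:
    y[n+1] = a^2*y[n-1] + a*x[n] + x[n+1],
    so only one recurrence step is taken per two outputs
    (a^2 is a constant-propagated coefficient)."""
    assert len(x) % 2 == 0, "toy version: even-length input only"
    out, y_prev, a2 = [], y0, a * a
    for n in range(0, len(x), 2):
        out.append(a * y_prev + x[n])               # y[n]: off the critical cycle
        y_prev = a2 * y_prev + a * x[n] + x[n + 1]  # y[n+1] directly from y[n-1]
        out.append(y_prev)
    return out

print(iir_direct(2, [1, 1, 1, 1]))    # [1, 3, 7, 15]
print(iir_unfolded(2, [1, 1, 1, 1]))  # [1, 3, 7, 15]
```

The two multiply-adds feeding y[n+1] are independent of the recurrence and can run in parallel, which is the latency-for-throughput trade the abstract describes.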

66 citations

Journal ArticleDOI
TL;DR: Parallel algorithms are proposed for the efficient computation of low-order moments in processor arrays. The basic idea is to decompose a 2-D moment into many vertical moments and one horizontal moment, using data parallelism for the vertical moments and task parallelism for the horizontal moment.
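The decomposition rests on the identity m_pq = Σ_x x^p (Σ_y y^q f(x,y)): the inner sums (vertical moments) are independent per column, and only the outer sum combines them. A minimal sketch of this split (my own naming, not the paper's array mapping):

```python
def moment_direct(img, p, q):
    """2-D moment m_pq = sum over x, y of x^p * y^q * img[y][x]."""
    return sum(x**p * y**q * img[y][x]
               for y in range(len(img)) for x in range(len(img[0])))

def moment_decomposed(img, p, q):
    """Vertical moment per column (data-parallel across columns),
    then a single horizontal moment over the column results."""
    h, w = len(img), len(img[0])
    vertical = [sum(y**q * img[y][x] for y in range(h)) for x in range(w)]
    return sum(x**p * vertical[x] for x in range(w))

img = [[1, 2], [3, 4]]
print(moment_direct(img, 1, 1))      # 4
print(moment_decomposed(img, 1, 1))  # 4
```

In a processor array, each column's vertical moment maps naturally to one processing element, with the horizontal combination handled as a separate task.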

66 citations


Cites background from "Computational Aspects of Vlsi"

  • ...The area·time² (AT²) complexity measure is based on the requirements for information flow within a chip, which is believed to be a "strong" bound for the best circuits which can be constructed [32]....


Proceedings ArticleDOI
18 May 1987
TL;DR: Traditional algorithms using hardware division and square root are replaced with the special purpose CORDIC algorithms for computing vector rotations and inverse tangents.
Abstract: Arithmetic issues in the calculation of the Singular Value Decomposition (SVD) are discussed. Traditional algorithms using hardware division and square root are replaced with the special purpose CORDIC algorithms for computing vector rotations and inverse tangents. The CORDIC 2×2 SVD processor can be twice as fast as one assembled from traditional hardware units. A prototype VLSI implementation of a CORDIC SVD processor array is planned for use in real-time signal processing applications.
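For reference, the standard CORDIC vectoring mode the abstract alludes to (a generic sketch, not the paper's 2×2 SVD processor) computes an inverse tangent and a scaled magnitude using only shifts, adds, and a small table of arctangents — which is why it can replace hardware division and square root:

```python
import math

def cordic_vectoring(x, y, iters=32):
    """CORDIC vectoring mode: drive y to zero with shift-and-add
    micro-rotations. For x > 0, the accumulated angle z converges to
    atan2(y, x), and x converges to K * sqrt(x^2 + y^2), K the CORDIC gain."""
    z = 0.0
    for i in range(iters):
        d = -1.0 if y > 0 else 1.0           # rotate toward the x-axis
        x, y = x - d * y * 2.0**-i, y + d * x * 2.0**-i
        z -= d * math.atan(2.0**-i)          # precomputed table entry in hardware
    return x, z

# CORDIC gain K = prod sqrt(1 + 2^-2i), about 1.6468
K = math.prod(math.sqrt(1.0 + 4.0**-i) for i in range(32))
mag_scaled, angle = cordic_vectoring(1.0, 1.0)
print(round(angle, 6))           # 0.785398 (pi/4)
print(round(mag_scaled / K, 6))  # 1.414214 (sqrt(2))
```

In hardware, the multiplications by 2^-i are wire shifts and the constant gain K is folded into a final scaling, so each iteration is a pair of add/subtract operations.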

66 citations

Proceedings ArticleDOI
01 Nov 1986
TL;DR: A new paradigm for distributed computing, almost-everywhere agreement, is defined, requiring only that almost all correct processors reach consensus; unlike the traditional Byzantine agreement problem, it can be solved on networks of bounded degree.
Abstract: Achieving processor cooperation in the presence of faults is a major problem in distributed systems. Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Indeed, Dolev (J. Algorithms, 3 (1982), pp. 14-30) and Hadzilacos (Issues of Fault Tolerance in Concurrent Computations, Ph.D. thesis, Harvard University, Cambridge, MA, 1984) have shown that Ω(t) connectivity is necessary if the requirement is that all nonfaulty processors decide unanimously, where t is the number of faults to be tolerated. We believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed. We therefore raise the question whether it is possible to avoid the connectivity requirements by slightly lowering our expectations. In many practical situations we may be willing to "lose" some correct processors and settle for cooperation between the vast majority of the processors. Thus motivated, we present a general simulation technique by which vertices (processors) in almost any network of bounded degree can simulate an algorithm designed for the complete network. The simulation has the property that although some correct processors may be cut off from the majority of the network by faulty processors, the vast majority of the correct processors will be able to communicate among themselves undisturbed by the (arbitrary) behavior of the faulty nodes. We define a new paradigm for distributed computing, almost-everywhere agreement, in which we require only that almost all correct processors reach consensus. Unlike the traditional Byzantine agreement problem, almost-everywhere agreement can be solved on networks of bounded degree. Specifically, we can simulate any sufficiently resilient Byzantine agreement algorithm on a network of bounded degree using our communication scheme described above.
Although we "lose" some correct processors, effectively treating them as faulty, the vast majority of correct processors decide on a common value. 1. Preliminaries. In 1982 Dolev (D) published the following damning result for distributed computing: "Byzantine agreement is achievable only if the number of faulty processors in the system is less than one-half of the connectivity of the system's network." Even in the absence of malicious failures, connectivity t + 1 is required to achieve agreement in the presence of t faulty processors (H). The results are viewed as damning because of the fundamental nature of the Byzantine agreement problem. In this problem each processor begins with an initial value drawn from some domain V of possible values. At some point during the computation, during which processors repeatedly exchange messages and perform local computations, each processor must irreversibly decide on a value, subject to two conditions. No two correct processors may decide on different values, and if all correct processors begin with the same value v, then v must be the common decision value. (See (F) for a survey of related problems.) The ability to achieve this type of coordination is important in a wide range of applications, such as database management, fault-tolerant analysis of sensor readings, and coordinated control of multiple agents. A simple corollary of the results of Dolev and Hadzilacos is that in order for a system to be able to reach Byzantine agreement in the presence of up to t faulty processors, every processor must be directly connected to at least Ω(t) others. Such high connectivity, while feasible in a small system, cannot be implemented at reasonable cost in a large system. As technology improves, increasingly large distributed systems and parallel com-

66 citations