Book

Computational Aspects of Vlsi

01 Jan 1984-
About: This book was published on 1984-01-01 and is currently open access. It has received 862 citations to date. It focuses on the topic of very-large-scale integration (VLSI).
Citations
Book ChapterDOI
01 Jan 1993
TL;DR: This work has shown that VLSI technology makes it possible for the first time to construct integrated circuits with as many as a million elements on a chip (in an area of approximately 1 cm²), with a high degree of parallelism.
Abstract: With the introduction of VLSI (Very Large Scale Integration) technology [16], it has become possible for the first time to construct integrated circuits with as many as a million elements on a chip (in an area of approximately 1 cm²). The high degree of parallelism that can be achieved in circuits of such density enables computations to be realized on VLSI chips with extreme speed and efficiency.
Journal ArticleDOI
TL;DR: Dense edge-disjoint embeddings of the complete binary tree with n leaves are described for the following n-node communication networks: the hypercube, the de Bruijn and shuffle-exchange networks, and the two-dimensional mesh.
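The four host networks named in this summary have standard definitions; as a point of reference (not taken from the cited paper), a minimal sketch building each n-node network as an adjacency map, with n = 2**d:

```python
# Minimal sketch (not from the cited paper): adjacency lists for the
# four n-node networks named above, with n = 2**d nodes.

def hypercube(d):
    # Node u is adjacent to u with one of its d bits flipped.
    n = 1 << d
    return {u: [u ^ (1 << i) for i in range(d)] for u in range(n)}

def de_bruijn(d):
    # Node u (a d-bit string) points to its two left shifts,
    # 2u mod n and (2u + 1) mod n.
    n = 1 << d
    return {u: [(2 * u) % n, (2 * u + 1) % n] for u in range(n)}

def shuffle_exchange(d):
    # "Shuffle" edge: cyclic left shift of u's d bits;
    # "exchange" edge: flip the lowest bit.
    n = 1 << d
    shuffle = lambda u: ((u << 1) | (u >> (d - 1))) & (n - 1)
    return {u: [shuffle(u), u ^ 1] for u in range(n)}

def mesh_2d(rows, cols):
    # Four-neighbour two-dimensional mesh on rows * cols nodes.
    adj = {}
    for r in range(rows):
        for c in range(cols):
            nbrs = []
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    nbrs.append(rr * cols + cc)
            adj[r * cols + c] = nbrs
    return adj

# Hypercube nodes have degree d; de Bruijn and shuffle-exchange have
# constant degree, which is what makes dense tree embeddings in them
# interesting.
print(len(hypercube(3)[0]))   # prints 3
```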
Journal ArticleDOI
01 Jan 1992
TL;DR: A CNF-recognition implementation that runs in $n$ communication steps is derived, roughly twice as fast as Kosaraju's implementation ($2n - 2$ steps).
Abstract: We can solve several problems, in particular the recognition of strings in context-free grammars in Chomsky Normal Form, using dynamic programming methods in $O(n^3)$ sequential time. Partial methods exist for mapping these algorithms onto systolic arrays that run in $O(2n)$ time, similar to Kosaraju's implementation of CNF recognition. These methods accomplish only a portion of the mapping; they involve pipelining steps that can require considerable insight and have a critical effect on the speed and complexity of the resulting algorithm. We present an alternative method for deriving a systolic implementation of these problems. We represent the algorithms as directed acyclic graphs (DAGs), where nodes represent specific computations and arcs indicate dependencies between these computations. The original DAG for an algorithm may have nodes of unlimited in-degree and out-degree, and thus captures all inherent parallelism in the algorithm. We then schedule the DAG by imposing two constraints: the maximum number of copies of any one operand that can exist at one time, and the maximum number of operands that any one computation can accept in one timestep. By varying these constraints, we can derive a family of schedules of the computation. We then map these schedules into recurrence equations that represent systolic implementations running at various speeds on different architectures. We derive a CNF-recognition implementation that runs in $n$ communication steps, roughly twice as fast as Kosaraju's implementation ($2n - 2$ steps).
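The sequential $O(n^3)$ dynamic program this abstract starts from is CYK recognition of a Chomsky-Normal-Form grammar. A minimal sketch of that baseline (the toy grammar is hypothetical, not from the paper):

```python
# Minimal sketch of sequential CYK recognition, the O(n^3) dynamic
# program that the systolic derivations start from. The toy grammar
# below is hypothetical, not taken from the paper.
from itertools import product

def cyk(word, start, unary, binary):
    # unary:  terminal -> set of nonterminals A with rule A -> terminal
    # binary: pair (B, C) -> set of nonterminals A with rule A -> B C
    n = len(word)
    # table[i][l] = nonterminals deriving the substring word[i : i + l + 1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(word):
        table[i][0] = set(unary.get(ch, ()))
    for length in range(2, n + 1):          # span length
        for i in range(n - length + 1):     # span start
            for k in range(1, length):      # split point
                for B, C in product(table[i][k - 1],
                                    table[i + k][length - k - 1]):
                    table[i][length - 1] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Hypothetical CNF grammar for the language a^m b^m (m >= 1):
# S -> A T | A B,  T -> S B,  A -> a,  B -> b
unary = {"a": {"A"}, "b": {"B"}}
binary = {("A", "T"): {"S"}, ("A", "B"): {"S"}, ("S", "B"): {"T"}}
print(cyk("aabb", "S", unary, binary))   # True
print(cyk("aab", "S", unary, binary))    # False
```

The three nested loops over span length, span start, and split point are the $O(n^3)$ structure whose dependency DAG the paper schedules onto a systolic array.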
Journal ArticleDOI
TL;DR: This paper derives processor lower and upper bounds for a class of computations, including the tensor product, and uses generating functions to construct a time-minimal schedule that meets those bounds, i.e., a processor-time-minimal schedule.
Journal ArticleDOI
TL;DR: The worst-case behavior of a parallel sorting algorithm that uses a linear array of n − 1 finite state machines to sort n keys is analyzed, and the upper bound is improved to $4n/3 + O(1)$ for all inputs and any n, matching Warshauer's lower bound to within an additive constant.