Book

Computational Aspects of VLSI

01 Jan 1984
About: The book was published on 1984-01-01 and is currently open access. It has received 862 citations to date. The book focuses on the topic of very-large-scale integration (VLSI).
Citations
Journal ArticleDOI
TL;DR: This paper exploits the reconfigurable mesh architecture for the purpose of obtaining constant-time algorithms for a number of computational problems on interval graphs, including finding a maximum size clique, a maximum weight clique in the presence of integer weights, a maximum size independent set, a minimum clique cover, a minimum size dominating set, and a shortest path between any two vertices in G.
Abstract: Interval Graph Problems on Reconfigurable Meshes, by S. Olariu and J. L. Schwing (Old Dominion University) and J. Zhang (Elizabeth City State University). A graph G is an interval graph if there is a one-to-one correspondence between its vertices and a family I of intervals, such that two vertices in G are adjacent if and only if their corresponding intervals overlap. In this context, the family I of intervals is referred to as an interval model of G. Recently, a powerful architecture called the reconfigurable mesh has been proposed: in essence, a reconfigurable mesh consists of a mesh-connected architecture augmented by a dynamically reconfigurable bus system. In this paper, we exploit the reconfigurable mesh architecture for the purpose of obtaining constant-time algorithms for a number of computational problems on interval graphs. These problems include finding a maximum size clique, a maximum weight clique in the presence of integer weights, a maximum size independent set, a minimum clique cover, a minimum size dominating set, a shortest path between any two vertices in G, the diameter and the center of G, as well as Breadth-First Search and Depth-First Search trees for G. Specifically, with an n-vertex interval graph specified by its interval model as input, all our algorithms run in constant time on a reconfigurable mesh of size n x n.
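As an illustration of the interval model, here is a minimal sequential sketch in Python (illustrative names; not the paper's constant-time reconfigurable-mesh algorithm): the size of a maximum clique in an interval graph equals the largest number of pairwise-overlapping intervals, which a single sweep over sorted endpoints finds.

# Illustrative sequential sketch; not the reconfigurable-mesh algorithm.
# In an interval graph, a maximum clique corresponds to a point covered
# by the largest number of intervals, so a sweep over sorted endpoints
# suffices.
def max_clique_size(intervals):
    # intervals: list of (left, right) pairs with left <= right
    events = []
    for left, right in intervals:
        events.append((left, 0))   # 0 = interval opens
        events.append((right, 1))  # 1 = interval closes
    events.sort()  # opens sort before closes at equal coordinates, so
                   # intervals sharing an endpoint count as overlapping
    best = current = 0
    for _, kind in events:
        if kind == 0:
            current += 1
            best = max(best, current)
        else:
            current -= 1
    return best

# Example: three mutually overlapping intervals plus one disjoint one.
print(max_clique_size([(1, 4), (2, 6), (3, 5), (7, 9)]))  # prints 3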
Journal ArticleDOI
TL;DR: Criteria for layout construction are derived from those proposed for flow-graph-based circuits, and the complexity evaluation is given in terms of recurrence equations expressing the area required by the different layouts.
DissertationDOI
14 Jun 2022
TL;DR: In this dissertation, the authors develop a parallel algorithm to find the maxima of a set of N points in d-dimensional space, d > 3, on a hypercube SIMD machine.
Abstract: Geometric algorithms have many important applications in science and technology. Some geometric problems require fast response times that cannot be achieved by traditional sequential algorithms. However, the speed, power, and versatility of parallel computers can be exploited to develop efficient geometric algorithms, as shown in this dissertation. Our study focuses on designing efficient parallel geometric algorithms and analyzing their computational complexities. In this research, we first developed a parallel algorithm to find the maxima of a set of N points in d-dimensional space, d > 3, on a hypercube SIMD machine. Our algorithm is a parallel implementation of the sequential algorithm given by Kung, Luccio, and Preparata (KLP75). Although the time complexity, $O(N^{0.77}\log^{d-1} N)$, of our algorithm is not optimal, it is the first sublinear-time algorithm for solving the high-dimensional maxima problem. Next, we developed another parallel algorithm to construct the Voronoi diagram of a point set in the plane. Our algorithm is based on the sequential algorithm given by Brown (B79). We use an $N\times N$ mesh of trees (MOT) SIMD computer and obtain the optimal time complexity $O(\log^2 N)$. Finally, we developed another MOT algorithm to solve the congruent pattern problem: given a simple polygon P with k edges and a planar graph G with N edges, $N > k$, the problem is to find all the patterns (cycles) in G which are congruent to P. Our algorithm is based on the CREW PRAM algorithm given by Jeong, Kim, and Baek (JKB92). We again use an $N\times N$ MOT and obtain the optimal time complexity $O(k\log N)$.
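For reference, the maxima (dominance) problem the dissertation parallelizes can be stated as a naive O(N^2 d) sequential sketch in Python (illustrative names; not the hypercube or mesh-of-trees algorithms described above).

# Naive sequential sketch of the maxima problem: a point is maximal if
# no other point dominates it (is >= in every coordinate and differs).
def dominates(p, q):
    # True if p dominates q.
    return all(a >= b for a, b in zip(p, q)) and p != q

def maxima(points):
    # Return the points of the set not dominated by any other point.
    return [q for q in points if not any(dominates(p, q) for p in points)]

# Example in d = 3: (0, 0, 1) is dominated by (1, 5, 2) and is dropped.
print(maxima([(1, 5, 2), (3, 3, 3), (2, 4, 4), (0, 0, 1)]))
# -> [(1, 5, 2), (3, 3, 3), (2, 4, 4)]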
Journal ArticleDOI
TL;DR: The first superlinear lower bound on the size of planar Boolean circuits computing a specific Boolean function and the first superpolylogarithmic lower bounds on the depth of monotone Boolean circuits have been established.
Abstract: … the size of distinct models of branching programs, the depth of decision trees, and data structure problems. To illustrate the progress covered by the above list, we mention two specific contributions. The first superlinear lower bound on the size of planar Boolean circuits computing a specific Boolean function and the first superpolylogarithmic lower bounds on the depth of monotone Boolean circuits have been established. The big success of communication complexity applications should not be surprising, because there is information transfer in all computing models (for instance, between two parts of the input data, between some parts (processors) of a parallel computing model, between two moments in time, etc.). So, you can cut hardware, time, or both in your computing model, and then apply lower bounds on the communication complexity of your computing problem. In this way you obtain a lower bound on the information transfer that must be realized in the considered computing model in order to compute the given task. The appropriate choice of the cut is crucial for obtaining good lower bounds. One of the perspectives is to extend the applications to proving lower bounds for multilective and/or non-oblivious computing models. This is one of the hardest tasks of special importance in complexity theory. The recent results show that, using Ramsey theory and communication complexity over overlapping (not disjoint) partitions of inputs, one has good chances to achieve progress in this hard topic as well. 3. NONDETERMINISTIC AND RANDOMIZED COMPUTATIONS. One of the central questions of current theoretical computer science is how much computational power nondeterministic and randomized computations have, especially in comparison with deterministic ones. The fundamental questions about polynomial-time computations (like P versus NP, P versus ZPP, P versus R) are long-standing open problems. For communication complexity the research has been successful and the relation between determinism, nondeterminism, and randomness has been settled. This has essentially contributed to the understanding of the nature of randomness and nondeterminism. Some of the main results are the following: (1) There are exponential gaps between determinism and Monte Carlo randomness, and between nondeterminism and bounded-error probabilism. (2) Deterministic communication can be bounded by at most twice the product of the nondeterministic communication of the language and of its complement. This implies an at most quadratic gap between determinism and Las Vegas randomization. A language having this quadratic gap has been found. (3) There is a linear gap between determinism and Las Vegas randomness for one-way communication complexity. (4) O(log n) random bits are sufficient to reach the full power of randomized communication for Las Vegas and Monte Carlo (error-bounded) protocols. (5) In contrast to (4), there exist high thresholds on the amount of nondeterminism (for some computing problems the deterministic communication complexity is
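Result (2) above and its consequence can be restated compactly; the symbols D, N, and LV below (deterministic, nondeterministic, and Las Vegas communication complexity of a language L, with \bar{L} its complement) are our notational assumption, not taken from the abstract, and the middle step uses the standard fact that a zero-error protocol yields nondeterministic protocols for both L and \bar{L}.

% D(L): deterministic, N(L): nondeterministic, LV(L): Las Vegas
% communication complexity of L; \bar{L} is the complement of L.
\[
  D(L) \le 2\, N(L)\, N(\bar{L}),
  \quad\text{and since } N(L),\, N(\bar{L}) = O(\mathrm{LV}(L)),
  \quad D(L) = O\bigl(\mathrm{LV}(L)^{2}\bigr).
\]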
Journal ArticleDOI
TL;DR: This article describes a VLSI CAD workstation with a massively parallel computer, the Connection Machine, as a hardware accelerator, providing a suitable basis for knowledge-based tool development.
Abstract: This article describes a VLSI CAD workstation with a massively parallel computer, the Connection Machine, as a hardware accelerator. The Connection Machine offers workstation users general-purpose acceleration capabilities and high interactivity. Workstation software includes a novel CAD-system kernel and tools operating on the Connection Machine. The system kernel, designed to permit efficient interfaces to existing tools and tool environments, also includes more advanced design tools (procedural tools, for example), providing a suitable basis for knowledge-based tool development.