Book

Computational Aspects of VLSI

01 Jan 1984
About: The book was published on 1984-01-01 and is currently open access. It has received 862 citations to date. It focuses on the topic of very-large-scale integration (VLSI).
Citations
Proceedings ArticleDOI
28 Nov 1984
TL;DR: Linear transformations of space-time are used to explore design alternatives in a formal way, and cellular computations for convolution and matrix product are used to illustrate this linear transformation technique.
Abstract: Cellular computations (i.e., systolic or wavefront computations) are embedded in a vector space, one of whose dimensions models time, and the others, space. Cellular computations that are related by a linear transformation may have different properties with respect to input/output schedules, chip area, communication topology, latency, period, and the presence/absence of broadcasting and pipelining. Linear transformations of space-time are used to explore design alternatives in a formal way. Cellular computations for convolution and matrix product are used to illustrate this linear transformation technique. Introduction: There has been considerable research recently into cellular designs (see [12] for a sampling of this work). Leiserson, Rose, and Saxe [15],[16] provide a method to eliminate broadcasting from a synchronous circuit without changing its communication structure. Johnsson and Cohen [8] and Weiser and Davis [25],[9] investigate ways of formally representing computational designs. Their respective goals are similar: To be able
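The abstract's key device, mapping each iteration point of a cellular computation through a linear space-time transformation into a (time, processor) pair, can be sketched concretely. The sketch below is a minimal illustration, assuming a 1-D convolution, one particular transformation matrix T, and a toy validity check; none of these names or sizes come from the paper itself.

```python
# Illustrative space-time mapping for a 1-D convolution
# y[i] = sum_j w[j] * x[i + j],  0 <= i < N, 0 <= j < K.
# Each iteration point (i, j) is sent by a linear transformation T to
# (time, processor); different admissible T's give different cellular designs.

import numpy as np

N, K = 6, 3                      # toy output length and kernel size
points = [(i, j) for i in range(N) for j in range(K)]

# One admissible transformation: t = i + j (schedule), p = j (allocation),
# i.e. a "weight-stationary" design.
T = np.array([[1, 1],            # time row
              [0, 1]])           # processor row

mapping = {pt: tuple(T @ np.array(pt)) for pt in points}

# Validity check: two iteration points assigned to the same processor
# must not be scheduled at the same time step.
seen = {}
for pt, (t, p) in mapping.items():
    assert (t, p) not in seen, f"conflict between {pt} and {seen[(t, p)]}"
    seen[(t, p)] = pt

print("latency (last time step):", max(t for t, _ in mapping.values()))
print("processors used:", len({p for _, p in mapping.values()}))
```

A different admissible T (say, t = i + j with p = i, which pins each output y[i] to a processor instead of each weight w[j]) passes the same check but changes the I/O schedule, latency, and processor count, which is the kind of design-space exploration the abstract describes.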

19 citations


Cites background from "Computational Aspects of VLSI"

  • ...Ullman [23] presents a matrix product design for a square mesh which does not use broadcasting....

    [...]

  • ...Band matrix product: Ullman [23] presents....

    [...]

Book
01 Oct 1990
TL;DR: A novel way of solving systems of linear equations with sparse coefficient matrices, using iterative methods on a VLSI array, is proposed; it yields superior time performance, greater ease of programmability, and an area-efficient design.
Abstract: We propose a novel way of solving systems of linear equations with sparse coefficient matrices using iterative methods on a VLSI array. The nonzero entries of the coefficient matrix are mapped onto a processor array of size √e × √e, where e is the number of nonzero elements, n is the number of equations, and e ⩾ n. The data transport problem that arises because of this mapping is solved using an efficient routing technique. Preprocessing is carried out on the iteration matrix of the system to compute the routing control-words that are used in the data transfer. This results in O(√e) time for each iteration of the method, with a small constant factor. As compared to existing VLSI methods for solving the problem, the proposed method yields a superior time performance, greater ease of programmability and an area efficient design. We also develop a second implementation of our algorithm that uses a slightly higher number of communication steps, but reduces the number of arithmetic operations to O(log e). The latter algorithm is suitable for many other architectures as well. The algorithm can be implemented in O(log e) time using e processors on a hypercube, shuffle-exchange, and cube-connected-cycles.
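A rough sequential sketch of the iteration that the array performs in parallel: the matrix is held as (row, column, value) nonzeros, one per processor, and each sweep accumulates the off-diagonal products (the step whose operand delivery is the routing problem described above) and then divides by the diagonal. The coordinate layout, the Jacobi update rule, and the toy system below are illustrative assumptions, not the paper's implementation.

```python
# Minimal sequential model of a Jacobi sweep over a sparse matrix stored
# as (row, col, value) nonzeros, one entry per processor in the VLSI array.

def jacobi_sweep(nonzeros, b, x):
    """One Jacobi iteration: x_new[i] = (b[i] - sum_{j != i} a_ij * x[j]) / a_ii."""
    n = len(b)
    acc = [0.0] * n              # partial sums of off-diagonal products
    diag = [0.0] * n
    for i, j, a in nonzeros:     # in the array, each processor holds one (i, j, a)
        if i == j:
            diag[i] = a
        else:
            acc[i] += a * x[j]   # the routing step delivers x[j] to this entry
    return [(b[i] - acc[i]) / diag[i] for i in range(n)]

# Toy diagonally dominant system: 4*x0 + x1 = 5,  x0 + 3*x1 = 4.
A = [(0, 0, 4.0), (0, 1, 1.0), (1, 0, 1.0), (1, 1, 3.0)]
b = [5.0, 4.0]
x = [0.0, 0.0]
for _ in range(25):
    x = jacobi_sweep(A, b, x)
print(x)   # approaches the solution [1.0, 1.0]
```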

19 citations

Journal ArticleDOI
01 Oct 1988
TL;DR: MUPPET as mentioned in this paper is a problem-solving environment for scientific computing with message-based multiprocessors, which consists of concurrent languages, programming environments, application environments and man-machine interfaces.
Abstract: MUPPET is a problem-solving environment for scientific computing with message-based multiprocessors. It consists of four parts: concurrent languages, programming environments, application environments and man-machine interfaces. The programming paradigm of MUPPET is based on parallel abstract machines and transformations between them. This paradigm allows the development of programs which are portable among multiprocessors with different interconnection topologies. In this paper we discuss the MUPPET programming paradigm. We give an introduction to the language CONCURRENT MODULA-2 and the graphic specification system GONZO. The graphic specification system tries to introduce graphics as a tool for programming. It is also the basis for program generation and transformation.

18 citations

01 Jan 2004
TL;DR: The need for a consumer market for mass-produced powerful integrated circuits is shown to underlie the Japanese objectives, and the basis for a Western response to the Japanese program is summarized.
Abstract: In 1981 the Japanese announced a program of research on a fifth generation of computing systems (FGCS) that will integrate advances in very large scale integration, data base systems, artificial intelligence, and the human computer interface into a new range of computers that are closer to people in their communication and knowledge processing capabilities. The proposal was a shock at first but Western research quickly reoriented to match the Japanese program. This paper considers fifth generation computing from a wide range of perspectives in order to understand the logic behind the program, its chances of success, and its technical and social impact. The need for a consumer market for mass-produced powerful integrated circuits is shown to underlie the Japanese objectives. The project is placed in a historical perspective of work in computer science and related to the preceding generations of computers. The main projects in the Japanese program are summarized and discussed in relation to similar research elsewhere. The social implications of fifth generation developments are discussed and it is suggested that they grow out of society’s needs. The role of fifth generation computers in providing a new medium for communication is analyzed. Finally, the basis for a Western response to the Japanese program is summarized.

18 citations

Proceedings ArticleDOI
02 Dec 1990
TL;DR: A parallel architecture for high-speed data compression based on textual substitution using a sliding window that combines a systolic array with trees for data broadcast and reduction and discusses layout issues arising from the tree interconnect.
Abstract: The author presents a parallel architecture for high-speed data compression based on textual substitution using a sliding window. The architecture combines a systolic array with trees for data broadcast and reduction. Compression involves two steps. First, a match generator computes in parallel the maximal matches available at each position of the input. The generator uses a systolic array to hold the dictionary, a pipelined broadcast tree to deliver each input character simultaneously to every array cell, and a reduction tree to identify the largest available match each cycle. From this information a second process selects a match sequence exactly covering the input. Decoding mirrors encoding. The tree interconnect provides through-delay proportional to the log of the dictionary size, and pipelining reduces the effective per-character processing time to a single system cycle. A system data rate of 300 Mbit/sec is easily attainable. The author discusses layout issues arising from the tree interconnect.
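The two compression steps can be mimicked sequentially in a few lines: find the longest match for each input position against the preceding window, then greedily select a token sequence that exactly covers the input. The function names, the greedy selector, and the non-overlapping-match simplification below are assumptions for illustration; the paper's contribution is that the inner comparison loop is collapsed to a single pipelined cycle per input character by the systolic dictionary array with its broadcast and reduction trees.

```python
# Sequential sketch of sliding-window textual substitution:
# step 1: maximal match per position; step 2: greedy cover of the input.

def longest_match(data, pos, window):
    """Longest (distance, length) back-reference for data[pos:] in the preceding window."""
    best = (0, 0)
    start = max(0, pos - window)
    for cand in range(start, pos):
        length = 0
        # Restrict to non-overlapping matches for simplicity
        # (real LZ77-style schemes also allow overlap).
        limit = min(len(data) - pos, pos - cand)
        while length < limit and data[cand + length] == data[pos + length]:
            length += 1
        if length > best[1]:
            best = (pos - cand, length)
    return best

def compress(data, window=64, min_len=3):
    """Greedy selection of a match/literal sequence that exactly covers the input."""
    out, pos = [], 0
    while pos < len(data):
        dist, length = longest_match(data, pos, window)
        if length >= min_len:
            out.append(("match", dist, length))    # back-reference token
            pos += length
        else:
            out.append(("lit", data[pos]))         # literal character
            pos += 1
    return out

print(compress("abracadabra abracadabra"))
```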

18 citations


Cites background from "Computational Aspects of VLSI"

  • ...It can be shown that a two-dimensional tree layout requires that some wires will be of length O(√N) [11]....

    [...]