Book

Computational Aspects of VLSI

01 Jan 1984
About: This book was published on 1984-01-01 and is currently open access. It has received 862 citations to date. It focuses on the topic of very-large-scale integration (VLSI).
Citations
Book Chapter
01 Jan 1990
TL;DR: The authors review results on embedding network and program structures into popular parallel computer architectures. Such embeddings describe efficient methods for simulating an algorithm designed for one type of parallel machine on a different network structure, and techniques for distributing data and program variables to make optimum use of all available processors.
Abstract: Embedding one Interconnection Network in Another. We review results on embedding network and program structures into popular parallel computer architectures. Such embeddings can be viewed as high level descriptions of efficient methods to simulate an algorithm designed for one type of parallel machine on a different network structure and/or techniques to distribute data/program variables to achieve optimum use of all available processors.

157 citations
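As a concrete instance of such an embedding (an illustrative classic, not necessarily drawn from this chapter): a ring of 2^d nodes embeds into the d-dimensional hypercube with dilation 1 via the binary reflected Gray code, so that neighbouring ring nodes map to hypercube nodes connected by an edge.

```python
def ring_in_hypercube(d):
    """Embed a ring of 2**d nodes into a d-dimensional hypercube with
    dilation 1 using the binary reflected Gray code: consecutive ring
    positions map to hypercube labels differing in exactly one bit."""
    return [i ^ (i >> 1) for i in range(2 ** d)]
```

Because the reflected Gray code is cyclic, the last and first labels also differ in a single bit, so the whole ring, wrap-around edge included, maps onto hypercube edges.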

Journal Article
TL;DR: Traditional algorithms using hardware division and square root are replaced with special-purpose CORDIC algorithms for computing the vector rotations and inverse tangents required in the Singular Value Decomposition.

156 citations
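CORDIC computes vector rotations using only additions and halvings (shifts in hardware), which is what makes it attractive for the VLSI arrays discussed above. A minimal sketch of CORDIC rotation mode (illustrative only, not the cited paper's implementation):

```python
import math

def cordic_rotate(x, y, theta, n=32):
    """Rotate (x, y) by angle theta (radians, |theta| < ~1.74) with
    CORDIC: n micro-rotations by atan(2**-i), each needing only
    add/subtract and multiply-by-power-of-two (a shift in hardware)."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Constant scale factor accumulated by the micro-rotations.
    k = 1.0
    for i in range(n):
        k /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = theta
    for i, a in enumerate(angles):
        d = 1.0 if z >= 0 else -1.0   # steer residual angle toward 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * a
    return x * k, y * k
```

The same iteration run in "vectoring" mode (steering y toward 0 instead of z) yields the inverse tangents mentioned in the summary.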

Journal Article
TL;DR: The generalised Gibbs sampler provides a framework encompassing a class of recently proposed tricks such as parameter expansion and reparameterisation and is applied to Bayesian inference problems for nonlinear state-space models, ordinal data and stochastic differential equations with discrete observations.
Abstract: Although Monte Carlo methods have frequently been applied with success, indiscriminate use of Markov chain Monte Carlo leads to unsatisfactory performances in numerous applications. We present a generalised version of the Gibbs sampler that is based on conditional moves along the traces of groups of transformations in the sample space. We explore its connection with the multigrid Monte Carlo method and its use in designing more efficient samplers. The generalised Gibbs sampler provides a framework encompassing a class of recently proposed tricks such as parameter expansion and reparameterisation. To illustrate, we apply this new method to Bayesian inference problems for nonlinear state-space models, ordinal data and stochastic differential equations with discrete observations.

153 citations


Cites background from "Computational Aspects of VLSI"

  • ...See Kirkpatrick et al., 1983; Ullman, 1984); engineering (Geman & Geman, 1984; Liu & Chen, 1995); statistics (e.g., bootstrap, data augmentation, multiple imputation, etc.), and other fields. Despite their success, the standard Metropolis and Gibbs sampling schemes (Gelfand & Smith, 1990; Metropolis...)...

    [...]
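For context, the generalised sampler above extends the standard Gibbs scheme of alternating draws from full conditionals. A minimal sketch of plain Gibbs sampling for a bivariate normal (illustrative baseline only, not the paper's generalised method):

```python
import math
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Standard Gibbs sampler for a bivariate normal with zero means,
    unit variances, and correlation rho: alternately draw each
    coordinate from its full conditional N(rho * other, 1 - rho**2)."""
    rng = random.Random(seed)
    sd = math.sqrt(1.0 - rho * rho)
    x, y = 0.0, 0.0
    samples = []
    for t in range(burn_in + n_samples):
        x = rng.gauss(rho * y, sd)  # x | y ~ N(rho*y, 1 - rho^2)
        y = rng.gauss(rho * x, sd)  # y | x ~ N(rho*x, 1 - rho^2)
        if t >= burn_in:
            samples.append((x, y))
    return samples
```

When rho is close to 1 these coordinate-wise moves mix slowly, which is exactly the kind of situation the group-transformation moves of the generalised sampler are designed to help with.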

01 Jan 1993
TL;DR: The authors describe an efficient approach to partitioning unstructured meshes that occur naturally in the finite element and finite difference methods, making use of the underlying geometric structure of a given mesh and finding a provably good partition in randomized O(n) time.
Abstract: This paper describes an efficient approach to partitioning unstructured meshes that occur naturally in the finite element and finite difference methods. The approach makes use of the underlying geometric structure of a given mesh and finds a provably good partition in random O(n) time. It applies to meshes in both two and three dimensions. The new method has applications in efficient sequential and parallel algorithms for large-scale problems in scientific computing. This is an overview paper written with emphasis on the algorithmic aspects of the approach. Many detailed proofs can be found in companion papers.

152 citations
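A much-simplified sketch of the geometric idea: split mesh vertices by a random line through their centroid. The cited algorithm is far more sophisticated (it uses stereographic projection and sphere-based circle separators to obtain provable quality guarantees), but it shares the principle of exploiting geometry rather than searching the graph.

```python
import math
import random

def random_line_partition(points, seed=0):
    """Toy geometric partition: split 2-D points into two halves by a
    line in a random direction through their centroid. Illustrative
    only; a stand-in for the provably good separators of the paper."""
    rng = random.Random(seed)
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    theta = rng.uniform(0.0, math.pi)
    nx, ny = math.cos(theta), math.sin(theta)  # unit normal of the line
    left = [p for p in points if (p[0] - cx) * nx + (p[1] - cy) * ny < 0]
    right = [p for p in points if (p[0] - cx) * nx + (p[1] - cy) * ny >= 0]
    return left, right
```

For a well-shaped mesh, a line through the centroid tends to give a balanced split; the paper's contribution is making such balance and small cut size provable.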

Journal Article
TL;DR: It is believed that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed, raising the question of whether the connectivity requirements can be avoided by slightly lowering the authors' expectations.
Abstract: Achieving processor cooperation in the presence of faults is a major problem in distributed systems. Popular paradigms such as Byzantine agreement have been studied principally in the context of a complete network. Indeed, Dolev [J. Algorithms, 3 (1982), pp. 14–30] and Hadzilacos [Issues of Fault Tolerance in Concurrent Computations, Ph.D. thesis, Harvard University, Cambridge, MA, 1984] have shown that $\Omega (t)$ connectivity is necessary if the requirement is that all nonfaulty processors decide unanimously, where t is the number of faults to be tolerated. We believe that in foreseeable technologies the number of faults will grow with the size of the network while the degree will remain practically fixed. We therefore raise the question of whether it is possible to avoid the connectivity requirements by slightly lowering our expectations. In many practical situations we may be willing to “lose” some correct processors and settle for cooperation between the vast majority of the processors. Thus motivated, ...

152 citations