scispace - formally typeset
Topic

Degree of parallelism

About: Degree of parallelism is a research topic. Over its lifetime, 1,515 publications have been published within this topic, receiving 25,546 citations.


Papers
Posted Content
TL;DR: PACMAN, a parallel database recovery mechanism that is specifically designed for lightweight, coarse-grained transaction-level logging, is proposed and can significantly reduce recovery time without compromising the efficiency of transaction processing.
Abstract: Main-memory database management systems (DBMSs) can achieve excellent performance when processing massive volumes of online transactions on modern multi-core machines. However, existing durability schemes, namely tuple-level and transaction-level logging-and-recovery mechanisms, either degrade the performance of transaction processing or slow down the process of failure recovery. In this paper, we show that, by exploiting application semantics, it is possible to achieve speedy failure recovery without introducing any costly logging overhead to the execution of concurrent transactions. We propose PACMAN, a parallel database recovery mechanism that is specifically designed for lightweight, coarse-grained transaction-level logging. PACMAN leverages a combination of static and dynamic analyses to parallelize the log recovery: at compile time, PACMAN decomposes stored procedures by carefully analyzing dependencies within and across programs; at recovery time, PACMAN exploits the availability of the runtime parameter values to attain an execution schedule with a high degree of parallelism. As a result, recovery performance is markedly improved. We evaluated PACMAN in a fully fledged main-memory DBMS running on a 40-core machine. Compared to several state-of-the-art database recovery mechanisms, PACMAN can significantly reduce recovery time without compromising the efficiency of transaction processing.

16 citations
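The parallel-replay idea behind PACMAN can be illustrated with a minimal sketch (not the paper's implementation; the log format and the `schedule_waves`/`replay` helpers are hypothetical): transactions whose logged key sets are disjoint are batched into "waves" that replay concurrently, while conflicting transactions keep their logged order.

```python
from concurrent.futures import ThreadPoolExecutor

def schedule_waves(log):
    """Batch log entries (txn_id, key_set, redo_fn) into waves of
    mutually independent transactions. Each transaction is placed just
    after the last wave it conflicts with, so conflicting transactions
    replay in their logged order."""
    waves, wave_keys = [], []
    for entry in log:
        _, keys, _ = entry
        last_conflict = -1
        for i, ks in enumerate(wave_keys):
            if ks & keys:
                last_conflict = i
        target = last_conflict + 1
        if target == len(waves):
            waves.append([])
            wave_keys.append(set())
        waves[target].append(entry)
        wave_keys[target] |= keys
    return waves

def replay(log, db):
    """Replay each wave's transactions concurrently; waves run in order."""
    for wave in schedule_waves(log):
        with ThreadPoolExecutor() as pool:
            list(pool.map(lambda e: e[2](db), wave))
```

Here the paper's static/dynamic analysis is reduced to simple key-set intersection; the real system derives dependences from stored-procedure structure and runtime parameter values.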

Patent
Raul E. Silvera1, Priya Unnikrishnan1
18 Sep 2007
TL;DR: In this paper, the authors propose a mechanism for folding all the data dependencies in a loop into a single, conservative dependence, which leads to one pair of synchronization primitives per loop.
Abstract: A mechanism for folding all the data dependences in a loop into a single, conservative dependence. This mechanism leads to one pair of synchronization primitives per loop, requires no complicated multi-stage compile-time analysis, and considers only the data dependence information in the loop. The low synchronization cost balances the loss in parallelism due to the reduced overlap between iterations. Additionally, a novel scheme is presented to implement the synchronization required to enforce data dependences in a DOACROSS loop. The synchronization is based on an iteration vector, which identifies a spatial position in the iteration space of the loop. Each iteration executing in parallel maintains its own iteration vector for synchronization, updating its position in the iteration space. Because no sequential updates to the synchronization variable exist, this method exploits a greater degree of parallelism.

16 citations
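The single post/wait pair per loop described above can be sketched as follows (an illustrative Python rendering, not the patent's implementation; `IterationSync` and `doacross` are invented names). All cross-iteration dependences are folded into one conservative dependence on the previous iteration:

```python
import threading

class IterationSync:
    """One WAIT/POST pair per loop: a shared counter records the
    highest iteration whose dependent data has been produced."""
    def __init__(self):
        self._posted = -1
        self._cv = threading.Condition()

    def wait_for(self, i):
        # WAIT: block until iteration i has posted (i == -1 passes).
        with self._cv:
            self._cv.wait_for(lambda: self._posted >= i)

    def post(self, i):
        # POST: announce that iteration i's dependent data is ready.
        with self._cv:
            self._posted = max(self._posted, i)
            self._cv.notify_all()

def doacross(n):
    """Run a loop with the carried dependence a[i] += a[i-1] across
    parallel threads, serialized only around the folded dependence."""
    sync = IterationSync()
    a = list(range(n))
    def body(i):
        # ...independent work for iteration i could overlap here...
        sync.wait_for(i - 1)      # single conservative WAIT
        if i > 0:
            a[i] += a[i - 1]      # the folded dependence
        sync.post(i)              # single POST
    threads = [threading.Thread(target=body, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return a
```

Any work placed before the WAIT overlaps freely across iterations; only the folded dependence region executes in iteration order.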

Journal ArticleDOI
23 Jun 1997
TL;DR: It is shown that, due to diminishing returns from further increases in ILP, multimedia applications will benefit more from additionally exploiting parallelism at the thread level, and that simultaneous multithreading (SMT), a novel architectural approach combining VLIW techniques with parallel processing of threads, can efficiently be used to further increase the performance of typical multimedia workloads.
Abstract: A number of recently published DSPs and multimedia processors emphasize Very Long Instruction Word (VLIW) architectures to achieve the flexibility, processing power, and high-level-language programmability needed for future multimedia applications. In this paper we show that exclusive exploitation of instruction-level parallelism decreases in efficiency as the degree of parallelism increases. This is mainly caused by algorithm characteristics, VLSI design constraints, and compiler restrictions. We discuss selected aspects of these fields and possible solutions to upcoming bottlenecks from a practical point of view.

16 citations
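The diminishing-returns argument can be made concrete with a toy model (the ILP figure is an assumption for illustration, not data from the paper): if a workload exposes a fixed average ILP, widening a VLIW machine's issue slots leaves a growing fraction idle, while adding hardware threads (SMT) can fill them again:

```python
AVAILABLE_ILP = 4.0  # assumed average independent instructions per cycle

def utilization(issue_width, threads=1):
    """Fraction of VLIW issue slots filled when `threads` hardware
    threads each expose AVAILABLE_ILP independent instructions."""
    return min(AVAILABLE_ILP * threads, issue_width) / issue_width

for width in (2, 4, 8, 16):
    print(width, utilization(width))  # utilization falls once width exceeds ILP

print(utilization(8, threads=2))      # two SMT threads refill an 8-wide machine
```

The model ignores cache and issue-port effects, but it captures why a wider machine alone stops paying off once issue width exceeds the ILP the algorithms expose.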

Proceedings ArticleDOI
11 Apr 1988
TL;DR: A constrained maximum-likelihood estimator is derived by incorporating a rotationally invariant roughness penalty proposed by I.J. Good (1981) into the likelihood functional, which leads to a set of nonlinear differential equations the solution of which is a spline-smoothing of the data.
Abstract: A constrained maximum-likelihood estimator is derived by incorporating a rotationally invariant roughness penalty proposed by I.J. Good (1981) into the likelihood functional. This leads to a set of nonlinear differential equations the solution of which is a spline-smoothing of the data. The nonlinear partial differential equations are mapped onto a grid via finite differences, and it is shown that the resulting computations possess a high degree of parallelism as well as locality in the data passage, which allows an efficient implementation on a 48-by-48 mesh-connected array of NCR GAPP processors. The smooth reconstruction of the intensity functions of Poisson point processes is demonstrated in two dimensions.

16 citations
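The locality that makes this computation map well onto a mesh-connected array can be sketched with a Jacobi-style relaxation sweep (illustrative only; this is not the paper's exact penalized-likelihood equation): each grid point is updated from its four nearest neighbours, so every interior point in a sweep can be computed in parallel:

```python
def jacobi_sweep(grid, lam=0.5):
    """One relaxation sweep: blend each interior point with the mean
    of its four neighbours (lam weights the smoothness term). Every
    update reads only nearest neighbours, so all interior points can
    be computed simultaneously on a mesh of processors."""
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, m - 1):
            avg = (grid[i - 1][j] + grid[i + 1][j] +
                   grid[i][j - 1] + grid[i][j + 1]) / 4.0
            new[i][j] = (1 - lam) * grid[i][j] + lam * avg
    return new
```

On a mesh array such as the GAPP, each processor would hold one (or a few) grid points and exchange boundary values only with its four neighbours each sweep.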

ReportDOI
01 Oct 1990
TL;DR: A formal mathematical framework which unifies the existing loop transformation techniques is given, and the schedules of a loop transformation are classified into three classes: uniform, subdomain-variant, and statement-variant.
Abstract: This paper presents new loop transformation techniques that can extract more parallelism from a class of programs than existing techniques. A formal mathematical framework which unifies the existing loop transformation techniques is given. We classify the schedules of a loop transformation into three classes: uniform, subdomain-variant, and statement-variant. New algorithms for generating these schedules are given. Viewed in terms of the degree of parallelism gained by a loop transformation, the schedules can also be classified as single-sequential-level, multiple-sequential-level, and mixed schedules. We describe iterative and recursive algorithms to obtain multiple-sequential-level and mixed schedules, respectively, based on the algorithms for single-sequential-level schedules.

16 citations
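A "uniform" schedule in the paper's sense can be illustrated with the classic wavefront (hyperplane) example, sketched here under assumed dependences (the function name and loop bounds are hypothetical): assigning iteration (i, j) the time t = i + j respects dependences from (i-1, j) and (i, j-1) while letting each anti-diagonal execute in parallel:

```python
def wavefront_schedule(n, m):
    """Uniform schedule t(i, j) = i + j for an n-by-m iteration space
    with dependences (i-1, j) -> (i, j) and (i, j-1) -> (i, j):
    every iteration within a returned wave can run in parallel,
    and waves execute in increasing t order."""
    waves = {}
    for i in range(n):
        for j in range(m):
            waves.setdefault(i + j, []).append((i, j))
    return [waves[t] for t in sorted(waves)]
```

The schedule is "uniform" because every iteration uses the same affine time function; the subdomain- and statement-variant classes in the paper allow the function to differ across regions of the iteration space or across statements.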


Network Information
Related Topics (5)
Server
79.5K papers, 1.4M citations
85% related
Scheduling (computing)
78.6K papers, 1.3M citations
83% related
Network packet
159.7K papers, 2.2M citations
80% related
Web service
57.6K papers, 989K citations
80% related
Quality of service
77.1K papers, 996.6K citations
79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    47
2020    48
2019    52
2018    70
2017    75