Topic

Degree of parallelism

About: Degree of parallelism is a research topic. Over its lifetime, 1,515 publications on this topic have been published, receiving 25,546 citations.


Papers
Proceedings ArticleDOI
08 Apr 2019
TL;DR: The results show that the combination of a high-level programming model with the scalable capabilities of AWS Lambda makes it easy for end users to efficiently exploit serverless computing for the optimized and cost-effective execution of loosely-coupled tasks.
Abstract: Serverless computing has introduced unprecedented levels of scalability and parallelism for the execution of High Throughput Computing tasks. This represents a challenge and an opportunity for different scientific workloads to be adapted to upcoming programming models that simplify the usage of such platforms. In this paper we introduce a serverless model for highly-parallel file-processing applications. We also describe a middleware implementation that supports the execution of customized execution environments based on Docker images on AWS Lambda, the leading serverless computing platform. Moreover, this middleware offers tools to manage the input/output of the serverless infrastructure and the creation of HTTP endpoints in a transparent way to the user. To test the programming model proposed and the middleware, this paper describes two case studies. The first one analyzes medical images with a high degree of parallelism. The second one presents an architecture to process video keyframes. The results from both case studies are discussed and a cost analysis of the medical image architecture comparing different Cloud options is carried out. The results show that the combination of a high-level programming model with the scalable capabilities of AWS Lambda makes it easy for end users to efficiently exploit serverless computing for the optimized and cost-effective execution of loosely-coupled tasks.

18 citations
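
To make the file-processing model above concrete, the following sketch fans a list of input files out to one AWS Lambda invocation each and gathers the results. It is only a minimal illustration assuming boto3 and an already-deployed function; the function name "process-file" and the payload fields are hypothetical and do not come from the paper's middleware.

# Minimal fan-out sketch: one Lambda invocation per input file.
# Assumes boto3 is installed and a function named "process-file" is already
# deployed; the function name and payload keys are hypothetical.
import json
from concurrent.futures import ThreadPoolExecutor

import boto3

lambda_client = boto3.client("lambda")

def process_file(s3_key):
    # Invoke the function synchronously for a single input object.
    response = lambda_client.invoke(
        FunctionName="process-file",               # hypothetical function name
        InvocationType="RequestResponse",          # wait for the result
        Payload=json.dumps({"input_key": s3_key}), # hypothetical payload shape
    )
    return json.loads(response["Payload"].read())

def process_all(s3_keys, max_workers=100):
    # The degree of parallelism is bounded only by the thread pool size and
    # the account's Lambda concurrency limit.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(process_file, s3_keys))

if __name__ == "__main__":
    results = process_all(["images/scan_%03d.dcm" % i for i in range(200)])
    print(len(results), "files processed")

Synchronous invocation keeps the example short; an event-driven variant would instead trigger the function from object uploads and collect results from an output bucket.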

Journal ArticleDOI
TL;DR: The principal proposal is that messages are to be selected for reception by a receiving process solely on the basis of their type and arrival order within type; in particular, the identity of the sending process does not influence message reception.
Abstract: The design of a communicating sequential process language is presented, featuring a parallel command, communication by message passing and the use of the guarded command as a means of introducing and controlling non-determinism. The language described here incorporates a number of new proposals regarding communications between sequential processes. The principal proposal is that messages are to be selected for reception by a receiving process solely on the basis of their type and arrival order within type; in particular, the identity of the sending process does not influence message reception. This results in a greater degree of parallelism and non-determinism, which is useful to both the programmer and the language implementor. Also, a hierarchical composition regime is proposed, which gives communications significance to the organization of subprocess hierarchies; this promotes an independence of specification of program components through information hiding properties. The language implementation is described, and several aspects are of particular interest: the design of a process scheduler in a non-deterministic situation leads to some interesting optimizations, as does the design of a message handler in the case where the communicating processes can access the same memory. Finally, example programs are given to illustrate some of the novel features of the language.

18 citations
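
The reception rule above, selecting messages by type and by arrival order within type while ignoring the sender's identity, can be modelled with a small sketch. This is not the paper's language; it is a hypothetical Python illustration using one FIFO queue per message type.

# Hypothetical model of type-based reception: the receiver names only the
# message types it will accept, never the sending process.
from collections import defaultdict, deque

class Mailbox:
    def __init__(self):
        # One FIFO queue per message type preserves arrival order within type.
        self._queues = defaultdict(deque)

    def deliver(self, msg_type, payload):
        # Any process may deliver; the sender's identity is not recorded.
        self._queues[msg_type].append(payload)

    def receive(self, *accepted_types):
        # Scan the acceptable types and return a pending message of the first
        # type that has one (a real implementation would block until a
        # matching message arrives).
        for t in accepted_types:
            if self._queues[t]:
                return t, self._queues[t].popleft()
        return None

# A process willing to accept either a "request" or a "timeout" receives
# whichever is available, giving the non-determinism described above.
box = Mailbox()
box.deliver("request", {"job": 1})
box.deliver("timeout", 30)
print(box.receive("timeout", "request"))   # -> ('timeout', 30)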

Proceedings ArticleDOI
23 May 1994
TL;DR: New scalable interaction paradigms and their embodiment in a time- and space-efficient debugger with scalable performance are presented, making it easier to debug and understand message-passing programs.
Abstract: Developers of message-passing codes on massively parallel systems have to contend with difficulties that data-parallel programmers do not face, not the least of which is debuggers that do not scale with the degree of parallelism. In this paper, we present new scalable interaction paradigms and their embodiment in a time- and space-efficient debugger with scalable performance. The debugger offers scalable expression, execution, and interpretation of all debugging operations, making it easier to debug and understand message-passing programs.

18 citations

Journal ArticleDOI
TL;DR: A two-level space–time domain decomposition method is proposed for solving an inverse source problem associated with the time-dependent convection–diffusion equation in three dimensions; it eliminates the sequential steps in the optimization outer loop and the inner forward and backward time-marching processes, thus achieving a high degree of parallelism.
Abstract: As the number of processor cores on supercomputers becomes larger and larger, algorithms with a high degree of parallelism attract more attention. In this work, we propose a two-level space–time domain decomposition method for solving an inverse source problem associated with the time-dependent convection–diffusion equation in three dimensions. We introduce a mixed finite element/finite difference method and a one-level and a two-level space–time parallel domain decomposition preconditioner for the Karush–Kuhn–Tucker system induced from reformulating the inverse problem as an output least-squares optimization problem in the entire space–time domain. The new full space–time approach eliminates the sequential steps in the optimization outer loop and the inner forward and backward time-marching processes, thus achieving a high degree of parallelism. Numerical experiments validate that this approach is effective and robust for recovering unsteady moving sources. We present strong scalability results obtained on a supercomputer with more than 1,000 processors.

18 citations
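
The output least-squares reformulation mentioned above can be written schematically; the form below is a generic statement of such a problem, and the symbols, the velocity/diffusion coefficients, and the regularization term are illustrative rather than taken from the paper.

% Schematic output least-squares formulation for recovering a source f
% from observations u_obs of the state u (illustrative only).
\min_{f}\; J(f) = \frac{1}{2}\int_0^T\!\!\int_\Omega \bigl(u(x,t;f) - u_{\mathrm{obs}}(x,t)\bigr)^2 \,dx\,dt \;+\; \frac{\beta}{2}\,\|f\|^2
\quad \text{subject to} \quad
\partial_t u + \mathbf{v}\cdot\nabla u - \nabla\cdot(\kappa\nabla u) = f \quad \text{in } \Omega\times(0,T).

The Karush–Kuhn–Tucker system referred to in the abstract is the first-order optimality system of such a problem; it couples the state, adjoint, and source equations over the whole space–time domain, which is why a full space–time preconditioner can expose parallelism across time steps as well as across space.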

Journal ArticleDOI
TL;DR: In this paper, the authors describe the application of the parallel integration evaluation model (PIEM) in an industrial case study and propose a design solution that obeys the tradeoff that parallelism introduces into the networked supply operating system: while direct-production/supply time decreases, the overhead of interaction time among the parties, T, increases.
Abstract: This paper describes the application of the parallel integration evaluation model (PIEM) in an industrial case study. The PIEM model is based on modelling the interactions among supply network parties. It generates the parallel configuration of production and supply servers yielding the minimum total production and supply time/cost for the system, Φ. The design solution recommended by the model obeys the tradeoff that parallelism introduces into the networked supply operating system: while direct-production/supply time Π decreases, the overhead of interaction time among the networked parties, T, increases. The interaction time comprises two delay generating factors, limiting the implementation of massively parallel supply networks: the delay due to communication, negotiation, and coordination among the parties, K, and the congestion delay Γ at shared resources in the supply network. These two types of delay factors are positively correlated with the network's degree of parallelism, Ψ, and they affect inve...

18 citations
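
The trade-off described above can be summarized in one schematic relation. The decomposition follows the symbols in the abstract; the dependence on the degree of parallelism Ψ (Π decreasing, K and Γ increasing) is indicative only.

% Schematic decomposition of total production-and-supply time/cost
% (symbols from the abstract; functional forms indicative only).
\Phi(\Psi) \;=\; \Pi(\Psi) + T(\Psi), \qquad T(\Psi) \;=\; K(\Psi) + \Gamma(\Psi)

With Π decreasing in Ψ while K and Γ grow with Ψ, the model searches for the parallel configuration whose Ψ minimizes Φ.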


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (85% related)
Scheduling (computing): 78.6K papers, 1.3M citations (83% related)
Network packet: 159.7K papers, 2.2M citations (80% related)
Web service: 57.6K papers, 989K citations (80% related)
Quality of service: 77.1K papers, 996.6K citations (79% related)
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    47
2020    48
2019    52
2018    70
2017    75