Topic

Degree of parallelism

About: Degree of parallelism is a research topic. Over its lifetime, 1,515 publications have been published within this topic, receiving 25,546 citations.


Papers
Proceedings ArticleDOI
27 Jun 2014
TL;DR: This paper presents a formal model for BPMN processes in terms of Labelled Transition Systems, obtained through process algebra encodings, and proposes an approach for automatically computing the degree of parallelism using model checking techniques and dichotomic search.
Abstract: A business process is a set of structured, related activities that aims at fulfilling a specific organizational goal for a customer or market. An important metric when developing a business process is its degree of parallelism, i.e., the maximum number of tasks that are executable in parallel in that process. The degree of parallelism determines the peak demand on tasks, providing a valuable guide for the problem of resource allocation in business processes. In this paper, we investigate how to automatically measure the degree of parallelism for business processes, described using the BPMN standard notation. We first present a formal model for BPMN processes in terms of Labelled Transition Systems, which are obtained through process algebra encodings. We then propose an approach for automatically computing the degree of parallelism by using model checking techniques and dichotomic search. We implemented a tool for automating this check and we applied it successfully to more than one hundred BPMN processes.
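To make the dichotomic-search idea concrete, here is a minimal Python sketch (not the authors' tool): the reachable states of a small, made-up fork/join process stand in for the Labelled Transition System, a simple reachability query stands in for the model-checking step, and a binary search narrows down the maximum number of simultaneously active tasks.

```python
# Minimal sketch of degree-of-parallelism computation via state exploration
# and dichotomic (binary) search. The fork/join process below is hypothetical.
from collections import deque

def reachable_states(initial, successors):
    """Breadth-first exploration of the finite state space."""
    seen, queue = {initial}, deque([initial])
    while queue:
        state = queue.popleft()
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def exceeds(k, states):
    """Stand-in for the model-checking query: does some reachable state
    run more than k tasks in parallel?"""
    return any(len(active) > k for active in states)

def degree_of_parallelism(initial, successors, upper_bound):
    """Dichotomic search for the smallest k that is never exceeded."""
    states = reachable_states(initial, successors)
    lo, hi = 0, upper_bound
    while lo < hi:
        mid = (lo + hi) // 2
        if exceeds(mid, states):
            lo = mid + 1          # some state runs more than mid tasks
        else:
            hi = mid              # mid tasks are never exceeded
    return lo

def successors(active):
    """Hypothetical fork/join process: from the initial (empty) state a fork
    starts tasks a, b and c; afterwards any running task may complete."""
    if not active:
        return [frozenset({"a", "b", "c"})]
    return [active - {t} for t in active]

print(degree_of_parallelism(frozenset(), successors, upper_bound=16))  # 3
```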

13 citations

Proceedings ArticleDOI
01 Sep 1993
TL;DR: The paper discusses the value of abstraction and semantic richness, performance issues, portability, potential degree of parallelism, data distribution, process creation, communication and synchronization, frequency of program faults, and clarity of expression, using the BLAS as a basis for comparison with traditional supercomputing languages.
Abstract: Although multicomputers are becoming feasible for solving large problems, they are difficult to program: Extraction of parallelism from scalar languages is possible, but limited. Parallelism in algorithm design is difficult for those who think in von Neumann terms. Portability of programs and programming skills can only be achieved by hiding the underlying machine architecture from the user, yet this may impact performance on a specific host. APL, J, and other applicative array languages with adequately rich semantics can do much to solve these problems. The paper discusses the value of abstraction and semantic richness, performance issues, portability, potential degree of parallelism, data distribution, process creation, communication and synchronization, frequency of program faults, and clarity of expression. The BLAS are used as a basis for comparison with traditional supercomputing languages.
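As a rough illustration of the argument (using NumPy as a stand-in for APL/J array semantics, not anything from the paper), the same BLAS-style AXPY update can be written as a scalar loop, which fixes an evaluation order, or as a whole-array expression, which leaves the degree of parallelism and data distribution to the implementation:

```python
# Illustration only: scalar loop vs. whole-array form of the AXPY update.
import numpy as np

def axpy_scalar(a, x, y):
    """Element-by-element update y[i] = a * x[i] + y[i], in a fixed order."""
    for i in range(len(x)):
        y[i] = a * x[i] + y[i]
    return y

def axpy_array(a, x, y):
    """Whole-array update: one expression, no prescribed iteration order."""
    return a * x + y

x = np.arange(4, dtype=float)
y = np.ones(4)
assert np.allclose(axpy_scalar(2.0, x, y.copy()), axpy_array(2.0, x, y))
```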

13 citations

Patent
19 Feb 2014
TL;DR: In this patent, the authors propose a distributed data stream processing method and system in which the degree of parallelism for a designated operation is determined from the receiving rate and the processing rate of the target logical tasks that working nodes receive from a main node.
Abstract: The invention provides a distributed data stream processing method and system. The method includes the following steps: the degree of parallelism for a designated operation is determined from the receiving rate and the processing rate of the target logical tasks that working nodes receive from a main node, where the receiving rate is the rate at which logical tasks arrive for the designated operation and the processing rate is the rate at which the designated operation is performed on them; the target logical tasks are merged into physical tasks according to the degree of parallelism, with the number of physical tasks equal to the degree of parallelism; and the designated operation is executed on the physical tasks in parallel. Because the degree of parallelism of each operation is determined dynamically from the receiving rate and processing rate of its logical tasks, the method solves the prior-art problem that a fixed degree of parallelism cannot adapt to the time-varying characteristics of data streams and to external load changes, which wastes system resources or delays the data streams.
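A hedged sketch of the rate rule described above, with names and bounds of my own rather than the patent's: choose the degree of parallelism so that the combined processing rate of the physical tasks covers the rate at which logical tasks arrive, and re-evaluate it as the rates change.

```python
# Sketch only: rate-based choice of the degree of parallelism for one operation.
import math

def degree_of_parallelism(receiving_rate, processing_rate, max_parallelism=64):
    """receiving_rate: logical tasks arriving per second for the operation.
    processing_rate: logical tasks one physical task can handle per second.
    Returns the number of physical tasks to run in parallel (bounds assumed)."""
    if receiving_rate <= 0:
        return 1                                   # keep one idle task around
    needed = math.ceil(receiving_rate / processing_rate)
    return max(1, min(needed, max_parallelism))    # clamp to configured bounds

# Example: 1200 tasks/s arriving, each worker handles 250 tasks/s -> 5 workers.
print(degree_of_parallelism(1200, 250))  # 5
```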

13 citations

Proceedings ArticleDOI
12 Oct 2014
TL;DR: This paper introduces AdaPNet, a run-time system that executes streaming applications, modeled as process networks, efficiently on platforms with dynamic resource allocation, and shows that it outperforms comparable run-time systems that do not adapt the degree of parallelism.
Abstract: A widely considered strategy to prevent interference issues on multi-processor systems is to isolate the execution of the individual applications by running each of them on a dedicated virtual guest machine. The amount of computing power available to a single application, however, depends on the other applications running on the system and may change over time. A promising approach to maximize the performance under such conditions is to adapt the application's degree of parallelism when the resources allocated to the application are changed. This enables an application to exploit no more parallelism than required, thereby reducing inter-process communication and scheduling overheads. In this paper, we introduce AdaPNet, a run-time system to execute streaming applications, which are modeled as process networks, efficiently on platforms with dynamic resource allocation. AdaPNet responds to changes in the available resources by first calculating a process network that maximizes the performance of the application on the new resources. Then, AdaPNet transparently transforms the application into the alternative network without discarding the program state. Targeting two many-core systems, we demonstrate that AdaPNet outperforms comparable run-time systems that do not adapt the degree of parallelism, in terms of speed-up and memory usage.
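The following is a minimal illustration of the general idea, not AdaPNet itself: a streaming stage whose pool of worker processes is resized between batches to track an externally changing core allocation, while the application-level state (the remaining stream and accumulated results) survives each reconfiguration. AdaPNet additionally transforms the process network itself, which this sketch does not attempt.

```python
# Sketch only: adapt the degree of parallelism of a streaming stage to a
# changing core allocation, preserving application state across changes.
from concurrent.futures import ProcessPoolExecutor

def stage(item):
    """Hypothetical per-item work of one process-network actor."""
    return item * item

def run_stream(stream, allocation_per_batch, batch_size=8):
    results, batch = [], []
    allocations = iter(allocation_per_batch)
    cores = next(allocations)                      # current core allocation
    for item in stream:
        batch.append(item)
        if len(batch) == batch_size:
            with ProcessPoolExecutor(max_workers=cores) as pool:
                results.extend(pool.map(stage, batch))
            batch = []
            cores = next(allocations, cores)       # allocation may have changed
    if batch:                                      # drain the final partial batch
        with ProcessPoolExecutor(max_workers=cores) as pool:
            results.extend(pool.map(stage, batch))
    return results

if __name__ == "__main__":                         # needed for process pools
    print(run_stream(range(20), allocation_per_batch=[4, 2, 1]))
```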

13 citations

Journal ArticleDOI
TL;DR: The techniques presented in this paper, used in combination with prior work on reducing the height of data dependences, provide a comprehensive approach to accelerating loops with conditional exits.
Abstract: The performance of applications executing on processors with instruction level parallelism is often limited by control and data dependences. Performance bottlenecks caused by dependences can frequently be eliminated through transformations which reduce the height of critical paths through the program. The utility of these techniques can be demonstrated in an increasingly broad range of important situations. This paper focuses on the height reduction of control recurrences within loops with data dependent exits. Loops with exits are transformed so as to alleviate performance bottlenecks resulting from control dependences. A compilation approach to effect these transformations is described. The techniques presented in this paper, used in combination with prior work on reducing the height of data dependences, provide a comprehensive approach to accelerating loops with conditional exits. In many cases, loops with conditional exits provide a degree of parallelism traditionally associated with vectorization. Multiple iterations of a loop can be retired in a single cycle on a processor with adequate instruction level parallelism, at no cost in code redundancy. In more difficult cases, height reduction requires redundant computation or may not be feasible.
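As a purely conceptual, source-level sketch (the paper's transformations operate at the instruction level for wide-issue processors), the following shows a loop with a data-dependent exit and an unrolled form in which the exit tests of several iterations are evaluated together and OR-reduced, shortening the chain of control dependences per retired iteration:

```python
# Conceptual sketch only: combining the exit tests of several iterations.

def find_first(items, key):
    """Original loop: one exit test per iteration, each depending on the last."""
    i = 0
    while i < len(items):
        if items[i] == key:        # data-dependent exit
            return i
        i += 1
    return -1

def find_first_unrolled(items, key, width=4):
    """Unrolled form: 'width' exit tests are formed per trip and combined."""
    i, n = 0, len(items)
    while i + width <= n:
        hits = [items[i + j] == key for j in range(width)]  # independent tests
        if any(hits):              # one combined (OR-reduced) exit condition
            return i + hits.index(True)
        i += width                 # all 'width' original iterations retire here
    while i < n:                   # remainder loop for the leftover items
        if items[i] == key:
            return i
        i += 1
    return -1

data = [7, 3, 9, 4, 4, 8, 1, 5, 2]
assert find_first(data, 8) == find_first_unrolled(data, 8) == 5
```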

13 citations


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 85% related
Scheduling (computing): 78.6K papers, 1.3M citations, 83% related
Network packet: 159.7K papers, 2.2M citations, 80% related
Web service: 57.6K papers, 989K citations, 80% related
Quality of service: 77.1K papers, 996.6K citations, 79% related
Performance
Metrics
No. of papers in the topic in previous years
Year    Papers
2022    1
2021    47
2020    48
2019    52
2018    70
2017    75