scispace - formally typeset
Topic

Degree of parallelism

About: Degree of parallelism is a research topic. Over the lifetime, 1515 publications have been published within this topic receiving 25546 citations.


Papers
Journal Article
TL;DR: In this paper, a parallel version of AMIGO (Advanced Multidimensional Interval Analysis Global Optimization) algorithm is proposed to solve very hard global optimization problems in a multiprocessing environment.
Abstract: Interval Global Optimization based on the Branch and Bound (B&B) technique is a standard for searching for an optimal solution in the scope of continuous and discrete Global Optimization. It iteratively creates a search tree where each node represents a problem that is decomposed into several subproblems, such that a feasible solution can be found by solving this set of subproblems. The enormous computational power needed to solve most B&B Global Optimization problems and their high degree of parallelism make them suitable candidates for a multiprocessing environment. This work evaluates a parallel version of the AMIGO (Advanced Multidimensional Interval Analysis Global Optimization) algorithm. AMIGO makes efficient use of all the information available in continuous differentiable problems to reduce the search domain and to accelerate the search. Our parallel version takes advantage of the capabilities offered by Charm++. Preliminary results show that our proposal is a good candidate for solving very hard global optimization problems.

3 citations
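The interval B&B scheme the abstract describes (maintain a tree of boxes, bound each box with interval arithmetic, prune boxes whose lower bound exceeds the incumbent, and bound independent subproblems in parallel) can be sketched as follows. This is a minimal illustrative version, not AMIGO itself; the objective f(x) = x*x - 2*x, its hand-written interval extension, and the thread-pool parallelization are all assumptions made for the sketch:

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def f_interval(lo, hi):
    """Guaranteed enclosure [L, U] of f(x) = x*x - 2*x over [lo, hi]."""
    if lo <= 0.0 <= hi:
        sq = (0.0, max(lo * lo, hi * hi))      # enclosure of x^2 when 0 is inside
    else:
        sq = (min(lo * lo, hi * hi), max(lo * lo, hi * hi))
    lin = (-2.0 * hi, -2.0 * lo)               # enclosure of -2x
    return sq[0] + lin[0], sq[1] + lin[1]

def parallel_bb(lo, hi, tol=1e-6, workers=4):
    """Best-first interval branch and bound for the global minimum of f."""
    lb0, ub0 = f_interval(lo, hi)
    best_upper = ub0                            # incumbent upper bound on the minimum
    heap = [(lb0, lo, hi)]                      # search-tree nodes keyed by lower bound
    with ThreadPoolExecutor(max_workers=workers) as pool:
        while heap:
            lb, a, b = heapq.heappop(heap)
            if lb > best_upper:
                continue                        # prune: box cannot contain the minimum
            if b - a < tol:
                return lb                       # tight enclosure of the global minimum
            mid = 0.5 * (a + b)
            boxes = [(a, mid), (mid, b)]
            # Bound both subproblems in parallel: the independence of
            # subproblems is what gives B&B its high degree of parallelism.
            for (ca, cb), (clb, cub) in zip(boxes, pool.map(lambda iv: f_interval(*iv), boxes)):
                best_upper = min(best_upper, cub)
                if clb <= best_upper:
                    heapq.heappush(heap, (clb, ca, cb))
    return best_upper
```

Best-first selection pops the box with the smallest lower bound, so the first sufficiently narrow box encloses the global minimum (here f attains -1 at x = 1). In a real distributed setting such as Charm++, the work heap would itself be distributed rather than shared.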

Journal ArticleDOI
TL;DR: Arbor, a large-scale graph data processing system for public opinion analysis, built on a new graph data organization format for representing social relationships; it outperforms state-of-the-art systems.
Abstract: Trust in social networks draws increasing attention from both academia and industry. Public opinion analysis is a direct way to increase trust in a social network. Because public opinion analysis can be expressed naturally as graph algorithms, and graph data are the default data organization mechanism in large-scale social network service applications, more and more research applies graph processing systems to public opinion analysis. As data volumes grow rapidly, distributed graph systems have been introduced to handle large-scale public opinion analysis. Most graph algorithms require many iterations over the data, so the synchronization requirements between successive iterations can severely jeopardize the effectiveness of parallel operations, making data aggregation and analysis slower. In this paper, we propose a large-scale graph data processing system to address these issues, which includes a graph data processing model, Arbor. Arbor develops a new graph data organization format to represent social relationships; the format not only saves storage space but also accelerates graph data processing operations. Furthermore, Arbor substitutes non-time-constrained control message transmissions for time-constrained synchronization operations to increase the degree of parallelism. On top of the system, we implement two of the most frequently used graph applications: shortest path and PageRank. To evaluate the system, we compare Arbor with other graph processing systems using large-scale experimental graph data, and the results show that it outperforms the state-of-the-art systems. Copyright © 2014 John Wiley & Sons, Ltd.

3 citations
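The abstract's key idea, replacing barrier-synchronized iterations with control messages so that vertices proceed independently, is the same idea behind push-based asynchronous PageRank. The sketch below is illustrative and is not Arbor's code; the residual-queue formulation, damping factor, and tolerance are assumptions:

```python
from collections import deque

def async_pagerank(adj, d=0.85, tol=1e-10):
    """Push-based asynchronous PageRank over an adjacency dict.

    Each vertex is re-processed whenever enough residual mass accumulates,
    in whatever order work items arrive; there is no global barrier
    between iterations, which is what raises the degree of parallelism
    when the work queue is distributed.
    """
    rank = {v: 0.0 for v in adj}
    residual = {v: (1.0 - d) for v in adj}   # unpropagated probability mass
    queue = deque(adj)
    in_queue = set(adj)
    while queue:
        v = queue.popleft()
        in_queue.discard(v)
        r, residual[v] = residual[v], 0.0
        rank[v] += r                          # absorb the residual into the rank
        out = adj[v]
        if not out:
            continue                          # dangling node: mass is dropped here
        share = d * r / len(out)
        for w in out:                         # push mass to successors
            residual[w] += share
            if residual[w] > tol and w not in in_queue:
                queue.append(w)
                in_queue.add(w)
    return rank
```

On a cycle with no dangling nodes every vertex converges to rank 1.0, matching the synchronous fixed point, even though updates happen in arbitrary order.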

Proceedings ArticleDOI
23 Sep 2013
TL;DR: This paper evaluates a moldable job model for LHCb grid jobs, where the main challenge is determining the best degree of parallelism.
Abstract: The LHCb experiment at CERN processes its datasets on over a hundred different grid sites within the Worldwide LHC Computing Grid (WLCG). All of those grid sites are built from multicore CPUs today, and the number of cores per worker node will increase in the near future. Using such worker nodes more efficiently requires parallelization of the software as well as modifications at the level of scheduling. This paper evaluates a moldable job model for LHCb grid jobs, where the main challenge is determining the best degree of parallelism. Choosing an appropriate degree of parallelism depends on which parameters the optimization targets; commonly used criteria are, for example, scalability, workload, and turnaround time. Predicting run time is another major problem, and we discuss how it can be handled using historical information. Furthermore, we discuss the advantages and disadvantages of a moldable job model, as well as how it must be extended to meet the requirements of LHCb jobs.

3 citations
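Choosing the best degree of parallelism for a moldable job is easy to illustrate with an Amdahl-style run-time model: speedup saturates as cores are added, so parallel efficiency (speedup per core) falls, and a scheduler can pick the largest core count that still meets an efficiency target. The model, the efficiency floor, and the parameter values below are illustrative assumptions, not LHCb's actual scheduling policy:

```python
def runtime(n_cores, serial_frac, base_time):
    """Amdahl-style run-time model: the serial fraction does not speed up."""
    return base_time * (serial_frac + (1.0 - serial_frac) / n_cores)

def best_degree_of_parallelism(base_time, serial_frac, max_cores, efficiency_floor=0.5):
    """Largest core count whose parallel efficiency (speedup / cores)
    still meets the floor; beyond it, extra cores are mostly wasted."""
    best = 1
    for n in range(1, max_cores + 1):
        speedup = base_time / runtime(n, serial_frac, base_time)
        if speedup / n >= efficiency_floor:
            best = n
    return best
```

With a 10% serial fraction and a 0.55 efficiency floor, the model stops at 9 cores: turnaround time would still improve slightly with more cores, but each additional core delivers less than 55% of its nominal capacity, which is exactly the scalability-versus-workload trade-off the paper describes.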

Journal ArticleDOI
TL;DR: DSU is designed to assist geophysicists in developing and executing sequences of Seismic Unix (SU) applications in clusters of workstations as well as on tightly coupled multiprocessor machines.
Abstract: This paper describes a distributed system called Distributed Seismic Unix (DSU). DSU provides tools for creating and executing application sequences over several types of multiprocessor environments. It is designed to assist geophysicists in developing and executing sequences of Seismic Unix (SU) applications on clusters of workstations as well as on tightly coupled multiprocessor machines. SU is a large collection of subroutine libraries, graphics tools and fundamental seismic data processing applications that is freely available via the Internet from the Center for Wave Phenomena (CWP) of the Colorado School of Mines; it is currently used at more than 500 sites in 32 countries around the world. DSU is built on top of three publicly available software packages: SU itself; TCL/TK, which provides the tools needed to build the graphical user interface (GUI); and PVM (Parallel Virtual Machine), which supports process management and communication. DSU handles tree-like graphs representing sequences of SU applications. Nodes of a graph represent SU applications, while the arcs represent how the data flow from the root node to the leaf nodes of the tree. In general the root node corresponds to an application that reads or creates synthetic seismic data, and the leaf nodes are associated with applications that write or display the processed seismic data; intermediate nodes are usually associated with typical seismic processing applications such as filters, convolutions and signal processing. Pipelining parallelism is obtained when executing single-branch tree sequences, while a higher degree of parallelism is obtained when executing sequences with several branches. A major advantage of the DSU framework for distribution is that SU applications do not need to be modified for parallelism; only a few low-level system functions need to be modified. Copyright © 1999 John Wiley & Sons, Ltd.

3 citations
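The tree-structured sequences DSU executes can be mimicked with a small recursive runner: each node applies its processing stage, leaves return final results, and sibling branches are submitted to a thread pool, so multi-branch trees gain extra parallelism beyond single-branch pipelining, as the abstract notes. This is an illustrative sketch, not DSU/PVM code; the Node class and the thread-pool execution are assumptions (in DSU, process management and communication are handled by PVM):

```python
from concurrent.futures import ThreadPoolExecutor

class Node:
    """One stage of a processing tree: a function plus child branches."""
    def __init__(self, func, children=()):
        self.func = func
        self.children = list(children)

def run_tree(node, data, pool):
    """Apply this node's stage, then feed the output to every child
    branch concurrently; returns the list of leaf results in branch order."""
    out = node.func(data)
    if not node.children:
        return [out]                      # leaf: final processed result
    futures = [pool.submit(run_tree, child, out, pool) for child in node.children]
    results = []
    for f in futures:                     # gather branch results in order
        results.extend(f.result())
    return results
```

For a root that loads data and two leaf branches that transform it, both branches run on separate pool threads. (Waiting on futures from inside worker threads can exhaust a small pool on deep trees, so a production runner would size the pool to the tree or use a work-stealing scheme.)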

Book ChapterDOI
28 Aug 2016
TL;DR: This work addresses the problem of identifying the optimal parameters that affect the throughput and latency of streaming engines, and identifies optimal cluster performance by balancing the degree of parallelism with the number of nodes.
Abstract: Recent developments in Big Data increasingly focus on supporting computation in higher-data-velocity environments, including processing of continuous data streams to discover valuable insights in real time. In this work we investigate the performance of streaming engines; specifically, we address the problem of identifying the optimal parameters that affect throughput (messages processed per second) and latency (time to process a message). These parameters are also a function of the parallelism property, i.e. the number of additional parallel tasks (threads) available to support parallel computation. In our experimental evaluation we identify optimal cluster performance by balancing the degree of parallelism with the number of nodes, yielding maximum throughput with minimum latency.

3 citations
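The throughput/latency trade-off evaluated in the abstract can be sketched with a toy cost model: adding parallel tasks raises throughput, but each task also adds coordination overhead to every message's latency, so a tuner picks the highest degree of parallelism whose latency stays within a budget. The linear cost model and all constants below are invented for illustration and do not come from the paper:

```python
def measure(parallelism, service_time=0.01, overhead=0.002):
    """Toy model, not a real benchmark: per-message latency grows with
    coordination overhead, while throughput grows with concurrent tasks."""
    latency = service_time + overhead * parallelism   # seconds per message
    throughput = parallelism / latency                # messages per second
    return throughput, latency

def tune(max_parallelism, latency_budget):
    """Degree of parallelism with maximum throughput whose latency
    stays within the budget."""
    best_p, best_tp = 1, 0.0
    for p in range(1, max_parallelism + 1):
        tp, lat = measure(p)
        if lat <= latency_budget and tp > best_tp:
            best_p, best_tp = p, tp
    return best_p
```

In a real evaluation, `measure` would be replaced by an actual benchmark run against the streaming engine; the tuning loop over candidate degrees of parallelism stays the same.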


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations, 85% related
Scheduling (computing): 78.6K papers, 1.3M citations, 83% related
Network packet: 159.7K papers, 2.2M citations, 80% related
Web service: 57.6K papers, 989K citations, 80% related
Quality of service: 77.1K papers, 996.6K citations, 79% related
Performance Metrics
No. of papers in the topic in previous years:
Year  Papers
2022  1
2021  47
2020  48
2019  52
2018  70
2017  75