SciSpace
Topic

Degree of parallelism

About: Degree of parallelism is a research topic. Over the lifetime, 1,515 publications have been published within this topic, receiving 25,546 citations.


Papers
Journal ArticleDOI
TL;DR: A spatial domain decomposition method is adopted in the computational domain, and different interface prediction and correction methods are introduced for solving the 1-D spherical-geometry transport equation while avoiding negative flux.

3 citations
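The prediction-and-correction idea in the TL;DR above can be illustrated on a toy 1-D problem. This is a hypothetical sketch, not the paper's scheme: the domain is split into blocks, each block sweeps using a *predicted* inflow value at its left interface, and the interfaces are then *corrected* from the latest solution until the blocks agree; a clamp keeps the flux non-negative.

```python
import numpy as np

# Toy 1-D transport-like sweep: each point depends on its upstream neighbor.
# Blocks sweep independently (hence in parallel) from a predicted interface
# value, then the interface predictions are corrected for the next pass.
def parallel_sweep(n=16, blocks=4, sigma=0.5, source=1.0, iters=20):
    psi = np.zeros(n + 1)          # flux samples at n+1 points; psi[0] is the boundary
    size = n // blocks
    interfaces = np.zeros(blocks)  # predicted inflow for each block
    for _ in range(iters):
        new_psi = psi.copy()
        for b in range(blocks):    # blocks are independent within a pass
            lo = b * size
            inflow = interfaces[b]
            for i in range(lo + 1, lo + size + 1):
                upstream = new_psi[i - 1] if i > lo + 1 else inflow
                # toy attenuation-plus-source update; max() clamps negative flux
                new_psi[i] = max(0.0, (1 - sigma) * upstream + sigma * source)
        psi = new_psi
        # correction step: next prediction = converged value upstream of each block
        interfaces = np.array([0.0] + [psi[b * size] for b in range(1, blocks)])
    return psi
```

After a few correction passes the interface values stabilize and the blockwise result matches a single serial sweep.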

Posted Content
TL;DR: An XML-based framework for implementing ANNs on the Globus Toolkit platform; using the Grid to simulate the neural network leads to a high degree of parallelism in the implementation of the ANN.
Abstract: The Artificial Neural Network (ANN) is one of the most common AI application fields, with direct and indirect uses in most sciences. The main goal of an ANN is to imitate biological neural networks for solving scientific problems, but the limited level of parallelism is the main shortcoming of ANN systems compared with biological systems. To address this problem, we offer an XML-based framework for implementing ANNs on the Globus Toolkit platform. Globus Toolkit is well-known management software for multipurpose Grids. Using the Grid to simulate the neural network leads to a high degree of parallelism in the implementation of the ANN. We use XML to improve the flexibility and scalability of our framework.

3 citations
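The neuron-level parallelism this framework targets comes from a simple observation: within a layer, each neuron's activation depends only on the shared input vector, so the neurons can be evaluated concurrently. A minimal local sketch of that idea (illustrative only; the paper distributes the work across Grid nodes via Globus, not a thread pool) is:

```python
from concurrent.futures import ThreadPoolExecutor
import math

def neuron(weights, inputs, bias=0.0):
    # weighted sum followed by a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))

def forward_layer(weight_rows, inputs, workers=4):
    # neurons in one layer are independent, so they map cleanly onto workers
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda w: neuron(w, inputs), weight_rows))
```

On a Grid, each `neuron` call (or a batch of them) would become a job submitted to a remote node, with the XML layer describing the network topology and job placement.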

01 Jan 2013
TL;DR: A new approach to the design and implementation of computer simulation models of complex multicomponent systems is introduced, which makes it possible to eliminate imperative programming code and to generate code with a high degree of parallelism.
Abstract: This work is devoted to the description and simulation of complex systems for which it is well known what components they are made of, what those components can do, and what rules of interaction they obey. The challenging modeling problem is to reproduce the behavior and evaluate the capabilities of such a system as a whole. A new approach to the design and implementation of computer simulation models of complex multicomponent systems is introduced; it differs from the object-oriented approach. The central concept of this approach, and at the same time the basic building block for constructing more complex structures, is the model-component. A model-component is endowed with a more elaborate structure than, for example, an object in object-oriented analysis. This structure gives the model-component independent behavior: the ability to respond in a standard way to standard requests from its internal and external environment. At the same time, the computer implementation of a model-component's behavior is invariant with respect to the integration of model-components into complexes, which makes it possible, first, to build a fractal model of any complexity and, second, to execute such structures uniformly, by a single program. In addition, the proposed paradigm of multi-component simulation makes it possible to eliminate imperative programming code and to generate code with a high degree of parallelism.

3 citations
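The core property claimed above, that every model-component answers the same standard request and a composite is itself a component, is what lets one generic program drive arbitrarily nested ("fractal") models, with independent children available for parallel execution. A minimal sketch of that structure (class and method names are illustrative, not the paper's):

```python
# Every component answers the same standard request via step(); a Composite
# is itself a Component, so nesting is unbounded and one driver runs it all.
class Component:
    def step(self, t):
        raise NotImplementedError

class Ticker(Component):
    def __init__(self):
        self.value = 0
    def step(self, t):
        self.value += 1          # standard response: advance local state

class Composite(Component):
    def __init__(self, parts):
        self.parts = parts
    def step(self, t):
        for p in self.parts:     # children are independent -> parallelizable
            p.step(t)

def run(model, steps):
    for t in range(steps):       # the single generic simulation program
        model.step(t)
```

Because `run` never inspects what a component is, only that it responds to `step`, the same driver executes a leaf, a complex, or a complex of complexes unchanged.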

Journal ArticleDOI
TL;DR: Experimental results show that the profile-based partitioning can increase the degree of parallelism and improve the scalability of parallel simulations of large-scale wildfires.

3 citations
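Profile-based partitioning in this setting generally means: measure per-region workload in a profiling run, then assign regions to processors so the load is balanced. One common way to do that (a hypothetical sketch using the longest-processing-time greedy heuristic, not necessarily the paper's algorithm) is:

```python
import heapq

# Assign the heaviest profiled region to the currently least-loaded
# processor (LPT heuristic), so no processor idles while another burns.
def partition(costs, nprocs):
    heap = [(0.0, p, []) for p in range(nprocs)]   # (load, proc id, regions)
    heapq.heapify(heap)
    for region, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, p, regions = heapq.heappop(heap)     # least-loaded processor
        regions.append(region)
        heapq.heappush(heap, (load + cost, p, regions))
    return {p: (load, regions) for load, p, regions in heap}
```

For a wildfire simulation the `costs` would come from profiling how much fire-front activity each spatial region saw, so the partition tracks where the work actually is rather than splitting the grid uniformly.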

Proceedings ArticleDOI
23 Jun 2014
TL;DR: This work proposes PyDac, a Python-based task-parallel programming model that provides a two-level programming model based on the divide-and-conquer strategy, and shows that double and triple modular redundancy complete the benchmarks with correct results while incurring only a proportional performance penalty.
Abstract: Heterogeneous many-core architectures combined with scratch-pad memories are attractive because they promise better energy efficiency than conventional architectures and a good balance between single-thread performance and multi-thread throughput. However, programmers will need an environment for finding and managing the large degree of parallelism, locality, and system resilience. We propose a Python-based task parallel programming model called PyDac to support these objectives. PyDac provides a two-level programming model based on the divide-and-conquer strategy. The PyDac runtime system allows threads to be run on unreliable hardware by dynamically checking the results without involvement from the programmer. To test this programming model and runtime, an unconventional heterogeneous architecture consisting of PowerPC and ARM cores was developed and emulated on an FPGA device. We inject faults during the execution of micro-benchmarks and show that through the use of double and triple modular redundancy we are able to complete the benchmarks with the correct results while only incurring a proportional performance penalty.

3 citations
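The redundancy mechanism described above, running the same task multiple times and accepting the majority result, can be sketched as follows. This is an illustrative toy, not PyDac's runtime: PyDac schedules the redundant copies on different (possibly unreliable) cores and checks results without programmer involvement, whereas here the copies run sequentially and one is deliberately corrupted to emulate an injected fault.

```python
from collections import Counter

# Triple modular redundancy: run the task `copies` times and take the
# majority result, masking a single faulty replica.
def with_tmr(task, *args, copies=3):
    results = [task(*args) for _ in range(copies)]
    value, votes = Counter(results).most_common(1)[0]
    if votes <= copies // 2:
        raise RuntimeError("no majority -- recompute")
    return value

# A task whose third invocation returns a wrong answer (an injected fault).
calls = {"n": 0}
def flaky_square(x):
    calls["n"] += 1
    return x * x + (1 if calls["n"] == 3 else 0)
```

With three copies, one corrupted result is outvoted two-to-one, which is also why the observed cost is roughly proportional to the redundancy factor.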


Network Information
Related Topics (5)
Server: 79.5K papers, 1.4M citations (85% related)
Scheduling (computing): 78.6K papers, 1.3M citations (83% related)
Network packet: 159.7K papers, 2.2M citations (80% related)
Web service: 57.6K papers, 989K citations (80% related)
Quality of service: 77.1K papers, 996.6K citations (79% related)
Performance Metrics
No. of papers in the topic in previous years:

Year  Papers
2022  1
2021  47
2020  48
2019  52
2018  70
2017  75