Example of Journal of Parallel and Distributed Computing format

Sample paper formatted on SciSpace
This content is only for preview purposes. The original open access content can be found here.
Journal of Parallel and Distributed Computing — Template for authors

Publisher: Elsevier
Categories                              Rank          Trend in last 3 yrs
Theoretical Computer Science            #17 of 120    -
Computer Networks and Communications    #52 of 334    up by 3 ranks
Hardware and Architecture               #29 of 157    up by 1 rank
Software                                #75 of 389    up by 29 ranks
Artificial Intelligence                 #51 of 227    up by 1 rank

Journal quality: High
Last 4 years overview: 735 Published Papers | 5069 Citations
Indexed in: Scopus
Last updated: 19/07/2020

Related Journals

Open Access (Elsevier)
Quality: Good | CiteRatio: 3.1 | SJR: 0.302 | SNIP: 0.914

Open Access (Frontiers Media)
Quality: High | CiteRatio: 6.2 | SJR: 0.427 | SNIP: 1.319

Open Access (Cambridge University Press)
Quality: Good | CiteRatio: 3.5 | SJR: 0.685 | SNIP: 1.383

Open Access (Elsevier)
Quality: Good | CiteRatio: 2.9 | SJR: 0.323 | SNIP: 1.197

Journal Performance & Insights

Impact Factor

Determines the importance of a journal by measuring the frequency with which the average article in the journal has been cited in a particular year.

2.296 (up 26% from 2018)

Impact factor for Journal of Parallel and Distributed Computing, 2016-2019:

Year  Value
2019  2.296
2018  1.819
2017  1.815
2016  1.93

Insights:
  • The impact factor of this journal has increased by 26% in the last year.
  • This journal's impact factor is in the top 10 percentile category.

CiteRatio

A measure of average citations received per peer-reviewed paper published in the journal.

6.9 (up 50% from 2019)

CiteRatio for Journal of Parallel and Distributed Computing, 2016-2020:

Year  Value
2020  6.9
2019  4.6
2018  3.9
2017  4.4
2016  5.2

Insights:
  • The CiteRatio of this journal has increased by 50% in recent years.
  • This journal's CiteRatio is in the top 10 percentile category.

SCImago Journal Rank (SJR)

Measures weighted citations received by the journal. Citation weighting depends on the categories and prestige of the citing journal.

0.638 (up 22% from 2019)

SJR for Journal of Parallel and Distributed Computing, 2016-2020:

Year  Value
2020  0.638
2019  0.525
2018  0.417
2017  0.502
2016  0.597

Insights:
  • The SJR of this journal has increased by 22% in recent years.
  • This journal's SJR is in the top 10 percentile category.

Source Normalized Impact per Paper (SNIP)

Measures actual citations received relative to citations expected for the journal's category.

1.44 (essentially unchanged, 0% from 2019)

SNIP for Journal of Parallel and Distributed Computing, 2016-2020:

Year  Value
2020  1.44
2019  1.433
2018  1.494
2017  1.691
2016  1.834

Insights:
  • The SNIP of this journal is essentially unchanged (0%) from last year.
  • This journal's SNIP is in the top 10 percentile category.

Journal of Parallel and Distributed Computing

Guideline source: View

All company, product and service names used in this website are for identification purposes only. All product names, trademarks and registered trademarks are property of their respective owners.

Use of these names, trademarks and brands does not imply endorsement or affiliation.

Elsevier

Journal of Parallel and Distributed Computing

This international journal is directed to researchers, engineers, educators, managers, programmers, and users of computers who have particular interests in parallel processing and/or distributed computing. The Journal of Parallel and Distributed Computing publishes original re…

Theoretical Computer Science

Hardware and Architecture

Computer Networks and Communications

Software

Artificial Intelligence

Mathematics

Last updated: 19 Jul 2020
ISSN: 0743-7315
Impact Factor: High - 1.94
Open Access: No
Sherpa RoMEO Archiving Policy: Green
Plagiarism Check: Available via Turnitin
Endnote Style: Download Available
Bibliography Name: elsarticle-num
Citation Type: Numbered, e.g. [25]
Bibliography Example: G. E. Blonder, M. Tinkham, T. M. Klapwijk, Transition from metallic to tunneling regimes in superconducting microconstrictions: Excess current, charge imbalance, and supercurrent conversion, Phys. Rev. B 25 (7) (1982) 4515–4532. doi:10.1103/PhysRevB.25.4515
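For authors preparing the manuscript by hand rather than through SciSpace, the elsarticle-num entry above corresponds to Elsevier's standard LaTeX class. A minimal preamble sketch (the bibliography file name refs.bib and the citation key are placeholders):

```latex
\documentclass[review]{elsarticle}   % Elsevier's standard article class
\journal{Journal of Parallel and Distributed Computing}

\begin{document}

\begin{frontmatter}
\title{Your title}
\author{Your name}
\begin{abstract}Your abstract.\end{abstract}
\end{frontmatter}

Body text with a numbered citation \cite{blonder1982}.

\bibliographystyle{elsarticle-num}   % numbered style, renders as [25]
\bibliography{refs}                  % refs.bib holds the BibTeX entries
\end{document}
```

Running BibTeX against refs.bib then renders citations as bracketed numbers such as [25], matching the journal's numbered citation type.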

Top papers written in this journal

Journal Article DOI: 10.1006/JPDC.2000.1714
A Comparison of Eleven Static Heuristics for Mapping a Class of Independent Tasks onto Heterogeneous Distributed Computing Systems

Abstract:

Mixed-machine heterogeneous computing (HC) environments utilize a distributed suite of different high-performance machines, interconnected with high-speed links, to perform different computationally intensive applications that have diverse computational requirements. HC environments are well suited to meet the computational demands of large, diverse groups of tasks. The problem of optimally mapping (defined as matching and scheduling) these tasks onto the machines of a distributed HC environment has been shown, in general, to be NP-complete, requiring the development of heuristic techniques. Selecting the best heuristic to use in a given environment, however, remains a difficult problem, because comparisons are often clouded by different underlying assumptions in the original study of each heuristic. Therefore, a collection of 11 heuristics from the literature has been selected, adapted, implemented, and analyzed under one set of common assumptions. It is assumed that the heuristics derive a mapping statically (i.e., off-line). It is also assumed that a metatask (i.e., a set of independent, noncommunicating tasks) is being mapped and that the goal is to minimize the total execution time of the metatask. The 11 heuristics examined are Opportunistic Load Balancing, Minimum Execution Time, Minimum Completion Time, Min-min, Max-min, Duplex, Genetic Algorithm, Simulated Annealing, Genetic Simulated Annealing, Tabu, and A*. This study provides one even basis for comparison and insights into circumstances where one technique will outperform another. The evaluation procedure is specified, the heuristics are defined, and then comparison results are discussed. It is shown that for the cases studied here, the relatively simple Min-min heuristic performs well in comparison to the other techniques.

Topics:

Heuristics (62%), Heuristic (57%), Simulated annealing (55%), Genetic algorithm (52%), Symmetric multiprocessor system (51%)
1,689 Citations
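The Min-min heuristic singled out in the abstract is simple enough to sketch. The version below is a minimal illustration, not the paper's implementation; the ETC (expected time to compute) matrix in the usage is made up for the example:

```python
def min_min(etc):
    """Min-min mapping. etc[t][m] is the estimated time to compute task t on
    machine m. Repeatedly pick the task whose best (earliest) completion time
    is smallest overall, assign it to that machine, and update availability."""
    n_machines = len(etc[0])
    ready = [0.0] * n_machines            # machine-available times
    unmapped = set(range(len(etc)))
    mapping = {}
    while unmapped:
        # earliest completion time over all (unmapped task, machine) pairs
        ct, t, m = min(
            (ready[m] + etc[t][m], t, m)
            for t in unmapped for m in range(n_machines)
        )
        mapping[t] = m                     # task t runs on machine m
        ready[m] = ct
        unmapped.discard(t)
    return mapping, max(ready)             # mapping and makespan
```

For example, with etc = [[4, 2], [3, 5], [1, 6]] (three tasks, two machines), task 2 is mapped first to machine 0, task 0 to machine 1, and task 1 to machine 0, giving a makespan of 4.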
Open Access Journal Article DOI: 10.1006/JPDC.1996.0107
Cilk: An Efficient Multithreaded Runtime System

Abstract:

Cilk (pronounced “silk”) is a C-based runtime system for multithreaded parallel programming. In this paper, we document the efficiency of the Cilk work-stealing scheduler, both empirically and analytically. We show that on real and synthetic applications, the “work” and “critical-path length” of a Cilk computation can be used to model performance accurately. Consequently, a Cilk programmer can focus on reducing the computation's work and critical-path length, insulated from load balancing and other runtime scheduling issues. We also prove that for the class of “fully strict” (well-structured) programs, the Cilk scheduler achieves space, time, and communication bounds all within a constant factor of optimal. The Cilk runtime system currently runs on the Connection Machine CM5 MPP, the Intel Paragon MPP, the Sun Sparcstation SMP, and the Cilk-NOW network of workstations. Applications written in Cilk include protein folding, graphic rendering, backtrack search, and the ★Socrates chess program, which won second prize in the 1995 ICCA World Computer Chess Championship.

Topics:

Cilk (76%), Work stealing (71%), Runtime system (61%), Intel Paragon (52%), Scheduling (computing) (50%)
1,671 Citations
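Cilk's work-stealing scheduler is a full multithreaded runtime, but its core discipline can be shown with a toy, deterministic, round-based simulation: each worker services its own deque LIFO, while an idle worker steals the oldest task from a victim's deque. This is only a sketch of the stealing discipline, not Cilk's algorithm or its performance model; the seeding of all tasks onto worker 0 is an assumption made to force a steal:

```python
import collections
import random

def work_stealing_run(tasks, n_workers, seed=0):
    """Toy round-based simulation of work stealing. Each worker owns a deque:
    it takes its own work LIFO from one end, while idle workers steal FIFO
    from the other end of a randomly chosen victim's deque. All tasks start
    on worker 0, as if a single thread had spawned them."""
    rng = random.Random(seed)
    deques = [collections.deque() for _ in range(n_workers)]
    deques[0].extend(tasks)
    executed = []
    while any(deques):
        for w in range(n_workers):
            if deques[w]:
                executed.append(deques[w].pop())       # own work: newest first
            else:
                victims = [v for v in range(n_workers) if deques[v]]
                if victims:
                    victim = deques[rng.choice(victims)]
                    executed.append(victim.popleft())  # steal: oldest first
    return executed
```

With two workers and tasks ["a", "b", "c"], worker 0 runs the newest task "c" while the idle worker 1 steals the oldest task "a", illustrating the LIFO-local/FIFO-steal asymmetry.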
Journal Article DOI: 10.1006/JPDC.1997.1404
Multilevel k-way Partitioning Scheme for Irregular Graphs

Abstract:

In this paper, we present and study a class of graph partitioning algorithms that reduce the size of the graph by collapsing vertices and edges, find a k-way partitioning of the smaller graph, and then uncoarsen and refine it to construct a k-way partitioning for the original graph. These algorithms compute a k-way partitioning of a graph G = (V, E) in O(|E|) time, which is faster by a factor of O(log k) than previously proposed multilevel recursive bisection algorithms. A key contribution of our work is in finding a high-quality and computationally inexpensive refinement algorithm that can improve upon an initial k-way partitioning. We also study the effectiveness of the overall scheme for a variety of coarsening schemes. We present experimental results on a large number of graphs arising in various domains including finite element methods, linear programming, VLSI, and transportation. Our experiments show that this new scheme produces partitions that are of comparable or better quality than those produced by the multilevel bisection algorithm and requires substantially smaller time. Graphs containing up to 450,000 vertices and 3,300,000 edges can be partitioned in 256 domains in less than 40 s on a workstation such as SGI's Challenge. Compared with the widely used multilevel spectral bisection algorithm, our new algorithm is usually two orders of magnitude faster and produces partitions with substantially smaller edge-cut.

Topics:

Graph partition (64%), Strength of a graph (62%), Graph theory (59%), Bisection method (55%), Parallel algorithm (53%)
1,619 Citations
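The "computationally inexpensive refinement algorithm" the abstract highlights can be illustrated at a single level with a greedy boundary pass: move a vertex across the cut whenever its external edge weight exceeds its internal edge weight and balance permits. This sketch omits the multilevel coarsening and uncoarsening machinery entirely and is not the paper's (METIS-style) algorithm; the graph, weights, and balance tolerance below are made up for the example:

```python
def edge_cut(adj, part):
    """Total weight of edges crossing the partition (each edge stored twice
    in the adjacency dict, hence the division by two)."""
    return sum(w for u in adj for v, w in adj[u].items() if part[u] != part[v]) // 2

def refine(adj, part, max_imbalance=2):
    """Greedy boundary refinement: repeatedly move a vertex to the other side
    whenever that lowers the edge-cut and keeps the two sides balanced to
    within max_imbalance vertices."""
    improved = True
    while improved:
        improved = False
        for u in adj:
            ext = sum(w for v, w in adj[u].items() if part[v] != part[u])
            internal = sum(w for v, w in adj[u].items() if part[v] == part[u])
            gain = ext - internal
            sizes = [sum(1 for x in part if part[x] == s) for s in (0, 1)]
            src, dst = part[u], 1 - part[u]
            if gain > 0 and abs((sizes[dst] + 1) - (sizes[src] - 1)) <= max_imbalance:
                part[u] = dst
                improved = True
    return part
```

On two unit-weight triangles joined by a single edge, a deliberately bad initial partition (cut of 5) is refined down to the natural cut of 1 while keeping both sides at three vertices.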
Open Access Journal Article DOI: 10.1016/J.JPDC.2006.08.010
Distributed average consensus with least-mean-square deviation
Lin Xiao1, Stephen Boyd2, Seung-Jean Kim2

Abstract:

We consider a stochastic model for distributed average consensus, which arises in applications such as load balancing for parallel processors, distributed coordination of mobile autonomous agents, and network synchronization. In this model, each node updates its local variable with a weighted average of its neighbors' values, and each new value is corrupted by an additive noise with zero mean. The quality of consensus can be measured by the total mean-square deviation of the individual variables from their average, which converges to a steady-state value. We consider the problem of finding the (symmetric) edge weights that result in the least mean-square deviation in steady state. We show that this problem can be cast as a convex optimization problem, so the global solution can be found efficiently. We describe some computational methods for solving this problem, and compare the weights and the mean-square deviations obtained by this method and several other weight design methods.

Topics:

Convex optimization (52%)
1,069 Citations
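The node update in this model is a one-line weighted average plus noise. The sketch below is an illustration, not the paper's optimization: the weight matrix is assumed symmetric and doubly stochastic (the paper's symmetric edge weights, plus an assumption that makes the noise-free case converge to the exact average), and the path-graph weights in the usage are an arbitrary choice:

```python
import random

def consensus_step(x, weights, noise_std, rng):
    """One round: node i replaces x[i] with a weighted average of its
    neighbours' values (row i of the weight matrix), plus zero-mean noise."""
    n = len(x)
    return [
        sum(weights[i][j] * x[j] for j in range(n)) + rng.gauss(0.0, noise_std)
        for i in range(n)
    ]

def run_consensus(x0, weights, steps, noise_std=0.0, seed=0):
    """Iterate the consensus update for a fixed number of steps."""
    rng = random.Random(seed)
    x = list(x0)
    for _ in range(steps):
        x = consensus_step(x, weights, noise_std, rng)
    return x
```

With noise_std = 0 and a doubly stochastic weight matrix on a three-node path, every value converges to the average of the initial values; with noise, the values hover around the average with the steady-state mean-square deviation the paper minimizes.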
Journal Article DOI: 10.1016/0743-7315(89)90021-X
Dynamic load balancing for distributed memory multiprocessors
George Cybenko1

Abstract:

In this paper we study diffusion schemes for dynamic load balancing on message passing multiprocessor networks. One of the main results concerns conditions under which these dynamic schemes converge and their rates of convergence for arbitrary topologies. These results use the eigenstructure of the iteration matrices that arise in dynamic load balancing. We completely analyze the hypercube network by explicitly computing the eigenstructure of its node adjacency matrix. Using a realistic model of interprocessor communications, we show that a diffusion approach to load balancing on a hypercube multiprocessor is inferior to another approach which we call the dimension exchange method. For a d-dimensional hypercube, we compute the rate of convergence to a uniform work distribution and show that after d + 1 iterations of a diffusion type approach, we can guarantee that the work distribution is approximately within e-* of the uniform distribution independent of the hypercube dimension d. Both static and dynamic random models of work distribution are studied.

Topics:

Hypercube (59%), Load balancing (computing) (58%), Distributed memory (53%), Network topology (51%), Rate of convergence (51%)
1,039 Citations
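A first-order diffusion scheme of the kind analyzed here is easy to state: each node exchanges a fixed fraction alpha of its load difference with every neighbour, which conserves total load. The sketch below is an illustration only; the ring topology and the alpha value in the usage are arbitrary choices, not from the paper:

```python
def diffusion_step(load, adjacency, alpha):
    """One diffusion sweep: every node receives alpha * (load difference)
    across each of its edges. The exchange is symmetric per edge, so the
    total load is conserved."""
    n = len(load)
    return [
        load[i] + alpha * sum(
            (load[j] - load[i]) for j in range(n) if adjacency[i][j]
        )
        for i in range(n)
    ]
```

On a 4-node ring with alpha = 0.25, an initial load of [8, 0, 0, 0] diffuses toward the uniform distribution [2, 2, 2, 2]; the rate of that convergence is exactly what the paper's eigenstructure analysis quantifies.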
SciSpace is a very innovative solution to the formatting problem; existing providers, such as Mendeley or Word, did not really evolve in recent years.

- Andreas Frutiger, Researcher, ETH Zurich, Institute for Biomedical Engineering

Get MS-Word and LaTeX output to any Journal within seconds
1
Choose a template
Select a template from a library of 40,000+ templates
2
Import a MS-Word file or start fresh
It takes only a few seconds to import
3
View and edit your final output
SciSpace will automatically format your output to meet journal guidelines
4
Submit directly or Download
Submit to journal directly or Download in PDF, MS Word or LaTeX

(Before submission check for plagiarism via Turnitin)

Less than 3 minutes

What to expect from SciSpace?

Speed and accuracy over MS Word


With SciSpace, you do not need a word template for Journal of Parallel and Distributed Computing.

It automatically formats your research paper to Elsevier formatting guidelines and citation style.

You can download a submission ready research paper in pdf, LaTeX and docx formats.

Time comparison: time taken to format a paper and compliance with guidelines

Plagiarism Reports via Turnitin

SciSpace has partnered with Turnitin, the leading provider of Plagiarism Check software.

Using this service, researchers can compare submissions against more than 170 million scholarly articles and a database of 70+ billion current and archived web pages.


Freedom from formatting guidelines

One editor, 100K journal formats – world's largest collection of journal templates

With such a huge verified library, what you need is already there.


Easy support from all your favorite tools

Journal of Parallel and Distributed Computing format uses elsarticle-num citation style.

Automatically format and order your citations and bibliography in a click.

SciSpace allows imports from all reference managers like Mendeley, Zotero, Endnote, Google Scholar etc.

Frequently asked questions

1. Can I write Journal of Parallel and Distributed Computing in LaTeX?

Absolutely! Our tool has been designed to help you focus on writing. You can write your entire paper as per the Journal of Parallel and Distributed Computing guidelines and auto-format it.

2. Do you follow the Journal of Parallel and Distributed Computing guidelines?

Yes, the template is compliant with the Journal of Parallel and Distributed Computing guidelines. Our experts at SciSpace ensure that. If there are any changes to the journal's guidelines, we'll change our algorithm accordingly.

3. Can I cite my article in multiple styles in Journal of Parallel and Distributed Computing?

Of course! We support all the top citation styles, such as APA style, MLA style, Vancouver style, Harvard style, and Chicago style. For example, when you write your paper and hit autoformat, our system will automatically update your article as per the Journal of Parallel and Distributed Computing citation style.

4. Can I use the Journal of Parallel and Distributed Computing templates for free?

Sign up for our free trial, and you'll be able to use all our features for seven days. You'll see how helpful they are and how inexpensive they are compared to other options, especially for Journal of Parallel and Distributed Computing.

5. Can I use a manuscript in Journal of Parallel and Distributed Computing that I have written in MS Word?

Yes. You can choose the right template, copy-paste the contents from the word document, and click on auto-format. Once you're done, you'll have a publish-ready paper Journal of Parallel and Distributed Computing that you can download at the end.

6. How long does it usually take you to format my papers in Journal of Parallel and Distributed Computing?

It only takes a matter of seconds to edit your manuscript. Besides that, our intuitive editor saves you from writing and formatting it in Journal of Parallel and Distributed Computing.

7. Where can I find the template for the Journal of Parallel and Distributed Computing?

It is possible to find the Word template for any journal on Google. However, why use a template when you can write your entire manuscript on SciSpace, auto-format it as per Journal of Parallel and Distributed Computing's guidelines, and download it in Word, PDF, and LaTeX formats? Give us a try!

8. Can I reformat my paper to fit the Journal of Parallel and Distributed Computing's guidelines?

Of course! You can do this using our intuitive editor. It's very easy. If you need help, our support team is always ready to assist you.

9. Is the Journal of Parallel and Distributed Computing template an online tool, or is there a desktop version?

SciSpace's Journal of Parallel and Distributed Computing template is currently available as an online tool. We're developing a desktop version, too. You can request (or upvote) any features that you think would be helpful for you and other researchers in the "feature request" section of your account once you've signed up with us.

10. I cannot find my template in your gallery. Can you create it for me like Journal of Parallel and Distributed Computing?

Sure. You can request any template and we'll have it setup within a few days. You can find the request box in Journal Gallery on the right side bar under the heading, "Couldn't find the format you were looking for like Journal of Parallel and Distributed Computing?”

11. What is the output that I would get after using Journal of Parallel and Distributed Computing?

After writing your paper and auto-formatting it in the Journal of Parallel and Distributed Computing template, you can download it in multiple formats, viz., PDF, Docx, and LaTeX.

12. Is Journal of Parallel and Distributed Computing's impact factor high enough that I should try publishing my article there?

To be honest, the answer is no. The impact factor is one of many elements that determine the quality of a journal. A few of the other factors include the review board, rejection rates, frequency of inclusion in indexes, and the Eigenfactor. You need to assess all these factors before you make your final call.

13. What is Sherpa RoMEO Archiving Policy for Journal of Parallel and Distributed Computing?

SHERPA/RoMEO Database

We extracted this data from Sherpa Romeo to help researchers understand the access level of this journal in accordance with the Sherpa Romeo Archiving Policy for Journal of Parallel and Distributed Computing. The table below indicates the level of access a journal has as per Sherpa Romeo's archiving policy.

RoMEO Colour Archiving policy
Green Can archive pre-print and post-print or publisher's version/PDF
Blue Can archive post-print (ie final draft post-refereeing) or publisher's version/PDF
Yellow Can archive pre-print (ie pre-refereeing)
White Archiving not formally supported
FYI:
  1. Pre-prints are the version of the paper before peer review.
  2. Post-prints are the version of the paper after peer review, with revisions having been made.

14. What are the most common citation types in Journal of Parallel and Distributed Computing?

The 5 most common citation types, in order of usage, for Journal of Parallel and Distributed Computing are:

S. No. Citation Style Type
1. Author Year
2. Numbered
3. Numbered (Superscripted)
4. Author Year (Cited Pages)
5. Footnote

15. How do I submit my article to the Journal of Parallel and Distributed Computing?

Once your manuscript is formatted on SciSpace as per Journal of Parallel and Distributed Computing's guidelines, download it in Word, PDF, or LaTeX and submit it to the journal through the publisher's (Elsevier's) online submission system.

16. Can I download Journal of Parallel and Distributed Computing in Endnote format?

Yes, SciSpace provides this functionality. After signing up, you would need to import your existing references from Word or Bib file to SciSpace. Then SciSpace would allow you to download your references in Journal of Parallel and Distributed Computing Endnote style according to Elsevier guidelines.

Fast and reliable, built for compliance.

Instant formatting to 100% publisher guidelines on SciSpace.

Available only on desktops 🖥

No word template required

SciSpace automatically formats your research paper to Journal of Parallel and Distributed Computing formatting guidelines and citation style.

Verified journal formats

One editor, 100K journal formats.
With the largest collection of verified journal formats, what you need is already there.

Trusted by academicians

I spent hours with MS Word for reformatting. It was frustrating - plain and simple. With SciSpace, I can draft my manuscripts, and once finished I can just submit. In case I have to submit to another journal, it is really just a button click instead of an afternoon of reformatting.

Andreas Frutiger
Researcher & Ex MS Word user
Use this template