scispace - formally typeset

Comparison sort

About: A comparison sort is a sorting algorithm that orders elements using only pairwise comparisons between keys. As a research topic, 259 publications have been published within it over its lifetime, receiving 7409 citations. The topic is also known as: comparison sorting algorithm & comparison sort algorithm.
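To make the definition concrete, here is a minimal Python sketch of a comparison sort (merge sort); it is illustrative only and not taken from any of the papers below. The only operation ever applied to keys is the pairwise comparison `<=`, which is what makes the Omega(n log n) lower bound apply.

```python
def merge_sort(a):
    """Comparison sort: keys are read only through pairwise <= comparisons."""
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:  # the only operation applied to keys
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([5, 2, 9, 1, 5, 6]))  # → [1, 2, 5, 5, 6, 9]
```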


Papers
Proceedings ArticleDOI
25 Jun 2012
TL;DR: This announcement describes the problem based benchmark suite (PBBS), a set of benchmarks designed for comparing parallel algorithmic approaches, parallel programming language styles, and machine architectures across a broad set of problems.
Abstract: This announcement describes the problem based benchmark suite (PBBS). PBBS is a set of benchmarks designed for comparing parallel algorithmic approaches, parallel programming language styles, and machine architectures across a broad set of problems. Each benchmark is defined concretely in terms of a problem specification and a set of input distributions. No requirements are made in terms of algorithmic approach, programming language, or machine architecture. The goal of the benchmarks is not only to compare runtimes, but also to be able to compare code and other aspects of an implementation (e.g., portability, robustness, determinism, and generality). As such, the code for an implementation of a benchmark is as important as its runtime, and the public PBBS repository will include both code and performance results. The benchmarks are designed to make it easy for others to try their own implementations, or to add new benchmark problems. Each benchmark problem includes the problem specification, the specification of input and output file formats, default input generators, test codes that check the correctness of the output for a given input, driver code that can be linked with implementations, a baseline sequential implementation, a baseline multicore implementation, and scripts for running timings (and checks) and outputting the results in a standard format. The current suite includes the following problems: integer sort, comparison sort, remove duplicates, dictionary, breadth first search, spanning forest, minimum spanning forest, maximal independent set, maximal matching, K-nearest neighbors, Delaunay triangulation, convex hull, suffix arrays, n-body, and ray casting. For each problem, we report the performance of our baseline multicore implementation on a 40-core machine.

196 citations

Journal ArticleDOI
TL;DR: The main result is an optimal randomized parallel algorithm for INTEGER_SORT, the first known that is optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size.
Abstract: This paper assumes a parallel RAM (random access machine) model which allows both concurrent reads and concurrent writes of a global memory. The main result is an optimal randomized parallel algorithm for INTEGER_SORT (i.e., for sorting n integers in the range $[1,n]$). This algorithm costs only logarithmic time and is the first known that is optimal: the product of its time and processor bounds is upper bounded by a linear function of the input size. Also given is a deterministic sublogarithmic time algorithm for prefix sum. In addition, this paper presents a sublogarithmic time algorithm for obtaining a random permutation of n elements in parallel. Finally, sublogarithmic time algorithms for GENERAL_SORT and INTEGER_SORT are presented. Our sublogarithmic GENERAL_SORT algorithm is also optimal.

175 citations
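The INTEGER_SORT problem above (keys in the range [1, n]) can be solved sequentially with linear work by counting sort, which inspects key values directly rather than comparing pairs; that linear sequential bound is what the paper's time-processor product matches. A minimal sequential sketch (not the paper's parallel algorithm):

```python
def integer_sort(a, n):
    """Sort integers drawn from [1, n] in O(len(a) + n) time by counting
    occurrences of each value, sidestepping the Omega(n log n)
    comparison-sort lower bound."""
    counts = [0] * (n + 1)
    for x in a:
        counts[x] += 1
    out = []
    for v in range(1, n + 1):
        out.extend([v] * counts[v])
    return out

print(integer_sort([3, 1, 4, 1, 5, 2], 5))  # → [1, 1, 2, 3, 4, 5]
```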

Proceedings Article
03 Jan 2009
TL;DR: This work gives theoretical and practical evidence that combining these different approaches yields algorithms superior to any individual one, based on lower bounds for the "pure" methods and an extensive empirical evaluation of the pure algorithms and their combinations.
Abstract: We consider the problem of finding a ranking of a set of elements that is "closest to" a given set of input rankings of the elements; more precisely, we want to find a permutation that minimizes the Kendall-tau distance to the input rankings, where the Kendall-tau distance is defined as the sum over all input rankings of the number of pairs of elements that are in a different order in the input ranking than in the output ranking. If the input rankings are permutations, this problem is known as the Kemeny rank aggregation problem. This problem arises for example in building meta-search engines for Web search, aggregating viewers' rankings of movies, or giving recommendations to a user based on several different criteria, where we can think of having one ranking of the alternatives for each criterion. Many of the approximation algorithms and heuristics that have been proposed in the literature are either positional, comparison sort or local search algorithms. The rank aggregation problem is a special case of the (weighted) feedback arc set problem, but in the feedback arc set problem we use only information about the preferred relative ordering of pairs of elements to find a ranking of the elements, whereas in the case of the rank aggregation problem, we have additional information in the form of the complete input rankings. The positional methods are the only algorithms that use this additional information. Since the rank aggregation problem is NP-hard, none of these algorithms is guaranteed to find the optimal solution, and different algorithms will provide different solutions. We give theoretical and practical evidence that a combination of these different approaches gives algorithms that are superior to the individual algorithms. Theoretically, we give lower bounds on the performance for many of the "pure" methods. Practically, we perform an extensive evaluation of the "pure" algorithms and combinations of different approaches. 
We give three recommendations for which (combination of) methods to use based on whether a user wants to have a very fast, fast, or reasonably fast algorithm.

118 citations
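The Kendall-tau objective described in the abstract can be written down compactly; the function names below (`kendall_tau`, `aggregation_cost`) are illustrative, not taken from the paper. The distance counts element pairs ordered differently in two rankings, and rank aggregation minimizes its sum over all input rankings.

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Number of element pairs that appear in a different order in the
    two rankings. Rankings are lists of elements, most preferred first."""
    pos1 = {e: i for i, e in enumerate(r1)}
    pos2 = {e: i for i, e in enumerate(r2)}
    return sum(
        1
        for a, b in combinations(r1, 2)
        if (pos1[a] - pos1[b]) * (pos2[a] - pos2[b]) < 0  # discordant pair
    )

def aggregation_cost(candidate, input_rankings):
    """Objective minimized in Kemeny rank aggregation: the sum of
    Kendall-tau distances from the candidate to every input ranking."""
    return sum(kendall_tau(candidate, r) for r in input_rankings)

print(kendall_tau(["a", "b", "c"], ["c", "a", "b"]))  # → 2
```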

Proceedings ArticleDOI
19 Apr 2010
TL;DR: In this paper, the authors present a sample sort algorithm for manycore GPUs, which is robust to different distributions and entropy levels of keys and scales almost linearly with the input size.
Abstract: We present the design of a sample sort algorithm for manycore GPUs. Despite being one of the most efficient comparison-based sorting algorithms for distributed memory architectures, its performance on GPUs was previously unknown. For uniformly distributed keys our sample sort is at least 25% and on average 68% faster than the best comparison-based sorting algorithm, GPU Thrust merge sort, and on average more than 2 times faster than GPU quicksort. Moreover, for 64-bit integer keys it is at least 63% and on average 2 times faster than the highly optimized GPU Thrust radix sort that directly manipulates the binary representation of keys. Our implementation is robust to different distributions and entropy levels of keys and scales almost linearly with the input size. These results indicate that multi-way techniques in general and sample sort in particular achieve substantially better performance than two-way merge sort and quicksort.

117 citations
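The multi-way structure that distinguishes sample sort from two-way merge sort and quicksort can be sketched sequentially; the parameter names (`buckets`, `oversample`) are hypothetical, and this sketch omits the GPU-specific parts (parallel scatter, per-block bucket sorts) that the paper is actually about.

```python
import bisect
import random

def sample_sort(a, buckets=4, oversample=8):
    """Sketch of multi-way sample sort: draw a random sample, pick
    bucket splitters from it, scatter every key into its bucket, then
    sort each bucket. In the GPU algorithm the scatter and per-bucket
    sorts run in parallel; here everything is sequential for clarity."""
    if len(a) <= buckets:
        return sorted(a)
    sample = sorted(random.sample(a, min(len(a), buckets * oversample)))
    # buckets - 1 splitters cut the key range into `buckets` pieces
    splitters = [sample[i * len(sample) // buckets] for i in range(1, buckets)]
    parts = [[] for _ in range(buckets)]
    for x in a:
        # bucket index = number of splitters <= x
        parts[bisect.bisect_right(splitters, x)].append(x)
    out = []
    for p in parts:
        out.extend(sorted(p))  # the real algorithm recurses or uses a base sort
    return out
```

Because each bucket holds a contiguous slice of the key range, concatenating the sorted buckets yields a fully sorted array; the oversampling ratio controls how evenly keys spread across buckets.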


Network Information
Related Topics (5)
- Data structure: 28.1K papers, 608.6K citations, 69% related
- Concurrency: 13K papers, 347.1K citations, 69% related
- Time complexity: 36K papers, 879.5K citations, 67% related
- Word (computer architecture): 28.5K papers, 472.9K citations, 66% related
- Compiler: 26.3K papers, 578.5K citations, 66% related
Performance Metrics
No. of papers in the topic in previous years:
- 2021: 2
- 2020: 2
- 2018: 2
- 2017: 7
- 2016: 19
- 2015: 19