Journal ArticleDOI

Cache-Oblivious Data Structures and Algorithms for Undirected Breadth-First Search and Shortest Paths

TL;DR: The cache-oblivious SSSP algorithm takes nearly full advantage of block transfers for dense graphs, and the number of I/Os for sparse graphs is reduced by a factor of nearly √B, where B is the cache-block size.
Abstract: We present improved cache-oblivious data structures and algorithms for breadth-first search (BFS) on undirected graphs and the single-source shortest path (SSSP) problem on undirected graphs with non-negative edge weights. For the SSSP problem, our result closes the performance gap between the currently best cache-aware algorithm and the cache-oblivious counterpart. Our cache-oblivious SSSP algorithm takes nearly full advantage of block transfers for dense graphs. The algorithm relies on a new data structure, called the bucket heap, which is the first cache-oblivious priority queue to efficiently support a weak DecreaseKey operation. For the BFS problem, we reduce the number of I/Os for sparse graphs by a factor of nearly √B, where B is the cache-block size, nearly closing the performance gap between the currently best cache-aware and cache-oblivious algorithms.
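The weak DecreaseKey semantics the abstract refers to can be illustrated with an ordinary in-memory lazy-deletion heap. This is a stand-in sketch, not the cache-oblivious bucket heap itself: an Update(v, d) takes effect only when d is below the smallest key seen for v, and stale heap entries are skipped at DeleteMin time.

```python
import heapq

class WeakDecreaseKeyPQ:
    """Lazy-deletion binary heap illustrating weak-DecreaseKey semantics.

    In-memory stand-in for illustration only; the paper's bucket heap
    achieves these semantics cache-obliviously with batched bucket merges.
    """
    def __init__(self):
        self._heap = []    # (key, vertex) pairs, possibly stale
        self._best = {}    # vertex -> smallest key seen so far
        self._done = set() # vertices already extracted

    def update(self, v, d):
        # Weak DecreaseKey: only the smallest key ever offered matters.
        if v not in self._done and d < self._best.get(v, float('inf')):
            self._best[v] = d
            heapq.heappush(self._heap, (d, v))

    def delete_min(self):
        # Pop until a non-stale entry surfaces (lazy deletion).
        while self._heap:
            d, v = heapq.heappop(self._heap)
            if v not in self._done and d == self._best[v]:
                self._done.add(v)
                return v, d
        return None
```

In an SSSP setting, `update(v, d)` is called once per relaxed edge, and the stale duplicates are simply discarded when they reach the top of the heap.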


Citations
Proceedings ArticleDOI
11 Nov 2022
TL;DR: The Bε-tree is a simple I/O-efficient external-memory-model data structure that supports updates orders of magnitude faster than the B-tree, with query performance comparable to the B-tree.
Abstract: The Bε-tree [Brodal and Fagerberg 2003] is a simple I/O-efficient external-memory-model data structure that supports updates orders of magnitude faster than the B-tree with a query performance comparable to the B-tree: for any positive constant ε < 1, insertions and deletions take O((1/B^(1−ε)) log_B N) time (rather than O(log_B N) time for the classic B-tree), queries take O(log_B N) time, and range queries returning k items take O(log_B N + k/B) time. Although the Bε-tree has an optimal update/query tradeoff, the runtimes are amortized. Another structure, the write-optimized skip list, introduced by Bender et al. [PODS 2017], has the same performance as the Bε-tree but with runtimes that are randomized rather than amortized. In this paper, we present a variant of the Bε-tree with deterministic worst-case running times that are identical to the original's amortized running times.
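The buffering idea behind the Bε-tree can be sketched in memory. This is a hypothetical toy, not the paper's structure: `buffer_cap` stands in for the roughly B^(1−ε) messages a real node's buffer holds, and updates accumulate in a node's buffer until they are flushed downward in a batch, so each flush moves many updates per (simulated) block transfer.

```python
class BufferedNode:
    """Toy in-memory sketch of write-optimized buffering (hypothetical).

    Real Be-tree nodes buffer ~B^(1-eps) messages per block; here a small
    buffer_cap plays that role.
    """
    def __init__(self, pivots=None, buffer_cap=4):
        self.buffer = []              # pending (key, value) messages
        self.buffer_cap = buffer_cap  # flush threshold
        self.pivots = pivots or []    # split keys routing to children
        self.children = []            # child BufferedNode list
        self.leaf_data = {}           # materialized data at a leaf

    def insert(self, key, value):
        self.buffer.append((key, value))
        if len(self.buffer) > self.buffer_cap:
            self._flush()

    def _flush(self):
        if not self.children:         # leaf: apply messages directly
            self.leaf_data.update(self.buffer)
        else:                         # route each message to its child
            for key, value in self.buffer:
                i = sum(1 for p in self.pivots if key >= p)
                self.children[i].insert(key, value)
        self.buffer = []

    def query(self, key):
        # Pending messages on the search path shadow older data.
        for k, v in reversed(self.buffer):
            if k == key:
                return v
        if not self.children:
            return self.leaf_data.get(key)
        i = sum(1 for p in self.pivots if key >= p)
        return self.children[i].query(key)
```

The amortized speedup comes from the batching: a query still walks one root-to-leaf path, but an update is paid for only when its batch is flushed a level down.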


Journal ArticleDOI
TL;DR: In this article, a secure data-independent priority queue is proposed that supports polylogarithmic-time insertions and constant-time deletions and read-front operations, improving on the queue originally introduced by Toft.
Abstract: We introduce a secure data-independent priority queue which supports polylogarithmic-time insertion operations and constant-time deletions and read-front (aka peek) operations as opposed to the originally introduced queue by Toft (PODC '11). Moreover, we minimize the number of comparisons required to perform different operations on Toft's priority queue. Data-independent data structures—first identified explicitly by Toft, and further elaborated by Mitchell and Zimmerman (STACS '14)—serve the purpose of computing on encrypted data without executing branching code which can be used to avoid prohibitively expensive operations in secure computation applications. Focusing on the costly sorting operations, we show significant asymptotic improvements over prior privacy preserving dark pool applications. Dark pools are securities-trading venues which attain ad-hoc order privacy, by matching orders outside of publicly visible exchanges via the so-called dark pool operators. In this paper, we describe an efficient and secure dark pool (implementing a full continuous double auction) based on our new priority queue. Our construction's security guarantees are cryptographic based on secure multiparty computation (MPC), and do not require that the dark pool operators are trusted. Our construction improves upon the asymptotic efficiency attained by previous efforts. Existing cryptographic dark pools process new orders in time which grows linearly in the size of the standing order book; ours does so in polylogarithmic time. We describe a concrete implementation of our MPC protocol with malicious security in the honest majority setting. We also report benchmarks of our implementation and compare them to prior works. Our protocol reduces the total running time by several orders of magnitude over prior secure dark pool solutions.
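The "no branching on data" property that data-independent structures rely on can be illustrated with a branch-free compare-exchange and a fixed-schedule sorting network. This is a plain-Python illustration, not the paper's construction; in the actual MPC setting the comparison itself would be a secure protocol over secret-shared values.

```python
def cond_swap(a, b):
    """Branch-free compare-exchange: control flow is identical for all
    inputs; only arithmetic depends on the data."""
    c = int(a > b)              # under MPC this would be a secure comparison
    lo = c * b + (1 - c) * a    # arithmetic selection, no if-statement
    hi = c * a + (1 - c) * b
    return lo, hi

def oblivious_sort(xs):
    """Odd-even transposition sort: the comparison schedule depends only
    on len(xs), never on the values -- a data-independent sort."""
    xs = list(xs)
    n = len(xs)
    for rnd in range(n):
        for i in range(rnd % 2, n - 1, 2):
            xs[i], xs[i + 1] = cond_swap(xs[i], xs[i + 1])
    return xs
```

Because the sequence of memory accesses and comparisons is fixed in advance, an observer of the execution trace (or the parties in an MPC protocol) learns nothing about the values being sorted.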
Journal Article
TL;DR: A randomized algorithm for sorting K binary strings comprising N words in total in external memory; it works in the cache-oblivious model under the tall-cache assumption and improves on the (deterministic) algorithm of Arge et al.
Abstract: We give a randomized algorithm for sorting strings in external memory. For K binary strings comprising N words in total, our algorithm finds the sorted order and the longest-common-prefix sequence of the strings using O((K/B) log_{M/B}(K/M) log(N/K) + N/B) I/Os. This bound is never worse than O((K/B) log_{M/B}(K/B) log log_{M/B}(K/M) + N/B) I/Os, and improves on the (deterministic) algorithm of Arge et al. (On sorting strings in external memory, STOC '97). The error probability of the algorithm can be chosen as O(N^(−c)) for any positive constant c. The algorithm even works in the cache-oblivious model under the tall-cache assumption, i.e., assuming M > B^(1+ε) for some ε > 0. An implication of our result is improved construction algorithms for external-memory string dictionaries.
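The output the algorithm computes (sorted order plus longest-common-prefix sequence) can be made concrete with a small in-memory function. This illustrates only what is computed, not the randomized I/O-efficient algorithm itself; here lcp[i] is the length of the longest common prefix of the i-th and (i−1)-th strings in sorted order.

```python
def sort_with_lcp(strings):
    """Return the sorted order and LCP sequence of a list of strings.

    In-memory illustration of the algorithm's output, not the
    external-memory algorithm itself.
    """
    srt = sorted(strings)
    lcp = [0] * len(srt)
    for i in range(1, len(srt)):
        a, b = srt[i - 1], srt[i]
        k = 0
        while k < min(len(a), len(b)) and a[k] == b[k]:
            k += 1  # count matching leading characters
        lcp[i] = k
    return srt, lcp
```

The LCP sequence is what makes the result useful for dictionary construction: adjacent sorted strings share their lcp[i]-character prefixes, so a trie or string B-tree can be built without re-comparing those prefixes.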
Journal ArticleDOI
TL;DR: An extension is introduced that minimizes the cache complexity of neural networks by applying an appropriate cache-oblivious approach to neural networks.
Abstract: The latest direction in cache-aware/cache-efficient algorithms is to use cache-oblivious algorithms based on the cache-oblivious model, which is an improvement of the external-memory model. The cache-oblivious model utilizes memory hierarchies without knowing the memories' parameters in advance, since algorithms in this model are automatically tuned according to the actual memory parameters. As a result, cache-oblivious algorithms are particularly well suited to multi-level caches with changing parameters and to environments in which the amount of memory available to an algorithm can fluctuate. This paper shows the state of the art in cache-oblivious algorithms and data structures, each with its complexity in terms of cache misses, which is called cache complexity. Additionally, this paper introduces an extension to minimize the cache complexity of neural networks by applying an appropriate cache-oblivious approach to neural networks.
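The model described above is commonly illustrated with recursive divide-and-conquer, which adapts to any cache size without naming one. A minimal sketch of cache-oblivious matrix transposition on Python lists follows (an illustrative textbook example, not the paper's neural-network extension): no block size B appears in the code, yet the recursion eventually reaches subproblems that fit in any cache level.

```python
def transpose(A, B, ri=0, ci=0, n=None, m=None):
    """Cache-oblivious transpose of the n x m block of A at (ri, ci)
    into B, by recursively halving the longer dimension.

    Minimal sketch on Python lists; the point is that no cache
    parameter appears anywhere in the code.
    """
    if n is None:
        n, m = len(A), len(A[0])
    if n * m <= 4:                  # small base case: copy directly
        for i in range(n):
            for j in range(m):
                B[ci + j][ri + i] = A[ri + i][ci + j]
    elif n >= m:                    # split the longer dimension
        transpose(A, B, ri, ci, n // 2, m)
        transpose(A, B, ri + n // 2, ci, n - n // 2, m)
    else:
        transpose(A, B, ri, ci, n, m // 2)
        transpose(A, B, ri, ci + m // 2, n, m - m // 2)
    return B
```

Once a recursive subproblem fits in a cache of any size, all of its work is served from that cache, which is how the same code is "automatically tuned" across every level of the hierarchy.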