
Showing papers by "Alejandro López-Ortiz published in 2014"


Proceedings Article
05 Jan 2014
TL;DR: This work repeats the previous experiments using a native C implementation, thus removing potential extraneous effects of the JVM, provides analyses of the cache behavior of the dual pivot quicksort algorithm, and proposes a 3-pivot variant that performs very well in theory and practice.
Abstract: The idea of multi-pivot quicksort has recently received the attention of researchers after Vladimir Yaroslavskiy proposed a dual pivot quicksort algorithm that, contrary to prior intuition, outperforms standard quicksort by a significant margin under the Java JVM [10]. More recently, this algorithm has been analysed in terms of comparisons and swaps by Wild and Nebel [9]. Our contributions to the topic are as follows. First, we perform the previous experiments using a native C implementation thus removing potential extraneous effects of the JVM. Second, we provide analyses on cache behavior of these algorithms. We then provide strong evidence that cache behavior is causing most of the performance differences in these algorithms. Additionally, we build upon prior work in multi-pivot quicksort and propose a 3-pivot variant that performs very well in theory and practice. We show that it makes fewer comparisons and has better cache behavior than the dual pivot quicksort in the expected case. We validate this with experimental results, showing a 7--8% performance improvement in our tests.
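The abstract contains no code, so the following is a minimal Python sketch of the 3-pivot idea it describes: three pivots split the keys into four regions, each sorted recursively. It is an out-of-place illustration only, not the authors' cache-tuned, in-place C implementation, and the pivot-selection rule is an assumption.

```python
def three_pivot_quicksort(a):
    # Illustrative 3-pivot quicksort: pivots p <= q <= r split the remaining
    # keys into four regions, each of which is sorted recursively.
    if len(a) <= 3:
        return sorted(a)
    p, q, r = sorted((a[0], a[len(a) // 2], a[-1]))  # simple pivot choice (assumption)
    rest = list(a)
    for piv in (p, q, r):          # set the three chosen pivots aside
        rest.remove(piv)
    regions = ([], [], [], [])
    for x in rest:                 # one classification pass over every key
        if x < p:
            regions[0].append(x)
        elif x < q:
            regions[1].append(x)
        elif x < r:
            regions[2].append(x)
        else:
            regions[3].append(x)
    return (three_pivot_quicksort(regions[0]) + [p] +
            three_pivot_quicksort(regions[1]) + [q] +
            three_pivot_quicksort(regions[2]) + [r] +
            three_pivot_quicksort(regions[3]))

# Example: three_pivot_quicksort([5, 2, 9, 1]) returns [1, 2, 5, 9].
```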

34 citations


Proceedings ArticleDOI
10 Feb 2014
TL;DR: In this article, the authors consider the problem of managing a bounded size queue buffer where traffic consists of packets of varying size, each packet requires several rounds of processing before it can be transmitted out, and the goal is to maximize the throughput, i.e., total size of successfully transmitted packets.
Abstract: We consider the fundamental problem of managing a bounded size queue buffer where traffic consists of packets of varying size, each packet requires several rounds of processing before it can be transmitted out, and the goal is to maximize the throughput, i.e., total size of successfully transmitted packets. Our work addresses the tension between two conflicting algorithmic approaches: favoring packets with fewer processing requirements as opposed to packets of larger size. We present a novel model for studying such systems and study the performance of online algorithms that aim to maximize throughput.
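To make the tension the abstract describes concrete, here is a minimal Python simulation sketch of the model: a bounded buffer of packets, each with a size and a number of required processing rounds, and a policy that picks one buffered packet to process per round. The buffer bound (counted in packets), the one-packet-per-round assumption, and the two policy names are illustrative simplifications, not the paper's algorithms.

```python
from collections import namedtuple

# A packet has a size (counted toward throughput when transmitted) and a number
# of processing rounds still required before it can be sent out.
Packet = namedtuple("Packet", ["size", "passes_left"])

def simulate(arrivals, buffer_slots, policy, rounds):
    """Process one buffered packet per round under `policy`; packets that arrive
    when the buffer is full are dropped. Returns the total transmitted size."""
    buffer, throughput = [], 0
    for t in range(rounds):
        for pkt in arrivals.get(t, []):          # admit arrivals while slots remain
            if len(buffer) < buffer_slots:
                buffer.append(pkt)
        if buffer:
            i = policy(buffer)                   # choose which packet to process
            pkt = buffer[i]
            if pkt.passes_left == 1:             # last required round: transmit it
                throughput += pkt.size
                buffer.pop(i)
            else:
                buffer[i] = Packet(pkt.size, pkt.passes_left - 1)
    return throughput

# The two conflicting greedy approaches mentioned in the abstract (names are ours):
fewest_passes_first = lambda buf: min(range(len(buf)), key=lambda i: buf[i].passes_left)
largest_size_first  = lambda buf: max(range(len(buf)), key=lambda i: buf[i].size)
```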

20 citations


Book ChapterDOI
10 Mar 2014
TL;DR: In this paper, the authors study the online list update problem under the advice model of computation and show that advice of linear size is required and sufficient for a deterministic algorithm to achieve an optimal solution or even a competitive ratio better than 15/14.
Abstract: We study the online list update problem under the advice model of computation. Under this model, an online algorithm receives partial information about the unknown parts of the input in the form of some bits of advice generated by a benevolent offline oracle. We show that advice of linear size is required and sufficient for a deterministic algorithm to achieve an optimal solution or even a competitive ratio better than 15/14. On the other hand, we show that, surprisingly, two bits of advice are sufficient to break the lower bound of 2 on the competitive ratio of deterministic online algorithms and achieve a deterministic algorithm with a competitive ratio of $1.\bar{6}$. In this upper-bound argument, the bits of advice determine the algorithm with the smallest cost among three classical online algorithms.
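A minimal sketch of the advice mechanism the abstract describes: the offline oracle simulates the candidate list update algorithms on the whole request sequence and encodes the index of the cheapest one in two bits; the online side simply follows that algorithm. Which three classical algorithms are used is not stated in the abstract, so the candidates are left as callbacks and only Move-To-Front is spelled out as one possible candidate.

```python
def oracle_advice(request_seq, algorithms, initial_list):
    """Offline oracle (sketch): simulate every candidate list update algorithm on
    the full request sequence and return two bits encoding the cheapest one."""
    costs = [alg(list(initial_list), request_seq) for alg in algorithms]
    best = min(range(len(algorithms)), key=lambda i: costs[i])
    return format(best, "02b")               # two bits cover up to four candidates

def serve_with_advice(request_seq, algorithms, initial_list, advice_bits):
    """Online side (sketch): read the two advice bits once, then serve the whole
    sequence with the indicated algorithm."""
    return algorithms[int(advice_bits, 2)](list(initial_list), request_seq)

def mtf_cost(lst, requests):
    """Move-To-Front, one concrete candidate (full cost model: accessing the item
    at position i, counted from 1, costs i)."""
    total = 0
    for x in requests:
        i = lst.index(x)
        total += i + 1
        lst.insert(0, lst.pop(i))            # move the accessed item to the front
    return total
```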

18 citations


Proceedings ArticleDOI
23 Jun 2014
TL;DR: A new algorithm, Horizontal Harmonic, is presented whose competitive ratio converges to 1.59, making it a good choice for use by cloud service providers, and a general lower bound is proved showing that any online algorithm for the online fault-tolerant server consolidation problem has a competitive ratio of at least 1.42.
Abstract: In the server consolidation problem, the goal is to minimize the number of servers needed to host a set of clients. The clients appear in an online manner and each of them has a certain load. The servers have uniform capacity and the total load of clients assigned to a server must not exceed this capacity. Additionally, to have a fault-tolerant solution, the load of each client should be distributed between at least two different servers so that failure of one server avoids service interruption by migrating the load to the other servers hosting the respective second loads. In a simple setting, upon receiving a client, an online algorithm needs to select two servers and assign half of the load of the client to each server. We analyze the problem in the framework of competitive analysis. First, we provide upper and lower bounds for the competitive ratio of two well known heuristics which are introduced in the context of tenant placement in the cloud. In particular, we show their competitive ratios are no better than 2. We then present a new algorithm called Horizontal Harmonic and show that it has an improved competitive ratio which converges to 1.59. The simplicity of this algorithm makes it a good choice for use by cloud service providers. Finally, we prove a general lower bound that shows any online algorithm for the online fault-tolerant server consolidation problem has a competitive ratio of at least 1.42.
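The abstract spells out the simple setting precisely (each client's load is split in half across two distinct servers of uniform capacity), so here is a minimal Python sketch of that setting. The placement rule used below is a plain first-fit baseline for illustration only; it is not the paper's Horizontal Harmonic algorithm, whose class structure is not given in the abstract.

```python
def assign_fault_tolerant(loads, capacity):
    """Fault-tolerant placement sketch: each arriving client's load is split in
    half and the halves are placed on two *different* servers, here with a plain
    first-fit rule (a baseline, not the paper's Horizontal Harmonic)."""
    free = []                                     # remaining capacity per open server
    assignment = []                               # (server_a, server_b) per client
    for load in loads:
        half = load / 2.0
        chosen = []
        for i, cap in enumerate(free):            # scan open servers in order
            if cap >= half and len(chosen) < 2:
                chosen.append(i)
        while len(chosen) < 2:                    # open new servers if needed
            free.append(capacity)
            chosen.append(len(free) - 1)
        for i in chosen:
            free[i] -= half
        assignment.append(tuple(chosen))
    return assignment, len(free)

# Example: assign_fault_tolerant([1.0, 1.0], capacity=1.0) uses 2 servers,
# each holding one half of each client.
```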

15 citations


Book ChapterDOI
07 Jun 2014
TL;DR: This work introduces the first parameterized algorithm for the k-H-Packing with t-Overlap problem when H is an arbitrary graph of size r, and obtains an algorithm for packing sets with possible overlap, which is a version of the k-Set Packing problem.
Abstract: Finding subgraphs with arbitrary overlap was introduced as the k-H-Packing with t-Overlap problem in [10]. Specifically, does a given graph G have at least k induced subgraphs each isomorphic to a graph H such that any pair of subgraphs shares at most t vertices? This problem has applications in the discovery of overlapping communities in real networks. In this work, we introduce the first parameterized algorithm for the k-H-Packing with t-Overlap problem when H is an arbitrary graph of size r. Our algorithm combines a bounded search tree with a greedy localization technique and runs in time $O(r^{rk} k^{(r-t-1)k+2} n^{r})$, where n = |V(G)|, r = |V(H)|, and t < r. Applying similar ideas we also obtain an algorithm for packing sets with possible overlap, which is a version of the k-Set Packing problem.

13 citations


Journal ArticleDOI
TL;DR: An asymptotically optimal strategy is presented whose cost is within a multiplicative factor of Θ(log(m − t)) of the optimal search cost, achieving an optimal competitive ratio under this metric.

13 citations


Journal ArticleDOI
TL;DR: This paper gives matching (i.e., optimal) upper and lower bounds for the acceleration ratio under a simulation of contract algorithms using iterative deepening techniques, and shows how to evaluate the average acceleration ratio of the class of exponential strategies in the setting of n problem instances and m parallel processors.
Abstract: A contract algorithm is an algorithm which is given, as part of the input, a specified amount of allowable computation time. The algorithm must then complete its execution within the allotted time. An interruptible algorithm, in contrast, can be interrupted at an arbitrary point in time, at which point it must report its currently best solution. It is known that contract algorithms can simulate interruptible algorithms using iterative deepening techniques. This simulation is done at a penalty in the performance of the solution, as measured by the so-called acceleration ratio. In this paper we give matching (i.e., optimal) upper and lower bounds for the acceleration ratio under such a simulation. We assume the most general setting in which n problem instances must be solved by means of scheduling executions of contract algorithms in m identical parallel processors. This resolves an open conjecture of Bernstein, Finkelstein, and Zilberstein who gave an optimal schedule under the restricted setting of round robin and length-increasing schedules, but whose optimality in the general unrestricted case remained open. Lastly, we show how to evaluate the average acceleration ratio of the class of exponential strategies in the setting of n problem instances and m parallel processors. This is a broad class of schedules that tend to be either optimal or near-optimal, for several variants of the basic problem.
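To make the class of exponential strategies concrete, here is a small Python sketch: contract lengths grow geometrically with some base, and successive contract runs are spread over the m processors and n problem instances in a simple cyclic pattern. The specific base and assignment pattern are illustrative only, not the paper's optimal schedule.

```python
def exponential_schedule(n_instances, m_processors, base, num_contracts):
    """Sketch of an exponential strategy: the i-th contract run has length base**i;
    runs are spread over processors round-robin, and every block of m consecutive
    runs works on the same problem instance (an illustrative pattern only)."""
    schedule = {p: [] for p in range(m_processors)}   # per-processor (instance, start, end)
    finish = [0.0] * m_processors                     # current finish time per processor
    for i in range(num_contracts):
        length = float(base) ** i
        proc = i % m_processors
        inst = (i // m_processors) % n_instances
        start = finish[proc]
        schedule[proc].append((inst, start, start + length))
        finish[proc] = start + length
    return schedule

# Example: exponential_schedule(n_instances=2, m_processors=2, base=2, num_contracts=6)
# schedules contracts of lengths 1, 2, 4, 8, 16, 32 across the two processors.
```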

8 citations


Book ChapterDOI
13 Feb 2014
TL;DR: This work provides a new technique for this problem generalizing the crown decomposition technique and achieves a kernel with size bounded by 2(rk − r) for the k-\(\mathcal{G}\)-Packing with t-Overlap problem when t = r − 2 and \(\mathcal{G}\) is a clique of size r.
Abstract: We introduce the k-\(\mathcal{G}\)-Packing with t-Overlap problem to formalize the problem of finding communities in a network. In the k-\(\mathcal{G}\)-Packing with t-Overlap problem, we search for at least k communities with possible overlap. In contrast with previous work where communities are disjoint, we regulate the overlap through a parameter t. Our focus is the parameterized complexity of the k-\(\mathcal{G}\)-Packing with t-Overlap problem. Here, we provide a new technique for this problem generalizing the crown decomposition technique [2]. Using our global rule, we achieve a kernel with size bounded by 2(rk − r) for the k-\(\mathcal{G}\)-Packing with t-Overlap problem when t = r − 2 and \(\mathcal{G}\) is a clique of size r.

7 citations


Proceedings Article
01 Jan 2014
TL;DR: This work introduces an almost-online square packing algorithm which places squares in an online, sequential manner while receiving advice of logarithmic size from an offline oracle that runs in linear time.
Abstract: In the square packing problem, the goal is to pack squares of different sizes into the smallest number of bins (squares) of uniform size. We introduce an almost-online square packing algorithm which places squares in an online, sequential manner. In doing so, it receives advice of logarithmic size from an offline oracle which runs in linear time. Our algorithm achieves a competitive ratio of at most 1.84, which is significantly better than the best existing online algorithm, which has a competitive ratio of 2.1187. In introducing the algorithm, we have been inspired by the advice model for the analysis of online problems. Our algorithm can also be regarded as a streaming algorithm which packs an input sequence of squares in two passes using a space of logarithmic size.

6 citations


Posted Content
TL;DR: In this paper, the Ultra-Wide Word architecture and model, an extension of the word-RAM model that allows for constant time operations on thousands of bits in parallel, is introduced.
Abstract: The effective use of parallel computing resources to speed up algorithms in current multi-core parallel architectures remains a difficult challenge, with ease of programming playing a key role in the eventual success of various parallel architectures. In this paper we consider an alternative view of parallelism in the form of an ultra-wide word processor. We introduce the Ultra-Wide Word architecture and model, an extension of the word-RAM model that allows for constant time operations on thousands of bits in parallel. Word parallelism as exploited by the word-RAM model does not suffer from the more difficult aspects of parallel programming, namely synchronization and concurrency. For the standard word-RAM algorithms, the speedups obtained are moderate, as they are limited by the word size. We argue that a large class of word-RAM algorithms can be implemented in the Ultra-Wide Word model, obtaining speedups comparable to multi-threaded computations while keeping the simplicity of programming of the sequential RAM model. We show that this is the case by describing implementations of Ultra-Wide Word algorithms for dynamic programming and string searching. In addition, we show that the Ultra-Wide Word model can be used to implement a nonstandard memory architecture, which enables the sidestepping of lower bounds of important data structure problems such as priority queues and dynamic prefix sums. While similar ideas about operating on large words have been mentioned before in the context of multimedia processors [Thorup 2003], it is only recently that an architecture like the one we propose has become feasible and that details can be worked out.
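Python's arbitrary-precision integers make it easy to sketch the flavor of word-level parallelism the model relies on: many small fields packed into one very wide word are all updated by a single arithmetic operation. The field layout and the no-overflow condition below are a standard bit-parallel idiom shown for illustration; they are not the paper's instruction set or memory architecture.

```python
FIELD_BITS = 16          # width of each packed field (illustrative choice)
NUM_FIELDS = 256         # a 4096-bit "ultra-wide word" (illustrative choice)

def pack(values):
    """Pack a list of small non-negative integers into one wide word."""
    word = 0
    for i, v in enumerate(values):
        word |= v << (i * FIELD_BITS)
    return word

def unpack(word):
    mask = (1 << FIELD_BITS) - 1
    return [(word >> (i * FIELD_BITS)) & mask for i in range(NUM_FIELDS)]

def parallel_add(word_a, word_b):
    """Add all NUM_FIELDS fields at once with a single big-integer addition.
    Correct as long as every field sum stays below 2**FIELD_BITS, so that no
    carry spills into the neighbouring field."""
    return word_a + word_b

# Example: increment 256 packed counters simultaneously.
counters = pack(list(range(NUM_FIELDS)))
ones = pack([1] * NUM_FIELDS)
assert unpack(parallel_add(counters, ones)) == [v + 1 for v in range(NUM_FIELDS)]
```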

6 citations


Journal ArticleDOI
TL;DR: The first parameterized algorithm for the k-H-Packing with t-Overlap problem is introduced, which combines a bounded search tree with a greedy localization technique and runs in time $O(r^{rk} k^{(r-t-1)k+2} n^{r})$, where n = |V(G)|, r = |V(H)|, and t < r.
Abstract: We introduce the k-H-Packing with t-Overlap problem to formalize the problem of discovering overlapping communities in real networks. More precisely, in the k-H-Packing with t-Overlap problem, we search in a graph G for at least k subgraphs each isomorphic to a graph H such that any pair of subgraphs shares at most t vertices. In contrast with previous work where communities are disjoint, we regulate the overlap through a variable t. Our focus is on the parameterized complexity of the k-H-Packing with t-Overlap problem. Here, we provide a new technique for this problem generalizing the crown decomposition technique [2]. Using our global rule, we achieve a kernel with size bounded by 2(rk − r) for the k-Kr-Packing with (r − 2)-Overlap problem, that is, when H is a clique of size r and t = r − 2. In addition, we introduce the first parameterized algorithm for the k-H-Packing with t-Overlap problem when H is an arbitrary graph of size r. Our algorithm combines a bounded search tree with a greedy localization technique and runs in time $O(r^{rk} k^{(r-t-1)k+2} n^{r})$, where n = |V(G)|, r = |V(H)|, and t < r. Finally, we apply this search tree algorithm to the kernel obtained for the k-Kr-Packing with (r − 2)-Overlap problem, and we show that this approach is faster than applying a brute-force algorithm in the kernel. In all our results, r and t are constants.

Posted Content
TL;DR: It is proved that all Any-Fit strategies have a competitive ratio of at least μ, where μ is the max/min interval length ratio of jobs, and a simple algorithm called Move To Front (MTF) is introduced which has a competitive ratio of at most 6μ + 7.
Abstract: In Cloud systems, we often deal with jobs that arrive and depart in an online manner. Upon its arrival, a job should be assigned to a server. Each job has a size which defines the amount of resources that it needs. Servers have uniform capacity and, at all times, the total size of jobs assigned to a server should not exceed the capacity. This setting is closely related to the classic bin packing problem. The difference is that, in bin packing, the objective is to minimize the total number of used servers. In the Cloud, however, the charge for each server is proportional to the length of the time interval it is rented for, and the goal is to minimize the cost involved in renting all used servers. Recently, certain bin packing strategies were considered for renting servers in the Cloud [Li et al. SPAA'14]. There, it is proved that every Any-Fit bin packing strategy has a competitive ratio of at least $\mu$, where $\mu$ is the max/min interval length ratio of jobs. It is also shown that First Fit has a competitive ratio of $2\mu + 13$ while Best Fit is not competitive at all. We observe that the lower bound of $\mu$ extends to all online algorithms. We also prove that, surprisingly, the Next Fit algorithm has a competitive ratio of at most $2 \mu +1$. We also show that a variant of Next Fit achieves a competitive ratio of $K \times \max\{1,\mu/(K-1)\}+1$, where $K$ is a parameter of the algorithm. In particular, if the value of $\mu$ is known, the algorithm has a competitive ratio of $\mu+2$; this improves upon the existing upper bound of $\mu+8$. Finally, we introduce a simple algorithm called Move To Front (MTF) which has a competitive ratio of at most $6\mu + 7$ and also promising average-case performance. We experimentally study the average-case performance of different algorithms and observe that the typical behaviour of MTF is distinctively better than that of the other algorithms.
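A minimal Python sketch of a Move To Front placement rule. The abstract names the algorithm but does not spell out its rule, so the behaviour below (scan the server list from the front, place the job on the first server with enough free capacity, then move that server to the front; open a new server at the front if none fits) is inferred from the name and should be read as an assumption. Job departures and the per-server rental-time cost are omitted from this sketch.

```python
def move_to_front_packing(jobs, capacity):
    """Illustrative Move To Front placement (rule inferred from the name, not
    spelled out in the abstract). Returns the number of servers opened."""
    servers = []                          # each entry: remaining free capacity
    for size in jobs:
        placed = False
        for i, free in enumerate(servers):
            if free >= size:
                servers[i] -= size
                servers.insert(0, servers.pop(i))   # move the used server to the front
                placed = True
                break
        if not placed:
            servers.insert(0, capacity - size)      # open a fresh server at the front
    return len(servers)

# Example: move_to_front_packing([0.5, 0.6, 0.4, 0.3], capacity=1.0) opens 2 servers.
```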

Posted Content
TL;DR: An optimal strategy for searching with k robots starting from a common origin and moving at unit speed is developed and applied to more realistic scenarios such as differential search speeds, late arrival times to the search effort and low probability of detection under poor visibility conditions.
Abstract: We consider the problem of multiple agents or robots searching for a target in the plane. This is motivated by Search and Rescue (SAR) operations on the high seas, which in the past were often performed with several vessels, and more recently by swarms of aerial drones and/or unmanned surface vessels. Coordinating such a search in an effective manner is a non-trivial task. In this paper, we first develop an optimal strategy for searching with k robots starting from a common origin and moving at unit speed. We then apply the results from this model to more realistic scenarios such as differential search speeds, late arrival times to the search effort and low probability of detection under poor visibility conditions. We show that, surprisingly, the theoretical idealized model still governs the search with certain suitable minor adaptations.

Posted Content
TL;DR: The first algorithm with optimal average-case and close-to-best known worst-case performance for the classic on-line problem of bin packing is presented, and extensive experimental evaluation of the studied bin packing algorithms shows that the proposed algorithms have comparable average-case performance with Best Fit and First Fit, and this holds also for sequences that follow distributions other than the uniform distribution.
Abstract: In this paper we present the first algorithm with optimal average-case and close-to-best known worst-case performance for the classic on-line problem of bin packing. It has long been observed that known bin packing algorithms with optimal average-case performance were not optimal in the worst-case sense. In particular, First Fit and Best Fit had an optimal average-case ratio of 1 but a worst-case competitive ratio of 1.7. The wasted space of First Fit and Best Fit for a uniform random sequence of length $n$ is expected to be $\Theta(n^{2/3})$ and $\Theta(\sqrt{n} \log ^{3/4} n)$, respectively. The competitive ratio can be improved to 1.691 using the Harmonic algorithm; further variations of this algorithm can push down the competitive ratio to 1.588. However, Harmonic and its variations have poor performance on average; in particular, Harmonic has an average-case ratio of around 1.27. In this paper, we first introduce a simple algorithm which we term Harmonic Match. This algorithm performs as well as Best Fit on average, i.e., it has an average-case ratio of 1 and expected wasted space of $\Theta(\sqrt{n} \log ^{3/4} n)$. Moreover, the competitive ratio of the algorithm is as good as Harmonic, i.e., it converges to $1.691$, which is an improvement over the 1.7 of Best Fit and First Fit. We also introduce a different algorithm, termed Refined Harmonic Match, which achieves an improved competitive ratio of $1.636$ while maintaining the good average-case performance of Harmonic Match and Best Fit. Finally, our extensive experimental evaluation of the studied bin packing algorithms shows that our proposed algorithms have comparable average-case performance with Best Fit and First Fit, and this holds also for sequences that follow distributions other than the uniform distribution.
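For reference, here is a Python sketch of the classic Harmonic(K) classification that the abstract uses as its baseline: items are grouped by which interval (1/(i+1), 1/i] their size falls into, and each class is packed separately, i items of class i per bin. The handling of very small items (merging all classes above k) is a simplification, and the matching step that gives Harmonic Match its average-case behavior is not described in the abstract and is not shown here.

```python
def harmonic_k(items, k=12):
    """Classic Harmonic(K) online bin packing, shown as the baseline the new
    algorithms build on. An item of size s in (1/(i+1), 1/i] gets class i
    (classes above k are merged into class k here, a simplification), and a
    class-i bin is dedicated to that class and holds up to i items."""
    slots_left = {}          # class -> free slots in that class's currently open bin
    bins_used = 0
    for s in items:
        cls = min(int(1.0 // s), k)          # largest i with s <= 1/i, capped at k
        if slots_left.get(cls, 0) == 0:      # current bin of this class is full: open one
            bins_used += 1
            slots_left[cls] = cls
        slots_left[cls] -= 1
    return bins_used

# Example: harmonic_k([0.6, 0.6, 0.3, 0.3, 0.3]) opens two class-1 bins and one
# class-3 bin, i.e. returns 3.
```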

Posted Content
04 Oct 2014
TL;DR: An abstract model is developed which allows searches at sea to be understood under ideal conditions; the model is then applied under actual search conditions such as differential search speeds, arrival times to the search area and low probability of detection under poor visibility conditions.
Abstract: Motivated by the modern availability of drones and unmanned surface vessels as well as other low-cost search agents, we consider the problem of a swarm of robots searching for a target on the high seas. Coordinating such a search in an effective manner is a non-trivial task. In this paper we fully address this problem by first developing an abstract model which allows us to understand searches at sea under ideal conditions, and then applying the abstract model under actual search conditions such as differential search speeds, arrival times to the search area and low probability of detection under poor visibility conditions. We show that the theoretical model still governs the search with suitable adaptations. Lastly we give several search scenarios showing the cost effectiveness of such searches, deriving from lower cost and higher precision in the search. I. INTRODUCTION Historically, searches were conducted using a limited number (at most a handful) of vessels and aircraft. This placed heavy constraints on the type of solutions that could be considered, and this is duly reflected in the modern search and rescue literature (6), (7), (8), (9). However, the comparably low cost of surface or underwater unmanned vessels allows for searches using hundreds, if not thousands of vessels. For example, the cost of an unmanned search vehicle is in the order of tens of thousands of dollars (4), which can be amortized over hundreds of searches, while the cost of conventional searches ranges from the low hundred thousands of dollars up to sixty million dollars for high-profile searches such as Malaysia Airlines MH370 and Air France 447. This suggests that somewhere in the order of a few hundred to a few tens of thousands of robots can be realistically brought to bear in such a search. Motivated by this consideration we propose search and rescue strategies for the high seas using a large number of agents in an intelligent coordinated swarm fashion. Coordinating such a search in an effective manner is referred to as a "difficult task" in the search and rescue literature (5), (13), (14). In this paper we develop (1) an abstract model which allows us to understand searches at sea under ideal conditions, and (2) we progressively incorporate realistic assumptions in the model, specifically different search speeds, different arrival times to the search target and poor visibility conditions. We show that the initial key idea still governs the search under these conditions subject to a few minor adaptations. Lastly we give several search scenarios showing the cost effectiveness of such searches, deriving from lower cost of robot search hardware and higher precision in the search. We begin with the theoretical model for two and four robots of L´

Posted Content
TL;DR: NP-Completeness results are shown for all of the packing problems, and a dichotomy result is given for the $\mathcal{H}$-Packing with $t$-Membership problem analogous to that of Kirkpatrick and Hell.
Abstract: We consider the problem of discovering overlapping communities in networks, which we model as generalizations of Graph Packing problems with overlap. We seek a collection $\mathcal{S}' \subseteq \mathcal{S}$ consisting of at least $k$ sets subject to certain disjointness restrictions. In the $r$-Set Packing with $t$-Membership, each element of $\mathcal{U}$ belongs to at most $t$ sets of $\mathcal{S'}$ while in $t$-Overlap each pair of sets in $\mathcal{S'}$ overlaps in at most $t$ elements. Each set of $\mathcal{S}$ has at most $r$ elements. Similarly, both of our graph packing problems seek a collection $\mathcal{K}$ of at least $k$ subgraphs in a graph $G$ each isomorphic to a graph $H \in \mathcal{H}$. In $\mathcal{H}$-Packing with $t$-Membership, each vertex of $G$ belongs to at most $t$ subgraphs of $\mathcal{K}$ while in $t$-Overlap each pair of subgraphs in $\mathcal{K}$ overlaps in at most $t$ vertices. Each member of $\mathcal{H}$ has at most $r$ vertices and $m$ edges. We show NP-Completeness results for all of our packing problems and we give a dichotomy result for the $\mathcal{H}$-Packing with $t$-Membership problem analogous to that of Kirkpatrick and Hell \cite{Kirk78}. We reduce the $r$-Set Packing with $t$-Membership to a problem kernel with $O((r+1)^r k^{r})$ elements while we achieve a kernel with $O(r^r k^{r-t-1})$ elements for the $r$-Set Packing with $t$-Overlap. In addition, we reduce the $\mathcal{H}$-Packing with $t$-Membership and its edge version to problem kernels with $O((r+1)^r k^{r})$ and $O((m+1)^{m} k^{m})$ vertices, respectively. On the other hand, we achieve kernels with $O(r^r k^{r-t-1})$ and $O(m^{m} k^{m-t-1})$ vertices for the $\mathcal{H}$-Packing with $t$-Overlap and its edge version, respectively. In all cases, $k$ is the input parameter while $t$, $r$, and $m$ are constants.

Posted Content
TL;DR: In this article, the authors present and analyze methods for patrolling an environment with a distributed swarm of robots using a physical data structure -a distributed triangulation of the workspace, where a large number of stationary "mapping" robots cover and triangulate the environment and a smaller number of mobile "patrolling" robots move amongst them.
Abstract: We present and analyze methods for patrolling an environment with a distributed swarm of robots. Our approach uses a physical data structure - a distributed triangulation of the workspace. A large number of stationary "mapping" robots cover and triangulate the environment and a smaller number of mobile "patrolling" robots move amongst them. The focus of this work is to develop, analyze, implement and compare local patrolling policies. We desire strategies that achieve full coverage, but also produce good coverage frequency and visitation times. Policies that provide theoretical guarantees for these quantities have received some attention, but gaps have remained. We present: 1) A summary of how to achieve coverage by building a triangulation of the workspace, and the ensuing properties. 2) A description of simple local policies (LRV, for Least Recently Visited and LFV, for Least Frequently Visited) for achieving coverage by the patrolling robots. 3) New analytical arguments why different versions of LRV may require worst case exponential time between visits of triangles. 4) Analytical evidence that a local implementation of LFV on the edges of the dual graph is possible in our scenario, and immensely better in the worst case. 5) Experimental and simulation validation for the practical usefulness of these policies, showing that even a small number of weak robots with weak local information can greatly outperform a single, powerful robot with full information and computational capabilities.
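A minimal Python sketch of the Least Recently Visited (LRV) rule on an abstract adjacency graph of triangles: at each step the patrolling robot moves to whichever neighboring triangle was visited longest ago. The graph representation and the tie-breaking rule are illustrative choices; the paper's distributed, local implementation and the LFV variant on the dual graph's edges are not reproduced here.

```python
def lrv_patrol(adjacency, start, steps):
    """Least Recently Visited (LRV) walk: from the current triangle, move to the
    adjacent triangle whose last visit is oldest (never-visited counts as oldest).
    `adjacency` maps each triangle id to the list of its neighbors."""
    last_visit = {v: -1 for v in adjacency}     # -1 means "never visited"
    current, time = start, 0
    last_visit[current] = time
    trace = [current]
    for _ in range(steps):
        time += 1
        # Oldest-visited neighbor; ties broken by triangle id for determinism.
        current = min(adjacency[current], key=lambda v: (last_visit[v], v))
        last_visit[current] = time
        trace.append(current)
    return trace

# Example on a 4-cycle of triangles:
# lrv_patrol({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}, start=0, steps=4)
# returns [0, 1, 2, 3, 0].
```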

Journal ArticleDOI
TL;DR: A study of the Hausdorff Core problem on simple polygons is presented; a polygon Q is a k-bounded Hausdorff Core of a polygon P if P contains Q, Q is convex, and the Hausdorff distance between P and Q is at most k.
Abstract: A polygon \(Q\) is a \(k\)-bounded Hausdorff Core of a polygon \(P\) if \(P\) contains \(Q\), \(Q\) is convex, and the Hausdorff distance between \(P\) and \(Q\) is at most \(k\). A Hausdorff Core of \(P\) is a \(k\)-bounded Hausdorff Core of \(P\) with the minimum possible value of \(k\), which we denote \(k_{\min}\). Given any \(k\) and any \(\varepsilon > 0\), we describe an algorithm for computing a \(k'\)-bounded Hausdorff Core (if one exists) in \(O(n^3+n^2\varepsilon^{-4}(\log n+ \varepsilon^{-2}))\) time, where \(k' < k+d_{\text{rad}}\cdot\varepsilon\) and \(d_{\text{rad}}\) is the radius of the smallest disc that encloses \(P\) and whose center is in \(P\). We use this solution to provide an approximation algorithm for the optimization Hausdorff Core problem which results in a solution of size \(k_{\min}+d_{\text{rad}}\cdot\varepsilon\) in \(O(\log(\varepsilon^{-1})(n^3+n^2\varepsilon^{-4}(\log n+ \varepsilon^{-2})))\) time. Finally, we describe an approximation scheme for the \(k\)-bounded Hausdorff Core problem which, given a polygon \(P\), a distance \(k\), and any \(\varepsilon > 0\), answers true if there is a \(((1+\varepsilon)k)\)-bounded Hausdorff Core and false if there is no \(k\)-bounded Hausdorff Core. The running time of the approximation scheme is in \(O(n^3+n^2\varepsilon^{-4}(\log n+ \varepsilon^{-2}))\).

Journal ArticleDOI
TL;DR: A generic translation of any recursive sequential implementation of a divide-and-conquer algorithm into an implementation that benefits from running in parallel in both multi-cores and GPUs is described.
Abstract: In the last few years, the development of programming languages for general purpose computing on Graphic Processing Units (GPUs) has led to the design and implementation of fast parallel algorithms for this architecture for a large spectrum of applications. Given the streaming-processing characteristics of GPUs, most practical applications consist of tasks that admit highly data-parallel algorithms. Many problems, however, allow for task-parallel solutions or a combination of task and data-parallel algorithms. For these, a hybrid CPU-GPU parallel algorithm that combines the highly parallel stream-processing power of GPUs with the higher scalar power of multi-cores is likely to be superior. In this paper we describe a generic translation of any recursive sequential implementation of a divide-and-conquer algorithm into an implementation that benefits from running in parallel in both multi-cores and GPUs. This translation is generic in the sense that it requires little knowledge of the particular algorithm. We then present a schedule and work division scheme that adapts to the characteristics of each algorithm and the underlying architecture, efficiently balancing the workload between GPU and CPU. Our experiments show a 4.5x speedup over a single core recursive implementation, while demonstrating the accuracy and practicality of the approach.
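A schematic Python sketch of the kind of translation the abstract describes: the top of the divide-and-conquer recursion runs in parallel on CPU threads, and once subproblems are small enough they are handed to a data-parallel backend. The `gpu_solve` callback is a hypothetical stand-in for an actual GPU kernel launch, and the fixed size threshold is a placeholder for the paper's adaptive scheduling and work-division scheme.

```python
from concurrent.futures import ThreadPoolExecutor

def hybrid_divide_and_conquer(problem, split, merge, gpu_solve, threshold, workers=4):
    """Generic hybrid scheme (sketch): recurse on the CPU, splitting the problem
    and merging results, until a subproblem is at most `threshold` in size, at
    which point it is delegated to `gpu_solve` (a stand-in for a GPU kernel)."""
    def solve(p):
        if len(p) <= threshold:
            return gpu_solve(p)                  # data-parallel leaf work
        left, right = split(p)
        return merge(solve(left), solve(right))

    # Run the top-level halves concurrently on CPU threads (illustrative only;
    # the paper's scheduler balances CPU and GPU work adaptively).
    left, right = split(problem)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        fl, fr = pool.submit(solve, left), pool.submit(solve, right)
        return merge(fl.result(), fr.result())

# Example instantiation: mergesort with sorted() standing in for the GPU leaf solver.
# hybrid_divide_and_conquer(list(range(1000, 0, -1)),
#                           split=lambda p: (p[:len(p) // 2], p[len(p) // 2:]),
#                           merge=lambda a, b: sorted(a + b),   # simple merge for brevity
#                           gpu_solve=sorted, threshold=64)
```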

DOI
01 Jan 2014
TL;DR: This report documents the program and the outcomes of Dagstuhl Seminar 14091 "Data Structures and Advanced Models of Computation on Big Data".
Abstract: This report documents the program and the outcomes of Dagstuhl Seminar 14091 "Data Structures and Advanced Models of Computation on Big Data". In today's computing environment vast amounts of data are processed, exchanged and analyzed. The manner in which information is stored profoundly influences the efficiency of these operations over the data. In spite of the maturity of the field many data structuring problems are still open, while new ones arise due to technological advances. The seminar covered both recent advances in the "classical" data structuring topics as well as new models of computation adapted to modern architectures, scientific studies that reveal the need for such models, applications where large data sets play a central role, modern computing platforms for very large data, and new data structures for large data in modern architectures. The extended abstracts included in this report contain both recent state of the art advances and lay the foundation for new directions within data structures research.