Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Proceedings ArticleDOI
J-NVM: Off-heap Persistent Objects in Java
TL;DR: J-NVM is a framework for efficiently accessing Non-Volatile Main Memory (NVMM) in Java using failure-atomic blocks; internally, it relies on proxy objects that mediate direct off-heap access to NVMM.
Journal ArticleDOI
Efficient Group K Nearest-Neighbor Spatial Query Processing in Apache Spark
TL;DR: The authors present the first distributed group K nearest-neighbor (GKNN) query algorithm in Apache Spark, a memory-based framework suitable for real-time and batch processing, and compare it against an implementation in Apache Hadoop.
Journal ArticleDOI
A graphical heuristic for reduction and partitioning of large datasets for scalable supervised training
Sumedh Yadav, Mathis Bode
TL;DR: A scalable graphical method is presented for selecting and partitioning datasets for the training phase of a classification task, using a clustering algorithm to keep the method's computation cost in reasonable proportion to the training task itself.
Posted Content
Parallel String Graph Construction and Transitive Reduction for De Novo Genome Assembly
TL;DR: This work introduces new distributed-memory parallel algorithms for the overlap detection and layout simplification steps of de novo genome assembly and implements them in the diBELLA 2D pipeline, paving the way for efficient de novo assembly of large genomes from long reads in distributed memory.
Proceedings ArticleDOI
Towards Adaptive Flow Programming for the IoT: The Fluidware Approach
TL;DR: The objective of this position paper is to present Fluidware, a proposal towards an innovative programming model for the IoT, conceived to ease the development of flexible and robust large-scale IoT services and applications.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets that runs on large clusters of commodity machines and is highly scalable.
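The MapReduce model summarized above can be sketched in plain Python. This is a toy single-process illustration of the three phases (map, shuffle, reduce), not the paper's actual API; the names `map_fn`, `reduce_fn`, and `map_reduce` are illustrative only.

```python
from collections import defaultdict
from itertools import chain

def map_fn(document):
    # Map phase: emit an intermediate (word, 1) pair for every word.
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    # Reduce phase: combine all partial counts for one key.
    return (word, sum(counts))

def map_reduce(documents):
    # Shuffle phase: group intermediate pairs by key, as the framework
    # does between the map and reduce stages.
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map_fn(d) for d in documents):
        groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

counts = map_reduce(["big data", "big clusters"])
# counts == {"big": 2, "data": 1, "clusters": 1}
```

In the real system the map and reduce tasks run in parallel across a cluster, and the runtime handles partitioning, scheduling, and fault recovery.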
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica
TL;DR: Resilient Distributed Datasets (RDDs), a distributed-memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner, are presented and implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
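The key idea behind RDDs, recording a lineage of transformations and replaying it instead of replicating data, can be sketched in pure Python. `ToyRDD` is a hypothetical stand-in, not Spark's API; only the general shape (lazy `map`/`filter`, eager `collect`) mirrors the paper's design.

```python
class ToyRDD:
    # Records a lineage of transformations instead of materializing data,
    # loosely mirroring how RDDs stay lazy until an action runs.
    def __init__(self, source, ops=()):
        self.source = source      # the original data (or its loader)
        self.ops = list(ops)      # lineage: transformations applied so far

    def map(self, f):
        return ToyRDD(self.source, self.ops + [("map", f)])

    def filter(self, p):
        return ToyRDD(self.source, self.ops + [("filter", p)])

    def collect(self):
        # Action: replay the lineage over the source. After a lost
        # partition, Spark recomputes it the same way -- from lineage,
        # not from a replicated copy of the data.
        data = list(self.source)
        for kind, f in self.ops:
            data = [f(x) for x in data] if kind == "map" else [x for x in data if f(x)]
        return data

squares = ToyRDD(range(5)).map(lambda x: x * x).filter(lambda x: x > 3)
print(squares.collect())   # [4, 9, 16]
```

Because transformations only extend the lineage, building `squares` touches no data; work happens once at `collect`.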
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model for parallel computation, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski
TL;DR: A model for processing large graphs is presented, designed for efficient, scalable, and fault-tolerant execution on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
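Pregel's vertex-centric, superstep-synchronous style can be illustrated with the classic maximum-value propagation example, sketched here as a single-process toy. The function `pregel_max` and its message-passing details are this sketch's assumptions, not Pregel's actual API.

```python
def pregel_max(graph, values):
    """Propagate the maximum vertex value, one BSP superstep at a time.

    graph:  vertex -> list of neighbor vertices
    values: vertex -> initial integer value (mutated in place)
    """
    # Superstep 0: every vertex is active and sends its value out.
    active = set(graph)
    while active:
        outbox = {v: [] for v in graph}
        for v in active:
            for n in graph[v]:
                outbox[n].append(values[v])
        # Synchronization barrier: messages are delivered between supersteps.
        active = set()
        for v, inbox in outbox.items():
            if inbox and max(inbox) > values[v]:
                values[v] = max(inbox)
                active.add(v)   # only changed vertices stay active
    return values               # all vertices halted: computation done

g = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(pregel_max(g, {"a": 1, "b": 5, "c": 3}))  # {'a': 5, 'b': 5, 'c': 5}
```

The barrier between supersteps is the "implied synchronicity" the summary mentions: every vertex sees only messages sent in the previous superstep, so there are no data races to reason about.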
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.