Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Journal ArticleDOI
Statistical strategies for the analysis of massive data sets
TL;DR: Many simple techniques can handle large data sets efficiently and intuitively, including distributed analysis methodologies, clever subsampling, data coarsening, and data reductions that exploit concepts such as sufficiency.
Journal ArticleDOI
Software Packet-Level Network Analytics at Cloud Scale
TL;DR: In this paper, the authors propose to offload only critical preprocessing tasks (e.g., load balancing) to a line-rate hardware frontend while optimizing the core analytics software to exploit properties of network analytics workloads.
Journal ArticleDOI
JP-DAP: An Intelligent Data Analytics Platform for Metro Rail Transport Systems
TL;DR: In this paper, the Jaison-Paul Data Analytics Platform (JP-DAP) is proposed for metro rail transport systems; it is intended to ensure smooth functioning, improved customer experience, ridership forecasting, and efficient administration by integrating and analysing the systems' many data sources.
Proceedings ArticleDOI
Big Data Processing: Scalability with Extreme Single-Node Performance
Venkatraman Govindaraju, Sam Idicula, Sandeep R. Agrawal, Venkatanathan Vardarajan, Arun Raghavan, Jarod Wen, Cagri Balkesen, Georgios Giannikis, Nipun Agarwal, Eric Sedlar +9 more
TL;DR: This work analyzes workloads which distribute operations on correlated data—such as joins and aggregation found in SQL, text similarity searches, and image disparity computations and describes techniques to overcome challenges in scaling the applications to hundreds of nodes on a high-bandwidth network.
Posted Content
Comparing Spark vs MPI/OpenMP On Word Count MapReduce.
TL;DR: This paper presents a high-performance MapReduce design in MPI/OpenMP and uses it to compare with Spark on the classic word count MapReduce task, showing that the MPI/OpenMP MapReduce outperforms Apache Spark by about 300%.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
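The MapReduce model described above can be sketched in a few lines of plain Python (a single-process illustration, not the distributed implementation; `map_fn`, `reduce_fn`, and `map_reduce` are names invented here): the map phase emits key-value pairs, a shuffle groups them by key, and the reduce phase aggregates each group.

```python
from collections import defaultdict

# Word count is the canonical MapReduce example:
# map emits (word, 1) pairs, reduce sums the counts per word.
def map_fn(document):
    for word in document.split():
        yield (word, 1)

def reduce_fn(word, counts):
    return (word, sum(counts))

def map_reduce(documents):
    # Map phase: apply map_fn to every input record.
    intermediate = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            intermediate[key].append(value)  # shuffle: group values by key
    # Reduce phase: apply reduce_fn to each key group.
    return dict(reduce_fn(k, v) for k, v in intermediate.items())

result = map_reduce(["big data", "big clusters"])
# result == {"big": 2, "data": 1, "clusters": 1}
```

In the real system the map and reduce tasks run on different machines and the shuffle moves data over the network; the `defaultdict` here stands in for that grouping step.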
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented, a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner; RDDs are implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
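The fault-tolerance idea behind RDDs can be illustrated with a minimal single-process sketch (the `SketchRDD` class and its methods are invented for illustration and are not Spark's API): each dataset records the transformation that produced it, so a lost result can be recomputed from its lineage rather than replicated.

```python
class SketchRDD:
    """Illustrative stand-in for an RDD: lazy, immutable, and
    recomputable from its recorded lineage (not the real Spark API)."""

    def __init__(self, source=None, parent=None, fn=None):
        self._source = source  # base data for a leaf dataset
        self._parent = parent  # upstream dataset in the lineage graph
        self._fn = fn          # transformation to apply to the parent

    def map(self, f):
        # Transformations only record lineage; nothing is computed yet.
        return SketchRDD(parent=self, fn=lambda data: [f(x) for x in data])

    def filter(self, pred):
        return SketchRDD(parent=self, fn=lambda data: [x for x in data if pred(x)])

    def collect(self):
        # An action: walk the lineage chain and (re)compute from the source.
        if self._parent is None:
            return list(self._source)
        return self._fn(self._parent.collect())

nums = SketchRDD(source=range(5))
evens_squared = nums.filter(lambda x: x % 2 == 0).map(lambda x: x * x)
# evens_squared.collect() == [0, 4, 16]
```

Because `collect` rebuilds the result from the lineage every time, losing the computed output is harmless; Spark applies the same idea per partition across a cluster, with caching to avoid recomputation on the common path.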
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model for parallel computation, along with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
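A BSP computation proceeds in supersteps of local computation, communication, and a global barrier. The toy sketch below (function name and power-of-two assumption are mine) simulates a hypercube-style all-reduce: with p processors, log2(p) supersteps suffice for every processor to hold the global sum.

```python
def bsp_allreduce_sum(local_values):
    """Simulate a global-sum all-reduce on p (power-of-two) simulated
    processors in log2(p) BSP supersteps."""
    p = len(local_values)
    assert p & (p - 1) == 0, "sketch assumes a power-of-two processor count"
    values = list(local_values)
    step = 1
    while step < p:
        # Communication phase: processor i exchanges its value with its
        # partner at distance `step` (index i XOR step).
        messages = [values[i ^ step] for i in range(p)]
        # Barrier, then the local-compute phase of the next superstep:
        # every processor adds the message it received.
        values = [values[i] + messages[i] for i in range(p)]
        step *= 2
    return values

# bsp_allreduce_sum([1, 2, 3, 4]) == [10, 10, 10, 10]
```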
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
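Pregel's vertex-centric style can be sketched with its classic maximum-value example (the function name, graph encoding, and superstep cap are invented here): in each superstep every vertex processes its incoming messages, updates its value if it improved, and sends the new value along its out-edges; when no vertex changes, all have effectively voted to halt.

```python
def pregel_max(edges, values, max_supersteps=30):
    """Propagate the maximum vertex value through a directed graph,
    one superstep at a time, in the style of Pregel."""
    inbox = {v: [] for v in values}      # messages delivered this superstep
    # Superstep 0: every vertex announces its value to its neighbors.
    for v, val in values.items():
        for dst in edges.get(v, []):
            inbox[dst].append(val)
    for _ in range(max_supersteps):
        outbox = {v: [] for v in values}
        changed = False
        for v, msgs in inbox.items():
            new_val = max([values[v]] + msgs)
            if new_val > values[v]:      # value improved: vertex stays active
                values[v] = new_val
                changed = True
                for dst in edges.get(v, []):
                    outbox[dst].append(new_val)
        if not changed:                  # every vertex voted to halt
            break
        inbox = outbox                   # barrier: deliver next superstep's messages
    return values

graph = {1: [2], 2: [3], 3: [1]}        # a directed 3-cycle
# pregel_max(graph, {1: 3, 2: 6, 3: 1}) == {1: 6, 2: 6, 3: 6}
```

The barrier between supersteps is what the TL;DR's "implied synchronicity" refers to: within a superstep, vertices only ever see messages from the previous one, which makes the program's behavior deterministic and easy to reason about.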
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.