Journal Article
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Journal Article
Distributed arrays: an algebra for generic distributed query processing
TL;DR: A fairly complex algorithm for distributed density-based similarity clustering is described, showing the use of the distributed algebra as a language for distributed query processing; the algorithm is a novel contribution in itself.
Proceedings Article
Comparison of the HPC and Big Data Java Libraries Spark, PCJ and APGAS
TL;DR: This paper compares the big data library Spark with the HPC libraries PCJ and APGAS regarding productivity and performance, including both the original APGAS version and the authors' own extension with locality-flexible tasks.
Proceedings Article
A Novel Approach for Insight Finding Mechanism on ClickStream Data Using Hadoop
TL;DR: The main theme of this paper is to analyze clickstream data gathered from an online retail e-commerce website using the Hadoop framework, employing tools such as Pig, Hive, and Sqoop, which work on the MapReduce model, to process big data efficiently.
Journal Article
PyBDA: a command line tool for automated analysis of big biological data sets
Simon Dirmeier, Mario Emmenlauer, Christoph Dehio, Niko Beerenwinkel +5 more
TL;DR: PyBDA is a novel machine learning command line tool for automated, distributed analysis of big biological data sets; it uses Apache Spark as its backend and Snakemake to automatically schedule jobs on a high-performance computing cluster.
Journal Article
SCALPEL3: A scalable open-source library for healthcare claims databases.
Emmanuel Bacry, Stéphane Gaïffas, Fanny Leroy, Maryan Morel, Dinh Phong Nguyen, Youcef Sebiat, Dian Sun +6 more
TL;DR: SCALPEL3 is a scalable open-source framework for studies involving large observational databases (LODs), focusing on scalable medical concept extraction, easy interactive analysis, and data-flow analysis helpers that accelerate studies performed on LODs.
References
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets that runs on large clusters of commodity machines and is highly scalable.
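The programming model summarized above can be illustrated with a single-process Python sketch of its three phases. This is only an illustration of the model, not the distributed implementation described in the paper; the function names (`map_phase`, `shuffle_phase`, `reduce_phase`) are ours, not MapReduce API names:

```python
from collections import defaultdict

def map_phase(records, mapper):
    """Apply the user-defined map function to every input record."""
    pairs = []
    for record in records:
        pairs.extend(mapper(record))
    return pairs

def shuffle_phase(pairs):
    """Group intermediate values by key (done by the framework)."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reducer):
    """Apply the user-defined reduce function to each key group."""
    return {key: reducer(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example.
def word_mapper(line):
    return [(word, 1) for word in line.split()]

def count_reducer(word, counts):
    return sum(counts)

lines = ["big data spark", "big data hadoop"]
result = reduce_phase(shuffle_phase(map_phase(lines, word_mapper)), count_reducer)
# result == {"big": 2, "data": 2, "spark": 1, "hadoop": 1}
```

In the real system only the mapper and reducer are user code; partitioning, shuffling, scheduling, and fault recovery are handled by the framework.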
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented: a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
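The core idea of the abstraction, lazy evaluation plus recorded lineage, can be sketched in a few lines of single-process Python. This is a toy model of the concept, not Spark's actual implementation, although the method names mirror Spark's API:

```python
class RDD:
    """Minimal single-process sketch of a resilient distributed dataset:
    an immutable, lazily evaluated collection that records the lineage
    used to derive it, so a lost partition can be recomputed from its
    parents rather than restored from replicated storage."""

    def __init__(self, compute):
        self._compute = compute  # lineage: how to (re)build this dataset

    @staticmethod
    def parallelize(data):
        return RDD(lambda: list(data))

    def map(self, f):
        return RDD(lambda: [f(x) for x in self._compute()])

    def filter(self, pred):
        return RDD(lambda: [x for x in self._compute() if pred(x)])

    def collect(self):
        # An action: only now does the chain of transformations run.
        # Re-calling it replays the lineage, which is how a lost
        # partition would be recovered in the real system.
        return self._compute()

even_squares = RDD.parallelize(range(5)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
result = even_squares.collect()
# result == [0, 4, 16]
```

Transformations (`map`, `filter`) only extend the lineage; work happens when an action (`collect`) is invoked.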
Journal Article
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model, together with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings Article
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
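Pregel's vertex-centric style can be sketched with maximum-value propagation, the introductory example used in the Pregel paper. This single-process version (our own `pregel_max` helper, not Pregel's API) captures the superstep/message/halt structure:

```python
def pregel_max(graph, values):
    """Vertex-centric, Pregel-style computation: in each superstep,
    every active vertex processes its incoming messages, updates its
    value, and sends messages to its neighbors; the run halts once no
    vertex changes. Each vertex propagates the largest value seen."""
    values = dict(values)
    changed = set(graph)           # superstep 0: all vertices active
    while changed:
        # send phase: changed vertices announce their value
        inbox = {v: [] for v in graph}
        for v in changed:
            for u in graph[v]:
                inbox[u].append(values[v])
        # barrier, then receive phase of the next superstep
        changed = set()
        for v in graph:
            if inbox[v] and max(inbox[v]) > values[v]:
                values[v] = max(inbox[v])
                changed.add(v)
    return values

chain = {"a": ["b"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c"]}
result = pregel_max(chain, {"a": 3, "b": 6, "c": 2, "d": 1})
# result == {"a": 6, "b": 6, "c": 6, "d": 6}
```

A vertex that receives no messages stays inactive; when every vertex is inactive the computation terminates, which is the halting condition the synchronous model makes easy to reason about.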
Proceedings Article
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
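The dataflow-graph idea behind Dryad, sequential vertices wired together by channels, can be sketched as a single-process topological execution. Everything the TL;DR lists as hard (scheduling onto machines, failure recovery, data transport) is elided here; the `run_dag` helper and the merge/count example are our hypothetical illustration, and inputs to a vertex are simply ordered alphabetically by source name for simplicity:

```python
from graphlib import TopologicalSorter  # Python 3.9+

def run_dag(vertices, edges, inputs):
    """Execute a Dryad-style dataflow graph in one process: vertices
    are sequential functions, edges are data channels. A real engine
    would additionally schedule vertices onto cluster machines and
    re-execute any that fail."""
    deps = {v: set() for v in vertices}
    for src, dst in edges:
        deps[dst].add(src)
    results = dict(inputs)          # source vertices come pre-filled
    for v in TopologicalSorter(deps).static_order():
        if v not in results:
            args = [results[u] for u in sorted(deps[v])]
            results[v] = vertices[v](*args)
    return results

vertices = {
    "read_a": None,                 # sources: data supplied as inputs
    "read_b": None,
    "merge": lambda a, b: a + b,    # concatenate the two partitions
    "count": len,
}
edges = [("read_a", "merge"), ("read_b", "merge"), ("merge", "count")]
out = run_dag(vertices, edges, {"read_a": [1, 2], "read_b": [3]})
# out["count"] == 3
```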