Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TLDR
This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Book ChapterDOI
EvoChef: Show Me What to Cook! Artificial Evolution of Culinary Arts
TL;DR: EvoChef is the first semi-automated, open source, and valid recipe generator; it creates easy-to-follow, novel recipes that improve with the number of generations.
Posted Content
From Complex Event Processing to Simple Event Processing.
TL;DR: A simple, generic, and extensible framework for processing event streams of diverse types is proposed; the paper describes in detail a stream processing engine, called BeepBeep, that implements these principles.
Book ChapterDOI
Semantic Foundations for Deterministic Dataflow and Stream Processing
TL;DR: An abstract typed framework of stream transductions and transducers can be used to verify the correctness of streaming computations, prove the soundness of optimizing transformations, and inform the design of programming models and query languages for stream processing.
Journal ArticleDOI
MPI windows on storage for HPC applications
Sergio Rivas-Gomez, Roberto Gioiosa, Ivy Bo Peng, Gokcen Kestor, Sai Narasimhamurthy, Erwin Laure, Stefano Markidis +6 more
TL;DR: The design and implementation of MPI storage windows are described, their benefits for out-of-core execution, parallel I/O, and fault tolerance are presented, and the integration of heterogeneous window allocations is explored.
Journal ArticleDOI
"Know your epidemic, know your response": Epidemiological assessment of the substance use disorder crisis in the United States.
TL;DR: In this paper, the authors identified the populations at highest risk of substance use disorder (SUD) mortality in the U.S. and the locations where these vulnerable populations are concentrated.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets that runs on large clusters of commodity machines and is highly scalable.
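The programming model behind MapReduce can be sketched in plain Python as a toy, single-process simulation (not the paper's distributed implementation): the user supplies a map function that emits key/value pairs and a reduce function that aggregates the values for each key, while the framework performs the shuffle that groups intermediate values by key.

```python
from collections import defaultdict

def map_fn(doc):
    # Map phase: emit a (word, 1) pair for each word in the document.
    for word in doc.split():
        yield (word, 1)

def reduce_fn(key, values):
    # Reduce phase: sum all counts emitted for one word.
    return sum(values)

def map_reduce(inputs, map_fn, reduce_fn):
    # Shuffle: group intermediate values by key (done by the framework
    # across machines in the real system).
    groups = defaultdict(list)
    for item in inputs:
        for key, value in map_fn(item):
            groups[key].append(value)
    return {key: reduce_fn(key, values) for key, values in groups.items()}

docs = ["big data", "big clusters process big data"]
print(map_reduce(docs, map_fn, reduce_fn))
# → {'big': 3, 'data': 2, 'clusters': 1, 'process': 1}
```

Word count is the canonical MapReduce example; the same skeleton expresses joins, sorts, and index construction by swapping the two user functions.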
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented: a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
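The key idea, fault tolerance through lineage rather than data replication, can be sketched with a hypothetical single-process class (this is not Spark's API, just an illustration): each dataset records its parent and the transformation that produced it, so its contents can always be recomputed from the base data.

```python
class RDD:
    """Toy resilient dataset: records its lineage (parent + transformation)
    so that any lost partition can be recomputed instead of restored
    from a replica."""

    def __init__(self, source=None, parent=None, transform=None):
        self._source = source        # base data (only for the root RDD)
        self._parent = parent        # parent RDD in the lineage graph
        self._transform = transform  # function applied to the parent's rows

    def map(self, fn):
        return RDD(parent=self, transform=lambda rows: [fn(r) for r in rows])

    def filter(self, pred):
        return RDD(parent=self, transform=lambda rows: [r for r in rows if pred(r)])

    def collect(self):
        # Recompute by replaying the lineage chain; a real engine caches
        # partitions in memory and replays lineage only after a failure.
        if self._parent is None:
            return list(self._source)
        return self._transform(self._parent.collect())

nums = RDD(source=range(1, 6))
result = nums.map(lambda x: x * x).filter(lambda x: x % 2 == 1).collect()
print(result)
# → [1, 9, 25]
```

Because transformations are lazy (nothing runs until `collect`), the lineage graph is cheap to build and gives the scheduler a full view of the computation.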
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model for parallel computation, and results are presented quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: This paper presents a model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
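The vertex-centric, superstep-synchronized style can be illustrated with a toy max-value propagation in plain Python (a common introductory example for this model; a single-process sketch, not the distributed implementation): in each superstep every active vertex sends its value to its neighbors, and vertices that receive no larger value stop participating.

```python
def pregel_max(graph, values):
    """Toy synchronous vertex program: each superstep, every active vertex
    sends its value to its neighbors, then each vertex adopts the maximum
    value it received. Unchanged vertices halt; the run ends when all halt."""
    active = set(graph)
    while active:
        # Messages are delivered between supersteps (barrier synchronization).
        inbox = {v: [] for v in graph}
        for v in active:
            for neighbor in graph[v]:
                inbox[neighbor].append(values[v])
        active = set()
        for v, messages in inbox.items():
            if messages and max(messages) > values[v]:
                values[v] = max(messages)
                active.add(v)  # changed vertices stay active next superstep
    return values

# Undirected ring of four vertices; the maximum value (9) floods the graph.
graph = {"a": ["b", "d"], "b": ["a", "c"], "c": ["b", "d"], "d": ["c", "a"]}
print(pregel_max(graph, {"a": 3, "b": 9, "c": 1, "d": 5}))
# → {'a': 9, 'b': 9, 'c': 9, 'd': 9}
```

The barrier between supersteps is what makes reasoning simple: a vertex only ever sees messages from the previous superstep, never a partially updated neighbor.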
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.