Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Proceedings ArticleDOI
Event Stream Processing on Heterogeneous System Architecture
TL;DR: A prototypical event processing framework based on the Heterogeneous System Architecture (HSA) is developed, and it is shown that a variety of new HSA features enable iGPUs to be an affordable accelerator for a wide range of event processing queries.
Book ChapterDOI
Semantic Data Integration for the SMT Manufacturing Process Using SANSA Stack
TL;DR: The SANSA Stack is deployed to enable uniform access to Surface-Mount Technology (SMT) data, and an ergonomic visual user interface is proposed to help nontechnical users cope with the various concepts underlying the process and conveniently interact with the data.
Proceedings ArticleDOI
Unifying Data and Replica Placement for Data-intensive Services in Geographically Distributed Clouds
TL;DR: CPR is a unified paradigm that combines data placement and replication of data-intensive services in geographically distributed clouds as a joint optimization problem; at its core lies an overlapping correlation clustering algorithm capable of assigning a data item to multiple data centers.
Journal ArticleDOI
A Serverless-Based, On-the-Fly Computing Framework for Remote Sensing Image Collection
TL;DR: Proof-of-concept experiments suggest that the on-the-fly computing model for remote sensing data analysis can be efficiently implemented as serverless software; the corresponding software architecture based on serverless computing commodities is presented.
Journal ArticleDOI
Spark-based adaptive Mapreduce data processing method for remote sensing imagery
TL;DR: An adaptive Spark-based remote sensing data processing method on the cloud is proposed, achieving improved performance, stability, and scalability compared to the existing Hadoop-based method.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on a large cluster of commodity machines and is highly scalable.
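The MapReduce model described above can be illustrated with the classic word-count example. This is a toy single-process sketch, not the actual MapReduce framework: the `map_phase`, `shuffle`, and `reduce_phase` names are illustrative stand-ins for work the real system distributes across a cluster.

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit an intermediate (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Group intermediate pairs by key, as the framework does between phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts emitted for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data", "big clusters"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 1, 'clusters': 1}
```

In the real system, map and reduce tasks run in parallel on many machines and the shuffle moves data over the network; the programmer supplies only the two functions.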
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented, a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner; the abstraction is implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
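The key RDD ideas, lazy transformations and lineage-based fault tolerance, can be sketched in a few lines. This is a hypothetical single-machine toy, not Spark's API: the `RDD` class here only records how each dataset derives from its parent, so any result can be recomputed from the lineage chain.

```python
class RDD:
    """Toy resilient distributed dataset: records lineage (parent +
    transformation) instead of eagerly materializing data."""

    def __init__(self, source=None, parent=None, fn=None):
        self.source = source  # base data for a leaf RDD
        self.parent = parent  # lineage: the RDD this one derives from
        self.fn = fn          # transformation applied to the parent's data

    def map(self, f):
        # Lazy: build a new RDD node, compute nothing yet.
        return RDD(parent=self, fn=lambda data: [f(x) for x in data])

    def filter(self, pred):
        return RDD(parent=self, fn=lambda data: [x for x in data if pred(x)])

    def collect(self):
        # Action: walk the lineage chain and recompute. This is also how a
        # lost partition would be rebuilt after a failure.
        if self.parent is None:
            return list(self.source)
        return self.fn(self.parent.collect())

nums = RDD(source=range(1, 6))
result = nums.map(lambda x: x * x).filter(lambda x: x % 2 == 1).collect()
print(result)  # [1, 9, 25]
```

Real RDDs partition the data across machines and checkpoint long lineage chains, but the recovery principle is the same: replay the recorded transformations rather than replicate the data.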
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this role, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs is presented, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
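Pregel's vertex-centric, superstep-synchronous style can be sketched with a toy computation in which every vertex adopts the largest value it hears about. This is an illustrative single-process sketch under assumed names (`pregel_max`, `inbox`), not Pregel's actual API.

```python
def pregel_max(graph, values):
    """Toy Pregel-style computation: in each superstep, active vertices
    message their neighbors, then every vertex keeps the max it received.
    A vertex stays active only while its value changes; the run halts
    when all vertices vote to halt (no active vertices remain)."""
    values = dict(values)
    active = set(graph)  # all vertices start active
    while active:
        # Superstep: each active vertex sends its value along its out-edges.
        inbox = {v: [] for v in graph}
        for v in active:
            for n in graph[v]:
                inbox[n].append(values[v])
        # Synchronous update between supersteps.
        active = set()
        for v, msgs in inbox.items():
            best = max(msgs, default=values[v])
            if best > values[v]:
                values[v] = best
                active.add(v)
    return values

graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(pregel_max(graph, {"a": 3, "b": 1, "c": 7}))
# {'a': 7, 'b': 7, 'c': 7}
```

The barrier between supersteps is what makes reasoning easy: all messages sent in superstep `i` are delivered before any vertex computes in superstep `i + 1`, so there are no data races to consider.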
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.