Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Proceedings ArticleDOI
Towards making sense of Spark-SQL performance for processing vast distributed RDF datasets
TL;DR: A systematic evaluation of the performance of the Spark-SQL engine for processing SPARQL queries is presented, using three relevant RDF relational schemas and two different storage backends, Hive and HDFS.
Journal ArticleDOI
RandPro- A practical implementation of random projection-based feature extraction for high dimensional multivariate data analysis in R
R. Siddharth, Gnanasekaran Aghila +1 more
TL;DR: This article describes a practical implementation of the random projection method in the popular statistical programming language R and compares it with other similar implementations.
Proceedings ArticleDOI
Cloud Infrastructure for Storing and Processing EEG and ERP Experimental Data
Petr Ježek, Lukáš Vařeka +1 more
TL;DR: A cloud-based system for the EEG/ERP domain is presented, containing distributed data storage, a signal processing method library, and a client GUI; it was tested using a machine learning workflow based on the data stored in the system.
Proceedings ArticleDOI
A Scalable System for Neural Architecture Search
Jeff Hajewski, Suely Oliveira +1 more
TL;DR: This work proposes an RPC-based system that is robust to node failures and provides elastic compute abilities, allowing the system to add or remove computational resources as needed, and is demonstrated on the task of neural architecture search for image classification using the CIFAR-10 dataset.
Journal ArticleDOI
Safe-by-default Concurrency for Modern Programming Languages
TL;DR: In this paper, the authors present a design principle they call safety-by-default and performance-by-choice.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets; it runs on large clusters of commodity machines and is highly scalable.
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented, a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner; the abstraction is implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this role, along with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.