Journal Article
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Journal Article
Fireworks: Reproducible Machine Learning and Preprocessing with PyTorch
Saad Khan, Libusha Kelly +1 more
TL;DR: A batch-processing library for constructing machine learning pipelines with PyTorch and dataframes, providing an easy way to stream data from a dataset into a machine learning model while performing preprocessing steps such as randomization, train/test splitting, and batch normalization along the way.
Journal Article
Instruments of change for academic tool development
Georg E. Fantner, Andrew C. Oates +1 more
TL;DR: The authors describe how spreading the word about an open-source scientific tool, and enabling others to replicate it, is an evolving art that is ripe for an open-source revolution, and discuss how best to distribute such tools.
Proceedings Article
Workflow Environments for Advanced Cyberinfrastructure Platforms
Rosa M. Badia, Jorge Ejarque, Francesc Lordan, Daniele Lezzi, Javier Conejero, Javier Álvarez Cid-Fuentes, Yolanda Becerra, Anna Queralt +7 more
TL;DR: The vision is that future workflow environments and tools for the development of scientific workflows should follow a holistic approach, where both data and computing are integrated in a single flow built on simple, high-level interfaces.
Journal Article
SMusket: Spark-based DNA error correction on distributed-memory systems
TL;DR: This paper proposes SparkMusket (SMusket), a Big Data tool built upon the open-source Apache Spark cluster computing framework to boost the performance of Musket, one of the most widely adopted and top-performing multithreaded correctors.
Proceedings Article
A Scalable Framework for Online Power Modelling of High-Performance Computing Nodes in Production
TL;DR: This paper describes a methodology and framework for training power models, derived with two best-in-class procedures, directly and online on in-production nodes, without requiring dedicated training instances.
References
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
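The MapReduce model summarized above can be illustrated with a minimal single-process sketch (a toy in plain Python, not Google's implementation): a map function emits key/value pairs, a shuffle groups them by key, and a reduce function aggregates each group.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all emitted values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the quick brown fox", "the lazy dog"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts["the"])  # 2
```

In the real system each phase runs in parallel across a cluster, with the runtime handling partitioning, scheduling, and fault recovery.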
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs), a distributed-memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner, are presented and implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
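The RDD idea can be sketched very loosely in plain Python (not Spark's actual API): each dataset records its lineage, the parent it was derived from and the transformation applied, so a lost partition can be recomputed from lineage instead of being replicated.

```python
class SketchRDD:
    """Toy RDD: records lineage (parent + transformation) and only
    materializes data when an action such as collect() is called."""

    def __init__(self, data=None, parent=None, transform=None):
        self._data = data            # only set for source datasets
        self._parent = parent        # lineage: parent dataset
        self._transform = transform  # lineage: how to derive from parent

    def map(self, fn):
        return SketchRDD(parent=self,
                         transform=lambda rows: [fn(r) for r in rows])

    def filter(self, pred):
        return SketchRDD(parent=self,
                         transform=lambda rows: [r for r in rows if pred(r)])

    def collect(self):
        # Recompute from lineage; this is also how a lost partition
        # would be recovered after a failure.
        if self._parent is None:
            return list(self._data)
        return self._transform(self._parent.collect())

rdd = SketchRDD(data=range(10)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
print(rdd.collect())  # [0, 4, 16, 36, 64]
```

Note that `map` and `filter` build the lineage lazily; no data moves until `collect()` is invoked, mirroring Spark's transformation/action split.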
Journal Article
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this role, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
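A BSP computation proceeds in supersteps: local computation, then communication, then a global barrier. A minimal thread-based sketch of one superstep (illustrative only, not from the paper):

```python
import threading

def bsp_sum(values, n_workers=2):
    """Each worker sums its slice (local computation), publishes a
    partial result (communication), and waits at a barrier before
    the partials are combined."""
    partials = [0] * n_workers
    barrier = threading.Barrier(n_workers)
    chunk = (len(values) + n_workers - 1) // n_workers

    def worker(i):
        partials[i] = sum(values[i * chunk:(i + 1) * chunk])  # local step
        barrier.wait()  # superstep boundary: global synchronization

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n_workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(partials)

print(bsp_sum(list(range(100))))  # 4950
```

The barrier is the defining feature: no worker proceeds to the next superstep until every worker has finished the current one.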
Proceedings Article
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
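Pregel's vertex-centric model ("think like a vertex") can be sketched in plain Python (a toy, not Google's API): in each superstep every vertex with incoming messages updates its value and may message its neighbors, and the computation halts at quiescence, when no messages remain. The classic example is propagating the maximum value through a graph:

```python
def pregel_max(graph, values):
    """Each vertex adopts the largest value it has seen and forwards
    it to its neighbors; supersteps continue until no messages flow."""
    values = dict(values)
    # Superstep 0: every vertex announces its value to its neighbors.
    inbox = {v: [] for v in graph}
    for v in graph:
        for nbr in graph[v]:
            inbox[nbr].append(values[v])
    while any(inbox.values()):
        outbox = {v: [] for v in graph}
        for v, msgs in inbox.items():
            if not msgs:
                continue  # vertex is inactive this superstep
            best = max(msgs)
            if best > values[v]:
                values[v] = best
                for nbr in graph[v]:  # only changed vertices send
                    outbox[nbr].append(best)
        inbox = outbox  # barrier: next superstep begins
    return values

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
print(pregel_max(graph, {"a": 3, "b": 6, "c": 1}))  # {'a': 6, 'b': 6, 'c': 6}
```

Sending only on change is what drives the computation to quiescence; the synchronous superstep boundary is what makes the program easy to reason about.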
Proceedings Article
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.