Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Posted Content
MARS-Gym: A Gym framework to model, train, and evaluate Recommender Systems for Marketplaces
Marlesson R. O. Santana, Luckeciano Carvalho Melo, Fernando H. F. Camargo, Bruno Brandão, Anderson da Silva Soares, Renan M. Oliveira, Sandor Caetano +6 more
TL;DR: The MArketplace Recommender Systems Gym (MARS-Gym) is proposed, an open-source framework to empower researchers and engineers to quickly build and evaluate Reinforcement Learning agents for recommendations in marketplaces, and to bridge the gap between academic research and production systems.
Book ChapterDOI
Addressing the Big Data Multi-class Imbalance Problem with Oversampling and Deep Learning Neural Networks.
TL;DR: Results show that ROS and SMOTE are not always enough to improve classifier performance on the minority classes; however, they slightly increase the overall performance of the classifier compared to the unsampled data.
Journal ArticleDOI
Ensemble machine learning modeling for the prediction of artemisinin resistance in malaria
Colby T. Ford, Daniel Janies +1 more
TL;DR: This work develops machine learning models using novel methods for transforming isolate data and handling the tens of thousands of variables that result from these data transformation exercises, and shows the utility of ensemble machine learning modeling for highly effective predictions of both goals of this challenge.
Journal ArticleDOI
FAIRly big: A framework for computationally reproducible processing of large-scale data
TL;DR: The authors present FAIRly big, a DataLad-based, domain-agnostic framework suitable for reproducible processing of large-scale data in compliance with open science mandates; it captures machine-actionable computational provenance records that can be used to retrace and verify the origins of research outcomes.
Book ChapterDOI
Chapter 6 Big Data and FAIR Data for Data Science
TL;DR: This chapter reviews the modern phenomena of Big Data and FAIR data in data storage and processing, including the Internet of FAIR Data & Services (IFDS).
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
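The MapReduce model summarized above can be illustrated with a minimal, single-process sketch: user code supplies a map function and a reduce function, and the framework groups intermediate key/value pairs between the two phases. All names here (`run_mapreduce`, `wordcount_map`, `wordcount_reduce`) are illustrative and not part of any real MapReduce API.

```python
from collections import defaultdict

def run_mapreduce(records, map_fn, reduce_fn):
    # Map phase: emit (key, value) pairs from each input record.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            intermediate[key].append(value)
    # The shuffle is implicit in the grouping above; reduce each key's values.
    return {key: reduce_fn(key, values) for key, values in intermediate.items()}

def wordcount_map(line):
    for word in line.split():
        yield word, 1

def wordcount_reduce(word, counts):
    return sum(counts)

result = run_mapreduce(["big data", "big clusters"], wordcount_map, wordcount_reduce)
print(result)  # {'big': 2, 'data': 1, 'clusters': 1}
```

In the real system the map and reduce phases run on different machines and the shuffle moves data across the network; this sketch only shows the programming model's division of labor.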
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets is presented, a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner and is implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
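The core RDD idea described above can be sketched in a few lines: a dataset is represented by its lineage (a parent dataset plus a transformation), so a lost in-memory result can be recomputed from its inputs rather than restored from replicas. The `ToyRDD` class and its methods are illustrative only; this is not the Spark API.

```python
class ToyRDD:
    def __init__(self, source=None, parent=None, transform=None):
        self.source = source        # base data, for leaf datasets
        self.parent = parent        # lineage: parent dataset
        self.transform = transform  # lineage: function applied to parent
        self.cache = None           # optional in-memory materialization

    def map(self, fn):
        return ToyRDD(parent=self, transform=lambda data: [fn(x) for x in data])

    def filter(self, pred):
        return ToyRDD(parent=self, transform=lambda data: [x for x in data if pred(x)])

    def collect(self):
        if self.cache is not None:
            return self.cache
        if self.source is not None:
            return list(self.source)
        # Recompute from lineage: fault tolerance without replication.
        return self.transform(self.parent.collect())

    def persist(self):
        self.cache = self.collect()
        return self

nums = ToyRDD(source=range(5))
evens_squared = nums.filter(lambda x: x % 2 == 0).map(lambda x: x * x).persist()
print(evens_squared.collect())  # [0, 4, 16]
evens_squared.cache = None      # simulate losing the cached partition
print(evens_squared.collect())  # recomputed from lineage: [0, 4, 16]
```

The design choice this sketch highlights is that lineage makes caching cheap: nothing is written twice, because any cached result can always be rebuilt by replaying its transformations.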
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this role, along with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: Pregel is a model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
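The vertex-centric, bulk-synchronous style summarized above can be sketched with the classic maximum-value example: in each superstep every vertex reads its incoming messages, updates its value, and messages its neighbors only if its value changed; the computation halts when no messages remain. The function and variable names are illustrative, not the Pregel API.

```python
def pregel_max(edges, values):
    # edges: {vertex: [neighbors]}; values: {vertex: initial value}
    values = dict(values)
    # Superstep 0: every vertex sends its own value to its neighbors.
    outbox = {v: values[v] for v in values}
    while outbox:
        # Barrier: all messages from the previous superstep are delivered
        # before any vertex computes in the next one.
        inbox = {}
        for v, val in outbox.items():
            for n in edges.get(v, []):
                inbox.setdefault(n, []).append(val)
        outbox = {}
        for v, msgs in inbox.items():
            best = max(msgs)
            if best > values[v]:
                values[v] = best
                outbox[v] = best  # value changed: stay active, re-send
            # otherwise the vertex votes to halt by sending nothing
    return values

graph = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
result = pregel_max(graph, {"a": 3, "b": 1, "c": 7})
print(result)  # {'a': 7, 'b': 7, 'c': 7}
```

The superstep barrier is what the TL;DR calls "implied synchronicity": since no message crosses a superstep boundary early, each vertex program can be reasoned about one round at a time.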
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.