Journal ArticleDOI
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Journal ArticleDOI
Accurate Differentially Private Deep Learning on the Edge
TL;DR: In this paper, the authors present a systematic analysis that unveils the influential factors capable of mitigating local and aggregated noise, and design PrivateDL to leverage these factors in noise calibration, improving model accuracy while fulfilling privacy guarantees.
Journal ArticleDOI
Scalable algorithm for generation of attribute implication base using FP-growth and spark
TL;DR: In this article, the authors propose a scalable algorithm that finds the implication base using the machine-learning technique FP-growth and the big data processing framework Apache Spark, executed on large formal contexts.
Posted ContentDOI
Rapid reconstruction of neural circuits using tissue expansion and lattice light sheet microscopy
Joshua L. Lillvis, Hideo Otsuna, Xiaoyu Ding, Igor Pisarev, Takashi Kawase, Jennifer Colonell, Konrad Rokicki, Cristian Goina, Ruixuan Gao, Amy Hu, Kaiyu Wang, John A. Bogovic, Daniel E. Milkie, Linus Meienberg, Edward S. Boyden, Stephan Saalfeld, Paul W. Tillberg, Barry J. Dickson +21 more
TL;DR: In this article, a protocol and analysis pipeline using tissue expansion and lattice light-sheet microscopy (ExLLSM) was developed to rapidly reconstruct selected circuits across many samples with single synapse resolution and molecular contrast.
Proceedings ArticleDOI
Big Data Processing: Batch-based processing and stream-based processing
Sarah Benjelloun, Mohamed El Mehdi El Aissi, Yassine Loukili, Younes Lakhrissi, Safae El Haj Ben Ali, Hiba Chougrad, Abdessamad El Boushaki +6 more
TL;DR: Two types of big data processing methods are defined, namely batch-based processing and stream-based processing, which have distinctly different use cases, architectures, and tools.
Journal ArticleDOI
Linking place records using multi-view encoders
Vinícius Cousseau, Luciano Barbosa +1 more
TL;DR: This work detects replicated places using a deep-learning model, named PlacERN, that relies on multi-view encoders, and shows how the model can solve the place linkage problem in an end-to-end fashion by fitting it into a pipeline.
References
Journal ArticleDOI
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
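To illustrate the programming model this summary describes, here is a minimal pure-Python sketch of the map, shuffle, and reduce phases, applied to the paper's canonical word-count example. The helper names (`map_phase`, `shuffle`, `reduce_phase`) are illustrative only; they are not part of the paper's C++ implementation.

```python
from collections import defaultdict

def map_phase(records, map_fn):
    """Apply the user's map function to each input record,
    emitting intermediate (key, value) pairs."""
    intermediate = []
    for record in records:
        intermediate.extend(map_fn(record))
    return intermediate

def shuffle(pairs):
    """Group intermediate values by key -- the step the framework
    performs between the map and reduce phases."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups, reduce_fn):
    """Apply the user's reduce function to each key's value list."""
    return {key: reduce_fn(key, values) for key, values in groups.items()}

# Word count, the canonical MapReduce example:
def word_map(doc):
    return [(word, 1) for word in doc.split()]

def word_reduce(key, values):
    return sum(values)

docs = ["spark unifies batch and streaming", "batch and interactive"]
counts = reduce_phase(shuffle(map_phase(docs, word_map)), word_reduce)
# counts["batch"] == 2, counts["and"] == 2
```

In the real system, map tasks and reduce tasks run in parallel across thousands of machines, and the shuffle moves data over the network; this sketch only shows the data flow.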
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Resilient Distributed Datasets (RDDs) are presented: a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner, implemented in a system called Spark and evaluated through a variety of user applications and benchmarks.
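The key idea in this summary is that an RDD records its lineage (parent dataset plus transformation) so a lost partition can be recomputed rather than replicated, and transformations stay lazy until an action forces evaluation. A toy single-machine sketch of that idea (the class and method names mirror Spark's API but this is not Spark itself):

```python
class RDD:
    """Toy resilient-dataset sketch: an immutable collection defined
    entirely by a recipe for recomputing it (its lineage)."""

    def __init__(self, compute):
        self._compute = compute  # zero-arg function that rebuilds the data

    @classmethod
    def parallelize(cls, data):
        data = list(data)
        return cls(lambda: list(data))

    def map(self, f):
        # Transformations are lazy: they only extend the lineage chain.
        return RDD(lambda: [f(x) for x in self._compute()])

    def filter(self, pred):
        return RDD(lambda: [x for x in self._compute() if pred(x)])

    def collect(self):
        # Actions force evaluation of the whole lineage chain.
        return self._compute()

squares = RDD.parallelize(range(5)).map(lambda x: x * x).filter(lambda x: x % 2 == 0)
result = squares.collect()  # can be recomputed from lineage at any time
```

In Spark proper, each partition of the dataset carries this lineage, so only lost partitions are recomputed, in parallel, after a failure.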
Journal ArticleDOI
A bridging model for parallel computation
TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate for this bridging role, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings ArticleDOI
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: A model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
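Pregel's vertex-centric, bulk-synchronous style can be sketched in a few lines: in each superstep every active vertex sends messages to its neighbours, then updates from the messages it received, and halts when nothing changes. A minimal pure-Python example computing maximum-value propagation (the function name and graph encoding are illustrative, not Pregel's C++ API):

```python
def pregel_max(graph, initial):
    """Superstep loop in the Pregel style: vertices exchange their
    current value with neighbours and adopt the maximum received,
    halting once no vertex changes.
    graph:   vertex -> list of neighbour vertices
    initial: vertex -> starting value"""
    values = dict(initial)
    active = set(graph)
    while active:
        # Message phase: active vertices send their value along edges.
        inbox = {v: [] for v in graph}
        for v in active:
            for nbr in graph[v]:
                inbox[nbr].append(values[v])
        # Compute phase: vertices update; unchanged vertices vote to halt.
        active = set()
        for v, msgs in inbox.items():
            if msgs and max(msgs) > values[v]:
                values[v] = max(msgs)
                active.add(v)
    return values

graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
result = pregel_max(graph, {"a": 3, "b": 6, "c": 1})
# every vertex converges to the global maximum, 6
```

The barrier between the message and compute phases is what the summary calls "implied synchronicity": within a superstep no vertex sees a message sent in that same superstep, which makes the programs deterministic and easy to reason about.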
Proceedings ArticleDOI
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.