Journal Article
Apache Spark: a unified engine for big data processing
Matei Zaharia, Reynold Xin, Patrick Wendell, Tathagata Das, Michael Armbrust, Ankur Dave, Xiangrui Meng, Josh Rosen, Shivaram Venkataraman, Michael J. Franklin, Ali Ghodsi, Joseph E. Gonzalez, Scott Shenker, Ion Stoica +13 more
TL;DR: This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
Citations
Posted Content
Apache Spark Streaming and HarmonicIO: A Performance and Architecture Comparison.
TL;DR: Presents a performance benchmark comparing Apache Spark Streaming (ASS), under both file and TCP streaming modes, with HarmonicIO, measuring maximum throughput over a broad range of message sizes and CPU loads.
Journal Article
Development Trend of Computer Network Security Technology Based on the Big Data Era
TL;DR: Experimental results show that the data-trained model reaches a high detection accuracy, demonstrating that network security urgently needs big data analysis technology in order to meet the demands of its development.
Journal Article
Interactive Algorithms in Complex Image Processing Systems Based on Big Data
Yuanjin Xu, Xiaojun Liu +1 more
TL;DR: Experimental results show that the interactive algorithm in this complex image processing system improves the image extraction rate, the antinoise performance of segmentation, and the segmentation of deep depression regions.
Proceedings Article
Polystore++: Accelerated Polystore System for Heterogeneous Workloads
TL;DR: Polystore++ is envisioned, an architecture to accelerate existing polystore systems using hardware accelerators (e.g., FPGAs, CGRAs, and GPUs) and can achieve high performance at low power by identifying and offloading components of a polystore system that are amenable to acceleration using specialized hardware.
Book Chapter
Machine learning and data analytics
TL;DR: This chapter presents the current advancements toward the analysis of medical data to address the unmet needs for several diseases including patient stratification, detection of biomarkers, and effective treatment monitoring, among others.
References
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat +1 more
TL;DR: Presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
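As a rough single-process illustration of the programming model this abstract describes, the following sketch separates the map, shuffle, and reduce phases for a word count. The function names and the word-count task are illustrative, not taken from the paper:

```python
from collections import defaultdict

def map_phase(documents):
    # Map: emit a (word, 1) pair for every word in every document.
    for doc in documents:
        for word in doc.split():
            yield (word, 1)

def shuffle(pairs):
    # Shuffle: group intermediate values by key.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: sum the counts for each word.
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data", "big clusters"]
counts = reduce_phase(shuffle(map_phase(docs)))
print(counts)  # {'big': 2, 'data': 1, 'clusters': 1}
```

In the real system each phase runs in parallel across a cluster; the sketch only shows the data flow between phases.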
Proceedings Article
Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing
Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, Ion Stoica +8 more
TL;DR: Presents Resilient Distributed Datasets (RDDs), a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner, implemented in a system called Spark and evaluated through a variety of user applications and benchmarks.
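A minimal sketch of the lineage idea behind fault tolerance in this abstract (the class and method names are illustrative, not Spark's API): each dataset records how it was derived from its parent, transformations are lazy, and a lost partition can be recomputed from the lineage chain rather than restored from a replica:

```python
class SketchRDD:
    """Toy dataset that records its lineage for recomputation."""

    def __init__(self, source=None, parent=None, fn=None):
        self.source = source  # base data, for leaf datasets
        self.parent = parent  # parent dataset in the lineage chain
        self.fn = fn          # transformation applied to the parent

    def map(self, fn):
        # Transformations are lazy: they only extend the lineage graph.
        return SketchRDD(parent=self, fn=fn)

    def compute(self):
        # (Re)compute results by replaying the lineage chain,
        # as would happen after a partition is lost.
        if self.parent is None:
            return list(self.source)
        return [self.fn(x) for x in self.parent.compute()]

base = SketchRDD(source=[1, 2, 3])
derived = base.map(lambda x: x * 10).map(lambda x: x + 1)
print(derived.compute())  # [11, 21, 31]
```

The real system partitions data across machines and caches computed partitions in memory; the sketch keeps only the lineage-and-recompute structure.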
Journal Article
A bridging model for parallel computation
TL;DR: Introduces the bulk-synchronous parallel (BSP) model as a candidate bridging model for parallel computation, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
Proceedings Article
Pregel: a system for large-scale graph processing
Grzegorz Malewicz, Matthew H. Austern, Aart J. C. Bik, James C. Dehnert, Ilan Horn, Naty Leiser, Grzegorz Czajkowski +6 more
TL;DR: Presents a model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
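A toy single-process sketch of the vertex-centric, superstep-based style this abstract refers to (the names and the max-propagation task are illustrative): in each synchronous superstep every vertex reads its incoming messages, updates its value, and sends messages along its out-edges, until no messages remain:

```python
def propagate_max(values, edges):
    """Each vertex adopts the largest value it has seen, Pregel-style."""
    # Initial superstep: every vertex sends its value to its out-neighbors.
    messages = {v: [values[u] for u in values if v in edges.get(u, [])]
                for v in values}
    while any(messages.values()):
        next_messages = {v: [] for v in values}
        for v, inbox in messages.items():
            new_value = max([values[v]] + inbox)
            if new_value > values[v]:       # value changed: vertex stays active
                values[v] = new_value
                for w in edges.get(v, []):  # send along out-edges
                    next_messages[w].append(new_value)
        messages = next_messages
    return values

vals = {"a": 3, "b": 6, "c": 1}
graph = {"a": ["b"], "b": ["c"], "c": ["a"]}
print(propagate_max(vals, graph))  # {'a': 6, 'b': 6, 'c': 6}
```

The synchronous barrier between supersteps is what makes the reasoning simple: every message sent in one superstep is delivered at the start of the next, so there are no races to consider.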
Proceedings Article
Dryad: distributed data-parallel programs from sequential building blocks
TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
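A toy sketch of the dataflow idea this abstract describes (the vertex names and `run_dag` helper are illustrative): vertices are sequential functions wired into a DAG, and the engine runs each vertex once all of its inputs are available, handling the data transport between them:

```python
from graphlib import TopologicalSorter

def run_dag(vertices, edges, inputs):
    """Run each vertex function once all of its inputs are ready."""
    # edges maps each vertex to the list of vertices it depends on.
    order = TopologicalSorter(edges).static_order()
    results = dict(inputs)
    for v in order:
        if v in results:  # an input vertex, value already available
            continue
        args = [results[u] for u in edges[v]]
        results[v] = vertices[v](*args)
    return results

vertices = {"sum": lambda a, b: a + b, "double": lambda s: 2 * s}
edges = {"sum": ["x", "y"], "double": ["sum"]}
out = run_dag(vertices, edges, {"x": 2, "y": 3})
print(out["double"])  # 10
```

The real engine schedules vertices across machines and recovers from failures by re-running vertices; the sketch keeps only the dependency-ordered execution of the DAG.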