Journal ArticleDOI

Apache Spark: a unified engine for big data processing

Abstract
This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.
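As a rough illustration of this unification, here is a minimal Spark sketch in which the same aggregation is expressed once as a batch job and once as a Structured Streaming job. The "events/" directory and the "user" column are assumptions made for the example, not details from the paper.

import org.apache.spark.sql.SparkSession

object UnifiedBatchAndStreaming {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("unified-demo").master("local[*]").getOrCreate()

    // Batch: read a static directory of JSON event files and count events per user.
    val events = spark.read.json("events/")   // "events/" and the "user" column are assumed for illustration
    val batchCounts = events.groupBy("user").count()
    batchCounts.show()

    // Streaming: the same query over the same directory, now treated as an unbounded
    // source; newly arriving files are processed incrementally.
    val streamCounts = spark.readStream
      .schema(events.schema)                  // streaming sources need an explicit schema
      .json("events/")
      .groupBy("user").count()

    streamCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}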


Citations
Book ChapterDOI

Explaining Query Answer Completeness and Correctness with Partition Patterns

TL;DR: This article tackles the issue of efficiently describing and inferring knowledge about data completeness with respect to a complete reference data set, and studies the use of a partition pattern algebra for summarizing the completeness and validity of query answers.
Proceedings Article

Declarative Languages for Big Streaming Data.

TL;DR: This tutorial gives an overview of the various declarative query languages and interfaces for big streaming data, and discusses how the different Big Stream Processing Engines (BigSPE) interpret, execute, and optimize continuous queries expressed in SQL-like languages such as KSQL, Flink-SQL, and Spark SQL.
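As one concrete instance of such an SQL-like interface, here is a minimal Spark SQL sketch of a continuous word-count query over a stream; the socket source, host, and port are illustrative assumptions, not details from the tutorial.

import org.apache.spark.sql.SparkSession

object StreamingSqlExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("declarative-streaming-query").master("local[*]").getOrCreate()

    // Treat a socket source as an unbounded table of text lines.
    val lines = spark.readStream
      .format("socket")
      .option("host", "localhost")
      .option("port", 9999)
      .load()

    // Register the stream as a view so it can be queried with plain SQL.
    lines.createOrReplaceTempView("lines")
    val wordCounts = spark.sql(
      """SELECT word, COUNT(*) AS cnt
        |FROM (SELECT explode(split(value, ' ')) AS word FROM lines)
        |GROUP BY word""".stripMargin)

    // The continuous query keeps its aggregate up to date as new lines arrive.
    wordCounts.writeStream
      .outputMode("complete")
      .format("console")
      .start()
      .awaitTermination()
  }
}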
Proceedings ArticleDOI

Building and Interpreting Risk Models from Imbalanced Clinical Data

TL;DR: This paper presents a case study modeling melanoma risk from structured clinical records, using logistic regression, decision tree, and random forest classifiers combined with various feature selection and random undersampling techniques.
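A minimal sketch of what such a pipeline can look like in Spark ML, under assumed column names and an assumed input file; it shows the general shape of a random forest risk model, not the authors' actual pipeline.

import org.apache.spark.ml.Pipeline
import org.apache.spark.ml.classification.RandomForestClassifier
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object RiskModelSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("risk-model").master("local[*]").getOrCreate()

    // Hypothetical structured clinical records with a binary "label" column.
    val records = spark.read.option("header", "true").option("inferSchema", "true")
      .csv("clinical_records.csv")

    val assembler = new VectorAssembler()
      .setInputCols(Array("age", "lesion_count", "uv_exposure")) // assumed feature columns
      .setOutputCol("features")

    val rf = new RandomForestClassifier()
      .setLabelCol("label")
      .setFeaturesCol("features")
      .setNumTrees(100)

    val Array(train, test) = records.randomSplit(Array(0.8, 0.2), seed = 42)
    val model = new Pipeline().setStages(Array(assembler, rf)).fit(train)

    // Area under the ROC curve on the held-out test split.
    val auc = new BinaryClassificationEvaluator().setLabelCol("label").evaluate(model.transform(test))
    println(s"Test AUC = $auc")
  }
}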
Proceedings ArticleDOI

The Effects of Random Undersampling for Big Data Medicare Fraud Detection

TL;DR: It is shown that data sampling techniques can improve classification performance on highly imbalanced Big Data, and the effectiveness of Random Undersampling for classifying Medicare Big Data is demonstrated.
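A minimal sketch of random undersampling on a Spark DataFrame, assuming a hypothetical claims table with a binary "label" column; it illustrates the technique itself, not the paper's exact experimental setup.

import org.apache.spark.sql.SparkSession

object RandomUndersamplingSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("random-undersampling").master("local[*]").getOrCreate()

    // Hypothetical claims data with a rare positive class (label = 1 means fraud).
    val claims = spark.read.parquet("medicare_claims.parquet")

    val positives = claims.filter("label = 1")
    val negatives = claims.filter("label = 0")

    // Randomly undersample the majority class down to roughly the minority class size,
    // yielding an approximately 1:1 class ratio for training.
    val fraction = positives.count().toDouble / negatives.count()
    val balanced = positives.union(
      negatives.sample(withReplacement = false, fraction = fraction, seed = 42))

    balanced.groupBy("label").count().show()
  }
}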
Book ChapterDOI

Key Aspects for Achieving Hits by Virtual Screening Studies

TL;DR: In this article, the authors address the fundamentals of virtual screening studies, their emergence, and how they are currently conducted: from the initial choice of strategy, through the selection and preparation of databases and the techniques that can be adopted, to the optimization of hits.
References
Journal ArticleDOI

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
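The canonical MapReduce example is word count: a map function emits a (word, 1) pair for every word in the input, and a reduce function sums the counts per word. Below is a sketch of that dataflow, expressed here with Spark's RDD API rather than the original MapReduce system, over a hypothetical input file.

import org.apache.spark.sql.SparkSession

object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("word-count").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // "Map" phase: emit a (word, 1) pair for every word in the input.
    val pairs = sc.textFile("input.txt")
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1))

    // "Reduce" phase: sum the counts for each distinct word.
    val counts = pairs.reduceByKey(_ + _)
    counts.take(10).foreach(println)
  }
}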
Proceedings Article

Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing

TL;DR: Resilient Distributed Datasets (RDDs) are presented: a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
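A small sketch of the abstraction: an RDD is built by a chain of deterministic transformations, can be cached in memory for reuse across queries, and lost partitions can be recomputed from that lineage. The log file and query predicates below are assumptions for illustration.

import org.apache.spark.sql.SparkSession

object RddSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("rdd-sketch").master("local[*]").getOrCreate()
    val sc = spark.sparkContext

    // Build an RDD through deterministic transformations and keep it in memory;
    // lost partitions can be rebuilt from this lineage rather than from a checkpoint.
    val errors = sc.textFile("app.log")
      .filter(_.contains("ERROR"))
      .cache()

    // Repeated interactive queries reuse the cached partitions instead of re-reading the file.
    println(errors.count())
    println(errors.filter(_.contains("timeout")).count())
  }
}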
Journal ArticleDOI

A bridging model for parallel computation

TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model for parallel computation, along with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
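A common textbook statement of the model's cost, given here as a general formulation rather than a quotation from the paper: if, in one superstep, each processor performs at most w local operations and sends or receives at most h messages, the superstep costs

T_{\text{superstep}} = w + g \cdot h + l

where g is the machine's communication throughput parameter and l its barrier-synchronization latency; a program's total cost is the sum of the costs of its supersteps.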
Proceedings ArticleDOI

Pregel: a system for large-scale graph processing

TL;DR: A model for processing large graphs is presented, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
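A schematic, single-machine sketch of the vertex-centric superstep model (not Pregel's actual API): in each superstep, vertices absorb incoming messages, update their value if it improves, and send messages along their out-edges; computation halts when no messages remain. The tiny hard-coded graph and the single-source shortest-path computation are illustrative assumptions.

object VertexCentricSketch {
  def main(args: Array[String]): Unit = {
    // Tiny directed graph as an adjacency list of (target vertex, edge weight).
    val edges = Map(
      1 -> List((2, 1.0), (3, 4.0)),
      2 -> List((3, 1.0)),
      3 -> Nil)

    var dist: Map[Int, Double] =
      Map(1 -> Double.PositiveInfinity, 2 -> Double.PositiveInfinity, 3 -> Double.PositiveInfinity)
    var messages: Map[Int, Double] = Map(1 -> 0.0) // superstep 0: only the source vertex is active

    // Run supersteps until no vertex receives a message.
    while (messages.nonEmpty) {
      // Each vertex keeps the best incoming message if it improves its current value.
      val improved = messages.filter { case (v, d) => d < dist(v) }
      dist = dist ++ improved
      // Improved vertices send candidate distances along their out-edges,
      // combining multiple messages to the same target by taking the minimum.
      messages = improved.toList
        .flatMap { case (v, d) => edges(v).map { case (u, w) => (u, d + w) } }
        .groupBy(_._1)
        .map { case (u, ds) => u -> ds.map(_._2).min }
    }
    println(dist) // expected: Map(1 -> 0.0, 2 -> 1.0, 3 -> 2.0)
  }
}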
Proceedings ArticleDOI

Dryad: distributed data-parallel programs from sequential building blocks

TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.