Journal Article (DOI)

Apache Spark: a unified engine for big data processing

Abstract
This open source computing framework unifies streaming, batch, and interactive big data workloads to unlock new applications.

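The unification described in the abstract is visible directly in Spark's API: one engine serves batch jobs and ad hoc interactive queries over the same cached data (and, through Spark Streaming, streaming jobs as well). A minimal PySpark sketch of that idea, assuming a local Spark installation; the input file name here is hypothetical:

# Batch and interactive work on one engine (minimal PySpark sketch).
# Assumes a local Spark installation; "events.log" is a hypothetical file.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("unified-demo").master("local[*]").getOrCreate()
sc = spark.sparkContext

lines = sc.textFile("events.log")                       # batch: scan a log
errors = lines.filter(lambda line: "ERROR" in line)
print(errors.count())

errors.cache()                                          # interactive: cache once,
print(errors.filter(lambda l: "timeout" in l).count())  # then re-query ad hoc

spark.stop()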

Citations
Posted Content (DOI)

Scedar: a scalable Python package for single-cell RNA-seq exploratory data analysis

TL;DR: Scedar is a scalable Python package for scRNA-seq exploratory data analysis that provides a convenient and reliable interface for visualization, imputation of gene dropouts, detection of rare transcriptomic profiles, and clustering of large-scale scRNA-seq datasets.
Posted Content

Technical Report: Developing a Working Data Hub.

Vijay Gadepally, +1 more - 01 Apr 2020
TL;DR: This document provides background in databases and data management, and outlines best practices and recommendations for developing and deploying a working data hub.
Journal Article (DOI)

Databases fit for blockchain technology: A complete overview

TL;DR: In this article, the authors present a complete overview of many different DBMS types and how these systems can be used to implement, enhance, and further improve blockchain technology with respect to properties such as high throughput, low latency, and high capacity.
Posted Content

Role of Apache Software Foundation in Big Data Projects.

TL;DR: This investigation shows that many Apache Big Data projects are autonomous, while some are built on other Apache projects and others work in conjunction with them to improve and ease development in the Big Data space.
Proceedings Article (DOI)

Informative Evaluation Metrics for Highly Imbalanced Big Data Classification

TL;DR: In this paper, the authors study informative evaluation metrics such as AUC for the classification of highly imbalanced Big Data, reporting results from 600 experiments in which Random Undersampling is applied to a Medicare Part D dataset of about 175 million instances.
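As a concrete illustration of the setup this entry describes, the following Python sketch (not the authors' code) applies random undersampling to a synthetic imbalanced dataset and scores a classifier with AUC; scikit-learn and NumPy stand in for the Medicare data and models used in the paper:

# Random undersampling + AUC on an imbalanced binary problem.
# Illustrative sketch only; the paper's data (Medicare Part D) and
# experimental design are not reproduced here.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=100_000, weights=[0.99], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Keep every minority instance; sample the majority class down to match.
rng = np.random.default_rng(0)
minority = np.flatnonzero(y_tr == 1)
majority = rng.choice(np.flatnonzero(y_tr == 0), size=minority.size, replace=False)
keep = np.concatenate([minority, majority])

model = LogisticRegression(max_iter=1000).fit(X_tr[keep], y_tr[keep])
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
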
References
Journal Article (DOI)

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
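The model this summary names reduces a job to two user-supplied functions, map and reduce, with the runtime handling partitioning, shuffling, and fault tolerance. A single-process Python sketch of the paper's word-count example (a real MapReduce system distributes each phase across a cluster):

# Word count in the MapReduce style: map, shuffle/group by key, reduce.
from collections import defaultdict

def map_fn(document):           # user-supplied map: emit (word, 1) pairs
    for word in document.split():
        yield word, 1

def reduce_fn(word, counts):    # user-supplied reduce: sum per key
    return word, sum(counts)

documents = ["the quick brown fox", "the lazy dog", "the fox"]

groups = defaultdict(list)      # shuffle phase: group intermediates by key
for doc in documents:
    for word, count in map_fn(doc):
        groups[word].append(count)

print(dict(reduce_fn(w, cs) for w, cs in groups.items()))  # {'the': 3, ...}
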
Proceedings Article

Resilient distributed datasets: a fault-tolerant abstraction for in-memory cluster computing

TL;DR: Resilient Distributed Datasets (RDDs) are presented: a distributed memory abstraction that lets programmers perform in-memory computations on large clusters in a fault-tolerant manner. RDDs are implemented in a system called Spark, which is evaluated through a variety of user applications and benchmarks.
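Since this page is about Spark itself, the abstraction is easy to show concretely: an RDD is defined by a lineage of lazy transformations, and persisted partitions that are lost can be recomputed from that lineage rather than restored from replicas. A minimal PySpark sketch, assuming a local Spark installation:

# Lazy transformations build a lineage; persist() keeps partitions in
# memory, and lost partitions are recomputed from the lineage on failure.
from pyspark.sql import SparkSession

sc = SparkSession.builder.master("local[*]").getOrCreate().sparkContext

squares = sc.parallelize(range(1_000_000)).map(lambda x: x * x)  # lazy
even = squares.filter(lambda x: x % 2 == 0)                      # lazy
even.persist()               # keep computed partitions across jobs
print(even.count())          # first action materializes the RDD
print(even.take(5))          # second action reuses cached partitions
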
Journal Article (DOI)

A bridging model for parallel computation

TL;DR: The bulk-synchronous parallel (BSP) model is introduced as a candidate bridging model for parallel computation, with results quantifying its efficiency both in implementing high-level language features and algorithms and in being implemented in hardware.
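In the BSP model, computation proceeds in supersteps separated by barrier synchronizations, and the standard cost of one superstep is w + g*h + l, where w is the maximum local work at any processor, h the maximum number of messages a processor sends or receives, g the per-message communication cost, and l the barrier latency. A toy Python sketch of the superstep structure (a sketch of the model only, not of the paper's results):

# Toy BSP loop: local computation, message exchange, barrier.
# `compute(state, inbox)` returns (new_state, [(dest, message), ...]).
def bsp_run(states, compute, supersteps):
    inboxes = [[] for _ in states]
    for _ in range(supersteps):
        outboxes = [[] for _ in states]
        for p, state in enumerate(states):         # local computation phase
            states[p], sent = compute(state, inboxes[p])
            for dest, msg in sent:
                outboxes[dest].append(msg)
        inboxes = outboxes                         # barrier: messages become
    return states                                  # visible next superstep

The barrier is the defining feature: messages sent in one superstep are delivered only in the next, which is what makes the per-superstep cost additive.
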
Proceedings Article (DOI)

Pregel: a system for large-scale graph processing

TL;DR: This paper presents a model for processing large graphs, designed for efficient, scalable, and fault-tolerant implementation on clusters of thousands of commodity computers; its implied synchronicity makes reasoning about programs easier.
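Pregel applies the bulk-synchronous structure to graphs: in each superstep every active vertex runs a user function, reads messages from the previous superstep, sends messages along its edges, and votes to halt when it has nothing to do. A toy single-source shortest-paths sketch in that style (illustrative; not the paper's implementation):

# Toy Pregel-style single-source shortest paths. A vertex "wakes up"
# only when an incoming message improves its distance, then relaxes
# its out-edges; the run ends when no messages remain.
import math

edges = {0: [(1, 4), (2, 1)], 1: [(3, 1)], 2: [(1, 2), (3, 5)], 3: []}
dist = {v: math.inf for v in edges}
messages = {0: [0]}                      # seed the source vertex

while messages:
    next_messages = {}
    for v, incoming in messages.items():
        best = min(incoming)
        if best < dist[v]:
            dist[v] = best
            for target, weight in edges[v]:
                next_messages.setdefault(target, []).append(best + weight)
    messages = next_messages             # barrier between supersteps

print(dist)  # {0: 0, 1: 3, 2: 1, 3: 4}
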
Proceedings Article (DOI)

Dryad: distributed data-parallel programs from sequential building blocks

TL;DR: The Dryad execution engine handles all the difficult problems of creating a large distributed, concurrent application: scheduling the use of computers and their CPUs, recovering from communication or computer failures, and transporting data between vertices.
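Dryad's abstraction is a dataflow DAG whose vertices are sequential programs and whose edges are data channels; the engine decides where vertices run and moves data between them. A toy topological-order executor conveys the shape of the idea (a sketch only; Dryad runs this across a cluster with scheduling and fault recovery):

# Toy DAG execution: run each vertex once its inputs are ready and pass
# outputs along the edges. Requires Python 3.9+ for graphlib.
from graphlib import TopologicalSorter

def load(ins):   return [3, 1, 2]
def sort_v(ins): return sorted(ins[0])
def double(ins): return [x * 2 for x in ins[0]]
def merge(ins):  return ins[0] + ins[1]

vertices = {"load": load, "sort": sort_v, "double": double, "merge": merge}
deps = {"load": [], "sort": ["load"], "double": ["load"], "merge": ["sort", "double"]}

results = {}
for name in TopologicalSorter(deps).static_order():  # predecessors first
    results[name] = vertices[name]([results[d] for d in deps[name]])
print(results["merge"])  # [1, 2, 3, 6, 2, 4]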