Open Access Proceedings Article

Map-Reduce for Machine Learning on Multicore

TLDR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates basically linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
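To make the "summation form" idea concrete, the sketch below parallelizes ordinary least-squares regression: both X^T X = sum_i x_i x_i^T and X^T y = sum_i x_i y_i are sums over training examples, so each mapper computes the partial sums for its data shard and a single reducer adds them and solves the normal equations. This is an illustrative sketch, not the paper's implementation; the helper names (map_shard, reduce_sums) and the use of Python's multiprocessing pool are assumptions made for the example.

    # Minimal sketch of the "summation form": ordinary least-squares regression
    # parallelized by splitting the training data into shards, computing the
    # per-shard sufficient statistics sum_i x_i x_i^T and sum_i x_i y_i in the
    # map phase, then summing them and solving the normal equations in reduce.
    import numpy as np
    from multiprocessing import Pool

    def map_shard(shard):
        X, y = shard
        return X.T @ X, X.T @ y          # partial sums over this shard

    def reduce_sums(partials):
        A = sum(p[0] for p in partials)  # sum_i x_i x_i^T over all shards
        b = sum(p[1] for p in partials)  # sum_i x_i y_i over all shards
        return np.linalg.solve(A, b)     # theta = (X^T X)^{-1} X^T y

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        X = rng.normal(size=(10000, 5))
        y = X @ np.array([1., 2., 3., 4., 5.]) + 0.01 * rng.normal(size=10000)
        shards = [(X[i::4], y[i::4]) for i in range(4)]   # one shard per core
        with Pool(4) as pool:
            partials = pool.map(map_shard, shards)
        print(reduce_sums(partials))     # close to [1, 2, 3, 4, 5]

The same pattern applies to the other algorithms listed in the abstract whenever their sufficient statistics or gradients decompose into sums over the data.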



Citations
Posted Content

NOMAD: Non-locking, stOchastic Multi-machine algorithm for Asynchronous and Decentralized matrix completion

TL;DR: This article proposes NOMAD, a non-locking, stochastic, multi-machine algorithm for asynchronous and decentralized matrix completion, designed as a lock-free parallel algorithm.
Journal Article

Evolutionary Feature Selection for Big Data Classification: A MapReduce Approach

TL;DR: A feature selection algorithm based on evolutionary computation is presented; it uses the MapReduce paradigm to obtain subsets of features from big datasets, improving both classification accuracy and runtime when dealing with big data problems.
Journal Article

ROSEFW-RF

TL;DR: This work describes the methodology that won the ECBDL'14 big data challenge for a bioinformatics big data problem, named ROSEFW-RF, which is based on several MapReduce approaches to balance the class distribution through random oversampling and to detect the most relevant features via an evolutionary feature weighting process.
Proceedings Article

On scheduling in map-reduce and flow-shops

TL;DR: This work formalizes job scheduling in map-reduce as a novel generalization of the two-stage classical flexible flow shop (FFS) problem: instead of a single task at each stage, a job now consists of a set of tasks per stage.
Journal Article

On Distributed Fuzzy Decision Trees for Big Data

TL;DR: A distributed FDT learning scheme, shaped according to the MapReduce programming model, is proposed for generating both binary and multiway FDTs from big data; it relies on a novel distributed fuzzy discretizer that generates a strong fuzzy partition for each continuous attribute based on fuzzy information entropy.
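As a rough illustration of the fuzzy-entropy criterion such a discretizer builds on (a standard definition, not necessarily the cited paper's exact formulation; the function fuzzy_entropy and the toy memberships are invented for the example):

    # Sketch: fuzzy information entropy of a fuzzy set S over labeled examples.
    # Class counts are weighted by membership degrees mu in [0, 1] instead of
    # crisp 0/1 indicators, which is the kind of criterion a fuzzy discretizer
    # optimizes when choosing partition points.
    import numpy as np

    def fuzzy_entropy(mu, labels):
        total = mu.sum()
        if total == 0:
            return 0.0
        h = 0.0
        for c in np.unique(labels):
            p = mu[labels == c].sum() / total   # fuzzy proportion of class c in S
            if p > 0:
                h -= p * np.log2(p)
        return h

    mu = np.array([0.9, 0.7, 0.2, 0.1])         # memberships of 4 examples in S
    labels = np.array([0, 0, 1, 1])
    print(fuzzy_entropy(mu, labels))            # about 0.63 bits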
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal "hidden" units come to represent important features of the task domain.
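For concreteness, a minimal sketch of that weight-adjustment loop (plain gradient descent on squared error for a tiny two-layer sigmoid network; the 2-4-1 architecture, learning rate, and XOR task are illustrative choices, not taken from the paper):

    # Sketch: backpropagation by gradient descent on squared error for a tiny
    # 2-4-1 sigmoid network learning XOR. The hidden layer receives an error
    # signal propagated back through the output layer's weights.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    t = np.array([[0.], [1.], [1.], [0.]])        # desired output vector
    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    lr = 0.5
    for _ in range(10000):
        h = sigmoid(X @ W1 + b1)                  # forward pass
        y = sigmoid(h @ W2 + b2)                  # actual output vector
        delta2 = (y - t) * y * (1 - y)            # error signal at the output
        delta1 = (delta2 @ W2.T) * h * (1 - h)    # back-propagated to hidden layer
        W2 -= lr * (h.T @ delta2); b2 -= lr * delta2.sum(axis=0)
        W1 -= lr * (X.T @ delta1); b1 -= lr * delta1.sum(axis=0)
    print(y.round(2))                             # typically close to [0, 1, 1, 0]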
Journal Article

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets; the implementation runs on large clusters of commodity machines and is highly scalable.
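For readers unfamiliar with the model, a toy single-process sketch of the map/reduce abstraction it describes (word counting; the function names are invented and nothing here reflects Google's actual implementation):

    # Toy single-process illustration of the MapReduce model: word counting.
    # A real deployment distributes map tasks over a cluster, shuffles the
    # intermediate pairs by key, and runs reducers in parallel.
    from collections import defaultdict

    def map_doc(doc):
        return [(word, 1) for word in doc.split()]      # emit (key, value) pairs

    def reduce_word(word, counts):
        return word, sum(counts)                        # aggregate values per key

    documents = ["the quick brown fox", "the lazy dog", "the fox"]
    grouped = defaultdict(list)
    for doc in documents:
        for word, one in map_doc(doc):
            grouped[word].append(one)                   # "shuffle": group by key
    print(dict(reduce_word(w, c) for w, c in grouped.items()))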
Journal Article

An information-maximization approach to blind separation and blind deconvolution

TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing and dependencies of information transfer on time delays are derived.
Journal Article

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful to separate systematic variation from noise and to define a reduced-dimensional space that preserves the systematic variation.
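A minimal sketch of that idea, computing principal components via the singular value decomposition of centered data (the synthetic data and variable names are illustrative only):

    # Sketch: PCA via the SVD of centered data. The leading components capture
    # the systematic variation; the trailing, low-variance directions are
    # dominated by noise and can be dropped.
    import numpy as np

    rng = np.random.default_rng(0)
    signal = rng.normal(size=(500, 2)) @ np.array([[3., 2., 0.], [0., 1., 0.5]])
    X = signal + 0.05 * rng.normal(size=(500, 3))   # 3-D data near a 2-D subspace

    Xc = X - X.mean(axis=0)                         # center each variable
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    explained = s**2 / np.sum(s**2)                 # variance explained per component
    print(explained.round(3))                       # first two components dominate
    Z = Xc @ Vt[:2].T                               # scores in the reduced 2-D space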
Book

Clustering Algorithms
