Open Access · Proceedings Article

Map-Reduce for Machine Learning on Multicore

TLDR
This work shows that algorithms fitting the Statistical Query model can be written in a certain "summation form" that allows them to be easily parallelized on multicore computers, and reports essentially linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
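To make the "summation form" idea concrete, here is a minimal sketch (not the paper's code) of a simplified, unweighted version of the paper's LWLR example. The sufficient statistics A = Σᵢ xᵢxᵢᵀ and b = Σᵢ xᵢyᵢ are sums over the data, so mappers can compute partial sums over data shards and a single reduce step aggregates them; Python's multiprocessing stands in for the map-reduce engine on multicore, and all function names are illustrative.

```python
import numpy as np
from multiprocessing import Pool

def map_partial_sums(shard):
    # Mapper: each core computes partial sums over its own data shard.
    X, y = shard
    return X.T @ X, X.T @ y  # (sum of x_i x_i^T, sum of x_i y_i)

def fit_linear_regression(X, y, n_cores=4):
    # Split the data into one shard per core.
    shards = list(zip(np.array_split(X, n_cores), np.array_split(y, n_cores)))
    with Pool(n_cores) as pool:
        partials = pool.map(map_partial_sums, shards)
    # Reducer: aggregate the partial sums, then solve the normal equations.
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10000, 5))
    theta_true = np.arange(1.0, 6.0)
    y = X @ theta_true + 0.01 * rng.normal(size=10000)
    print(fit_linear_regression(X, y))  # should recover theta_true closely
```

Any statistic expressible as such a sum over the data parallelizes the same way, which is why the same pattern covers all ten algorithms listed in the abstract.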



Citations
Book Chapter

Distributed Stochastic Optimization of Regularized Risk via Saddle-Point Problem

TL;DR: This paper shows that the regularized risk minimization problem can be rewritten as an equivalent saddle-point problem and proposes an efficient distributed stochastic optimization (DSO) algorithm; remarkably, the convergence analysis shows that the algorithm scales almost linearly with the number of processors.
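As a hedged illustration of the rewriting referred to above (the standard Fenchel-conjugate route, not necessarily the paper's exact construction): for a closed convex loss $\ell$, the regularized risk admits the equivalent saddle-point form

$$
\min_{w}\;\lambda\,\Omega(w)+\frac{1}{m}\sum_{i=1}^{m}\ell(\langle w,x_i\rangle,y_i)
\;=\;
\min_{w}\,\max_{\alpha\in\mathbb{R}^{m}}\;\lambda\,\Omega(w)+\frac{1}{m}\sum_{i=1}^{m}\big(\alpha_i\langle w,x_i\rangle-\ell^{\star}(\alpha_i,y_i)\big),
$$

where $\ell^{\star}$ denotes the convex conjugate of $\ell$ in its first argument. The inner maximization decouples across examples, which is what makes distributed stochastic updates over both $w$ and $\alpha$ natural.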

Personalized large scale classification of public tenders on hadoop

TL;DR: Experiments show that the large-scale system not only maintains high recall on the classification task, but can readily take advantage of the massive scalability gains made possible by Hadoop's distributed architecture.
Journal Article

An experimental analysis of limitations of MapReduce for iterative algorithms on Spark

Minseo Kang, +1 more · 01 Dec 2017
TL;DR: The limitations of MapReduce in handling iterative algorithms are identified and categorized, and the consequences of these limitations are investigated experimentally using Spark, a highly flexible and stable distributed system.
Posted Content

"Why did you do that?": Explaining black box models with Inductive Synthesis

TL;DR: This paper presents RICE, a method for generating explanations of black-box model behaviour: the model is probed via sensitivity analysis to extract input-output examples, a program is synthesized from those examples by inductive synthesis, and the resulting program is interpreted as a human-readable explanation.
Dissertation

Design and Implementation of Scalable Hierarchical Density Based Clustering

TL;DR: This thesis introduces Partitioned HDS, which significantly reduces time and space complexity and makes it possible to generate the Auto-HDS cluster hierarchy on much larger datasets with hundreds of millions of data points, and describes Parallel Auto-HDS, which exploits the parallelism inherent in Partitioned HDS to scale to even larger datasets without a corresponding increase in run time.
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of these adjustments, internal "hidden" units come to represent important features of the task domain.
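A minimal sketch of the repeated weight adjustment described above, for a one-hidden-layer network with logistic units and a squared-error measure (illustrative names and setup, not the paper's original code):

```python
import numpy as np

def backprop_step(W1, W2, x, t, lr=0.1):
    # Forward pass: hidden and output activations with logistic units.
    h = 1.0 / (1.0 + np.exp(-(W1 @ x)))          # hidden activations
    y = 1.0 / (1.0 + np.exp(-(W2 @ h)))          # actual output vector
    # Backward pass: propagate the output error back to each layer's weights.
    delta2 = (y - t) * y * (1.0 - y)             # error signal at the output units
    delta1 = (W2.T @ delta2) * h * (1.0 - h)     # error propagated to the hidden units
    W2 -= lr * np.outer(delta2, h)
    W1 -= lr * np.outer(delta1, x)
    return 0.5 * np.sum((y - t) ** 2)            # the difference measure being minimized
```

Repeating this step over many input-target pairs is what drives the hidden units to develop feature-like representations.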
Journal Article

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets; programs written in this model run on large clusters of commodity machines and are highly scalable.
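A toy single-process sketch of the programming model, using word count (the paper's canonical example); the real system distributes the map, shuffle, and reduce phases across a cluster, but the user only writes the two functions:

```python
from collections import defaultdict
from itertools import chain

def mapper(document):
    # Map: emit an intermediate (word, 1) pair for each word.
    return [(word, 1) for word in document.split()]

def reducer(word, counts):
    # Reduce: sum all counts emitted for the same word.
    return word, sum(counts)

def map_reduce(documents):
    groups = defaultdict(list)
    for key, value in chain.from_iterable(map(mapper, documents)):
        groups[key].append(value)              # shuffle: group values by key
    return dict(reducer(k, v) for k, v in groups.items())

print(map_reduce(["the quick fox", "the lazy dog"]))  # {'the': 2, 'quick': 1, ...}
```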
Journal Article

An information-maximization approach to blind separation and blind deconvolution

TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing, and dependencies of information transfer on time delays are derived.
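The core objective, sketched in the notation common to this infomax approach ($W$ an unmixing matrix, $g$ the logistic sigmoid, $x$ the observed mixture): maximize the joint entropy of the nonlinearly transformed outputs,

$$
\max_{W}\; H\big(g(Wx)\big),
\qquad
\Delta W \;\propto\; \big(W^{\top}\big)^{-1} + \big(\mathbf{1}-2\,g(Wx)\big)\,x^{\top},
$$

the right-hand expression being the resulting stochastic gradient update rule.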
Journal Article

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a reduced-dimensional space that preserves the systematic variation.
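A minimal sketch of PCA via the singular value decomposition of the centered data (illustrative; the function and variable names are assumptions, not the reference's notation):

```python
import numpy as np

def pca(X, k):
    # Center the data, then take the top-k right singular vectors
    # as the principal axes of systematic variation.
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]            # k directions of maximal variance
    scores = Xc @ components.T     # data projected into the reduced space
    explained = S[:k] ** 2 / np.sum(S ** 2)
    return scores, components, explained
```

The ratio `explained` indicates how much of the systematic variation each retained direction captures, which is how the reduced space separates signal from noise.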
Book

Clustering Algorithms
