Open Access Proceedings Article

Map-Reduce for Machine Learning on Multicore

TLDR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates basically linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
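
To make the "summation form" concrete, the sketch below (a minimal illustration, not the paper's implementation; the function names and the use of NumPy and multiprocessing are assumptions) computes the sufficient statistics of ordinary least squares, the unweighted special case of LWLR, by mapping over data shards and reducing the partial sums before solving the normal equations.

```python
# Summation-form sketch: A = sum_i x_i x_i^T and b = sum_i x_i y_i are sums
# over examples, so each map task computes partial sums over its own data
# shard and a single reduce step adds them before solving A theta = b.
import numpy as np
from multiprocessing import Pool

def map_partial_sums(shard):
    """Map phase: partial sufficient statistics for one data shard."""
    X, y = shard
    return X.T @ X, X.T @ y

def reduce_sums(partials):
    """Reduce phase: add the per-shard statistics."""
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return A, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10_000)

    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
    with Pool(4) as pool:                      # one worker per core
        partials = pool.map(map_partial_sums, shards)
    A, b = reduce_sums(partials)
    theta = np.linalg.solve(A, b)              # same answer as a serial fit
```

Because the statistics are plain sums over examples, the reduce step is a single addition per statistic, which is why spreading the shards over more cores yields roughly linear speedup.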



Citations
Posted Content

Anomaly Detection Framework Using Rule Extraction for Efficient Intrusion Detection

TL;DR: This work addresses the issue of efficient network traffic classification by creating an intrusion detection framework that applies dimensionality reduction and conjunctive rule extraction; the implemented system is transparent rather than a black box, making it intuitive for domain experts such as network administrators.
Proceedings Article

A Gaussian Latent Variable Model for Large Margin Classification of Labeled and Unlabeled Data

TL;DR: A Gaussian latent variable model for semi-supervised learning of linear large margin classifiers is investigated and it is shown that a Lyapunov central limit theorem provides an excellent approximation to the true posterior distribution.

First-Order Optimization (Training) Algorithms in Deep Learning.

TL;DR: A comparative analysis of training algorithms for convolutional neural networks used in image recognition tasks is provided, and the studies show that a simple gradient descent algorithm is quite effective for this task.
Proceedings ArticleDOI

Interactive Rendering for Large-Scale Mesh Based on MapReduce

TL;DR: This work proposes a novel adaptive parallel rasterization method based on MapReduce, whose results are stored in a data format called the enhanced layered depth image (ELDI), and integrates pixel- and triangle-based strategies in one processing pipeline so as to significantly reduce data transfer between the Map and Reduce steps.
Journal ArticleDOI

Feature Selection for Microarray Data using WGCNA Based Fuzzy Forest in Map Reduce Paradigm

TL;DR: This work investigates the feature selection problem for microarray data with small sample sizes and varying correlation, and proposes a Fuzzy Forest based on Weighted Gene Correlation Network Analysis (WGCNA) that exploits interactions between features.
References
Journal ArticleDOI

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector, which helps to represent important features of the task domain.
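
The weight-update rule described here can be illustrated in a few lines. The following NumPy sketch is illustrative only (not the cited paper's code): the error at the output is propagated backwards through a single hidden layer, and each weight matrix is repeatedly adjusted along the negative gradient of a squared-error measure.

```python
# Minimal backpropagation sketch: one hidden layer, squared-error loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))                     # inputs
y = (X[:, :1] > 0).astype(float)                 # desired output vector
W1, W2 = rng.normal(size=(3, 8)), rng.normal(size=(8, 1))

for _ in range(500):
    h = sigmoid(X @ W1)                          # forward pass
    y_hat = h @ W2
    d_out = (y_hat - y) / len(X)                 # error at the output
    dW2 = h.T @ d_out                            # gradient for output weights
    dW1 = X.T @ ((d_out @ W2.T) * h * (1 - h))   # error propagated backwards
    W1 -= 0.5 * dW1                              # repeated weight adjustment
    W2 -= 0.5 * dW2
```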
Journal ArticleDOI

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets; it runs on large clusters of commodity machines and is highly scalable.
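
As a rough illustration of the programming model (a toy, single-process sketch, not Google's implementation): a map function emits (key, value) pairs, a shuffle groups values by key, and a reduce function aggregates each group, the canonical example being a word count.

```python
# Toy MapReduce: map -> shuffle (group by key) -> reduce.
from collections import defaultdict

def map_phase(document):
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    return key, sum(values)

documents = ["the cat sat", "the dog sat"]
pairs = [p for doc in documents for p in map_phase(doc)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
# counts == {'the': 2, 'cat': 1, 'sat': 2, 'dog': 1}
```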
Journal ArticleDOI

An information-maximization approach to blind separation and blind deconvolution

TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing, and dependencies of information transfer on time delays are derived.
Journal ArticleDOI

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a space of reduced dimensions that retains the systematic variation.
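
The standard recipe behind this summary can be sketched briefly (an assumed textbook formulation, not code from the cited paper): centre the data, eigendecompose the sample covariance, and project onto the leading eigenvectors to obtain the reduced-dimension space.

```python
# PCA via eigendecomposition of the sample covariance matrix.
import numpy as np

def pca(X, n_components):
    Xc = X - X.mean(axis=0)                      # centre each variable
    cov = Xc.T @ Xc / (len(X) - 1)               # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]
    return Xc @ eigvecs[:, order]                # project onto top components
```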
Book

Clustering Algorithms
