Open Access Proceedings Article

Map-Reduce for Machine Learning on Multicore

TL;DR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates basically linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
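To make the "summation form" concrete, here is a minimal sketch (our illustration, not code from the paper) of ordinary least squares, the unit-weight special case of the paper's LWLR example. Each worker maps its partition of the data to the partial sums A_p = sum(x xᵀ) and b_p = sum(x y); the reduce step adds the partial sums and solves the normal equations. All function names are ours.

```python
import numpy as np
from multiprocessing import Pool

def partial_sums(chunk):
    """Map step: compute the partial sums A_p = X^T X and b_p = X^T y
    over one partition of the data (the "summation form")."""
    X, y = chunk
    return X.T @ X, X.T @ y

def fit_linear_regression(X, y, n_workers=4):
    """Solve the normal equations by aggregating per-partition statistics,
    mirroring the map-reduce decomposition of LWLR with unit weights."""
    chunks = list(zip(np.array_split(X, n_workers), np.array_split(y, n_workers)))
    with Pool(n_workers) as pool:
        parts = pool.map(partial_sums, chunks)   # map phase, one core per chunk
    A = sum(p[0] for p in parts)                 # reduce phase: sums simply add
    b = sum(p[1] for p in parts)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    theta_true = np.arange(1.0, 6.0)
    y = X @ theta_true + 0.01 * rng.normal(size=10_000)
    print(fit_linear_regression(X, y))           # recovers ~[1, 2, 3, 4, 5]
```

Because the sufficient statistics are plain sums, the same decomposition works for any number of workers, which is what yields the near-linear speedup the abstract reports.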



Citations
Posted Content

GraphLab: A Distributed Framework for Machine Learning in the Cloud

TL;DR: A comprehensive evaluation of GraphLab on three state-of-the-art ML algorithms, using real large-scale data and a 64-node EC2 cluster of 512 processors, finds that GraphLab achieves orders-of-magnitude performance gains over Hadoop while performing comparably to or better than hand-tuned MPI implementations.
Proceedings Article

LaSA: A locality-aware scheduling algorithm for Hadoop-MapReduce resource assignment

TL;DR: A locality-aware scheduling algorithm (LaSA) for the Hadoop-MapReduce scheduler is presented; it uses a weight of data interference to provide data-locality-aware resource assignment in the Hadoop scheduler.
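The TL;DR gives only the high-level idea, so the following is a generic locality-aware placement heuristic, not LaSA's published algorithm: each task prefers a node holding a replica of its input block, with the weight `w` standing in (hypothetically) for the paper's data-interference weighting.

```python
def assign_tasks(tasks, nodes, replicas, load, w=0.5):
    """Greedy locality-aware assignment: each task goes to the node with the
    lowest cost, where cost = (1 if the input block is remote else 0) + w * load.
    Illustrative only; LaSA's actual cost model may differ."""
    plan = {}
    for t in tasks:
        best = min(nodes, key=lambda n: (0.0 if n in replicas[t] else 1.0) + w * load[n])
        plan[t] = best
        load[best] += 1   # account for this placement when placing the next task
    return plan

# Example: three tasks, two nodes, input-block replicas on specific nodes.
replicas = {"t1": {"n1"}, "t2": {"n2"}, "t3": {"n1"}}
load = {"n1": 0, "n2": 0}
print(assign_tasks(["t1", "t2", "t3"], ["n1", "n2"], replicas, load))
# {'t1': 'n1', 't2': 'n2', 't3': 'n1'} -- every task reads local data
```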
Proceedings Article

A scale-out RDF molecule store for distributed processing of biomedical data

TL;DR: The BioMANTA project focuses on utilizing Semantic Web technologies together with a scale-out architecture to tackle these challenges and to provide efficient analysis, querying, and reasoning over protein-protein interaction data.

Towards a big data reference architecture

TL;DR: The proposed reference architecture and a survey of the current state of the art in ‘big data’ technologies guide designers in the creation of systems that create new value from existing but previously under-used data.
Journal Article

Parallel Implementation of Classification Algorithms Based on Cloud Computing Environment

TL;DR: A parallel naive Bayes classification algorithm based on MapReduce, a simple yet powerful parallel programming technique, is introduced and shown to improve on the original algorithm's performance while processing large datasets efficiently on commodity hardware.
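Naive Bayes fits the map-reduce pattern especially cleanly, since training reduces to counting. The sketch below (our illustration, with hypothetical shard and function names, not the cited paper's code) simulates the two phases in-process: each map emits label counts and (feature, value, label) counts for one data shard, and the reduce simply adds counters.

```python
from collections import Counter
from functools import reduce

def nb_map(partition):
    """Map: emit per-class counts and per-(feature index, value, class)
    counts for one shard of the training data."""
    counts = Counter()
    for features, label in partition:
        counts[("y", label)] += 1
        for j, v in enumerate(features):
            counts[("xy", j, v, label)] += 1
    return counts

def nb_reduce(a, b):
    """Reduce: counts from different shards simply add."""
    a.update(b)   # Counter.update adds counts rather than replacing them
    return a

# Toy data: two binary features, binary label, split across two shards.
shard1 = [((1, 0), 1), ((1, 1), 1)]
shard2 = [((0, 0), 0), ((0, 1), 0), ((1, 0), 1)]
totals = reduce(nb_reduce, map(nb_map, [shard1, shard2]))
print(totals[("y", 1)])         # 3 positive examples overall
print(totals[("xy", 0, 1, 1)])  # feature 0 == 1 among positives: 3
```

The class priors and conditional probabilities then follow from these aggregated counts exactly as in the sequential algorithm.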
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's hidden units come to represent important features of the task domain.
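As a minimal numeric illustration of the update rule described above (ours, not the paper's code), the sketch trains a single sigmoid unit by gradient descent on the squared error between actual and desired outputs; full back-propagation chains this same gradient computation backward through hidden layers.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One sigmoid unit learning AND: repeatedly nudge the weights opposite the
# gradient of the squared error between actual and desired outputs.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 0., 0., 1.])           # desired outputs (AND function)
w, b, lr = rng.normal(size=2), 0.0, 1.0

for _ in range(2000):
    y = sigmoid(X @ w + b)               # forward pass
    err = y - t                          # derivative of squared error w.r.t. output
    grad = err * y * (1 - y)             # back-propagate through the sigmoid
    w -= lr * (X.T @ grad)               # weight update
    b -= lr * grad.sum()

print(np.round(sigmoid(X @ w + b), 2))   # approaches [0, 0, 0, 1]
```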
Journal Article

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets; it runs on large clusters of commodity machines and is highly scalable.
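The canonical illustration of the programming model is word counting. The sketch below (ours) simulates the map, shuffle, and reduce phases in a single process; a real MapReduce runtime distributes the same three steps across a cluster and handles partitioning and fault tolerance.

```python
from collections import defaultdict
from itertools import chain

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    """Shuffle: group intermediate values by key."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: combine all values for each key (here, sum the counts)."""
    return {key: sum(values) for key, values in groups.items()}

splits = ["the quick brown fox", "the lazy dog", "the fox"]
counts = reduce_phase(shuffle(chain.from_iterable(map(map_phase, splits))))
print(counts)  # {'the': 3, 'quick': 1, 'brown': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```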
Journal Article

An information-maximization approach to blind separation and blind deconvolution

TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing; dependencies of information transfer on time delays are also derived.
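For reference, here is a compact sketch of the infomax learning rule in its standard natural-gradient form, ΔW ∝ (I + (1 - 2y)uᵀ)W with a logistic nonlinearity. This follows the textbook formulation rather than the paper's exact derivation, and the hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 5000
S = rng.laplace(size=(n, T))             # super-Gaussian sources
A = rng.normal(size=(n, n))
X = A @ S                                # observed linear mixtures

W, lr = np.eye(n), 0.01
for _ in range(100):                     # passes over the data
    for start in range(0, T, 100):       # mini-batches of 100 samples
        u = W @ X[:, start:start + 100]
        y = 1.0 / (1.0 + np.exp(-u))     # logistic nonlinearity
        # Natural-gradient infomax update, averaged over the batch:
        W += lr * (np.eye(n) + (1 - 2 * y) @ u.T / u.shape[1]) @ W

print(np.round(W @ A, 2))  # ~ a scaled permutation matrix if separation succeeded
```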
Journal Article

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a space of reduced dimensions that preserves the systematic variation.
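A minimal PCA sketch (ours): project centered data onto the top-k right singular vectors, so the leading components capture the systematic variation and the trailing ones absorb the noise.

```python
import numpy as np

def pca(X, k):
    """Project data onto the top-k principal components via the SVD of the
    centered data matrix; returns scores, loadings, and component variances."""
    Xc = X - X.mean(axis=0)
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], s**2 / (len(X) - 1)

# Data with 2 systematic factors embedded in 10 noisy dimensions.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
X = latent @ rng.normal(size=(2, 10)) + 0.05 * rng.normal(size=(500, 10))
scores, loadings, var = pca(X, k=2)
print(var.round(3))  # variance concentrates in the first two components
```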
Book

Clustering Algorithms
