Open Access Proceedings Article

Map-Reduce for Machine Learning on Multicore

TLDR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and reports essentially linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
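To make the "summation form" concrete, here is a small illustrative sketch (ours, not the authors' code): each mapper computes the partial sums X^T X and X^T y over its shard of the data, and a reducer adds the partials and solves the normal equations. Ordinary least squares stands in for the paper's weighted LWLR, and Python's multiprocessing pool stands in for a multicore map-reduce engine; the shard count, function names, and variable names are illustrative assumptions.

```python
# Minimal sketch of summation-form parallelization (not the paper's code):
# mappers compute partial sums over their data shards, a reducer combines them.
import numpy as np
from multiprocessing import Pool

def map_partial_sums(shard):
    X, y = shard
    return X.T @ X, X.T @ y              # partial statistics for this shard

def reduce_and_solve(partials):
    A = sum(p[0] for p in partials)      # add up the mappers' partial sums
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)         # theta for ordinary least squares

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=10_000)
    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
    with Pool(4) as pool:                # one worker process per core
        theta = reduce_and_solve(pool.map(map_partial_sums, shards))
    print(theta)                         # approximately [1, 2, 3, 4, 5]
```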



Citations
Journal Article

Scalable clustering algorithms for continuous environmental flow cytometry

TL;DR: This work proposes a Gaussian mixture model with a partitioning approach for classifying large-scale, high-frequency flow cytometry data; it outperforms current state-of-the-art cytometry classification algorithms in accuracy and can be coupled with manual or automatic partitioning of the data into homogeneous sections for further classification gains.
Proceedings Article

CAF: Core to Core Communication Acceleration Framework

TL;DR: This paper proposes a novel core-to-core (C2C) Communication Acceleration Framework (CAF) that combines hardware and software optimizations to reduce queue-induced communication overheads, improving overall system performance by 2-12× over traditional software queue implementations.
Proceedings Article

Ex-MATE: Data Intensive Computing with Large Reduction Objects and Its Application to Graph Mining

TL;DR: This paper describes a system, Extended MATE or Ex-MATE, which supports an alternate map-reduce-style API with reduction objects of arbitrary sizes, and develops support for managing disk-resident reduction objects and updating them efficiently.
Proceedings Article

Large-scale music tag recommendation with explicit multiple attributes

TL;DR: This work proposes a tag-recommendation scheme using Explicit Multiple Attributes, based on tag semantic similarity and music content, that is both effective at recommending attribute-diverse, relevant tags and efficient enough for scalable processing.
Proceedings Article

Ensemble-based Feature Selection and Classification Model for DNS Typo-squatting Detection

TL;DR: In this paper, an ensemble-based feature selection and bagging classification model is proposed to detect DNS typo-squatting attacks, which can lead to information theft and corporate secret leakage and can facilitate fraud.
References
Journal Article

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, internal "hidden" units come to represent important features of the task domain.
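As a toy illustration of that weight-adjustment idea (our sketch, not the paper's formulation), the snippet below performs one backpropagation step for a single-hidden-layer sigmoid network trained to reduce squared output error; the network shape, loss, and learning rate are illustrative assumptions.

```python
# One backpropagation step for a tiny sigmoid network with squared-error loss.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, target, lr=0.1):
    # forward pass
    h = sigmoid(W1 @ x)                      # hidden activations
    out = sigmoid(W2 @ h)                    # actual output vector
    # backward pass: gradients of 0.5 * ||out - target||^2
    delta_out = (out - target) * out * (1 - out)
    delta_hid = (W2.T @ delta_out) * h * (1 - h)
    # adjust weights down the error gradient
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_hid, x)
    return W1, W2
```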
Journal Article

MapReduce: simplified data processing on large clusters

TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
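The programming model itself is small enough to sketch in a few lines. Below is a single-process word-count sketch (our code, with made-up function names); the real system distributes the same user-supplied map and reduce functions across a cluster and handles partitioning, scheduling, and fault tolerance.

```python
# Single-process sketch of the MapReduce programming model (word count).
from collections import defaultdict

def map_fn(document):
    for word in document.split():
        yield word, 1                        # emit (key, value) pairs

def reduce_fn(word, counts):
    return word, sum(counts)                 # combine all values for one key

def run_mapreduce(documents):
    grouped = defaultdict(list)              # "shuffle": group values by key
    for doc in documents:
        for key, value in map_fn(doc):
            grouped[key].append(value)
    return dict(reduce_fn(k, v) for k, v in grouped.items())

print(run_mapreduce(["the cat sat", "the cat ran"]))
# {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```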
Journal Article

An information-maximization approach to blind separation and blind deconvolution

TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing, and the dependence of information transfer on time delays is derived.
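For a concrete sense of the approach, the sketch below shows an infomax-style ICA batch update in the natural-gradient form commonly used with this method (the original rule instead involves the inverse transpose of the unmixing matrix); the logistic nonlinearity, learning rate, and variable names are illustrative assumptions.

```python
# Infomax-style ICA update (natural-gradient form) for a square unmixing matrix.
import numpy as np

def infomax_step(W, X, lr=0.01):
    """One batch update. X has shape (d, n): d mixed channels, n samples."""
    n = X.shape[1]
    U = W @ X                                # current source estimates
    Y = 1.0 / (1.0 + np.exp(-U))             # logistic nonlinearity g(u)
    # dW = lr * (I + (1 - 2*g(U)) U^T / n) W
    dW = lr * (np.eye(W.shape[0]) + (1.0 - 2.0 * Y) @ U.T / n) @ W
    return W + dW
```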
Journal Article

Principal component analysis

TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a space of reduced dimensions that preserves the systematic variation.
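A minimal PCA sketch (ours, illustrative only): center the data, form the sample covariance matrix, and project onto its top-k eigenvectors, which span the directions of largest variance.

```python
# Minimal PCA via eigendecomposition of the sample covariance matrix.
import numpy as np

def pca(X, k):
    """X has shape (n_samples, n_features); returns (scores, components)."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = Xc.T @ Xc / (len(X) - 1)           # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    components = eigvecs[:, ::-1][:, :k]     # top-k principal directions
    return Xc @ components, components
```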
Book

Clustering Algorithms
