Open Access Proceedings Article
Map-Reduce for Machine Learning on Multicore
Cheng-Tao Chu, Sang K. Kim, Yi-an Lin, Yuanyuan Yu, Gary Bradski, Kunle Olukotun, Andrew Y. Ng
Vol. 19, pp. 281-288
TL;DR: This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates basically linear speedup with an increasing number of processors.
Abstract: We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
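To make the "summation form" idea concrete, the sketch below (not from the paper; numpy and Python's multiprocessing stand in for a real map-reduce runtime, and all names are illustrative) parallelizes the normal equations of weighted linear regression: each mapper computes its shard's contribution to A = Σᵢ wᵢ xᵢ xᵢᵀ and b = Σᵢ wᵢ xᵢ yᵢ, and a single reducer adds the partial sums and solves A θ = b.

```python
# Illustrative sketch of the "summation form" map-reduce pattern for
# (locally) weighted linear regression: A = sum_i w_i x_i x_i^T, b = sum_i w_i x_i y_i.
# numpy + multiprocessing stand in for a real map-reduce runtime.
import numpy as np
from multiprocessing import Pool

def map_partial_sums(shard):
    """Mapper: compute this shard's contribution to A and b."""
    X, y, w = shard
    Xw = X * w[:, None]
    return Xw.T @ X, Xw.T @ y

def reduce_and_solve(partials):
    """Reducer: add the partial sums and solve the normal equations A theta = b."""
    A = sum(p[0] for p in partials)
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=100_000)
    w = np.ones(100_000)                      # uniform weights = ordinary least squares
    shards = list(zip(np.array_split(X, 4), np.array_split(y, 4), np.array_split(w, 4)))
    with Pool(4) as pool:                     # one mapper per core
        theta = reduce_and_solve(pool.map(map_partial_sums, shards))
    print(theta)
```

Because each shard's partial sums are small fixed-size matrices, the reduce step is cheap and the overall cost is dominated by the data-parallel map phase, which is consistent with the roughly linear scaling the paper reports.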
Citations
Dissertation
Hadoop extensions for distributed computing on reconfigurable active SSD
TL;DR: Proposes new extensions to Hadoop that enable clusters of reconfigurable active solid-state drives (RASSDs) to process streaming data from SSDs using FPGAs, and demonstrates their impact on performance for different workloads taken from Stanford's Phoenix MapReduce project.
Proceedings Article
Scalable Complex Query Processing over Large Semantic Web Data Using Cloud
TL;DR: A scalable Semantic Web framework built using cloud computing technologies that efficiently answers not only queries with Basic Graph Patterns (BGP) but also complex queries with optional blocks.
Journal Article
A vlHMM approach to context-aware search
TL;DR: A general approach to context-aware search that learns a variable-length hidden Markov model (vlHMM) from search sessions extracted from log data, with several distributed learning techniques developed to train a very large vlHMM under the map-reduce framework.
High Performance Machine Learning through Codesign and Rooflining
TL;DR: Describes two new approaches, butterfly mixing and "Kylix", which cover the requirements of machine learning and graph algorithms respectively, and gives roofline bounds for both approaches.
Journal Article
Evaluating Point-Based POMDP Solvers on Multicore Machines
TL;DR: Evaluates several ways in which point-based POMDP algorithms can be adapted to parallel computing and presents experimental results, providing evidence for the usability of the suggested approaches.
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's internal units come to represent important features of the task domain.
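As a rough illustration of the mechanism summarized above (a sketch under my own assumptions, not code from the cited paper), the following numpy snippet trains a one-hidden-layer network by repeatedly propagating the output error backwards and adjusting the weights to reduce the squared difference between actual and desired outputs.

```python
# Minimal back-propagation sketch for a one-hidden-layer network:
# gradient descent on the mean squared error between output and target.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                    # inputs
T = np.sin(X.sum(axis=1, keepdims=True))         # desired outputs (toy target)

W1, b1 = rng.normal(scale=0.5, size=(3, 8)), np.zeros(8)
W2, b2 = rng.normal(scale=0.5, size=(8, 1)), np.zeros(1)
lr = 0.05

for _ in range(2000):
    # forward pass
    H = np.tanh(X @ W1 + b1)                     # hidden activations
    Y = H @ W2 + b2                              # network output
    # backward pass: propagate the error derivative through each layer
    dY = 2 * (Y - T) / len(X)                    # d(mean squared error)/dY
    dW2, db2 = H.T @ dY, dY.sum(axis=0)
    dH = (dY @ W2.T) * (1 - H**2)                # tanh'(a) = 1 - tanh(a)^2
    dW1, db1 = X.T @ dH, dH.sum(axis=0)
    # weight update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

H = np.tanh(X @ W1 + b1)
print("final MSE:", float(np.mean((H @ W2 + b2 - T) ** 2)))
```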
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat
TL;DR: Presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
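For readers unfamiliar with the model, here is a minimal single-machine illustration of the MapReduce programming style (an illustrative sketch, not the paper's implementation): a map function emits key-value pairs, the pairs are grouped by key, and a reduce function combines each key's values.

```python
# Word count in the MapReduce style. A real MapReduce system would distribute
# the map, shuffle, and reduce steps over a cluster; this sketch runs them locally.
from collections import defaultdict

def map_fn(document):
    """Map: emit a (word, 1) pair for every word in the document."""
    for word in document.split():
        yield word.lower(), 1

def reduce_fn(word, counts):
    """Reduce: combine all values emitted for one key."""
    return word, sum(counts)

def map_reduce(documents):
    grouped = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            grouped[key].append(value)           # shuffle: group values by key
    return dict(reduce_fn(k, v) for k, v in grouped.items())

print(map_reduce(["the quick brown fox", "the lazy dog", "The fox"]))
```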
Journal Article
An information-maximization approach to blind separation and blind deconvolution
TL;DR: Suggests that information maximization provides a unifying framework for problems in "blind" signal processing, and derives the dependencies of information transfer on time delays.
Journal Article
Principal component analysis
TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a space of reduced dimensions that preserves the systematic variation.
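A minimal sketch of that use of PCA (illustrative only, assuming numpy): center the data, compute its covariance matrix, and project onto the top-k eigenvectors, which capture the largest systematic variation while the discarded directions mostly contain noise.

```python
# Minimal PCA sketch: project centered data onto the top-k eigenvectors of the
# sample covariance matrix, keeping the directions of largest variation.
import numpy as np

def pca(X, k):
    Xc = X - X.mean(axis=0)                      # center each variable
    cov = (Xc.T @ Xc) / (len(X) - 1)             # sample covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)       # eigenvalues in ascending order
    top = eigvecs[:, ::-1][:, :k]                # top-k principal directions
    return Xc @ top, eigvals[::-1][:k]           # scores and explained variances

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2)) @ np.array([[3.0, 1.0], [0.0, 0.5]]) \
    + rng.normal(scale=0.1, size=(500, 2))       # correlated toy data plus noise
scores, variances = pca(X, k=1)
print(variances)
```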