Open Access Proceedings Article
Map-Reduce for Machine Learning on Multicore
Cheng-Tao Chu, Sang K. Kim, Yi-an Lin, Yuanyuan Yu, Gary Bradski, Kunle Olukotun, Andrew Y. Ng
Vol. 19, pp. 281–288
TL;DR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates basically linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show basically linear speedup with an increasing number of processors.
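To make the summation form concrete, here is a minimal sketch (illustrative, not the authors' implementation) of the simplest case, plain linear regression, the unweighted special case of LWLR: each core maps over its shard of the data to produce the partial sufficient statistics A_k = sum_i x_i x_i^T and b_k = sum_i x_i y_i, and a single reduce step adds them before solving the normal equations. Python's multiprocessing.Pool stands in for one worker per core; all function names here are hypothetical.

```python
# Sketch of the "summation form" parallelized in map-reduce style.
import numpy as np
from multiprocessing import Pool

def partial_sums(shard):
    """Map step: sufficient statistics for one shard of (X, y)."""
    X, y = shard
    return X.T @ X, X.T @ y

def fit_linear_regression(X, y, n_cores=4):
    shards = list(zip(np.array_split(X, n_cores), np.array_split(y, n_cores)))
    with Pool(n_cores) as pool:
        partials = pool.map(partial_sums, shards)  # map phase, one shard per core
    A = sum(p[0] for p in partials)                # reduce phase: add the
    b = sum(p[1] for p in partials)                # per-shard partial sums
    return np.linalg.solve(A, b)                   # solve A @ theta = b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.arange(1.0, 6.0) + 0.01 * rng.normal(size=10_000)
    print(fit_linear_regression(X, y))  # approximately [1. 2. 3. 4. 5.]
```

Every algorithm the abstract lists admits the same decomposition: only the map step (the per-example statistic) changes, which is why one framework covers all of them.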
Citations
Book Chapter
EML: A Scalable, Transparent Meta-Learning Paradigm for Big Data Applications
TL;DR: A meta-learning paradigm, EML, that combines techniques from evolutionary computation and supervised learning is described and analyzed, yielding a powerful approach for inducing transparent models for big data ML applications.
Journal Article
A Survey on MapReduce Implementations
TL;DR: This survey presents a comprehensive review of the most successful implementations of the MapReduce framework reported in the literature and discusses their main strengths and weaknesses.
Journal Article
Efficient machine learning on data science languages with parallel data summarization
TL;DR: In this article, the authors present an efficient way to compute statistical and machine learning models with parallel data summarization that works with popular data science languages such as R and Python.
Journal Article
Study of Various Parallel Implementations of Association Rule Mining Algorithm
TL;DR: A comparative study of various parallel implementations of the Apriori technique is given, focusing on the advantages of MapReduce, the latest technology used to parallelize the mining of large datasets.
Journal Article
Effect of garbage collection in iterative algorithms on Spark: an experimental analysis
Minseo Kang, Jae-Gil Lee
TL;DR: The experimental results show that garbage collection accounts for 16–47% of the total elapsed time of running iterative algorithms on Spark and that the memory cache is no less advantageous in terms of garbage collection than the disk cache.
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the network's internal units come to represent important features of the task domain.
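As a sketch of the update rule this summary describes (an illustration, not the paper's exact network), a one-hidden-layer net of sigmoid units can adjust its weights by gradient descent on the squared difference between actual and desired outputs:

```python
# Illustrative single-step back-propagation for a one-hidden-layer network.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def backprop_step(W1, W2, x, target, lr=0.1):
    # Forward pass through hidden layer and output layer.
    h = sigmoid(W1 @ x)
    out = sigmoid(W2 @ h)
    # Backward pass: propagate the output error back through each layer.
    delta_out = (out - target) * out * (1 - out)
    delta_h = (W2.T @ delta_out) * h * (1 - h)
    # Repeatedly applying these updates minimizes the output-error measure.
    W2 -= lr * np.outer(delta_out, h)
    W1 -= lr * np.outer(delta_h, x)
    return W1, W2
```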
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on a large cluster of commodity machines and is highly scalable.
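The model reduces to two user-supplied functions, map and reduce. A single-process sketch of the interface using the canonical word-count example (the real system distributes both phases across a cluster):

```python
# Word count expressed in the two-function MapReduce interface.
from collections import defaultdict
from itertools import chain

def map_phase(document):
    # Emit one (key, value) pair per word.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Group by key and combine the values (the shuffle is implicit here).
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

docs = ["map reduce on multicore", "map then reduce"]
print(reduce_phase(chain.from_iterable(map_phase(d) for d in docs)))
# {'map': 2, 'reduce': 2, 'on': 1, 'multicore': 1, 'then': 1}
```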
Journal Article
An information-maximization approach to blind separation and blind deconvolution
TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing, and dependencies of information transfer on time delays are derived.
Journal Article
Principal component analysis
TL;DR: Principal Component Analysis is a multivariate exploratory analysis method useful for separating systematic variation from noise and for defining a space of reduced dimensionality that preserves the systematic variation.
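For illustration (a sketch of the standard formulation, not tied to this article's implementation), PCA can be computed by centering the data and projecting onto the leading right singular vectors of the SVD:

```python
# PCA via the SVD: project centered data onto the leading principal directions.
import numpy as np

def pca(X, n_components):
    X_centered = X - X.mean(axis=0)
    # Rows of Vt are the principal directions, ordered by variance explained.
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T  # reduced-dimension scores
```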