Open Access · Proceedings Article
Map-Reduce for Machine Learning on Multicore
Cheng-Tao Chu, Sang K. Kim, Yi-an Lin, Yuanyuan Yu, Gary Bradski, Kunle Olukotun, Andrew Y. Ng
Vol. 19, pp. 281–288
TL;DR
This work shows that algorithms that fit the Statistical Query model can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers, and demonstrates essentially linear speedup with an increasing number of processors.
Abstract
We are at the beginning of the multicore era. Computers will have increasingly many cores (processors), but there is still no good programming framework for these architectures, and thus no simple and unified way for machine learning to take advantage of the potential speedup. In this paper, we develop a broadly applicable parallel programming method, one that is easily applied to many different learning algorithms. Our work is in distinct contrast to the tradition in machine learning of designing (often ingenious) ways to speed up a single algorithm at a time. Specifically, we show that algorithms that fit the Statistical Query model [15] can be written in a certain "summation form," which allows them to be easily parallelized on multicore computers. We adapt Google's map-reduce [7] paradigm to demonstrate this parallel speedup technique on a variety of learning algorithms, including locally weighted linear regression (LWLR), k-means, logistic regression (LR), naive Bayes (NB), SVM, ICA, PCA, Gaussian discriminant analysis (GDA), EM, and backpropagation (NN). Our experimental results show essentially linear speedup with an increasing number of processors.
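To make the "summation form" idea concrete, here is a minimal sketch (not the authors' code) of how one of the listed algorithms, least-squares regression as in LWLR with uniform weights, parallelizes: each mapper computes partial sums A_p = Σ xᵢxᵢᵀ and b_p = Σ xᵢyᵢ over its data partition, and a single reducer adds the partials and solves Aθ = b. The `multiprocessing` setup, partition count, and synthetic data are all illustrative assumptions.

```python
# Sketch of summation-form map-reduce for least squares (illustrative only).
import numpy as np
from multiprocessing import Pool

def mapper(partition):
    X, y = partition
    return X.T @ X, X.T @ y          # partial sufficient statistics

def reducer(partials):
    A = sum(p[0] for p in partials)  # aggregate the partial sums
    b = sum(p[1] for p in partials)
    return np.linalg.solve(A, b)     # normal equations: A theta = b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(10_000, 5))
    y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + 0.1 * rng.normal(size=10_000)
    partitions = list(zip(np.array_split(X, 4), np.array_split(y, 4)))
    with Pool(4) as pool:            # one worker per data partition
        theta = reducer(pool.map(mapper, partitions))
    print(theta)                     # recovers the true coefficients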
Citations
Proceedings Article
Food Image Recognition Using Pervasive Cloud Computing
TL;DR: SIFT and Gabor descriptors are proposed as features, with a k-means algorithm for feature clustering, and a pervasive cloud computing paradigm is adopted to meet the heavy computing requirements of large numbers of concurrent food image recognition requests.
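For context, a hypothetical sketch of the feature-clustering step this summary mentions: k-means over local descriptor vectors (random stand-ins for SIFT/Gabor features here) builds a visual vocabulary, and an image is then represented as a histogram of its descriptors' cluster assignments. Nothing below comes from the paper itself.

```python
# Toy bag-of-visual-words pipeline with a minimal numpy k-means (illustrative).
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]     # random init
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):                                # recompute centroids
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

rng = np.random.default_rng(1)
descriptors = rng.normal(size=(500, 128))        # fake 128-D SIFT-like vectors
vocab = kmeans(descriptors, k=16)                # "visual vocabulary"
image_desc = rng.normal(size=(60, 128))          # descriptors from one image
words = np.argmin(((image_desc[:, None] - vocab) ** 2).sum(-1), axis=1)
histogram = np.bincount(words, minlength=16)     # bag-of-visual-words feature
print(histogram)
```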
Book Chapter
Data Intensive Computing for Bioinformatics
Posted Content
Scalable Data Cube Analysis over Big Data
TL;DR: HaCube, an extension of MapReduce designed for efficient parallel data cube analysis on large-scale data, is introduced; it combines the advantages of MapReduce and parallel DBMSs, providing a general data cube materialization algorithm that exploits the features of MapReduce-like systems for efficient cube computation.
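As a rough illustration of what "data cube materialization" means (a plain single-process sketch, not HaCube): aggregate a fact table over every subset of its dimension attributes. The table, column names, and SUM aggregate are invented for the example.

```python
# Full cube materialization over all group-by combinations (illustrative).
from itertools import combinations
from collections import defaultdict

rows = [                       # (store, product, month, sales)
    ("s1", "p1", "jan", 10), ("s1", "p2", "jan", 5),
    ("s2", "p1", "feb", 7),  ("s2", "p2", "feb", 3),
]
dims, measure = ("store", "product", "month"), 3

cube = {}
for r in range(len(dims) + 1):                 # every subset of dimensions
    for group in combinations(range(len(dims)), r):
        agg = defaultdict(int)
        for row in rows:
            key = tuple(row[i] for i in group) # project onto chosen dims
            agg[key] += row[measure]           # SUM aggregate
        cube[tuple(dims[i] for i in group)] = dict(agg)

print(cube[("store",)])                        # {('s1',): 15, ('s2',): 10}
```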
Book Chapter
Massively parallel feature selection: an approach based on variance preservation
TL;DR: A novel large-scale feature selection algorithm based on variance analysis is presented; it selects features by evaluating their ability to explain data variance and can be readily implemented in most distributed computing environments.
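A toy sketch of one variance-based selection criterion in the spirit of this summary (not the chapter's actual algorithm): rank features by the share of total variance each explains on its own and keep the top k. The data and the cutoff are illustrative.

```python
# Rank features by the fraction of total variance they explain (illustrative).
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(1000, 10)) * np.arange(1, 11)   # feature j has std j+1
variances = X.var(axis=0)                            # per-feature variance
share = variances / variances.sum()                  # fraction of total variance
top_k = np.argsort(share)[::-1][:3]                  # keep the 3 strongest
print(top_k, share[top_k])
```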
Journal Article
Twister2: Design of a big data toolkit
Supun Kamburugamuve, Kannan Govindarajan, Pulasthi Wickramasinghe, Vibhatha Abeykoon, Geoffrey C. Fox
TL;DR: This paper presents a loosely coupled, component-based design of a big data toolkit in which each component can have different implementations to support various applications, allowing services and data analytics to be integrated seamlessly and to span edge, cloud, and HPC environments.
References
Journal Article
Learning representations by back-propagating errors
TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result, the hidden units come to represent important features of the task domain.
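A minimal numpy sketch of the procedure this summary describes: a forward pass, a squared-error measure of the output/target difference, gradients by the chain rule, and repeated weight adjustments. The architecture, data, and learning rate are illustrative assumptions, not from the paper.

```python
# Two-layer network trained by back-propagation on an XOR-like task (illustrative).
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(64, 2))
t = (X[:, :1] * X[:, 1:] > 0).astype(float)   # XOR-of-signs target
W1, W2 = rng.normal(size=(2, 8)), rng.normal(size=(8, 1))

for step in range(5000):
    h = np.tanh(X @ W1)                        # hidden layer
    y = 1 / (1 + np.exp(-(h @ W2)))            # sigmoid output
    g2 = (y - t) * y * (1 - y)                 # dE/d(pre-sigmoid), E = 0.5*sum((y-t)^2)
    g1 = (g2 @ W2.T) * (1 - h ** 2)            # back-propagate through tanh
    W2 -= 0.5 * (h.T @ g2) / len(X)            # repeated weight adjustments
    W1 -= 0.5 * (X.T @ g1) / len(X)

print("mean squared error:", float(((y - t) ** 2).mean()))
```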
Journal Article
MapReduce: simplified data processing on large clusters
Jeffrey Dean, Sanjay Ghemawat
TL;DR: This paper presents MapReduce, a programming model and associated implementation for processing and generating large data sets, which runs on large clusters of commodity machines and is highly scalable.
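A minimal, single-process sketch of the programming model this summary describes: map emits (key, value) pairs, the framework groups values by key, and reduce folds each group. Word count is the paper's classic illustration; the toy runner below is our own stand-in for the real distributed framework.

```python
# Single-process imitation of the MapReduce model: map, shuffle, reduce (illustrative).
from collections import defaultdict

def map_fn(document):
    return [(word, 1) for word in document.split()]   # emit (key, value) pairs

def reduce_fn(key, values):
    return key, sum(values)                           # fold one key's values

def run_mapreduce(inputs):
    groups = defaultdict(list)
    for doc in inputs:                                # "map" phase
        for key, value in map_fn(doc):
            groups[key].append(value)                 # shuffle: group by key
    return dict(reduce_fn(k, vs) for k, vs in groups.items())

print(run_mapreduce(["the quick fox", "the lazy dog", "the fox"]))
# {'the': 3, 'quick': 1, 'fox': 2, 'lazy': 1, 'dog': 1}
```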
Journal Article
An information-maximization approach to blind separation and blind deconvolution
TL;DR: It is suggested that information maximization provides a unifying framework for problems in "blind" signal processing, and dependencies of information transfer on time delays are derived.
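For the curious, a compact numpy sketch of the natural-gradient infomax update commonly associated with this approach, ΔW ∝ (I + (1 − 2y)uᵀ)W with u = Wx and y = σ(u). The sources, mixing matrix, and step size below are made up, and this is a textbook rendering rather than the paper's own code.

```python
# Natural-gradient infomax ICA on a synthetic 2-source mixture (illustrative).
import numpy as np

rng = np.random.default_rng(4)
n = 5000
S = np.vstack([rng.laplace(size=n),                  # two super-Gaussian sources
               np.sign(rng.normal(size=n)) * rng.exponential(size=n)])
A = np.array([[1.0, 0.6], [0.4, 1.0]])               # unknown mixing matrix
X = A @ S                                            # observed mixtures
W = np.eye(2)                                        # unmixing estimate

for epoch in range(50):
    for batch in np.array_split(X.T, 100):           # mini-batches of mixtures
        U = batch @ W.T
        Y = 1 / (1 + np.exp(-U))                     # logistic nonlinearity
        # natural-gradient infomax step: W += lr * (I + (1-2Y)^T U / B) W
        W += 0.01 * (np.eye(2) + (1 - 2 * Y).T @ U / len(batch)) @ W

print(W @ A)   # approaches a scaled permutation matrix when separation works
```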
Journal Article
Principal component analysis
TL;DR: Principal component analysis is a multivariate exploratory method useful for separating systematic variation from noise and for defining a space of reduced dimensions that preserves the systematic variation.
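To ground the summary, a short numpy sketch of the standard recipe: centre the data, eigendecompose the covariance matrix, and project onto the leading eigenvectors, which capture the systematic variation while the trailing dimensions mostly carry noise. The synthetic data below is an assumption for the demo.

```python
# PCA via eigendecomposition of the sample covariance matrix (illustrative).
import numpy as np

rng = np.random.default_rng(5)
Z = rng.normal(size=(500, 2)) @ np.array([[3.0, 0.0], [1.0, 0.5]])  # systematic part
X = np.hstack([Z, 0.05 * rng.normal(size=(500, 3))])                # plus noise dims
Xc = X - X.mean(axis=0)                       # centre each variable
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (len(X) - 1))
order = np.argsort(evals)[::-1]               # components by explained variance
scores = Xc @ evecs[:, order[:2]]             # project onto top 2 components
print(evals[order] / evals.sum())             # variance explained per component
```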