
Showing papers by "Charles W. Anderson published in 2010"


Journal ArticleDOI
TL;DR: Both the motor and visual loops were involved in the acquisition of categorization ability: activity during correct categorization increased across learning and was sensitive to reward prediction, consistent with the executive loop's role in integrating reward and action.

83 citations


Proceedings ArticleDOI
24 Apr 2010
TL;DR: This paper shows that, for a specific class of programs, fairly accurate TSS models can be automatically created using a combination of simple program features, synthetic kernels, and standard machine learning techniques.
Abstract: Tiling is a widely used loop transformation for exposing and exploiting parallelism and data locality. Effective use of tiling requires selection and tuning of the tile sizes. This is usually achieved by hand-crafting tile size selection (TSS) models that characterize the performance of the tiled program as a function of tile sizes. The best tile sizes are selected either by using the TSS model directly or by combining the TSS model with an empirical search. Hand-crafting accurate TSS models is hard, and adapting them to a different architecture or compiler, or even keeping them up-to-date as a single compiler evolves, is often just as hard. Instead of hand-crafting TSS models, can we learn or create them automatically? In this paper, we show that, for a specific class of programs, fairly accurate TSS models can be created automatically using a combination of simple program features, synthetic kernels, and standard machine learning techniques. The automatic TSS model generation scheme can also be used directly to adapt the model and/or keep it up-to-date. We evaluate our scheme on six architecture-compiler combinations (drawn from three different architectures and four different compilers). The models learned by our method consistently show near-optimal performance (within 5% of optimal on average) across all architecture-compiler combinations.
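The abstract does not spell out the paper's features or learner, so the Python sketch below only illustrates the general idea under stated assumptions: time synthetic kernels at sampled tile sizes, train a regression model that maps tile sizes plus simple program features to runtime, then query it to rank candidate tile sizes. The feature set, the toy cost function, and the choice of a random-forest regressor are all hypothetical, not the authors' setup.

```python
# Hypothetical sketch of learning a tile-size-selection (TSS) model.
# Training data is assumed to come from timing synthetic kernels at
# sampled tile sizes; features and learner are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Invented training set: rows are (tile_i, tile_j, loop_depth,
# working_set_kb), labeled with a toy "measured" runtime.
X_train = rng.uniform([4, 4, 2, 16], [256, 256, 4, 4096], size=(500, 4))
y_train = (X_train[:, 3] / (X_train[:, 0] * X_train[:, 1])  # toy cost model
           + 0.01 * X_train[:, 2]
           + rng.normal(0.0, 0.01, size=500))

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# At tuning time, predict runtime for candidate tile sizes of a new
# kernel and pick the fastest, optionally seeding an empirical search.
candidates = np.array([[ti, tj, 3.0, 512.0]
                       for ti in (8, 16, 32, 64, 128)
                       for tj in (8, 16, 32, 64, 128)])
best = candidates[np.argmin(model.predict(candidates))]
print("predicted best (tile_i, tile_j):", best[0], best[1])
```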

59 citations


Proceedings ArticleDOI
30 Nov 2010
TL;DR: The suitability of using Granules to classify multiple EEG streams in a distributed environment is demonstrated, and the approach is contrasted with one based on Snowfall, an R package that simplifies execution of R computations in a distributed setting.
Abstract: Brain Computer Interfaces (BCIs) allow users to interact with a computer via electroencephalogram (EEG) signals generated by their brain. The BCI application that we consider allows a user to initiate actions such as keyboard input or control of their wheelchair's motion. Our goal is to be able to train the neural network and classify the EEG signals from multiple users to infer their intended actions in a distributed environment. The processing is developed using the Map-Reduce framework. We use our cloud runtime, Granules, to classify these EEG streams. One of our objectives is to be able to process these EEG streams in real-time. The BCI software has been developed in R, which is an interpreted language designed for the fast computation of matrix multiplications, making it an effective language for the development of artificial neural networks. We contrast our approach of using Granules with a competing approach that uses Snowfall, an R package that simplifies execution of R computations in a distributed setting. We have performed experiments to evaluate the costs introduced by our scheme for training the neural networks and classifying the EEG signals. Our results demonstrate the suitability of using Granules to classify multiple EEG streams in a distributed environment.
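As a rough illustration of the map-reduce decomposition described above (not the authors' Granules/R implementation), the sketch below partitions per-user EEG feature windows across worker processes, classifies each partition with a shared pre-trained model (map), and merges the per-user predictions (reduce). The logistic-regression stand-in for their neural network, the data, and all names are invented for this sketch.

```python
# Hypothetical map-reduce-style classification of multiple EEG streams.
# The pool of local processes stands in for a distributed runtime such
# as Granules; logistic regression stands in for the trained ANN.
from multiprocessing import Pool
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Invented pre-trained classifier over 8-dimensional EEG features.
X, y = rng.normal(size=(200, 8)), rng.integers(0, 2, 200)
clf = LogisticRegression().fit(X, y)

def classify_stream(args):
    """Map step: classify one user's windowed EEG features."""
    user_id, windows = args
    return user_id, clf.predict(windows)

if __name__ == "__main__":
    # One entry per user: (user_id, array of EEG feature windows).
    streams = [(u, rng.normal(size=(10, 8))) for u in range(4)]
    with Pool(4) as pool:
        results = pool.map(classify_stream, streams)    # map
    actions = {user: preds for user, preds in results}  # reduce
    print(actions)
```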

28 citations


Journal ArticleDOI
TL;DR: In this paper, artificial neural networks (ANNs) are applied to efficient modeling of stream-aquifer responses in an intensively irrigated river basin under a variety of water management alternatives aimed at improving irrigation efficiency, reducing soil water salinity, increasing crop yields, controlling nonbeneficial consumptive use, and decreasing salt loadings to the river.
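A minimal sketch of the surrogate-modeling idea, assuming (hypothetically; the paper's actual variables and data are not reproduced here) that the ANN maps management inputs to a simulated basin response:

```python
# Hypothetical ANN surrogate for stream-aquifer response: train an MLP
# to map management inputs (e.g., irrigation efficiency, pumping rate)
# to a simulated output (e.g., return flow). Inputs, output, and data
# are invented for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
# Inputs: [irrigation_efficiency, pumping_rate]; output: toy return flow.
X = rng.uniform([0.4, 0.0], [0.9, 5.0], size=(300, 2))
y = 2.0 * (1 - X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 300)

surrogate = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000,
                         random_state=0).fit(X, y)
# Evaluate a management alternative without rerunning the full model.
print(surrogate.predict([[0.75, 2.0]]))
```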

19 citations


Book ChapterDOI
01 Jan 2010
TL;DR: Observations include that using a large amount of covariance regularization consistently provides classification accuracy as good as, if not better than, using little or no covariance regularization, and that random projection complements covariance regularization.
Abstract: This paper studies the effect of covariance regularization on classification of high-dimensional data. This is done by fitting a mixture of Gaussians with a regularized covariance matrix to each class. Three data sets are chosen to suggest that the results are applicable to any domain with high-dimensional data. The regularization needs of the data when pre-processed using the dimensionality reduction techniques principal component analysis (PCA) and random projection are also compared. Observations include that using a large amount of covariance regularization consistently provides classification accuracy as good as, if not better than, using little or no covariance regularization. The results also indicate that random projection complements covariance regularization.
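A minimal sketch of the scheme described above, under simplifying assumptions: one Gaussian per class rather than a mixture, and shrinkage of the sample covariance toward a scaled identity as the regularizer (the chapter's exact regularization is not reproduced here). An optional random-projection pre-processing step is included for comparison.

```python
# Hypothetical Gaussian classifier with shrinkage-regularized covariance:
# fit one Gaussian per class and assign each point to the class with the
# highest log-density. lam controls the amount of regularization.
import numpy as np
from scipy.stats import multivariate_normal

def fit_class_gaussians(X, y, lam=0.5):
    models = {}
    d = X.shape[1]
    for c in np.unique(y):
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        S = np.cov(Xc, rowvar=False)
        # Shrink toward a scaled identity so the covariance stays
        # well-conditioned even when dimensionality is high.
        S_reg = (1 - lam) * S + lam * (np.trace(S) / d) * np.eye(d)
        models[c] = multivariate_normal(mean=mu, cov=S_reg)
    return models

def predict(models, X):
    classes = sorted(models)
    ll = np.column_stack([models[c].logpdf(X) for c in classes])
    return np.array(classes)[np.argmax(ll, axis=1)]

def random_project(X, k, seed=0):
    """Optional pre-processing: project to k dimensions before fitting."""
    R = np.random.default_rng(seed).normal(size=(X.shape[1], k))
    return X @ R / np.sqrt(k)
```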

2 citations