
Showing papers by "Byron M. Yu published in 2014"


Journal ArticleDOI
TL;DR: This review examines three important motivations for population studies (single-trial hypotheses requiring statistical power, hypotheses of population response structure, and exploratory analyses of large data sets) and offers practical advice about selecting methods and interpreting their outputs.
Abstract: Most sensory, cognitive and motor functions depend on the interactions of many neurons. In recent years, there has been rapid development and increasing use of technologies for recording from large numbers of neurons, either sequentially or simultaneously. A key question is what scientific insight can be gained by studying a population of recorded neurons beyond studying each neuron individually. Here, we examine three important motivations for population studies: single-trial hypotheses requiring statistical power, hypotheses of population response structure and exploratory analyses of large data sets. Many recent studies have adopted dimensionality reduction to analyze these populations and to find features that are not apparent at the level of individual neurons. We describe the dimensionality reduction methods commonly applied to population activity and offer practical advice about selecting methods and interpreting their outputs. This review is intended for experimental and computational researchers who seek to understand the role dimensionality reduction has had and can have in systems neuroscience, and who seek to apply these methods to their own data.
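As a minimal, hypothetical illustration of the kind of method the review surveys (none of this code is from the paper), PCA applied to simulated population activity can recover shared low-dimensional structure that is not apparent from any single neuron:

```python
import numpy as np

# Hypothetical illustration: simulate 50 neurons whose firing rates are
# driven by 3 shared latent signals plus independent noise, then use PCA
# to recover the low-dimensional structure -- the simplest of the
# dimensionality reduction methods the review surveys.
rng = np.random.default_rng(0)
n_neurons, n_timepoints, n_latents = 50, 1000, 3

latents = rng.standard_normal((n_timepoints, n_latents))    # shared signals
loading = rng.standard_normal((n_latents, n_neurons))       # mixing weights
activity = latents @ loading + 0.1 * rng.standard_normal((n_timepoints, n_neurons))

# PCA via eigendecomposition of the neuron-by-neuron covariance matrix
centered = activity - activity.mean(axis=0)
cov = centered.T @ centered / (n_timepoints - 1)
eigvals = np.linalg.eigvalsh(cov)[::-1]                     # descending order

# The top 3 eigenvalues should capture nearly all of the variance
explained = eigvals[:n_latents].sum() / eigvals.sum()
print(f"variance explained by 3 components: {explained:.3f}")
```

Because the simulated noise is small relative to the shared signals, the top three components account for nearly all of the population variance, even though each neuron individually looks like a noisy mixture.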

883 citations


Journal ArticleDOI
28 Aug 2014 - Nature
TL;DR: The results suggest that the existing structure of a network can shape learning, and offer a network-level explanation for the observation that the authors are more readily able to learn new skills when they are related to the skills that they already possess.
Abstract: Learning, whether motor, sensory or cognitive, requires networks of neurons to generate new activity patterns. As some behaviours are easier to learn than others, we asked if some neural activity patterns are easier to generate than others. Here we investigate whether an existing network constrains the patterns that a subset of its neurons is capable of exhibiting, and if so, what principles define this constraint. We employed a closed-loop intracortical brain-computer interface learning paradigm in which Rhesus macaques (Macaca mulatta) controlled a computer cursor by modulating neural activity patterns in the primary motor cortex. Using the brain-computer interface paradigm, we could specify and alter how neural activity mapped to cursor velocity. At the start of each session, we observed the characteristic activity patterns of the recorded neural population. The activity of a neural population can be represented in a high-dimensional space (termed the neural space), wherein each dimension corresponds to the activity of one neuron. These characteristic activity patterns comprise a low-dimensional subspace (termed the intrinsic manifold) within the neural space. The intrinsic manifold presumably reflects constraints imposed by the underlying neural circuitry. Here we show that the animals could readily learn to proficiently control the cursor using neural activity patterns that were within the intrinsic manifold. However, animals were less able to learn to proficiently control the cursor using activity patterns that were outside of the intrinsic manifold. These results suggest that the existing structure of a network can shape learning. On a timescale of hours, it seems to be difficult to learn to generate neural activity patterns that are not consistent with the existing network structure. These findings offer a network-level explanation for the observation that we are more readily able to learn new skills when they are related to the skills that we already possess.

536 citations


Journal ArticleDOI
TL;DR: The finding of relatively little speed information in motor cortex inspired a speed-dampening Kalman filter (SDKF) that automatically slows the cursor upon detecting changes in decoded movement direction, which enhances speed control by using prevalent directional signals, rather than requiring speed to be directly decoded from neural activity.
Abstract: Motor cortex plays a substantial role in driving movement, yet the details underlying this control remain unresolved. We analyzed the extent to which movement-related information could be extracted from single-trial motor cortical activity recorded while monkeys performed center-out reaching. Using information theoretic techniques, we found that single units carry relatively little speed-related information compared with direction-related information. This result is not mitigated at the population level: simultaneously recorded population activity predicted speed with significantly lower accuracy relative to direction predictions. Furthermore, a unit-dropping analysis revealed that speed accuracy would likely remain lower than direction accuracy, even given larger populations. These results suggest that the instantaneous details of single-trial movement speed are difficult to extract using commonly assumed coding schemes. This apparent paucity of speed information takes particular importance in the context of brain-machine interfaces (BMIs), which rely on extracting kinematic information from motor cortex. Previous studies have highlighted subjects' difficulties in holding a BMI cursor stable at targets. These studies, along with our finding of relatively little speed information in motor cortex, inspired a speed-dampening Kalman filter (SDKF) that automatically slows the cursor upon detecting changes in decoded movement direction. Effectively, SDKF enhances speed control by using prevalent directional signals, rather than requiring speed to be directly decoded from neural activity. SDKF improved success rates by a factor of 1.7 relative to a standard Kalman filter in a closed-loop BMI task requiring stable stops at targets. BMI systems enabling stable stops will be more effective and user-friendly when translated into clinical applications.
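A minimal sketch of the speed-dampening idea described above (the damping rule and gain here are illustrative assumptions, not the paper's actual filter): after each velocity decode, the cursor's speed is scaled down by a factor that grows with the change in decoded direction, so that the large direction changes typical near a target slow the cursor automatically.

```python
import numpy as np

# Hypothetical sketch of speed dampening (not the paper's exact SDKF):
# reduce decoded speed according to the angular change in decoded
# direction, exploiting the strong directional signal in motor cortex
# rather than relying on a directly decoded speed.
def dampen(v_prev, v_new, gain=2.0):
    """Return v_new with its speed reduced according to the angular
    change from v_prev; gain controls how aggressively to dampen."""
    norm_prev, norm_new = np.linalg.norm(v_prev), np.linalg.norm(v_new)
    if norm_prev == 0 or norm_new == 0:
        return v_new
    cos_angle = np.clip(v_prev @ v_new / (norm_prev * norm_new), -1.0, 1.0)
    angle = np.arccos(cos_angle)          # radians in [0, pi]
    scale = np.exp(-gain * angle)         # 1 when direction is unchanged
    return v_new * scale

straight = dampen(np.array([1.0, 0.0]), np.array([1.0, 0.0]))   # no damping
reversal = dampen(np.array([1.0, 0.0]), np.array([-1.0, 0.0]))  # heavy damping
print(straight, reversal)
```

A steady decoded direction passes through at full speed, while a reversal, as when overshooting a target, is slowed almost to a stop.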

63 citations


Journal ArticleDOI
TL;DR: It is believed that the development of classifiers that require no daily retraining will accelerate the clinical translation of BCI systems and two novel self-recalibrating classifiers are produced.
Abstract: Objective. Intracortical brain–computer interface (BCI) decoders are typically retrained daily to maintain stable performance. Self-recalibrating decoders aim to remove the burden this may present in the clinic by training themselves autonomously during normal use, but have only been developed for continuous control. Here we address the problem for discrete decoding (classifiers). Approach. We recorded threshold crossings from 96-electrode arrays implanted in the motor cortex of two rhesus macaques performing center-out reaches in 7 directions over 41 and 36 separate days spanning 48 and 58 days in total for offline analysis. Main results. We show that for the purposes of developing a self-recalibrating classifier, tuning parameters can be considered as fixed within days and that parameters on the same electrode move up and down together between days. Further, drift is constrained across time, which is reflected in the performance of a standard classifier that, if not retrained daily, does not progressively worsen, though its overall performance is reduced by more than 10% compared to a daily retrained classifier. Two novel self-recalibrating classifiers produce an increase in classification accuracy over that achieved by the non-retrained classifier, nearly recovering the performance of the daily retrained classifier. Significance. We believe that the development of classifiers that require no daily retraining will accelerate the clinical translation of BCI systems. Future work should test these results in a closed-loop setting.
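A hypothetical sketch of the self-recalibration idea (the nearest-mean decoder, drift model, and update rate here are all illustrative assumptions, not the paper's classifiers): a direction classifier trained once can fold each new observation into the mean of its own predicted class, tracking slow between-day tuning drift without any relabeled training data.

```python
import numpy as np

# Hypothetical sketch of self-recalibration (not the paper's method):
# a nearest-mean decoder of reach direction whose true class means drift
# slowly across days. After each prediction the decoder nudges the
# predicted class's mean toward the observation, tracking the drift
# autonomously during normal use.
rng = np.random.default_rng(3)
n_classes, n_units, n_days, trials_per_day = 7, 96, 20, 50

true_means = rng.standard_normal((n_classes, n_units)) * 3
est_means = true_means.copy()           # "trained" once on day 0
correct, total = 0, 0
for day in range(n_days):
    true_means += 0.1 * rng.standard_normal((n_classes, n_units))  # drift
    for _ in range(trials_per_day):
        label = rng.integers(n_classes)
        obs = true_means[label] + rng.standard_normal(n_units)
        pred = int(np.argmin(np.linalg.norm(est_means - obs, axis=1)))
        correct += pred == label
        total += 1
        # self-recalibration: fold the observation into the predicted mean
        est_means[pred] = 0.9 * est_means[pred] + 0.1 * obs
accuracy = correct / total
print(f"accuracy with self-recalibration: {accuracy:.3f}")
```

The update uses the decoder's own predictions as labels, which works as long as drift is slow and constrained, the same property the paper reports for its recorded tuning parameters.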

52 citations


Proceedings Article
08 Dec 2014
TL;DR: This work proposes extensions to probabilistic canonical correlation analysis (pCCA) that capture the temporal structure of the latent variables and distinguish within-population dynamics from across-population interactions (termed Group Latent Auto-Regressive Analysis, gLARA). Applied to populations of neurons recorded simultaneously in visual areas V1 and V2, gLARA provides a better description of the recordings than pCCA.
Abstract: Developments in neural recording technology are rapidly enabling the recording of populations of neurons in multiple brain areas simultaneously, as well as the identification of the types of neurons being recorded (e.g., excitatory vs. inhibitory). There is a growing need for statistical methods to study the interaction among multiple, labeled populations of neurons. Rather than attempting to identify direct interactions between neurons (where the number of interactions grows with the number of neurons squared), we propose to extract a smaller number of latent variables from each population and study how these latent variables interact. Specifically, we propose extensions to probabilistic canonical correlation analysis (pCCA) to capture the temporal structure of the latent variables, as well as to distinguish within-population dynamics from across-population interactions (termed Group Latent Auto-Regressive Analysis, gLARA). We then applied these methods to populations of neurons recorded simultaneously in visual areas V1 and V2, and found that gLARA provides a better description of the recordings than pCCA. This work provides a foundation for studying how multiple populations of neurons interact and how this interaction supports brain function.
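A toy sketch of the non-dynamic core shared by these methods, classical CCA (the simulation and helper below are illustrative assumptions, not the paper's gLARA model): when two populations share a latent signal, CCA finds the across-population dimensions along which they covary.

```python
import numpy as np

# Hypothetical sketch: two simulated "populations" (stand-ins for V1 and
# V2) driven by one shared latent signal plus independent noise.
# Classical CCA -- the non-dynamic core of pCCA -- recovers the
# across-population dimension along which they covary.
rng = np.random.default_rng(1)
T = 2000
shared = rng.standard_normal(T)

pop1 = np.outer(shared, rng.standard_normal(10)) + rng.standard_normal((T, 10))
pop2 = np.outer(shared, rng.standard_normal(8)) + rng.standard_normal((T, 8))

def top_canonical_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y,
    computed from the singular values of the whitened cross-product."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

rho = top_canonical_corr(pop1, pop2)
print(f"top canonical correlation: {rho:.3f}")
```

The top canonical correlation is high because a single latent drives both populations; the extensions in the paper add temporal structure to these latents, which plain CCA ignores.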

38 citations


Proceedings Article
08 Dec 2014
TL;DR: This work develops a set of sufficient conditions for the recovery of an SPSD matrix from a set of its principal submatrices, along with an algorithm that exactly recovers the matrix when these conditions are met.
Abstract: We consider the problem of recovering a symmetric, positive semidefinite (SPSD) matrix from a subset of its entries, possibly corrupted by noise. In contrast to previous matrix recovery work, we drop the assumption of a random sampling of entries in favor of a deterministic sampling of principal submatrices of the matrix. We develop a set of sufficient conditions for the recovery of an SPSD matrix from a set of its principal submatrices, present necessity results based on this set of conditions and develop an algorithm that can exactly recover a matrix when these conditions are met. The proposed algorithm is naturally generalized to the problem of noisy matrix recovery, and we provide a worst-case bound on reconstruction error for this scenario. Finally, we demonstrate the algorithm's utility on noiseless and noisy simulated datasets.
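A toy instance of the recovery problem, restricted to the rank-1 noiseless case for brevity (the index sets and stitching rule below are illustrative; the paper's algorithm and conditions cover the general low-rank case): each observed principal submatrix factors into a restriction of the underlying vector, up to sign, and an overlapping index lets the signs be aligned and the full matrix stitched back together.

```python
import numpy as np

# Hypothetical toy example: a rank-1 SPSD matrix M = x x^T observed only
# through two overlapping principal submatrices. Each block yields the
# restriction of x to its indices up to a sign; the shared index aligns
# the signs so the full x (and hence M) can be reconstructed.
x = np.array([1.0, -2.0, 0.5, 3.0, -1.0])
M = np.outer(x, x)

idx1, idx2 = [0, 1, 2], [2, 3, 4]        # principal submatrices, overlap {2}
block1 = M[np.ix_(idx1, idx1)]
block2 = M[np.ix_(idx2, idx2)]

def rank1_factor(block):
    """Leading eigenvector scaled so that block = v v^T (up to sign)."""
    vals, vecs = np.linalg.eigh(block)
    return np.sqrt(vals[-1]) * vecs[:, -1]

v1, v2 = rank1_factor(block1), rank1_factor(block2)
if v1[-1] * v2[0] < 0:                   # align signs on the shared index 2
    v2 = -v2
x_hat = np.concatenate([v1, v2[1:]])
M_hat = np.outer(x_hat, x_hat)
print(np.max(np.abs(M_hat - M)))
```

The reconstruction is exact here because the overlap is large enough to resolve the sign ambiguity; the paper's sufficient conditions formalize this kind of requirement for general rank and sampling patterns.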

35 citations


Journal ArticleDOI
TL;DR: A study in which mice learn to modulate neural activity merges these technologies to investigate the neural basis of BCI learning with unprecedented spatial detail.
Abstract: Brain-computer interfaces (BCIs) and optical imaging have both undergone impressive technological growth in recent years. A study in which mice learn to modulate neural activity merges these technologies to investigate the neural basis of BCI learning with unprecedented spatial detail.

3 citations