
Showing papers on "Neural coding published in 2017"


Journal ArticleDOI
19 Jul 2017-Nature
TL;DR: It is shown that population codes can be essential to achieve long coding timescales and that coupling is a variable property of cortical populations that affects the timescale of information coding and the accuracy of behaviour.
Abstract: Calcium imaging data from mice performing a virtual reality auditory decision-making task are used to analyse the population codes in primary auditory and posterior parietal cortex that support choice behaviour. Information must be represented at many timescales in the cortex, from precise millisecond tracking of rapidly fluctuating inputs to seconds-long representation of behavioural choice variables. Using calcium imaging data from mice performing a virtual reality auditory decision-making task, Christopher Harvey and colleagues analyse the population codes in the primary auditory and posterior parietal cortex that support choice behaviour. Parietal cortex neurons have stronger activity correlations and carry information over longer timescales than neurons in the auditory cortex, revealing that correlation is a cortical property that enables information coding by populations over different timescales. The cortex represents information across widely varying timescales [1-5]. For instance, sensory cortex encodes stimuli that fluctuate over a few tens of milliseconds [6, 7], whereas in association cortex behavioural choices can require the maintenance of information over seconds [8, 9]. However, it remains poorly understood whether diverse timescales result mostly from features intrinsic to individual neurons or from neuronal population activity. This question remains unanswered because the timescales of coding in populations of neurons have not been studied extensively, and population codes have not been compared systematically across cortical regions. Here we show that population codes can be essential to achieve long coding timescales. Furthermore, we find that the properties of population codes differ between sensory and association cortices. We compared coding for sensory stimuli and behavioural choices in auditory cortex and posterior parietal cortex as mice performed a sound localization task. 
Auditory stimulus information was stronger in auditory cortex than in posterior parietal cortex, and both regions contained choice information. Although auditory cortex and posterior parietal cortex coded information by tiling in time neurons that were transiently informative for approximately 200 milliseconds, the areas had major differences in functional coupling between neurons, measured as activity correlations that could not be explained by task events. Coupling among posterior parietal cortex neurons was strong and extended over long time lags, whereas coupling among auditory cortex neurons was weak and short-lived. Stronger coupling in posterior parietal cortex led to a population code with long timescales and a representation of choice that remained consistent for approximately 1 second. In contrast, auditory cortex had a code with rapid fluctuations in stimulus and choice information over hundreds of milliseconds. Our results reveal that population codes differ across cortex and that coupling is a variable property of cortical populations that affects the timescale of information coding and the accuracy of behaviour.

294 citations


Journal ArticleDOI
TL;DR: Stable population-level WM representations in PFC are found, despite strong temporal neural dynamics, thereby providing insights into neural circuit mechanisms supporting WM.
Abstract: Working memory (WM) is a cognitive function for temporary maintenance and manipulation of information, which requires conversion of stimulus-driven signals into internal representations that are maintained across seconds-long mnemonic delays. Within primate prefrontal cortex (PFC), a critical node of the brain's WM network, neurons show stimulus-selective persistent activity during WM, but many of them exhibit strong temporal dynamics and heterogeneity, raising the questions of whether, and how, neuronal populations in PFC maintain stable mnemonic representations of stimuli during WM. Here we show that despite complex and heterogeneous temporal dynamics in single-neuron activity, PFC activity is endowed with a population-level coding of the mnemonic stimulus that is stable and robust throughout WM maintenance. We applied population-level analyses to hundreds of recorded single neurons from lateral PFC of monkeys performing two seminal tasks that demand parametric WM: oculomotor delayed response and vibrotactile delayed discrimination. We found that the high-dimensional state space of PFC population activity contains a low-dimensional subspace in which stimulus representations are stable across time during the cue and delay epochs, enabling robust and generalizable decoding compared with time-optimized subspaces. To explore potential mechanisms, we applied these same population-level analyses to theoretical neural circuit models of WM activity. Three previously proposed models failed to capture the key population-level features observed empirically. We propose network connectivity properties, implemented in a linear network model, which can underlie these features. This work uncovers stable population-level WM representations in PFC, despite strong temporal neural dynamics, thereby providing insights into neural circuit mechanisms supporting WM.
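A toy illustration of the paper's key idea, that a subspace avoiding the shared temporal dynamics carries a stable stimulus code, can be sketched with synthetic data (all sizes, the single-axis dynamics, and the nearest-template decoder are illustrative assumptions, not the authors' analysis):

```python
import numpy as np

rng = np.random.default_rng(3)
n_neurons, n_stim, n_time = 50, 4, 30

# Stable stimulus patterns plus strong stimulus-independent temporal dynamics.
patterns = rng.normal(size=(n_stim, n_neurons))
g = rng.normal(size=n_neurons)                      # shared dynamics axis
dyn = 3.0 * np.sin(np.linspace(0, 3 * np.pi, n_time))[:, None] * g

def activity(s, t):
    """Noisy population response to stimulus s at time t."""
    return patterns[s] + dyn[t] + 0.2 * rng.normal(size=n_neurons)

data = np.stack([[activity(s, t) for t in range(n_time)]
                 for s in range(n_stim)])           # (stim, time, neurons)

# Estimate the dynamics axis from the stimulus-averaged activity over time,
# then project it out: what remains is a subspace in which the stimulus
# representation stays stable across the whole delay.
shared = data.mean(axis=0)                          # (time, neurons)
shared = shared - shared.mean(axis=0)
_, _, vt = np.linalg.svd(shared, full_matrices=False)
P = np.eye(n_neurons) - np.outer(vt[0], vt[0])      # remove top dynamics PC

templates = data.mean(axis=1) @ P                   # time-averaged, projected
correct = sum(
    np.argmin(np.linalg.norm(templates - activity(s, t) @ P, axis=1)) == s
    for s in range(n_stim) for t in range(n_time))
print(f"decoding accuracy across time: {correct / (n_stim * n_time):.2f}")
```

Decoding in the projected subspace generalizes across every time point despite the large dynamic component, which is the population-level stability the paper reports.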

289 citations


Proceedings ArticleDOI
10 Mar 2017
TL;DR: This paper builds a simple convolutional neural network for image classification and analyzes how different learning-rate settings and different optimization algorithms for solving for the optimal parameters influence image classification.
Abstract: In recent years, deep learning has been used in image classification, object tracking, pose estimation, text detection and recognition, visual saliency detection, action recognition and scene labeling. Auto-encoders, sparse coding, Restricted Boltzmann Machines, Deep Belief Networks and convolutional neural networks are commonly used models in deep learning. Among the different types of models, convolutional neural networks have demonstrated high performance on image classification. In this paper we built a simple convolutional neural network for image classification. This simple convolutional neural network completed the image classification task. Our experiments are based on the benchmark datasets MNIST [1] and CIFAR-10. On the basis of this convolutional neural network, we also analyzed how different learning-rate settings and different optimization algorithms for solving for the optimal parameters influence image classification.
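A minimal sketch of the building block such a network stacks, a convolution followed by a ReLU nonlinearity, in plain numpy (the image, kernel, and sizes are made up for illustration, not the paper's architecture):

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D cross-correlation of a single-channel image with a kernel."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

def relu(x):
    """Rectified linear activation."""
    return np.maximum(x, 0.0)

# A toy 5x5 "image" and a 3x3 identity kernel (illustrative values).
img = np.arange(25, dtype=float).reshape(5, 5)
k = np.array([[0., 0., 0.], [0., 1., 0.], [0., 0., 0.]])
feat = relu(conv2d(img, k))
print(feat.shape)  # (3, 3)
```

In a full classifier these stages are stacked and followed by pooling and fully connected layers, with the learning rate and optimizer governing how the kernels are fit.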

236 citations


Journal Article
TL;DR: In this paper, a multi-layer model, ML-CSC, is proposed, in which signals are assumed to emerge from a cascade of Convolutional Sparse Coding (CSC) layers.
Abstract: Convolutional neural networks (CNN) have led to many state-of-the-art results spanning through various fields. However, a clear and profound theoretical understanding of the forward pass, the core algorithm of CNN, is still lacking. In parallel, within the wide field of sparse approximation, Convolutional Sparse Coding (CSC) has gained increasing attention in recent years. A theoretical study of this model was recently conducted, establishing it as a reliable and stable alternative to the commonly practiced patch-based processing. Herein, we propose a novel multi-layer model, ML-CSC, in which signals are assumed to emerge from a cascade of CSC layers. This is shown to be tightly connected to CNN, so much so that the forward pass of the CNN is in fact the thresholding pursuit serving the ML-CSC model. This connection brings a fresh view to CNN, as we are able to attribute to this architecture theoretical claims such as uniqueness of the representations throughout the network, and their stable estimation, all guaranteed under simple local sparsity conditions. Lastly, identifying the weaknesses in the above pursuit scheme, we propose an alternative to the forward pass, which is connected to deconvolutional and recurrent networks, and also has better theoretical guarantees.
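The claimed CNN/ML-CSC correspondence, that a forward pass is a cascade of thresholding steps, can be caricatured as follows (random dense dictionaries and thresholds are illustrative; the paper's actual pursuit uses structured convolutional dictionaries):

```python
import numpy as np

def soft_threshold(x, beta):
    """Soft thresholding: the proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - beta, 0.0)

def layered_pursuit(signal, dictionaries, thresholds):
    """Layered thresholding pursuit: gamma_i = S_beta(D_i^T gamma_{i-1}).

    With non-negative codes this step reduces to a ReLU with bias -beta,
    which is the CNN forward-pass correspondence the paper formalizes.
    """
    gamma = signal
    for D, beta in zip(dictionaries, thresholds):
        gamma = soft_threshold(D.T @ gamma, beta)
    return gamma

rng = np.random.default_rng(0)
x = rng.normal(size=8)                  # toy input signal
D1 = rng.normal(size=(8, 16))           # layer-1 dictionary
D2 = rng.normal(size=(16, 32))          # layer-2 dictionary
codes = layered_pursuit(x, [D1, D2], [0.5, 0.5])
print(codes.shape)  # (32,)
```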

233 citations


Journal ArticleDOI
TL;DR: In this article, the authors explore the use of ANNs in the context of computational neuroscience from various perspectives, such as rate-based models that are used in AI or more biologically plausible models that make use of spiking neurons.
Abstract: In artificial intelligence (AI), new advances make it possible for artificial neural networks (ANNs) to learn to solve complex problems in a reasonable amount of time (LeCun et al., 2015). To the computational neuroscientist, ANNs are theoretical vehicles that aid in the understanding of neural information processing (van Gerven). These networks can take the form of the rate-based models that are used in AI or more biologically plausible models that make use of spiking neurons (Brette, 2015). The objective of this special issue is to explore the use of ANNs in the context of computational neuroscience from various perspectives.

233 citations


Journal ArticleDOI
TL;DR: This paper proposes a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a discriminative dictionary, using a max-margin approach that trains the dictionary modulated by the CRF and, in turn, the CRF with sparse coding.
Abstract: Top-down visual saliency is an important module of visual attention. In this work, we propose a novel top-down saliency model that jointly learns a Conditional Random Field (CRF) and a visual dictionary. The proposed model incorporates a layered structure from top to bottom: CRF, sparse coding and image patches. With sparse coding as an intermediate layer, CRF is learned in a feature-adaptive manner; meanwhile with CRF as the output layer, the dictionary is learned under structured supervision. For efficient and effective joint learning, we develop a max-margin approach via a stochastic gradient descent algorithm. Experimental results on the Graz-02 and PASCAL VOC datasets show that our model performs favorably against state-of-the-art top-down saliency methods for target object localization. In addition, the dictionary update significantly improves the performance of our model. We demonstrate the merits of the proposed top-down saliency model by applying it to prioritizing object proposals for detection and predicting human fixations.

213 citations


Journal ArticleDOI
08 Mar 2017-Neuron
TL;DR: It is noted that the brain functions at multiple scales and that causal dependencies may be best inferred with perturbation tools that interface with the system at the appropriate scale, and a geometric framework is developed to facilitate the interpretation of causal experiments when brain perturbations do or do not respect the intrinsic patterns of brain activity.

211 citations


Journal ArticleDOI
08 Feb 2017-Neuron
TL;DR: This work proposes a new framework centered on redefining the neural code as the neural features that carry sensory information used by the animal to drive appropriate behavior; that is, the features that have an intersection between sensory and choice information.

204 citations


Journal ArticleDOI
TL;DR: The presentation draws on computational neuroscience and pharmacologic and genetic studies in animals and humans to illustrate principles of network regulation that give rise to features of neural dysfunction associated with schizophrenia.

153 citations


Journal ArticleDOI
TL;DR: Evidence is presented for a neural mechanism of feature binding in working memory, based on encoding of visual information by neurons that respond to conjunctions of features, together with clear evidence that nonspatial features are bound via space.
Abstract: Binding refers to the operation that groups different features together into objects. We propose a neural architecture for feature binding in visual working memory that employs populations of neurons with conjunction responses. We tested this model using cued recall tasks, in which subjects had to memorize object arrays composed of simple visual features (color, orientation, and location). After a brief delay, one feature of one item was given as a cue, and the observer had to report, on a continuous scale, one or two other features of the cued item. Binding failure in this task is associated with swap errors, in which observers report an item other than the one indicated by the cue. We observed that the probability of swapping two items strongly correlated with the items' similarity in the cue feature dimension, and found a strong correlation between swap errors occurring in spatial and nonspatial report. The neural model explains both swap errors and response variability as results of decoding noisy neural activity, and can account for the behavioral results in quantitative detail. We then used the model to compare alternative mechanisms for binding nonspatial features. We found the behavioral results fully consistent with a model in which nonspatial features are bound exclusively via their shared location, with no indication of direct binding between color and orientation. These results provide evidence for a special role of location in feature binding, and the model explains how this special role could be realized in the neural system.SIGNIFICANCE STATEMENT The problem of feature binding is of central importance in understanding the mechanisms of working memory. How do we remember not only that we saw a red and a round object, but that these features belong together to a single object rather than to different objects in our environment? 
Here we present evidence for a neural mechanism for feature binding in working memory, based on encoding of visual information by neurons that respond to the conjunction of features. We find clear evidence that nonspatial features are bound via space: we memorize directly where a color or an orientation appeared, but we memorize which color belonged with which orientation only indirectly by virtue of their shared location.

147 citations


Journal ArticleDOI
21 Jun 2017-Neuron
TL;DR: Investigation of rate and temporal coding of hippocampal CA1 neurons in rats performing a cue-combination task that requires the integration of sequentially provided sound and odor cues helps to update the conceptual framework for space encoding toward a more general model of episodic event representations in the hippocampus.

Journal ArticleDOI
19 Jul 2017-Neuron
TL;DR: It is suggested that heterogeneous, often transient sensory responses distributed across the fronto-parietal cortex may support working memory on behavioral timescales.

Journal ArticleDOI
TL;DR: Large-scale recordings in the striatum and orbitofrontal cortex of mice trained on a stimulus-reward association task involving a delay period, together with a machine-learning algorithm that quantifies how well populations of simultaneously recorded neurons encode elapsed time from stimulus onset, suggest that the striatum may refine the code for time by integrating information from multiple inputs.
Abstract: Telling time is fundamental to many forms of learning and behavior, including the anticipation of rewarding events. While the neural mechanisms underlying timing remain unknown, computational models have proposed that the brain represents time in the dynamics of neural networks. Consistent with this hypothesis, dynamically changing patterns of neural activity in a number of brain areas—including the striatum and cortex—have been shown to encode elapsed time. To date, however, no studies have explicitly quantified and contrasted how well different areas encode time, by recording large numbers of units simultaneously from more than one area. Here we performed large-scale extracellular recordings in the striatum and orbitofrontal cortex of mice that learned the temporal relationship between a stimulus and a reward, and reported their response with anticipatory licking. We used a machine-learning algorithm to quantify how well populations of neurons encoded elapsed time from stimulus onset. Both the striatal and cortical networks encoded time, but the striatal network outperformed the orbitofrontal cortex—a finding replicated both in simultaneously and non-simultaneously recorded cortical-striatal data sets. The striatal network was also more reliable in predicting when the animals would lick, up to around one second before the actual lick occurred. Our results are consistent with the hypothesis that temporal information is encoded in a widely distributed manner throughout multiple brain areas, but that the striatum may have a privileged role in timing because it has a more accurate “clock” as it integrates information across multiple cortical areas. Significance Statement: The neural representation of time is thought to be distributed across multiple functionally specialized brain structures, including the striatum and cortex. However, until now the neural code for time has not been quantitatively compared between these areas. 
We carried out large-scale recordings in the striatum and orbitofrontal cortex of mice trained on a stimulus-reward association task involving a delay period, and used a machine-learning algorithm to quantify how well populations of simultaneously recorded neurons encoded elapsed time from stimulus onset. We found that while both areas encoded time, the striatum consistently outperformed the orbitofrontal cortex. These results suggest that the striatum may refine the code for time by integrating information from multiple inputs.
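A rough sketch of the decoding approach on synthetic data (a plain least-squares readout and Gaussian temporal tuning curves stand in for the paper's machine-learning algorithm and real recordings; all sizes are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
n_neurons, n_trials, n_bins = 40, 30, 20
times = np.arange(n_bins)

# Synthetic population: each neuron has a Gaussian temporal tuning curve,
# so elapsed time is encoded in which neurons are currently active.
centers = rng.uniform(0, n_bins, size=n_neurons)
tuning = np.exp(-0.5 * ((times[None, :] - centers[:, None]) / 2.0) ** 2)

X, y = [], []
for _ in range(n_trials):
    noisy = tuning + 0.1 * rng.normal(size=tuning.shape)
    X.append(noisy.T)            # (n_bins, n_neurons) per trial
    y.append(times)
X = np.vstack(X)                 # (n_trials * n_bins, n_neurons)
y = np.concatenate(y).astype(float)

# Train a linear decoder of elapsed time on half the trials, test on the rest.
half = (n_trials // 2) * n_bins
w, *_ = np.linalg.lstsq(X[:half], y[:half], rcond=None)
pred = X[half:] @ w
err = np.mean(np.abs(pred - y[half:]))
print(f"mean absolute decoding error: {err:.2f} bins")
```

Running the same readout on populations from two different areas and comparing the held-out errors is the kind of quantitative contrast the paper draws between striatum and orbitofrontal cortex.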

Proceedings ArticleDOI
21 Jul 2017
TL;DR: In this paper, weakly-supervised joint convolutional sparse coding is proposed to simultaneously solve the problems of super-resolution (SR) and cross-modality image synthesis, which requires only a few registered multimodal image pairs as the training set.
Abstract: Magnetic Resonance Imaging (MRI) offers high-resolution in vivo imaging and rich functional and anatomical multimodality tissue contrast. In practice, however, there are challenges associated with considerations of scanning costs, patient comfort, and scanning time that constrain how much data can be acquired in clinical or research studies. In this paper, we explore the possibility of generating high-resolution and multimodal images from low-resolution single-modality imagery. We propose the weakly-supervised joint convolutional sparse coding to simultaneously solve the problems of super-resolution (SR) and cross-modality image synthesis. The learning process requires only a few registered multimodal image pairs as the training set. Additionally, the quality of the joint dictionary learning can be improved using a larger set of unpaired images. To combine unpaired data from different image resolutions/modalities, a hetero-domain image alignment term is proposed. Local image neighborhoods are naturally preserved by operating on the whole image domain (as opposed to image patches) and using joint convolutional sparse coding. The paired images are enhanced in the joint learning process with unpaired data and an additional maximum mean discrepancy term, which minimizes the dissimilarity between their feature distributions. Experiments show that the proposed method outperforms state-of-the-art techniques on both SR reconstruction and simultaneous SR and cross-modality synthesis.

Journal ArticleDOI
TL;DR: A novel goal function, termed 'coding with synergy', is designed and analyzed; it builds on combining external input and prior knowledge in a synergistic manner, and it is suggested that this novel goal function may be highly useful in neural information processing.

Proceedings ArticleDOI
01 Oct 2017
TL;DR: This work shows how one can efficiently solve the convolutional sparse pursuit problem and train the filters involved, while operating locally on image patches, and provides an intuitive algorithm that can leverage standard techniques from the sparse representations field.
Abstract: Convolutional sparse coding is an increasingly popular model in the signal and image processing communities, tackling some of the limitations of traditional patch-based sparse representations. Although several works have addressed the dictionary learning problem under this model, these relied on an ADMM formulation in the Fourier domain, losing the sense of locality and the relation to the traditional patch-based sparse pursuit. A recent work suggested a novel theoretical analysis of this global model, providing guarantees that rely on a localized sparsity measure. Herein, we extend this local-global relation by showing how one can efficiently solve the convolutional sparse pursuit problem and train the filters involved, while operating locally on image patches. Our approach provides an intuitive algorithm that can leverage standard techniques from the sparse representations field. The proposed method is fast to train, simple to implement, and flexible enough that it can be easily deployed in a variety of applications. We demonstrate the proposed training scheme for image inpainting and image separation, achieving state-of-the-art results.

Journal ArticleDOI
30 Aug 2017-Neuron
TL;DR: Recordings in awake animals provide experimental support for the "Stabilized Supralinear Network," a model that explains diverse cortical phenomena, and suggest that a decreasing E/I ratio with increasing cortical drive could contribute to many different cortical computations.

Journal ArticleDOI
TL;DR: This paper analyzes an important component of HTM, the HTM spatial pooler (SP), and describes a number of key properties, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death.
Abstract: Hierarchical temporal memory (HTM) provides a theoretical framework that models several key computational principles of the neocortex. In this paper, we analyze an important component of HTM, the HTM spatial pooler (SP). The SP models how neurons learn feedforward connections and form efficient representations of the input. It converts arbitrary binary input patterns into sparse distributed representations (SDRs) using a combination of competitive Hebbian learning rules and homeostatic excitability control. We describe a number of key properties of the SP, including fast adaptation to changing input statistics, improved noise robustness through learning, efficient use of cells, and robustness to cell death. In order to quantify these properties we develop a set of metrics that can be directly computed from the SP outputs. We show how the properties are met using these metrics and targeted artificial simulations. We then demonstrate the value of the SP in a complete end-to-end real-world HTM system. We discuss the relationship with neuroscience and previous studies of sparse coding. The HTM spatial pooler represents a neurally inspired algorithm for learning sparse representations from noisy data streams in an online fashion.
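The SP's core conversion of a binary input into a sparse distributed representation can be sketched as a k-winners-take-all competition (a simplification that omits the Hebbian learning and boosting rules; sizes and connectivity are made up):

```python
import numpy as np

def spatial_pooler_step(input_bits, connections, k):
    """Map a binary input to a sparse binary output (SDR).

    Each output cell computes its overlap with the input through binary
    connections; the k cells with the highest overlap become active.
    """
    overlap = connections @ input_bits        # per-cell overlap score
    winners = np.argsort(overlap)[-k:]        # k-winners-take-all
    sdr = np.zeros(connections.shape[0], dtype=int)
    sdr[winners] = 1
    return sdr

rng = np.random.default_rng(42)
n_inputs, n_cells, k = 100, 200, 10
# Random sparse binary connectivity from inputs to pooler cells.
connections = (rng.random((n_cells, n_inputs)) < 0.2).astype(int)
x = (rng.random(n_inputs) < 0.3).astype(int)  # a random binary input
sdr = spatial_pooler_step(x, connections, k)
print(sdr.sum(), "of", n_cells, "cells active")  # 10 of 200 cells active
```

The fixed activity level (k of n cells) is what gives SDRs their noise robustness; the learning rules the paper analyzes adapt the connections so that frequent input patterns claim stable sets of winners.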

Journal ArticleDOI
TL;DR: This paper uses a discriminative sparse coding method in the coding layer along with spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations.
Abstract: In this paper, we propose a biologically inspired appearance model for robust visual tracking. Motivated in part by the success of the hierarchical organization of the primary visual cortex (area V1), we establish an architecture consisting of five layers: whitening, rectification, normalization, coding, and pooling. The first three layers stem from the models developed for object recognition. In this paper, our attention focuses on the coding and pooling layers. In particular, we use a discriminative sparse coding method in the coding layer along with spatial pyramid representation in the pooling layer, which makes it easier to distinguish the target to be tracked from its background in the presence of appearance variations. An extensive experimental study shows that the proposed method has higher tracking accuracy than several state-of-the-art trackers.

Journal ArticleDOI
TL;DR: This work presents a unique finding, demonstrating that a neuromodulator, in this case acetylcholine, can produce specific changes in the correlation structure of the cortical network that ultimately increase the encoding capacity of the network.
Abstract: A primary function of the brain is to form representations of the sensory world. Its capacity to do so depends on the relationship between signal correlations, associated with neuronal receptive fields, and noise correlations, associated with neuronal response variability. It was recently shown that the behavioral relevance of sensory stimuli can modify the relationship between signal and noise correlations, presumably increasing the encoding capacity of the brain. In this work, we use data from the visual cortex of the awake mouse watching naturalistic stimuli and show that a similar modification is observed under heightened cholinergic modulation. Increasing cholinergic levels in the cortex through optogenetic stimulation of basal forebrain cholinergic neurons decreases the dependency that is commonly observed between signal and noise correlations. Simulations of correlated neural networks with realistic firing statistics indicate that this change in the correlation structure increases the encoding capacity of the network.

01 Jan 2017
TL;DR: The main contribution of the paper is an effective real-time system for one-shot action modeling and recognition; the paper highlights the effectiveness of sparse coding techniques to represent 3D actions.
Abstract: Sparsity has been shown to be one of the most important properties for visual recognition purposes. In this paper we show that sparse representation plays a fundamental role in achieving one-shot learning and real-time recognition of actions. We start off from RGBD images, combine motion and appearance cues and extract state-of-the-art features in a computationally efficient way. The proposed method relies on descriptors based on 3D Histograms of Scene Flow (3DHOFs) and Global Histograms of Oriented Gradient (GHOGs); adaptive sparse coding is applied to capture high-level patterns from data. We then propose simultaneous online video segmentation and recognition of actions using linear SVMs. The main contribution of the paper is an effective real-time system for one-shot action modeling and recognition; the paper highlights the effectiveness of sparse coding techniques to represent 3D actions. We obtain very good results on three different data sets: a benchmark data set for one-shot action learning (the ChaLearn Gesture Data Set), an in-house data set acquired by a Kinect sensor including complex actions and gestures differing by small details, and a data set created for human-robot interaction purposes. Finally we demonstrate that our system is effective also in a human-robot interaction setting and propose a memory game, "All Gestures You Can", to be played against a humanoid robot.

Journal ArticleDOI
TL;DR: The experimental results demonstrate that the proposed algorithm can effectively utilize multi-scale and contextual spatial information of medical images, reduce the semantic gap to a large degree, and improve medical image classification performance.

Journal ArticleDOI
TL;DR: The results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification and can be combined with additional feature extraction methods to improve classification accuracy.
Abstract: Objective: This paper builds upon work submitted as part of the 2016 PhysioNet/CinC Challenge, which used sparse coding as a feature extraction tool on audio PCG data for heart sound classification. Approach: In sparse coding, preprocessed data is decomposed into a dictionary matrix and a sparse coefficient matrix. The dictionary matrix represents statistically important features of the audio segments. The sparse coefficient matrix is a mapping that represents which features are used by each segment. Working in the sparse domain, we train support vector machines (SVMs) for each audio segment (S1, systole, S2, diastole) and the full cardiac cycle. We train a sixth SVM to combine the results from the preliminary SVMs into a single binary label for the entire PCG recording. In addition to classifying heart sounds using sparse coding, this paper presents two novel modifications. The first uses a matrix norm in the dictionary update step of sparse coding to encourage the dictionary to learn discriminating features from the abnormal heart recordings. The second combines the sparse coding features with time-domain features in the final SVM stage. Main results: The original algorithm submitted to the challenge achieved a cross-validated mean accuracy (MAcc) score of 0.8652 (Se = 0.8669 and Sp = 0.8634). After incorporating the modifications new to this paper, we report an improved cross-validated MAcc of 0.8926 (Se = 0.9007 and Sp = 0.8845). Significance: Our results show that sparse coding is an effective way to define spectral features of the cardiac cycle and its sub-cycles for the purpose of classification. In addition, we demonstrate that sparse coding can be combined with additional feature extraction methods to improve classification accuracy.
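The decomposition the pipeline relies on, data approximated as a dictionary times a sparse coefficient matrix, can be illustrated with a few ISTA iterations against a random dictionary (the sizes, the dictionary, and the sparse code are all synthetic stand-ins, not the paper's learned PCG dictionary):

```python
import numpy as np

def ista(x, D, lam=0.1, n_iter=200):
    """Iterative soft thresholding: minimize 0.5*||x - D a||^2 + lam*||a||_1."""
    L = np.linalg.norm(D, 2) ** 2        # Lipschitz constant of the gradient
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)         # gradient of the quadratic term
        a = a - grad / L
        a = np.sign(a) * np.maximum(np.abs(a) - lam / L, 0.0)  # shrinkage
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 128))
D /= np.linalg.norm(D, axis=0)           # unit-norm dictionary atoms
a_true = np.zeros(128)
a_true[[3, 40, 77]] = [1.0, -2.0, 1.5]   # a 3-sparse ground-truth code
x = D @ a_true                           # the observed "segment"
a_hat = ista(x, D)
print("nonzeros in recovered code:", np.count_nonzero(a_hat))
```

The recovered coefficient vector is the kind of feature that, computed per cardiac segment against a learned dictionary, is then fed to the per-segment SVMs.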

Journal ArticleDOI
TL;DR: These methods could enable deciphering the neural code and also be used to understand the pathophysiology of and design novel therapies for neurological and mental diseases.
Abstract: The neural code that relates the firing of neurons to the generation of behavior and mental states must be implemented by spatiotemporal patterns of activity across neuronal populations. These patterns engage selective groups of neurons, called neuronal ensembles, which are emergent building blocks of neural circuits. We review optical and computational methods, based on two-photon calcium imaging and two-photon optogenetics, to detect, characterize, and manipulate neuronal ensembles in three dimensions. We review data using these methods in the mammalian cortex that demonstrate the existence of neuronal ensembles in the spontaneous and evoked cortical activity in vitro and in vivo. Moreover, two-photon optogenetics enable the possibility of artificially imprinting neuronal ensembles into awake, behaving animals and of later recalling those ensembles selectively by stimulating individual cells. These methods could enable deciphering the neural code and also be used to understand the pathophysiology of, and design novel therapies for, neurological and mental diseases.

Journal ArticleDOI
TL;DR: The toolbox includes newly developed algorithms and interactive tools for image pre-processing and segmentation, estimation of significant single-neuron single-trial signals, mapping event-related neuronal responses, detection of activity-correlated neuronal clusters, exploration of population dynamics, and analysis of clusters' features against surrogate control datasets.
Abstract: The development of new imaging and optogenetics techniques to study the dynamics of large neuronal circuits is generating datasets of unprecedented volume and complexity, demanding the development of appropriate analysis tools. We present a comprehensive computational workflow for the analysis of neuronal population calcium dynamics. The toolbox includes newly developed algorithms and interactive tools for image pre-processing and segmentation, estimation of significant single-neuron single-trial signals, mapping event-related neuronal responses, detection of activity-correlated neuronal clusters, exploration of population dynamics, and analysis of clusters' features against surrogate control datasets. The modules are integrated in a modular and versatile processing pipeline, adaptable to different needs. The clustering module is capable of detecting flexible, dynamically activated neuronal assemblies, consistent with the distributed population coding of the brain. We demonstrate the suitability of the toolbox for a variety of calcium imaging datasets. The toolbox open-source code, a step-by-step tutorial and a case study dataset are available at https://github.com/zebrain-lab/Toolbox-Romano-et-al.

Journal ArticleDOI
TL;DR: Curto et al. proved that a code has no local obstructions if and only if it contains certain "mandatory" intersections of maximal codewords; the authors give a new criterion for an intersection of maximal codewords to be non-mandatory, and prove that it classifies all such non-mandatory codewords for codes on up to 5 neurons.

Journal ArticleDOI
Liangli Zhen, Dezhong Peng, Zhang Yi, Yong Xiang, Peng Chen
TL;DR: This paper proposes an effective approach to discover 1-D subspaces from the set of all time-frequency representation vectors of observed mixture signals; these subspaces are associated with TF points where only a single source possesses dominant energy.
Abstract: In an underdetermined mixture system with $n$ unknown sources, it is a challenging task to separate these sources from their $m$ observed mixture signals, where $m < n$. By exploiting the technique of sparse coding, we propose an effective approach to discover some 1-D subspaces from the set consisting of all the time-frequency (TF) representation vectors of observed mixture signals. We show that these 1-D subspaces are associated with TF points where only a single source possesses dominant energy. By grouping the vectors in these subspaces via a hierarchical clustering algorithm, we obtain an estimate of the mixing matrix. Finally, the source signals can be recovered by solving a series of least squares problems. Since the sparse coding strategy considers the linear representation relations among all the TF representation vectors of the mixed signals, the proposed algorithm provides an accurate estimate of the mixing matrix and is robust to noise compared with existing underdetermined blind source separation approaches. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed method.
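The mixing-matrix estimation step can be sketched in a few lines: keep high-energy observation vectors (which, by sparsity, tend to be single-source dominant), fold out the sign ambiguity, cluster their directions hierarchically, and take cluster centroids as estimated mixing columns. This is a simplified time-domain toy, not the paper's actual TF pipeline; the mixing matrix, sparsity level, and energy threshold are all illustrative assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)

# Toy underdetermined setup: n = 3 sparse sources, m = 2 mixtures (m < n).
n_src, n_mix, T = 3, 2, 5000
S = rng.standard_normal((n_src, T)) * (rng.random((n_src, T)) < 0.05)  # sparse
A = np.array([[1.0, 0.2, 1.0],
              [0.2, 1.0, 1.0]])
A /= np.linalg.norm(A, axis=0)          # unit-norm mixing columns
X = A @ S                               # observed mixtures

# Keep samples where (by sparsity) a single source likely dominates,
# then cluster their directions; centroids estimate A's columns.
energy = np.linalg.norm(X, axis=0)
pts = X[:, energy > np.percentile(energy, 90)]
pts = pts * np.sign(pts[0])             # fold the sign ambiguity
pts = pts / np.linalg.norm(pts, axis=0)
labels = fcluster(linkage(pts.T, method='average'), n_src, criterion='maxclust')
A_hat = np.stack([pts[:, labels == k].mean(axis=1)
                  for k in range(1, n_src + 1)], axis=1)
A_hat /= np.linalg.norm(A_hat, axis=0)

# Compare estimated and true columns (up to permutation and sign);
# source recovery via per-point least squares is omitted for brevity.
match = np.abs(A_hat.T @ A).max(axis=0)
print(np.round(match, 2))  # entries near 1.0 indicate recovered columns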

Journal ArticleDOI
TL;DR: It is shown that visual cortical neurons have improved sensory encoding accuracy as well as improved perceptual performance during periods of local population desynchrony, demonstrating that the structure of variability in local cortical populations is not noise but rather controls how sensory information is optimally integrated with ongoing processes to guide network coding and behavior.
Abstract: Cortical activity changes continuously during the course of the day. At a global scale, population activity varies between the ‘synchronized’ state during sleep and ‘desynchronized’ state during waking. However, whether local fluctuations in population synchrony during wakefulness modulate the accuracy of sensory encoding and behavioral performance is poorly understood. Here, we show that populations of cells in monkey visual cortex exhibit rapid fluctuations in synchrony ranging from desynchronized responses, indicative of high alertness, to highly synchronized responses. These fluctuations are local and control the trial variability in population coding accuracy and behavioral performance in a discrimination task. When local population activity is desynchronized, the correlated variability between neurons is reduced, and network and behavioral performance are enhanced. These findings demonstrate that the structure of variability in local cortical populations is not noise but rather controls how sensory information is optimally integrated with ongoing processes to guide network coding and behavior. Changes in synchrony of cortical populations are observed across the sleep-wake cycle; however, the effect of fluctuations in synchrony during wakefulness is not understood. Here the authors show that visual cortical neurons have improved sensory encoding accuracy as well as improved perceptual performance during periods of local population desynchrony.
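The link between reduced correlated variability and better population coding can be illustrated with a two-neuron toy model: shared ("synchronized") noise along the signal axis degrades a linear decoder more than independent noise does. This is my own simplified illustration of the general principle, not the paper's analysis; the correlation values and decoder are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two stimuli (A = 0, B = 1) drive a pair of neurons with equal gains.
n_trials = 20000
signal = np.array([1.0, 1.0])  # both neurons respond more to stimulus B

def decoding_accuracy(rho):
    """Accuracy of a linear readout when trial noise has correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    labels = rng.integers(2, size=n_trials)
    noise = rng.multivariate_normal([0.0, 0.0], cov, size=n_trials)
    resp = labels[:, None] * signal + noise
    # Project responses onto the signal axis and threshold at the midpoint.
    proj = resp @ signal
    return ((proj > signal @ signal / 2) == labels).mean()

acc_sync = decoding_accuracy(rho=0.6)    # correlated ("synchronized") noise
acc_desync = decoding_accuracy(rho=0.0)  # desynchronized noise
print(round(acc_sync, 3), round(acc_desync, 3))
```

With equal gains, positive noise correlation inflates variance exactly along the signal direction, so desynchronization (rho = 0) yields higher accuracy.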

Journal ArticleDOI
TL;DR: This work provides a complete characterization of local obstructions to convexity and defines max intersection-complete codes, a family guaranteed to have noLocal obstructions, a significant advance in understanding the intrinsic combinatorial properties of convex codes.
Abstract: Neural codes allow the brain to represent, process, and store information about the world. Combinatorial codes, comprised of binary patterns of neural activity, encode information via the collective behavior of populations of neurons. A code is called convex if its codewords correspond to regions defined by an arrangement of convex open sets in Euclidean space. Convex codes have been observed experimentally in many brain areas, including sensory cortices and the hippocampus, where neurons exhibit convex receptive fields. What makes a neural code convex? That is, how can we tell from the intrinsic structure of a code if there exists a corresponding arrangement of convex open sets? In this work, we provide a complete characterization of local obstructions to convexity. This motivates us to define max intersection-complete codes, a family guaranteed to have no local obstructions. We then show how our characterization enables one to use free resolutions of Stanley--Reisner ideals in order to detect violations...
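The "max intersection-complete" condition is easy to check directly for small codes: a code qualifies if every intersection of its maximal codewords is itself a codeword. The helper below is my own illustrative implementation of that definition, not code from the paper.

```python
from itertools import combinations

def is_max_intersection_complete(code):
    """True if every intersection of maximal codewords is itself a codeword."""
    code = {frozenset(c) for c in code}
    maximal = [c for c in code if not any(c < d for d in code)]
    for k in range(2, len(maximal) + 1):
        for combo in combinations(maximal, k):
            if frozenset.intersection(*combo) not in code:
                return False
    return True

# Codewords as sets of active neurons; {1} is the intersection of the maximals.
good = [{1, 2}, {1, 3}, {1}, set()]
bad = [{1, 2}, {1, 3}, set()]          # missing the intersection {1}
print(is_max_intersection_complete(good), is_max_intersection_complete(bad))  # True False
```

Per the abstract, codes passing this check are guaranteed to have no local obstructions to convexity, which makes it a convenient sufficient condition to test before attempting a convex realization.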

Journal ArticleDOI
08 Mar 2017-Neuron
TL;DR: It is shown that different OB output layers display unique context-dependent long-term ensemble plasticity, allowing parallel transfer of non-redundant sensory information to downstream centers.