Journal ArticleDOI

Functional parcellation of mouse visual cortex using statistical techniques reveals response-dependent clustering of cortical processing areas.

04 Feb 2021-PLOS Computational Biology (Public Library of Science (PLoS))-Vol. 17, Iss: 2
TL;DR: In this paper, the authors ask whether six core visual areas have responses that are functionally distinct from each other for a given visual stimulus set, by applying machine learning techniques to distinguish the areas based on their activity patterns.
Abstract: The visual cortex of the mouse brain can be divided into ten or more areas that each contain complete or partial retinotopic maps of the contralateral visual field. It is generally assumed that these areas represent discrete processing regions. In contrast to the conventional input-output characterizations of neuronal responses to standard visual stimuli, here we asked whether six of the core visual areas have responses that are functionally distinct from each other for a given visual stimulus set, by applying machine learning techniques to distinguish the areas based on their activity patterns. Visual areas defined by retinotopic mapping were examined using supervised classifiers applied to responses elicited by a range of stimuli. Using two distinct datasets obtained using wide-field and two-photon imaging, we show that the area labels predicted by the classifiers were highly consistent with the labels obtained using retinotopy. Furthermore, the classifiers were able to model the boundaries of visual areas using resting state cortical responses obtained without any overt stimulus, in both datasets. With the wide-field dataset, clustering neuronal responses using a constrained semi-supervised classifier showed graceful degradation of accuracy. The results suggest that responses from visual cortical areas can be classified effectively using data-driven models. These responses likely reflect unique circuits within each area that give rise to activity with stronger intra-areal than inter-areal correlations, and their responses to controlled visual stimuli across trials drive higher areal classification accuracy than resting state responses.
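
As a rough illustration of the analysis described above, the sketch below (not the authors' code; the response matrix, area labels, and classifier settings are placeholders) trains a supervised classifier to predict retinotopically defined area labels from activity patterns and reports cross-validated accuracy.

```python
# Hedged sketch of area classification from activity patterns; all data are
# simulated placeholders, and the paper's actual features and classifier
# configuration may differ.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_samples, n_features, n_areas = 600, 50, 6             # six core visual areas
responses = rng.normal(size=(n_samples, n_features))    # stand-in activity patterns
area_labels = rng.integers(0, n_areas, size=n_samples)  # stand-in retinotopic labels

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, responses, area_labels, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```
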
Citations
Journal ArticleDOI
TL;DR: This review discusses the field's evolving understanding of how cortical and subcortical regions in placental mammals interact cooperatively, and proposes that certain regions prioritize specific types of information over others, but the subnetworks they form are the basis of true specialization.
Abstract: Flexibly selecting appropriate actions in response to complex, ever-changing environments requires both cortical and subcortical regions, which are typically described as participating in a strict hierarchy. In this traditional view, highly specialized subcortical circuits allow for efficient responses to salient stimuli, at the cost of adaptability and context-specificity, which are attributed to the neocortex. Their interactions are often described as the cortex providing top-down command signals for subcortical structures to implement; however, as available technologies develop, studies increasingly demonstrate that behavior is represented by brain-wide activity and that even subcortical structures contain early signals of choice, suggesting that behavioral functions emerge as a result of different regions interacting as truly collaborative networks. In this review, we discuss the field's evolving understanding of how cortical and subcortical regions in placental mammals interact cooperatively: not only via top-down cortical-subcortical inputs, but through bottom-up interactions, especially via the thalamus. We describe our current understanding of the circuitry of both the cortex and two exemplar subcortical structures, the superior colliculus and striatum, to identify which information is prioritized by which regions. We then describe the functional circuits these regions form with one another and with the thalamus to create parallel loops and complex networks for brain-wide information flow. Finally, we challenge the classic view that functional modules are contained within specific brain regions; instead, we propose that certain regions prioritize specific types of information over others, but that the subnetworks they form, defined by their anatomical connections and functional dynamics, are the basis of true specialization.

6 citations

Posted ContentDOI
17 Sep 2022-bioRxiv
Abstract: How communication between neurons gives rise to natural vision remains a matter of intense investigation. The mid-level visual areas along the ventral stream, as studies in primates have shown, are selective to a common class of natural images—textures—but a circuit-level understanding of this selectivity and its link to perception remain unclear. We addressed these questions in mice, first showing that they can perceptually discriminate between texture types and statistically simpler spectrally matched stimuli. Then, at the neural level, we found that the secondary visual area (LM), more than the primary one (V1), was selective for the higher-order statistics of textures, both at the mesoscopic and single-cell levels. At the circuit level, textures were encoded in neural activity subspaces whose relative distances correlated with the statistical complexity of the images and with the mice’s ability to discriminate between them. These dependencies were more significant in LM, in which the texture-related subspaces were smaller and closer to each other, enabling better stimulus decoding in this area. Together, our results demonstrate texture vision in mice, finding a linking framework between stimulus statistics, neural representations, and perceptual sensitivity—a distinct hallmark of efficient coding computations.

1 citation
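
The subspace analysis sketched in the abstract above could be illustrated as follows; this is an assumed stand-in, not the study's pipeline, using simulated response matrices, an arbitrary subspace dimensionality, and principal angles as one possible distance between stimulus-evoked subspaces.

```python
# Hedged illustration: compare stimulus-evoked activity subspaces via principal
# angles. Response matrices and dimensionality are placeholders.
import numpy as np
from scipy.linalg import subspace_angles

rng = np.random.default_rng(1)
n_neurons, n_trials, n_dims = 200, 80, 5

def response_subspace(responses, n_dims):
    """Orthonormal basis for the top principal components of a
    (trials x neurons) response matrix."""
    centered = responses - responses.mean(axis=0, keepdims=True)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[:n_dims].T  # neurons x n_dims

texture_resp = rng.normal(size=(n_trials, n_neurons))  # stand-in texture responses
matched_resp = rng.normal(size=(n_trials, n_neurons))  # stand-in spectrally matched responses

angles = subspace_angles(response_subspace(texture_resp, n_dims),
                         response_subspace(matched_resp, n_dims))
print(f"largest principal angle (rad): {angles.max():.2f}")
```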

References
Journal ArticleDOI
TL;DR: Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.
Abstract: LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users to easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail.

40,826 citations
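
Because scikit-learn's SVC is built on LIBSVM, a minimal sketch of the workflow the article covers (multiclass classification, probability estimates, and parameter selection) can be written as follows; the dataset and parameter grid are illustrative choices, not taken from the paper.

```python
# Minimal LIBSVM-style workflow via scikit-learn's SVC: multiclass
# classification, probability estimates, and (C, gamma) selection by
# cross-validated grid search.
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

grid = GridSearchCV(
    SVC(kernel="rbf", probability=True),  # enables Platt-style probability estimates
    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.1, 1.0]},
    cv=5,
)
grid.fit(X_train, y_train)
print("best parameters:", grid.best_params_)
print("test accuracy:", grid.score(X_test, y_test))
print("class probabilities (first sample):", grid.predict_proba(X_test[:1]))
```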

Journal Article
TL;DR: A new technique called t-SNE that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map; it is a variation of Stochastic Neighbor Embedding that is much easier to optimize and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map.
Abstract: We present a new technique called “t-SNE” that visualizes high-dimensional data by giving each datapoint a location in a two or three-dimensional map. The technique is a variation of Stochastic Neighbor Embedding (Hinton and Roweis, 2002) that is much easier to optimize, and produces significantly better visualizations by reducing the tendency to crowd points together in the center of the map. t-SNE is better than existing techniques at creating a single map that reveals structure at many different scales. This is particularly important for high-dimensional data that lie on several different, but related, low-dimensional manifolds, such as images of objects from multiple classes seen from multiple viewpoints. For visualizing the structure of very large datasets, we show how t-SNE can use random walks on neighborhood graphs to allow the implicit structure of all of the data to influence the way in which a subset of the data is displayed. We illustrate the performance of t-SNE on a wide variety of datasets and compare it with many other non-parametric visualization techniques, including Sammon mapping, Isomap, and Locally Linear Embedding. The visualizations produced by t-SNE are significantly better than those produced by the other techniques on almost all of the datasets.

30,124 citations
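
A brief usage sketch of the technique, using the scikit-learn implementation of t-SNE rather than the authors' original code; the dataset and perplexity are arbitrary choices.

```python
# Embed 64-dimensional digit images into a 2-D map with t-SNE.
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)            # 1797 images, 64 features each
embedding = TSNE(n_components=2, perplexity=30, init="pca",
                 random_state=0).fit_transform(X)
print(embedding.shape)                         # (1797, 2): one 2-D point per image
```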

Book
Christopher M. Bishop1
17 Aug 2006
TL;DR: Probability Distributions, Linear Models for Regression, Linear Models for Classification, Neural Networks, Graphical Models, Mixture Models and EM, Sampling Methods, Continuous Latent Variables, and Sequential Data are studied.
Abstract: Probability Distributions.- Linear Models for Regression.- Linear Models for Classification.- Neural Networks.- Kernel Methods.- Sparse Kernel Machines.- Graphical Models.- Mixture Models and EM.- Approximate Inference.- Sampling Methods.- Continuous Latent Variables.- Sequential Data.- Combining Models.

22,840 citations

Journal ArticleDOI
TL;DR: There are several arguments which support the observed high accuracy of SVMs; these are reviewed, and numerous examples and proofs of most of the key theorems are given.
Abstract: The tutorial starts with an overview of the concepts of VC dimension and structural risk minimization. We then describe linear Support Vector Machines (SVMs) for separable and non-separable data, working through a non-trivial example in detail. We describe a mechanical analogy, and discuss when SVM solutions are unique and when they are global. We describe how support vector training can be practically implemented, and discuss in detail the kernel mapping technique which is used to construct SVM solutions which are nonlinear in the data. We show how Support Vector machines can have very large (even infinite) VC dimension by computing the VC dimension for homogeneous polynomial and Gaussian radial basis function kernels. While very high VC dimension would normally bode ill for generalization performance, and while at present there exists no theory which shows that good generalization performance is guaranteed for SVMs, there are several arguments which support the observed high accuracy of SVMs, which we review. Results of some experiments which were inspired by these arguments are also presented. We give numerous examples and proofs of most of the key theorems. There is new material, and I hope that the reader will find that even old material is cast in a fresh light.

15,696 citations
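
To make the kernel-mapping idea concrete, the sketch below trains an SVM on a precomputed Gaussian RBF Gram matrix, showing that the nonlinear solution needs only inner products in the mapped feature space; the toy dataset and gamma value are assumptions for illustration.

```python
# SVM with an explicitly precomputed Gaussian RBF kernel matrix.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.svm import SVC

def rbf_gram(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix K[i, j] = exp(-gamma * ||A_i - B_j||^2)."""
    sq_dists = (np.sum(A**2, axis=1)[:, None]
                + np.sum(B**2, axis=1)[None, :]
                - 2 * A @ B.T)
    return np.exp(-gamma * sq_dists)

X, y = make_moons(n_samples=200, noise=0.1, random_state=0)  # non-separable in input space
K = rbf_gram(X, X, gamma=2.0)

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(rbf_gram(X, X, gamma=2.0), y))
```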

Journal ArticleDOI
TL;DR: This paper is concerned with the construction of lines and planes of closest fit to systems of points in space, in the sense of minimizing the sum of squared perpendicular distances from the points to the fitted line or plane.
Abstract: (1901). LIII. On lines and planes of closest fit to systems of points in space. The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science: Vol. 2, No. 11, pp. 559-572.

10,656 citations
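
A small numerical sketch of the idea, assuming a synthetic 3-D point cloud: the singular vectors of the centered data give the line and plane of closest fit in the sense of minimizing squared perpendicular distances.

```python
# Plane of closest fit to a 3-D point cloud via SVD of the centered data.
import numpy as np

rng = np.random.default_rng(2)
points = rng.normal(size=(500, 3)) @ np.diag([3.0, 1.0, 0.2])  # elongated 3-D cloud

centered = points - points.mean(axis=0)
_, singular_values, vt = np.linalg.svd(centered, full_matrices=False)

plane_basis = vt[:2]          # directions spanning the plane of closest fit
normal = vt[2]                # direction of smallest variance (plane normal)
residual = centered @ normal  # signed perpendicular distances to the plane
print("RMS perpendicular distance to best-fit plane:", np.sqrt(np.mean(residual**2)))
```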