Author

Gabriel Kreiman

Bio: Gabriel Kreiman is an academic researcher from Harvard University. He has contributed to research on topics including visual cortex and the cognitive neuroscience of visual object recognition. He has an h-index of 37 and has co-authored 156 publications receiving 14,957 citations. Previous affiliations of Gabriel Kreiman include Boston Children's Hospital and the California Institute of Technology.


Papers
Journal ArticleDOI
TL;DR: High-density oligonucleotide arrays offer the opportunity to examine patterns of gene expression on a genome scale; in this paper the authors designed custom arrays that interrogate the expression of the vast majority of protein-encoding human and mouse genes and used them to profile a panel of 79 human and 61 mouse tissues.
Abstract: The tissue-specific pattern of mRNA expression can indicate important clues about gene function. High-density oligonucleotide arrays offer the opportunity to examine patterns of gene expression on a genome scale. Toward this end, we have designed custom arrays that interrogate the expression of the vast majority of protein-encoding human and mouse genes and have used them to profile a panel of 79 human and 61 mouse tissues. The resulting data set provides the expression patterns for thousands of predicted genes, as well as known and poorly characterized genes, from mice and humans. We have explored this data set for global trends in gene expression, evaluated commonly used lines of evidence in gene prediction methodologies, and investigated patterns indicative of chromosomal organization of transcription. We describe hundreds of regions of correlated transcription and show that some are subject to both tissue and parental allele-specific expression, suggesting a link between spatial expression and imprinting.
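The "regions of correlated transcription" idea lends itself to a small illustration: given a gene-by-tissue expression matrix with genes ordered along a chromosome, correlate each gene's tissue profile with its neighbour's and look for runs of correlated adjacent genes. The sketch below is a minimal, hypothetical version of that analysis on simulated data; the matrix, threshold, and run length are assumptions, not the authors' actual pipeline.

# Minimal sketch (not the authors' pipeline): flag runs of neighbouring genes
# whose expression profiles across tissues are correlated, the basic idea
# behind "regions of correlated transcription".
import numpy as np

rng = np.random.default_rng(0)

# Placeholder data: 500 genes x 79 tissues, genes assumed ordered along a chromosome.
expression = rng.lognormal(mean=2.0, sigma=1.0, size=(500, 79))

def adjacent_correlations(expr: np.ndarray) -> np.ndarray:
    """Pearson correlation of each gene's tissue profile with the next gene's."""
    z = (expr - expr.mean(axis=1, keepdims=True)) / expr.std(axis=1, keepdims=True)
    return (z[:-1] * z[1:]).mean(axis=1)

def correlated_runs(corrs: np.ndarray, threshold: float = 0.7, min_len: int = 3):
    """Yield (start, end) gene-index ranges where >= min_len consecutive pairs exceed threshold."""
    start = None
    for i, c in enumerate(corrs):
        if c >= threshold and start is None:
            start = i
        elif c < threshold and start is not None:
            if i - start >= min_len:
                yield (start, i + 1)   # +1: a run of pairs spans one extra gene
            start = None
    if start is not None and len(corrs) - start >= min_len:
        yield (start, len(corrs) + 1)

corrs = adjacent_correlations(expression)
for start, end in correlated_runs(corrs):
    print(f"candidate correlated region: genes {start}..{end}")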

3,513 citations

Journal ArticleDOI
13 May 2010-Nature
TL;DR: It is revealed that a widespread mechanism of enhancer activation involves RNAPII binding and eRNA synthesis, which occurs specifically at enhancers that are actively engaged in promoting mRNA synthesis.
Abstract: We used genome-wide sequencing methods to study stimulus-dependent enhancer function in mouse cortical neurons. We identified approximately 12,000 neuronal activity-regulated enhancers that are bound by the general transcriptional co-activator CBP in an activity-dependent manner. A function of CBP at enhancers may be to recruit RNA polymerase II (RNAPII), as we also observed activity-regulated RNAPII binding to thousands of enhancers. Notably, RNAPII at enhancers transcribes bi-directionally a novel class of enhancer RNAs (eRNAs) within enhancer domains defined by the presence of histone H3 monomethylated at lysine 4. The level of eRNA expression at neuronal enhancers positively correlates with the level of messenger RNA synthesis at nearby genes, suggesting that eRNA synthesis occurs specifically at enhancers that are actively engaged in promoting mRNA synthesis. These findings reveal that a widespread mechanism of enhancer activation involves RNAPII binding and eRNA synthesis.
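As a toy illustration of the reported eRNA-mRNA relationship (not the study's analysis), one can rank-correlate eRNA levels measured at enhancers with mRNA levels of the nearest gene. Everything below is simulated placeholder data; the variable names and the use of SciPy's spearmanr are assumptions made for the sketch.

# Toy illustration (not the study's analysis): rank-correlate eRNA levels at
# enhancers with mRNA levels of the nearest gene. Data here are simulated.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)

n_enhancers = 12000
mrna_at_nearest_gene = rng.lognormal(mean=3.0, sigma=1.0, size=n_enhancers)
# Simulate eRNA that partly tracks nearby mRNA synthesis, plus noise.
erna_at_enhancer = 0.1 * mrna_at_nearest_gene * rng.lognormal(0.0, 0.5, n_enhancers)

rho, pval = spearmanr(erna_at_enhancer, mrna_at_nearest_gene)
print(f"Spearman rho = {rho:.2f} (p = {pval:.1e})")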

2,177 citations

Journal ArticleDOI
23 Jun 2005-Nature
TL;DR: A remarkable subset of MTL neurons are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names, which suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.
Abstract: It takes a fraction of a second to recognize a person or an object even when seen under strikingly different conditions. How such a robust, high-level representation is achieved by neurons in the human brain is still unclear. In monkeys, neurons in the upper stages of the ventral visual pathway respond to complex images such as faces and objects and show some degree of invariance to metric properties such as the stimulus size, position and viewing angle. We have previously shown that neurons in the human medial temporal lobe (MTL) fire selectively to images of faces, animals, objects or scenes. Here we report on a remarkable subset of MTL neurons that are selectively activated by strikingly different pictures of given individuals, landmarks or objects and in some cases even by letter strings with their names. These results suggest an invariant, sparse and explicit code, which might be important in the transformation of complex visual percepts into long-term and more abstract memories.

1,626 citations

Journal ArticleDOI
04 Nov 2005-Science
TL;DR: A biologically plausible, classifier-based readout technique was used to investigate the neural coding of selectivity and invariance at the IT population level and found unexpectedly accurate and robust information about both object “identity” and “category.”
Abstract: Understanding the brain computations leading to object recognition requires quantitative characterization of the information represented in inferior temporal (IT) cortex. We used a biologically plausible, classifier-based readout technique to investigate the neural coding of selectivity and invariance at the IT population level. The activity of small neuronal populations (approximately 100 randomly selected cells) over very short time intervals (as small as 12.5 milliseconds) contained unexpectedly accurate and robust information about both object "identity" and "category." This information generalized over a range of object positions and scales, even for novel objects. Coarse information about position and scale could also be read out from the same population.
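A minimal sketch can convey what "classifier-based readout" means in practice: simulate spike counts from roughly 100 neurons responding to a handful of objects, train a linear classifier on responses at one position, and test it at another position to probe invariance. The tuning model, simulated data, and choice of scikit-learn's LinearSVC are assumptions for illustration; this is not the paper's recorded IT data or exact classifier.

# Minimal sketch of classifier-based population readout (simulated data,
# not recorded IT responses): train a linear classifier on responses at
# one object position and test at another to probe invariance.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)

n_neurons, n_objects, n_trials = 100, 8, 50
# Each neuron gets an object tuning plus a position-dependent gain.
tuning = rng.gamma(shape=2.0, scale=5.0, size=(n_neurons, n_objects))
gain = {"pos_A": 1.0, "pos_B": 0.8}

def simulate(position: str):
    rates = tuning * gain[position]                       # (neurons, objects)
    X, y = [], []
    for obj in range(n_objects):
        counts = rng.poisson(rates[:, obj], size=(n_trials, n_neurons))
        X.append(counts)
        y.extend([obj] * n_trials)
    return np.vstack(X), np.array(y)

X_train, y_train = simulate("pos_A")
X_test, y_test = simulate("pos_B")

clf = LinearSVC(max_iter=10000).fit(X_train, y_train)
print(f"identity readout, trained at pos_A, tested at pos_B: "
      f"{clf.score(X_test, y_test):.2f} correct")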

870 citations

Journal ArticleDOI
TL;DR: Functional neuroimaging in humans and electrophysiology in awake monkeys indicate that there are important differences between striate and extrastriate visual cortex in how well neural activity correlates with consciousness.
Abstract: The directness and vivid quality of conscious experience belies the complexity of the underlying neural mechanisms, which remain incompletely understood. Recent work has focused on identifying the brain structures and patterns of neural activity within the primate visual system that are correlated with the content of visual consciousness. Functional neuroimaging in humans and electrophysiology in awake monkeys indicate that there are important differences between striate and extrastriate visual cortex in how well neural activity correlates with consciousness. Moreover, recent neuroimaging studies indicate that, in addition to these ventral areas of visual cortex, dorsal prefrontal and parietal areas might contribute to conscious visual experience.

663 citations


Cited by
Book
18 Nov 2016
TL;DR: Deep learning as mentioned in this paper is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts, and it is used in many applications such as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames.
Abstract: Deep learning is a form of machine learning that enables computers to learn from experience and understand the world in terms of a hierarchy of concepts. Because the computer gathers knowledge from experience, there is no need for a human computer operator to formally specify all the knowledge that the computer needs. The hierarchy of concepts allows the computer to learn complicated concepts by building them out of simpler ones; a graph of these hierarchies would be many layers deep. This book introduces a broad range of topics in deep learning. The text offers mathematical and conceptual background, covering relevant concepts in linear algebra, probability theory and information theory, numerical computation, and machine learning. It describes deep learning techniques used by practitioners in industry, including deep feedforward networks, regularization, optimization algorithms, convolutional networks, sequence modeling, and practical methodology; and it surveys such applications as natural language processing, speech recognition, computer vision, online recommendation systems, bioinformatics, and videogames. Finally, the book offers research perspectives, covering such theoretical topics as linear factor models, autoencoders, representation learning, structured probabilistic models, Monte Carlo methods, the partition function, approximate inference, and deep generative models. Deep Learning can be used by undergraduate or graduate students planning careers in either industry or research, and by software engineers who want to begin using deep learning in their products or platforms. A website offers supplementary material for both readers and instructors.

38,208 citations

28 Jul 2005
TL;DR: Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1) interacts with one or more receptors on infected erythrocytes, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion.
Abstract: Antigenic variation allows many pathogenic microorganisms to evade host immune responses. Plasmodium falciparum erythrocyte membrane protein 1 (PfPMP1), expressed on the surface of infected erythrocytes, interacts with one or more receptors on infected erythrocytes, endothelial cells, dendritic cells, and the placenta, and plays a key role in adhesion and immune evasion. The var gene family encodes about 60 members per haploid genome, and switching transcription among different var gene variants provides the molecular basis for antigenic variation.

18,940 citations

Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning and evolutionary computation, and indirect search for short programs encoding deep and large networks.

14,635 citations

Book ChapterDOI
TL;DR: This chapter demonstrates the functional importance of dopamine to working memory function in several ways and demonstrates that a network of brain regions, including the prefrontal cortex, is critical for the active maintenance of internal representations.
Abstract: Publisher Summary This chapter focuses on the modern notion of short-term memory, called working memory. Working memory refers to the temporary maintenance of information that was just experienced or just retrieved from long-term memory but no longer exists in the external environment. These internal representations are short-lived, but can be maintained for longer periods of time through active rehearsal strategies, and can be subjected to various operations that manipulate the information in such a way that makes it useful for goal-directed behavior. Working memory is a system that is critically important in cognition and seems necessary in the course of performing many other cognitive functions, such as reasoning, language comprehension, planning, and spatial processing. This chapter demonstrates the functional importance of dopamine to working memory function in several ways. Elucidation of the cognitive and neural mechanisms underlying human working memory has been an important focus of cognitive neuroscience and neurology for much of the past decade. One conclusion that arises from this research is that working memory, a faculty that enables temporary storage and manipulation of information in the service of behavioral goals, can be viewed as neither a unitary nor a dedicated system. Data from numerous neuropsychological and neurophysiological studies in animals and humans demonstrate that a network of brain regions, including the prefrontal cortex, is critical for the active maintenance of internal representations.

10,081 citations

Book
01 Jan 2009
TL;DR: The motivations and principles behind learning algorithms for deep architectures are discussed, in particular those exploiting as building blocks the unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, which are used to construct deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.
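The greedy layer-wise idea described above can be sketched compactly: train a Restricted Boltzmann Machine on the input with one step of contrastive divergence (CD-1), then feed its hidden activations to the next RBM. The NumPy toy below uses placeholder binary data and arbitrary layer sizes; it illustrates the principle only and is not the book's or any reference implementation.

# Toy sketch of greedy layer-wise pretraining with RBMs (CD-1), the idea
# behind Deep Belief Networks described above. Not a production implementation.
import numpy as np

rng = np.random.default_rng(3)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=5, lr=0.05):
    """Train one RBM with single-step contrastive divergence (CD-1)."""
    n_visible = data.shape[1]
    W = rng.normal(0, 0.01, size=(n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        for v0 in data:
            h0_prob = sigmoid(v0 @ W + b_hid)
            h0 = (rng.random(n_hidden) < h0_prob).astype(float)
            v1_prob = sigmoid(h0 @ W.T + b_vis)          # reconstruction
            h1_prob = sigmoid(v1_prob @ W + b_hid)
            W += lr * (np.outer(v0, h0_prob) - np.outer(v1_prob, h1_prob))
            b_vis += lr * (v0 - v1_prob)
            b_hid += lr * (h0_prob - h1_prob)
    return W, b_hid

# Placeholder binary data: 200 examples, 64 "pixels".
data = (rng.random((200, 64)) < 0.3).astype(float)

# Greedy stacking: each layer is trained on the previous layer's hidden activations.
layer_input, layers = data, []
for n_hidden in (32, 16):
    W, b_hid = train_rbm(layer_input, n_hidden)
    layers.append((W, b_hid))
    layer_input = sigmoid(layer_input @ W + b_hid)

print("pretrained layer shapes:", [W.shape for W, _ in layers])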

7,767 citations