
Showing papers on "Visual perception" published in 2012


Journal ArticleDOI
TL;DR: After an integrated review of the neural mechanisms involved in contour grouping, border ownership, and figure–ground perception, the authors evaluate what modern vision science has offered compared to traditional Gestalt psychology, whether one can speak of a Gestalt revival, and where the remaining limitations and challenges lie.
Abstract: In 1912, Max Wertheimer published his paper on phi motion, widely recognized as the start of Gestalt psychology. Because of its continued relevance in modern psychology, this centennial anniversary is an excellent opportunity to take stock of what Gestalt psychology has offered and how it has changed since its inception. We first introduce the key findings and ideas in the Berlin school of Gestalt psychology, and then briefly sketch its development, rise, and fall. Next, we discuss its empirical and conceptual problems, and indicate how they are addressed in contemporary research on perceptual grouping and figure–ground organization. In particular, we review the principles of grouping, both classical (e.g., proximity, similarity, common fate, good continuation, closure, symmetry, parallelism) and new (e.g., synchrony, common region, element and uniform connectedness), and their role in contour integration and completion. We then review classic and new image-based principles of figure–ground organization, how it is influenced by past experience and attention, and how it relates to shape and depth perception. After an integrated review of the neural mechanisms involved in contour grouping, border ownership, and figure–ground perception, we conclude by evaluating what modern vision science has offered compared to traditional Gestalt psychology, whether we can speak of a Gestalt revival, and where the remaining limitations and challenges lie. A better integration of this research tradition with the rest of vision science requires further progress regarding the conceptual and theoretical foundations of the Gestalt approach, which is the focus of a second review article.

1,047 citations



Journal ArticleDOI
TL;DR: The present 3-year longitudinal study shows that prereading attentional orienting, assessed by serial search performance and spatial cueing facilitation, predicts future reading acquisition skills in grades 1 and 2 after controlling for age, nonverbal IQ, speech-sound processing, and nonalphabetic cross-modal mapping.

448 citations


Journal ArticleDOI
TL;DR: Wagemans et al. review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications.
Abstract: Our first review article (Wagemans et al., 2012) on the occasion of the centennial anniversary of Gestalt psychology focused on perceptual grouping and figure-ground organization. It concluded that further progress requires a reconsideration of the conceptual and theoretical foundations of the Gestalt approach, which is provided here. In particular, we review contemporary formulations of holism within an information-processing framework, allowing for operational definitions (e.g., integral dimensions, emergent features, configural superiority, global precedence, primacy of holistic/configural properties) and a refined understanding of its psychological implications (e.g., at the level of attention, perception, and decision). We also review 4 lines of theoretical progress regarding the law of Prägnanz: the brain's tendency to be attracted toward states corresponding to the simplest possible organization, given the available stimulation. The first considers the brain as a complex adaptive system and explains how self-organization solves the conundrum of trading off robustness against flexibility of perceptual states. The second specifies the economy principle in terms of optimization of neural resources, showing that elementary sensors working independently to minimize uncertainty can respond optimally at the system level. The third considers how Gestalt percepts (e.g., groups, objects) are optimal given the available stimulation, with optimality specified in Bayesian terms. Fourth, structural information theory explains how a Gestaltist visual system that focuses on internal coding efficiency yields external veridicality as a side effect. To answer the fundamental question of why things look as they do, a further synthesis of these complementary perspectives is required.
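The third of these lines, Gestalt optimality specified in Bayesian terms, can be summarized in one standard relation. The notation below is generic Bayesian-observer theory, not taken from the article itself: among candidate organizations H of the stimulation D, perception selects the posterior maximum, with a prior favoring simpler organizations.

```latex
% Bayesian reading of Praegnanz: among candidate organizations H of the
% stimulus D, perception selects the posterior-maximizing one, where the
% prior P(H) favors simpler organizations.
P(H \mid D) \propto P(D \mid H)\, P(H),
\qquad
H^{*} = \arg\max_{H} \, P(D \mid H)\, P(H)
```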

400 citations


Journal ArticleDOI
TL;DR: The mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits, providing direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
Abstract: Integration of multiple sensory cues is essential for precise and accurate perception and behavioral performance, yet the reliability of sensory signals can vary across modalities and viewing conditions. Human observers typically employ the optimal strategy of weighting each cue in proportion to its reliability, but the neural basis of this computation remains poorly understood. We trained monkeys to perform a heading discrimination task from visual and vestibular cues, varying cue reliability randomly. The monkeys appropriately placed greater weight on the more reliable cue, and population decoding of neural responses in the dorsal medial superior temporal area closely predicted behavioral cue weighting, including modest deviations from optimality. We found that the mathematical combination of visual and vestibular inputs by single neurons is generally consistent with recent theories of optimal probabilistic computation in neural circuits. These results provide direct evidence for a neural mechanism mediating a simple and widespread form of statistical inference.
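The optimal weighting strategy described here has a standard closed form: each cue is weighted in proportion to its inverse variance. The sketch below illustrates it with invented heading values, not data from the study.

```python
import numpy as np

def combine_cues(estimates, sigmas):
    """Reliability-weighted cue combination: each cue is weighted in
    proportion to its reliability (inverse variance), the statistically
    optimal rule for independent Gaussian cues."""
    estimates = np.asarray(estimates, dtype=float)
    reliabilities = 1.0 / np.asarray(sigmas, dtype=float) ** 2
    weights = reliabilities / reliabilities.sum()
    combined = weights @ estimates
    # Combined uncertainty is never worse than the best single cue.
    combined_sigma = np.sqrt(1.0 / reliabilities.sum())
    return combined, combined_sigma

# Illustrative heading estimates (degrees): a reliable vestibular cue
# and a noisier visual cue. The combined estimate sits nearer the
# reliable cue (2.4 deg) with lower uncertainty than either alone.
heading, sigma = combine_cues(estimates=[2.0, 6.0], sigmas=[1.0, 3.0])
print(heading, sigma)
```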

382 citations


Journal ArticleDOI
TL;DR: The density of the cortical graph was found to exceed that shown previously in monkey, and the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor, and limbic cortices, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortices.
Abstract: Much of the information used for visual perception and visually guided actions is processed in complex networks of connections within the cortex. To understand how this works in the normal brain and to determine the impact of disease, mice are promising models. In primate visual cortex, information is processed in a dorsal stream specialized for visuospatial processing and guided action and a ventral stream for object recognition. Here, we traced the outputs of 10 visual areas and used quantitative graph analytic tools of modern network science to determine, from the projection strengths in 39 cortical targets, the community structure of the network. We found a high density of the cortical graph that exceeded that shown previously in monkey. Each source area showed a unique distribution of projection weights across its targets (i.e., connectivity profile) that was well fit by a lognormal function. Importantly, the community structure was strongly dependent on the location of the source area: outputs from medial/anterior extrastriate areas were more strongly linked to parietal, motor, and limbic cortices, whereas lateral extrastriate areas were preferentially connected to temporal and parahippocampal cortices. These two subnetworks resemble dorsal and ventral cortical streams in primates, demonstrating that the basic layout of cortical networks is conserved across species.
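As a rough illustration of the graph-analytic pipeline described here, the sketch below builds a toy weighted graph, extracts community structure by modularity maximization, and fits a lognormal to one area's connectivity profile. The area names, weights, and library choices (networkx, scipy) are stand-ins, not the study's data or tools.

```python
import networkx as nx
import numpy as np
from networkx.algorithms.community import greedy_modularity_communities
from scipy import stats

rng = np.random.default_rng(0)

# Toy weighted cortical graph: 10 source areas, 39 targets, with
# lognormally distributed projection weights (placeholder names).
G = nx.Graph()
sources = [f"V{i}" for i in range(10)]
targets = [f"T{j}" for j in range(39)]
for s in sources:
    for t in rng.choice(targets, size=20, replace=False):
        G.add_edge(s, t, weight=float(rng.lognormal(mean=0.0, sigma=1.0)))

# Community structure via modularity maximization on edge weights.
communities = greedy_modularity_communities(G, weight="weight")
print([sorted(c) for c in communities])

# Check the lognormal shape of one area's connectivity profile.
w = [d["weight"] for _, _, d in G.edges("V0", data=True)]
shape, loc, scale = stats.lognorm.fit(w, floc=0)
print(shape, scale)
```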

346 citations


Journal ArticleDOI
TL;DR: It is shown that when 18-month-old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant, and learning occurred when bottom-up visual information was clean and uncluttered.

338 citations


Journal ArticleDOI
TL;DR: This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics.
Abstract: A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

330 citations


Journal ArticleDOI
TL;DR: This paper investigated the influence of different visual properties on nonsymbolic number processes and showed that the current assumptions about the relation between number and its visual characteristics are incorrect, and that people do not extract number from a visual scene independent of its visual cues.
Abstract: To date, researchers investigating nonsymbolic number processes have devoted little attention to the visual properties of their stimuli. This is unexpected, as nonsymbolic number is defined by its visual characteristics. When number changes, its visual properties change accordingly. In this study, we investigated the influence of different visual properties on nonsymbolic number processes and show that the current assumptions about the relation between number and its visual characteristics are incorrect. Similar to previous studies, we controlled the visual cues: each visual cue was not predictive of number. Nevertheless, participants showed congruency effects induced by the visual properties of the stimuli. These congruency effects scaled with the number of visual cues manipulated, implying that people do not extract number from a visual scene independent of its visual cues. Instead, number judgments are based on the integration of information from multiple visual cues. Consequently, current ways to control the visual cues of number stimuli are insufficient, as they control only a single variable at a time. More importantly, the existence of an approximate number system that can extract number independent of the visual cues appears unlikely. We therefore propose that number judgment is the result of the weighing of several distinct visual cues.
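The proposed "weighing of several distinct visual cues" can be made concrete with a toy regression model. Everything below (cue names, weights, noise levels) is simulated for illustration, not the study's stimuli or analysis.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated dot-array trials: log-scaled visual cues plus true log
# number. Cue names are illustrative stand-ins.
n_trials = 500
log_number = rng.uniform(np.log(5), np.log(50), n_trials)
log_diameter = rng.normal(0, 0.3, n_trials)                 # decorrelated from number
log_convex_hull = 0.5 * log_number + rng.normal(0, 0.3, n_trials)
log_density = log_number - log_convex_hull                  # density = number / hull (log scale)

# A cue-weighing account: judgments as a weighted sum of visual cues
# rather than a direct readout of number.
judgment = (0.2 * log_diameter + 0.9 * log_convex_hull
            + 0.6 * log_density + rng.normal(0, 0.1, n_trials))

# Recover the cue weights by least squares; nonzero weights signal
# reliance on that cue.
X = np.column_stack([log_diameter, log_convex_hull, log_density,
                     np.ones(n_trials)])
weights, *_ = np.linalg.lstsq(X, judgment, rcond=None)
print(weights)
```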

299 citations


Journal ArticleDOI
TL;DR: The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer.
Abstract: Facial expressions are of eminent importance for social interaction as they convey information about other individuals’ emotions and social intentions. According to the predominant “basic emotion” approach, the perception of emotion in faces is based on the rapid, automatic categorization of prototypical, universal expressions. Consequently, the perception of facial expressions has typically been investigated using isolated, de-contextualized, static pictures of facial expressions that maximize the distinction between categories. However, in everyday life, an individual’s face is not perceived in isolation, but almost always appears within a situational context, which may arise from other people, the physical environment surrounding the face, as well as multichannel information from the sender. Furthermore, situational context may be provided by the perceiver, including already present social information gained from affective learning and implicit processing biases such as race bias. Thus, the perception of facial expressions is presumably always influenced by contextual variables. In this comprehensive review, we aim at (1) systematizing the contextual variables that may influence the perception of facial expressions and (2) summarizing experimental paradigms and findings that have been used to investigate these influences. The studies reviewed here demonstrate that perception and neural processing of facial expressions are substantially modified by contextual information, including verbal, visual, and auditory information presented together with the face as well as knowledge or processing biases already present in the observer. These findings further challenge the assumption of automatic, hardwired categorical emotion extraction mechanisms predicted by basic emotion theories. Taking into account a recent model on face processing, we discuss where and when these different contextual influences may take place, thus outlining potential avenues in future research.

294 citations


Journal ArticleDOI
TL;DR: The Enhanced Perceptual Functioning Model proposes that enhanced autistic performance in basic perceptual tasks results from stronger engagement of sensory processing mechanisms, a situation that may facilitate an atypically prominent role for perceptual mechanisms in supporting cognition.
Abstract: Autistics often exhibit enhanced perceptual abilities when engaged in visual search, visual discrimination, and embedded figure detection. In similar fashion, while performing a range of perceptual or cognitive tasks, autistics display stronger physiological engagement of the visual system than do non-autistics. To account for these findings, the Enhanced Perceptual Functioning Model proposes that enhanced autistic performance in basic perceptual tasks results from stronger engagement of sensory processing mechanisms, a situation that may facilitate an atypically prominent role for perceptual mechanisms in supporting cognition. Using quantitative meta-analysis of published functional imaging studies from which Activation Likelihood Estimation maps were computed, we asked whether autism is associated with enhanced task-related activity for a broad range of visual tasks. To determine whether atypical engagement of visual processing is a general or domain-specific phenomenon, we examined three different visual processing domains: faces, objects, and words. Overall, we observed more activity in autistics compared to non-autistics in temporal, occipital, and parietal regions. In contrast, autistics exhibited less activity in frontal cortex. The spatial distribution of the observed differential between-group patterns varied across processing domains. Autism may be characterized by enhanced functional resource allocation in regions associated with visual processing and expertise. Atypical adult organizational patterns may reflect underlying differences in developmental neural plasticity that can result in aspects of the autistic phenotype, including enhanced visual skills, atypical face processing, and hyperlexia.

Journal ArticleDOI
TL;DR: Psychometric functions for contrast sensitivity fitted for the regular and irregular conditions indicated that temporal expectation modulates perceptual processing by enhancing the contrast sensitivity of visual targets; these effects support the idea that the temporal structure of external events can entrain the attentional focus, optimizing the processing of relevant sensory information.
Abstract: It is increasingly clear that we extract patterns of temporal regularity between events to optimize information processing. Whereas some of the mechanisms for facilitating action preparation and execution have been well documented, much less is understood about whether and how temporal expectations influence visual perception. We used a psychophysical paradigm and computational modeling to investigate the mechanisms by which temporal expectation can modulate visual perception. Visual targets appeared in a stream of noise patches separated by fixed (400 ms; regular condition) or jittered (200/300/400/500/600 ms; irregular condition) intervals. Targets were visual gratings tilted 45° clockwise or counterclockwise, presented at one of seven contrast levels. Human observers were required to perform an orientation discrimination (i.e., left or right). Psychometric functions for contrast sensitivity fitted for the regular and irregular conditions indicated that temporal expectation modulates perceptual processing by enhancing the contrast sensitivity of visual targets. This increase in signal strength was accompanied by a reduction in reaction times. A diffusion model indicated that rhythmic temporal expectation enhanced the signal-to-noise gain of the sensory evidence upon which decisions were made. These effects support the idea that the temporal structure of external events can entrain the attentional focus, optimizing the processing of relevant sensory information.
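A minimal sketch of the psychometric-function fitting step, assuming a cumulative-Gaussian form on log contrast and invented proportion-correct data; the study's actual functional form and parameters may differ.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Proportion correct at seven contrast levels (illustrative numbers;
# a regular-rhythm condition would show a leftward threshold shift,
# i.e., higher contrast sensitivity).
contrast = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64])
p_correct = np.array([0.52, 0.55, 0.64, 0.78, 0.90, 0.97, 0.99])

def psychometric(c, alpha, beta):
    """Cumulative-Gaussian psychometric function on log contrast,
    rising from chance (0.5) to 1 for a two-alternative task."""
    return 0.5 + 0.5 * norm.cdf(np.log(c), loc=np.log(alpha), scale=beta)

(alpha, beta), _ = curve_fit(psychometric, contrast, p_correct,
                             p0=[0.05, 1.0])
print(f"threshold contrast ~ {alpha:.3f}, slope ~ {beta:.3f}")
```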

Journal ArticleDOI
26 Jul 2012-Neuron
TL;DR: The evidence in favor of traveling waves is summarized, suggesting that their substrate may lie in long-range horizontal connections and that their functional role may involve the integration of information over large regions of space.

Journal ArticleDOI
TL;DR: It is proposed that gaze behavior while determining a person’s identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movements that optimize performance in these evolutionarily important perceptual tasks.
Abstract: When viewing a human face, people often look toward the eyes. Maintaining good eye contact carries significant social value and allows for the extraction of information about gaze direction. When identifying faces, humans also look toward the eyes, but it is unclear whether this behavior is solely a byproduct of the socially important eye movement behavior or whether it has functional importance in basic perceptual tasks. Here, we propose that gaze behavior while determining a person’s identity, emotional state, or gender can be explained as an adaptive brain strategy to learn eye movement plans that optimize performance in these evolutionarily important perceptual tasks. We show that humans move their eyes to locations that maximize perceptual performance determining the identity, gender, and emotional state of a face. These optimal fixation points, which differ moderately across tasks, are predicted correctly by a Bayesian ideal observer that integrates information optimally across the face but is constrained by the decrease in resolution and sensitivity from the fovea toward the visual periphery (foveated ideal observer). Neither a model that disregards the foveated nature of the visual system and makes fixations on the local region with maximal information, nor a model that makes center-of-gravity fixations correctly predict human eye movements. Extension of the foveated ideal observer framework to a large database of real-world faces shows that the optimality of these strategies generalizes across the population. These results suggest that the human visual system optimizes face recognition performance through guidance of eye movements not only toward but, more precisely, just below the eyes.
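A toy version of the foveated-ideal-observer logic: information pooled across the face is discounted by an eccentricity-dependent sensitivity falloff, and fixation is placed where the pooled total peaks. The information map and falloff function below are invented for illustration, not the paper's model parameters.

```python
import numpy as np

# Vertical positions on a face (arbitrary units) and the task
# information carried at each: a large cluster near the eyes and a
# smaller one near the mouth (illustrative values only).
positions = np.linspace(0.0, 10.0, 101)
info = np.exp(-0.5 * ((positions - 6.5) / 1.0) ** 2)          # "eyes"
info += 0.4 * np.exp(-0.5 * ((positions - 2.0) / 1.5) ** 2)   # "mouth"

def pooled_information(fixation):
    """Information usable from a fixation point: each location is
    discounted by a sensitivity falloff with eccentricity."""
    eccentricity = np.abs(positions - fixation)
    sensitivity = 1.0 / (1.0 + 0.8 * eccentricity)
    return np.sum(info * sensitivity)

# The optimum lands slightly below the eye cluster, pulled toward the
# secondary cluster, mirroring the "just below the eyes" result.
best = max(positions, key=pooled_information)
print(best)
```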

Journal ArticleDOI
TL;DR: It is shown that cross-modal phase locking of oscillatory visual cortex activity can arise in the human brain to affect perceptual and EEG measures of visual processing in a cyclical manner, consistent with occipital alpha oscillations underlying a rapid cycling of neural excitability in visual areas.

Journal ArticleDOI
TL;DR: Evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli is provided, providing strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception, are highly related.
Abstract: Human multisensory systems are known to bind inputs from the different sensory modalities into a unified percept, a process that leads to measurable behavioral benefits. This integrative process can be observed through multisensory illusions, including the McGurk effect and the sound-induced flash illusion, both of which demonstrate the ability of one sensory modality to modulate perception in a second modality. Such multisensory integration is highly dependent upon the temporal relationship of the different sensory inputs, with perceptual binding occurring within a limited range of asynchronies known as the temporal binding window (TBW). Previous studies have shown that this window is highly variable across individuals, but it is unclear how these variations in the TBW relate to an individual's ability to integrate multisensory cues. Here we provide evidence linking individual differences in multisensory temporal processes to differences in the individual's audiovisual integration of illusory stimuli. Our data provide strong evidence that the temporal processing of multiple sensory signals and the merging of multiple signals into a single, unified perception are highly related. Specifically, the width of the right side of an individual's TBW, where the auditory stimulus follows the visual, is significantly correlated with the strength of illusory percepts, as indexed both via an increase in the strength of binding for synchronous sensory signals and via an improvement in correctly dissociating asynchronous signals. These findings are discussed in terms of their possible neurobiological basis, relevance to the development of sensory integration, and possible importance for clinical conditions in which there is growing evidence that multisensory integration is compromised.
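A common way to quantify the TBW is to fit a bell-shaped curve to simultaneity judgments across asynchronies and read off its width at a criterion. The sketch below uses invented data and a 75%-of-peak criterion, which may differ from the study's exact procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion "simultaneous" responses across audiovisual asynchronies
# (positive SOA = auditory after visual). Data are illustrative.
soa_ms = np.array([-300, -200, -100, 0, 100, 200, 300])
p_sync = np.array([0.10, 0.35, 0.80, 0.95, 0.85, 0.55, 0.20])

def gaussian(soa, amp, mu, sigma):
    """Bell-shaped simultaneity curve; its spread defines the TBW."""
    return amp * np.exp(-0.5 * ((soa - mu) / sigma) ** 2)

(amp, mu, sigma), _ = curve_fit(gaussian, soa_ms, p_sync,
                                p0=[1.0, 0.0, 150.0])

# Right-side TBW: the positive SOA at which the fitted curve drops to
# 75% of its peak (solve amp * exp(-z^2/2) = 0.75 * amp for z).
right_tbw = mu + sigma * np.sqrt(-2.0 * np.log(0.75))
print(f"right TBW ~ {right_tbw:.0f} ms")
```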

Journal ArticleDOI
TL;DR: It is suggested that the hippocampus processes complex conjunctions of spatial features, and that it may be more appropriate to consider the representations for which this structure is critical, rather than the cognitive processes that it mediates.
Abstract: In this review, we will discuss the idea that the hippocampus may be involved in both memory and perception, contrary to theories that posit functional and neuroanatomical segregation of these processes. This suggestion is based on a number of recent neuropsychological and functional neuroimaging studies that have demonstrated that the hippocampus is involved in the visual discrimination of complex spatial scene stimuli. We argue that these findings cannot be explained by long-term memory or working memory processing or, in the case of patient findings, dysfunction beyond the medial temporal lobe (MTL). Instead, these studies point toward a role for the hippocampus in higher-order spatial perception. We suggest that the hippocampus processes complex conjunctions of spatial features, and that it may be more appropriate to consider the representations for which this structure is critical, rather than the cognitive processes that it mediates.

Journal ArticleDOI
TL;DR: The results support the view that imagery and perception are based on similar neural representations and a shared representation of location in low-level and high-level ventral visual cortex is found.
Abstract: Visual imagery allows us to vividly imagine scenes in the absence of visual stimulation. The likeness of visual imagery to visual perception suggests that they might share neural mechanisms in the brain. Here, we directly investigated whether perception and visual imagery share cortical representations. Specifically, we used a combination of functional magnetic resonance imaging (fMRI) and multivariate pattern classification to assess whether imagery and perception encode the "category" of objects and their "location" in a similar fashion. Our results indicate that the fMRI response patterns for different categories of imagined objects can be used to predict the fMRI response patterns for seen objects. Similarly, we found a shared representation of location in low-level and high-level ventral visual cortex. Thus, our results support the view that imagery and perception are based on similar neural representations.
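The cross-classification logic (train on one condition, test on the other) is easy to sketch with simulated voxel patterns; the shapes, noise levels, and classifier choice below are assumptions, not the study's pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)

# Toy voxel patterns: two object categories sharing a (noisy) code
# across perception and imagery. All dimensions are made up.
n_voxels, n_trials = 50, 40
signal = rng.normal(0, 1, (2, n_voxels))        # one pattern per category
labels = np.repeat([0, 1], n_trials // 2)
perceived = signal[labels] + rng.normal(0, 1.0, (n_trials, n_voxels))
imagined = signal[labels] + rng.normal(0, 2.0, (n_trials, n_voxels))  # noisier

# Cross-classification: train on imagery, test on perception.
# Above-chance transfer indicates a shared categorical representation.
clf = LogisticRegression(max_iter=1000).fit(imagined, labels)
print("cross-decoding accuracy:", clf.score(perceived, labels))
```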

Journal ArticleDOI
TL;DR: This study maps the connectivity of the human rostral temporal lobe in vivo for the first time using diffusion-weighted imaging probabilistic tractography and indicates that convergence of sensory information in the temporal lobe is in fact a graded process that occurs along both its longitudinal and lateral axes and culminates in the most rostral limits.
Abstract: In recent years, multiple independent neuroscience investigations have implicated critical roles for the rostral temporal lobe in auditory and visual perception, language, and semantic memory. Although arising in the context of different cognitive functions, most of these suggest that there is a gradual convergence of sensory information in the temporal lobe that culminates in modality- and perceptually invariant representations at the most rostral aspect. Currently, however, too little is known regarding connectivity within the human temporal lobe to be sure of exactly how and where convergence occurs; existing hypotheses are primarily derived on the basis of cross-species generalizations from invasive nonhuman primate studies, the validity of which is unclear, especially where language function is concerned. In this study, we map the connectivity of the human rostral temporal lobe in vivo for the first time using diffusion-weighted imaging probabilistic tractography. The results indicate that convergence of sensory information in the temporal lobe is in fact a graded process that occurs along both its longitudinal and lateral axes and culminates in the most rostral limits. We highlight the consistency of our results with those of prior functional neuroimaging, computational modeling, and patient studies. By going beyond simple fasciculus reconstruction, we systematically explored the connectivity of specific temporal lobe areas to frontal and parietal language regions. In contrast to the graded within-temporal lobe connectivity, this intertemporal connectivity was found to dissociate across caudal, mid, and rostral subregions. Furthermore, we identified a basal rostral temporal region with very limited connectivity to areas outside the temporal lobe, which aligns with recent evidence that this subregion underpins the extraction of modality- and context-invariant semantic representations.

Journal ArticleDOI
TL;DR: The results suggest that while imagery and perception have similar neural substrates, they involve different network dynamics, resolving the tension between previous imaging and neuropsychological studies.

Journal ArticleDOI
TL;DR: A model is developed predicting that this gaze response will lead to the transfer of visual attention between crowd members, but that the response is not sufficiently strong to produce the tipping point or critical mass of gaze-following previously predicted for crowd dynamics.
Abstract: Pedestrian crowds can form the substrate of important socially contagious behaviors, including propagation of visual attention, violence, opinions, and emotional state. However, relating individual to collective behavior is often difficult, and quantitative studies have largely used laboratory experimentation. We present two studies in which we tracked the motion and head direction of 3,325 pedestrians in natural crowds to quantify the extent, influence, and context dependence of socially transmitted visual attention. In our first study, we instructed stimulus groups of confederates within a crowd to gaze up at a single point atop a building. Analysis of passersby shows that visual attention spreads unevenly in space and that the probability of pedestrians adopting this behavior increases as a function of stimulus group size before saturating for larger groups. We develop a model that predicts that this gaze response will lead to the transfer of visual attention between crowd members, but it is not sufficiently strong to produce a tipping point or critical mass of gaze-following that has previously been predicted for crowd dynamics. A second experiment, in which passersby were presented with two stimulus confederates performing suspicious/irregular activity, supports the predictions of our model. This experiment reveals that visual interactions between pedestrians occur primarily within a 2-m range and that gaze-copying, although relatively weak, can facilitate response to relevant stimuli. Although the above aspects of gaze-following response are reproduced robustly between experimental setups, the overall tendency to respond to a stimulus is dependent on spatial features, social context, and sex of the passerby.
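The saturating group-size effect can be captured by a simple two-parameter curve; the sketch below fits one such form to invented proportions (the study's actual functional form and estimates may differ).

```python
import numpy as np
from scipy.optimize import curve_fit

# Proportion of passersby who look up as a function of stimulus group
# size; numbers are illustrative, not data from the study.
group_size = np.array([1, 2, 3, 5, 10, 15])
p_lookup = np.array([0.04, 0.08, 0.12, 0.16, 0.19, 0.20])

def saturating(n, p_max, k):
    """Response rises with group size n and saturates at p_max;
    k is the group size giving a half-maximal response."""
    return p_max * n / (n + k)

(p_max, k), _ = curve_fit(saturating, group_size, p_lookup, p0=[0.2, 3.0])
print(f"asymptote ~ {p_max:.2f}, half-saturation at n ~ {k:.1f}")
```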

Journal ArticleDOI
TL;DR: This study failed to replicate previous findings in that subjects' accuracy was remarkably lower and visualizations exhibited no measurable benefit, but suggests that visualizations are more effective when the text is given without numerical values.
Abstract: People have difficulty understanding statistical information and are unaware of their wrong judgments, particularly in Bayesian reasoning. Psychology studies suggest that the way Bayesian problems are represented can impact comprehension, but few visual designs have been evaluated and only populations with a specific background have been involved. In this study, a textual and six visual representations for three classic problems were compared using a diverse subject pool through crowdsourcing. Visualizations included area-proportional Euler diagrams, glyph representations, and hybrid diagrams combining both. Our study failed to replicate previous findings in that subjects' accuracy was remarkably lower and visualizations exhibited no measurable benefit. A second experiment confirmed that simply adding a visualization to a textual Bayesian problem is of little help, even when the text refers to the visualization, but suggests that visualizations are more effective when the text is given without numerical values. We discuss our findings and the need for more such experiments to be carried out on heterogeneous populations of non-experts.
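For reference, here is a worked instance of the kind of classic Bayesian problem at issue, using the standard mammography numbers (an assumption; the study's three problems may have used different values).

```python
from fractions import Fraction

# Classic textbook setup: base rate 1%, hit rate 80%, false-positive
# rate 9.6%. Question: P(disease | positive test)?
prior = Fraction(1, 100)
hit_rate = Fraction(80, 100)
false_alarm = Fraction(96, 1000)

# Bayes' theorem: posterior = P(+|D) P(D) / P(+).
p_positive = hit_rate * prior + false_alarm * (1 - prior)
posterior = hit_rate * prior / p_positive
print(float(posterior))  # ~0.078: far lower than most people's intuition
```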

Book ChapterDOI
TL;DR: The authors investigate whether there can be visual arguments, what a visual argument would look like if we encountered one, and, granting that they are possible in a non-metaphorical sense, whether any actually exist.
Abstract: The chapter investigates the extension of argument into the realm of visual expression. Although images can be influential in affecting attitudes and beliefs it does not follow that such images are arguments. So we should at the outset investigate whether there can be visual arguments. To do so, we need to know what a visual argument would look like if we encountered one. How, if at all, are visual and verbal arguments related? An account of a concept of visual argument serves to establish the possibility that they exist. If they are possible in a non-metaphorical way, are there any visual arguments? Examples show that they do exist: in paintings and sculpture, in print advertisements, in TV commercials and in political cartoons. But visual arguments are not distinct in essence from verbal arguments. The argument is always a propositional entity, merely expressed differently in the two cases. And the effectiveness in much visual persuasion is not due to any arguments conveyed.

Journal ArticleDOI
TL;DR: It is shown that top-down attention also has a separate influence on the background coupling between visual areas: adopting different attentional goals resulted in specific patterns of noise correlations in the visual system, whereby intrinsic activity in the same set of low-level areas was shared with only those high- level areas relevant to the current goal.
Abstract: Top-down attention is an essential cognitive ability, allowing our finite brains to process complex natural environments by prioritizing information relevant to our goals. Previous evidence suggests that top-down attention operates by modulating stimulus-evoked neural activity within visual areas specialized for processing goal-relevant information. We show that top-down attention also has a separate influence on the background coupling between visual areas: adopting different attentional goals resulted in specific patterns of noise correlations in the visual system, whereby intrinsic activity in the same set of low-level areas was shared with only those high-level areas relevant to the current goal. These changes occurred independently of evoked activity, persisted without visual stimulation, and predicted behavioral success in deploying attention better than the modulation of evoked activity. This attentional switching of background connectivity suggests that attention may help synchronize different levels of the visual processing hierarchy, forming state-dependent functional pathways in human visual cortex to prioritize goal-relevant information.
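Background coupling of the kind described is typically computed as a "noise correlation": remove the mean evoked response, then correlate the residuals across areas. A minimal simulated sketch follows (all signals invented, not the study's data or preprocessing).

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy time courses for two areas under one attentional state: a shared
# stimulus-locked response plus correlated intrinsic fluctuations.
n_trials, n_timepoints = 30, 20
evoked = np.sin(np.linspace(0, np.pi, n_timepoints))       # stimulus-locked
background = rng.normal(0, 1, (n_trials, n_timepoints))    # shared intrinsic activity
area_low = evoked + background + rng.normal(0, 0.5, (n_trials, n_timepoints))
area_high = evoked + 0.8 * background + rng.normal(0, 0.5, (n_trials, n_timepoints))

# Noise correlation: subtract each area's mean evoked response, then
# correlate the residuals across trials and time.
resid_low = (area_low - area_low.mean(axis=0)).ravel()
resid_high = (area_high - area_high.mean(axis=0)).ravel()
print(np.corrcoef(resid_low, resid_high)[0, 1])  # positive: coupled background
```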

Journal ArticleDOI
17 May 2012-PLOS ONE
TL;DR: It is proposed that humans estimate numerosity by weighing the different visual cues present in the stimuli, which suggests that the existence of an approximate number system that can extract numerosity independently of the visual cues is unlikely.
Abstract: Mainstream theory suggests that the approximate number system supports our non-symbolic number abilities (e.g. estimating or comparing different sets of items). It is argued that this system can extract number independently of the visual cues present in the stimulus (diameter, aggregate surface, etc.). However, in a recent report we argue that this might not be the case. We showed that participants combined information from different visual cues to derive their answers. While numerosity comparison requires a rough comparison of two sets of items (smaller versus larger), numerosity estimation requires a more precise mechanism. It could therefore be that numerosity estimation, in contrast to numerosity comparison, might rely on the approximate number system. To test this hypothesis, we conducted a numerosity estimation experiment. We controlled for the visual cues according to current standards: each single visual property was not informative about numerosity. Nevertheless, the results reveal that participants were influenced by the visual properties of the dot arrays. They gave a larger estimate when the dot arrays consisted of dots with, on average, a smaller diameter, aggregate surface or density but a larger convex hull. The reliance on visual cues to estimate numerosity suggests that the existence of an approximate number system that can extract numerosity independently of the visual cues is unlikely. Instead, we propose that humans estimate numerosity by weighing the different visual cues present in the stimuli.

Journal ArticleDOI
TL;DR: It is found that information about the emotional content of unattended faces presented at the periphery of the visual field is rapidly processed and stored in a predictive memory representation by the visual system, and that this processing shows a 'negativity bias' under unattended conditions.

Journal ArticleDOI
07 Jun 2012-Neuron
TL;DR: The results suggest that the activity of neuronal populations in at least two association cortical areas represents the content of conscious visual perception.

Journal ArticleDOI
TL;DR: Results show that attentional capture by salient distractors can be inhibited for short-duration search displays, in which it would interfere with target processing, and demonstrate that salience-driven capture is not a purely bottom–up phenomenon but is subject to top–down control.
Abstract: The question whether attentional capture by salient but task-irrelevant visual stimuli is triggered in a bottom-up fashion or depends on top-down task settings is still unresolved. Strong support for bottom-up capture was obtained in the additional singleton task, in which search arrays were visible until response onset. Equally strong evidence for top-down control of attentional capture was obtained in spatial cueing experiments in which display durations were very brief. To demonstrate the critical role of temporal task demands on salience-driven attentional capture, we measured ERP indicators of capture by task-irrelevant color singletons in search arrays that could also contain a shape target. In Experiment 1, all displays were visible until response onset. In Experiment 2, display duration was limited to 200 msec. With long display durations, color singleton distractors elicited an N2pc component that was followed by a late Pd component, suggesting that they triggered attentional capture, which was later replaced by location-specific inhibition. When search arrays were visible for only 200 msec, the distractor-elicited N2pc was eliminated and was replaced by a Pd component in the same time range, indicative of rapid suppression of capture. Results show that attentional capture by salient distractors can be inhibited for short-duration search displays, in which it would interfere with target processing. They demonstrate that salience-driven capture is not a purely bottom-up phenomenon but is subject to top-down control.
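Both ERP components mentioned here are conventionally measured from the contralateral-minus-ipsilateral difference wave at lateral posterior electrodes; the sketch below quantifies them on simulated traces (electrode convention, time windows, and amplitudes are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated ERP traces at lateral posterior sites (e.g., PO7/PO8),
# 2 ms sampling; amplitudes in microvolts, invented for illustration.
times = np.arange(0, 0.500, 0.002)
contra = rng.normal(0, 0.2, times.size)
ipsi = rng.normal(0, 0.2, times.size)
contra += -1.0 * np.exp(-0.5 * ((times - 0.250) / 0.025) ** 2)  # N2pc-like negativity
contra += 0.8 * np.exp(-0.5 * ((times - 0.350) / 0.025) ** 2)   # Pd-like positivity

# N2pc: negative deflection in the contra-minus-ipsi difference wave
# (~200-300 ms); Pd: later positive deflection (~300-400 ms).
diff = contra - ipsi
n2pc_window = (times >= 0.200) & (times <= 0.300)
pd_window = (times >= 0.300) & (times <= 0.400)
print("N2pc mean amplitude:", diff[n2pc_window].mean())
print("Pd mean amplitude:", diff[pd_window].mean())
```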

BookDOI
06 Dec 2012
TL;DR: This book presents evidence for a visual-processing-deficit subtype among disabled readers, together with a review of visual processes in reading and directions for research and theory.
Abstract: Contents: Preface. K.E. Stanovich, Introduction. Part I: Background. R.L. Venezky, History of Interest in the Visual Component of Reading. D.M. Willows, M. Terepocki, The Relation of Reversal Errors to Reading Disabilities. M.C. Corballis, I.L. Beale, Orton Revisited: Dyslexia, Laterality, and Left-Right Confusion. Part II: Neuropsychological Bases of Visual Processes. S. Lehmkuhle, Neurological Basis of Visual Processes in Reading. B.G. Breitmeyer, Sustained (P) and Transient (M) Channels in Vision: A Review and Implications for Reading. M.J. Riddoch, G.W. Humphreys, Visual Aspects of Neglect Dyslexia. Part III: Visual Processes in Reading. D.W. Massaro, T. Sanocki, Visual Information Processing in Reading. E. Corcos, D.M. Willows, The Processing of Orthographic Information. A. Pollatsek, Eye Movements in Reading. A. Kennedy, Eye Movement Control and Visual Display Units. L.B. Feldman, Bi-Alphabetism and the Design of a Reading Mechanism. Part IV: Visual Factors in Reading Disabilities. D.M. Willows, R.S. Kruk, E. Corcos, Are There Differences Between Disabled and Normal Readers in Their Processing of Visual Information? C. Watson, D.M. Willows, Evidence for a Visual-Processing-Deficit Subtype Among Disabled Readers. W.J. Lovegrove, M.C. Williams, Visual Temporal Processing Deficits in Specific Reading Disability. J.F. Stein, Visuospatial Perception in Disabled Readers. P.H.K. Seymour, H.M. Evans, The Visual (Orthographic) Processor and Developmental Dyslexia. R.K. Olson, H. Forsberg, Disabled and Normal Readers' Eye Movements in Reading and Nonreading Tasks. P.G. Aaron, J-C. Guillemard, Artists as Dyslexics. Part V: Parameters Affecting Visual Processing. R.P. Garzia, Optometric Factors in Reading Disability. A. Wilkins, Reading and Visual Discomfort. R.S. Kruk, Processing Text on Monitors. Part VI: Conclusions and Future Directions. K. Rayner, Visual Processes in Reading: Directions for Research and Theory.

Journal ArticleDOI
TL;DR: It is concluded that visual experience is necessary for the neural development of normal spatial cognition through the maturation of multisensory neurons for spatial tasks.