
Showing papers in "Perception in 2004"


Journal ArticleDOI
TL;DR: Observations show that basic-level ‘everyday’ object recognition in normal conditions is facilitated by the presence of color information, and support a ‘shape + surface’ model of object recognition, for which color is an integral part of the object representation.
Abstract: Theories of object recognition differ to the extent that they consider object representations as being mediated only by the shape of the object, or shape and surface details, if surface details are part of the representation. In particular, it has been suggested that color information may be helpful at recognizing objects only in very special cases, but not during basic-level object recognition in good viewing conditions. In this study, we collected normative data (naming agreement, familiarity, complexity, and imagery judgments) for Snodgrass and Vanderwart's object database of 260 black-and-white line drawings, and then compared the data to exactly the same shapes but with added gray-level texture and surface details (set 2), and color (set 3). Naming latencies were also recorded. Whereas the addition of texture and shading without color only slightly improved naming agreement scores for the objects, the addition of color information unambiguously improved naming accuracy and speeded correct response times. As shown in previous studies, the advantage provided by color was larger for objects with a diagnostic color, and structurally similar shapes, such as fruits and vegetables, but was also observed for man-made objects with and without a single diagnostic color. These observations show that basic-level 'everyday' object recognition in normal conditions is facilitated by the presence of color information, and support a 'shape + surface' model of object recognition, for which color is an integral part of the object representation. In addition, the new stimuli (sets 2 and 3) and the corresponding normative data provide valuable materials for a wide range of experimental and clinical studies of object recognition.

878 citations


Journal ArticleDOI
TL;DR: Exaggeration of body movement enhanced recognition accuracy and produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.
Abstract: Research on emotion recognition has been dominated by studies of photographs of facial expressions. A full understanding of emotion perception and its neural substrate will require investigations that employ dynamic displays and means of expression other than the face. Our aims were: (i) to develop a set of dynamic and static whole-body expressions of basic emotions for systematic investigations of clinical populations, and for use in functional-imaging studies; (ii) to assess forced-choice emotion-classification performance with these stimuli relative to the results of previous studies; and (iii) to test the hypotheses that more exaggerated whole-body movements would produce (a) more accurate emotion classification and (b) higher ratings of emotional intensity. Ten actors portrayed 5 emotions (anger, disgust, fear, happiness, and sadness) at 3 levels of exaggeration, with their faces covered. Two identical sets of 150 emotion portrayals (full-light and point-light) were created from the same digital footage, along with corresponding static images of the 'peak' of each emotion portrayal. Recognition tasks confirmed previous findings that basic emotions are readily identifiable from body movements, even when static form information is minimised by use of point-light displays, and that even static displays, both full-light and point-light, can convey identifiable emotions, though rather less efficiently than dynamic displays. Recognition success differed for individual emotions, corroborating earlier results about the importance of distinguishing differences in movement characteristics for different emotional expressions. The patterns of misclassifications were in keeping with earlier findings on emotional clustering.
Exaggeration of body movement (a) enhanced recognition accuracy, especially for the dynamic point-light displays, but notably not for sadness, and (b) produced higher emotional-intensity ratings, regardless of lighting condition, for movies but to a lesser extent for stills, indicating that intensity judgments of body gestures rely more on movement (or form-from-movement) than static form information.

670 citations


Journal ArticleDOI
TL;DR: The proposed two-point model is able to account for four interesting aspects of steering behavior: curve negotiation with occluded visual regions, corrective steering after a lateral drift, lane changing, and individual differences.
Abstract: When steering down a winding road, drivers have been shown to use both near and far regions of the road for guidance during steering. We propose a model of steering that explicitly embodies this idea, using both a 'near point' to maintain a central lane position and a 'far point' to account for the upcoming roadway. Unlike control models that integrate near and far information to compute curvature or more complex features, our model relies solely on one perceptually plausible feature of the near and far points, namely the visual direction to each point. The resulting parsimonious model can be run in simulation within a realistic highway environment to facilitate direct comparison between model and human behavior. Using such simulations, we demonstrate that the proposed two-point model is able to account for four interesting aspects of steering behavior: curve negotiation with occluded visual regions, corrective steering after a lateral drift, lane changing, and individual differences.
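The control law of such a two-point model can be written directly in terms of the visual directions to the near and far points. A minimal sketch in Python (the gains, time step, and update form are illustrative values for a generic two-point law, not the paper's fitted parameters):

```python
def two_point_steering(theta_near, theta_far, dtheta_near, dtheta_far,
                       k_far=16.0, k_near=8.0, k_i=1.0, dt=0.05):
    """One control step of a two-point visual steering law.

    theta_near / theta_far: visual directions (radians) from the current
    heading to the near and far road points; dtheta_* are their rates of
    change. The rate terms provide anticipatory correction, while the
    theta_near term nulls steady-state lane-position error.
    Returns the change in steering angle for this time step.
    """
    return (k_far * dtheta_far + k_near * dtheta_near + k_i * theta_near) * dt
```

On a straight road with the vehicle centred, all inputs are zero and the law commands no steering change; a drifting near point produces a corrective steer back toward the lane centre.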

406 citations


Journal ArticleDOI
TL;DR: The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red–green, and blue–yellow channels of the primate visual system.
Abstract: We present an algorithm for separating the shading and reflectance images of photographed natural scenes. The algorithm exploits the constraint that in natural scenes chromatic and luminance variations that are co-aligned mainly arise from changes in surface reflectance, whereas near-pure luminance variations mainly arise from shading and shadows. The novel aspect of the algorithm is the initial separation of the image into luminance and chromatic image planes that correspond to the luminance, red-green, and blue-yellow channels of the primate visual system. The red-green and blue-yellow image planes are analysed to provide a map of the changes in surface reflectance, which is then used to separate the reflectance from shading changes in both the luminance and chromatic image planes. The final reflectance image is obtained by reconstructing the chromatic and luminance-reflectance-change maps, while the shading image is obtained by subtracting the reconstructed luminance-reflectance image from the original luminance image. A number of image examples are included to illustrate the successes and limitations of the algorithm.
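The first stage, splitting the image into luminance and opponent chromatic planes and flagging chromatic variation as reflectance change, can be sketched roughly as follows (the opponent weights and threshold are generic stand-ins, not the paper's calibrated transform or its full co-alignment analysis):

```python
import numpy as np

def opponent_planes(rgb):
    """Split an RGB image (floats in [0, 1]) into approximate luminance,
    red-green, and blue-yellow planes, loosely analogous to the
    postreceptoral channels of the primate visual system."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    lum = (r + g + b) / 3.0
    rg = r - g
    by = b - (r + g) / 2.0
    return lum, rg, by

def reflectance_change_map(rgb, chrom_thresh=0.05):
    """Mark pixels whose chromatic gradient is strong enough to attribute
    to a reflectance change rather than to shading, which is assumed to
    produce near-pure luminance variation."""
    _, rg, by = opponent_planes(rgb)
    grad = np.hypot(*np.gradient(rg)) + np.hypot(*np.gradient(by))
    return grad > chrom_thresh
```

A uniformly coloured surface under varying illumination produces no entries in this map; a boundary between differently pigmented regions does.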

324 citations


Journal ArticleDOI
TL;DR: It is shown that, as the effort associated with walking increases, perceived distance increases if the perceiver intends to walk the extent, but not if the perceiver intends to throw.
Abstract: Perceiving egocentric distance is not only a function of the optical variables to which it relates, but also a function of people's current physiological potential to perform intended actions. In a set of experiments, we showed that, as the effort associated with walking increases, perceived distance increases if the perceiver intends to walk the extent, but not if the perceiver intends to throw. Conversely, as the effort associated with throwing increases, perceived distance increases if people intend to throw to the target, but not if they intend to walk. Perceiving distance combines the geometry of the world with our behavior goals and the potential of our body to achieve these goals.

283 citations


Journal ArticleDOI
TL;DR: Results suggest that apparent health of facial skin both correlates with ratings of male facial attractiveness and acts as a visual cue for judgments of the attractiveness of male faces, consistent with the proposal that attractive physical traits are those that positively influence others' perceptions of an individual's health.
Abstract: Whilst the relationship between aspects of facial shape and attractiveness has been extensively studied, few studies have investigated which characteristics of the surface of faces positively influence attractiveness judgments. As many researchers have proposed a link between attractiveness and traits that appear healthy, apparent health of facial skin might be a property of the surface of faces that positively influences attractiveness judgments. In experiment 1 we tested for a positive correlation between ratings of the apparent health of small skin patches (extracted from the left and right cheeks of digital face images) and ratings of the attractiveness of male faces. By using computer-graphics faces, in experiment 2 we aimed to establish if apparent health of skin influences male facial attractiveness independently of shape information. Results suggest that apparent health of facial skin both correlates with ratings of male facial attractiveness (experiment 1) and acts as a visual cue for judgments of the attractiveness of male faces (experiment 2). These findings underline the importance of controlling for the influence of visible skin condition in studies of facial attractiveness and are consistent with the proposal that attractive physical traits are those that positively influence others' perceptions of an individual's health.

234 citations


Journal ArticleDOI
TL;DR: A consistent negative relationship between odour–colour degree-of-fit ratings and the difference between the odour scores and the colour scores on one of the emotion dimensions (pleasure) suggests that emotional associations may partly underlie odour-colour correspondences.
Abstract: To facilitate communication about fragrances, one can use the colours people tend to associate with their smells. We investigated to what extent odour-colour correspondences for fine fragrances can be accounted for by underlying emotional associations. Odour-colour matches and degree-of-fit judgments revealed that odours were matched to colours non-randomly. Matching colours differed mainly on blackness (brightness), and less on chromaticness (saturation) and hue. Furthermore, we found a consistent negative relationship between odour-colour degree-of-fit ratings and the difference between the odour scores and the colour scores on one of the emotion dimensions (pleasure). This suggests that emotional associations may partly underlie odour-colour correspondences.

145 citations


Journal ArticleDOI
TL;DR: It is concluded that humans use an interception strategy based on the egocentric direction of a moving target, which can be explained by trying to maintain a constant target-heading angle while trying to walk a straight path with transient steering dynamics.
Abstract: How do people walk to a moving target, and what visual information do they use to do so? Under a pursuit strategy, one would head toward the target's current position, whereas under an interception strategy, one would lead the target, ideally by maintaining a constant target-heading angle (or constant bearing angle). Either strategy may be guided by the egocentric direction of the target, local optic flow from the target, or global optic flow from the background. In four experiments, participants walked through a virtual environment to reach a target moving at a constant velocity. Regardless of the initial conditions, they walked ahead of the target for most of a trial at a fairly constant speed, consistent with an interception strategy (experiment 1). This behavior can be explained by trying to maintain a constant target-heading angle while trying to walk a straight path, with transient steering dynamics. In contrast to previous results for stationary targets, manipulation of the local optic flow from the target (experiment 2) and the global optic flow of the background (experiments 3 and 4) failed to influence interception behavior. Relative motion between the target and the background did affect the path slightly, presumably owing to its effect on perceived target motion. We conclude that humans use an interception strategy based on the egocentric direction of a moving target.
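The interception strategy the paper supports, holding the target-heading angle constant while walking, can be sketched as a simple simulation step (the proportional turn-rate gain, speeds, and update rule are illustrative, not the paper's fitted steering dynamics):

```python
import math

def intercept_step(pos, heading, speed, target, target_vel,
                   desired_angle, gain=2.0, dt=0.05):
    """Advance one step of a walker holding a constant target-heading
    angle. Turn rate is proportional to the error between the current
    target-heading angle and the angle being held constant; the target
    moves at constant velocity, as in the experiments."""
    tx, ty = target
    bearing = math.atan2(ty - pos[1], tx - pos[0])
    angle = bearing - heading            # current target-heading angle
    heading += gain * (angle - desired_angle) * dt
    pos = (pos[0] + speed * math.cos(heading) * dt,
           pos[1] + speed * math.sin(heading) * dt)
    target = (tx + target_vel[0] * dt, ty + target_vel[1] * dt)
    return pos, heading, target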

135 citations


Journal ArticleDOI
TL;DR: Results with familiar and unfamiliar faces that were morphed from a happy to an angry expression within a given identity suggest that representations of familiar faces for recognition preserve some information about typical emotional expressions.
Abstract: Face recognition has been assumed to be independent of facial expression. We used familiar and unfamiliar faces that were morphed from a happy to an angry expression within a given identity. Participants performed speeded two-choice decisions according to whether or not a face was familiar. Consistent with earlier findings, reaction times for classifications of unfamiliar faces were independent of facial expressions. In contrast, expression clearly influenced the recognition of familiar faces, with fastest recognition for moderately happy expressions. This suggests that representations of familiar faces for recognition preserve some information about typical emotional expressions.

124 citations


Journal ArticleDOI
TL;DR: The results suggest that episodic familiarity affects attractiveness ratings independently of general or structural familiarity, and the implications for the ‘face-space’ model are discussed.
Abstract: Several studies have shown that facial attractiveness is positively correlated with both familiarity and typicality. Here we manipulated the familiarity of typical and distinctive faces to measure the effect on attractiveness. In our first experiment, we collected ratings of attractiveness, distinctiveness, and familiarity using three different groups of participants. Our stimuli included 84 images of female faces, presented in a full-face view. We replicated the finding that attractiveness ratings negatively correlate with distinctiveness ratings. In addition, we showed that attractiveness ratings were positively correlated with familiarity ratings. In our second experiment, we demonstrated that increasing exposure to faces increases their attractiveness, although there was no differential effect of exposure on typical and distinctive faces. Our results suggest that episodic familiarity affects attractiveness ratings independently of general or structural familiarity. The implications of our findings for the 'face-space' model are discussed.

99 citations


Journal ArticleDOI
TL;DR: The findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.
Abstract: On the basis of the finding that a common and homogeneous ground surface is vital for accurate egocentric distance judgments (Sinai et al, 1998 Nature 395 497-500), we propose a sequential-surface-integration-process (SSIP) hypothesis to elucidate how the visual system constructs a representation of the ground-surface in the intermediate distance range. According to the SSIP hypothesis, a near ground-surface representation is formed from near depth cues, and is utilized as an anchor to integrate the more distant surfaces by using texture-gradient information as the depth cue. The SSIP hypothesis provides an explanation for the finding that egocentric distance judgment is underestimated when a texture boundary exists on the ground surface that commonly supports the observer and target. We tested the prediction that the fidelity of the visually represented ground-surface reference frame depends on how the visual system selects the surface information for integration. Specifically, if information is selected along a direct route between the observer and target where the ground surface is disrupted by an occluding object, the ground surface will be inaccurately represented. In experiments 1-3 we used a perceptual task and two different visually directed tasks to show that this leads to egocentric distance underestimation. Judgment is accurate, however, when the observer selects the continuous ground information bypassing the occluding object (indirect route), as found in experiments 4 and 5 with a visually directed task. Altogether, our findings provide support for the SSIP hypothesis and reveal, surprisingly, that the phenomenal visual space is not unique but depends on how optic information is selected.

Journal ArticleDOI
TL;DR: The current results suggest that if a sufficiently large vection advantage can be produced when participants are expecting to experience self-motion, it is likely to persist in object-motion-bias conditions.
Abstract: Both coherent perspective jitter and explicit changing-size cues have been shown to improve the vection induced by radially expanding optic flow. We examined whether these stimulus-based vection advantages could be modified by altering cognitions and/or expectations about both the likelihood of self-motion perception and the purpose of the experiment. In the main experiment, participants were randomly assigned into two groups: one where the cognitive conditions biased participants towards self-motion perception and another where the cognitive conditions biased them towards object-motion perception. Contrary to earlier findings by Lepecq et al (1995 Perception 24 435-449), we found that identical visual displays were less likely to induce vection in 'object-motion-bias' conditions than in 'self-motion-bias' conditions. However, significant jitter and size advantages for vection were still found in both cognitive conditions (cognitive bias effects were greatest for non-jittering same-size control displays). The current results suggest that if a sufficiently large vection advantage can be produced when participants are expecting to experience self-motion, it is likely to persist in object-motion-bias conditions.

Journal ArticleDOI
TL;DR: The perceptually bistable character of point-light walkers has been examined in three experiments and the effects of disambiguating the stimulus by introducing a local depth cue (occlusion) or a more global depth cue (applying perspective projection) were explored.
Abstract: The perceptually bistable character of point-light walkers has been examined in three experiments. A point-light figure without explicit depth cues constitutes a perfectly ambiguous stimulus: from all viewpoints, multiple interpretations are possible concerning the depth orientation of the figure. In the first experiment, it is shown that non-lateral views of the walker are indeed interpreted in two orientations, either as facing towards the viewer or as facing away from the viewer, but that the interpretation in which the walker is oriented towards the viewer is reported more frequently. In the second experiment the point-light figure was walking backwards, making the global orientation of the point-light figure opposite to the direction of global motion. The interpretation in which the walker was facing the viewer was again reported more frequently. The robustness of these findings was examined in the final experiment, in which the effects of disambiguating the stimulus by introducing a local depth cue (occlusion) or a more global depth cue (applying perspective projection) were explored.

Journal ArticleDOI
TL;DR: It is found that the availability of visual information during locomotion (particularly optic flow) led to an ‘under-perception’ of movement relative to conditions in which visual information was absent during locomotion.
Abstract: By systematically varying cue availability in the stimulus and response phases of a series of same-modality and cross-modality distance matching tasks, we examined the contributions of static visual information, idiothetic information, and optic flow information. The experiment was conducted in a large-scale, open, outdoor environment. Subjects were presented with information about a distance and were then required to turn 180° before producing a distance estimate. Distance encoding and responding occurred via: (i) visually perceived target distance, or (ii) traversed distance through either blindfolded locomotion or during sighted locomotion. The results demonstrated that subjects performed with similar accuracy across all conditions. In conditions in which the stimulus and the response were delivered in the same mode, when visual information was absent, constant error was minimal; whereas, when visual information was present, overestimation was observed. In conditions in which the stimulus and response modes differed, a consistent error pattern was observed. By systematically comparing complementary conditions, we found that the availability of visual information during locomotion (particularly optic flow) led to an 'under-perception' of movement relative to conditions in which visual information was absent during locomotion.

Journal ArticleDOI
TL;DR: Data from fifty naïve and fifty professional observers suggest that there are key types of behaviour (particularly gestures and body position) that allow predictions to be made and the performance of naïve observers is comparable to that of experts.
Abstract: Can potentially antisocial or criminal behaviour be predicted? Our study aimed to ascertain (a) whether observers can successfully predict the onset of such behaviour when viewing real recordings from CCTV; (b) where, in the sequence of events, it is possible to make this prediction; and (c) whether there may be a difference between naive and professional observers. We used 100 sample scenes from UK urban locations. Of these, 18 led to criminal behaviour (fights or vandalism). A further 18 scenes were matched as closely as possible to the crime examples, but did not lead to any crime, and 64 were neutral scenes chosen from a wide variety of noncriminal situations. A signal-detection paradigm was used in conjunction with a 6-point rating scale. Data from fifty naive and fifty professional observers suggest that (a) observers can distinguish crime sequences from neutral sequences and from matches; (b) there are key types of behaviour (particularly gestures and body position) that allow predictions to be made; (c) the performance of naive observers is comparable to that of experts. However, because the experts were predominantly male, the absence of an effect of experience may have been due to gender differences, which were investigated in a subsidiary experiment. The results of experiment 2 leave open the possibility that females perform better than males at such tasks.
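Analyses in a signal-detection paradigm of this kind compare hit rates (crime scenes rated as leading to crime) against false-alarm rates (neutral or matched scenes rated the same way). A standard yes/no sensitivity index, shown here with a common log-linear correction for extreme rates (this is a generic d' computation, not necessarily the exact analysis the authors ran on their 6-point ratings):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' from a yes/no signal-detection table.
    Adds 0.5 to each count (log-linear correction) so that rates of
    exactly 0 or 1 do not produce infinite z-scores."""
    hr = (hits + 0.5) / (hits + misses + 1.0)
    far = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hr) - z(far)
```

Chance performance gives d' near zero; an observer who reliably distinguishes crime sequences from matched controls gives a positive d'.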

Journal ArticleDOI
TL;DR: It is suggested that general, nonsensory difficulties may underlie the poor performance of dyslexic groups on many psychophysical tasks.
Abstract: Dyslexic groups have been reported to display poorer mean performance than groups of normal readers on a variety of psychophysical tasks. However, inspection of the distribution of individual scores for each group typically reveals that the majority of dyslexic observers actually perform within the normal range. Differences between group means often reflect the influence of a small number of dyslexic individuals who perform very poorly. While such findings are typically interpreted as evidence for specific perceptual deficiencies in dyslexia, caution in this approach is necessary. In this study we examined how general difficulties with task completion might manifest themselves in group psychophysical studies. Simulations of the effect of errant or inattentive trials on performance produced patterns of variability similar to those seen in dyslexic groups. Additionally, predicted relationships between the relative variability in dyslexic and control groups, and the magnitude of group differences bore close resemblance to the outcomes of a meta-analysis of empirical studies. These results suggest that general, nonsensory difficulties may underlie the poor performance of dyslexic groups on many psychophysical tasks. Implications and recommendations for future research are discussed.
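The kind of simulation described, in which occasional errant or inattentive trials degrade measured performance, can be sketched as a lapsing observer model (the parameter values are illustrative, not those of the paper's simulations):

```python
import random

def simulate_observer(p_correct=0.85, lapse_rate=0.0, n_trials=200, rng=None):
    """Proportion correct for a simulated 2AFC observer whose sensory
    performance is p_correct, but who lapses and guesses (p = 0.5) on a
    fraction lapse_rate of trials."""
    rng = rng or random.Random(0)
    correct = 0
    for _ in range(n_trials):
        p = 0.5 if rng.random() < lapse_rate else p_correct
        correct += rng.random() < p
    return correct / n_trials
```

Averaging such observers, a group containing a few high-lapse individuals shows a depressed mean and inflated variance even though most members perform within the normal range, which is the pattern the authors report in dyslexic groups.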

Journal ArticleDOI
TL;DR: In two different experiments a visual habituation/dishabituation procedure was used to test groups of 3–10-month-old infants for their ability to discriminate the role reversal of two abstract figures chasing each other on a computer screen, revealing important changes in perceptual-cognitive development.
Abstract: In two different experiments a visual habituation/dishabituation procedure was used to test groups of 3-10-month-old infants for their ability to discriminate the role reversal of two abstract figures (discs of different colors) chasing each other on a computer screen. Results of the first experiment point to a reliable age effect. Only 8-10-month-old infants tended to dishabituate to a role reversal between chaser and chasee. A second experiment shows that in dishabituating to the role reversal, 8-10-month-olds do base this discrimination on relational information between the two discs and not merely on the contrast between their respective vitality or discrete dynamic. By the age of 8-10 months, infants demonstrate sensitivity to information specifying what one disc does to the other, at a distance. These findings point to important changes in perceptual-cognitive development and are discussed in the context of a well described key transition in social-cognitive development occurring at around 9 months of age.

Journal ArticleDOI
TL;DR: The major result was that relative target–distractor salience and target-distractor similarity affected search performance independently, and performance was better in cases where the irrelevant distractor was not a salient item in the search display and did not look similar to the target.
Abstract: Previous research suggests that the allocation of attention is largely controlled either in a stimulus-driven or in a goal-driven manner. To date, few studies have systematically manipulated variables affecting stimulus-driven and goal-driven selection independently in order to investigate how both manners of control interrelate and affect performance in visual search. In the present study observers were presented with search displays consisting of an array of line segments rotated at various orientations. The task of observers was to indicate the presence or absence of a vertical line segment (the target) presented amongst a series of nontargets and possibly one distractor. By varying the absolute differences in orientation between the target, nontargets, and distractors, relative target-distractor salience and target-distractor similarity were independently manipulated to investigate the contribution of stimulus-driven and goal-driven control. The major result was that relative target-distractor salience and target-distractor similarity affected search performance independently. Performance was better in cases where the irrelevant distractor was not a salient item in the search display and did not look similar to the target. The results are discussed in terms of models of attentional control.

Journal ArticleDOI
TL;DR: It is shown that cast shadows can have a significant influence on the speed of visual search and results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access.
Abstract: We show that cast shadows can have a significant influence on the speed of visual search. In particular, we find that search based on the shape of a region is affected when the region is darker than the background and corresponds to a shadow formed by lighting from above. Results support the proposal that an early-level system rapidly identifies regions as shadows and then discounts them, making their shapes more difficult to access. Several constraints used by this system are mapped out, including constraints on the luminance and texture of the shadow region, and on the nature of the item casting the shadow. Among other things, this system is found to distinguish between line elements (items containing only edges) and surface elements (items containing visible surfaces), with only the latter deemed capable of casting a shadow.

Journal ArticleDOI
TL;DR: A novel illusion, wherein the perception of causality affects the perceived spatial relations between two objects involved in a collision event: observers systematically underestimate the amount of overlap between two items in an event which is seen as a causal collision.
Abstract: When an object A moves toward an object B until they are adjacent, at which point A stops and B starts moving, we often see a collision; that is, we see A as the cause of B's motion. The spatiotemporal parameters which mediate the perception of causality have been explored in many studies, but this work is seldom related to other aspects of perception. Here we report a novel illusion, wherein the perception of causality affects the perceived spatial relations between two objects involved in a collision event: observers systematically underestimate the amount of overlap between two items in an event which is seen as a causal collision. This occurs even when the causal nature of the event is induced by a surrounding context, such that estimates of the amount of overlap in the very same event are much improved when the event is displayed in isolation, without a 'causal' interpretation. This illusion implies that the perception of causality does not proceed completely independently of other visual processes, but can affect the perception of other spatial properties.

Journal ArticleDOI
TL;DR: The results suggest that the orientation-dependent face mechanism has a rapid whole-face processing capacity specific to the internal second-order (coordinate) spatial relations of facial features.
Abstract: Faces are perceived via an orientation-dependent expert mechanism. We previously showed that inversion impaired perception of the spatial relations of features more in the lower face than in the (more salient) upper face, suggesting a failure to rapidly process this type of structural data from the entire face. In this study we wished to determine if this interaction between inversion and regional salience, which we consider a marker for efficient whole-face processing, was specific to second-order (coordinate) spatial relations or also affected other types of structural information in faces. We used an oddity paradigm to test the ability of seventeen subjects to discriminate changes in feature size, feature spatial relations, and external contour in both the upper and lower face. We also tested fourteen subjects on perception of two different types of spatial relations: second-order changes that create plausible alternative faces, and illegal spatial changes that transgress normal rules of facial geometry. In both experiments we examined for asymmetries between upper-face and lower-face perceptual accuracy with brief stimulus presentations. While all structural changes were less easily discerned in inverted faces, only changes to spatial relations showed a marked asymmetry between the upper and lower face, with far worse performance in the mouth region. Furthermore, this asymmetry was found only for second-order spatial relations and not illegal spatial changes. These results suggest that the orientation-dependent face mechanism has a rapid whole-face processing capacity specific to the internal second-order (coordinate) spatial relations of facial features.

Journal ArticleDOI
TL;DR: It is found that when the objects were presented in a sparse array, search times to find the target were similar for displays composed of simple and compound objects, but when the same objects were presented as dense clutter, search functions were steeper for displays composed of compound objects.
Abstract: An airport security worker searching a suitcase for a weapon is engaging in an especially difficult search task: the target is not well-specified, it is not salient, and it is not predicted by its context. Under these conditions, search may proceed item-by-item. The experiment reported here tested whether the items for this form of search are whole familiar objects. Our displays were composed of color photographs of ordinary objects that were either uniform in color and texture (simple), or had two or more parts with different colors or textures (compound). The observer's task was to detect the presence of a target belonging to a broad category (food). We found that when the objects were presented in a sparse array, search times to find the target were similar for displays composed of simple and compound objects. But when the same objects were presented as dense clutter, search functions were steeper for displays composed of compound objects. We attribute this difference to the difficulty of segmenting compound objects in clutter: compared with simple objects, bottom-up grouping processes are less likely to organize compound objects into a single item. Our results indicate that while search rates in a sparse display may be determined by the number of objects, search rates in clutter are also affected by the number of object parts.

Journal ArticleDOI
TL;DR: The results indicate that the cortical structures involved in the processing of global form achieve functional maturity between 6 and 9 years of age.
Abstract: We studied the development of sensitivity to global form in 6-year-olds, 9-year-olds, and adults (n = 24 in each group) using Glass patterns with varying ratios of paired signal dots to noise dots. The developmental pattern was similar whether the global structure within the Glass patterns was concentric or parallel. Thresholds were equally immature for both types of pattern at 6 years of age (about twice the adult value) but were adult-like at 9 years of age. Together, the results indicate that the cortical structures involved in the processing of global form achieve functional maturity between 6 and 9 years of age. During middle childhood, the mechanisms mediating sensitivity to concentric structure develop at the same rate as those mediating sensitivity to parallel structure.
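The stimulus construction described above, paired 'signal' dots obeying a concentric or parallel rule mixed with randomly oriented 'noise' pairs, can be sketched as follows. The function name, dot-pair offset, and parameter defaults are illustrative assumptions, not the study's actual stimulus parameters.

```python
import math
import random

def glass_pattern(n_pairs, signal_ratio, kind="concentric",
                  offset=0.02, seed=0):
    """Generate dot positions for a Glass pattern on the unit square.

    A fraction `signal_ratio` of the dot pairs is oriented according
    to the global rule (concentric: tangential to circles about the
    centre; parallel: a fixed direction); the remaining pairs get
    random partner directions (noise). Values are illustrative only.
    """
    rng = random.Random(seed)
    dots = []
    n_signal = int(signal_ratio * n_pairs)
    for i in range(n_pairs):
        x, y = rng.random(), rng.random()
        if i < n_signal:
            if kind == "concentric":
                # Tangent to the circle through (x, y) about the centre.
                theta = math.atan2(y - 0.5, x - 0.5) + math.pi / 2
            else:  # "parallel": one fixed global direction
                theta = 0.0
        else:
            # Noise pair: random orientation.
            theta = rng.uniform(0.0, 2.0 * math.pi)
        dots.append((x, y))
        dots.append((x + offset * math.cos(theta),
                     y + offset * math.sin(theta)))
    return dots
```

Raising `signal_ratio` from 0 toward 1 traces out the signal-to-noise manipulation used to measure coherence thresholds.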

Journal ArticleDOI
TL;DR: In this work, it is shown that the estimate of the light source position is affected by a gradual luminance ramp added to the image, and that observers process impossible shadow images as if they ignored the local features of the objects.
Abstract: Shadows cast by objects contain potentially useful information about the location of these objects in the scene as well as their surface reflectance. However, before the visual system can use this information, it has to solve the shadow correspondence problem, that is, to match the objects with their respective shadows. In the first experiment, it is shown that the estimate of the light source position is affected by a gradual luminance ramp added to the image. In the second experiment, it is shown that observers process impossible shadow images as if they ignored the local features of the objects. Altogether, the results suggest that the visual system solves the shadow correspondence problem by relying on a coarse representation of the scene.

Journal ArticleDOI
TL;DR: It is concluded that deficits in luminance, spatial resolution, curvature, line orientation, and contrast at low spatial frequencies are unlikely to contribute to apperceptive prosopagnosia, and more relevant may be contrast sensitivity at higher spatial frequencies and the analysis of object spatial structure.
Abstract: Some patients with prosopagnosia may have an apperceptive basis to their recognition defect. Perceptual abnormalities have been reported in single cases or small series, but the causal link of such deficits to prosopagnosia is unclear. Our goal was to identify candidate perceptual processes that might contribute to prosopagnosia, by subjecting several prosopagnosic patients to a battery of functions that may be necessary for accurate facial perception. We tested seven prosopagnosic patients. Three had unilateral right occipitotemporal lesions, two had bilateral posterior occipitotemporal lesions, and one had right anterior-to-occipital temporal damage along with a small left temporal lesion. These lesions all included the fusiform face area, in contrast to one patient with bilateral anterior temporal lesions. Most patients had impaired performance on face-matching tests and difficulty with subcategory judgments for non-face objects. The most consistent deficits in patients with lesions involving the fusiform...

Journal ArticleDOI
TL;DR: The results show that in this task males are on average less context-sensitive than females, that the overlap is large, and that subjects with very high or very low context sensitivity tend to have the sex and profession predicted by the above hypotheses.
Abstract: Context sensitivity of size perception has previously been used to study individual differences related to the distinction between local, analytic, or field-independent and global, holistic, or field-dependent perceptual styles. For example, it has been used in several recent studies of autistic spectrum disorders, which may involve an excessive bias toward local processing. Autism is much more common in males, and there is evidence that this may be in part because males in general tend to be less context-sensitive than females, and thus are more affected by conditions that further reduce context sensitivity. There is also evidence that a bias to local processing is more common in professions that require attention to detail. Context sensitivity of size perception was therefore studied as a function of sex and academic discipline in sixty-four university staff and students by a simple, sensitive, and specific psychophysical measure based on the Ebbinghaus illusion. The results show that in this task males are on average less context-sensitive than females, that the overlap is large, and that subjects with very high or very low context sensitivity tend to have the sex and profession predicted by the above hypotheses.

Journal ArticleDOI
TL;DR: A model is presented in which the direction of perceptual ‘up’ is determined from the sum of three weighted vectors corresponding to the vision, gravity, and body-orientation cues, which contributes to the understanding of how shape-from-shading is deduced, and also predicts the confidence with which the ‘ up’ direction is perceived.
Abstract: The perception of shading-defined form results from an interaction between shading cues and the frames of reference within which those cues are interpreted. In the absence of a clear source of illumination, the definition of 'up' becomes critical to deducing the perceived shape from a particular pattern of shading. In our experiments, twelve subjects adjusted the orientation of a planar disc painted with a linear luminance gradient from one side to the other, until the disc appeared maximally convex, that is, until the luminance gradient induced the maximum perception of a three-dimensional shape. The vision, gravity, and body-orientation cues were altered relative to each other. Visual cues were manipulated by the York Tilted Room facility, and body cues were altered by simply lying on one side. The orientation of the disc that appeared maximally convex varied in a systematic fashion with these manipulations. We present a model in which the direction of perceptual 'up' is determined from the sum of three weighted vectors corresponding to the vision, gravity, and body-orientation cues. The model predicts the perceived direction of 'up', contributes to our understanding of how shape-from-shading is deduced, and also predicts the confidence with which the 'up' direction is perceived.
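The weighted vector-sum model described above can be sketched in a few lines. The cue directions and weights below are illustrative assumptions, not the values fitted in the study; the resultant's magnitude is used as the model's confidence proxy.

```python
import math

def perceived_up(cues, weights):
    """Combine directional cues (unit vectors in the frontal plane)
    into a single perceived 'up' direction by weighted vector sum.

    Returns (direction, magnitude): the normalised resultant and its
    length, which the model takes as a proxy for the confidence with
    which 'up' is perceived. Weights here are illustrative only.
    """
    x = sum(w * cx for (cx, cy), w in zip(cues, weights))
    y = sum(w * cy for (cx, cy), w in zip(cues, weights))
    mag = math.hypot(x, y)
    if mag == 0.0:
        return None, 0.0  # cues cancel: no defined 'up'
    return (x / mag, y / mag), mag

def unit(deg):
    """Unit vector at `deg` degrees from gravitational up."""
    r = math.radians(deg)
    return (math.sin(r), math.cos(r))

# Example: observer lying on their right side in an upright room.
# Vision and gravity agree; the body cue points 90 degrees away.
vision, gravity, body = unit(0.0), unit(0.0), unit(90.0)
direction, confidence = perceived_up([vision, gravity, body],
                                     weights=[0.25, 0.55, 0.20])
```

With these (hypothetical) weights, perceptual 'up' is pulled a little toward the body axis but remains dominated by the congruent vision and gravity cues; conflicting cues shorten the resultant and so lower the predicted confidence.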

Journal ArticleDOI
TL;DR: It is concluded that cases of both oblique and frontal viewing are very similar in that perception simply follows what is indicated by the proximal stimulus, even though this may imply that the (perceived) physical and pictorial spaces segregate.
Abstract: The eyes of portrayed people are often noticed to 'follow you' when you move with respect to a flat painting or photograph. We investigated this well-known effect through extensive measurements of pictorial relief and apparent orientation of the picture surface for a number of viewing conditions, including frontal and oblique views. We conclude that cases of both oblique and frontal viewing are very similar in that perception simply follows what is indicated by the proximal stimulus, even though this may imply that the (perceived) physical and pictorial spaces segregate. The effect of foreshortening then causes an apparent narrowing of pictorial objects. We find no evidence for any 'correction' mechanisms that might be specifically active in oblique viewing conditions.

Journal ArticleDOI
Shinki Ando1
TL;DR: The results suggest that one mechanism for gaze judgment is based on a simple analysis of the local luminance ratio between the eye and the surrounding region.
Abstract: Changing the luminance of one side of the sclera induces an apparent shift of the perceived direction of gaze toward the darker side of the sclera (Ando, 2002 Perception 31 657-674). However, when both the sclera and the skin surrounding it were darkened simultaneously as if by a cast shadow, the apparent direction of gaze shifted less than when only the sclera was darkened. The results suggest that one mechanism for gaze judgment is based on a simple analysis of the local luminance ratio between the eye and the surrounding region.
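A minimal sketch of the luminance-ratio account is given below. The function, its linear bias measure, and the numeric values are illustrative assumptions, not the study's stimuli or analysis; the point is only that normalising each side of the sclera by the surrounding skin makes the predicted gaze shift invariant to a shadow that darkens eye and surround together.

```python
def gaze_bias(left_sclera, right_sclera, surround):
    """Toy model of gaze judgment from local luminance ratios.

    Arguments are mean luminances; each sclera value is normalised
    by the surrounding skin. Returns a signed bias: negative means
    perceived gaze is pulled toward the (darker) left side of the
    sclera, positive toward the right. Illustrative sketch only.
    """
    return left_sclera / surround - right_sclera / surround

# Darker left sclera: negative bias (gaze pulled leftward).
print(gaze_bias(40.0, 80.0, 100.0))

# A cast shadow darkens eye and surround together: the ratios, and
# hence the predicted bias, are unchanged.
print(gaze_bias(20.0, 40.0, 50.0))
```

The second call shows why a cast shadow over both sclera and skin should shift perceived gaze less than darkening the sclera alone, consistent with the result reported above.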

Journal ArticleDOI
TL;DR: The results revealed that intentional car following reduced the spread of search and increased fixation durations, with a dramatic increase in the time spent processing the vehicle ahead (controlled for exposure), which was most pronounced during nighttime drives.
Abstract: Does intentional car following capture visual attention to the extent that driving may be impaired? We tested fifteen participants on a rudimentary driving simulator. Participants were either instructed to follow a vehicle ahead through a simulated version of London, or were given verbal instructions on where to turn during the route. The presence or absence of pedestrians, and the simulated time of the drive (day or night) were varied across the trials. Eye movements were recorded along with behavioural measures including give-way violations, give-way accidents, and kerb impacts. The results revealed that intentional car following reduced the spread of search and increased fixation durations, with a dramatic increase in the time spent processing the vehicle ahead (controlled for exposure). The effects were most pronounced during nighttime drives. During the car-following trials participants were also less aware of pedestrians, produced more give-way violations, and were involved in more give-way accidents. The results draw attention to the problems encountered during car following, and we relate this to the cognitive demands placed on drivers, especially police drivers who often engage in intentional car following and pursuits.