
Showing papers in "Psychological Research-psychologische Forschung in 2007"


Journal ArticleDOI
TL;DR: It is shown that increasing the number of targets in the stream can lead to remarkable improvements as long as there are no intervening distractors, and the findings suggest a strong role for attentional control processes that may be overzealously applied.
Abstract: The identification of the second of two targets presented in close succession is often impaired--a phenomenon referred to as the attentional blink. Extending earlier work (Di Lollo, Kawahara, Ghorashi, and Enns, in Psychological Research 69:191-200, 2005), the present study shows that increasing the number of targets in the stream can lead to remarkable improvements as long as there are no intervening distractors. In addition, items may even recover from an already induced blink whenever they are preceded by another target. It is shown that limited memory resources contribute to overall performance, but independent of the attentional blink. The findings argue against a limited-capacity account of the blink and suggest a strong role for attentional control processes that may be overzealously applied.

205 citations


Journal ArticleDOI
TL;DR: These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.
Abstract: Neurophysiological observations suggest that attending to a particular perceptual dimension, such as location or shape, engages dimension-related action, such as reaching and prehension networks. Here we reversed the perspective and hypothesized that activating action systems may prime the processing of stimuli defined on perceptual dimensions related to these actions. Subjects prepared for a reaching or grasping action and, before carrying it out, were presented with location- or size-defined stimulus events. As predicted, performance on the stimulus event varied with action preparation: planning a reaching action facilitated detecting deviants in location sequences whereas planning a grasping action facilitated detecting deviants in size sequences. These findings support the theory of event coding, which claims that perceptual codes and action plans share a common representational medium, which presumably involves the human premotor cortex.

182 citations


Journal ArticleDOI
TL;DR: The present findings suggest that attentional focus on body sway, induced by the instructions, promoted the use of a less automatic control process and hampered the efficiency of postural control during quiet standing.
Abstract: The purpose of this study was to investigate how attentional focus on body sway affects postural control during quiet standing. To address this issue, sixteen young healthy adults were asked to stand upright as immobile as possible on a force platform in both Control and Attention conditions. In the latter condition, participants were instructed to deliberately focus their attention on their body sways and to increase their active intervention into postural control. The critical analysis was focused on elementary motions computed from the centre of pressure (CoP) trajectories: (1) the vertical projection of the centre of gravity (CoGv) and (2) the difference between CoP and CoGv (CoP–CoGv). The former is recognised as an index of performance in this postural task, whilst the latter constitutes a fair expression of the ankle joint stiffness and is linked to the level of neuromuscular activity of the lower limb muscles required for controlling posture. A frequency-domain analysis showed increased amplitudes and frequencies of CoP–CoGv motions in the Attention relative to the Control condition, whereas non-significant changes were observed for the CoGv motions. Altogether, the present findings suggest that attentional focus on body sway, induced by the instructions, promoted the use of a less automatic control process and hampered the efficiency of postural control during quiet standing.

167 citations


Journal ArticleDOI
TL;DR: It is found that Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals were sensitive to the phonemic contrast; when the stimuli were presented only visually, neither group presented clear signs of discrimination.
Abstract: We investigated the effects of visual speech information (articulatory gestures) on the perception of second language (L2) sounds. Previous studies have demonstrated that listeners often fail to hear the difference between certain non-native phonemic contrasts, such as in the case of Spanish native speakers regarding the Catalan sounds /ɛ/ and /e/. Here, we tested whether adding visual information about the articulatory gestures (i.e., lip movements) could enhance this perceptual ability. We found that, for auditory-only presentations, Spanish-dominant bilinguals failed to show sensitivity to the /ɛ/–/e/ contrast, whereas Catalan-dominant bilinguals did. Yet, when the same speech events were presented audiovisually, Spanish-dominants (as well as Catalan-dominants) were sensitive to the phonemic contrast. Finally, when the stimuli were presented only visually (in the absence of sound), neither group presented clear signs of discrimination. Our results suggest that visual speech gestures enhance second language perception at the level of phonological processing, especially by way of multisensory integration.

134 citations


Journal ArticleDOI
TL;DR: The findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus to one response feature, thereby increasing the likelihood of repeating a response if one or more stimulus features are repeated.
Abstract: Five experiments investigated the spontaneous integration of stimulus and response features. Participants performed simple, prepared responses (R1) to the mere presence of Go signals (S1) before carrying out another, freely chosen response (R2) to another stimulus (S2), the main question being whether the likelihood of repeating a response depends on whether or not the stimulus, or some of its features, are repeated. Indeed, participants were more likely to repeat the previous response if stimulus form or color was repeated than if it was alternated. The same was true for stimulus location, but only if location was made task-relevant, whether by defining the response set in terms of location, by requiring the report of S2 location, or by having S1 to be selected against a distractor. These findings suggest that task-relevant stimulus and response features are spontaneously integrated into independent, local event files, each linking one stimulus to one response feature. Upon reactivation of one member of the binary link, activation spreads to the other, thereby increasing the likelihood of repeating a response if one or more stimulus features are repeated. These findings support the idea that both perceptual events and action plans are cognitively represented in terms of their features, and that feature-integration processes cross borders between perception and action.

123 citations


Journal ArticleDOI
TL;DR: A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform, and visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions.
Abstract: Robust and effortless spatial orientation critically relies on "automatic and obligatory spatial updating", a largely automatized and reflex-like process that transforms our mental egocentric representation of the immediate surroundings during ego-motions. A rapid pointing paradigm was used to assess automatic/obligatory spatial updating after visually displayed upright rotations with or without concomitant physical rotations using a motion platform. Visual stimuli displaying a natural, subject-known scene proved sufficient for enabling automatic and obligatory spatial updating, irrespective of concurrent physical motions. This challenges the prevailing notion that visual cues alone are insufficient for enabling such spatial updating of rotations, and that vestibular/proprioceptive cues are both required and sufficient. Displaying optic flow devoid of landmarks during the motion and pointing phase was insufficient for enabling automatic spatial updating, but could not be entirely ignored either. Interestingly, additional physical motion cues hardly improved performance, and were insufficient for affording automatic spatial updating. The results are discussed in the context of the mental transformation hypothesis and the sensorimotor interference hypothesis, which associates difficulties in imagined perspective switches to interference between the sensorimotor and cognitive (to-be-imagined) perspective.

121 citations


Journal ArticleDOI
TL;DR: This work manipulated full form videos to obtain precise control of the perceived kinematics of a box lifting action, and uses this technique to explore the kinematic cues that affect weight judgment, finding that observers rely most on the duration of the lifting movement to judge weight.
Abstract: When accepting a parcel from another person, we are able to use information about that person’s movement to estimate in advance the weight of the parcel, that is, to judge its weight from observed action. Perceptual weight judgment provides a powerful method to study our interpretation of other people’s actions, but it is not known what sources of information are used in judging weight. We have manipulated full form videos to obtain precise control of the perceived kinematics of a box lifting action, and use this technique to explore the kinematic cues that affect weight judgment. We find that observers rely most on the duration of the lifting movement to judge weight, and make less use of the durations of the grasp phase, when the box is first gripped, or the place phase, when the box is put down. These findings can be compared to the kinematics of natural box lifting behaviour, where we find that the duration of the grasp component is the best predictor of true box weight. The lack of accord between the optimal cues predicted by the natural behaviour and the cues actually used in the perceptual task has implications for our understanding of action observation in terms of a motor simulation. The differences between perceptual and motor behaviour are evidence against a strong version of the motor simulation hypothesis.

114 citations


Journal ArticleDOI
TL;DR: Findings are consistent with the idea that a large portion of the congruency effects stems from direct S–R associations and they do not support a sole mediation by task-set activation in working memory.
Abstract: When people frequently alternate between simple cognitive tasks, performance on stimuli which are assigned the same response in both tasks is typically faster and more accurate than on stimuli which require different responses for both tasks, thus indicating stimulus processing according to the stimulus–response (S–R) rules of the currently irrelevant task. It is currently under debate whether such response congruency effects are mediated by the activation of an abstract representation of the irrelevant task in working memory or by “direct” associations between specific stimuli and responses. We contrasted these views by manipulating concurrent memory load (Experiment 1) and the frequency of specific S–R associations (Experiment 2). While between-task response congruency effects were not affected by the amount of concurrent memory load, they were much stronger for stimuli that were processed frequently in the context of a competitor task. These findings are consistent with the idea that a large portion of the congruency effects stems from direct S–R associations and they do not support a sole mediation by task-set activation in working memory.

114 citations


Journal ArticleDOI
TL;DR: Results supported the prediction that the need for high levels of cognitive control can be alleviated to some degree by making if–then plans that specify how one responds to critical stimuli.
Abstract: Two tasks where failures of cognitive control are especially prevalent are task-switching and spatial Simon task paradigms. Both tasks require considerable strategic control for the participant to avoid the costs associated with switching tasks (task-switching paradigm) and to minimize the influence of spatial location (Simon task). In the current study, we assessed whether the use of a self-regulatory strategy known as "implementation intentions" would have any beneficial effects on performance in each of these task domains. Forming an implementation intention (i.e., an if-then plan) is a self-regulatory strategy in which a mental link is created between a pre-specified future cue and a desired goal-directed response, resulting in facilitated goal attainment (Gollwitzer in European Review of Social Psychology, 4, 141-185, 1993, American Psychologist, 54, 493-503, 1999). In Experiment 1, forming implementation intentions in the context of a task-switching paradigm led to a reduction in switch costs. In Experiment 2, forming implementation intentions reduced the effects of spatial location in a Simon task for the stimulus specified in the implementation intention. Results supported the prediction that the need for high levels of cognitive control can be alleviated to some degree by making if-then plans that specify how one responds to critical stimuli.

98 citations


Journal ArticleDOI
TL;DR: Results indicated that both the number of carry/borrow operations and the value of the carry increased problem difficulty, resulting in higher reliance on phonological and executive working-memory components.
Abstract: The present study analyzed the role of phonological and executive components of working memory in the borrow operation in complex subtractions (Experiments 1 and 2) and in the carry operation in complex multiplications (Experiments 3 and 4). The number of carry and borrow operations as well as the value of the carry were manipulated. Results indicated that both the number of carry/borrow operations and the value of the carry increased problem difficulty, resulting in higher reliance on phonological and executive working-memory components. Present results are compared with those obtained for the carry operation in complex addition and are further discussed in the broader framework of working-memory functions.

89 citations
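For readers unfamiliar with the terminology, a borrow operation occurs whenever a column's top digit is too small and a unit must be borrowed from the next column: 62 − 37 requires one borrow, whereas 68 − 34 requires none. A minimal sketch of counting borrows column by column (an illustrative helper, not part of the study's materials):

```python
def count_borrows(minuend, subtrahend):
    """Count column-wise borrow operations in a multi-digit subtraction.

    Illustrative only: walks the columns right to left, borrowing a
    unit from the next column whenever the top digit (minus any
    outstanding borrow) is smaller than the bottom digit.
    """
    borrows = 0
    borrow = 0
    while minuend or subtrahend:
        top = minuend % 10 - borrow
        bottom = subtrahend % 10
        if top < bottom:
            borrows += 1
            borrow = 1
        else:
            borrow = 0
        minuend //= 10
        subtrahend //= 10
    return borrows

# 62 - 37 needs one borrow; 68 - 34 needs none.
```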


Journal ArticleDOI
TL;DR: To the authors' knowledge, the present study is the first to find evidence that the simultaneous aspects of VSWM play a fundamental role in learning from maps.
Abstract: Recently, increasing attention has been devoted to the study of the role of visuo-spatial working memory (VSWM) in environmental learning and spatial navigation. The present research was aimed at investigating the role of VSWM in map learning using a map drawing paradigm. In the first study, a dual task methodology was used. Results showed that map drawing was selectively impaired by a spatial tapping task that was executed during the map learning phase, hence supporting the hypothesis that VSWM plays an essential role in learning from maps. In the second study, using a correlational methodology, it was shown that performance in simultaneous VSWM tasks, but not in sequential VSWM tasks, predicted map drawing skills. These skills, in turn, correlated with map learning abilities. Finally, in the third study, we replicated the results of the second study, by using a different map. To our knowledge, the present study is the first to find evidence that the simultaneous aspects of VSWM play a fundamental role in learning from maps.

Journal ArticleDOI
TL;DR: The present data show that the instructed S–R mappings influence performance on the embedded B-task, even when they have never been practiced, and are irrelevant with respect to the B- task.
Abstract: In order to test whether or not instructions specifying the stimulus–response (S–R) mappings for a new task suffice to create bindings between specified stimulus and response features, we developed a dual task paradigm of the ABBA type in which participants saw new S–R instructions for the A-task in the beginning of each trial. Immediately after the A-task instructions, participants had to perform a logically independent B-task. The imperative stimulus for the A-task was presented after the B-task had been executed. The present data show that the instructed S–R mappings influence performance on the embedded B-task, even when they (1) have never been practiced, and (2) are irrelevant with respect to the B-task. These results imply that instructions can induce bindings between S- and R-features without prior execution of the task at hand.

Journal ArticleDOI
TL;DR: Introducing task rules at the beginning of the experiment led to slower RTs when simple stimuli had to be processed and this detrimental effect disappeared with more complex stimuli; results will be discussed with respect to cognitive control.
Abstract: Switch costs occur whenever participants are asked to switch between two or more task sets. In a typical task switching experiment, participants have to switch between two task sets composed of up to four different stimuli per task set. These 2 (task sets) × 4 (stimuli) contain only 8 different stimulus–response (S–R) mappings, and the question is why participants base their task performance on task sets instead of S–R mappings. The current experiments compared task performance based on task rules with performance based on single stimulus–response mappings. Participants were led to learn eight different S–R mappings with or without foreknowledge about two underlying task sets. Without task set information no difference between shifts and repetitions occurred, whereas introducing task sets at the beginning led to significant switch costs. Most importantly, introducing task sets in the middle of the experiment also resulted in significant switch costs. Furthermore, introducing task rules at the beginning of the experiment led to slower RTs when simple stimuli (Experiment 1) had to be processed. This detrimental effect disappeared with more complex stimuli (Experiment 2). Results will be discussed with respect to cognitive control.

Journal ArticleDOI
TL;DR: The results with blindfolded-sighted participants demonstrate that accurate learning and wayfinding performance is possible using verbal descriptions and that it is sufficient to describe only local geometric detail.
Abstract: This work investigates whether large-scale indoor layouts can be learned and navigated non-visually, using verbal descriptions of layout geometry that are updated, e.g. contingent on a participant's location in a building. In previous research, verbal information has been used to facilitate route following, not to support free exploration and wayfinding. Our results with blindfolded-sighted participants demonstrate that accurate learning and wayfinding performance is possible using verbal descriptions and that it is sufficient to describe only local geometric detail. In addition, no differences in learning or navigation performance were observed between the verbal study and a control study using visual input. Verbal learning was also compared to the performance of a random walk model, demonstrating that human search behavior is not based on chance decision-making. However, the model performed more like human participants after adding a constraint that biased it against reversing direction.

Journal ArticleDOI
TL;DR: The results showed very few differences among groups in the accuracy of the spatial memories acquired, and the improved pointing accuracy of participants who had access to proprioceptive information relative to that of participants in the other conditions.
Abstract: Although many previous studies have shown that body-based sensory modalities such as vestibular, kinesthetic, and efferent information are useful for acquiring spatial information about one's immediate environment, relatively little work has examined how these modalities affect the acquisition of long-term spatial memory. Three groups of participants learned locations along a 146 m indoor route, and subsequently pointed to these locations, estimated distances between them, and constructed maps of the environment. One group had access to visual, proprioceptive, and inertial information, another had access to matched visual and matched inertial information, and another had access only to matched visual information. In contrast to previous findings examining transient, online spatial representations, our results showed very few differences among groups in the accuracy of the spatial memories acquired. The only difference was the improved pointing accuracy of participants who had access to proprioceptive information relative to that of participants in the other conditions. Results are discussed in terms of differential sensory contributions to transient and enduring spatial representations.

Journal ArticleDOI
TL;DR: The gaze behaviour of four hereditary prosopagnosics was studied in comparison to matched control subjects to determine whether not only face recognition and neuronal processing but also the perceptual acquisition of facial information is specific to prosopagnosia.
Abstract: Prosopagnosia is the inability to recognize someone by the face alone in the absence of sensory or intellectual impairment. In contrast to the acquired form of prosopagnosia we studied the congenital form. Since we recently showed that this form is inherited as a simple monogenic trait, we call it the hereditary form. To determine whether not only face recognition and neuronal processing but also the perceptual acquisition of facial information is specific to prosopagnosia, we studied the gaze behaviour of four hereditary prosopagnosics in comparison to matched control subjects. This rarely studied form of prosopagnosia ensures that deficits are limited to face recognition. Whereas the control participants focused their gaze on the central facial features, the hereditary prosopagnosics showed a significantly different gaze behaviour. They had a more dispersed gaze and also fixated external facial features. Thus, the face recognition impairment of the hereditary prosopagnosics is reflected in their gaze behaviour.

Journal ArticleDOI
TL;DR: The results support the prediction that a singleton with respect to luminance contrast receives attentional prioritization and extend the biased-competition account to include size contrast, because a large singleton also receives attentional prioritization.
Abstract: The biased-competition theory of attention proposes that objects compete for cortical representation in a mutually inhibitory network; competition is biased in favor of the attended item. Here we test two predictions derived from the biased-competition theory. First we assessed whether increasing an object’s relative brightness (luminance contrast) biased competition in favor of (i.e., prioritized) the brighter object. Second we assessed whether increasing an object’s size biased competition in favor of the larger object. In fulfillment of these aims we used an attentional capture paradigm to test whether a featural singleton (an item unique with respect to a feature such as size or brightness) can impact attentional priority even when those features are irrelevant to finding the target. The results support the prediction that a singleton with respect to luminance contrast receives attentional prioritization and extend the biased-competition account to include size contrast, because a large singleton also receives attentional prioritization.

Journal ArticleDOI
Endel Põder
TL;DR: It was found that facilitation nearly equal to that of differently coloured targets and flankers can be observed with a differently coloured background blob in the location of the target.
Abstract: The crowding effect of adjacent objects on the recognition of a target can be reduced when target and flankers differ in some feature that is irrelevant to the recognition task. In this study, the mechanisms of this effect were explored using targets and flankers of the same and different colours. It was found that facilitation nearly equal to that of differently coloured targets and flankers can be observed with a differently coloured background blob in the location of the target. The different-colour effect does not require advance knowledge of the target and flanker colours, but the effect increases in the course of three trials with constant mapping of colours. The results are consistent with the notion of exogenous attention that facilitates the processing at the most salient locations in the visual field.

Journal ArticleDOI
TL;DR: Results showed that, even at long SOAs, where IOR is usually observed, facilitation occurred for infrequent targets at the same time that IOR was measured for frequent targets; an explanation is offered by which the different cuing effects can be considered different manifestations of attentional capture on target processing, depending on the task set.
Abstract: Orienting attention exogenously to a location can have two different consequences on processing subsequent stimuli appearing at that location: positive (facilitation) at short intervals and negative (inhibition of return) at long ones. In the present experiments, we manipulated the frequency of targets and responses associated with them. Results showed that, even at long SOAs, where IOR is usually observed, facilitation was observed for infrequent targets at the same time that IOR was measured for frequent targets. These results are difficult to explain on the basis of either task set modulation of attentional capture or task set modulation of subsequent orienting processes. In contrast, we offer an explanation by which the different cuing effects can be considered as different manifestations of attentional capture on target processing, depending on the task set.

Journal ArticleDOI
TL;DR: It is argued that these results clarify the processes of the construction of a spatial mental model, and confirm that the visuo-spatial working memory is involved in mental imagery.
Abstract: The paper investigates the involvement of verbal and visuo-spatial working memory during the processing of spatial texts via a dual-task paradigm. Subjects were presented with three texts describing locations from a route perspective, and had either to imagine themselves moving along a route in surroundings or to rehearse verbal information. Concurrently they had to perform a spatial tapping task, an articulatory task, or no secondary task. Performance on a verification test used to assess the product of comprehension showed that the concurrent tapping task impaired performance in the imagery instructions group but not in the repetition instructions group, and caused the beneficial effect of imagery instructions to vanish. This result was not observed with the articulatory task, where interference effects were similar in both instructions groups. Performance on the concurrent tasks confirmed the pattern obtained with the verification test. In addition, results seem partly dependent on the capacity of spatial working memory as measured by the Corsi Blocks Test. We argue that these results clarify the processes of the construction of a spatial mental model, and confirm that the visuo-spatial working memory is involved in mental imagery.

Journal ArticleDOI
TL;DR: The results suggest that while humans have at least two distinct navigational strategies available to them, unlike ants, a computationally simpler landmark strategy dominates during novel shortcut navigation.
Abstract: Using a metric shortcut paradigm, we have found that like honeybees (Dyer in Animal Behaviour 41:239-246, 1991), humans do not seem to build a metric "cognitive map" from path integration. Instead, observers take novel shortcuts based on visual landmarks whenever they are available and reliable (Foo, Warren, Duchon, & Tarr in Journal of Experimental Psychology-Learning Memory and Cognition 31(2):195-215, 2005). In the present experiment we examine whether humans, like ants (Wolf & Wehner in Journal of Experimental Biology 203:857-868, 2000), first use survey-type path knowledge, built up from path integration, and then subsequently shift to reliance on landmarks. In our study participants walked in an immersive virtual environment while head position and orientation were recorded. During training, participants learned two legs of a triangle with feedback: paths from Home to Red and Home to Blue. A configuration of colored posts surrounded the Red location. To test reliance on landmarks, these posts were covertly translated, rotated, or left unchanged during six probe trials. These probe trials were interspersed during the training procedure to measure changes over learning. Dependence on visual landmarks was immediate and sustained during training, and no significant learning effects were observed other than a decrease in hesitation time. Our results suggest that while humans have at least two distinct navigational strategies available to them, unlike ants, a computationally simpler landmark strategy dominates during novel shortcut navigation.

Journal ArticleDOI
Thomas Kammer
TL;DR: The psychophysical characterization of TMS masking, the dependence on stimulus onset asynchrony between visual stimulus and TMS pulse, and the topography of masking within the visual field are considered.
Abstract: Transcranial magnetic stimulation (TMS) applied over the occipital pole can suppress visual perception. Since its first description in 1989 by Amassian et al., this technique has widely been used to investigate visual processing at the cortical level. This article presents a review of experiments masking visual stimuli by TMS. The psychophysical characterization of TMS masking, the dependence on stimulus onset asynchrony between visual stimulus and TMS pulse, and the topography of masking within the visual field are considered. The relation between visual masking and the generation of phosphenes is discussed as well as the underlying physiological mechanisms.

Journal ArticleDOI
TL;DR: It is concluded that attending to target items on the basis of attentional set, but not active ignoring of nontarget items, is sufficient for the occurrence of sustained inattentional blindness.
Abstract: When participants are attending to a subset of visual targets or events and ignoring irrelevant distractors ("selective looking"), they often fail to detect the appearance of an unexpected visual object or event even when the object is visible for several seconds ("sustained inattentional blindness"). An important factor influencing detection rates in selective looking is the attentional set of the participant: the more similar the features of the unexpected object are to the attended ones, the more likely it is to be detected. We examined the possible contribution of active ignoring to this similarity effect by studying the role of the distractor objects in sustained inattentional blindness. First we showed the similarity effect for chromatic colors and then we manipulated the similarity of the unexpected object in relation to the distractor objects and did not find any effects. Moreover, we found that inattentional blindness was present even when the displays did not contain any irrelevant to-be-ignored objects. We conclude that attending to target items on the basis of attentional set, but not active ignoring of nontarget items, is sufficient for the occurrence of sustained inattentional blindness.

Journal ArticleDOI
TL;DR: It is argued that the 2:1 mapping potentially leads to an underestimation of “pure” task-switch costs, and a new study is reported in which “transition cues” indicate the identity of the current task based on the identity of the preceding task.
Abstract: In the explicit cuing version of the task-switching paradigm, each individual task is indicated by a unique task cue. Consequently, a task switch is accompanied by a cue switch. Recently, it has been proposed that priming of cue encoding contributes to the empirically observed switch costs. This proposal was experimentally supported by using a 2:1 mapping of cues to tasks, so that a cue switch does not necessarily imply a task switch. The results indeed suggested a substantial contribution of "cue-switch costs" to task-switch costs. Here we argue that the 2:1 mapping potentially leads to an underestimation of "pure" task-switch costs. To support this argument, we report the results of a new study in which we used "transition cues" that indicate the identity of the current task based on the identity of the preceding task. This new type of cue allows a full factorial manipulation of cue switches and task switches because it includes the condition in which a cue repetition can also indicate a task switch (i.e., when the "switch" cue is repeated). We discuss the methodological implications and argue that the present approach has merits relative to the previously used 2:1 mapping of cues to tasks.
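The design logic behind transition cues can be sketched with a small enumeration (an illustrative toy model, not code from the study; the task labels and cue names are invented here). It shows why two transition cues suffice for a full factorial crossing of cue switches and task switches, including the cell where a repeated “switch” cue produces a task switch:

```python
from itertools import product

# Two tasks, two transition cues: "REPEAT" means perform the same task as on
# the previous trial, "SWITCH" means perform the other task.
def apply_cue(prev_task, cue):
    if cue == "REPEAT":
        return prev_task
    return "B" if prev_task == "A" else "A"

# Enumerate every reachable (cue transition, task transition) combination.
cells = set()
for prev_cue, cue in product(["REPEAT", "SWITCH"], repeat=2):
    for prev_task in ["A", "B"]:
        task = apply_cue(prev_task, cue)
        cells.add(("cue-" + ("rep" if cue == prev_cue else "sw"),
                   "task-" + ("rep" if task == prev_task else "sw")))

# All four cells occur; notably, repeating the "SWITCH" cue yields a cue
# repetition combined with a task switch.
```

Under this toy scheme all four combinations are reachable, which is the factorial property the abstract attributes to transition cues.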

Journal ArticleDOI
TL;DR: The results show that phase correction in synchronization depends not merely on asynchronies but on perceptual monitoring of multiple temporal references within a metrical hierarchy.
Abstract: A local phase perturbation in an auditory sequence during synchronized finger tapping elicits an automatic phase correction response (PCR). The stimulus for the PCR is usually considered to be the most recent tap-tone asynchrony. In this study, participants tapped on target tones (“beats”) of isochronous tone sequences consisting of beats and subdivisions (1:n tapping). A phase perturbation was introduced either on a beat or on a subdivision. Both types of perturbation elicited a PCR, even though there was no asynchrony associated with a subdivision. Moreover, the PCR to a perturbed beat was smaller when an unperturbed subdivision followed than when there was no subdivision. The relative size of the PCRs to perturbed beats and subdivisions depended on tempo, on whether the subdivision was local or present throughout the sequence, and on whether or not participants engaged in mental subdivision, but not on whether or not taps were made on the subdivision level. The results show that phase correction in synchronization depends not merely on asynchronies but on perceptual monitoring of multiple temporal references within a metrical hierarchy.
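The baseline against which such findings are usually evaluated is the classic first-order linear phase-correction model, in which each inter-tap interval is adjusted by a fraction of the most recent tap-tone asynchrony. The sketch below is a toy simulation of that standard model, not the authors' own; the period, correction gain, and perturbation size are arbitrary choices:

```python
def simulate_tapping(n_taps=50, period=500.0, alpha=0.5,
                     shift_at=25, shift_ms=30.0):
    """Linear phase correction: each inter-tap interval equals the period
    minus a fraction (alpha) of the most recent tap-tone asynchrony."""
    tap, tone = 0.0, 0.0
    asynchronies = []
    for i in range(n_taps):
        asyn = tap - tone              # negative = tap earlier than tone
        asynchronies.append(asyn)
        tap += period - alpha * asyn   # correct a fraction of the asynchrony
        # one phase perturbation: a single tone is delayed by shift_ms
        tone += period + (shift_ms if i == shift_at else 0.0)
    return asynchronies

asyns = simulate_tapping()
# After the perturbation the asynchrony decays geometrically back toward zero
# at rate (1 - alpha) per tap.
```

With alpha = 0.5, the -30 ms asynchrony induced by the delayed tone is halved on every subsequent tap, which is the geometric recovery profile typically reported for the automatic PCR.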

Journal ArticleDOI
TL;DR: Evidence is provided that perception of environmental scenes elicits automatic affective responses and influences recognition of facial expressions, and that the priming effect of environmental pictures is due to the primes representing environmental scenes and not to the presence of certain low-level colour or shape information in the primes.
Abstract: An affective priming paradigm with pictures of environmental scenes and facial expressions as primes and targets, respectively, was employed in order to investigate the role of natural (e.g., vegetation) and built elements (e.g., buildings) in eliciting rapid affective responses. In Experiment 1, images of environmental scenes were digitally manipulated to make continua of priming pictures with a gradual increase of natural elements (and a decrease of built elements). The primes were followed by presentations of facial expressions of happiness and disgust as to-be-recognized target stimuli. Recognition times of happy faces decreased, and recognition times of disgusted faces increased, as the quantity of natural material in the primes increased (and the quantity of built material decreased). The physical changes also influenced the evaluated restorativeness and affective valence of the primes. In Experiment 2, the primes used in Experiment 1 were manipulated in such a way that they were void of any recognizable natural or built elements but contained either similar colours or similar shapes as the primes in Experiment 1. This time the results showed no effect of priming. These results were interpreted as supporting the view that the priming effect of environmental pictures is due to the primes representing environmental scenes and not to the presence of certain low-level colour or shape information in the primes. In all, the present results provide evidence that perception of environmental scenes elicits automatic affective responses and influences recognition of facial expressions.

Journal ArticleDOI
TL;DR: The results support the notion of limited-capacity memory processes in search: allocating memory for the upcoming target may consume capacity that would otherwise be available for the tagging of distractors.
Abstract: Gibson, Li, Skow, Brown, and Cooke (Psychological Science, 11, 324–327, 2000) had participants carry out a search task in which they were required to detect the presence of one or two targets. In order to successfully perform such a multiple-target visual search task, participants had to remember the location of the first target while searching for the second target. In two experiments we investigated the cost of remembering this target location. In Experiment 1, we compared performance on the Gibson et al. task with performance on a more conventional present–absent search task. The comparison suggests a substantial performance cost as measured by reaction time, number of fixations and slope of the search functions. In Experiment 2, we looked in detail at refixations of distractors, which are a direct measure of attentional deployment. We demonstrated that the cost in this multiple-target visual search task was due to an increased number of refixations on previously visited distractors. Such refixations were present right from the start of the search. This change in search behaviour may be caused by the necessity of having to remember a target: allocating memory for the upcoming target may consume memory capacity that would otherwise be available for the tagging of distractors. These results support the notion of limited-capacity memory processes in search.
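A minimal way to see how limited tagging memory produces refixations is a toy serial-search model (purely illustrative, not the authors' analysis; the item count and memory capacity are invented parameters):

```python
import random

def search_with_memory(n_items=20, memory_capacity=None, seed=1):
    """Toy serial search: fixate randomly among items that are not currently
    'tagged'; only the last `memory_capacity` visited items stay tagged.
    memory_capacity=None means perfect memory (no refixations possible).
    Returns the number of fixations needed to inspect every item once."""
    rng = random.Random(seed)
    inspected = set()
    tagged = []   # recently visited items held in memory
    fixations = 0
    while len(inspected) < n_items:
        candidates = [i for i in range(n_items) if i not in tagged]
        item = rng.choice(candidates)
        fixations += 1
        inspected.add(item)
        tagged.append(item)
        if memory_capacity is not None and len(tagged) > memory_capacity:
            tagged.pop(0)   # oldest tag forgotten -> item can be refixated
    return fixations

full = search_with_memory(memory_capacity=None)      # exactly n_items fixations
limited = search_with_memory(memory_capacity=4)      # extra fixations from refixating
```

In this sketch, shrinking the tag memory (as would happen if capacity were consumed by remembering a target location) directly inflates the number of fixations through refixations of forgotten distractors.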

Journal ArticleDOI
TL;DR: The present experiment focuses on the eye movements following a new shot with or without a reversed-angle camera position, finding almost no evidence for confusion and/or for activities to restore the spatial arrangement following the reversal of the left–right positions.
Abstract: First-order editing violations in film refer either to small displacements of the camera position or to small changes of the image size. Second-order editing violations follow from a reversal of the camera position (reversed-angle shot), leading to a change of the left-right position of the main actors (or objects) and a complete change of the background. With third-order editing violations, the linear sequence of actions in the narrative story is not obeyed. The present experiment focuses on the eye movements following a new shot with or without a reversed-angle camera position. The findings minimize the importance of editing rules which require perceptually smooth transitions between shots; there is also no evidence that changes in the left-right orientation of objects in the scene disturb the visual processing of successive shots. The observed eye movements are due either to the redirecting of attention to the most informative part of the scene or to attention shifts caused by motion transients in the shot. There is almost no evidence for confusion and/or for activities to restore the spatial arrangement following the reversal of the left-right positions.

Journal ArticleDOI
TL;DR: It was concluded that visuomotor information transmission occurs whenever there is an overlap between the spatial stimulus feature and parameters of the motor representation of the response.
Abstract: Recent findings indicate that two distinct mechanisms can contribute to a Simon effect: a visuomotor information transmission on the one hand and a cognitive code interference on the other hand (see, e.g., Wiegand & Wascher, in Journal of Experimental Psychology: Human Perception and Performance, 2005a). Furthermore, it was proposed that the occurrence of one or the other mechanism strongly depends on the way responses are coded. Visuomotor information transmission seems to depend on a correspondence between stimulus position and the spatial anatomical status of the effector, whereas cognitive code interference is thought to be based on relative response location codes. To further test the spatial anatomical coding hypothesis, three experiments were conducted in which the Simon effect with unimanual responses was investigated for horizontal (Experiments 1 and 2) and vertical (Experiment 3) stimulus-response (S-R) relations. Based on the finding of a decreasing effect function (indicating the presence of visuomotor information transmission) for horizontal and vertical S-R relations, it was concluded that visuomotor information transmission occurs whenever there is an overlap between the spatial stimulus feature and parameters of the motor representation of the response. Furthermore, the specific motor representation seems to be task dependent; that is, it entails those response parameters that clearly differentiate between the two response alternatives in a given task situation.

Journal ArticleDOI
TL;DR: To understand standard task-switching phenomena it is critical to consider links between lower level stimulus/response parameters and task sets, and it is demonstrated that when each task was associated with unique locations, error switch costs, stimulus–response congruency effects, as well as the characteristic task-switch × repetition-priming interaction were eliminated, and global selection costs were substantially reduced.
Abstract: Response-time and accuracy costs as assessed in the context of the task-switching paradigm are usually thought to represent processes involved in the selection of abstract task sets. However, task sets are also applied to specific stimulus and response constellations, which in turn may become associated with task-set representations. To explore the consequences of such associations, we used a task-switching paradigm in which subjects had to select between two tasks (color or orientation discrimination) that were associated with either shared or unique stimulus/response locations on a touchscreen. When each task was associated with unique locations, error switch costs, stimulus-response congruency effects, as well as the characteristic task-switch × repetition-priming interaction were eliminated, and global selection costs were substantially reduced. These results demonstrate that to understand standard task-switching phenomena it is critical to consider links between lower-level stimulus/response parameters and task sets.