
Showing papers in "Journal of Experimental Psychology: Human Perception and Performance in 1984"


Journal ArticleDOI
TL;DR: In this paper, the effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them.
Abstract: The effect of temporal discontinuity on visual search was assessed by presenting a display in which one item had an abrupt onset, while other items were introduced by gradually removing line segments that camouflaged them. We hypothesized that an abrupt onset in a visual display would capture visual attention, giving this item a processing advantage over items lacking an abrupt leading edge. This prediction was confirmed in Experiment 1. We designed a second experiment to ensure that this finding was due to attentional factors rather than to sensory or perceptual ones. Experiment 3 replicated Experiment 1 and demonstrated that the procedure used to avoid abrupt onset--camouflage removal--did not require a gradual waveform. Implications of these findings for theories of attention are discussed.

1,378 citations


Journal ArticleDOI
TL;DR: Four experiments on the ability to inhibit responses in simple and choice reaction time (RT) tasks were reported, and different methods of selecting stop-signal delays were compared to equate the probability of inhibition in the two tasks.
Abstract: This article reports four experiments on the ability to inhibit responses in simple and choice reaction time (RT) tasks. Subjects responding to visually presented letters were occasionally presented with a stop signal (a tone) that told them not to respond on that trial. The major dependent variables were (a) the probability of inhibiting a response when the signal occurred, (b) mean and standard deviation (SD) of RT on no-signal trials, (c) mean RT on trials on which the signal occurred but subjects failed to inhibit, and (d) estimated RT to the stop signal. A model was proposed to estimate RT to the stop signal and to account for the relations among the variables. Its main assumption is that the RT process and the stopping process race, and response inhibition depends on which process finishes first. The model allows us to account for differences in response inhibition between tasks in terms of transformations of stop-signal delay that represent the relative finishing times of the RT process and the stopping process. The transformations specified by the model were successful in group data and in data from individual subjects, regardless of how delays were selected. The experiments also compared different methods of selecting stop-signal delays to equate the probability of inhibition in the two tasks.
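The race model lends itself to a brief simulation. The sketch below is a Monte Carlo illustration of the model's core assumption only, an independent race between the go (RT) process and the stopping process, with made-up Gaussian finishing-time parameters rather than values fitted to the reported data.

```python
import random

def simulate_race(n_trials=100_000, go_mean=450, go_sd=80,
                  stop_mean=200, stop_sd=30, stop_delay=150, seed=1):
    """Monte Carlo sketch of the independent horse-race model:
    a response is inhibited when the stop process (launched stop_delay
    ms after the go stimulus) finishes before the go process.
    All parameters (in ms) are illustrative, not fitted values."""
    rng = random.Random(seed)
    inhibited = 0
    signal_rts = []  # go RTs on signal trials where inhibition failed
    for _ in range(n_trials):
        go_finish = rng.gauss(go_mean, go_sd)
        stop_finish = stop_delay + rng.gauss(stop_mean, stop_sd)
        if stop_finish < go_finish:
            inhibited += 1          # stop process won the race
        else:
            signal_rts.append(go_finish)  # go process won; respond
    p_inhibit = inhibited / n_trials
    mean_signal_rt = sum(signal_rts) / len(signal_rts)
    return p_inhibit, mean_signal_rt
```

Two qualitative predictions of the model fall out of the simulation: shortening stop-signal delay raises the probability of inhibition, and mean RT on failed-inhibition trials is faster than the no-signal mean, because only fast go finishes beat the stop process.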

1,371 citations


Journal ArticleDOI
TL;DR: It is argued that decision processes having little to do with lexical access accentuate the word-frequency effect in the lexical decision task and that results from this task have questionable value in testing the assumption that word frequency orders the lexicon, thereby affecting time to access the mental lexicon.
Abstract: Three experiments investigated the impact of five lexical variables (instance dominance, category dominance, word frequency, word length in letters, and word length in syllables) on performance in three different tasks involving word recognition: category verification, lexical decision, and pronunciation. Although the same set of words was used in each task, the relationship of the lexical variables to reaction time varied significantly with the task within which the words were embedded. In particular, the effect of word frequency was minimal in the category verification task, whereas it was significantly larger in the pronunciation task and significantly larger yet in the lexical decision task. It is argued that decision processes having little to do with lexical access accentuate the word-frequency effect in the lexical decision task and that results from this task have questionable value in testing the assumption that word frequency orders the lexicon, thereby affecting time to access the mental lexicon. A simple two-stage model is outlined to account for the role of word frequency and other variables in lexical decision. The model is applied to the results of the reported experiments and some of the most important findings in other studies of lexical decision and pronunciation.

1,194 citations


Journal ArticleDOI
TL;DR: It is concluded that perception for the control of action reflects the underlying dynamics of the animal-environment system.
Abstract: How do animals visually guide their activities in a cluttered environment? Gibson (1979) proposed that they perceive what environmental objects offer or afford for action. An analysis of affordances in terms of the dynamics of an animal-environment system is presented. Critical points, corresponding to phase transitions in behavior, and optimal points, corresponding to stable, preferred regions of minimum energy expenditure, emerge from variation in the animal-environment fit. It is hypothesized that these points are constants across physically similar systems and that they provide a natural basis for perceptual categories and preferences. In three experiments these hypotheses are examined for the activity of human stair climbing, by varying riser height with respect to leg length. The perceptual category boundary between "climbable" and "unclimbable" stairs is predicted by a biomechanical model, and visually preferred riser height is predicted from measurements of minimum energy expenditure during climbing. It is concluded that perception for the control of action reflects the underlying dynamics of the animal-environment system.
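The critical-point hypothesis reduces to a one-line rule relating riser height to leg length. In the sketch below, the critical ratio is a parameter; the default of 0.88 is the figure commonly cited from this line of work, but it should be read as an assumption here rather than as a value quoted from the abstract.

```python
def climbable(riser_height, leg_length, critical_ratio=0.88):
    """Biomechanical critical-point rule: stairs afford bipedal
    climbing only while riser height stays below a constant
    proportion of leg length. Units must match (e.g., cm).
    The 0.88 default is an assumed, commonly cited constant."""
    return riser_height / leg_length < critical_ratio
```

Because the rule is a ratio, the same function captures the body-scaled character of the affordance: short and tall climbers face the same boundary when height is expressed in leg lengths.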

1,077 citations


Journal ArticleDOI
TL;DR: The absence of overadditive interactions in these experiments, and also the effects of manipulating first-task factors in Experiment 5, seem to argue against capacity sharing as the source of the slowing in this task combination.
Abstract: This article examines the attentional limits responsible for task slowing in the overlapping task (refractory period) paradigm. Five experiments are reported in which stimulus factors were manipulated in visual search tasks performed in isolation or temporally overlapping with another task. Bottleneck models suggest that second-task slowing is caused by postponement of "attention-demanding" stages of the second task, while earlier "automatic" stages proceed unhindered. A prediction was derived from this class of models, namely that in the overlapping task condition the effect of second-task factors that slow automatic stages should be reduced, whereas the effect of factors slowing later nonautomatic stages should be unchanged. The data (Experiments 1-4) exhibit such a pattern and suggest that encoding and comparison stages of the second task, but not response selection, occur in parallel with work on the first task. The absence of overadditive interactions in these experiments, and also the effects of manipulating first-task factors in Experiment 5, seem to argue against capacity sharing as the source of the slowing in this task combination. Some implications of these results for attention theory are discussed.

561 citations


Journal ArticleDOI
TL;DR: In an experiment in which the numbers of the two distractors were unconfounded, evidence is found that subjects can search through specified subsets of stimuli, and implications of selective search are discussed.
Abstract: It has recently been proposed that in searching for a target defined as a conjunction of two or more separable features, attention must be paid serially to each stimulus in a display. Support for this comes from studies in which subjects searched for a target that shared a single feature with each of two different kinds of distractor items (e.g., a red O in a field of black Os and red Ns). Reaction time increased linearly with display size. We argue that this design may obscure evidence of selectivity in search. In an experiment in which the numbers of the two distractors were unconfounded, we find evidence that subjects can search through specified subsets of stimuli. For example, subjects told to search through just the Os to find the red O target do so without searching through Ns. Implications of selective search are discussed.

524 citations


Journal ArticleDOI
TL;DR: In this article, the time course of picture-word interference was analyzed by systematically varying the stimulus onset asynchrony (SOA) of the two stimulus components in picture-naming, word-reading, picture-categorizing, and word-categorizing tasks. The results argue against the relative speed hypothesis and suggest a functional internal processing asymmetry between inhibition-immune recoding, effective in word reading and picture categorizing, and inhibition-susceptible recoding in picture naming and word categorizing.
Abstract: If a word is printed inside the outline drawing of a concrete object, interference patterns as in Stroop research are obtained under the instruction to name the picture or to read the word. Smith and Magee (1980) have shown that these patterns change fundamentally if the naming or reading task is replaced by a categorizing task. Their results seem to corroborate the relative speed hypothesis, which explains Stroop-like interferences by faster processing of the distractor than the target. Two experiments are reported here in which the time course of picture-word interferences was analyzed by a systematically varied stimulus onset asynchrony (SOA) of the two stimulus components in the picture-naming, word-reading, picture-categorizing, and word-categorizing tasks. The results argue against the relative speed hypothesis and suggest a functional internal processing asymmetry between inhibition-immune recoding, effective in word reading and picture categorizing, and inhibition-susceptible recoding in picture naming and word categorizing.

516 citations


Journal ArticleDOI
Robert Earl Morrison
TL;DR: A model with direct control and parallel programming of saccades is proposed to explain these data and eye movements in reading in general; the results indicate that fixation duration is under direct control.
Abstract: On-line eye movement recording of 12 subjects who read short stories on a cathode ray tube enabled a test of direct control and preprogramming models of eye movements in reading. Contingent upon eye position, a mask was displayed in place of the letters in central vision after each saccade, delaying the onset of the stimulus in each eye fixation. The duration of the delay was manipulated in fixed or randomized blocks. Although the length of the delay strongly affected the duration of the fixations, there was no difference due to the conditions of delay manipulation, indicating that fixation duration is under direct control. However, not all fixations were lengthened by the period of the delay. Some ended while the mask was still present, suggesting they had been preprogrammed. But these "anticipation" eye movements could not have been completely determined before the fixation was processed because their fixation durations and saccade lengths were affected by the spatial extent of the mask, which varied randomly. Neither preprogramming nor existing serial direct control models of eye guidance can adequately account for these data. Instead, a model with direct control and parallel programming of saccades is proposed to explain the data and eye movements in reading in general.

453 citations


Journal ArticleDOI
TL;DR: It is shown that articulatory patterns in response to jaw perturbations are specific to the utterance produced, and this provides evidence for flexibly assembled coordinative structures in speech production.
Abstract: In three experiments we show that articulatory patterns in response to jaw perturbations are specific to the utterance produced. In Experiments 1 and 2, an unexpected constant force load (5.88 N) applied during upward jaw motion for final /b/ closure in the utterance /baeb/ revealed nearly immediate compensation in upper and lower lips, but not the tongue, on the first perturbation trial. The same perturbation applied during the utterance /baez/ evoked rapid and increased tongue-muscle activity for /z/ frication, but no active lip compensation. Although jaw perturbation represented a threat to both utterances, no perceptible distortion of speech occurred. In Experiment 3, the phase of the jaw perturbation was varied during the production of bilabial consonants. Remote reactions in the upper lip were observed only when the jaw was perturbed during the closing phase of motion. These findings provide evidence for flexibly assembled coordinative structures in speech production.

424 citations


Journal ArticleDOI
TL;DR: By manipulating the internal lexical structure of the words, it is shown that at least part of the fixation location effect is caused by mechanisms related to ongoing lexical processing.
Abstract: When a word is visually presented in a naming or comparison task in such a way that the eye is initially fixated at different locations within the word, a very strong effect of fixation location is found. The effect appears as a U-shaped curve. Naming time and total fixation time (gaze duration) have a minimum for an initial fixation location between the third and fifth letter of the word (for words that are 5-11 letters long). When initial fixation location deviates from this optimum position, times increase at the surprisingly fast rate of 20-30 ms per letter of deviation. By manipulating the internal lexical structure of the words, we show that at least part of the fixation location effect is caused by mechanisms related to ongoing lexical processing. This is demonstrated by the fact that the fixation location effect takes a different form when the most informative part of a word (as determined by dictionary counts) occurs at the beginning or the end of the word.

317 citations


Journal ArticleDOI
TL;DR: The perceptual processing of arrows and triangles and of their component angles and lines was explored and the results suggest that some analysis of shapes into simpler parts occurs preattentively, because these parts can recombine to form illusory conjunctions when attention is divided.
Abstract: The perceptual processing of arrows and triangles and of their component angles and lines was explored in a number of different tasks. The results suggest that some analysis of shapes into simpler parts occurs preattentively, because these parts can recombine to form illusory conjunctions when attention is divided. The presence of "emergent features," such as closure or arrow junctions, was inferred from predicted correlations in the pattern of performance across tasks and across individual subjects. Thus triangles (for most subjects) and arrows (for some subjects) behave as if they had a perceptual feature that is absent from their parts and that mediates parallel detection in search and easy texture segregation. For some subjects, circles could apparently supply the additional feature (presumably closure) required to form illusory triangles from their component lines, whereas for other subjects circles had no effect. The fact that triangle lines can form illusory conjunctions with another shape makes it unlikely that triangles are perceived holistically and strengthens the interpretation that relies on emergent features.

Journal ArticleDOI
TL;DR: The data indicate that higher order temporal properties of the acoustic signal provide information for the auditory perception of these events, and differences in average spectral frequency are not necessary for perceiving this contrast.
Abstract: Research in auditory perception has tended to emphasize the detection and processing of sound elements with quasi-stable spectral structure, such as tones, formants, and bursts of noise. In the spectral domain, these elements are distinguished by frequency peak or range, bandwidth, and amplitude. In the temporal domain, acoustic analysis has often focused on the durations of sound elements, the intervals and phase relations between them, and the influence of these on pitch and loudness perception, temporal acuity, masking, and localization. The auditory system has often been approached as an analyzer of essentially time-constant functions of frequency, amplitude, and duration, on the assumption that complex auditory percepts are compositions over sound elements having those properties, with certain temporal inter…

Journal ArticleDOI
TL;DR: Whether or not the concept of automaticity is invoked, relative speed of processing the word versus the color does not provide an adequate overall explanation of the Stroop phenomenon.
Abstract: Four experiments investigated Stroop interference using geometrically transformed words. Over experiments, reading was made increasingly difficult by manipulating orientation uncertainty and the number of noncolor words. As a consequence, time to read color words aloud increased dramatically. Yet, even when reading a color word was considerably slower than naming the color of ink in which the word was printed, Stroop interference persisted virtually unaltered. This result is incompatible with the simple horse race model widely used to explain color-word interference. When reading became extremely slow, a reversed Stroop effect--interference in reading the word due to an incongruent ink color--appeared for one transformation together with the standard Stroop interference. Whether or not the concept of automaticity is invoked, relative speed of processing the word versus the color does not provide an adequate overall explanation of the Stroop phenomenon.

Journal ArticleDOI
TL;DR: In this article, a distributional model of risk is described in which it is hypothesized that people's judgments of risk are similar to the kinds of judgments made in welfare economics concerning inequality of income distributions.
Abstract: A distributional model of risk is described in which it is hypothesized that people's judgments of risk are similar to the kinds of judgments made in welfare economics concerning inequality of income distributions. The role played by the Lorenz curve in analyzing inequality is described and it is shown how Lorenz curves can be used to describe risks. Two hypotheses are presented concerning risk: first, that representing risks with Lorenz curves will be useful in capturing the salient psychological features of risk, and second, that people's judgments of positive risks will be similar functionally to judgments of distributional inequality. Six experiments are presented that support the distributional model of risk for both preference judgments and judgments of riskiness. The implications of these experiments are described and the distributional model is compared with alternative models of risk.
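The Lorenz-curve construct that the model borrows from welfare economics is straightforward to compute. The sketch below builds a Lorenz curve from a set of outcomes and derives a Gini coefficient from it; the function names and the trapezoidal integration are illustrative choices, not the paper's procedure.

```python
def lorenz_curve(values):
    """Return (cumulative share of outcomes, cumulative share of total)
    points of a Lorenz curve. Values are sorted ascending; the curve's
    sag below the 45-degree diagonal reflects inequality."""
    xs = sorted(values)
    total = sum(xs)
    n = len(xs)
    points = [(0.0, 0.0)]
    running = 0.0
    for i, v in enumerate(xs, start=1):
        running += v
        points.append((i / n, running / total))
    return points

def gini(values):
    """Gini coefficient: twice the area between the Lorenz curve and
    the equality diagonal, via trapezoidal integration."""
    pts = lorenz_curve(values)
    area = 0.0
    for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return 1.0 - 2.0 * area
```

A perfectly even distribution yields a Lorenz curve on the diagonal (Gini 0), while concentrating all outcomes in one case pushes the Gini toward 1, the kind of summary the distributional model proposes as psychologically relevant to riskiness.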

Journal ArticleDOI
TL;DR: Cuing improved subjects' identification and localization of the displaced letter, and it is proposed that attention should be viewed as a general, rather than feature-specific, resource that can be voluntarily allocated to multiple regions of the visual field.
Robert Egly and Donald Homa (Arizona State University)
Abstract: Three experiments investigated the identification or localization of a letter that was displaced from the fixation point by 1°-3°. The subject's task was to identify a fixated letter and identify (Experiment 1) or localize (Experiments 2 and 3) the displaced letter. On uncued trials, the displaced letter could appear at any of eight locations on any of three rings surrounding the fixated letter; on cued trials, the ring containing the displaced letter was specified. The results indicated that cuing improved subjects' identification and localization of the displaced letter. Invalid cuing (Experiment 3) produced costs comparable in magnitude to the benefits. The distance of the target from the cued ring determined cost, but costs were unaffected by the appearance of a target within the presumed beam of attention. It was proposed that attention should be viewed as a general, rather than feature-specific, resource that can be voluntarily allocated to multiple regions of the visual field.

Journal ArticleDOI
TL;DR: This paper showed that the precuing advantage for same-hand responses at shorter precuing intervals is due to strategic and decision factors, not to an ability to prepare these responses more efficiently.
Abstract: Most studies that examined the precuing of motor responses have been interpreted as indicating that response specification is a variable-order process. An apparent exception to this conclusion was obtained by Miller (1982) for the preparation of discrete finger responses. Precuing was beneficial only when the precued responses were on the same hand, suggesting that response specification occurs in a fixed order, with hand specified before other aspects of the response. Three experiments examined this discrepant finding for discrete finger responses. Experiment 1 demonstrated that with sufficient time (3 s), all combinations of responses can be equally well prepared. Experiments 2 and 3 showed that the precuing advantage for same-hand responses at shorter precuing intervals is due to strategic and decision factors, not to an ability to prepare these responses more efficiently. Preparation of finger responses, thus, also appears to be variable. This conclusion poses problems for Miller's extension of the precuing procedure to the evaluation of discrete versus continuous models of information processing.

Journal ArticleDOI
TL;DR: There were no effects of shape frequency in either tachistoscopic recognition or lexical-decision tasks, regardless of the degree to which the visual shape cue was supplemented by the nonvisual factors of familiarity and expectancy.
Abstract: Current models of fluent reading often assume that fast and automatic word recognition involves the use of a supraletter feature corresponding to the envelope or shape of the word when it is printed in lowercase. The advantages of mixed case over pure case and of pure lowercase over pure uppercase have often been taken as evidence favoring the word-shape hypothesis. Alternative explanations for these phenomena are offered. Experiment 1 shows that previous demonstrations of word-shape effects during proofreading are better described as individual letter effects. Experiments 2-4 explore the possibility that word shape facilitates lexical access through uncertainty reduction. In all three experiments performance on words with rare shapes is compared to those with common shapes. There were no effects of shape frequency in either tachistoscopic recognition or lexical-decision tasks. This was true regardless of the degree to which the visual shape cue was supplemented by the nonvisual factors of familiarity and expectancy. Possible reasons why fluent readers ignore word shape are discussed within the framework of a model that assumes that automatic word recognition is mediated by the activation of abstract letter identities.

Journal ArticleDOI
TL;DR: Data from these three experiments support a perceptual model wherein phonetic categorization can operate separately from higher levels of analysis.
Abstract: To investigate the interaction in speech perception between lexical knowledge (in particular, whether a stimulus token makes a word or nonword) and phonetic categorization, sets of [bVC]-[dVC] place-of-articulation continua were constructed so that the endpoint tokens represented word-word, word-nonword, nonword-word, and nonword-nonword combinations. Experiment 1 demonstrated that ambiguous tokens were perceived in favor of the word token and supported the contention that lexical knowledge can affect the process of phonetic categorization. Experiment 2 utilized a reaction time procedure with the same stimuli and demonstrated that the effect of lexical status on phonetic categorization increased with response latency, suggesting that the lexical effect represents a perceptual process that is separate from and follows phonetic categorization. Experiment 3 utilized a different set of [b-d] continua to separate the effects of final consonant contrast and lexical status that were confounded in Experiments 1 and 2. Results demonstrated that both lexical status and contextual contrast separately affected the identification of the initial stop. Data from these three experiments support a perceptual model wherein phonetic categorization can operate separately from higher levels of analysis.

Journal ArticleDOI
TL;DR: Subjects made magnitude estimations of moving stimuli produced by a 10 X 10 factorial design of distances and durations, and Plotting subjective velocity against physical velocity with either duration or distance as the parameter resulted in families of converging psychophysical power functions.
Abstract: Subjects made magnitude estimations of moving stimuli produced by a 10 X 10 factorial design of distances and durations. Both group and individual data obeyed the bilinear interaction prediction of a simple ratio model. The relation between perceived and actual velocity, as well as the psychophysical contingencies constructed from the marginal means of the design, could be described by a power function with an exponent of about 0.63 as a representative figure. Plotting subjective velocity against physical velocity with either duration or distance as the parameter resulted in families of converging psychophysical power functions. Some implications of the results for velocity research, especially the usefulness of specifying the correct metric structure, are discussed.
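A psychophysical power function with a given exponent is conventionally estimated by linear regression in log-log coordinates. The sketch below shows that standard procedure; the helper name and the data are hypothetical, with the abstract's exponent of about 0.63 used only as a reference point.

```python
import math

def fit_power_law(velocities, estimates):
    """Least-squares fit of a power function E = k * v**b in log-log
    coordinates (Stevens-style psychophysics). Returns (k, b).
    In a magnitude-estimation study, `estimates` would be subjects'
    numeric judgments at each physical velocity."""
    log_v = [math.log(v) for v in velocities]
    log_e = [math.log(e) for e in estimates]
    n = len(log_v)
    mean_v = sum(log_v) / n
    mean_e = sum(log_e) / n
    # slope of the log-log regression line is the exponent b
    cov = sum((x - mean_v) * (y - mean_e) for x, y in zip(log_v, log_e))
    var = sum((x - mean_v) ** 2 for x in log_v)
    b = cov / var
    k = math.exp(mean_e - b * mean_v)
    return k, b
```

Given noiseless data generated from an exponent of 0.63, the fit recovers it exactly; with real magnitude estimates, the slope in log-log space is the empirical exponent the abstract reports.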

Journal ArticleDOI
TL;DR: The results demonstrate that the spatial frequency content of visual patterns can provide a valuable metric for predicting their psychological similarity, and suggest that spatial frequency models of visual processing are competitive with feature analysis models.
Abstract: Black, uppercase letters, subtending 6.0' of arc in height, were presented tachistoscopically to 6 subjects. An exposure duration was chosen to keep the subject's identification performance at about 50% correct. On each trial a single letter was presented, and the subject was required to identify the letter by verbal response. The resulting 26 X 26 confusion matrix was based on 3,900 trials (150 trials per letter). Several models of visual processing were used to generate predicted confusions among letter pairs. Models based on template overlap, geometric features, and two-dimensional spatial frequency content (Fourier transforms) were tested. The highest correlation (.70) between actual and predicted confusions was attained by the model based on the Fourier transformed letters filtered by the human contrast sensitivity function. These results demonstrate that the spatial frequency content of visual patterns can provide a valuable metric for predicting their psychological similarity. The results further suggest that spatial frequency models of visual processing are competitive with feature analysis models.
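The spectral approach can be illustrated in miniature. The sketch below computes 2-D Fourier magnitude spectra of toy binary bitmaps by direct summation and correlates them as a stand-in for predicted confusability; it omits the contrast-sensitivity filtering the actual model applied, and all names and bitmaps are illustrative.

```python
import cmath
import math

def dft2_magnitude(grid):
    """Magnitude of the 2-D discrete Fourier transform of a small
    square binary bitmap (direct O(N^4) sum; fine for toy grids)."""
    n = len(grid)
    mags = []
    for u in range(n):
        row = []
        for v in range(n):
            s = 0j
            for x in range(n):
                for y in range(n):
                    s += grid[x][y] * cmath.exp(
                        -2j * math.pi * (u * x + v * y) / n)
            row.append(abs(s))
        mags.append(row)
    return mags

def spectral_similarity(a, b):
    """Pearson correlation between two magnitude spectra: a toy
    stand-in for spectrum-based prediction of letter confusions."""
    fa = [m for row in dft2_magnitude(a) for m in row]
    fb = [m for row in dft2_magnitude(b) for m in row]
    n = len(fa)
    ma, mb = sum(fa) / n, sum(fb) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(fa, fb))
    sa = math.sqrt(sum((x - ma) ** 2 for x in fa))
    sb = math.sqrt(sum((y - mb) ** 2 for y in fb))
    return cov / (sa * sb)
```

A pattern correlates perfectly with its own spectrum, while patterns with different frequency content correlate less, mirroring the logic by which spectral overlap was used to predict which letters subjects confuse.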

Journal ArticleDOI
TL;DR: College students read short texts from a cathode-ray tube as their eye movements were being monitored and the pattern of responses suggested that the first letter of a word is not utilized before other letters and that letters are not scanned from left to right during a fixation.
Abstract: College students read short texts from a cathode-ray tube as their eye movements were being monitored. During selected fixations, the text was briefly masked and then it reappeared with one word changed. Subjects often were unaware that the word had changed. Sometimes they reported seeing the first presented word, sometimes the second presented word, and sometimes both. When only one word was reported, two factors were found to determine which one it was: the length of time a word was present during the fixation and the predictability of a word in its context. The results suggested that visual information is utilized for reading at a crucial period during the fixation and that this crucial period can occur at different times on different fixations. The pattern of responses suggested that the first letter of a word is not utilized before other letters and that letters are not scanned from left to right during a fixation.

Journal ArticleDOI
TL;DR: Five experiments tested strong late-selection theories of visual attention, which claim that when multiple stimuli belonging to familiar categories are presented, their identities are computed automatically and tagged for their locations; the experiments found no evidence of any reduction in stimulus factors' effects on reaction times or errors.
Abstract: Strong late-selection theories of visual attention assert that when multiple stimuli belonging to familiar categories are presented, their identities are computed automatically and tagged for their locations. When selection by location is required, the identities are said to be retrieved without any need to repeat the perceptual processing. Five experiments designed to test this account are reported. All included a condition in which a display of eight characters was previewed for several hundred ms; a bar probe then designated one character the target for speeded classification. Stimulus factors that slow the character encoding process were manipulated. If selection is late, then such factors should have no effect in this condition because the probe occurs after automatic encoding is complete. There was no evidence of any such reduction in these factors' effects on reaction times or errors. The results were unchanged when catch trials with postdisplay masks were included, to discourage any optional delay of encoding. Several possible accounts are considered of how the strong late-selection model may be wrong, even if parallel encoding occurs in various situations.

Journal ArticleDOI
TL;DR: The authors examined the properties of linguistic attention by recording event-related brain potentials (ERPs) to probe stimuli mixed with dichotically presented prose passages and found that stimulus selection during linguistic attention is specifically tuned to speech sounds rather than simply to constituent pure-tone frequencies or ear of entry.
Abstract: The properties of linguistic attention were examined by recording event-related brain potentials (ERPs) to probe stimuli mixed with dichotically presented prose passages. Subjects either shadowed (repeated phrase by phrase) or selectively listened to one passage while ERPs were recorded from electrodes overlying midline sites, left-hemisphere speech areas, and corresponding areas of the right hemisphere. Mixed with each voice (a male voice in one ear, a female voice in the other) were four probe stimuli: digitized speech sounds (but or /a/ as in father) produced by the same speaker and tone bursts at the mean fundamental and second formant frequencies of that voice. The ERPs elicited by the speech probes in the attended ear showed an enhanced negativity, with an onset at 50 ms-100 ms and lasting up to 800 ms-1,000 ms, whereas the ERPs to the second formant probes showed an enhanced positivity in the 200 ms-300 ms latency range. These effects were comparable for shadowing and selective listening conditions and remained stable over the course of the experiment. The attention-related negativity to the consonant-vowel-consonant probe (but) was most prominent over the left hemisphere; other probes produced no significant asymmetries. The results indicate that stimulus selection during linguistic attention is specifically tuned to speech sounds rather than simply to constituent pure-tone frequencies or ear of entry. Furthermore, it appears that both attentional set and stimulus characteristics can influence the hemispheric utilization of stimuli.

Journal ArticleDOI
TL;DR: These results, together with those of earlier studies in which these factors led to similar effects for different stimuli and transformations, suggest that these are general principles applicable to the perception of structure from both rigid and nonrigid motion.
Abstract: Parallel projections of dots on the surface of a transparent sphere rotating about a vertical axis provide strong impressions of depth and spherical shape. The hypothesis was tested that these impressions are the result of three perceptual heuristics: (a) The sinusoidal projected velocity function of each dot in the horizontal dimension tends to be perceived as a rotary motion in depth; (b) the projected velocity gradient in the vertical dimension is perceived as curvature in depth; and (c) the simultaneously visible fields of dots moving in opposite directions are perceived as surfaces separated in depth. When each factor was varied independently, all three significantly affected judgments of spherical shape and depth. Similar results were obtained with cylinders. The first factor was more important for shape judgments; the second was generally more important for depth judgments. These results, together with those of earlier studies in which these factors led to similar effects for different stimuli and transformations, suggest that these are general principles applicable to the perception of structure from both rigid and nonrigid motion.
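The sinusoidal projected-velocity cue described in heuristic (a) follows directly from the geometry of parallel projection: a dot at radius r on a sphere rotating at angular velocity ω projects to horizontal position x = r·sin(ωt + φ), so its image velocity is rω·cos(ωt + φ), fastest at the midline and zero at the limb. A minimal sketch (the function names and parameter values are illustrative, not from the study):

```python
import math

def projected_position(r, omega, phi, t):
    """Horizontal image position of a dot at radius r under parallel projection."""
    return r * math.sin(omega * t + phi)

def projected_velocity(r, omega, phi, t):
    """Horizontal image velocity: sinusoidal in time, maximal at the midline."""
    return r * omega * math.cos(omega * t + phi)

# A dot at radius 1.0, rotating at 1 rad/s, starting at the midline:
v_mid = projected_velocity(1.0, 1.0, 0.0, 0.0)           # at x = 0 (midline)
v_limb = projected_velocity(1.0, 1.0, 0.0, math.pi / 2)  # at x = r (the limb)
```

Equivalently, speed at image position x is ω·sqrt(r² − x²), which is the velocity gradient the visual system is hypothesized to read as rotation in depth.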

Journal ArticleDOI
TL;DR: This study is the first to demonstrate a mental-rotation strategy when the canonical forms to be discriminated are up-down mirror images as well as when they are left-right mirror images, and it is suggested that this is the critical ingredient that induces mental rotation.
Abstract: Subjects were timed as they made judgments about ps and qs (also interpretable as ds and bs) in different angular orientations. Whether these judgments were left-right mirror-image discriminations (b vs. d or p vs. q) or up-down mirror-image discriminations (b vs. p or d vs. q), the subjects' reaction times increased sharply with the angular departure of each letter from its designated normal upright orientation, a fact implying mental rotation. This was so whether the subjects responded with the letter labels themselves (e.g., b vs. d) or with the labels left versus right or top versus bottom. It was again the case when the letters were replaced by nonletter forms, in which event there was also a left visual-field advantage in reaction time. This study is therefore the first to demonstrate a mental-rotation strategy when the canonical forms to be discriminated are up-down mirror images as well as when they are left-right mirror images. In both cases, however, the task requires the ability to tell left from right, and we suggest that this is the critical ingredient that induces mental rotation.
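The mental-rotation signature reported here, reaction time rising with angular departure from the designated upright, can be sketched as a simple linear model. The intercept and slope below are illustrative placeholders, not fitted values from the study:

```python
def angular_departure(orientation_deg):
    """Shortest angular distance (in degrees) from the upright (0 deg)."""
    a = orientation_deg % 360
    return min(a, 360 - a)

def predicted_rt(orientation_deg, base_ms=500.0, ms_per_deg=3.0):
    """Linear mental-rotation model: RT grows with departure from upright.

    base_ms and ms_per_deg are hypothetical parameters for illustration only.
    """
    return base_ms + ms_per_deg * angular_departure(orientation_deg)

# A letter rotated 120 deg from upright takes longer than one at 30 deg:
rt_30 = predicted_rt(30)    # 500 + 3 * 30  = 590 ms
rt_120 = predicted_rt(120)  # 500 + 3 * 120 = 860 ms
```

The min(a, 360 − a) term captures the standard finding that subjects rotate through the shorter arc, so predicted RT peaks at 180 degrees and falls again beyond it.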


Journal ArticleDOI
TL;DR: In the present experiments, size and case were varied in several ways, and the task included both silent reading and reading aloud; the authors conclude that reading goes forward in many ways at once rather than through an orderly sequence of operations, consistent with the reader's skills and the requirements of the task.
Abstract: The roles of size and case of print have provoked a number of experiments in the recent past. One strongly argued position is that the reader abstracts a canonical representation from a string of letters that renders its variations irrelevant and then carries out recognition procedures on that abstraction. An alternate view argues that the reader proceeds by analyzing the print, taking account of its manifold physical attributes such as length of words, their orientation, shape, and the like. In the present experiments size and case were varied in several ways, and the task was also varied to include both silent reading and reading aloud. Clear evidence for shape-sensing operations was brought forward, but they were shown to be optional rather than obligatory processes, used when it served the reader's purpose to do so. However, it was also shown that such skills, normally useful, could be tricked into operating even when their presence hindered the reader's performance. The conclusion is drawn that reading goes forward in many ways at once rather than through an orderly sequence of operations, consistent with the reader's skills and the requirements of the task. Overarching theories of performance seem premature in the absence of detailed analysis of task components.

Journal ArticleDOI
TL;DR: In this paper, the authors investigate the rules by which the component features of faces are combined when presented in the left or right visual field, and examine the validity of the analytic-holistic processing dichotomy, using concepts elaborated by Garner (1978, 1981) to specify stimulus properties and models of similarity relations as performance criteria.
Abstract: This study investigates the rules by which the component features of faces are combined when presented in the left or the right visual field, and it examines the validity of the analytic-holistic processing dichotomy, using concepts elaborated by Garner (1978, 1981) to specify stimulus properties and models of similarity relations as performance criteria. Latency measures of dissimilarity, obtained for the two visual fields, among a set of eight faces varying on three dimensions of two levels each, were fitted to the dominance metric model, the feature-matching model, the city-block distance metric model, and the Euclidean distance metric model. In addition to a right-visual-field superiority in different responses, a maximum likelihood estimation procedure showed that, for each subject and each visual field, the Euclidean model provided the best fit of the data, suggesting that the faces were compared in terms of their overall similarity. Moreover, the spatial representations of the results revealed interactions among the component facial features in the processing of faces. Taken together, these two findings indicate that faces initially projected to the right or to the left hemisphere were not processed analytically but in terms of their gestalt.

Human information-processing capacities are the product of a highly adaptive and versatile nervous system that provides individuals with a large number of alternative means for achieving successful performance on any particular task. This versatility is partly attributed to the functional specialization of the two cerebral hemispheres whereby specific skills are alleged to be unilaterally represented, thus doubling the brain processing capacity while avoiding potential conflicts that would result from promiscuity. This specialization was initially characterized in terms of information that each hemisphere was better equipped to operate on (e.g., Milner, 1971). 
However, the diversity and heterogeneity of the type of information that each hemisphere could be shown to process, initially in experiments with commissurotomized patients, prompted researchers to inquire about the processes un
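The competing similarity models in the abstract above differ in how dimension-wise differences combine: the city-block metric sums them separably (consistent with analytic processing), while the Euclidean metric that best fit these data combines them into an overall distance (consistent with holistic processing). A minimal sketch for faces coded on three binary dimensions (the feature coding here is a hypothetical illustration):

```python
import math

def city_block(a, b):
    """City-block (L1) distance: each dimension contributes separably."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """Euclidean (L2) distance: dimensions combine into overall similarity."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

# Two faces differing on all three binary dimensions (e.g., eyes, nose, mouth):
f1, f2 = (0, 0, 0), (1, 1, 1)
d_l1 = city_block(f1, f2)  # 3: differences add up independently
d_l2 = euclidean(f1, f2)   # sqrt(3): differences interact in a joint space
```

Fitting latencies to each metric and comparing likelihoods, as the study does, asks which combination rule best predicts how dissimilar two faces feel.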

Journal ArticleDOI
TL;DR: The present article responds to Turvey and Solomon's misgivings by elaborating empirical methods and theoretical arguments of past work on pointing measurements of distance illusions related to esophoric shifts of eye convergence that are induced by near work.
Abstract: Contrary to the view that ambient light information unequivocally specifies phenomenal events, recent research suggests that natural event perception is determined by processes that pick up and combine visual and motor information. This thesis is challenged by Turvey and Solomon (1984). The present article responds to their misgivings by elaborating empirical methods and theoretical arguments of past work. A control experiment is also presented on pointing measurements of distance illusions related to esophoric shifts of eye convergence that are induced by near work. The soundness of both the empirical methodology and the theoretical arguments in support of the original thesis is upheld.

Journal ArticleDOI
TL;DR: It appears that subjects extract semantic representations from input and are sometimes confused about whether a particular representation has been extracted from a word or a color patch, suggesting that illusory conjunctions may occur with high-level codes as well as with perceptual features.
Abstract: According to feature-integration theory, when attention is diverted from a display, features from different objects in that display may be wrongly recombined, giving rise to "illusory conjunctions" (Treisman & Schmidt, 1982). Two experiments are reported that examine the nature of these illusory conjunctions. In displays that contain color names and adjectives printed in colored ink, subjects made two kinds of interesting and previously unreported errors. Consider, for example, a display that included the word BROWN in red ink and the word HEAVY in green ink. Subjects would sometimes incorrectly report that the word RED or the ink color brown had appeared in the display (e.g., RED in green ink or HEAVY in brown ink). It appears that subjects extract semantic representations from input and are sometimes confused about whether a particular representation has been extracted from a word or a color patch. Contrary to feature-integration theory, these findings suggest that illusory conjunctions may occur with high-level codes as well as with perceptual features.