
Showing papers in "Attention Perception & Psychophysics in 1983"


Journal ArticleDOI
TL;DR: An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described, taking advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity.
Abstract: An adaptive psychometric procedure that places each trial at the current most probable Bayesian estimate of threshold is described. The procedure takes advantage of the common finding that the human psychometric function is invariant in form when expressed as a function of log intensity. The procedure is simple, fast, and efficient, and may be easily implemented on any computer.
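The procedure described (QUEST; Watson & Pelli) is algorithmic enough to sketch. Below is a minimal, illustrative Python rendering — not the authors' implementation — assuming a Weibull-shaped psychometric function on log intensity with invented slope, guess, and lapse parameters, a discrete threshold grid, and a Gaussian prior:

```python
import math
import random

def psychometric(x, threshold, slope=3.5, guess=0.5, lapse=0.01):
    """P(correct) at log intensity x. The function's shape is fixed;
    only its position (threshold) varies, per the invariance assumption.
    Parameter values here are illustrative, not from the paper."""
    p = 1.0 - 0.5 ** (10 ** (slope * (x - threshold)))
    return guess + (1.0 - guess - lapse) * p

def quest(true_threshold=-1.0, n_trials=40, seed=0):
    """Simulate an observer and run a QUEST-style staircase:
    each trial is placed at the current posterior mode of threshold."""
    rng = random.Random(seed)
    # Candidate thresholds in log units, with a Gaussian prior (SD = 1).
    grid = [i / 100.0 for i in range(-300, 101)]
    post = [math.exp(-0.5 * t ** 2) for t in grid]
    for _ in range(n_trials):
        # Test at the most probable threshold estimate.
        x = grid[max(range(len(grid)), key=lambda i: post[i])]
        correct = rng.random() < psychometric(x, true_threshold)
        # Bayesian update of the posterior over thresholds.
        for i, t in enumerate(grid):
            p = psychometric(x, t)
            post[i] *= p if correct else (1.0 - p)
        s = sum(post)
        post = [v / s for v in post]
    return grid[max(range(len(post)), key=lambda i: post[i])]
```

With enough trials the posterior mode settles near the simulated observer's true threshold, illustrating why placing every trial at the current best estimate is efficient.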

2,334 citations


Journal ArticleDOI
TL;DR: The advantage of integrated over separate presentation suggests that a “filtering cost” is incurred when two distinct perceptual objects compete for attention in filtering tasks.
Abstract: The latency of reading a single word is increased by 20 to 40 msec if another object is present in the display. The delay is affected by the spatial organization of the display: a colored frame causes less delay when it surrounds the word than when it is shown on the opposite side of fixation. A small gap in the frame is also more efficiently detected as a secondary task when the frame is around the word than when the two are spatially separate. The advantage of integrated over separate presentation suggests that a “filtering cost” is incurred when two distinct perceptual objects compete for attention. Attention in filtering tasks operates on perceptually distinct objects rather than on nodes in a semantic network. The act of reading is usually assumed to be automatic, in the sense that it occurs both without voluntary control and without requiring attentional resources, if a word or letter is sufficiently clear and close to the fovea. Yet a consistent delay in naming a single letter is produced by the addition of irrelevant objects to the display, even when these objects are highly discriminable from letters (Eriksen & Hoffman, 1972; Eriksen & Schultz, 1978, p. 18). Distractors such as black disks or color patches add about 30 msec to the latency of letter naming. Eriksen and Schultz labeled this effect cognitive masking. We have found a similar delay in reading a single word when an irrelevant but highly discriminable object is added to the display. We further found that the delay in reading increases as more objects are added, although probably at a decreasing rate; it can be eliminated by precuing the location of the word on each trial, and it is reduced or eliminated, with the same displays, when the subject is asked to press a key whenever a word is shown, instead of to read it (Kahneman, Treisman, & Burkell, in press).
The fact that it disappears with precuing links the delay to attention rather than to peripheral interference. The fact that it is reduced in search or detection suggests that the delay arises not in finding the word but in allocating attention to it and/or in filtering out the irrelevant objects. Focused attention to the word is not required when the response is determined directly by the detection of its presence. We suggest that attention must be narrowed down to the relevant stimulus, however, when the choice of a response demands further processing.

286 citations


Journal ArticleDOI
TL;DR: In a second study, an asymmetry in goodness of apparent motion was found between forward and backward action sequences, supporting the hypothesis that people represent the motion implicit in a photograph.
Abstract: If the representation of movement is a fundamental organizing principle of cognition, as hypothesized here, it should be possible to find cases in which static stimuli induce a dynamic mental representation. Subjects viewed frozen-action photographs, and their memory for these scenes was tested. They found it harder to reject distractors when the distractors were photographs of the same scene shot later in time than when the distractors were photographs shot earlier in time. In a second study, an asymmetry in goodness of apparent motion was found between forward and backward action sequences. Both results support the hypothesis that people represent the motion implicit in a photograph.

270 citations


Journal ArticleDOI
TL;DR: It is shown that corollary discharge governs perception of the position of a luminous point in darkness (that is, in an unstructured visual field), and that it also controls visuomotor coordination measured with open-loop pointing and the matching of visual and auditory direction in light and in darkness.
Abstract: Visual fixation can be maintained in spite of finger pressure on the monocularly viewing eye. We measured the amount of extraocular muscle effort required to counter the eyepress as the secondary deviation of the occluded fellow eye. Using this method, without drugs or neurological lesions, we have shown that corollary discharge (CD) governs perception of position of a luminous point in darkness, that is, an unstructured visual field. CD also controls visuomotor coordination measured with open-loop pointing and the matching of visual and auditory direction in light and in darkness. The incorrectly biased CD is superseded by visual position perception in normal structured environments, a phenomenon we call visual capture (after Matin). When the structured visual field is extinguished, leaving only a luminous point, gradual release from visual capture and return to the biased CD direction follows after a delay of about 5 sec.

216 citations


Journal ArticleDOI
TL;DR: Three experiments are reported that attempted to demonstrate the existence of an integrative visual buffer, a special memory store capable of fusing the visual contents of successive fixations according to their environmental coordinates; no evidence was found in any experiment for the fusion of visual information from successive fixations in memory, leaving the status of the integrative visual buffer in serious doubt.
Abstract: One of the classic problems in perception concerns how we perceive a stable, continuous visual world even though we view it via a temporally discontinuous series of eye movements. Previous investigators have suggested that our perception of a stable visual environment is due to an integrative visual buffer, a special memory store capable of fusing the visual contents of successive fixations according to their environmental coordinates. In this paper, three experiments are reported that attempted to demonstrate the existence of an integrative visual buffer. The experimental procedure required subjects to mentally fuse two halves of a dot matrix presented in the same spatial region of a display, but separated by an eye movement so that each half was viewed only during one fixation. Thus, subjects had to integrate packets of visual information that had the same environmental coordinates, but different retinal coordinates. No evidence was found in any experiment for the fusion of visual information from successive fixations in memory, leaving the status of the integrative visual buffer in serious doubt.

200 citations


Journal ArticleDOI
TL;DR: Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.
Abstract: Speech perception can be viewed in terms of the listener’s integration of two sources of information: the acoustic features transduced by the auditory receptor system and the context of the linguistic message. The present research asked how these sources were evaluated and integrated in the identification of synthetic speech. A speech continuum between the glide-vowel syllables /ri/ and /li/ was generated by varying the onset frequency of the third formant. Each sound along the continuum was placed in a consonant-cluster vowel syllable after an initial consonant /p/, /t/, /s/, or /v/. In English, both /r/ and /l/ are phonologically admissible following /p/ but are not admissible following /v/. Only /l/ is admissible following /s/ and only /r/ is admissible following /t/. A third experiment used synthetic consonant-cluster vowel syllables in which the first consonant varied between /b/ and /d/ and the second consonant varied between /l/ and /r/. Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.

198 citations


Journal ArticleDOI
TL;DR: This tutorial paper expounds the mathematical basis for LTG/NP and evaluates that model against a reasonable set of criteria for a neuropsychological theory, showing that this approach to spatial vision is closer to the mainstream of current theoretical work than might be assumed.
Abstract: The Lie transformation group model of neuropsychology (LTG/NP) purports to represent and explain how the locally smooth processes observed in the visual field, and their integration into the global field of visual phenomena, are consequences of special properties of the underlying neuronal complex. These properties are modeled by a specific set of mathematical structures that account both for local (infinitesimal) operations and for their generation of the “integral curves” that are visual contours. The purpose of this tutorial paper is to expound, as nontechnically as possible, the mathematical basis for LTG/NP, and to evaluate that model against a reasonable set of criteria for a neuropsychological theory. It is shown that this approach to spatial vision is closer to the mainstream of current theoretical work than might be assumed; recent experimental support for LTG/NP is described.

194 citations


Journal ArticleDOI
TL;DR: Using Raab’s model, but with relaxed assumptions, the present experiments show that RT to combined stimulus events is more rapid than can be accounted for by statistical facilitation, therefore, some intersensory interaction was probably occurring.
Abstract: Two experiments examined the RT to visual stimuli presented alone and when either auditory (Experiment 1) or kinesthetic (Experiment 2) stimuli followed the visual event by 50 or 65 msec, respectively. As has been found before, the RT to combined stimulus events was 20 to 40 msec shorter than to visual events alone. While such results have generally been interpreted to mean that two sensory modalities are interacting, Raab’s (1962) hypothesis of statistical facilitation—that the subject responds to that stimulus modality whose processing is completed first—is also possible. Using Raab’s model, but with relaxed assumptions, the present experiments show that RT to combined stimulus events is more rapid than can be accounted for by statistical facilitation. Therefore, some intersensory interaction was probably occurring. The nature of these possible interactions and the status of the statistical-facilitation hypothesis are discussed.
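Raab’s statistical-facilitation idea is easy to see in a Monte Carlo sketch: if the redundant-stimulus response is triggered by whichever channel finishes first, mean RT drops even with zero intersensory interaction. The Gaussian channel parameters below are illustrative, not fitted to the paper’s data:

```python
import random

def simulate_race(n=20000, seed=1):
    """Monte Carlo sketch of Raab's (1962) race model: on redundant
    trials the response follows the faster of two independent channels,
    so mean RT is shorter than either channel alone without any
    interaction. Channel means/SDs (msec) are made up for illustration."""
    rng = random.Random(seed)
    visual = [max(50.0, rng.gauss(250, 30)) for _ in range(n)]
    auditory = [max(50.0, rng.gauss(260, 30)) for _ in range(n)]
    redundant = [min(v, a) for v, a in zip(visual, auditory)]
    mean = lambda xs: sum(xs) / len(xs)
    return mean(visual), mean(auditory), mean(redundant)
```

The experiments’ logic is that observed redundant-stimulus RTs were even faster than this race mechanism can produce, which is what licenses the inference of genuine intersensory interaction.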

172 citations


Journal ArticleDOI
TL;DR: It was found that subjective time estimations were a decreasing function of task difficulty, and that durations for “empty” intervals were estimated to be longer than those for “filled” intervals, supporting a cognitive timer model of subjective time estimation.
Abstract: Ninety-six subjects were asked to estimate durations of either “empty” or “filled” intervals during which they performed verbal tasks at three levels of difficulty. The verbal tasks were performed under three conditions of external rhythmic stimulation: fast, slow, and no external tempo. It was found that subjective time estimations were a decreasing function of task difficulty, and that durations for “empty” intervals were estimated to be longer than those for “filled” intervals. A relationship between external tempo and subjective time estimation was found. Longest time estimates were obtained under fast external tempo, and shortest time estimates were obtained under slow external tempo. Time estimates under the condition of no external tempo were found to be intermediate. The findings were interpreted as supporting a cognitive timer model of subjective time estimation.

167 citations


Journal ArticleDOI
TL;DR: A pattern of luminances equivalent to that of a traditional simultaneous lightness display was presented to observers under two conditions, and matches were obtained for both perceived reflectance and perceived illumination level of the squares and their backgrounds.
Abstract: A pattern of luminances equivalent to that of a traditional simultaneous lightness display (two equal gray squares, one on a white background and the other on an adjacent black background) was presented to observers under two conditions, and matches were obtained for both perceived reflectance and perceived illumination level of the squares and their backgrounds. In one condition, the edge dividing the two backgrounds was made to appear as the boundary between a white and a black surface, as in the traditional pattern. The squares then were perceived as almost the same shade of middle gray. In the other condition, a context was supplied that made the edge between the backgrounds appear as the boundary between two illumination levels, causing one square to appear black and the other white. These results were interpreted as a problem for local ratio theories, local edge theories, and lateral inhibition explanations of lightness constancy, but as support for the concepts of edge classification, edge integration, and the retinal image as a dual image.

162 citations


Journal ArticleDOI
TL;DR: The experiment reported here used the gating paradigm to investigate the interaction of sensory and contextual constraints during the process of recognizing spoken words, and to determine the relative contribution of two kinds of contextual constraint—syntactic and interpretative—in reducing the amount of sensory input needed for recognition.
Abstract: The experiment reported here used the gating paradigm (Grosjean, 1980) to investigate two issues: to test the validity of the claims made by the “cohort” theory (Marslen-Wilson & Tyler, 1980; Marslen-Wilson & Welsh, 1978) for the interaction of sensory and contextual constraints during the process of recognizing spoken words, and to determine the relative contribution of two kinds of contextual constraint—syntactic and interpretative—in reducing the amount of sensory input needed for recognition. The results both provide good support for the cohort model and show that although strong syntactic constraints on form-class only marginally reduce the amount of sensory input needed, a minimal interpretative context has a substantial facilitatory effect on word recognition.

Journal ArticleDOI
TL;DR: A number of control conditions demonstrated that the effect was due primarily to persistence from the phosphor of the cathode ray tube used for stimulus presentation, and that little of the visual information was integrated across the two fixations.
Abstract: After subjects established fixation on a target cross, 12 dots were presented parafoveally. When the dots were presented, the subjects made an eye movement to the location of the dots, and during the saccade the 12 initially presented dots were replaced by 12 other dots. The 24 dots were part of a 5 × 5 matrix, and the task of the subject was to report which dot was missing. The data were consistent with other recent studies: subjects could successfully report the location of the missing dot far above chance (54%), whereas performance in a control condition (in which the two sets of dots were presented to different spatial and retinal locations) was almost at chance level (10%). However, a number of control conditions demonstrated that the effect was due primarily to persistence from the phosphor of the cathode ray tube used for stimulus presentation and that little of the visual information integrated was across two fixations. Implications of the results for a theory of integration across saccades are discussed.

Journal ArticleDOI
TL;DR: It was concluded that the performer develops a representation of the pattern as an integrated whole, and that performance accuracy is inversely related to pattern complexity.
Abstract: Subjects were presented with two parallel pulse trains through earphones, one to each ear. The pulse trains were isochronous, and the durations of the intervals associated with the right and left trains were systematically varied, so as to give rise to both simple rhythms and polyrhythms. The subjects were required to tap with the right hand in synchrony with the train delivered to the right ear, and to tap with the left hand in synchrony with the train delivered to the left ear. Accuracy of performance in polyrhythm contexts was substantially lower than in simple rhythm contexts, and decreased with an increase in the complexity of the associated polyrhythm. It was concluded that the performer develops a representation of the pattern as an integrated whole, and that performance accuracy is inversely related to pattern complexity.

Journal ArticleDOI
TL;DR: Two apparently conflicting claims about the effect of imagery on perception are tested in a way that rules out these alternative explanations and implies that frequency information is represented in images in a form that can interact with perceptual representations.
Abstract: It has been claimed both that (1) imagery selectively interferes with perception (because images can be confused with similar stimuli) and that (2) imagery selectively facilitates perception (because images recruit attention for similar stimuli). However, the evidence for these claims can be accounted for without postulating either image-caused confusions or attentional set. Interference could be caused by general and modality-specific capacity demands of imaging, and facilitation, by image-caused eye fixations. The experiment reported here simultaneously tested these two apparently conflicting claims about the effect of imagery on perception in a way that rules out these alternative explanations. Subjects participated in a two-alternative forced-choice auditory signal detection task in which the target signal was either the same frequency as an auditory image or a different frequency. The possible effects of confusion and attention were separated by varying the temporal relationship between the image and the observation intervals, since an image can only be confused with a simultaneous signal. We found selective facilitation (lower thresholds) for signals of the same frequency as the image relative to signals of a different frequency, implying attention recruitment; we found no selective interference, implying the absence of confusion. These results also imply that frequency information is represented in images in a form that can interact with perceptual representations.

Journal ArticleDOI
TL;DR: Prevalent theories of pattern vision postulate mechanisms selectively sensitive to spatial frequency and position but not to contrast; the uncertainty effects reported here are further evidence that the spatial-frequency and spatial-position mechanisms are noisy.
Abstract: Prevalent theories of pattern vision postulate mechanisms selectively sensitive to spatial frequency and position but not to contrast. Decreased performance in the detection of visual stimuli was found when the observer was uncertain about the spatial frequency or spatial position of a patch of sinusoidal grating but not when he was uncertain about contrast. The uncertainty effects were consistent with multiple-band models in which the observer is able to monitor perfectly all relevant mechanisms. Performance deteriorates when the observer must monitor more mechanisms, because these mechanisms are noisy and give rise to false alarms. This consistency is further evidence that the spatial-frequency and spatial-position mechanisms are noisy, a conclusion previously suggested by the "probability summation" demonstrated in the thresholds for compound stimuli. Somewhat paradoxically, the Quick pooling model, which quantitatively accounts for the amount of probability summation in pattern thresholds, predicts no effects of uncertainty. It cannot, therefore, be strictly correct.
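The Quick pooling model that the abstract says cannot be strictly correct has a standard closed form in the psychophysics literature (Quick, 1974); the notation below is that conventional rendering, not reproduced from this paper:

```latex
% Psychometric function for a single mechanism with sensitivity S = 1/\theta
% detecting contrast c, with steepness parameter \beta:
P(c) = 1 - 2^{-(Sc)^{\beta}}
% Probability summation over independent mechanisms i pools their
% sensitivities by a Minkowski (vector-magnitude) rule:
S_{\text{pooled}} = \Bigl(\sum_i S_i^{\beta}\Bigr)^{1/\beta}
```

The paradox the authors note is that this pooling rule fits the probability summation seen in compound-stimulus thresholds, yet it predicts no cost of monitoring extra mechanisms, contrary to the uncertainty effects they observed.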

Journal ArticleDOI
TL;DR: Two experiments are reported in which observers had to utilize information from one of two structural levels of visual stimulus patterns (large letters composed of smaller ones) and found that they could utilize information more rapidly from one level only at the cost of slower utilization from the other.
Abstract: Two experiments are reported in which observers had to utilize information from one of two structural levels of visual stimulus patterns (large letters composed of smaller ones). They could utilize information more rapidly from one level only at the cost of slower utilization from the other. This trade-off defines an empirical attention operating characteristic (AOC) which is consistent with a simple mathematical model of the perceptual process: when viewing a stimulus, the observer selects one of two alternative “attentional” strategies, where each strategy is optimal for utilizing information from one structural level, but less than optimal for the other.

Journal ArticleDOI
TL;DR: It was shown that feature interaction dominates at close spacing but other processes dominate at wider spacing, and at least part of the effect of perceptual grouping appears to be information provided about target location.
Abstract: It is argued that lateral masking is a composite of several processes. These processes include response competition, distribution of attention, perceptual grouping, and feature (contour) interaction. Three experiments were carried out in an attempt to isolate some of the components. In the first two experiments, it was shown that feature interaction dominates at close spacing but other processes dominate at wider spacing. The third experiment showed that at least part of the effect of perceptual grouping appears to be information provided about target location.

Journal ArticleDOI
TL;DR: Each of 30 male subjects judged, in a single session, the loudness of a 1000-Hz tone and the exertion perceived while pedaling a bicycle using a combined category-ratio scale whose upper limit was defined as “maximum sensation” and a freer magnitude-estimation scale having no verbal labels.
Abstract: Each of 30 male subjects judged, in a single session, the loudness of a 1000-Hz tone and the exertion perceived while pedaling a bicycle. Two psychophysical methods were used—one employing a combined category-ratio scale whose upper limit was defined as “maximum sensation” and the other a freer magnitude-estimation scale having no verbal labels. Both methods yielded data consistent with power functions, although the combined category-ratio scale gave slightly smaller exponents. The category-ratio estimates provided a measure of individual differences in perceived exertion: At any work level, the differences across subjects in judgment correlated with differences in heart rate (a physiological indicant of strain); this result is consistent with Borg’s hypothesis that in dynamic work, maximal sensation is at least roughly equivalent across subjects. When the magnitude and the category-ratio estimates were converted to equivalent loudness (Stevens and Marks’s method of magnitude matching), the derived loudness values also correlated with heart rate: This outcome provides evidence for the utility of the cross-modal procedure and provides further evidence consistent with Borg’s model of perceived exertion.

Journal ArticleDOI
TL;DR: Functions for the development of associative strength and associative interference are presented, and global precedence is shown to depend on factors tending to degrade small stimuli more than large ones.
Abstract: Because it may be deduced from the more elementary principles of visual processing, global precedence (Navon, 1977) is not a primary perceptual principle. Subjects were presented with a large letter made out of small ones and asked to make an identification response on the basis of either the large or small letter. When fixation was controlled to provide adequate stimulation from the small letter, there was no difference in reaction time (RT) between the large and small targets. Also, there was no difference in interference due to response incompatibility of the unattended letter based on target size. However, when the stimulus was presented peripherally, unpredictably to the right or left of fixation, RT was faster to the large target and interference was substantially greater for the small target. Functions for the development of associative strength and associative interference are presented. Global precedence is dependent on factors tending to degrade small stimuli more than large ones.

Journal ArticleDOI
TL;DR: Visual search of color-coded alphanumeric displays was investigated by reaction time methods, suggesting that visual search for items in the target color consisted in sequentially examining groups of same-colored items, unitized in accordance with Gestalt principles of proximity and similarity.
Abstract: Visual search of color-coded alphanumeric displays was investigated by reaction time methods. The task was to indicate the alphanumeric class of a target item, singled out by appearing in a designated color which varied across trials. Mean reaction time increased with both the number of colors and the number of items in the displays. When same-colored noise items appeared in spatial proximity (organized displays), mean reaction time was a linear function of the number of colors for each level of number of items, and effects of the two factors were additive. For displays constructed by random assignment of colors to individual noise items (scrambled displays), temporal effects of the same factors showed strong interaction. Search times for scrambled displays were predictable from search times for organized displays by use of subjective estimates of the number of phenomenally separate groups of displayed items. The results suggest that visual search for items in the target color consisted in sequentially examining groups of same-colored items, unitized in accordance with Gestalt principles of proximity and similarity, until a unit in the target color was found.

Journal ArticleDOI
TL;DR: The results suggest that with single, specified targets, differences between within-category and between-category search may be due entirely to variation in the average physical resemblance between target and nontargets.
Abstract: In the experiment of Jonides and Gleitman (1972), subjects searched displays of digits or letters for single, specified digit or letter targets. The slope of the function relating reaction time to display size was positive (mean=25 msec/item) if target and nontargets belonged to the same alphanumeric category (within-category search), but zero if target and nontargets belonged to different categories (between-category search). This held even for the target O, whose categorical relationship to nontargets was determined entirely by the name it was given. In the present paper, two attempted replications are reported, one as close as practically possible. For the unambiguous targets A, Z, 2, and 4, slopes were greater in within-category search than in between-category search, but positive and very variable in both cases. For the ambiguous target O, slopes were identical in within-category and between-category search, and again positive. The results suggest that with single, specified targets, differences between within-category and between-category search may be due entirely to variation in the average physical resemblance between target and nontargets. In line with previous findings, they show that one cannot characterize within-category search as generally “serial” and between-category search as generally “parallel.”

Journal ArticleDOI
TL;DR: Two vowel-perception experiments revealed the importance of dynamic spectral information at syllable onset and offset (in its proper temporal relation) in permitting vowel identification, and deprived of its durational variation, steady-state spectral information was a poor basis for identification.
Abstract: Traditionally, it has been held that the primary information for vowel identification is provided by formant frequencies in the quasi-steady-state portion of the spoken syllable. Recent research has advanced an alternative view that emphasizes the role of temporal factors and dynamic (time-varying) spectral information in determining the perception of vowels. Nine vowels spoken in /b/ + vowel + /b/ syllables were recorded. The syllables were modified electronically in several ways to suppress various sources of spectral and durational information. Two vowel-perception experiments were performed, testing subjects’ ability to identify vowels in these modified syllables. Results of both experiments revealed the importance of dynamic spectral information at syllable onset and offset (in its proper temporal relation) in permitting vowel identification. On the other hand, steady-state spectral information, deprived of its durational variation, was a poor basis for identification. Results constitute a challenge to traditional accounts of vowel perception and point toward important sources of dynamic information.

Journal ArticleDOI
TL;DR: The interval estimation model proposed by Hicks, Miller, and Kinsbourne (1976) provided a better account of the data than did the storage-size hypothesis of Ornstein (1969).
Abstract: Undergraduate students performed one of three levels of processing on each word (15, 30, or 45) presented during a 120-sec interval. Subjects were told in advance that they would be required to estimate the length of the presentation interval (prospective condition) or were presented with an unexpected estimation task (retrospective condition). In the prospective condition, interval estimates were an inverse function of list length when relatively deep levels of processing were required, but were an increasing function of list length when shallow processing was required. In the retrospective condition, estimates were an increasing function of list length and were unaffected by different levels of processing. The interval estimation model proposed by Hicks, Miller, and Kinsbourne (1976) provided a better account of the data than did the storage-size hypothesis of Ornstein (1969).

Journal ArticleDOI
TL;DR: The induced effect is shown to be an appropriate stereoscopic response to a zero horizontal disparity surface at the eccentricity indicated, however, since extraretinal convergence signals provide conflicting evidence about eccentricity, they may attenuate the induced effect from its mathematically predicted value.
Abstract: The induced effect is an apparent slant of a frontal plane surface around a vertical axis, resulting from vertical magnification of the image in one eye. It is potentially important in suggesting a role for vertical disparity in stereoscopic vision, as proposed by Helmholtz. The paper first discusses previous theories of the induced effect and their implications. A theory is then developed attributing the effect to the process by which the stereoscopic response to horizontal disparity is scaled for viewing distance and eccentricity. The theory is based on a mathematical analysis of vertical disparity gradients produced by surfaces at various distances and eccentricities relative to the observer. Vertical disparity is shown to be an approximately linear function of eccentricity, with a slope or gradient which decreases with observation distance. The effect of vertical magnification on such gradients is analyzed and shown to be consistent with a change in the eccentricity factor, but not the distance factor, required to scale horizontal disparity. The induced effect is shown to be an appropriate stereoscopic response to a zero horizontal disparity surface at the eccentricity indicated. However, since extraretinal convergence signals provide conflicting evidence about eccentricity, they may attenuate the induced effect from its mathematically predicted value. The discomfort associated with the induced effect is attributed to this conflict.

Journal ArticleDOI
TL;DR: The results supported an attentional interpretation of the effect of color demonstrated in Experiment 1, implying that perceptual segregation by color improved the efficiency of focusing attention on the target.
Abstract: Relations between selective attention and perceptual segregation by color were investigated in binary-choice reaction time experiments based on the nonsearch paradigm of Eriksen and Eriksen (1974). In focused attention conditions (Experiment 1), noise letters flanking a central target letter caused less interference when they differed from the target in color, although color carried no information as to whether or not a letter was the target. When blocking of trials favored a strategy of dividing attention between target and noise letters (Experiment 2), no benefit accrued from the difference between target color and noise color. The results supported an attentional interpretation of the effect of color demonstrated in Experiment 1, implying that perceptual segregation by color improved the efficiency of focusing attention on the target.

Journal ArticleDOI
TL;DR: It is proposed that observers use the efferent input to the muscle in preference to its afferent responses in judging the force of muscular contractions in order to avoid bias in force perception.
Abstract: An integrative approach emphasizing both psychological and physiological components in force perception has started to emerge in motor psychophysics. In this experiment, the relation between isometric force (produced by the elbow flexors) and perceived force was examined over a range of forces maintained until maximal endurance. A contralateral-limb matching procedure in which subjects estimated the force of a sustained, constant force contraction by contracting their unfatigued arm at regular intervals was employed. A linear increase in perceived force was observed during the fatiguing contractions, the rate of which depended on the level of force exerted. The sensation of force at maximal endurance was also found to vary with the force exerted. Based on the similarity between these results and those derived from electromyographic studies, we propose that observers use the efferent input to the muscle in preference to its afferent responses in judging the force of muscular contractions.

Journal ArticleDOI
TL;DR: The mutual consistency among the several sets of empirical and derived data strongly supports the assumptions of loudness additivity and the two-stage model.
Abstract: The study deals mainly with absolute magnitude estimation (AME) of the component loudnesses and the total loudness of pairs of heterofrequency, sequential tone bursts. Two kinds of relations are derived from the obtained group and individual data on the assumption of loudness additivity and a two-stage scaling model. They refer to numerical loudness estimates versus derived loudness magnitudes and to the loudness magnitudes versus tone sensation levels. The relations are validated by means of indirect and direct loudness matches. In an auxiliary experiment, the same subjects performed AMEs of subjective line lengths. The resulting group and individual relations between the numerical estimates and the underlying physical line lengths were found to be nearly the same as those between the numerical loudness estimates and the derived loudness magnitudes. The mutual consistency among the several sets of empirical and derived data strongly supports the assumptions of loudness additivity and the two-stage model.

Journal ArticleDOI
TL;DR: Evidence is provided that semantic priming probably occurred under conditions in which commensurate visual information was actually available, and that McCauley et al.'s methodology allowed an uncontrolled increase in prime visibility during priming trials.
Abstract: In a recent study, McCauley, Parmelee, Sperber, and Carr (1980) reported results indicating that semantic priming had been produced by visual stimuli that were backward masked at durations too brief for greater than chance report. The conclusions drawn from such an experiment are critically dependent upon whether or not the primes were actually masked below the threshold for identification during priming trials. The three experiments reported here provide evidence that this requirement was not met. Rather, McCauley et al.’s (1980) methodology allowed for an uncontrolled increase in light adaptation during the actual testing of prime efficacy in the priming session. This increase in light adaptation reduced the effectiveness of the backward mask and resulted in an increase in prime visibility during priming trials. Thus, semantic priming probably occurred under conditions in which commensurate visual information was actually available.

Journal ArticleDOI
TL;DR: The Weber fraction for the sweetness of sucrose was determined at six concentrations, and the results provided good support for Weber’s law except for deviation near threshold, a finding consistent with previous work, suggesting a JND-scale/category-scale convergence.
Abstract: The Weber fraction for the sweetness of sucrose was determined at six concentrations. The results provided good support for Weber’s law except for deviation near threshold, a finding consistent with previous work. Consequently, the JND scale approximated to Fechner’s law. The psychophysical function for sucrose sweetness was also obtained by category rating, with precautions taken to preclude methodological bias. This function was likewise found to conform to Fechner’s law, suggesting a JND-scale/category-scale convergence. This convergence was further supported by experiments with the taste stimuli citric acid (acid/sour), sodium chloride (salty), and caffeine (bitter), which showed that the indirectly derived JND scale provides the same measure of taste intensity as the scale obtained directly by category rating.
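The JND scale described in the abstract can be sketched numerically. With a constant Weber fraction k, each just-noticeable difference multiplies intensity by (1 + k), so the JND count above threshold is logarithmic in intensity (Fechner's law). The Weber fraction and threshold below are hypothetical illustrations, not the paper's sucrose values:

```python
import math

def jnd_scale(intensity, threshold, weber_fraction):
    """Number of JNDs above threshold, assuming Weber's law holds
    with a constant fraction k: I_n = threshold * (1 + k) ** n,
    so n = ln(I / threshold) / ln(1 + k)  (Fechner's law)."""
    return math.log(intensity / threshold) / math.log(1 + weber_fraction)

k = 0.15    # hypothetical Weber fraction for sweetness
i0 = 0.01   # hypothetical detection threshold (arbitrary concentration units)

# Equal intensity *ratios* map onto equal JND steps:
step1 = jnd_scale(0.02, i0, k) - jnd_scale(0.01, i0, k)  # 0.01 -> 0.02
step2 = jnd_scale(0.04, i0, k) - jnd_scale(0.02, i0, k)  # 0.02 -> 0.04
```

The equality of step1 and step2 is the logarithmic (Fechnerian) property that the abstract reports converging with the category-rating scale.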

Journal ArticleDOI
TL;DR: Examination of laboratory training procedures designed to modify the perception of the voicing dimension in synthetic speech stimuli implies a greater degree of plasticity in the adult speech processing system than has been acknowledged in past studies.
Abstract: The present study examined the plasticity of the human perceptual system by means of laboratory training procedures designed to modify the perception of the voicing dimension in synthetic speech stimuli. Although the results of earlier laboratory training studies have been ambiguous, recently Pisoni, Aslin, Perey, and Hennessy (1982) have succeeded in altering the perception of labial stop consonants from a two-way contrast in voicing to a three-way contrast. The present study extended these initial results by demonstrating that experience gained from discrimination training on one place of articulation (e.g., labial) can be transferred to another place of articulation (e.g., alveolar) without any additional training on the specific test stimuli. Quantitative analyses of the identification functions showed that the new perceptual categories were stable and displayed well-defined labeling boundaries between categories. Taken together with the earlier findings, these results imply a greater degree of plasticity in the adult speech processing system than has generally been acknowledged in past studies.