Journal ArticleDOI

Hemispheric specialization for speech perception.

01 Aug 1970-Journal of the Acoustical Society of America (Acoustical Society of America)-Vol. 48, Iss: 2, pp 579-594
TL;DR: Analysis of correct responses and errors showed that consonant features are processed independently by the cerebral hemisphere dominant for language.
Abstract: Earlier experiments with dichotically presented nonsense syllables had suggested that perception of the sounds of speech depends upon unilateral processors located in the cerebral hemisphere dominant for language. Our aim in this study was to pull the speech signal apart to test its components in order to determine, if possible, which aspects of the perceptual process depend upon the specific language processing machinery of the dominant hemisphere. The stimuli were spoken consonant‐vowel‐consonant syllables presented in dichotic pairs which contrasted in only one phone (initial stop consonant, final stop consonant, or vowel). Significant right‐ear advantages were found for initial and final stop consonants, nonsignificant right‐ear advantages for six medial vowels, and significant right‐ear advantages for the articulatory features of voicing and place of production in stop consonants. Analysis of correct responses and errors showed that consonant features are processed independently, in agreement with ea...
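The right-ear advantages described above are commonly summarized with a simple laterality score. The sketch below is not from the paper; the index formula (R − L)/(R + L) and the example counts are illustrative assumptions showing one way an ear advantage might be quantified from correct reports per ear in a dichotic task.

```python
# A minimal sketch (assumed, not the paper's method) of quantifying a
# right-ear advantage (REA) in a dichotic listening experiment using a
# laterality index over correct reports per ear.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """Return 100 * (R - L) / (R + L); positive values indicate an REA."""
    total = right_correct + left_correct
    if total == 0:
        return 0.0
    return 100.0 * (right_correct - left_correct) / total

if __name__ == "__main__":
    # Hypothetical counts of correctly reported dichotic stop consonants.
    print(laterality_index(right_correct=78, left_correct=62))  # positive -> REA
```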
Citations
Journal ArticleDOI
22 Jan 1971-Science
TL;DR: Recovery from habituation was greater for a given acoustic difference when the two stimuli were from different adult phonemic categories than when they were from the same category.
Abstract: Discrimination of synthetic speech sounds was studied in 1- and 4-month-old infants. The speech sounds varied along an acoustic dimension previously shown to cue phonemic distinctions among the voiced and voiceless stop consonants in adults. Discriminability was measured by an increase in conditioned response rate to a second speech sound after habituation to the first speech sound. Recovery from habituation was greater for a given acoustic difference when the two stimuli were from different adult phonemic categories than when they were from the same category. The discontinuity in discrimination at the region of the adult phonemic boundary was taken as evidence for categorical perception.

1,791 citations

Journal ArticleDOI
TL;DR: Human speech and birdsong have numerous parallels, with striking similarities in how sensory experience is internalized and used to shape vocal outputs, and how learning is enhanced during a critical period of development.
Abstract: Human speech and birdsong have numerous parallels. Both humans and songbirds learn their complex vocalizations early in life, exhibiting a strong dependence on hearing the adults they will imitate, as well as themselves as they practice, and a waning of this dependence as they mature. Innate predispositions for perceiving and learning the correct sounds exist in both groups, although more evidence of innate descriptions of species-specific signals exists in songbirds, where numerous species of vocal learners have been compared. Humans also share with songbirds an early phase of learning that is primarily perceptual, which then serves to guide later vocal production. Both humans and songbirds have evolved a complex hierarchy of specialized forebrain areas in which motor and auditory centers interact closely, and which control the lower vocal motor areas also found in nonlearners. In both these vocal learners, however, how auditory feedback of self is processed in these brain areas is surprisingly unclear. Finally, humans and songbirds have similar critical periods for vocal learning, with a much greater ability to learn early in life. In both groups, the capacity for late vocal learning may be decreased by the act of learning itself, as well as by biological factors such as the hormones of puberty. Although some features of birdsong and speech are clearly not analogous, such as the capacity of language for meaning, abstraction, and flexible associations, there are striking similarities in how sensory experience is internalized and used to shape vocal outputs, and how learning is enhanced during a critical period of development. Similar neural mechanisms may therefore be involved.

1,519 citations

Journal ArticleDOI
TL;DR: This review provides a critical framework within which two related topics are discussed: Do meaningful sex differences in verbal or spatial cerebral lateralization exist? And, if so, is the brain of one sex more symmetrically organized than the other?
Abstract: Dual functional brain asymmetry refers to the notion that in most individuals the left cerebral hemisphere is specialized for language functions, whereas the right cerebral hemisphere is more important than the left for the perception, construction, and recall of stimuli that are difficult to verbalize. In the last twenty years there have been scattered reports of sex differences in degree of hemispheric specialization. This review provides a critical framework within which two related topics are discussed: Do meaningful sex differences in verbal or spatial cerebral lateralization exist? and, if so, Is the brain of one sex more symmetrically organized than the other? Data gathered on right-handed adults are examined from clinical studies of patients with unilateral brain lesions; from dichotic listening, tachistoscopic, and sensorimotor studies of functional asymmetries in non-brain-damaged subjects; from anatomical and electrophysiological investigations, as well as from the developmental literature. Retrospective and descriptive findings predominate over prospective and experimental methodologies. Nevertheless, there is an impressive accumulation of evidence suggesting that the male brain may be more asymmetrically organized than the female brain, both for verbal and nonverbal functions. These trends are rarely found in childhood but are often significant in the mature organism.

1,338 citations

Journal ArticleDOI
TL;DR: The traditional verbal/nonverbal dichotomy is inadequate for completely describing cerebral lateralization: evidence for a specialist left-hemisphere mechanism dedicated to the encoded speech signal is weakening, and the right hemisphere possesses considerable comprehensional powers.
Abstract: The traditional verbal/nonverbal dichotomy is inadequate for completely describing cerebral lateralization. Musical functions are not necessarily mediated by the right hemisphere; evidence for a specialist left-hemisphere mechanism dedicated to the encoded speech signal is weakening, and the right hemisphere possesses considerable comprehensional powers. Right-hemisphere processing is often said to be characterized by holistic or gestalt apprehension, and face recognition may be mediated by this hemisphere partly because of these powers, partly because of the right hemisphere's involvement in emotional affect, and possibly through the hypothesized existence of a specialist face processor or processors in the right. The latter hypothesis may, however, suffer the same fate as the one relating to a specialist encodedness processor for speech in the left. Verbal processing is largely the province of the left because of this hemisphere's possession of sequential, analytic, time-dependent mechanisms. Other distinctions (e.g., focal/diffuse and serial/parallel) are special cases of an analytic/holistic dichotomy. More fundamentally, however, the left hemisphere is characterized by its mediation of discriminations involving duration, temporal order, sequencing, and rhythm, at the sensory (tactual, visual, and, above all, auditory) level, and especially at the motor level (for fingers, limbs, and, above all, the speech apparatus). Spatial aspects characterize the right, the mapping of exteroceptive body space, and the positions of fingers, limbs, and perhaps articulators, with respect to actual and target positions. Thus there is a continuum of function between the hemispheres, rather than a rigid dichotomy, the differences being quantitative rather than qualitative, of degree rather than of kind.

738 citations


Cites background from "Hemispheric specialization for speech perception"

  • ...According to Shankweiler and Studdert-Kennedy (1967) and Studdert-Kennedy and Shankweiler (1970), a greater REA is found for the more encoded stop consonants than for steady-state vowels (see also Darwin 1971; Haggard 1971)....


References
Journal ArticleDOI
01 Jan 1964-WORD
TL;DR: Reports acoustical measurements of voicing in initial stop consonants across languages.
Abstract: A Cross-Language Study of Voicing in Initial Stops: Acoustical Measurements. WORD, Vol. 20, No. 3, pp. 384-422 (1964).

2,363 citations

Journal ArticleDOI
TL;DR: Sixteen English consonants were spoken over voice communication systems with frequency distortion and with random masking noise; listeners were forced to guess at every sound, and all the errors that resulted when one sound was confused with another were counted and analyzed in terms of articulatory features.
Abstract: Sixteen English consonants were spoken over voice communication systems with frequency distortion and with random masking noise. The listeners were forced to guess at every sound and a count was made of all the different errors that resulted when one sound was confused with another. With noise or low‐pass filtering the confusions fall into consistent patterns, but with high‐pass filtering the errors are scattered quite randomly. An articulatory analysis of these 16 consonants provides a system of five articulatory features or “dimensions” that serve to characterize and distinguish the different phonemes: voicing, nasality, affrication, duration, and place of articulation. The data indicate that voicing and nasality are little affected and that place is severely affected by low‐pass and noisy systems. The indications are that the perception of any one of these five features is relatively independent of the perception of the others, so that it is as if five separate, simple channels were involved rather than a single complex channel.
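The feature-independence claim above comes from scoring the confusion data feature by feature. The sketch below is a simplified illustration of that idea, not Miller and Nicely's actual information-transmission analysis; the confusion counts and feature assignments are hypothetical, and a response is simply scored correct for a feature whenever it shares the stimulus's value on that feature, even when the phoneme itself is wrong.

```python
# A simplified sketch (assumed, not the original analysis) of scoring a
# consonant confusion matrix feature by feature: if voicing and place behave
# as independent channels, voicing can be well preserved even when place
# confusions make overall phoneme identification poor.

# Hypothetical confusion counts: confusions[stimulus][response] = count
confusions = {
    "p": {"p": 40, "b": 5, "t": 10, "d": 2},
    "b": {"b": 38, "p": 6, "d": 9, "t": 3},
    "t": {"t": 42, "d": 4, "p": 8, "b": 1},
    "d": {"d": 39, "t": 7, "b": 10, "p": 2},
}

# Feature values assumed for illustration.
features = {
    "voicing": {"p": 0, "t": 0, "b": 1, "d": 1},   # 0 = voiceless, 1 = voiced
    "place":   {"p": 0, "b": 0, "t": 1, "d": 1},   # 0 = labial, 1 = alveolar
}

def feature_accuracy(confusions, feature_map):
    """Proportion of responses that preserve the stimulus's feature value."""
    correct = total = 0
    for stim, responses in confusions.items():
        for resp, n in responses.items():
            total += n
            if feature_map[stim] == feature_map[resp]:
                correct += n
    return correct / total

for name, fmap in features.items():
    print(f"{name}: {feature_accuracy(confusions, fmap):.2f}")
```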

1,842 citations

Journal ArticleDOI
01 Jun 1967-Cortex
TL;DR: Reviews the evidence relating lateral asymmetry in auditory perception to the asymmetrical functioning of the two hemispheres of the brain, and describes some applications of the dichotic listening technique to questions concerned with the development of cerebral dominance.

1,523 citations