
Showing papers on "Voice" published in 1970


Journal Article
TL;DR: Analysis of correct responses and errors showed that consonant features are processed independently by the specific language-processing machinery of the cerebral hemisphere dominant for language.
Abstract: Earlier experiments with dichotically presented nonsense syllables had suggested that perception of the sounds of speech depends upon unilateral processors located in the cerebral hemisphere dominant for language. Our aim in this study was to pull the speech signal apart to test its components in order to determine, if possible, which aspects of the perceptual process depend upon the specific language processing machinery of the dominant hemisphere. The stimuli were spoken consonant‐vowel‐consonant syllables presented in dichotic pairs which contrasted in only one phone (initial stop consonant, final stop consonant, or vowel). Significant right‐ear advantages were found for initial and final stop consonants, nonsignificant right‐ear advantages for six medial vowels, and significant right‐ear advantages for the articulatory features of voicing and place of production in stop consonants. Analysis of correct responses and errors showed that consonant features are processed independently, in agreement with ea...
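The abstract does not give a scoring formula, but ear advantages in dichotic listening are often summarized with a laterality index such as (R − L) / (R + L) over correct-response counts. The sketch below is a minimal illustration of that bookkeeping; the function name and the per-condition counts are invented, not the paper's data.

```python
# Illustrative sketch: a right-ear-advantage (laterality) index computed
# from dichotic correct-response counts. The counts are made up; the
# (R - L) / (R + L) formula is a conventional choice, not necessarily
# the analysis used in the paper.

def laterality_index(right_correct: int, left_correct: int) -> float:
    """Return (R - L) / (R + L); positive values indicate a right-ear advantage."""
    total = right_correct + left_correct
    return 0.0 if total == 0 else (right_correct - left_correct) / total

# Hypothetical counts of correctly reported dichotic items per condition.
conditions = {
    "initial stop consonants": (312, 268),
    "final stop consonants":   (301, 275),
    "medial vowels":           (290, 284),
}

for name, (r, l) in conditions.items():
    print(f"{name:26s} REA index = {laterality_index(r, l):+.3f}")
```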

728 citations



Journal Article
TL;DR: Because chain-association theories appeared incapable, without serious revision, of explaining these and other aspects of Spoonerisms, an alternative theory of serial order was proposed, one with potential application not only to the pronunciation of words but also to the syntax of other forms of behavior and perception.

313 citations


Journal Article
TL;DR: An experiment shows that this pitch change in the vowel can cue the voiced/voiceless distinction for a preceding stop consonant in English; control conditions suggest that the cue depends not upon low-frequency energy content but upon the pitch sensation.
Abstract: The pitch change at the onset of voicing after a period of articulatory closure for a consonant reflects the state of the glottis during that closure. In initial position, a low rising pitch indicates a closed glottis and high falling pitch a glottis that is still partly open. An experiment is reported which shows that, for about 90% of the subjects, this pitch change in the vowel can cue the voiced/voiceless distinction for a preceding stop consonant in English. Control conditions suggest that this cue depends not upon low‐frequency energy content, but upon the pitch sensation.
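As a rough illustration of the cue described here, the sketch below tracks fundamental frequency over the first few frames of a vowel with a simple autocorrelation estimator and labels the onset contour as rising or falling. The frame sizes, F0 search range, and the synthetic test vowel are assumptions for demonstration, not the study's stimuli or method.

```python
# Minimal sketch: classify the pitch contour at vowel onset as rising or
# falling from frame-by-frame autocorrelation F0 estimates. All parameter
# values and the synthetic test signal are illustrative assumptions.
import numpy as np

def estimate_f0(frame: np.ndarray, sr: int, fmin=75.0, fmax=300.0) -> float:
    """Estimate F0 of one frame from the autocorrelation peak between fmin and fmax."""
    frame = frame - frame.mean()
    ac = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    lo, hi = int(sr / fmax), int(sr / fmin)
    return sr / (lo + int(np.argmax(ac[lo:hi])))

def onset_contour(signal: np.ndarray, sr: int, n_frames=5, frame_ms=30):
    """Return F0 estimates for the first n_frames frames after vowel onset."""
    flen = int(sr * frame_ms / 1000)
    return [estimate_f0(signal[i * flen:(i + 1) * flen], sr) for i in range(n_frames)]

if __name__ == "__main__":
    sr = 10_000
    t = np.arange(int(0.2 * sr)) / sr
    f0 = 100 + 100 * t                          # pitch rising from 100 to 120 Hz
    vowel = np.sin(2 * np.pi * np.cumsum(f0) / sr)
    track = onset_contour(vowel, sr)
    print([round(f, 1) for f in track],
          "-> rising" if track[-1] > track[0] else "-> falling")
```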

206 citations


Journal Article
TL;DR: Preston et al. report further results from a research project designed to examine the development of stop consonants in initial position in the vocalizations of children, primarily during the second year of life.
Abstract: This paper reports further results on a research project that was designed to examine the development of stop consonants occurring in initial position in the vocalizations of children primarily during the second year of life. A previous paper [M. Preston, G. Yeni‐Komshian, R. Stark, and D. Port, “Certain Aspects of the Development of Speech Production and Perception in Children,” J. Acoust. Soc. Amer. 46, 102 (A) (1969)] presented results for apical stops. This report deals with the results for labial and velar stops over the same time period. The principal measure employed in the analysis is voice onset time (VOT), a perceptually and productively relevant measure, which can be easily obtained from spectrograms. In addition, a more complete analysis of apical stops comparing VOT distributions of adults with children between 2 and 2½ yr is presented. [Research supported in part by the National Institute of Child Health and Human Development.]
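VOT is the interval between the stop release burst and the onset of voicing. The authors measured it from spectrograms; the sketch below shows one way a comparable estimate could be made from a waveform today, locating the burst as a jump in short-time energy and the voicing onset as the first subsequent frame with strong periodicity. Frame lengths, thresholds, and the assumption of leading silence are all illustrative choices, not the authors' procedure.

```python
# Rough sketch of estimating voice onset time (VOT) from a waveform.
# Burst onset: first frame whose energy rises well above the leading silence.
# Voicing onset: first later frame with a strong autocorrelation peak in the
# F0 range. Frame lengths and thresholds are illustrative assumptions.
import numpy as np

def estimate_vot(x: np.ndarray, sr: int, frame_ms=20, hop_ms=5,
                 energy_ratio=10.0, periodicity_thresh=0.4):
    """Return an approximate VOT in seconds, or None if no voicing is found."""
    flen, hop = int(sr * frame_ms / 1000), int(sr * hop_ms / 1000)
    frames = np.stack([x[s:s + flen] for s in range(0, len(x) - flen, hop)])
    energy = (frames ** 2).mean(axis=1)

    noise_floor = energy[:4].mean() + 1e-12        # assumes the file starts in silence
    burst = int(np.argmax(energy > energy_ratio * noise_floor))

    def periodicity(frame):
        frame = frame - frame.mean()
        ac = np.correlate(frame, frame, mode="full")[flen - 1:]
        lo, hi = int(sr / 300), int(sr / 100)      # search F0 between 100 and 300 Hz
        return float(ac[lo:hi].max() / ac[0]) if ac[0] > 0 else 0.0

    for i in range(burst + 1, len(frames)):
        if periodicity(frames[i]) > periodicity_thresh:
            return (i - burst) * hop / sr          # VOT in seconds
    return None
```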

4 citations


Journal Article
TL;DR: The authors report a lag effect in which the trailing syllable in a dichotic pair is more intelligible, and observe that a voiceless consonant paired with a voiced consonant is generally the more intelligible member of the pair in the dichotic condition.
Abstract: At the previous meeting of this Society, we reported a lag effect in which the trailing syllable in a dichotic pair was more intelligible with time staggers at 15, 30, 60, and 90 msec. This paper presents data on 12 subjects who listened to time staggers of 90, 180, 250, and 500 msec. A special condition was also added in which the periodic portions of each pair were aligned without concern for the burst onset of the consonant. We called this the “boundary condition” and included it as a result of our observation that the voiceless consonant when paired with a voiced consonant is generally more intelligible in the dichotic condition. Both ears functioned about the same beyond 180‐msec staggers. However, we found the boundary condition enhanced the right ear laterality effect and attenuated the voiceless over voiced preponderance seen in ordinary dichotic simultaneous listening. [Supported by the NIH.]
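The time-stagger manipulation is straightforward to reproduce with modern tools: delay the trailing syllable by the desired lag and write the two syllables to opposite channels of a stereo file. The sketch below is a minimal reconstruction under that reading; the file names and the use of the soundfile library are assumptions rather than the authors' apparatus, and the "boundary condition" (aligning periodic portions rather than bursts) would require locating voicing onsets as an extra step.

```python
# Illustrative sketch: build a dichotic stereo stimulus in which the
# right-channel syllable trails the left-channel syllable by a fixed lag.
# File names and the soundfile I/O are assumptions for demonstration.
import numpy as np
import soundfile as sf

def dichotic_pair(lead: np.ndarray, trail: np.ndarray, sr: int, lag_ms: float) -> np.ndarray:
    """Return an (n, 2) array: lead in the left channel, trail delayed by lag_ms in the right."""
    lag = int(round(sr * lag_ms / 1000))
    out = np.zeros((max(len(lead), lag + len(trail)), 2), dtype=np.float32)
    out[:len(lead), 0] = lead                       # left ear: leading syllable
    out[lag:lag + len(trail), 1] = trail            # right ear: trailing syllable
    return out

if __name__ == "__main__":
    # Hypothetical mono recordings, one CV syllable each, same sampling rate.
    ta, sr = sf.read("ta.wav")
    da, _ = sf.read("da.wav")
    for lag_ms in (90, 180, 250, 500):              # staggers reported in the abstract
        sf.write(f"dichotic_ta_da_{lag_ms}ms.wav",
                 dichotic_pair(ta, da, sr, lag_ms), sr)
```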

2 citations


Journal Article
TL;DR: The authors report that in dichotic listening to consonant-vowel utterances in natural speech, more voiceless consonants were correctly perceived than voiced consonants.
Abstract: We had previously reported [J. Acoust. Soc. Amer. 45, 299 (A) (1969)] that in dichotic listening to consonant‐vowel (CV) utterances in natural speech, more voiceless consonants were correctly perceived than voiced consonants. In that experiment, the voiceless CVs had a slightly higher fundamental frequency than the voiced; therefore, synthetic CVs with uniform fundamental frequency and duration were used in the present experiment. Twenty normal right‐handed females listened to simultaneous (±2½ msec) (stop+/a/) synthetic nonsense syllables both monotically and dichotically. In addition to the expected right‐ear laterality effect in dichotic listening, we confirmed our previous finding: dichotically, voiceless consonants predominated (73% vs 48%). Monotically, voiced consonants were most often heard correctly (60% vs 47%). An explanation related to onset of change from aperiodic to periodic portions of voiceless vs voiced utterances is presented. [We are grateful to Arthur Abramson and Lee Lisker for the ...
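The comparison reported here amounts to tallying correct identifications by voicing class within each listening condition. The sketch below shows that bookkeeping on a handful of invented trial records; only the form of the analysis, not the data, is taken from the abstract.

```python
# Illustrative tally: percent correct by voicing class and listening
# condition. The trial records are invented; only the bookkeeping mirrors
# the comparison described in the abstract.
from collections import defaultdict

VOICELESS = {"p", "t", "k"}

trials = [
    # (condition, presented stop, reported stop)
    ("dichotic", "p", "p"), ("dichotic", "b", "d"),
    ("dichotic", "k", "k"), ("dichotic", "g", "g"),
    ("monotic",  "b", "b"), ("monotic",  "t", "k"),
    ("monotic",  "d", "d"), ("monotic",  "p", "p"),
]

correct, total = defaultdict(int), defaultdict(int)
for condition, presented, reported in trials:
    voicing = "voiceless" if presented in VOICELESS else "voiced"
    total[(condition, voicing)] += 1
    correct[(condition, voicing)] += int(presented == reported)

for key in sorted(total):
    print(f"{key[0]:8s} {key[1]:9s} {100 * correct[key] / total[key]:5.1f}% correct")
```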

1 citation