Author

Howard S. Hoffman

Bio: Howard S. Hoffman is an academic researcher. The author has contributed to research on the topics Formant and Pitch Discrimination, has an h-index of 3, and has co-authored 5 publications receiving 1,495 citations.

Papers
Journal ArticleDOI
TL;DR: Examines whether, given similar acoustic differences, a listener can better discriminate between sounds that lie on opposite sides of a phoneme boundary than between sounds that fall within the same phoneme category.
Abstract: In listening to speech, one typically reduces the number and variety of the many sounds with which he is bombarded by casting them into one or another of the phoneme categories that his language allows. Thus, a listener will identify as b, for example, quite a large number of acoustically different sounds. Although these differences are likely to be many and various, some of them will occur along an acoustic continuum that contains cues for a different phoneme, such as d. This is important for the present study because it provides a basis for the question to be examined here: whether or not, with similar acoustic differences, a listener can better discriminate between sounds that lie on opposite sides of a phoneme boundary than he can between sounds that fall within the same phoneme category. There are grounds for expecting an affirmative answer to this question. The most obvious, perhaps, are to be found in the common experience that in learning a new language one often …

1,443 citations

Journal ArticleDOI
TL;DR: This article showed that third-formant transitions are cues for the perception of /b, d, g/ in synthetic speech, and that the effects of these cues depend in part on the steady-state level of the third formant, implying the existence of third-formant loci analogous to those for the first and second formants.
Abstract: Experiments using synthetic speech show that third-formant transitions are cues for the perception of /b, d, g/. Detailed results are presented for a variety of third-formant transitions paired with each of a number of second-formant transitions in initial position before the vowels /i/ and /ae/. The results obtained with various third-formant transitions depend in part on the steady-state level of the third formant, implying the existence of third-formant loci analogous to those previously found for the first and second formants. The data of the present experiment are not sufficient to permit a specification of these loci. The effects of third-formant cues are independent of the two-formant patterns to which they are added. When a third-formant cue enhances the perception of a particular phoneme, it typically does not do so equally at the expense of the other response alternatives.

80 citations

Journal ArticleDOI
TL;DR: In this paper, the authors examined an additional cue (burst frequency) and collected more information about how the cues act in various combinations, and found that the contribution of any one cue was largely independent of the nature and the number of the other cues.
Abstract: Previous research involving synthetic speech reveals that both the second- and the third-formant transitions play a role in the perception of the voiced stops /b/, /d/, and /g/. The present experiment examined an additional cue (burst frequency), repeated a portion of the previous research, and collected more information about how the cues act in various combinations. Synthetic speech sounds containing one cue, all possible combinations of two cues, and all possible combinations of three cues were tested on a large group of listeners. Burst frequency was found to act as a cue for the perception of the voiced stops in much the same manner as this variable affects the perception of the unvoiced stops. To the extent that the present experiment overlapped previous research, the two sets of findings were in very close agreement. When cues were combined, they shared in the control of perception in such a way that the contribution of any one cue was largely independent of the nature and the number of the other cues.

44 citations

Journal ArticleDOI
TL;DR: This paper found that at some points on an acoustic continuum large changes in the acoustic stimulus have no effect on phoneme identification, while at other points small changes cause the listener's identification to shift abruptly from one phoneme to another.
Abstract: The use of synthesizers provides an opportunity to vary speech‐like sounds in small steps along a single acoustic continuum. We find that at some points on such a continuum large changes in the acoustic stimulus have no effect on phoneme identification, while at other points small changes cause the listener's identification to shift abruptly from one phoneme to another. Casual observation led us to suspect that there might be related discontinuities in the discriminability of these sounds—that is, that discrimination would be less sharp, other things equal, between sounds in the same phoneme category than between sounds which lie on opposite sides of a phoneme boundary. This effect might be related to the difficulties that linguists often experience in hearing certain differences among the sounds of an exotic language. Discrimination and identification functions were obtained for a series of stimuli which differed in the second‐formant transition. In one part of the experiment these stimuli were presented singly to listeners for identification as b, d, or g; in another part, the discriminability of the stimuli was measured by an ABX technique. It was found that discrimination was, indeed, better in the vicinity of phoneme boundaries than it was near the middle of a category. The obtained discrimination function was quite close to a function predictable from the identification judgments on the extreme assumption that the listeners were able to discriminate the sounds only to the extent that they could differentially identify them as b, d, and g. [This work was supported in part by the Carnegie Corporation of New York, and in part by the Department of Defense in connection with Contract DA49‐170‐SC‐1642.]
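The closing claim — that the obtained discrimination function was close to one predictable from identification alone — can be illustrated with a small Monte Carlo sketch of a covert-label model: the listener internally labels A, B, and X, matches X by label when A and B received different labels, and guesses otherwise. This is an illustrative reconstruction under assumed two-category identification probabilities, not the authors' original computation.

```python
import random

def predict_abx_accuracy(p_a, p_b, trials=100_000, seed=0):
    """Monte Carlo sketch of a covert-label model of ABX discrimination.

    p_a and p_b are the assumed probabilities that stimulus A (or B) is
    identified as one category (say, b) rather than the other (say, d).
    On each trial the listener covertly labels A, B, and X, matches X to
    the like-labeled stimulus when the labels of A and B differ, and
    guesses when the labels are the same.
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        x_is_a = rng.random() < 0.5            # X repeats either A or B
        p_x = p_a if x_is_a else p_b
        label_a = rng.random() < p_a           # covert identification of A
        label_b = rng.random() < p_b           # covert identification of B
        label_x = rng.random() < p_x           # covert identification of X
        if label_a == label_b:
            answer_a = rng.random() < 0.5      # identical labels: pure guess
        else:
            answer_a = (label_x == label_a)    # match X to the like-labeled item
        correct += (answer_a == x_is_a)
    return correct / trials

# Within a category (both stimuli nearly always labeled alike), predicted
# accuracy stays at chance; across a boundary it rises well above chance.
within = predict_abx_accuracy(0.95, 0.95)   # near 0.5
across = predict_abx_accuracy(0.90, 0.10)   # well above chance
```

Under this model, stimuli that receive the same label are indistinguishable by construction, which is exactly the "extreme assumption" the abstract describes: discrimination no better than differential identification allows.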

2 citations


Cited by
Journal ArticleDOI
22 Nov 2002-Science
TL;DR: It is argued that an understanding of the faculty of language requires substantial interdisciplinary cooperation and how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience is suggested.
Abstract: We argue that an understanding of the faculty of language requires substantial interdisciplinary cooperation. We suggest how current developments in linguistics can be profitably wedded to work in evolutionary biology, anthropology, psychology, and neuroscience. We submit that a distinction should be made between the faculty of language in the broad sense (FLB) and in the narrow sense (FLN). FLB includes a sensory-motor system, a conceptual-intentional system, and the computational mechanisms for recursion, providing the capacity to generate an infinite range of expressions from a finite set of elements. We hypothesize that FLN only includes recursion and is the only uniquely human component of the faculty of language. We further argue that FLN may have evolved for reasons other than language, hence comparative studies might look for evidence of such computations outside of the domain of communication (for example, number, navigation, and social relations).

3,293 citations

Journal ArticleDOI
TL;DR: A motor theory of speech perception, initially proposed to account for results of early experiments with synthetic speech, is now extensively revised to accommodate recent findings, and to relate the assumptions of the theory to those that might be made about other perceptual modes.

2,523 citations

Journal ArticleDOI
TL;DR: Functional magnetic resonance imaging revealed that premotor mirror neuron areas—areas active during the execution and the observation of an action—previously thought to be involved only in action recognition are actually also involved in understanding the intentions of others.
Abstract: Understanding the intentions of others while watching their actions is a fundamental building block of social behavior. The neural and functional mechanisms underlying this ability are still poorly understood. To investigate these mechanisms we used functional magnetic resonance imaging. Twenty-three subjects watched three kinds of stimuli: grasping hand actions without a context, context only (scenes containing objects), and grasping hand actions performed in two different contexts. In the latter condition the context suggested the intention associated with the grasping action (either drinking or cleaning). Actions embedded in contexts, compared with the other two conditions, yielded a significant signal increase in the posterior part of the inferior frontal gyrus and the adjacent sector of the ventral premotor cortex where hand actions are represented. Thus, premotor mirror neuron areas—areas active during the execution and the observation of an action—previously thought to be involved only in action recognition are actually also involved in understanding the intentions of others. To ascribe an intention is to infer a forthcoming new goal, and this is an operation that the motor system does automatically.

1,819 citations

Book ChapterDOI
01 Jan 1990
TL;DR: In the H&H program the quest for phonetic invariance is replaced by another research task: Explicating the notion of sufficient discriminability and defining the class of speech signals that meet that criterion.
Abstract: The H&H theory is developed from evidence showing that speaking and listening are shaped by biologically general processes. Speech production is adaptive. Speakers can, and typically do, tune their performance according to communicative and situational demands, controlling the interplay between production-oriented factors on the one hand, and output-oriented constraints on the other. For the ideal speaker, H&H claims that such adaptations reflect his tacit awareness of the listener’s access to sources of information independent of the signal and his judgement of the short-term demands for explicit signal information. Hence speakers are expected to vary their output along a continuum of hyper- and hypospeech. The theory suggests that the lack of invariance that speech signals commonly exhibit (Perkell and Klatt 1986) is a direct consequence of this adaptive organization (cf MacNeilage 1970). Accordingly, in the H&H program the quest for phonetic invariance is replaced by another research task: Explicating the notion of sufficient discriminability and defining the class of speech signals that meet that criterion.

1,574 citations