scispace - formally typeset
Topic

Voice

About: Voice is a research topic. Over its lifetime, 2,393 publications have been published within this topic, receiving 56,637 citations.


Papers
Book
01 Jan 2005
TL;DR: Analyzing data from a large number of different languages, Nasukawa establishes a clear affinity between nasality and voicing and demonstrates the advantages of treating these two properties as different phonetic manifestations of a single nasal-voice category.
Abstract: This book makes an important contribution to the expanding body of work in generative phonology which aims to reduce the number of traditionally recognized melodic categories in order to achieve a greater degree of restrictiveness. By analyzing data from a large number of different languages, Nasukawa establishes a clear affinity between nasality and voicing, and demonstrates the advantages of treating these two properties as different phonetic manifestations of a single nasal-voice category. The choice of whether to interpret this category as voicing or nasality is determined by the active or inactive status of a complement tier. This study deepens our understanding of the typological relation between nasality and voicing, and sheds new light on a number of related agreement phenomena.

20 citations

Journal ArticleDOI
Abstract: A recent paper on the prehistory of the Tibetan verbal system by Guillaume Jacques (2012), in keeping with many previous authorities, presents Tibetan verbs as occurring in pairs, with a voiced intransitive and a voice-alternating transitive member. However, as noticed by Uray, Tibetan verbs occur in triplets with no relationship between voicing and transitivity.

20 citations

Journal ArticleDOI
TL;DR: Results indicate that there exist feature-detector mechanisms tuned to respond best, in particular acoustic environments, to the information specifying phonetic feature values, but that the extent of this selective tuning is limited.

20 citations

DissertationDOI
01 Jan 1999
TL;DR: In this thesis, the author compares two approaches to the phonology of nasality: the phonetic approach, which is discussed in part 1, and the cognitive approach (part 2), which is argued to be the more empirical one.
Abstract: This thesis compares two approaches to the phonology of nasality and consists therefore of two main parts: the phonetic approach, which is discussed in part 1, and the cognitive approach (part 2). This is to say that this thesis investigates how the Language Acquisition Device employs nasality to define vocalic or consonantal systems of contrast, on the one hand, and phonotactic constraints and phonological processes, on the other. Ultimately, the phonetic approach is rejected, while the cognitive view is argued to be the more empirical one. Part 1, which deals with the phonetic approach, has three chapters. In chapter 1, I show, after a brief introduction to Popper's evolutionary view of research and empiricism, that the assumption that the phonological behaviour of nasality or any other phonetically defined notion is phonetically motivated or grounded (the 'Phonetic Hypothesis', 'PH') is flawed. Chapter 2 investigates feature theories, e.g. underspecification and feature geometry, and discusses the metatheoretical problems these frameworks have due to the assumption of the PH. This demonstrates that phonological processes involving 'nasality' cannot be explained by the employment of features. In chapter 3, I look at the commonly held view that there is a phonetically motivated, phonologically relevant link between nasality and vocalic height or consonantal place of articulation (the 'Heightmyth', 'HM'). Part 2 of this thesis shows in four chapters how a cognitive account avoids the metatheoretical problems of the phonetic approach. In addition, it introduces a new proposal in relation to the acquisitional role of phonology: Chapter 4 provides an introduction to Government Phonology ('GP') and, more specifically, to GP's subtheories dealing with melody: (Revised) Element Theory and the Theory of Generative Constraints. This chapter demonstrates that there are languages with phonetically oral vowels which can phonetically nasalise following oral consonants.
In chapter 5, I put forward evidence for the merger of Kaye, Lowenstamm & Vergnaud's L- and N-element into one new element (new) L. The main advantages of such a move are that it helps to keep overgeneration down and that it provides the basis for an integrated account of the cross-linguistically attested phenomena of nasality-induced voicing and Dahl's and Meinhof's Laws. Chapter 6 investigates Quebec French nasal vowels, Montpelier VN-sequences and English NC-clusters and proposes a unified account for them. This analysis includes a cognitive explanation of the French version of the Heightmyth, i.e. for the observation that French nasal vowels may not be high. Finally, in chapter 7, I demonstrate that the view that the PH is mistaken points to a new insight: acoustic cues do not only contain much phonologically useless packaging in addition to phonologically relevant material, but also underdetermine the phonological representation. In other words, acoustic cues do not always contain all the information necessary to determine the internal representation of a segment. This is due to a phenomenon I have labelled 'acoustic cue overlap'. I can show for a number of Turkic vowel systems that they could not be acquired without the help of phonological processes (I- and U-harmony). Similarly, even though phonetically defined cues like 'voiced' or 'voiceless' for segments do not contain much useful information in relation to the phonological behaviour of the segments involved, there is cross-linguistic evidence for my claim that many consonant systems (including those exhibiting voiced-voiceless contrasts) could not be acquired without the helping, i.e. disambiguating, hand of phonology. All in all, the cognitive approach to phonology will not only be shown to be more empirical than the phonetic approach but also to be much more insightful. (Abstract shortened by ProQuest.)

20 citations

Book ChapterDOI
01 Jan 2004
TL;DR: Nguyen and colleagues show that the durational difference in stressed syllables can be 100 ms or more, and that it is well established as one of the strongest perceptual cues to whether the coda is voiced or voiceless.
Abstract: It is well known that syllables in many languages have longer vowels when their codas are voiced rather than voiceless (for English, cf. Jones, 1972; House & Fairbanks, 1953; Peterson & Lehiste, 1960; for other languages, including exceptions, see Keating, 1985). In English, the durational difference in stressed syllables can be 100 ms or more, and it is well-established as one of the strongest perceptual cues to whether the coda is voiced or voiceless (e.g. Denes, 1955; Chen, 1970; Raphael, 1972). More recently, van Santen, Coleman & Randolph (1992) showed for one General American speaker that this coda-dependent durational difference is not restricted to syllabic nuclei, but includes sonorant consonants, while Slater and Coleman (1996) showed that, for a British English speaker, the differences tended to be greatest in a confined region of the syllable, the specific location being determined by the syllable’s segmental structure. In a companion study to the present paper (Nguyen & Hawkins, 1998; Hawkins & Nguyen, submitted), we confirmed the existence of the durational difference and showed that it is accompanied by systematic spectral differences in four accents of British English (one speaker per accent). For three speakers/accents, F2 frequency and the spectral centre of gravity (COG) in the /l/ were lower before voiced compared with voiceless codas, as illustrated in Figure X.1. (The fourth speaker, not discussed further here, had a different pattern, consistent with the fact that his accent realises the /l/-/r/ contrast differently.) Since F1 frequency in onset /l/s did not differ due to coda voicing, whereas both F2 frequency and the COG did, we tentatively concluded that our measured spectral differences reflect degree of velarisation, consistent with impressionistic observations. 
Thus the general pattern is that onset /l/ is relatively long and dark when the coda of the same syllable is voiced, and relatively short and light when the coda is voiceless. Do these differences in the acoustic shape of onset /l/ affect whether the syllable coda is heard as voiced or voiceless? If they do, the contribution of the onset is likely to be small and subtle, because the measured acoustic differences are small (mean 4.3 ms, 11 Hz COG, 16 Hz F2 over three speakers). However, though small, the durational differences are completely consistent and strongly statistically significant. Spectral differences are more variable but also statistically significant. Moreover, at least some can be heard. Even if only the more extreme variants provide listeners with early perceptual information about coda voicing, there are far-reaching implications for how we model syllable- and word-recognition, because the acoustic-phonetic properties we are concerned with are in nonadjacent segments and, for the most part, seem to be articulatorily and acoustically independent of one another. So, by testing whether these acoustic properties of onset /l/ affect the identification of coda voicing, we are coming closer to testing the standard assumption that lexical items are represented as sequences of discrete phonemic or allophonic units, for in standard phonological theory, longer duration and

20 citations


Network Information
Related Topics (5)
Speech perception: 12.3K papers, 545K citations (85% related)
Speech processing: 24.2K papers, 637K citations (78% related)
First language: 23.9K papers, 544.4K citations (75% related)
Sentence: 41.2K papers, 929.6K citations (75% related)
Noise: 110.4K papers, 1.3M citations (74% related)
Performance Metrics
No. of papers in the topic in previous years

Year  Papers
2023  102
2022  248
2021  56
2020  73
2019  81
2018  88