Open Access Journal Article (DOI)

Relative cue encoding in the context of sophisticated models of categorization: Separating information from categorization.

TL;DR
It is found that, for both classes of models, relative cues greatly helped the models approximate human performance in the vast majority of parameter settings. This suggests that expectation-relative processing is a crucial precursor step in phoneme categorization, and that understanding the information content is essential to understanding categorization processes.
Abstract
Traditional studies of human categorization often treat the processes of encoding features and cues as peripheral to the question of how stimuli are categorized. However, in domains where the features and cues are less transparent, how information is encoded prior to categorization may constrain our understanding of the architecture of categorization. This is particularly true in speech perception, where acoustic cues to phonological categories are ambiguous and influenced by multiple factors. Here, it is crucial to consider the joint contributions of the information in the input and the categorization architecture. We contrasted accounts that argue for raw acoustic information encoding with accounts that posit that cues are encoded relative to expectations, and investigated how two categorization architectures—exemplar models and back-propagation parallel distributed processing models—deal with each kind of information. Relative encoding, akin to predictive coding, is a form of noise reduction, so it can be expected to improve model accuracy; however, like predictive coding, the use of relative encoding in speech perception by humans is controversial, so results are evaluated against patterns of human performance rather than on the basis of overall accuracy. We found that, for both classes of models, in the vast majority of parameter settings, relative cues greatly helped the models approximate human performance. This suggests that expectation-relative processing is a crucial precursor step in phoneme categorization, and that understanding the information content is essential to understanding categorization processes.
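To make the two encoding schemes concrete, the following is a minimal sketch, not the authors' implementation: a toy acoustic cue is classified by a simple leave-one-out nearest-exemplar rule, once from raw values and once relative to the context-driven expectation. All variable names (e.g., context_mean) and numbers are hypothetical.

```python
import numpy as np

# Illustrative sketch only; all variables and numbers are hypothetical.
rng = np.random.default_rng(0)
n = 200

# Two phoneme categories whose acoustic cue is shifted by context (e.g.,
# speaking rate or talker), so raw cues mix category and context variance.
context_mean = rng.normal(0.0, 2.0, size=n)               # per-token context shift
category = rng.integers(0, 2, size=n)                     # category label: 0 or 1
raw_cue = context_mean + np.where(category == 1, 1.0, -1.0) + rng.normal(0, 0.3, n)

# Relative encoding: express each cue relative to the expectation set up by
# its context, a simple stand-in for predictive-coding-style normalization.
relative_cue = raw_cue - context_mean

def exemplar_accuracy(cues, labels):
    """Leave-one-out nearest-exemplar classification (a minimal exemplar model)."""
    correct = 0
    for i in range(len(cues)):
        d = np.abs(cues - cues[i])
        d[i] = np.inf                                      # exclude the probe itself
        correct += labels[np.argmin(d)] == labels[i]
    return correct / len(cues)

print("raw encoding:     ", exemplar_accuracy(raw_cue, category))
print("relative encoding:", exemplar_accuracy(relative_cue, category))
```

In this toy setup, relative encoding classifies more accurately because the subtraction removes variance unrelated to category identity; the paper's point, however, is that model-human fit, not raw accuracy, is the diagnostic comparison.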



Citations
Journal Article (DOI)

Infant-directed speech is consistent with teaching.

TL;DR: In this article, the authors use a formal theory of teaching, validated through experiments in other domains, as the basis for a detailed analysis of whether infant-directed speech (IDS) is well designed for teaching phonetic categories.
Journal Article (DOI)

Divide and conquer: How perceptual contrast sensitivity and perceptual learning cooperate in reducing input variation in speech perception.

TL;DR: The findings show that perceptual contrast effects precede lexically guided perceptual learning, at least in terms of temporal order, and potentially in terms of cognitive processing levels as well.
Journal Article (DOI)

When context is and isn't helpful: A corpus study of naturalistic speech

TL;DR: Analysis of top-down and normalization accounts for naturalistic listening tasks shows that, when there are systematic regularities in the contexts in which different sounds occur, normalization can actually increase category overlap rather than decrease it.
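A hypothetical numeric sketch of that point (invented numbers, not drawn from the corpus study): when category membership correlates with context, normalizing out the context mean can subtract part of the category signal itself.

```python
import numpy as np

# Invented illustration: category 1 tokens systematically occur in a context
# that shifts the cue upward, so the context shift itself carries category info.
rng = np.random.default_rng(1)
n = 1000
category = rng.integers(0, 2, size=n)
context_shift = np.where(category == 1, 2.0, 0.0)         # correlated with category
cue = context_shift + np.where(category == 1, 0.5, -0.5) + rng.normal(0, 0.5, n)

def separation(x, y):
    """Distance between category means in pooled-SD units (lower = more overlap)."""
    return abs(x.mean() - y.mean()) / np.concatenate([x, y]).std()

normalized = cue - context_shift                           # "normalize" away context
print("separation, raw:       ", separation(cue[category == 0], cue[category == 1]))
print("separation, normalized:", separation(normalized[category == 0], normalized[category == 1]))
```

Here normalization shrinks the separation between categories, i.e., increases their overlap, precisely because the sounds occurred systematically in different contexts.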
Journal Article (DOI)

Now you hear it, now you don't: Malleable illusory vowel effects in Spanish–English bilinguals

TL;DR: The authors found that late bilinguals do not simply learn to perceive initial [s]-consonant sequences veridically, but rather that elements of both of their phonotactic systems interact dynamically during speech perception as listeners work to identify what they just heard.
References
Journal Article (DOI)

Learning representations by back-propagating errors

TL;DR: Back-propagation repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector; as a result of these weight adjustments, internal "hidden" units come to represent important features of the task domain.
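As a toy sketch of that procedure (illustrative only; the architecture, learning rate, and iteration count are arbitrary choices, not taken from the paper), a one-hidden-layer network can learn XOR by propagating output error backwards through the weights:

```python
import numpy as np

# Toy back-propagation: one hidden layer, squared-error loss, full-batch
# gradient descent on XOR. All hyperparameters are arbitrary illustrations.
rng = np.random.default_rng(2)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 4)); b1 = np.zeros(4)            # input -> hidden
W2 = rng.normal(0, 1, (4, 1)); b2 = np.zeros(1)            # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the actual output vector.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: propagate the output error through the network.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Adjust weights to reduce the actual/desired output difference.
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))   # typically approaches [0, 1, 1, 0]
```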
Journal Article (DOI)

Multilayer feedforward networks are universal approximators

TL;DR: It is rigorously established that standard multilayer feedforward networks with as few as one hidden layer using arbitrary squashing functions are capable of approximating any Borel measurable function from one finite dimensional space to another to any desired degree of accuracy, provided sufficiently many hidden units are available.
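An informal paraphrase of the result in symbols (this uniform form is the version for continuous functions on a compact set; the paper's full statement covers Borel measurable functions under a weaker metric):

```latex
% Universal approximation (informal paraphrase, continuous case):
% for continuous $f$ on compact $K \subset \mathbb{R}^n$, a squashing function
% $\sigma$, and any $\varepsilon > 0$, there exist $N$ and parameters such that
\exists\, N,\ a_j, b_j \in \mathbb{R},\ w_j \in \mathbb{R}^n:\quad
\sup_{x \in K} \left| f(x) - \sum_{j=1}^{N} a_j\, \sigma\!\left(w_j^{\top} x + b_j\right) \right| < \varepsilon
```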
Journal Article (DOI)

Whatever next? Predictive brains, situated agents, and the future of cognitive science

TL;DR: This target article critically examines this "hierarchical prediction machine" approach, concluding that it offers the best clue yet to the shape of a unified science of mind and action.