Michael M. Cohen
Researcher at University of California, Santa Cruz
Publications - 53
Citations - 3619
Michael M. Cohen is an academic researcher from the University of California, Santa Cruz. The author has contributed to research in topics: Speech perception & Visible Speech. The author has an h-index of 34 and has co-authored 53 publications receiving over 3,500 citations.
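For context, the h-index quoted above is the largest h such that the author has h publications each cited at least h times. A minimal sketch of that computation in Python; the citation counts in the example are made up for illustration:

```python
def h_index(citations):
    """h-index: the largest h such that the author has h papers
    each cited at least h times."""
    cited = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cited, start=1):
        if count >= rank:
            h = rank  # this paper still has at least `rank` citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4 (hypothetical citation counts)
```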
Papers
Book Chapter
Modeling Coarticulation in Synthetic Visual Speech
TL;DR: An implementation of Lofqvist’s (1990) gestural theory of speech production for visual speech synthesis is described, along with the graphically controlled development system.
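The coarticulation model described in this chapter (the Cohen-Massaro model) computes each facial control parameter as a dominance-weighted average of overlapping segment targets, where each segment's dominance falls off as a negative exponential around its center. A minimal sketch in Python; the segment timings, targets, and rate constants below are invented purely for illustration:

```python
import math

def dominance(t, center, alpha, theta, c=1.0):
    """Negative-exponential dominance of one segment at time t."""
    return alpha * math.exp(-theta * abs(t - center) ** c)

def blended_parameter(t, segments):
    """Facial parameter at time t: dominance-weighted average of segment targets."""
    num = den = 0.0
    for seg in segments:
        d = dominance(t, seg["center"], seg["alpha"], seg["theta"])
        num += d * seg["target"]
        den += d
    return num / den if den else 0.0

# Hypothetical lip-rounding targets for /u/ followed by /i/ (illustrative values).
segments = [
    {"center": 0.10, "target": 0.9, "alpha": 1.0, "theta": 12.0},  # rounded /u/
    {"center": 0.25, "target": 0.1, "alpha": 1.0, "theta": 12.0},  # spread /i/
]
for t in (0.05, 0.10, 0.175, 0.25, 0.30):
    print(f"t={t:.3f}s  rounding={blended_parameter(t, segments):.2f}")
```

Because the dominance functions overlap, each parameter is influenced by neighboring segments, which is what produces coarticulation in the synthetic face.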
Book Chapter
“Your Word is my Command”: Google Search by Voice: A Case Study
Johan Schalkwyk,Doug Beeferman,Francoise Beaufays,Bill Byrne,Ciprian Chelba,Michael M. Cohen,Maryam Kamvar,Brian Strope +7 more
TL;DR: An important goal at Google is to make spoken access ubiquitously available, and performance works well enough that the voice modality adds no friction to the interaction.
Journal Article
Phonological context in speech perception
TL;DR: Identification of synthetic speech varying in both acoustic featural information and phonological context allowed quantitative tests of various models of how these two sources of information are evaluated and integrated in speech perception.
Proceedings Article
Universal speech tools: the CSLU toolkit.
Stephen Sutton,Ronald A. Cole,Jacques de Villiers,Johan Schalkwyk,Pieter J. Vermeulen,Michael W. Macon,Yonghong Yan,Edward C. Kaiser,Brian Rundle,Khaldoun Shobaki,John-Paul Hosom,Alexander Kain,Johan Wouters,Dominic W. Massaro,Michael M. Cohen +14 more
TL;DR: Describes recent improvements, additions, and uses of the CSLU Toolkit, which makes the core technology and fundamental infrastructure accessible, affordable, and easy to use.
Journal Article
Perception of asynchronous and conflicting visual and auditory speech
TL;DR: The fuzzy logical model of perception (FLMP), which accurately describes integration, was used to measure the degree to which integration of audible and visible speech occurred, and to provide information about the temporal window of integration and its apparent dependence on the range of speech events in the test.
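The FLMP's integration rule multiplies the independent support each modality lends a response alternative and then normalizes across alternatives: P(r | A, V) = a_r v_r / sum_k a_k v_k. A minimal sketch in Python, with the per-modality truth values invented for illustration:

```python
def flmp(auditory, visual):
    """FLMP integration: multiply per-modality support for each response
    alternative, then normalize so the supports sum to 1."""
    combined = {r: auditory[r] * visual[r] for r in auditory}
    total = sum(combined.values())
    return {r: s / total for r, s in combined.items()}

# Illustrative truth values for an audible /ba/ paired with a visible /da/
# (a classic conflicting-cue case; the numbers are hypothetical).
auditory = {"ba": 0.8, "da": 0.2}
visual   = {"ba": 0.1, "da": 0.9}
print(flmp(auditory, visual))  # -> roughly {'ba': 0.31, 'da': 0.69}
```

Under this rule the visible /da/ can outweigh the audible /ba/, which is how the model accounts for perception of conflicting audiovisual speech.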