
Elizabeth Shriberg

Researcher at SRI International

Publications - 196
Citations - 13547

Elizabeth Shriberg is an academic researcher at SRI International. Her research focuses on speaker recognition and prosody. She has an h-index of 55 and has co-authored 192 publications receiving 12778 citations. Previous affiliations of Elizabeth Shriberg include the International Computer Science Institute (ICSI) and Unisys.

Papers
Journal Article

Dialogue act modeling for automatic tagging and recognition of conversational speech

TL;DR: The authors propose a statistical approach to modeling dialogue acts in conversational speech, i.e., speech-act-like units such as STATEMENT, QUESTION, BACKCHANNEL, AGREEMENT, DISAGREEMENT, and APOLOGY.
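
For intuition only, a hedged sketch of this style of statistical dialogue act tagging is given below; it is not the authors' code, and the label set, probabilities, and function names are assumptions made for illustration.

```python
# Minimal sketch (not the authors' implementation) of HMM-style dialogue act
# tagging: hidden states are dialogue act labels, transitions come from a
# bigram "discourse grammar", and each utterance is scored under per-act
# likelihoods (e.g., from per-act word language models). All names and
# probabilities are illustrative placeholders.

ACTS = ["STATEMENT", "QUESTION", "BACKCHANNEL", "AGREEMENT"]

def viterbi(utterance_loglik, trans_logprob, init_logprob):
    """Return the most likely dialogue act sequence.

    utterance_loglik[t][a] = log P(words of utterance t | act a)
    trans_logprob[p][a]    = log P(act a | previous act p)
    init_logprob[a]        = log P(act a starts the conversation)
    """
    n = len(utterance_loglik)
    best = [{a: init_logprob[a] + utterance_loglik[0][a] for a in ACTS}]
    back = [{}]
    for t in range(1, n):
        best.append({})
        back.append({})
        for a in ACTS:
            prev, score = max(
                ((p, best[t - 1][p] + trans_logprob[p][a]) for p in ACTS),
                key=lambda x: x[1],
            )
            best[t][a] = score + utterance_loglik[t][a]
            back[t][a] = prev
    # Trace back the best-scoring label sequence.
    last = max(best[-1], key=best[-1].get)
    seq = [last]
    for t in range(n - 1, 0, -1):
        last = back[t][last]
        seq.append(last)
    return list(reversed(seq))
```
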
Proceedings Article

The ICSI Meeting Corpus

TL;DR: This paper presents a corpus of natural meetings recorded at the International Computer Science Institute in Berkeley, California over a three-year period; the corpus supports work in automatic speech recognition, noise robustness, dialog modeling, prosody, rich transcription, information retrieval, and more.
Journal Article

Prosody-based automatic segmentation of speech into sentences and topics

TL;DR: This work combines prosodic cues with word-based approaches and evaluates performance on two speech corpora, Broadcast News and Switchboard, finding that the prosodic model achieves comparable performance with significantly less training data and requires no hand-labeling of prosodic events.
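
For illustration, a hedged sketch of one way prosodic and word-based cues could be fused for boundary detection appears below; it is not the paper's actual method, and the interpolation weight, threshold, and toy posteriors are assumptions.

```python
# Minimal sketch (an assumption, not the paper's method): fuse a prosodic
# boundary posterior with a word-based (language model) boundary posterior by
# log-linear interpolation, then threshold. The weight `lam` and the threshold
# are placeholders; posteriors are assumed to lie strictly in (0, 1).
import math

def combine_posteriors(p_prosody, p_lexical, lam=0.5):
    """Log-linearly interpolate two boundary posteriors and renormalize."""
    log_yes = lam * math.log(p_prosody) + (1.0 - lam) * math.log(p_lexical)
    log_no = lam * math.log(1.0 - p_prosody) + (1.0 - lam) * math.log(1.0 - p_lexical)
    return math.exp(log_yes) / (math.exp(log_yes) + math.exp(log_no))

def segment(boundary_posteriors, threshold=0.5):
    """Mark a sentence/topic boundary wherever the fused score clears the threshold."""
    return [i for i, (pp, pl) in enumerate(boundary_posteriors)
            if combine_posteriors(pp, pl) >= threshold]

# Toy usage: per-word-boundary (prosodic, lexical) posteriors.
print(segment([(0.9, 0.8), (0.2, 0.1), (0.6, 0.7)]))  # -> [0, 2]
```
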
Proceedings Article

Expanding the scope of the ATIS task: the ATIS-3 corpus

TL;DR: This paper describes the migration of the ATIS task to a richer relational database and the resulting ATIS-3 development corpus, including breakdowns of data by type (e.g., context-independent, context-dependent, and unevaluable) and variations in the data collected at different sites.
Proceedings Article

Prosody-based automatic detection of annoyance and frustration in human-computer dialog.

TL;DR: Results show that a prosodic model can predict whether an utterance is neutral versus "annoyed or frustrated" with an accuracy on par with that of human interlabeler agreement.
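
A hedged sketch of this kind of prosody-based classification follows; it is not the paper's system, and the prosodic features and toy training data are invented for illustration only.

```python
# Minimal sketch (not the paper's system): a small decision tree over
# utterance-level prosodic features to separate "neutral" from
# "annoyed/frustrated". The feature set and the toy data are placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Columns: [mean_f0_hz, f0_range_hz, speaking_rate_syl_per_s, rms_energy]
X = np.array([
    [180.0,  40.0, 4.5, 0.02],   # neutral
    [175.0,  35.0, 4.2, 0.02],   # neutral
    [230.0, 110.0, 3.1, 0.06],   # annoyed/frustrated
    [240.0, 120.0, 2.9, 0.07],   # annoyed/frustrated
])
y = np.array([0, 0, 1, 1])       # 0 = neutral, 1 = annoyed/frustrated

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(clf.predict([[235.0, 100.0, 3.0, 0.05]]))  # -> [1]
```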