
Yun-Cheng Ju

Researcher at Microsoft

Publications - 92
Citations - 2608

Yun-Cheng Ju is an academic researcher at Microsoft whose work centers on language modeling and voice search. He has an h-index of 25 and has co-authored 92 publications receiving 2573 citations. His previous affiliations include the University of Illinois at Urbana–Champaign and the University of Southern California.

Papers
Patent

Method and apparatus for automatic grammar generation from data entries

TL;DR: This patent presents a method of generating an optimized grammar for speech recognition from a data set or large list of items, including the steps of obtaining a tree that represents the items in the data set and generating the grammar from that tree.
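
The summarized idea is mechanical enough to sketch. The toy Python script below (names such as make_trie and trie_to_grammar are illustrative, not from the patent) builds a prefix tree over the word sequences in a list of items and emits BNF-style rules in which shared prefixes are factored into a single branch:

```python
from itertools import count

def make_trie(items):
    """Insert each item's word sequence into a nested-dict prefix tree."""
    root = {}
    for item in items:
        node = root
        for word in item.split():
            node = node.setdefault(word, {})
        node[None] = {}  # sentinel: an item may end here
    return root

def emit_rules(node, name, rules, fresh):
    """Turn one trie node into a rule; each branch becomes an alternative."""
    alts = []
    for word, child in node.items():
        if word is None:
            alts.append("<eps>")                # the item ends at this node
        elif set(child) == {None}:
            alts.append(word)                   # leaf word, nothing follows
        else:
            sub = f"N{next(fresh)}"             # factor the shared suffix out
            alts.append(f"{word} {sub}")
            emit_rules(child, sub, rules, fresh)
    rules.append(f"{name} -> " + " | ".join(alts))

def trie_to_grammar(items):
    rules = []
    emit_rules(make_trie(items), "S", rules, count(1))
    return rules

for rule in trie_to_grammar(["pizza hut", "pizza palace", "panda express"]):
    print(rule)
# N1 -> hut | palace
# N2 -> express
# S -> pizza N1 | panda N2
```

Factoring shared prefixes this way keeps the grammar compact when many entries in the list begin with the same words.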
Patent

Application controls for speech enabled recognition

TL;DR: This patent describes a web server that generates client-side markup including recognition and/or audible prompting controls, such as prompt, answer, confirmation, command, and validation controls.
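
As a rough illustration of the server-side generation step only: the element names speechControl, prompt, answer, and confirm below are hypothetical stand-ins, not the markup schema from the patent.

```python
import xml.etree.ElementTree as ET

def render_speech_control(field, prompt_text, grammar_url, confirm=True):
    """Render one speech-enabled form control as client-side markup."""
    ctrl = ET.Element("speechControl", id=field)
    # audible prompt played to the user
    ET.SubElement(ctrl, "prompt").text = prompt_text
    # answer control: which grammar to recognize against, and which field to fill
    ET.SubElement(ctrl, "answer", grammar=grammar_url, bind=field)
    if confirm:
        # confirmation control read back to the user
        ET.SubElement(ctrl, "confirm").text = f"Did you say {{{field}}}?"
    return ET.tostring(ctrl, encoding="unicode")

print(render_speech_control("city", "Which city?", "/grammars/cities.grxml"))
```

A client with a speech add-in would interpret such markup to play the prompt, run recognition against the referenced grammar, and bind the result to the form field.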
Patent

Speech recognition system with display information

TL;DR: This patent describes a language processing system that determines the display form of a spoken word by analyzing its spoken form with a language model containing dictionary entries for the display forms of homonyms.
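
A minimal sketch of that selection step, assuming a hand-rolled bigram language model and an invented homophone dictionary (none of the data below comes from the patent):

```python
# Toy homophone dictionary: spoken form -> candidate display forms.
HOMOPHONES = {"two": ["two", "too", "2"], "for": ["for", "four", "4"]}
# Toy bigram counts standing in for a real language model.
BIGRAMS = {("meet", "at"): 5, ("at", "2"): 4, ("2", "pm"): 6,
           ("at", "two"): 1, ("two", "pm"): 1}

def bigram_score(prev, word):
    return BIGRAMS.get((prev, word), 0) + 1   # add-one smoothing

def display_form(words):
    out = []
    for i, w in enumerate(words):
        prev = out[-1] if out else "<s>"
        nxt = words[i + 1] if i + 1 < len(words) else "</s>"
        candidates = HOMOPHONES.get(w, [w])
        # pick the candidate the language model prefers in this context
        out.append(max(candidates,
                       key=lambda c: bigram_score(prev, c) * bigram_score(c, nxt)))
    return " ".join(out)

print(display_form("meet at two pm".split()))   # -> "meet at 2 pm"
```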
Patent

Intra-language statistical machine translation

TL;DR: This patent applies an intra-language statistical machine translation (SMT) model to translate between queries submitted to a search engine and listings in the same human language, where the listings are text strings of the formal names of real-world entities that the search engine searches to find matches for the query strings.
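
In noisy-channel terms, the sketch below ranks listings by a listing prior times a word-level "translation" probability; all entries, probabilities, and the example query are invented for illustration and this is not the patented model:

```python
import math

LISTINGS = {"kung ho cuisine of china": 0.4, "china kitchen": 0.6}  # toy priors
# p(query word | listing word): callers say "restaurant" for formal-name words
TRANSLATION = {("restaurant", "cuisine"): 0.3, ("restaurant", "kitchen"): 0.3}

def word_prob(q, listing_words):
    """A query word is either copied verbatim or 'translated' from the listing."""
    best = max((TRANSLATION.get((q, lw), 0.0) for lw in listing_words), default=0.0)
    return max(1.0 if q in listing_words else 0.0, best, 1e-4)  # floor = smoothing

def score(query, listing, prior):
    lw = listing.split()
    return math.log(prior) + sum(math.log(word_prob(q, lw)) for q in query.split())

query = "kung ho restaurant"
best = max(LISTINGS, key=lambda l: score(query, l, LISTINGS[l]))
print(best)   # -> "kung ho cuisine of china"
```

The point of the intra-language framing is exactly this gap: what users say ("kung ho restaurant") rarely matches the formal listing string verbatim, so a translation model bridges the two.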
Patent

Transcribing speech data with dialog context and/or recognition alternative information

TL;DR: This patent provides a framework for easy and accurate transcription of speech data: utterances related to a single task are grouped together and processed using their associated sets of recognition results and/or context information, so that the transcription chosen for a selected recognition result can be assigned to each of the utterances under consideration.
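
The workflow reduces to a grouping-and-propagation loop; the sketch below uses invented field names (session, task, nbest) rather than anything from the patent:

```python
from collections import defaultdict

def group_by_task(utterances):
    """Group utterances that belong to the same dialog task in a session."""
    groups = defaultdict(list)
    for utt in utterances:
        groups[(utt["session"], utt["task"])].append(utt)
    return groups

def transcribe_group(group, selected_text):
    """Assign one chosen transcription to every utterance in the group."""
    for utt in group:
        utt["transcription"] = selected_text

utts = [
    {"session": 1, "task": "get_city", "nbest": ["austin", "boston"]},
    {"session": 1, "task": "get_city", "nbest": ["boston", "austin"]},
]
for group in group_by_task(utts).values():
    # in a real tool a transcriber picks from the combined n-best lists;
    # here we simply take the most common top hypothesis in the group
    top = max((u["nbest"][0] for u in group),
              key=lambda t: sum(u["nbest"][0] == t for u in group))
    transcribe_group(group, top)
print(utts)
```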