Author

Igor R. Jablokov

Bio: Igor R. Jablokov is an academic researcher from Nuance Communications. The author has contributed to research in topics: VoiceXML & Multimodal interaction. The author has an h-index of 10 and has co-authored 12 publications receiving 716 citations.

Papers
Patent
04 Feb 2008
TL;DR: In this article, a method is described for ordering recognition results produced by an automatic speech recognition (ASR) engine for a multimodal application, implemented with a grammar of the multimodal application in the ASR engine.
Abstract: A method is described for ordering recognition results produced by an automatic speech recognition (ASR) engine for a multimodal application implemented with a grammar of the multimodal application in the ASR engine, with the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to the ASR engine through a VoiceXML interpreter. The method includes: receiving, in the VoiceXML interpreter from the multimodal application, a voice utterance; determining, by the VoiceXML interpreter using the ASR engine, a plurality of recognition results in dependence upon the voice utterance and the grammar; determining, by the VoiceXML interpreter according to semantic interpretation scripts of the grammar, a weight for each recognition result; and sorting, by the VoiceXML interpreter, the plurality of recognition results in dependence upon the weight for each recognition result.
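A minimal sketch of the weighting-and-sorting step described above, assuming a hypothetical RecognitionResult type and example weights; in the patent the weights come from semantic interpretation scripts in the grammar, not from application code.

```python
from dataclasses import dataclass

@dataclass
class RecognitionResult:
    text: str      # recognized phrase returned by the ASR engine
    weight: float  # weight assigned by a semantic interpretation script

def sort_recognition_results(results):
    """Order recognition results from highest to lowest weight."""
    return sorted(results, key=lambda r: r.weight, reverse=True)

results = [
    RecognitionResult("call john smith", weight=0.4),
    RecognitionResult("call john smythe", weight=0.9),
]
for r in sort_recognition_results(results):
    print(r.weight, r.text)
```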

189 citations

Patent
16 Jun 2005
TL;DR: In this paper, a method is described for establishing a multimodal application voice, including selecting a voice personality for the application and creating a VoiceXML dialog in dependence upon the voice personality.
Abstract: Establishing a multimodal application voice including selecting a voice personality for the multimodal application and creating in dependence upon the voice personality a VoiceXML dialog. Selecting a voice personality for the multimodal application may also include retrieving a user profile and selecting a voice personality for the multimodal application in dependence upon the user profile. Selecting a voice personality for the multimodal application may also include retrieving a sponsor profile and selecting a voice personality for the multimodal application in dependence upon the sponsor profile. Selecting a voice personality for the multimodal application may also include retrieving a system profile and selecting a voice personality for the multimodal application in dependence upon the system profile.
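A minimal sketch of the profile-driven selection described above; the profile sources, field names, and personality labels are hypothetical, since the patent only states that selection may depend on a user, sponsor, or system profile.

```python
def select_voice_personality(user_profile=None, sponsor_profile=None, system_profile=None):
    """Pick a voice personality, preferring the most specific profile available."""
    for profile in (user_profile, sponsor_profile, system_profile):
        if profile and "preferred_voice" in profile:
            return profile["preferred_voice"]
    return "neutral"  # hypothetical default personality

# The selected personality would then parameterize the generated VoiceXML dialog.
dialog_voice = select_voice_personality(user_profile={"preferred_voice": "friendly-female"})
print(dialog_voice)
```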

96 citations

Patent
16 Jun 2005
TL;DR: In this paper, the authors describe methods, systems, and products for synchronizing visual and speech events in a multimodal application, including receiving speech from a user; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function.
Abstract: Exemplary methods, systems, and products are disclosed for synchronizing visual and speech events in a multimodal application, including receiving from a user speech; determining a semantic interpretation of the speech; calling a global application update handler; identifying, by the global application update handler, an additional processing function in dependence upon the semantic interpretation; and executing the additional function. Typical embodiments may include updating a visual element after executing the additional function. Typical embodiments may include updating a voice form after executing the additional function. Typical embodiments also may include updating a state table after updating the voice form. Typical embodiments also may include restarting the voice form after executing the additional function.
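A minimal sketch of the global application update handler flow; the handler registry, interpretation strings, and update functions are hypothetical illustrations of how a semantic interpretation might be mapped to additional processing.

```python
def update_visual_element():
    print("visual element updated")

def update_voice_form():
    print("voice form updated and restarted")

# Hypothetical mapping from semantic interpretations to additional processing functions.
ADDITIONAL_FUNCTIONS = {
    "select-flight": update_visual_element,
    "change-date": update_voice_form,
}

def global_application_update_handler(semantic_interpretation):
    """Identify and execute the additional processing function for this interpretation."""
    func = ADDITIONAL_FUNCTIONS.get(semantic_interpretation)
    if func:
        func()

global_application_update_handler("select-flight")
```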

79 citations

Patent
16 Jun 2005
TL;DR: In this article, the authors present a system for modifying a grammar of a hierarchical multimodal menu that includes monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the command.
Abstract: Methods, systems, and computer program products are provided for modifying a grammar of a hierarchical multimodal menu that include monitoring a user invoking a speech command in a first tier grammar, and adding the speech command to a second tier grammar in dependence upon the frequency of the user invoking the speech command. Adding the speech command to a second tier grammar may be carried out by adding the speech command to a higher tier grammar or by adding the speech command to a lower tier grammar. Adding the speech command to a second tier grammar may include storing the speech command in a grammar cache in the second tier grammar.
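A minimal sketch of frequency-based promotion of a speech command between menu tiers; the tier contents, threshold, and command names are hypothetical, and the second tier is modeled as a grammar cache as the abstract allows.

```python
from collections import Counter

PROMOTION_THRESHOLD = 3            # hypothetical invocation count
invocation_counts = Counter()
first_tier_grammar = {"check balance", "transfer funds"}
grammar_cache = set()              # stands in for the second-tier grammar cache

def monitor_command(command):
    """Count an invocation and promote the command once it becomes frequent."""
    invocation_counts[command] += 1
    if command in first_tier_grammar and invocation_counts[command] >= PROMOTION_THRESHOLD:
        grammar_cache.add(command)

for _ in range(3):
    monitor_command("transfer funds")
print(grammar_cache)               # {'transfer funds'}
```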

70 citations

Patent
12 Apr 2007
TL;DR: In this paper, the authors describe a multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of user interaction, including a voice mode and one or more non-voice modes.
Abstract: Methods, apparatus, and products are disclosed for providing expressive user interaction with a multimodal application, the multimodal application operating in a multimodal browser on a multimodal device supporting multiple modes of user interaction including a voice mode and one or more non-voice modes, the multimodal application operatively coupled to a speech engine through a VoiceXML interpreter, including: receiving, by the multimodal browser, user input from a user through a particular mode of user interaction; determining, by the multimodal browser, user output for the user in dependence upon the user input; determining, by the multimodal browser, a style for the user output in dependence upon the user input, the style specifying expressive output characteristics for at least one other mode of user interaction; and rendering, by the multimodal browser, the user output in dependence upon the style.
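A minimal sketch of deriving an expressive style from the user input and rendering the output with it; the style table, mode names, and prosody attributes are hypothetical stand-ins for the expressive output characteristics the abstract leaves unspecified.

```python
# Hypothetical styles: expressive characteristics for the voice and visual modes.
STYLES = {
    "angry":   {"voice": {"rate": "fast", "volume": "loud"},     "display": {"color": "red"}},
    "neutral": {"voice": {"rate": "medium", "volume": "medium"}, "display": {"color": "black"}},
}

def determine_style(user_input):
    """Derive an expressive style for the other interaction modes from the input."""
    return STYLES["angry"] if user_input.get("shouted") else STYLES["neutral"]

def render_output(text, style):
    print(f"speak {text!r} with prosody {style['voice']}, display in {style['display']['color']}")

user_input = {"mode": "voice", "text": "cancel my order", "shouted": True}
render_output("Your order has been cancelled.", determine_style(user_input))
```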

60 citations


Cited by
Patent
11 Jan 2011
TL;DR: In this article, an intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions.
Abstract: An intelligent automated assistant system engages with the user in an integrated, conversational manner using natural language dialog, and invokes external services when appropriate to obtain information or perform various actions. The system can be implemented using any of a number of different platforms, such as the web, email, smartphone, and the like, or any combination thereof. In one embodiment, the system is based on sets of interrelated domains and tasks, and employs additional functionality powered by external services with which the system can interact.

1,462 citations

Patent
TL;DR: In this paper, a system is presented for receiving speech and non-speech communications of natural language questions and commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands.
Abstract: Systems and methods are provided for receiving speech and non-speech communications of natural language questions and/or commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands. The invention applies context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for one or more users presenting questions or commands across multiple domains. The systems and methods create, store, and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech and non-speech communications and presenting the expected results for a particular question or command.

1,164 citations

Patent
29 Aug 2006
TL;DR: In this article, a mobile system is provided that includes speech-based and non-speech-based interfaces for telematics applications; the system identifies and uses context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for users that submit requests and/or commands in multiple domains.
Abstract: A mobile system is provided that includes speech-based and non-speech-based interfaces for telematics applications. The mobile system identifies and uses context, prior information, domain knowledge, and user-specific profile data to achieve a natural environment for users that submit requests and/or commands in multiple domains. The invention creates, stores, and uses extensive personal profile information for each user, thereby improving the reliability of determining the context and presenting the expected results for a particular question or command. The invention may organize domain-specific behavior and information into agents that are distributable or updateable over a wide area network.

716 citations

Patent
28 Sep 2012
TL;DR: In this article, a virtual assistant uses context information to supplement natural language or gestural input from a user, which helps to clarify the user's intent, reduce the number of candidate interpretations of the user's input, and reduce the need for the user to provide excessive clarification input.
Abstract: A virtual assistant uses context information to supplement natural language or gestural input from a user. Context helps to clarify the user's intent and to reduce the number of candidate interpretations of the user's input, and reduces the need for the user to provide excessive clarification input. Context can include any available information that is usable by the assistant to supplement explicit user input to constrain an information-processing problem and/or to personalize results. Context can be used to constrain solutions during various phases of processing, including, for example, speech recognition, natural language processing, task flow processing, and dialog generation.
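A minimal sketch of using context to prune candidate interpretations of a user request; the context fields, candidate list, and filtering rule are hypothetical, since the abstract only states that context constrains processing phases such as speech recognition and natural language processing.

```python
def filter_candidates(candidates, context):
    """Keep only interpretations compatible with the current context."""
    domain = context.get("active_domain")
    return [c for c in candidates if domain is None or c["domain"] == domain]

candidates = [
    {"text": "play Paris", "domain": "music"},
    {"text": "directions to Paris", "domain": "navigation"},
]
context = {"active_domain": "navigation"}  # e.g. the user is currently in a maps application
print(filter_candidates(candidates, context))
```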

593 citations

Patent
11 Dec 2007
TL;DR: In this paper, a conversational, natural language voice user interface may provide an integrated voice navigation services environment, where the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.

450 citations