Institution

Nuance Communications

Company · Vienna, Austria
About: Nuance Communications is a company based in Vienna, Austria. It is known for research contributions in the topics of Speech processing and Voice activity detection. The organization has 1518 authors who have published 1701 publications receiving 54891 citations. The organization is also known as ScanSoft and ScanSoft Inc.


Papers
Patent
11 May 2009
TL;DR: In this paper, the pointing device can be a touchpad, a mouse, a pen, or any device capable of providing a two- or three-dimensional location, and a representation of the location of the pointing device over a virtual keyboard/pad can be dynamically shown on an associated display.
Abstract: A selective input system and associated method is provided which tracks the motion of a pointing device over a region or area. The pointing device can be a touchpad, a mouse, a pen, or any device capable of providing a two- or three-dimensional location. The region or area is preferably augmented with a printed or actual keyboard/pad. Alternatively, a representation of the location of the pointing device over a virtual keyboard/pad can be dynamically shown on an associated display. The system identifies selections of items or characters by detecting parameters of motion of the pointing device, such as length of motion, a change in direction, a change in velocity, and/or a lack of motion at locations that correspond to features on the keyboard/pad. The input system is preferably coupled to a text disambiguation system such as a T9® or SloppyType™ system, to improve the accuracy and usability of the input system.

233 citations
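The selection-by-motion idea in the abstract above (treating a dwell, a direction change, or a velocity change over a key as a selection) can be illustrated with a small sketch. The Python snippet below shows only the dwell heuristic; the thresholds, the sample format, and the key_at lookup are assumptions made for illustration, not the patented implementation.

```python
import math

# Hypothetical sketch: treat a near-zero-velocity "dwell" of the pointer over
# a key region as a selection of that key. Thresholds and the key lookup are
# illustrative assumptions, not the method claimed in the patent.

DWELL_SECONDS = 0.25       # assumed dwell time that counts as a selection
SPEED_THRESHOLD = 5.0      # assumed speed (px/s) below which the pointer is "still"

def detect_dwell_selections(samples, key_at):
    """samples: list of (t, x, y) pointer samples ordered by time.
    key_at: callable mapping (x, y) to a key label or None.
    Returns the list of keys selected by dwelling."""
    selections = []
    dwell_start = None
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        dt = t1 - t0
        speed = math.hypot(x1 - x0, y1 - y0) / dt if dt > 0 else float("inf")
        if speed < SPEED_THRESHOLD:
            if dwell_start is None:
                dwell_start = t0
            if t1 - dwell_start >= DWELL_SECONDS:
                key = key_at(x1, y1)
                if key is not None and (not selections or selections[-1] != key):
                    selections.append(key)
        else:
            dwell_start = None   # pointer moved again; reset the dwell timer
    return selections
```

A real system would combine this with the other motion cues the abstract lists (direction and velocity changes) and hand the resulting key sequence to the disambiguation engine.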

Patent
10 Nov 2014
TL;DR: In this paper, the authors describe improved capabilities for interacting with a mobile communication facility, comprising receiving a switch activation from a user to initiate a speech recognition recording session, wherein the speech recognition recording session comprises a voice command from the user followed by the speech to be recognized from the user.
Abstract: In embodiments of the present invention improved capabilities are described for interacting with a mobile communication facility comprising receiving a switch activation from a user to initiate a speech recognition recording session, wherein the speech recognition recording session comprises a voice command from the user followed by the speech to be recognized from the user; recording the speech recognition recording session using a mobile communication facility resident capture facility; recognizing at least a portion of the voice command as an indication that user speech for recognition will begin following the end of the at least a portion of the voice command; recognizing the recorded speech using a speech recognition facility to produce an external output; and using the selected output to perform a function on the mobile communication facility.

231 citations
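The recording-session flow described above, a spoken command prefix followed by the speech to act on, can be sketched as a simple dispatcher. This is an assumption-laden illustration: the command vocabulary and handler functions are invented for the example and are not the patent's API.

```python
# Minimal sketch of splitting a recognized recording session into a leading
# voice command and the dictation that follows it, then dispatching on the
# command. All names below are hypothetical.

COMMAND_HANDLERS = {
    "send message": lambda text: print(f"queueing SMS: {text}"),
    "web search":   lambda text: print(f"searching for: {text}"),
    "take note":    lambda text: print(f"saving note: {text}"),
}

def handle_recording(transcript: str) -> None:
    """transcript: full recognized text of the recording session,
    e.g. 'send message meet you at noon'."""
    lowered = transcript.lower().strip()
    for command, handler in COMMAND_HANDLERS.items():
        if lowered.startswith(command):
            # Everything after the command prefix is the speech to act on.
            payload = transcript[len(command):].strip()
            handler(payload)
            return
    print("no recognized command prefix; treating whole utterance as dictation")

handle_recording("send message meet you at noon")
```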

Patent
12 Dec 2001
TL;DR: A system and method for verifying user identity, in accordance with the present invention, includes a conversational system for receiving inputs from a user and transforming the inputs into formal commands.
Abstract: A system and method for verifying user identity, in accordance with the present invention, includes a conversational system for receiving inputs from a user and transforming the inputs into formal commands. A behavior verifier is coupled to the conversational system for extracting features from the inputs. The features include behavior patterns of the user. The behavior verifier is adapted to compare the input behavior to a behavior model to determine if the user is authorized to interact with the system.

226 citations
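The verification step described above, extracting behavioral features from a user's inputs and comparing them to a per-user behavior model, could look roughly like the following sketch. The chosen features, the z-score comparison, and the threshold are assumptions made for illustration only.

```python
# Illustrative sketch of behavior verification: derive simple features from a
# user's command stream and test them against a stored per-user model.
# Features and the acceptance rule are assumptions, not the patent's model.

def extract_features(commands):
    """commands: list of (timestamp, command_name). Returns a feature dict."""
    gaps = [t2 - t1 for (t1, _), (t2, _) in zip(commands, commands[1:])]
    return {
        "mean_gap": sum(gaps) / len(gaps) if gaps else 0.0,
        "vocab_size": len({name for _, name in commands}),
    }

def is_authorized(features, model, max_z=3.0):
    """model: {feature_name: (mean, stddev)} learned from the enrolled user."""
    for name, value in features.items():
        mean, std = model[name]
        z = abs(value - mean) / std if std > 0 else 0.0
        if z > max_z:
            return False   # behavior deviates too far from the enrolled model
    return True
```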

Patent
16 Dec 2013
TL;DR: In this article, the authors describe a system comprising at least one processor configured to perform: receiving a first request to access a first user profile of a first user from a first device configured to execute a first virtual assistant to interact with the first user; and in response to receiving the first request, providing the first device with access to information in the first user profile so that the first virtual assistant is able to customize, based on the accessed information, its behavior when interacting with the first user.
Abstract: A system comprising at least one processor configured to perform: receiving a first request to access a first user profile of a first user from a first device configured to execute a first virtual assistant to interact with the first user; in response to receiving the first request, providing the first device with access to information in the first user profile so that the first virtual assistant is able to customize, based on the accessed information, its behavior when interacting with the first user; receiving a second request to access the first user profile from a second device configured to execute a second virtual assistant to interact with the first user; and in response to receiving the second request, providing the second device with access to the information so that the second virtual assistant is able to customize, based on the accessed information, its behavior when interacting with the first user.

223 citations
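The profile-sharing flow in the abstract above, two devices each running their own virtual assistant and both being granted access to the same user profile, is sketched below. Class names, the stored profile fields, and the authorization check (omitted) are hypothetical.

```python
# Hypothetical sketch: a central profile store serves access requests from
# different devices so each assistant can personalize its behavior.

class ProfileStore:
    def __init__(self):
        self._profiles = {"alice": {"locale": "de-AT", "voice": "f1", "units": "metric"}}

    def request_access(self, device_id: str, user_id: str) -> dict:
        """Grant a device read access to the user's profile
        (authorization checks omitted in this sketch)."""
        print(f"granting {device_id} access to profile of {user_id}")
        return self._profiles[user_id]

class VirtualAssistant:
    def __init__(self, device_id, store):
        self.device_id, self.store = device_id, store
        self.profile = None

    def connect(self, user_id):
        # The first and second requests in the abstract map to one call per device.
        self.profile = self.store.request_access(self.device_id, user_id)

    def greet(self):
        return f"[{self.device_id}] greeting in locale {self.profile['locale']}"

store = ProfileStore()
phone, car = VirtualAssistant("phone", store), VirtualAssistant("car", store)
phone.connect("alice"); car.connect("alice")
print(phone.greet()); print(car.greet())
```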

Patent
14 Jul 2003
TL;DR: In this article, a method for integrating processes with a multi-faceted human centered interface is provided, where a natural language model is used to parse voice initiated commands and data, and to route those voice initiated inputs to the required applications or processes.
Abstract: According to the present invention, a method for integrating processes with a multi-faceted human centered interface is provided. The interface implements a hands-free, voice-driven environment to control processes and applications. A natural language model is used to parse voice initiated commands and data, and to route those voice initiated inputs to the required applications or processes. The use of an intelligent context based parser allows the system to intelligently determine what processes are required to complete a task which is initiated using natural language. A single window environment provides an interface which is comfortable to the user by preventing distracting windows from appearing. The single window has a plurality of facets which allow distinct viewing areas. Each facet has an independent process routing its outputs thereto. As other processes are activated, each facet can reshape itself to bring a new process into one of the viewing areas. All activated processes are executed simultaneously to provide true multitasking.

222 citations
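The routing step described above, a context-based parser deciding which application should receive a voice-initiated command, can be approximated with a keyword-overlap sketch. The process names, keyword sets, and scoring rule are illustrative assumptions rather than the patented natural-language model.

```python
# Rough sketch: pick the target application for a spoken command by keyword
# overlap and forward the utterance to it. All names here are invented.

PROCESS_KEYWORDS = {
    "email":    {"mail", "email", "send", "reply"},
    "calendar": {"meeting", "appointment", "schedule"},
    "notes":    {"note", "remember", "jot"},
}

def route_command(utterance: str) -> tuple:
    """Return (process_name, utterance) for the best keyword match, or
    (None, utterance) when no process claims the command."""
    words = set(utterance.lower().split())
    best, best_hits = None, 0
    for process, keywords in PROCESS_KEYWORDS.items():
        hits = len(words & keywords)
        if hits > best_hits:
            best, best_hits = process, hits
    return best, utterance

print(route_command("schedule a meeting with the speech team"))  # ('calendar', ...)
```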


Authors


Name | H-index | Papers | Citations
Vinayak P. Dravid | 103 | 817 | 43612
Mehryar Mohri | 75 | 320 | 22868
Jinsong Wu | 70 | 566 | 16282
Horacio D. Espinosa | 67 | 315 | 16270
Shumin Zhai | 67 | 200 | 13447
Shang-Hua Teng | 66 | 265 | 16647
Dimitri Kanevsky | 62 | 362 | 14072
Marilyn A. Walker | 62 | 309 | 13429
Tara N. Sainath | 61 | 274 | 25183
Kenneth Church | 61 | 295 | 21179
John B Ketterson | 60 | 814 | 16929
Pascal Frossard | 59 | 637 | 22749
Michael Picheny | 57 | 244 | 11759
G. R. Scott Budinger | 56 | 196 | 12063
Jun Wu | 53 | 359 | 12110
Network Information
Related Institutions (5)

Google: 39.8K papers, 2.1M citations (82% related)
Microsoft: 86.9K papers, 4.1M citations (82% related)
Carnegie Mellon University: 104.3K papers, 5.9M citations (80% related)
Nokia: 28.3K papers, 695.7K citations (79% related)
Performance Metrics
No. of papers from the Institution in previous years

Year | Papers
2022 | 3
2021 | 24
2020 | 42
2019 | 55
2018 | 41
2017 | 53