Author

James Christopher Ramming

Bio: James Christopher Ramming is an academic researcher from AT&T. The author has contributed to research in topics: Wireless network & Network planning and design. The author has an h-index of 4 and has co-authored 6 publications receiving 223 citations.

Papers
Patent
07 Dec 1995
TL;DR: The MAWL language as discussed by the authors provides an expressive typing capability, and the compiler checks for common errors and self-consistency before compilation, so run-time error checking is avoided.
Abstract: A new application language called the MAWL language and a compiler for the new application language called the MAWL compiler are provided for use by programmers of World Wide Web services. The MAWL language and the MAWL compiler may be used to provide any World Wide Web service, but they are especially useful for programming interactive services. The MAWL language provides an expressive typing capability. Through this expressive ability, World Wide Web services that have defined states, sequences and sessions are straightforward to program, where previously such capabilities did not exist. Further, the MAWL compiler performs error checking for common errors and self-consistency before actual compiling, so run-time error checking is avoided. Together the MAWL language and the MAWL compiler greatly increase the productivity of the World Wide Web programmer and the complexity of the World Wide Web services that can reliably be provided.

104 citations
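The abstract's central claim is that form/field mismatches are caught before the service runs. MAWL itself is not shown here, so the following is only an illustrative sketch of that idea in Python (the form declarations, field names, and `check_handler` helper are all invented for illustration): a service declares the typed fields of each form, and a pre-deployment check rejects any handler that references an undeclared field.

```python
# Hypothetical sketch of MAWL-style compile-time consistency checking.
# All names here (FORM_FIELDS, check_handler, the "login" form) are invented;
# the point is that a field mismatch surfaces before deployment, not at run time.

FORM_FIELDS = {
    "login": {"user": str, "pin": int},   # declared typed fields of the form
}

def check_handler(form, referenced):
    """Reject any handler that references a field the form never declared."""
    declared = FORM_FIELDS[form]
    missing = [f for f in referenced if f not in declared]
    if missing:
        raise TypeError(f"form {form!r} has no field(s): {missing}")
    return {f: declared[f] for f in referenced}

check_handler("login", ["user", "pin"])    # OK: both fields are declared
# check_handler("login", ["user", "pwd"])  # would raise: 'pwd' is undeclared
```

In this sketch the check runs once, before the service is built, which is the build-time/run-time trade the abstract describes.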

Patent
12 Dec 2000
TL;DR: In this article, a method for placing a call intended for an enhanced network user on hold is disclosed, where a calling party is enabled to select the type of information provided while on hold, using a Distributed Feature Network (DFN).
Abstract: A method for placing a call intended for an enhanced network user on hold is disclosed. A calling party is enabled to select the type of information which is provided to the calling party while the calling party is on hold by using a Distributed Feature Network (DFN) architecture. The DFN includes a plurality of feature boxes with each feature box being created for the purpose of enabling a particular communication feature. A call intended for one of a group of ENUs is received by the DFN and an estimated hold time is determined for the call. The hold time is communicated to a caller associated with the call and the caller is provided with a list of options for information to be received by the caller while the caller is on hold. The caller's selection of a hold option is received by the DFN. A feature box is created for providing the caller with the selected information option, and the call is connected to the created feature box. When the DFN determines that one of the group of ENUs is available, the call is rerouted from the created feature box to the available ENU.

58 citations
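The abstract walks through a concrete flow: estimate hold time, offer the caller a menu of hold options, park the call in a feature box created for the chosen option, then reroute when an ENU frees up. A minimal sketch of that flow, with all class and method names invented (the patent does not specify an API):

```python
# Illustrative sketch only; class/method names and the hold-time formula are
# invented, not taken from the patent.
from collections import deque

class DistributedFeatureNetwork:
    def __init__(self, hold_options):
        self.hold_options = hold_options   # e.g. ["music", "news"]
        self.queue = deque()               # calls parked in feature boxes

    def receive_call(self, caller, choice, queue_ahead, avg_handle_secs=120):
        """Quote a rough hold estimate and park the call in a feature box."""
        estimate = queue_ahead * avg_handle_secs
        if choice not in self.hold_options:
            choice = self.hold_options[0]              # fall back to a default
        box = {"caller": caller, "feature": choice}    # per-call "feature box"
        self.queue.append(box)
        return estimate, box

    def enu_available(self, enu):
        """Reroute the longest-waiting parked call to the freed-up ENU."""
        box = self.queue.popleft()
        return {"caller": box["caller"], "routed_to": enu}
```

The feature-box-per-call shape mirrors the abstract's claim that a box "is created for the purpose of enabling a particular communication feature" and discarded once the call is rerouted.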

Patent
18 Mar 1997
TL;DR: In this article, a method and apparatus for retrieving information from a document server (160) using an audio interface device (110) is presented, in which an audio channel is established between the audio interface devices and the audio browsing node.
Abstract: A method and apparatus for retrieving information from a document server (160) using an audio interface device (110). In an advantageous embodiment, a telecommunications network includes an audio browsing node (150) comprising an audio processing node (152) and an audio interpreter node (154). An audio channel is established between the audio interface device and the audio browsing node. A document serving protocol channel (164) is established between the audio browsing node (150) and the document server (160). The document server (160) provides documents to the audio browsing node (150) via the document serving protocol channel (164). The audio browsing node (150) interprets the document into audio data and provides the audio data to the audio interface device (110) via the audio channel. The audio interface device (110) provides audio user input to the audio browsing node (150) via the audio channel. The audio browsing node (150) interprets the audio user input into user data appropriate to be provided to the document server (160) and provides the user data to the document server (160) via the document serving protocol channel (164).

31 citations
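The abstract describes a two-channel translation: documents arrive over the document-serving protocol channel, are rendered into audio for the handset, and spoken replies are mapped back into data the server understands. A hypothetical sketch of that translation step (function names, the document shape, and field names are invented for illustration):

```python
# Hypothetical sketch of the audio browsing node's translation role.
# The document shape and all function names are invented.

def render_to_audio(document):
    """Turn a fetched document's input fields into spoken prompts
    (node -> handset over the audio channel)."""
    return [f"Please say your {field}." for field in document["fields"]]

def interpret_reply(document, spoken):
    """Map recognized speech back onto the document's expected fields
    (handset -> document server over the protocol channel)."""
    return dict(zip(document["fields"], spoken))

doc = {"url": "weather/form", "fields": ["city", "day"]}
prompts = render_to_audio(doc)
reply = interpret_reply(doc, ["Boston", "Tuesday"])
```

The split mirrors the patent's architecture: the handset only ever sees audio, while the document server only ever sees ordinary protocol traffic.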

Patent
01 Sep 2009
TL;DR: In this article, a system and method for providing a temporary wireless service connection to one or more users within a wireless local area network is presented, where in-building services and Internet related services are provided to the users over their respective temporary wireless service connections.
Abstract: The invention provides a system and method for providing a temporary wireless service connection to one or more users within a wireless local area network. In-building services and Internet related services are provided to the users over their respective temporary wireless service connections. Each user is charged for their specific usage amounts, which may be based on the number of packets transferred, the number of bytes transferred, the number of distinct transactions and/or the time period each user's temporary wireless service connection was active.

23 citations
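The charging model above is a straightforward metered sum over whichever usage counters the operator records. A minimal sketch, with all rate values invented (the patent specifies the metering dimensions, not prices):

```python
# Minimal sketch of the metered-billing idea; RATES values are invented.
RATES = {
    "per_packet": 0.0001,          # dollars per packet transferred
    "per_byte": 0.000001,          # dollars per byte transferred
    "per_transaction": 0.05,       # dollars per distinct transaction
    "per_active_second": 0.002,    # dollars per second the connection is active
}

def session_charge(usage, rates=RATES):
    """Sum the charge over whichever usage counters were recorded."""
    key_map = {
        "packets": "per_packet",
        "bytes": "per_byte",
        "transactions": "per_transaction",
        "active_seconds": "per_active_second",
    }
    return round(sum(usage[k] * rates[key_map[k]] for k in usage), 4)

# A session metered only on transactions and active time:
session_charge({"transactions": 3, "active_seconds": 600})  # → 1.35
```

Because the sum runs only over the counters present in `usage`, the same function covers the "and/or" in the abstract: any subset of the four metering dimensions can be billed.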

Patent
07 Oct 2004
TL;DR: In this article, a distributed features system for telecommunication networks is described, which permits a telecommunication network to efficiently distribute data related to network addresses, such as IP addresses.
Abstract: A distributed features system is disclosed which permits a telecommunication network to efficiently distribute data related to network addresses.

4 citations


Cited by
PatentDOI
TL;DR: In this paper, a system for receiving speech and non-speech communications of natural language questions and commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands is presented.
Abstract: Systems and methods are provided for receiving speech and non-speech communications of natural language questions and/or commands, transcribing the speech and non-speech communications to textual messages, and executing the questions and/or commands. The invention applies context, prior information, domain knowledge, and user specific profile data to achieve a natural environment for one or more users presenting questions or commands across multiple domains. The systems and methods create, store and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech and non-speech communications and presenting the expected results for a particular question or command.

1,164 citations

Patent
29 Aug 2006
TL;DR: In this article, a mobile system is provided that includes speech-based and non-speech-based interfaces for telematics applications and that identifies and uses context, prior information, domain knowledge, and user specific profile data to achieve a natural environment for users that submit requests and/or commands in multiple domains.
Abstract: A mobile system is provided that includes speech-based and non-speech-based interfaces for telematics applications. The mobile system identifies and uses context, prior information, domain knowledge, and user specific profile data to achieve a natural environment for users that submit requests and/or commands in multiple domains. The invention creates, stores and uses extensive personal profile information for each user, thereby improving the reliability of determining the context and presenting the expected results for a particular question or command. The invention may organize domain specific behavior and information into agents that are distributable or updateable over a wide area network.

716 citations

Patent
11 Dec 2007
TL;DR: In this paper, a conversational, natural language voice user interface may provide an integrated voice navigation services environment, where the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.

450 citations

Patent
04 Aug 2006
TL;DR: In this article, a conversational human-machine interface is presented that includes a conversational speech analyzer, a general cognitive model, an environmental model, and a personalized cognitive model to determine context and domain knowledge and to invoke prior information to interpret a spoken utterance or a received non-spoken message.
Abstract: A system and method are provided for receiving speech and/or non-speech communications of natural language questions and/or commands and executing the questions and/or commands. The invention provides a conversational human-machine interface that includes a conversational speech analyzer, a general cognitive model, an environmental model, and a personalized cognitive model to determine context and domain knowledge and to invoke prior information to interpret a spoken utterance or a received non-spoken message. The system and method create, store and use extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech or non-speech communication and presenting the expected results for a particular question or command.

430 citations

Patent
16 Oct 2007
TL;DR: In this paper, a cooperative conversational voice user interface is presented, which builds upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance.
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.

413 citations
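The last abstract's mechanism, ranking hypotheses about utterance intent by certainty and wording the response to match, can be sketched in a few lines. This is only an illustration of the ranking-and-response idea; the confidence thresholds and the `respond` helper are invented, not taken from the patent:

```python
# Sketch of certainty-ranked responses; thresholds and wording are invented.
def respond(hypotheses):
    """hypotheses: list of (intent, confidence) pairs for one utterance."""
    intent, conf = max(hypotheses, key=lambda h: h[1])  # top-ranked hypothesis
    if conf >= 0.8:
        return f"OK - {intent}."                 # certain enough to act
    if conf >= 0.5:
        return f"Did you mean {intent}?"         # confirm before acting
    return "Sorry, could you rephrase that?"     # too uncertain; re-frame

respond([("play music", 0.9), ("call mom", 0.2)])
```

The middle branch is where the abstract's misrecognition tolerance lives: a shaky hypothesis is surfaced as a question, so the next utterance can correct the conversational course instead of triggering a wrong action.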