Abstract: An intelligent query system for processing voice-based queries is disclosed, which uses a combination of statistical and semantic processing to identify the question posed by the user by understanding the meaning of the user's utterance. Based on the identified meaning, the system selects the single stored question that best matches the user's query. The answer paired with that question is then retrieved and presented to the user. The system, as implemented, accepts environmental variables selected by the user and is scalable to provide answers to a variety and quantity of user-initiated queries.
Abstract: Speech signal information is formatted, processed and transported in accordance with a format adapted for TCP/IP protocols used on the Internet and other communications networks. NULL characters are used to indicate the end of a voice segment. The method is useful for distributed speech recognition systems, such as a client-server system, typically implemented on an intranet or over the Internet, which accepts user queries at a computer, PDA or workstation through a speech input interface.
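The NULL-terminated segment framing described above can be sketched as follows. This is an illustrative stand-in, not the patented format: the helper names are invented, and it assumes the encoded payload itself contains no NULL bytes.

```python
import socket

NULL_TERMINATOR = b"\x00"  # marks the end of one voice segment

def send_voice_segment(sock: socket.socket, payload: bytes) -> None:
    """Stream one voice segment and terminate it with a NULL character."""
    # A real codec would escape embedded NULLs; this sketch assumes the
    # payload encoding contains none.
    sock.sendall(payload + NULL_TERMINATOR)

def recv_voice_segment(sock: socket.socket) -> bytes:
    """Read bytes until the NULL end-of-segment marker (or EOF) arrives."""
    chunks = []
    while True:
        b = sock.recv(1)
        if not b or b == NULL_TERMINATOR:
            return b"".join(chunks)
        chunks.append(b)
```

Delimiter-based framing like this lets the receiver detect segment boundaries without a length prefix, which suits streaming speech data over a plain TCP connection.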
Abstract: A speech recognition system includes distributed processing across a client and server for recognizing a spoken query by a user. A number of different speech models for different languages are used to support and detect the language spoken by a user. In some implementations, an interactive electronic agent responds in the user's language to facilitate a real-time, human-like dialogue.
Type:
Grant
Filed:
January 7, 2005
Date of Patent:
October 2, 2007
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
Abstract: A speech recognition system uses speech recognition models which are specifically trained and optimized for users residing in a particular geographic area or region. The speech models are trained with samples of word variants expected to be used in a natural language by representative members of a population associated with the geographic region or community of users. The speech recognition system is configured to have a real-time response that imitates a dialogue with a human operator.
Type:
Grant
Filed:
January 7, 2005
Date of Patent:
May 29, 2007
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
Abstract: A speech-enabled Internet-based computing system includes a configurable speech recognition engine used for interacting with content on a web-accessible page. The speech recognition engine is distributed across a client and server architecture, and is adaptive so that speech processing operations can be allocated as needed between the two. This allows support for client devices having differing computing capabilities. Natural language operations can also be supported as desired. A user can thus interact with a web page and select items of interest using speech as a mode of input. Dynamic grammars can assist in the recognition operations to improve speed and comprehension.
Abstract: A real-time speech recognition system includes distributed processing across a client and server for recognizing a spoken query by a user. Both the client and server can dedicate a variable number of processing resources for performing speech recognition functions. In some implementations the partitioning of responsibility for speech recognition operations can be done on a client by client or query by query basis.
Type:
Grant
Filed:
January 7, 2005
Date of Patent:
November 21, 2006
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
Abstract: An Internet-based server with speech support for enhanced interactivity is disclosed. This server hosts a server-side speech recognition engine and additional linguistic and database functions that cooperate to provide enhanced interactivity for clients so that their browsing experience is more satisfying, efficient and productive. This human-like interactivity, which allows the user to ask questions on topics ranging from customer delivery to product descriptions and payment details, is facilitated by allowing the user to articulate his or her questions directly in natural language. The answer, typically provided in real time, can also be interfaced and integrated with existing telephone, e-mail and other mixed-media services to provide a single point of interactivity for the user when browsing a web site.
Abstract: A real-time speech-based learning/training system, distributed between client and server and incorporating speech recognition and linguistic processing for recognizing a spoken question and providing an answer to the student in a learning or training environment implemented on an intranet or over the Internet, is disclosed. The system accepts the student's question in the form of speech at his or her computer, PDA or workstation, where minimal processing extracts a sufficient number of acoustic speech vectors representing the utterance. The system as implemented accepts environmental variables, such as course, chapter and section, as selected by the user so that the search time, accuracy and response time for the question can be optimized. A minimal set of acoustic vectors extracted at the client is then sent via a communications channel to the server, where additional acoustic vectors are derived.
Type:
Grant
Filed:
November 12, 1999
Date of Patent:
December 16, 2003
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
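The client-side front end described in the entry above, which extracts a minimal set of acoustic vectors before transmission, can be sketched with a toy feature extractor. Production systems compute cepstral coefficients; this illustrative stand-in computes only a per-frame log-energy coefficient, and the function and parameter names are invented for the example.

```python
import math

def extract_acoustic_vectors(samples, frame_len=160, hop=80):
    """Split a waveform into overlapping frames and compute one log-energy
    coefficient per frame -- a toy stand-in for the acoustic speech
    vectors a client-side front end would extract before sending them
    to the server."""
    vectors = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        energy = sum(s * s for s in frame) / frame_len
        # Small epsilon avoids log(0) on silent frames.
        vectors.append(math.log(energy + 1e-10))
    return vectors
```

Transmitting a few coefficients per frame instead of raw audio is what keeps the client-side processing "minimal" and the communications channel load low.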
Abstract: A real-time system incorporating speech recognition and linguistic processing for recognizing a spoken query by a user, distributed between client and server, is disclosed. The system accepts a user's queries in the form of speech at the client, where minimal processing extracts a sufficient number of acoustic speech vectors representing the utterance. These vectors are sent via a communications channel to the server, where additional acoustic vectors are derived. Using Hidden Markov Models (HMMs), and appropriate grammars and dictionaries conditioned by the selections made by the user, the speech representing the user's query is fully decoded into text (or some other suitable form) at the server. The text corresponding to the user's query is then simultaneously sent to a natural language engine and a database processor, where optimized SQL statements are constructed for a full-text search of a database for a recordset of several stored questions that best match the user's query.
Type:
Grant
Filed:
November 12, 1999
Date of Patent:
October 14, 2003
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
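The full-text SQL search in the entry above, which retrieves a recordset of stored questions matching the decoded query, can be sketched with a keyword-scored query against a question/answer table. This is a simplified stand-in, not the patented SQL construction; the `faq` table and function names are assumptions for the example.

```python
import sqlite3

def best_matching_questions(conn, query_text, limit=3):
    """Rank stored questions by how many query keywords they contain --
    a simplified stand-in for the optimized full-text SQL search."""
    keywords = [w.lower() for w in query_text.split() if len(w) > 2]
    if not keywords:
        return []
    # One (instr(...) > 0) term per keyword; their sum is the hit count.
    score = " + ".join("(instr(lower(question), ?) > 0)" for _ in keywords)
    sql = (f"SELECT question, answer, ({score}) AS hits "
           f"FROM faq ORDER BY hits DESC LIMIT ?")
    rows = conn.execute(sql, [*keywords, limit]).fetchall()
    return [row for row in rows if row[2] > 0]
```

Returning a small recordset of candidates, rather than a single row, leaves room for a downstream natural language engine to pick the best pairing.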
Abstract: An intelligent query system for processing voice-based queries is disclosed. This distributed client-server system, typically implemented on an intranet or over the Internet, accepts a user's queries at his/her computer, PDA or workstation using a speech input interface. After converting the user's query from speech to text, a 2-step algorithm employing a natural language engine, a database processor and a full-text SQL database is implemented to find the single answer that best matches the user's query. The system, as implemented, accepts environmental variables selected by the user and is scalable to provide answers to a variety and quantity of user-initiated queries.
Type:
Grant
Filed:
November 12, 1999
Date of Patent:
September 2, 2003
Assignee:
Phoenix Solutions, Inc.
Inventors:
Ian M. Bennett, Bandi Ramesh Babu, Kishor Morkhandikar, Pallaki Gururaj
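The 2-step algorithm described in the entry above can be sketched in pure Python: a coarse keyword-recall pass standing in for the full-text SQL search, followed by a finer similarity score standing in for the natural language engine that selects the single best question/answer pair. The scoring choice (Jaccard overlap) and all names are assumptions for the example, not the patented method.

```python
def two_step_answer(faq, query):
    """Two-step matching sketch: (1) recall candidates sharing at least
    one token with the query, (2) rank them by Jaccard token overlap
    and return the answer paired with the best-matching question."""
    q_tokens = set(query.lower().split())
    # Step 1: coarse recall (stand-in for the full-text SQL search).
    candidates = [(question, answer) for question, answer in faq
                  if q_tokens & set(question.lower().split())]
    if not candidates:
        return None
    # Step 2: fine ranking (stand-in for the natural language engine).
    def score(pair):
        t = set(pair[0].lower().split())
        return len(q_tokens & t) / len(q_tokens | t)
    question, answer = max(candidates, key=score)
    return answer
```

Splitting recall from ranking keeps the expensive similarity computation confined to a small candidate set, which is what makes such a design scalable to a large store of question/answer pairs.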