Patents by Inventor Larry Baldwin
Larry Baldwin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9105266
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: May 15, 2014
Date of Patent: August 11, 2015
Assignee: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
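The flow this abstract describes (extract context from a non-voice device interaction, combine it with a related utterance to determine intent, then route a request to a device) can be sketched as follows. All names, the keyword matching, and the routing table are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the abstract's intent-determination flow; the
# Interaction fields and keyword heuristics are invented for illustration.
from dataclasses import dataclass

@dataclass
class Interaction:
    device: str   # device that received the non-voice interaction
    action: str   # e.g. "tap", "knob_turn"
    target: str   # on-screen item or control involved

def determine_intent(interaction: Interaction, utterance: str) -> dict:
    """Combine non-voice context with the utterance into a single intent."""
    intent = {"device": interaction.device, "object": interaction.target}
    if "play" in utterance.lower():
        intent["action"] = "play"
    elif "call" in utterance.lower():
        intent["action"] = "call"
    else:
        intent["action"] = interaction.action  # fall back to touch context
    return intent

def route_request(intent: dict, devices: dict) -> str:
    """Route the resolved intent to the device registered to handle it."""
    handler = devices.get(intent["device"], "default_device")
    return f"{handler} <- {intent['action']}({intent['object']})"
```

The key point of the abstract is that neither the tap nor the utterance alone identifies the intent; only their combined context does.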
-
Patent number: 9015049
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: August 19, 2013
Date of Patent: April 21, 2015
Assignee: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
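The ranking-and-response behavior in this abstract can be sketched as below; the certainty threshold and response phrasing are assumptions for illustration, not the claimed method:

```python
# Illustrative sketch: rank intent hypotheses by certainty and word the
# response adaptively, so a low-certainty guess invites correction.
def rank_hypotheses(hypotheses):
    """Sort (intent, certainty) pairs from most to least certain."""
    return sorted(hypotheses, key=lambda h: h[1], reverse=True)

def adaptive_response(hypotheses, threshold=0.8):
    """Word the response according to the degree of certainty."""
    intent, certainty = rank_hypotheses(hypotheses)[0]
    if certainty >= threshold:
        return f"OK, {intent}."
    # A tentative phrasing frames the domain for the next utterance,
    # so a misrecognition can be corrected conversationally.
    return f"Did you want to {intent}?"
```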
-
Publication number: 20150095159
Abstract: In certain implementations, a system-initiated dialog with a user may be provided based on prior user interactions. In an implementation, context information determined based on one or more prior interactions of the user with the system may be obtained. A dialog-initiation opportunity may be detected based on the context information. A natural language dialog with the user may be initiated based on the dialog-initiation opportunity. In an implementation, the one or more prior interactions of the user may comprise one or more prior conversations between the user and the system. At least one of the one or more prior conversations may, for example, comprise a natural language utterance of the user and a natural language response of the system to the natural language utterance.
Type: Application
Filed: December 8, 2014
Publication date: April 2, 2015
Applicant: VoiceBox Technologies Corporation
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
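A minimal sketch of the detect-then-initiate flow in this abstract, assuming a recurring-topic heuristic as the dialog-initiation opportunity (the heuristic and threshold are invented for illustration):

```python
# Hypothetical sketch: detect a dialog-initiation opportunity from context
# accumulated over prior conversations, then open a system-initiated dialog.
from collections import Counter

def detect_opportunity(prior_conversations, min_mentions=2):
    """Return a topic worth raising proactively, or None."""
    topics = Counter(
        topic
        for conversation in prior_conversations
        for topic in conversation["topics"]
    )
    topic, count = topics.most_common(1)[0] if topics else (None, 0)
    return topic if count >= min_mentions else None

def initiate_dialog(topic):
    """Open a natural language dialog about the detected topic."""
    return f"Earlier you asked about {topic} - would you like an update?"
```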
-
Patent number: 8983839
Abstract: The system and method described herein may dynamically generate a recognition grammar associated with a conversational voice user interface in an integrated voice navigation services environment. In particular, in response to receiving a natural language utterance that relates to a navigation context at the voice user interface, a conversational language processor may generate a dynamic recognition grammar that organizes grammar information based on one or more topological domains. For example, the one or more topological domains may be determined based on a current location associated with a navigation device, whereby a speech recognition engine may use the grammar information organized in the dynamic recognition grammar according to the one or more topological domains to generate one or more interpretations associated with the natural language utterance.
Type: Grant
Filed: November 30, 2012
Date of Patent: March 17, 2015
Assignee: VoiceBox Technologies Corporation
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
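The core idea here (restrict the recognition grammar to a topological domain around the device's current location) can be sketched as follows. The distance formula and radius are illustrative assumptions:

```python
# Illustrative sketch: build a dynamic recognition grammar containing only
# place names within a topological domain around the current location.
import math

def _distance_km(a, b):
    """Rough equirectangular distance between two (lat, lon) points."""
    lat1, lon1 = a
    lat2, lon2 = b
    x = math.radians(lon2 - lon1) * math.cos(math.radians((lat1 + lat2) / 2))
    y = math.radians(lat2 - lat1)
    return math.hypot(x, y) * 6371.0  # mean Earth radius in km

def dynamic_grammar(current_location, places, radius_km=25.0):
    """Keep only grammar entries within the local topological domain."""
    return {
        name
        for name, coords in places.items()
        if _distance_km(current_location, coords) <= radius_km
    }
```

Shrinking the active grammar this way is a standard trick for navigation recognizers: the fewer candidate place names, the fewer confusable interpretations of an utterance.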
-
Publication number: 20150073910
Abstract: Advertisements may be provided based on navigation-related preferences. In certain implementations, a current location associated with a user may be obtained. One or more navigation-related preferences associated with the user may be obtained. An advertisement may be determined based on the current location and the navigation-related preferences. The advertisement may be provided for presentation to the user. In some implementations, a directional proximity of the user to a location associated with the advertisement may be determined. The advertisement may be determined (or selected for the user) based on the directional proximity and the navigation-related preferences.
Type: Application
Filed: November 17, 2014
Publication date: March 12, 2015
Applicant: VoiceBox Technologies Corporation
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
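The selection step described here (combine directional proximity with navigation-related preferences) can be sketched as below; the grid-based heading test and ad schema are invented for illustration, not the claimed method:

```python
# Hypothetical sketch: select an advertisement the user is heading toward
# and that matches one of their navigation-related preferences.
def directional_proximity(user_pos, heading, ad_pos):
    """True if the ad's location lies roughly ahead of the user."""
    dx, dy = ad_pos[0] - user_pos[0], ad_pos[1] - user_pos[1]
    if heading == "north":
        return dy > 0
    if heading == "south":
        return dy < 0
    if heading == "east":
        return dx > 0
    return dx < 0  # west

def select_ad(user_pos, heading, preferences, ads):
    """Pick the first ad matching both a preference and the direction of travel."""
    for ad in ads:
        if ad["category"] in preferences and directional_proximity(
            user_pos, heading, ad["position"]
        ):
            return ad["text"]
    return None
```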
-
Publication number: 20140288934
Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
Type: Application
Filed: May 5, 2014
Publication date: September 25, 2014
Applicant: VoiceBox Technologies Corporation
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
-
Publication number: 20140249822
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Application
Filed: May 15, 2014
Publication date: September 4, 2014
Applicant: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
-
Publication number: 20140156278
Abstract: The system and method described herein may dynamically generate a recognition grammar associated with a conversational voice user interface in an integrated voice navigation services environment. In particular, in response to receiving a natural language utterance that relates to a navigation context at the voice user interface, a conversational language processor may generate a dynamic recognition grammar that organizes grammar information based on one or more topological domains. For example, the one or more topological domains may be determined based on a current location associated with a navigation device, whereby a speech recognition engine may use the grammar information organized in the dynamic recognition grammar according to the one or more topological domains to generate one or more interpretations associated with the natural language utterance.
Type: Application
Filed: November 30, 2012
Publication date: June 5, 2014
Applicant: VoiceBox Technologies, Inc.
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
-
Patent number: 8738380
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: December 3, 2012
Date of Patent: May 27, 2014
Assignee: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 8719009
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: September 14, 2012
Date of Patent: May 6, 2014
Assignee: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 8719026
Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
Type: Grant
Filed: February 4, 2013
Date of Patent: May 6, 2014
Assignee: VoiceBox Technologies Corporation
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
-
Publication number: 20130339022
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: August 19, 2013
Publication date: December 19, 2013
Applicant: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20130304473
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Application
Filed: December 3, 2012
Publication date: November 14, 2013
Applicant: VoiceBox Technologies, Inc.
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 8515765
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: October 3, 2011
Date of Patent: August 20, 2013
Assignee: VoiceBox Technologies, Inc.
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Patent number: 8452598
Abstract: The system and method described herein may provide advertisements in an integrated voice navigation services environment. In particular, one or more advertisements may be identified based on affinities among a current location associated with a navigation device and shared knowledge and information used to interpret natural language utterances that relate to a navigation context, wherein the one or more advertisements may then be presented via a multi-modal output. As such, the shared knowledge and the information relating to the navigation context may provide the system and method with dynamic awareness relating to context, available information sources, domain knowledge, and user behavior and preferences, among other things, which may be used to deliver targeted and contextually relevant advertisements in the integrated navigation services environment.
Type: Grant
Filed: December 30, 2011
Date of Patent: May 28, 2013
Assignee: VoiceBox Technologies, Inc.
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
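One way to read "affinities" between the navigation context and candidate advertisements is as a term-overlap score; the Jaccard measure below is an assumption for illustration, not the patented scoring:

```python
# Illustrative sketch: score ads by affinity between their keywords and
# the shared knowledge (terms from recent navigation-related utterances).
def affinity(ad_keywords, context_terms):
    """Jaccard overlap between ad keywords and the shared context."""
    a, c = set(ad_keywords), set(context_terms)
    return len(a & c) / len(a | c) if a | c else 0.0

def pick_ad(ads, context_terms):
    """Return the ad with the highest affinity to the current context."""
    return max(ads, key=lambda ad: affinity(ad["keywords"], context_terms))
```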
-
Publication number: 20130054228
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Application
Filed: September 14, 2012
Publication date: February 28, 2013
Applicant: VoiceBox Technologies, Inc.
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 8370147
Abstract: A conversational, natural language voice user interface may provide an integrated voice navigation services environment. The voice user interface may enable a user to make natural language requests relating to various navigation services, and further, may interact with the user in a cooperative, conversational dialogue to resolve the requests. Through dynamic awareness of context, available sources of information, domain knowledge, user behavior and preferences, and external systems and devices, among other things, the voice user interface may provide an integrated environment in which the user can speak conversationally, using natural language, to issue queries, commands, or other requests relating to the navigation services provided in the environment.
Type: Grant
Filed: December 30, 2011
Date of Patent: February 5, 2013
Assignee: VoiceBox Technologies, Inc.
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
-
Patent number: 8326637
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: February 20, 2009
Date of Patent: December 4, 2012
Assignee: VoiceBox Technologies, Inc.
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 8326627
Abstract: The system and method described herein may dynamically generate a recognition grammar associated with a conversational voice user interface in an integrated voice navigation services environment. In particular, in response to receiving a natural language utterance that relates to a navigation context at the voice user interface, a conversational language processor may generate a dynamic recognition grammar that organizes grammar information based on one or more topological domains. For example, the one or more topological domains may be determined based on a current location associated with a navigation device, whereby a speech recognition engine may use the grammar information organized in the dynamic recognition grammar according to the one or more topological domains to generate one or more interpretations associated with the natural language utterance.
Type: Grant
Filed: December 30, 2011
Date of Patent: December 4, 2012
Assignee: VoiceBox Technologies, Inc.
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker
-
Publication number: 20120109753
Abstract: The system and method described herein may provide advertisements in an integrated voice navigation services environment. In particular, one or more advertisements may be identified based on affinities among a current location associated with a navigation device and shared knowledge and information used to interpret natural language utterances that relate to a navigation context, wherein the one or more advertisements may then be presented via a multi-modal output. As such, the shared knowledge and the information relating to the navigation context may provide the system and method with dynamic awareness relating to context, available information sources, domain knowledge, and user behavior and preferences, among other things, which may be used to deliver targeted and contextually relevant advertisements in the integrated navigation services environment.
Type: Application
Filed: December 30, 2011
Publication date: May 3, 2012
Applicant: VoiceBox Technologies, Inc.
Inventors: Michael R. Kennewick, Catherine Cheung, Larry Baldwin, Ari Salomon, Michael Tjalve, Sheetal Guttigoli, Lynn Armstrong, Philippe Di Cristo, Bernie Zimmerman, Sam Menaker