Patents by Inventor Chris Weider
Chris Weider has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11222626
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: May 20, 2019
Date of Patent: January 11, 2022
Assignee: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
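The abstract above describes ranking intent hypotheses by certainty and wording the response accordingly. As a purely illustrative sketch of that idea (the names, thresholds, and structure are hypothetical, not taken from the patent):

```python
# Hypothetical sketch of ranked-hypothesis response selection; all
# identifiers and thresholds are illustrative, not the patented method.
from dataclasses import dataclass

@dataclass
class Hypothesis:
    intent: str        # one candidate interpretation of the utterance
    certainty: float   # confidence score in [0.0, 1.0]

def rank(hypotheses):
    """Order candidate intents from most to least certain."""
    return sorted(hypotheses, key=lambda h: h.certainty, reverse=True)

def respond(hypotheses):
    """Word the response according to the degree of certainty."""
    best = rank(hypotheses)[0]
    if best.certainty > 0.9:
        return f"Playing {best.intent}."        # act directly
    if best.certainty > 0.5:
        return f"Did you mean {best.intent}?"   # confirm before acting
    return "Sorry, could you rephrase that?"    # tolerate misrecognition

print(respond([Hypothesis("jazz radio", 0.95),
               Hypothesis("jazz playlist", 0.40)]))
```

The low-certainty branch mirrors the abstract's point that misrecognitions can be tolerated and the conversation corrected on a subsequent utterance.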
-
Publication number: 20200388274
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: August 24, 2020
Publication date: December 10, 2020
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Patent number: 10755699
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: May 20, 2019
Date of Patent: August 25, 2020
Assignee: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Patent number: 10553216
Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language utterances in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language utterances, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
Type: Grant
Filed: September 26, 2018
Date of Patent: February 4, 2020
Assignee: Oracle International Corporation
Inventors: Robert A. Kennewick, Chris Weider
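The abstract above describes devices that know one another's intent-determination capabilities so a request can be routed to the device best suited to act. A minimal sketch of that routing idea, with entirely hypothetical device names and capability sets:

```python
# Illustrative capability-based routing (not the patented method):
# each device advertises the intent domains it can handle.
devices = {
    "car_head_unit": {"navigation", "music"},
    "phone": {"music", "calls", "messages"},
    "smart_tv": {"video"},
}

def route(intent_domain):
    """Return the devices capable of acting on the determined intent."""
    return [name for name, caps in devices.items() if intent_domain in caps]

print(route("music"))  # both the head unit and the phone qualify
```

In the abstract's terms, the capability table could live on a central coordinator or be shared peer-to-peer; the lookup itself is the same either way.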
-
Patent number: 10553213
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: April 19, 2018
Date of Patent: February 4, 2020
Assignee: Oracle International Corporation
Inventors: Larry Baldwin, Chris Weider
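The abstract above combines context from a non-voice interaction (a tap, for instance) with a spoken utterance to determine intent. A hedged sketch of that combination, using invented field names rather than anything defined in the patent:

```python
# Hypothetical illustration of merging non-voice context with an
# utterance; field names ("selected", "action", "target") are invented.
def determine_intent(non_voice_context, utterance):
    """Resolve what the user said against what the user touched."""
    # e.g. the user taps a song in a music app, then says "play this":
    # the deictic "this" resolves to the tapped item.
    if "this" in utterance and non_voice_context.get("selected"):
        return {"action": "play", "target": non_voice_context["selected"]}
    return {"action": "unknown"}

print(determine_intent({"selected": "Track 7"}, "play this"))
```

Neither modality alone carries the full intent here; only the combined context makes "play this" actionable, which is the point the abstract makes.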
-
Patent number: 10515628
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: May 20, 2019
Date of Patent: December 24, 2019
Assignee: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20190385596
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: August 29, 2019
Publication date: December 19, 2019
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Patent number: 10510341
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: August 29, 2019
Date of Patent: December 17, 2019
Assignee: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20190272822
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: May 20, 2019
Publication date: September 5, 2019
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20190272823
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: May 20, 2019
Publication date: September 5, 2019
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20190272821
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Application
Filed: May 20, 2019
Publication date: September 5, 2019
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Patent number: 10297249
Abstract: A cooperative conversational voice user interface is provided. The cooperative conversational voice user interface may build upon short-term and long-term shared knowledge to generate one or more explicit and/or implicit hypotheses about an intent of a user utterance. The hypotheses may be ranked based on varying degrees of certainty, and an adaptive response may be generated for the user. Responses may be worded based on the degrees of certainty and to frame an appropriate domain for a subsequent utterance. In one implementation, misrecognitions may be tolerated, and conversational course may be corrected based on subsequent utterances and/or responses.
Type: Grant
Filed: April 20, 2015
Date of Patent: May 21, 2019
Assignee: VB Assets, LLC
Inventors: Larry Baldwin, Tom Freeman, Michael Tjalve, Blane Ebersold, Chris Weider
-
Publication number: 20190027146
Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language utterances in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language utterances, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
Type: Application
Filed: September 26, 2018
Publication date: January 24, 2019
Applicant: VB Assets, LLC
Inventors: Robert A. Kennewick, Chris Weider
-
Publication number: 20180308479
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Application
Filed: April 19, 2018
Publication date: October 25, 2018
Applicant: VB Assets, LLC
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 10089984
Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language inputs in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language inputs, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
Type: Grant
Filed: June 26, 2017
Date of Patent: October 2, 2018
Assignee: VB Assets, LLC
Inventors: Robert A. Kennewick, Chris Weider
-
Patent number: 9953649
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Grant
Filed: February 13, 2017
Date of Patent: April 24, 2018
Assignee: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
-
Publication number: 20170294189
Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language inputs in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language inputs, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
Type: Application
Filed: June 26, 2017
Publication date: October 12, 2017
Applicant: VoiceBox Technologies Corporation
Inventors: Robert A. Kennewick, Chris Weider
-
Publication number: 20170221482
Abstract: A system and method for processing multi-modal device interactions in a natural language voice services environment may be provided. In particular, one or more multi-modal device interactions may be received in a natural language voice services environment that includes one or more electronic devices. The multi-modal device interactions may include a non-voice interaction with at least one of the electronic devices or an application associated therewith, and may further include a natural language utterance relating to the non-voice interaction. Context relating to the non-voice interaction and the natural language utterance may be extracted and combined to determine an intent of the multi-modal device interaction, and a request may then be routed to one or more of the electronic devices based on the determined intent of the multi-modal device interaction.
Type: Application
Filed: February 13, 2017
Publication date: August 3, 2017
Applicant: VoiceBox Technologies Corporation
Inventors: Larry Baldwin, Chris Weider
-
Patent number: 9711143
Abstract: A system and method for an integrated, multi-modal, multi-device natural language voice services environment may be provided. In particular, the environment may include a plurality of voice-enabled devices each having intent determination capabilities for processing multi-modal natural language inputs in addition to knowledge of the intent determination capabilities of other devices in the environment. Further, the environment may be arranged in a centralized manner, a distributed peer-to-peer manner, or various combinations thereof. As such, the various devices may cooperate to determine intent of multi-modal natural language inputs, and commands, queries, or other requests may be routed to one or more of the devices best suited to take action in response thereto.
Type: Grant
Filed: April 4, 2016
Date of Patent: July 18, 2017
Assignee: VoiceBox Technologies Corporation
Inventors: Robert A. Kennewick, Chris Weider
-
Patent number: 9626959
Abstract: A system and method are provided for receiving speech and/or non-speech communications of natural language questions and/or commands and executing the questions and/or commands. The invention provides a conversational human-machine interface that includes a conversational speech analyzer, a general cognitive model, an environmental model, and a personalized cognitive model to determine context, domain knowledge, and invoke prior information to interpret a spoken utterance or a received non-spoken message. The system and method creates, stores, and uses extensive personal profile information for each user, thereby improving the reliability of determining the context of the speech or non-speech communication and presenting the expected results for a particular question or command.
Type: Grant
Filed: December 30, 2013
Date of Patent: April 18, 2017
Assignee: Nuance Communications, Inc.
Inventors: Philippe Di Cristo, Chris Weider, Robert A. Kennewick
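The abstract above layers a personalized cognitive model (per-user profile data) over general and environmental models to interpret a command. As a rough, purely hypothetical illustration of that layering (all model contents and field names are invented, not from the patent):

```python
# Invented example of layering a personal profile over general and
# environmental models to interpret a command; not the patented system.
general_model = {"call mom": "place_call"}   # shared command knowledge
environment = {"location": "car"}            # environmental model
profile = {"mom": "+1-555-0100"}             # personalized profile data

def interpret(utterance):
    """Resolve an utterance using all three model layers."""
    action = general_model.get(utterance)            # what to do
    number = profile.get(utterance.split()[-1])      # who, for this user
    return {"action": action, "number": number,
            "context": environment["location"]}

print(interpret("call mom"))
```

The point of the layering is that "mom" is only resolvable through the per-user profile; the general model alone could classify the command but not complete it.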