Patents Assigned to Openstream Inc.
-
Publication number: 20250005277
Abstract: A system and method implementing a programmer-interpreter approach to large language model post-editing are described.
Type: Application
Filed: June 27, 2024
Publication date: January 2, 2025
Applicant: Openstream Inc.
Inventors: Zhuang Li, Rajasekhar Tumuluri, Gholamreza Haffari
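As a rough illustration of how a programmer-interpreter split can work, the sketch below has a "programmer" emit a small edit program that an "interpreter" then applies to a draft. The edit-op format and every name here are assumptions made for this sketch, not the patented design.

```python
# Minimal sketch of a programmer-interpreter post-editing loop (illustrative
# only; the edit-op format and function names are assumptions, not the patent).
from dataclasses import dataclass

@dataclass
class EditOp:
    action: str     # "replace", "insert", or "delete"
    index: int      # token position the op applies to
    text: str = ""  # replacement or insertion text

def programmer(draft_tokens):
    """Stand-in for an LLM 'programmer' that emits an edit program.
    Here it just corrects a toy error pattern."""
    return [EditOp("replace", i, "the")
            for i, tok in enumerate(draft_tokens) if tok == "teh"]

def interpreter(draft_tokens, ops):
    """Apply the edit program, right-to-left so indices stay valid."""
    tokens = list(draft_tokens)
    for op in sorted(ops, key=lambda o: o.index, reverse=True):
        if op.action == "replace":
            tokens[op.index] = op.text
        elif op.action == "insert":
            tokens.insert(op.index, op.text)
        elif op.action == "delete":
            del tokens[op.index]
    return tokens

draft = "teh model output needs post-editing".split()
print(" ".join(interpreter(draft, programmer(draft))))
```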
-
Patent number: 12118321
Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is attained.
Type: Grant
Filed: December 27, 2023
Date of Patent: October 15, 2024
Assignee: Openstream Inc.
Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
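A minimal sketch of the pipeline this abstract describes, from multimodal inputs through logical forms to a plan mixing communicative and non-communicative actions. Every class and method name is hypothetical, chosen only to mirror the components the abstract names.

```python
# Illustrative pipeline only: names and structures are assumptions for this
# sketch, not the patented implementation.
class MultimodalSemanticParser:
    def parse(self, inputs):
        # Map raw per-modality inputs to toy "logical form" dicts.
        return [{"modality": m, "content": v} for m, v in inputs.items()]

class CollaborativeDialogueManager:
    def infer_goal(self, logical_forms):
        # Toy goal inference: assume the speech content names the goal.
        for lf in logical_forms:
            if lf["modality"] == "speech":
                return lf["content"]
        return None

    def plan(self, goal):
        # A plan mixes communicative and non-communicative actions.
        return [("say", f"Working on: {goal}"),   # communicative
                ("execute", goal),                # non-communicative
                ("say", f"Done: {goal}")]         # communicative

parser = MultimodalSemanticParser()
manager = CollaborativeDialogueManager()
lfs = parser.parse({"speech": "book a meeting room",
                    "gesture": "point-at-calendar"})
goal = manager.infer_goal(lfs)
for action, arg in manager.plan(goal):
    print(action, "->", arg)  # the avatar interface would render these
```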
-
Publication number: 20240242035
Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is attained.
Type: Application
Filed: December 27, 2023
Publication date: July 18, 2024
Applicant: Openstream Inc.
Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
-
Publication number: 20240221758
Abstract: Methods and systems for multimodal collaborative plan-based dialogue are disclosed. The multimodal collaborative plan-based dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal collaborative plan-based dialogue system includes a multimodal semantic parser that generates logical forms based on the multimodal inputs. The multimodal collaborative plan-based dialogue system includes a dialogue manager that infers a goal of the user from the logical forms and develops a plan that includes communicative actions with regard to the goal.
Type: Application
Filed: March 8, 2024
Publication date: July 4, 2024
Applicant: Openstream Inc.
Inventors: Philip R. Cohen, Rajasekhar Tumuluri
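Plan-based dialogue of the kind described here typically chains communicative actions backward from an inferred goal. The toy backward-chainer below illustrates the idea; the operator table and all names are invented for illustration.

```python
# Toy plan construction toward a goal (illustrative only; the operator table
# below is an assumption, not the patented planner).
OPERATORS = {
    # goal: (preconditions, communicative actions that achieve it)
    "room_booked": (["time_known"], ["confirm_booking"]),
    "time_known": ([], ["ask_user_for_time"]),
}

def plan_for(goal, plan=None):
    """Backward-chain: satisfy each precondition first, then act for the goal."""
    plan = plan if plan is not None else []
    preconditions, actions = OPERATORS[goal]
    for pre in preconditions:
        plan_for(pre, plan)
    plan.extend(actions)
    return plan

print(plan_for("room_booked"))  # ['ask_user_for_time', 'confirm_booking']
```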
-
Publication number: 20240194186
Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content; recognizing and sensing one or more multimodal content items in the parsed content; identifying verbal and non-verbal behavior of the physical human from the multimodal content; generating learned patterns from the identified verbal and non-verbal behavior; training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries; and training a virtual human clone of the physical human with the interactive verbal and non-verbal behaviors of the physical human, so that the virtual human clone exhibits appropriate interactive verbal and non-verbal behaviors when providing the responses to end-user multimodal conversations and queries.
Type: Application
Filed: February 12, 2024
Publication date: June 13, 2024
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri
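One way to picture the "learned patterns" the method trains on is as a store pairing verbal intents with the non-verbal behaviors observed for the physical human, which the clone then replays. The sketch below is a guess at the shape of such a store, not the patented training procedure.

```python
# Hypothetical pattern store for a virtual human clone (names and structure
# are assumptions for illustration).
from collections import defaultdict

class BehaviorPatterns:
    def __init__(self):
        self._patterns = defaultdict(list)

    def learn(self, intent, verbal, non_verbal):
        # Record one observed verbal + non-verbal pairing for an intent.
        self._patterns[intent].append((verbal, non_verbal))

    def respond(self, intent):
        # The clone reuses the first observed pairing for the intent.
        if self._patterns[intent]:
            return self._patterns[intent][0]
        return ("I'm not sure.", "neutral-posture")

clone = BehaviorPatterns()
clone.learn("greeting", "Hi, great to see you!", "smile + wave")
print(clone.respond("greeting"))  # clone answers with the matched gesture
```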
-
Publication number: 20240185838
Abstract: Described is a system and method for training a multilingual semantic parser. A method includes receiving, by a multilingual semantic parser, a multilingual training dataset, where the dataset includes pairs of utterances and meaning representations from at least one high-resource language and at least one low-resource language and is initially machine-translated; training the multilingual semantic parser by translating the utterances in the multilingual training dataset to a target language; and iteratively: selecting, by an acquisition-function estimator, a subset of the multilingual training dataset for human translation; updating the multilingual training dataset with the human-translated subset; and retraining the multilingual semantic parser with the updated multilingual training dataset.
Type: Application
Filed: May 16, 2023
Publication date: June 6, 2024
Applicant: Openstream Inc.
Inventors: Zhuang Li, Gholamreza Haffari, Rajasekhar Tumuluri, Philip R. Cohen
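The iterative procedure here is an active-learning loop: score candidates with an acquisition function, send the top subset for human translation, fold the translations back in, and retrain. A schematic version, with stand-ins for the parser and the acquisition function (both are placeholders, not the patented estimator):

```python
# Schematic active-learning loop (the scoring heuristic and function names
# are assumptions; a real acquisition function might use model uncertainty).
import random

def train(parser_state, dataset):
    # Stand-in for actual semantic-parser training.
    return {"examples_seen": len(dataset)}

def acquisition_scores(dataset):
    # Stand-in acquisition function: random scores for illustration.
    return {i: random.random() for i in range(len(dataset))}

# Initially machine-translated (utterance, meaning representation) pairs.
dataset = [{"utterance": f"mt utterance {i}", "mr": f"mr {i}", "human": False}
           for i in range(100)]
parser = train(None, dataset)

for _ in range(3):
    scores = acquisition_scores(dataset)
    # Select the top-k examples to send for human translation.
    top = sorted(scores, key=scores.get, reverse=True)[:10]
    for i in top:
        if not dataset[i]["human"]:
            dataset[i]["utterance"] = f"human utterance {i}"  # replace MT text
            dataset[i]["human"] = True
    parser = train(parser, dataset)  # retrain on the updated dataset
```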
-
Patent number: 11942075
Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content; recognizing and sensing one or more multimodal content items in the parsed content; identifying verbal and non-verbal behavior of the physical human from the multimodal content; generating learned patterns from the identified verbal and non-verbal behavior; training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries; and training a virtual human clone of the physical human with the interactive verbal and non-verbal behaviors of the physical human, so that the virtual human clone exhibits appropriate interactive verbal and non-verbal behaviors when providing the responses to end-user multimodal conversations and queries.
Type: Grant
Filed: September 24, 2021
Date of Patent: March 26, 2024
Assignee: Openstream Inc.
Inventor: Rajasekhar Tumuluri
-
Patent number: 11935543
Abstract: Methods and systems for multimodal conversational dialogue are disclosed. The multimodal conversational dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal conversational dialogue system includes a multimodal semantic parser that performs semantic parsing and multimodal fusion of the multimodal inputs to determine a goal of the user. The multimodal conversational dialogue system includes a dialogue manager that generates a dialogue with the user in real time. The dialogue includes system-generated utterances that are used to conduct a conversation between the user and the multimodal conversational dialogue system.
Type: Grant
Filed: June 8, 2021
Date of Patent: March 19, 2024
Assignee: Openstream Inc.
Inventors: Philip R. Cohen, Rajasekhar Tumuluri
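Multimodal fusion of the kind named here often resolves a deictic reference in speech against a gesture. The toy fusion step below is purely illustrative; the logical-form structure is an assumption, not the patented representation.

```python
# Minimal sketch of multimodal fusion: a pointing gesture resolves the
# referent in a spoken utterance (all structures here are assumptions).
def parse_speech(utterance):
    # Toy logical form with an unresolved deictic referent.
    return {"act": "move", "object": "?that"}

def parse_gesture(gesture):
    return {"referent": gesture["target"]}

def fuse(speech_lf, gesture_lf):
    # Fill the unresolved slot in the speech logical form from the gesture.
    fused = dict(speech_lf)
    if fused.get("object") == "?that":
        fused["object"] = gesture_lf["referent"]
    return fused

goal = fuse(parse_speech("move that over there"),
            parse_gesture({"type": "point", "target": "red-box"}))
print(goal)  # {'act': 'move', 'object': 'red-box'}
```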
-
Publication number: 20230385560
Abstract: Methods and systems for processing a multi-modal conversation are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models representations of the multimodality inputs at different levels of granularity: entity level, turn level, and conversational level. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
Type: Application
Filed: August 11, 2023
Publication date: November 30, 2023
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri
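The entity/turn/conversation granularity can be pictured as attention pooling applied level by level. The sketch below uses a toy norm-based softmax pooling in place of the learned multilevel encoder-decoder the abstract actually describes; it shows only the hierarchical composition, not the patented network.

```python
# Toy multilevel pooling: entity vectors -> turn vectors -> conversation
# vector. The norm-based softmax weighting is an assumption for illustration.
import math

def attend(vectors):
    """Weight vectors by a softmax over their norms; return the weighted sum."""
    norms = [math.sqrt(sum(x * x for x in v)) for v in vectors]
    exp = [math.exp(n) for n in norms]
    total = sum(exp)
    weights = [e / total for e in exp]
    dim = len(vectors[0])
    return [sum(w * v[i] for w, v in zip(weights, vectors)) for i in range(dim)]

turn1 = attend([[1.0, 0.0], [0.5, 0.5]])  # pool entities within turn 1
turn2 = attend([[0.0, 1.0]])              # pool entities within turn 2
conversation = attend([turn1, turn2])     # pool turns into one representation
print(conversation)
```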
-
Patent number: 11769018
Abstract: Methods and systems for attention behavioral analysis for a conversational question and answer system are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models representations of the multimodality inputs at different levels of granularity: entity level, turn level, and conversational level. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
Type: Grant
Filed: November 24, 2020
Date of Patent: September 26, 2023
Assignee: Openstream Inc.
Inventor: Rajasekhar Tumuluri
-
Publication number: 20230099393
Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content; recognizing and sensing one or more multimodal content items in the parsed content; identifying verbal and non-verbal behavior of the physical human from the multimodal content; generating learned patterns from the identified verbal and non-verbal behavior; training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries; and training a virtual human clone of the physical human with the interactive verbal and non-verbal behaviors of the physical human, so that the virtual human clone exhibits appropriate interactive verbal and non-verbal behaviors when providing the responses to end-user multimodal conversations and queries.
Type: Application
Filed: September 24, 2021
Publication date: March 30, 2023
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri
-
Publication number: 20220405484
Abstract: A computer-implemented method and system for enrichment of responses in a multimodal conversation environment are disclosed. A Question Answer (QA) engine, such as a reinforcement document transformer, exploits a document's template structure or layout, adapts information extraction using a domain ontology, stores the enriched contents in a hierarchical form, and learns context and query patterns based on the intent and utterances of one or more queries. The region of enriched content used to prepare a response to a given query is expanded or collapsed by navigating upwards or downwards in the hierarchy. The QA engine returns the most relevant answer, with the proper context, for one or more questions. The responses are provided to the user in one or more modalities.
Type: Application
Filed: June 1, 2022
Publication date: December 22, 2022
Applicant: Openstream Inc.
Inventors: Chaitanya Kanchibhotla, Pruthvi Raj Venkatesh, Rishu Kumar, Radha Krishna Pisipati, Rajasekhar Tumuluri
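The expand-or-collapse navigation over hierarchically stored content can be illustrated with a tree whose answer region widens by walking toward the root. The node structure and names below are assumptions for this sketch, not the patented storage format.

```python
# Toy hierarchy of enriched content; expanding the context region means
# navigating upwards (illustrative structure only).
class Node:
    def __init__(self, text, parent=None):
        self.text = text
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

def expand_context(node, levels=1):
    """Navigate upwards to widen the region used to answer a query."""
    while levels > 0 and node.parent is not None:
        node = node.parent
        levels -= 1
    return node

doc = Node("Policy Document")
section = Node("Leave Policy", parent=doc)
clause = Node("Employees accrue 1.5 leave days per month.", parent=section)

print(clause.text)                  # most specific answer region
print(expand_context(clause).text)  # broader context: "Leave Policy"
```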
-
Publication number: 20220392454
Abstract: Methods and systems for multimodal conversational dialogue are disclosed. The multimodal conversational dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal conversational dialogue system includes a multimodal semantic parser that performs semantic parsing and multimodal fusion of the multimodal inputs to determine a goal of the user. The multimodal conversational dialogue system includes a dialogue manager that generates a dialogue with the user in real time. The dialogue includes system-generated utterances that are used to conduct a conversation between the user and the multimodal conversational dialogue system.
Type: Application
Filed: June 8, 2021
Publication date: December 8, 2022
Applicant: Openstream Inc.
Inventors: Philip R. Cohen, Rajasekhar Tumuluri
-
Patent number: 11461681
Abstract: Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction; populating, with the computing device, soft-queries to access associated data sources and applications; and mining information retrieved by executing at least one populated soft-query. A soft-query is created from the user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes the queries, and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A query is populated and used to generate micro-queries associated with the applications involved.
Type: Grant
Filed: October 14, 2020
Date of Patent: October 4, 2022
Assignee: Openstream Inc.
Inventor: Rajasekhar Tumuluri
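The soft-query flow in the abstract, with parameters gathered from multi-modal interactions and then fanned out as micro-queries per application, might look schematically like this; the modality-to-parameter mapping and application names are invented for illustration.

```python
# Schematic soft-query population and micro-query fan-out (all mappings and
# names below are assumptions for this sketch, not the patented engine).
def populate_soft_query(interactions):
    """Collect query parameters from a sequence of multi-modal interactions."""
    params = {}
    for modality, value in interactions:
        if modality == "speech":
            params["intent"] = value
        elif modality == "touch":
            params["selection"] = value
    return params

def micro_queries(soft_query, applications):
    """Fan the soft-query out into one micro-query per involved application."""
    return [{"app": app, **soft_query} for app in applications]

sq = populate_soft_query([("speech", "show Q3 sales"),
                          ("touch", "region:EMEA")])
for q in micro_queries(sq, ["crm", "warehouse"]):
    print(q)
```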
-
Publication number: 20220164548
Abstract: Methods and systems for attention behavioral analysis for a conversational question and answer system are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models representations of the multimodality inputs at different levels of granularity: entity level, turn level, and conversational level. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
Type: Application
Filed: November 24, 2020
Publication date: May 26, 2022
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri
-
Publication number: 20220114463
Abstract: Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction; populating, with the computing device, soft-queries to access associated data sources and applications; and mining information retrieved by executing at least one populated soft-query. A soft-query is created from the user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes the queries, and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A query is populated and used to generate micro-queries associated with the applications involved.
Type: Application
Filed: October 14, 2020
Publication date: April 14, 2022
Applicant: Openstream Inc.
Inventor: Rajasekhar Tumuluri