Patents Assigned to Openstream Inc.
  • Publication number: 20250005277
    Abstract: A system and method implementing a programmer-interpreter approach to large language model post-editing are described. (A hedged sketch of one reading of this approach follows this entry.)
    Type: Application
    Filed: June 27, 2024
    Publication date: January 2, 2025
    Applicant: Openstream Inc.
    Inventors: Zhuang Li, Rajasekhar Tumuluri, Gholamreza Haffari
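
    The abstract gives only the high-level idea. One hedged reading, sketched below, is that a "programmer" LLM emits a small edit program over a draft output and a deterministic "interpreter" applies it; the edit-op vocabulary and every name here (`EditOp`, `post_edit`, the stubbed proposer) are illustrative assumptions, not the patented design.

    ```python
    from dataclasses import dataclass

    @dataclass
    class EditOp:
        """One instruction in the edit 'program' produced by the programmer LLM."""
        kind: str       # "replace" | "insert" | "delete"
        target: str     # span to act on (replace/delete) or anchor (insert)
        text: str = ""  # replacement or inserted text

    def interpret(draft: str, program: list[EditOp]) -> str:
        """Deterministic interpreter: applies the edit program to the draft."""
        for op in program:
            if op.kind == "replace":
                draft = draft.replace(op.target, op.text, 1)
            elif op.kind == "delete":
                draft = draft.replace(op.target, "", 1)
            elif op.kind == "insert":
                draft = draft.replace(op.target, op.target + op.text, 1)
        return draft

    def post_edit(draft: str, propose_program) -> str:
        """One programmer-interpreter round: the LLM proposes edits, we apply them.
        `propose_program` stands in for a real LLM call and is purely hypothetical."""
        program = propose_program(draft)  # e.g. parsed from the model's structured output
        return interpret(draft, program)

    # Toy usage with a stubbed 'programmer':
    fix_typo = lambda d: [EditOp("replace", "recieve", "receive")]
    print(post_edit("Please recieve the goods.", fix_typo))  # Please receive the goods.
    ```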
  • Patent number: 12118321
    Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is obtained. (A skeleton of this loop appears after this entry.)
    Type: Grant
    Filed: December 27, 2023
    Date of Patent: October 15, 2024
    Assignee: Openstream Inc.
    Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
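
    A minimal skeleton of the perceive-parse-plan-act loop the abstract describes, with every component injected as a placeholder callable; the signatures and the `Action`/`DialogueState` shapes are assumptions, not the patented architecture.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Action:
        name: str
        communicative: bool          # True: say/show something; False: act in the world

    @dataclass
    class DialogueState:
        goal: str | None = None
        achieved: bool = False
        plan: list[Action] = field(default_factory=list)

    def collaborative_dialogue(sensors, parse, infer_goal, plan_for,
                               avatar, execute, goal_obtained):
        """Skeleton of the loop in the abstract. Every callable is a stand-in for
        a component the patent only names (sensors, parser, dialogue manager,
        avatar interface); the signatures are assumptions, not the patented design."""
        state = DialogueState()
        while not state.achieved:
            inputs = sensors()                           # speech, gesture, gaze, ...
            logical_forms = [parse(i) for i in inputs]   # one logical form per input
            state.goal = infer_goal(logical_forms, state)
            state.plan = plan_for(state.goal)            # mixes both action kinds
            for action in state.plan:
                if action.communicative:
                    avatar(action)                       # system-generated communication
                else:
                    execute(action)                      # non-communicative world action
            state.achieved = goal_obtained(state)        # loop until the goal is obtained
    ```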
  • Publication number: 20240242035
    Abstract: Methods and systems for multimodal collaborative conversational dialogue are disclosed. The multimodal collaborative conversational dialogue system includes a multimodal avatar interface and one or more sensors, which obtain one or more multimodal inputs. A multimodal semantic parser generates one or more logical form representations based on the one or more multimodal inputs. A collaborative dialogue manager infers a goal of the user from the one or more logical form representations and develops a plan including communicative actions and non-communicative actions with regard to the goal. The multimodal avatar interface outputs one or more multimodal collaborative plan-based dialogue system-generated communications with respect to execution of at least one communicative action. The collaborative dialogue manager maintains a collaborative dialogue with the user until the goal is obtained.
    Type: Application
    Filed: December 27, 2023
    Publication date: July 18, 2024
    Applicant: Openstream Inc.
    Inventors: Philip R. Cohen, Lucian Galescu, Rajasekhar Tumuluri
  • Publication number: 20240221758
    Abstract: Methods and systems for multimodal collaborative plan-based dialogue. The multimodal collaborative plan-based dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal collaborative plan-based dialogue system includes a multimodal semantic parser that generates logical forms based on the multimodal inputs. The multimodal collaborative plan-based dialogue system includes a dialogue manager that infers a goal of the user from the logical forms and develops a plan that includes communicative actions with regard to the goal. (A toy illustration of goal inference from logical forms follows this entry.)
    Type: Application
    Filed: March 8, 2024
    Publication date: July 4, 2024
    Applicant: Openstream Inc.
    Inventors: Philip R. Cohen, Rajasekhar Tumuluri
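
    This entry narrows the previous one to plans of communicative actions. As a toy, hypothetical illustration of how a dialogue manager might infer a goal from parsed logical forms, a small plan library can be matched against the forms; real plan recognition would be far richer than this.

    ```python
    # Hypothetical plan library: goal -> the communicative actions that achieve it.
    PLAN_LIBRARY = {
        "book_flight": ["ask(destination)", "ask(date)", "confirm(itinerary)"],
        "check_balance": ["ask(account)", "tell(balance)"],
    }

    def infer_goal(logical_forms: list[str]) -> str | None:
        """Toy plan recognition: pick the goal whose plan mentions the parsed forms."""
        best, best_hits = None, 0
        for goal, steps in PLAN_LIBRARY.items():
            hits = sum(any(lf in step for step in steps) for lf in logical_forms)
            if hits > best_hits:
                best, best_hits = goal, hits
        return best

    print(infer_goal(["destination", "date"]))  # -> book_flight
    ```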
  • Publication number: 20240194186
    Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content, recognizing and sensing one or more multimodal content from the parsed content, identifying verbal and non-verbal behavior of the physical human from the one or more multimodal content, generating learned patterns from the identified verbal and non-verbal behavior of the physical human, training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries, and training a virtual human clone of the physical human with interactive verbal and non-verbal behaviors of the physical human, wherein appropriate interactive verbal and non-verbal behaviors are provided by the virtual human clone when providing the responses to the end-user multimodal conversations and queries. (A minimal pairing-based sketch follows this entry.)
    Type: Application
    Filed: February 12, 2024
    Publication date: June 13, 2024
    Applicant: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
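
    One hedged way to read the training step is as pairing each kind of utterance with the non-verbal behavior observed alongside it, then replaying those pairings in the clone's responses. The pairing scheme and all names below are illustrative assumptions, not the patented training method.

    ```python
    from collections import defaultdict

    def learn_patterns(transcript):
        """Pair each utterance label with the non-verbal behavior observed
        alongside it. `transcript` is a list of (utterance_label, gesture)
        pairs assumed to come from upstream recognizers."""
        patterns = defaultdict(list)
        for utterance_label, gesture in transcript:
            patterns[utterance_label].append(gesture)
        return patterns

    class VirtualClone:
        """Replays the human's learned verbal + non-verbal pairing when responding."""
        def __init__(self, patterns, respond):
            self.patterns = patterns
            self.respond = respond       # stand-in for the trained dialog manager

        def reply(self, query):
            label, text = self.respond(query)
            gestures = self.patterns.get(label, ["neutral"])
            return text, gestures[0]     # verbal response + accompanying behavior

    # Toy usage:
    patterns = learn_patterns([("greeting", "wave"), ("greeting", "smile"),
                               ("refusal", "head_shake")])
    clone = VirtualClone(patterns, respond=lambda q: ("greeting", "Hello there!"))
    print(clone.reply("hi"))             # -> ('Hello there!', 'wave')
    ```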
  • Publication number: 20240185838
    Abstract: Described is a system and method for training a multilingual semantic parser. A method includes receiving, by a multilingual semantic parser, a multilingual training dataset, wherein the multilingual training dataset includes pairs of utterances and meaning representations from at least one high-resource language and at least one low-resource language, and wherein the multilingual training dataset is initially a machine-translated dataset; training the multilingual semantic parser by translating the utterances in the multilingual training dataset to a target language; and iteratively performing the following: selecting, by an acquisition function estimator, a subset of the multilingual training dataset for human translation; updating the multilingual training dataset with the human-translated subset; and retraining the multilingual semantic parser with the updated multilingual training dataset. (A skeletal active-learning round follows this entry.)
    Type: Application
    Filed: May 16, 2023
    Publication date: June 6, 2024
    Applicant: Openstream Inc.
    Inventors: Zhuang Li, Gholamreza Haffari, Rajasekhar Tumuluri, Philip R. Cohen
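
    The iterative portion of the abstract maps naturally onto a standard active-learning round. The sketch below assumes a pool of machine-translated examples, an injected `acquisition_score` (e.g., model uncertainty), and a `fit`-style retraining call; none of these specifics come from the patent.

    ```python
    def active_learning_round(parser, dataset, budget, acquisition_score, human_translate):
        """One iteration of the loop in the abstract: score the machine-translated
        pool, send the top-`budget` examples for human translation, fold them
        back in, and retrain. All signatures here are assumptions for illustration."""
        pool = [ex for ex in dataset if ex["source"] == "machine"]
        pool.sort(key=lambda ex: acquisition_score(parser, ex), reverse=True)
        for ex in pool[:budget]:
            ex["utterance"] = human_translate(ex["utterance"])  # replace MT with human output
            ex["source"] = "human"
        parser.fit(dataset)                                     # retrain on the updated mix
        return parser, dataset
    ```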
  • Patent number: 11942075
    Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content, recognizing and sensing one or more multimodal content from the parsed content, identifying verbal and non-verbal behavior of the physical human from the one or more multimodal content, generating learned patterns from the identified verbal and non-verbal behavior of the physical human, training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries, and training a virtual human clone of the physical human with interactive verbal and non-verbal behaviors of the physical human, wherein appropriate interactive verbal and non-verbal behaviors are provided by the virtual human clone when providing the responses to the end-user multimodal conversations and queries.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: March 26, 2024
    Assignee: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
  • Patent number: 11935543
    Abstract: Methods and systems for multimodal conversational dialogue. The multimodal conversational dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal conversational dialogue system includes a multimodal semantic parser that performs semantic parsing and multimodal fusion of the multimodal inputs to determine a goal of the user. The multimodal conversational dialogue system includes a dialogue manager that generates a dialogue with the user in real time. The dialogue includes system-generated utterances that are used to conduct a conversation between the user and the multimodal conversational dialogue system. (A toy fusion example follows this entry.)
    Type: Grant
    Filed: June 8, 2021
    Date of Patent: March 19, 2024
    Assignee: Openstream Inc.
    Inventors: Philip R. Cohen, Rajasekhar Tumuluri
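
    A classic example of the semantic parsing plus multimodal fusion step is grounding deictic words against pointing gestures ("put that there"). The toy fusion below, including the slot names and gesture format, is invented for illustration and is not the patented fusion method.

    ```python
    def fuse(speech_form: str, gestures: list[dict]) -> str:
        """Toy multimodal fusion: bind deictic slots in the spoken logical form
        to the objects/locations the user pointed at, in gesture order."""
        for gesture in gestures:
            if "THAT" in speech_form:
                speech_form = speech_form.replace("THAT", gesture["referent"], 1)
            elif "THERE" in speech_form:
                speech_form = speech_form.replace("THERE", gesture["referent"], 1)
        return speech_form

    # "Put that there" + two pointing gestures -> a grounded goal for the dialogue manager.
    print(fuse("move(THAT, THERE)", [{"referent": "red_block"}, {"referent": "shelf_2"}]))
    # -> move(red_block, shelf_2)
    ```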
  • Publication number: 20230385560
    Abstract: Methods and systems for processing a multi-modal conversation are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models the representations of the multimodality inputs at different levels of granularity, including the entity, turn, and conversational levels. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation. (A hierarchical-encoder sketch follows this entry.)
    Type: Application
    Filed: August 11, 2023
    Publication date: November 30, 2023
    Applicant: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
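
    The multilevel encoder half of the described architecture can be sketched as a hierarchical encoder over token, turn, and conversation granularities, with attention over turns standing in for the "unified focalized attention"; the decoder is omitted. This PyTorch sketch is an assumption-laden reading, not the patented network.

    ```python
    import torch
    import torch.nn as nn

    class HierarchicalDialogueEncoder(nn.Module):
        """Models a conversation at three granularities, per the abstract:
        token/entity level -> turn level -> conversation level."""
        def __init__(self, vocab_size: int, d: int = 128):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d)        # entity/token level
            self.turn_enc = nn.GRU(d, d, batch_first=True)  # within-turn encoder
            self.conv_enc = nn.GRU(d, d, batch_first=True)  # across-turn encoder
            self.attn = nn.MultiheadAttention(d, num_heads=4, batch_first=True)

        def forward(self, turns: torch.Tensor):
            # turns: (batch, n_turns, n_tokens) integer token ids
            b, n_turns, n_tok = turns.shape
            x = self.embed(turns.reshape(b * n_turns, n_tok))
            _, h = self.turn_enc(x)                          # h: (1, b*n_turns, d)
            turn_vecs = h.squeeze(0).reshape(b, n_turns, -1) # one vector per turn
            ctx, _ = self.conv_enc(turn_vecs)                # conversation context
            fused, weights = self.attn(ctx, ctx, ctx)        # attention over turns
            return fused, weights

    enc = HierarchicalDialogueEncoder(vocab_size=1000)
    fused, w = enc(torch.randint(0, 1000, (2, 5, 12)))  # 2 dialogues, 5 turns, 12 tokens
    print(fused.shape)                                  # torch.Size([2, 5, 128])
    ```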
  • Patent number: 11769018
    Abstract: Methods and systems for attention behavioral analysis for a conversational question and answer system are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models the representations of the multimodality inputs at different levels of granularity, including the entity, turn, and conversational levels. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
    Type: Grant
    Filed: November 24, 2020
    Date of Patent: September 26, 2023
    Assignee: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
  • Publication number: 20230099393
    Abstract: Methods and systems for a multimodal conversational system are described. A method for interactive multimodal conversation includes parsing multimodal conversation from a physical human for content, recognizing and sensing one or more multimodal content from the parsed content, identifying verbal and non-verbal behavior of the physical human from the one or more multimodal content, generating learned patterns from the identified verbal and non-verbal behavior of the physical human, training a multimodal dialog manager with the learned patterns and using them to provide responses to end-user multimodal conversations and queries, and training a virtual human clone of the physical human with interactive verbal and non-verbal behaviors of the physical human, wherein appropriate interactive verbal and non-verbal behaviors are provided by the virtual human clone when providing the responses to the end-user multimodal conversations and queries.
    Type: Application
    Filed: September 24, 2021
    Publication date: March 30, 2023
    Applicant: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
  • Publication number: 20220405484
    Abstract: A computer-implemented method and system for enrichment of responses in a multimodal conversation environment are disclosed. A Question Answering (QA) engine, such as a reinforcement document transformer, exploits a document template structure or layout, adapts the information extraction using a domain ontology, stores the enriched contents in a hierarchical form, and learns context and query patterns based on the intent and utterances of one or more queries. The region of enriched content used for preparing a response to a given query is expanded or collapsed by navigating upwards or downwards in the hierarchy. The QA engine returns the most relevant answer with the proper context for one or more questions. The responses are provided to the user in one or more modalities. (A minimal hierarchy-navigation sketch follows this entry.)
    Type: Application
    Filed: June 1, 2022
    Publication date: December 22, 2022
    Applicant: Openstream Inc.
    Inventors: Chaitanya Kanchibhotla, Pruthvi Raj Venkatesh, Rishu Kumar, Radha Krishna Pisipati, Rajasekhar Tumuluri
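
    The expand-or-collapse behavior reads as navigation over a content hierarchy: move to the parent for more context, to a child for less. A minimal sketch under that assumption; the relevance test that drives the navigation, and all names here, are invented for illustration.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Node:
        """Enriched content stored hierarchically (document -> section -> clause)."""
        text: str
        children: list["Node"] = field(default_factory=list)
        parent: "Node | None" = None

    def add_child(parent: Node, text: str) -> Node:
        child = Node(text, parent=parent)
        parent.children.append(child)
        return child

    def answer_region(hit: Node, need_more_context: bool, need_less: bool) -> str:
        """Expand (navigate up) or collapse (narrow to a child) the region used
        to build a response, as the abstract describes."""
        if need_more_context and hit.parent is not None:
            hit = hit.parent                      # expand upwards in the hierarchy
        elif need_less and hit.children:
            hit = hit.children[0]                 # collapse downwards
        return hit.text

    doc = Node("Policy document")
    sec = add_child(doc, "Section 3: refunds within 30 days, minus fees.")
    clause = add_child(sec, "Fees: 5% restocking.")
    print(answer_region(clause, need_more_context=True, need_less=False))
    # -> Section 3: refunds within 30 days, minus fees.
    ```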
  • Publication number: 20220392454
    Abstract: Methods and systems for multimodal conversational dialogue are disclosed. The multimodal conversational dialogue system includes multiple sensors to detect multimodal inputs from a user. The multimodal conversational dialogue system includes a multimodal semantic parser that performs semantic parsing and multimodal fusion of the multimodal inputs to determine a goal of the user. The multimodal conversational dialogue system includes a dialogue manager that generates a dialogue with the user in real time. The dialogue includes system-generated utterances that are used to conduct a conversation between the user and the multimodal conversational dialogue system.
    Type: Application
    Filed: June 8, 2021
    Publication date: December 8, 2022
    Applicant: Openstream Inc.
    Inventors: Philip R. Cohen, Rajasekhar Tumuluri
  • Patent number: 11461681
    Abstract: Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction, populating, with the computing device, soft-queries to access associated data sources and applications, and mining information retrieved by executing at least one populated soft-query. A soft-query is created from user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes the queries, and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A populated query is used to generate micro-queries associated with the applications involved. (A schematic soft-query builder follows this entry.)
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: October 4, 2022
    Assignee: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
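
    One schematic reading of soft-query population: each multimodal interaction, in order, contributes either the query's focus or a parameter, and the populated query fans out as one micro-query per backing application. The schema and field names below are hypothetical; the abstract does not fix one.

    ```python
    def populate_soft_query(interactions: list[dict]) -> dict:
        """Fold an ordered sequence of multimodal interactions into one soft-query:
        each interaction contributes the query's focus or a parameter."""
        query = {"focus": None, "params": {}}
        for act in interactions:                    # order matters, per the abstract
            if act["role"] == "focus":
                query["focus"] = act["value"]       # e.g. spoken intent
            else:
                query["params"][act["role"]] = act["value"]  # e.g. touched region, scanned object
        return query

    def micro_queries(query: dict, apps: list[str]) -> list[tuple[str, dict]]:
        """Fan the populated soft-query out as one micro-query per application."""
        return [(app, {"ask": query["focus"], **query["params"]}) for app in apps]

    sq = populate_soft_query([
        {"role": "focus", "value": "order_status"},   # speech
        {"role": "order_id", "value": "A-1042"},      # scanned barcode
    ])
    print(micro_queries(sq, ["crm", "warehouse"]))
    ```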
  • Publication number: 20220164548
    Abstract: Methods and systems for attention behavioral analysis for a conversational question and answer system are disclosed. A multi-modality input is selected from a plurality of multimodality conversations among two or more users. The system annotates the first modality inputs, and at least one attention region in the first modality input, corresponding to a set of entities and semantic relationships in a unified modality, is identified by a discrete aspect of information bounded by the attention elements. The system models the representations of the multimodality inputs at different levels of granularity, including the entity, turn, and conversational levels. The proposed method uses a multilevel encoder-decoder network to determine unified focalized attention and to analyze and construct one or more responses for one or more turns in a conversation.
    Type: Application
    Filed: November 24, 2020
    Publication date: May 26, 2022
    Applicant: Openstream Inc.
    Inventor: Rajasekhar Tumuluri
  • Publication number: 20220114463
    Abstract: Methods and systems for multi-modality soft-agents for an enterprise virtual assistant tool are disclosed. An exemplary method comprises capturing, with a computing device, one or more user requests based on at least one multi-modality interaction, populating, with the computing device, soft-queries to access associated data sources and applications, and mining information retrieved by executing at least one populated soft-query. A soft-query is created from user requests. A multi-modality user interface engine annotates the focus of user requests received via text, speech, touch, image, video, or object scanning. A query engine populates queries by identifying the sequence of multi-modal interactions, executes the queries, and provides results by mining the query results. The multi-modality interactions identify specific inputs for query building and specific parameters associated with the query. A populated query is used to generate micro-queries associated with the applications involved.
    Type: Application
    Filed: October 14, 2020
    Publication date: April 14, 2022
    Applicant: Openstream Inc.
    Inventor: Rajasekhar Tumuluri