Patents Assigned to Interactions, LLC
  • Patent number: 11743378
    Abstract: A virtual assistant system for communicating with customers uses human intelligence to correct any errors in the system AI, while collecting data for machine learning and future improvements for more automation. The system may use a modular design, with separate components for carrying out different system functions and sub-functions, and with frameworks for selecting the component best able to respond to a given customer conversation. The system may have agent assistance functionality that uses natural language processing to identify concepts in a user conversation and to illustrate those concepts within a graphical user interface of a human agent so that the human agent can more accurately and more rapidly assist the user in accomplishing the user's conversational objectives.
    Type: Grant
    Filed: October 23, 2020
    Date of Patent: August 29, 2023
    Assignee: Interactions LLC
    Inventors: Michael Johnston, Seyed Eman Mahmoodi
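The agent-assistance functionality described in the abstract of US 11,743,378 above can be pictured as a concept-spotting step feeding the agent's screen. The sketch below is only illustrative: the concept lexicon, the extract_concepts helper, and the text-based "panel" are invented stand-ins for the patent's NLP and GUI components, not the patented implementation.

```python
# Hypothetical concept spotting feeding an agent-assist panel (lexicon and rendering invented).

CONCEPT_LEXICON = {
    "refund": "billing_adjustment",
    "upgrade": "plan_change",
    "password": "account_access",
}

def extract_concepts(user_turn: str) -> list[tuple[str, str]]:
    """Return (surface word, concept) pairs found in the user's utterance."""
    return [(w, CONCEPT_LEXICON[w]) for w in user_turn.lower().split() if w in CONCEPT_LEXICON]

def render_agent_panel(user_turn: str) -> str:
    """Stand-in for the agent GUI: list the detected concepts next to the utterance."""
    tags = ", ".join(f"{word} -> {concept}" for word, concept in extract_concepts(user_turn))
    return f"USER: {user_turn}\nCONCEPTS: {tags or 'none'}"

print(render_agent_panel("I never got my refund after the upgrade"))
```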
  • Patent number: 11625152
    Abstract: Within an environment in which users converse at least partly with human agents to accomplish a desired task, a server assists the agents by identifying workflows that are most applicable to the current conversation. Workflow selection functionality identifies one or more candidate workflows based on techniques such as user intent inference, conversation state tracking, or search, according to various embodiments. The identified candidate workflows are either automatically selected on behalf of the agent, or are presented to the agent for manual selection.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: April 11, 2023
    Assignee: Interactions LLC
    Inventors: Michael Johnston, Seyed Eman Mahmoodi
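The workflow-selection idea summarized for US 11,625,152 above, reduced to a minimal Python sketch: score each workflow against an inferred intent distribution, auto-select when the top candidate clears a confidence threshold, and otherwise present a short candidate list to the agent. The Workflow class, thresholds, and scoring rule are assumptions for illustration, not the patented method.

```python
from dataclasses import dataclass

@dataclass
class Workflow:
    name: str
    trigger_intents: set[str]

def select_workflows(intent_scores: dict[str, float],
                     workflows: list[Workflow],
                     auto_select_threshold: float = 0.8,
                     max_candidates: int = 3):
    """Return (auto_selected, candidates) for the current conversation turn."""
    scored = []
    for wf in workflows:
        # A workflow's score is the best score among the intents that trigger it.
        score = max((intent_scores.get(i, 0.0) for i in wf.trigger_intents), default=0.0)
        scored.append((score, wf))
    scored.sort(key=lambda pair: pair[0], reverse=True)

    best_score, best_wf = scored[0]
    if best_score >= auto_select_threshold:
        return best_wf, []                                       # confident: select on the agent's behalf
    return None, [wf for _, wf in scored[:max_candidates]]       # otherwise: let the agent choose

# Example: intent inference says the caller probably wants to reset a password.
workflows = [Workflow("reset_password", {"password_reset"}),
             Workflow("update_address", {"change_address"})]
print(select_workflows({"password_reset": 0.92}, workflows))
```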
  • Patent number: 11606463
    Abstract: A virtual assistant system for communicating with customers uses human intelligence to correct any errors in the system AI, while collecting data for machine learning and future improvements for more automation. The system may use a modular design, with separate components for carrying out different system functions and sub-functions, and with frameworks for selecting the component best able to respond to a given customer conversation.
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: March 14, 2023
    Assignee: INTERACTIONS LLC
    Inventors: Yoryos Yeracaris, Michael Johnston, Ethan Selfridge, Phillip Gray, Patrick Haffner
  • Patent number: 11508355
    Abstract: Systems and methods are disclosed herein for discerning aspects of user speech to determine user intent and/or other acoustic features of a sound input without the use of an ASR engine. To this end, a processor may receive a sound signal comprising raw acoustic data from a client device, and divide the data into acoustic units. The processor feeds the acoustic units through a first machine learning model to obtain a first output and determines a first mapping, using the first output, of each respective acoustic unit to a plurality of candidate representations of the respective acoustic unit. The processor feeds each candidate representation of the plurality through a second machine learning model to obtain a second output, determines a second mapping, using the second output, of each candidate representation to a known condition, and determines a label for the sound signal based on the second mapping.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: November 22, 2022
    Assignee: Interactions LLC
    Inventors: Ryan Price, Srinivas Bangalore
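The two-stage labeling pipeline summarized for US 11,508,355 above can be sketched as follows. The framing size, toy "models", and decision rule are invented stand-ins for the patent's trained machine learning models.

```python
def frame_signal(samples: list[float], frame_size: int = 160) -> list[list[float]]:
    """Divide raw acoustic data into fixed-size acoustic units (frames)."""
    return [samples[i:i + frame_size] for i in range(0, len(samples), frame_size)]

def first_model(unit: list[float]) -> list[tuple[str, float]]:
    """Stand-in for model 1: map an acoustic unit to scored candidate representations."""
    energy = sum(x * x for x in unit) / max(len(unit), 1)
    return [("voiced", min(energy, 1.0)), ("unvoiced", 1.0 - min(energy, 1.0))]

def second_model(candidates: list[list[tuple[str, float]]]) -> str:
    """Stand-in for model 2: map candidate representations to a known condition/label."""
    voiced = sum(score for frame in candidates for tag, score in frame if tag == "voiced")
    total = sum(score for frame in candidates for _, score in frame)
    return "speech" if total and voiced / total > 0.5 else "non-speech"

def label_signal(samples: list[float]) -> str:
    units = frame_signal(samples)                       # acoustic units
    candidates = [first_model(u) for u in units]        # first mapping
    return second_model(candidates)                     # second mapping -> label

print(label_signal([0.9, -0.8, 0.7] * 200))
```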
  • Patent number: 11314942
    Abstract: A computer-implemented method for providing agent-assisted transcriptions of user utterances. A user utterance is received in response to a prompt provided to the user at a remote client device. An automatic transcription is generated from the utterance using a language model based upon an application or context, and presented to a human agent. The agent reviews the transcription and may replace at least a portion of the transcription with a corrected transcription. As the agent inputs the corrected transcription, accelerants comprising suggested text to be inputted are presented to the agent. The accelerants may be determined based upon an agent input, an application or context of the transcription, the portion of the transcription being replaced, or any combination thereof. In some cases, the user provides textual input, for which the agent transcribes an associated intent with the aid of one or more accelerants.
    Type: Grant
    Filed: March 20, 2020
    Date of Patent: April 26, 2022
    Assignee: Interactions LLC
    Inventors: Ethan Selfridge, Michael Johnston, Robert Lifgren, James Dreher, John Leonard
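A hedged illustration of the "accelerant" behavior in US 11,314,942 above: as the agent types a correction, phrases tied to the current application or context that extend the typed prefix are suggested. The phrase inventory, ranking, and function names below are invented, not taken from the patent.

```python
CONTEXT_PHRASES = {
    "billing": ["check my balance", "pay my bill", "dispute a charge"],
    "travel":  ["change my flight", "check in online", "cancel my reservation"],
}

def suggest_accelerants(agent_prefix: str, context: str, limit: int = 3) -> list[str]:
    """Rank context phrases that extend what the agent has typed so far."""
    prefix = agent_prefix.strip().lower()
    phrases = CONTEXT_PHRASES.get(context, [])
    matches = [p for p in phrases if p.startswith(prefix)]
    # Shorter completions first: less typing left for the agent.
    return sorted(matches, key=len)[:limit]

print(suggest_accelerants("ch", "billing"))   # ['check my balance']
print(suggest_accelerants("c", "travel"))     # ['check in online', 'change my flight', 'cancel my reservation']
```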
  • Patent number: 11288457
    Abstract: Systems and methods are disclosed for determining a move driven by an interaction. In some embodiments, a processor determines an operational state of an interaction with a user based on parameter values of a data structure. The processor identifies a plurality of candidate moves for changing the operational state by determining a domain in which the interaction is occurring, retrieving a set of candidate moves that correspond to the domain from a knowledge graph, and adding the set to the plurality of candidate moves. The processor encodes input of the user received during the interaction into encoded terms, and determines a move for changing the operational state based on a match of the encoded terms to the set of candidate moves. The processor updates the parameter values of the data structure based on the move to reflect the current operational state that the move leads to.
    Type: Grant
    Filed: February 1, 2019
    Date of Patent: March 29, 2022
    Assignee: Interactions LLC
    Inventors: Svetlana Stoyanchev, Michael Johnston
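The move-selection flow in US 11,288,457 above might be pictured roughly as below: retrieve the candidate moves for the interaction's domain from a knowledge-graph-like store, encode the user's input into terms, pick the best-matching move, and update the state data structure. The toy graph, term encoder, and overlap scoring are assumptions for illustration.

```python
KNOWLEDGE_GRAPH = {
    "banking": {
        "check_balance":  {"balance", "much", "account"},
        "transfer_funds": {"transfer", "send", "move", "money"},
    }
}

def encode_terms(user_input: str) -> set[str]:
    """Trivial encoder: lowercase word set (a stand-in for real term encoding)."""
    return set(user_input.lower().split())

def choose_move(state: dict, user_input: str) -> dict:
    domain = state["domain"]
    candidate_moves = KNOWLEDGE_GRAPH.get(domain, {})          # retrieve moves for the domain
    terms = encode_terms(user_input)
    # Score each candidate move by overlap with the encoded terms.
    best_move, best_score = None, 0
    for move, keywords in candidate_moves.items():
        score = len(terms & keywords)
        if score > best_score:
            best_move, best_score = move, score
    # Update the data structure to reflect the state the chosen move leads to.
    return {**state, "last_move": best_move, "turn": state["turn"] + 1}

state = {"domain": "banking", "last_move": None, "turn": 0}
print(choose_move(state, "how much money is in my account"))
```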
  • Patent number: 11210461
    Abstract: A masking system prevents a human agent from receiving sensitive personal information (SPI) provided by a caller during caller-agent communication. The masking system includes components for detecting the SPI, including automated speech recognition and natural language processing systems. When the caller communicates with the agent, e.g., via a phone call, the masking system processes the incoming caller audio. When the masking system detects SPI in the caller audio stream or when the masking system determines a high likelihood that incoming caller audio will include SPI, the caller audio is masked such that it cannot be heard by the agent. The masking system collects the SPI from the caller audio and sends it to the organization associated with the agent for processing the caller's request or transaction without giving the agent access to caller SPI.
    Type: Grant
    Filed: July 3, 2018
    Date of Patent: December 28, 2021
    Assignee: Interactions LLC
    Inventors: David Thomson, Ethan Selfridge
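A simplified, text-only rendering of the masking behavior in US 11,210,461 above: segments that look like sensitive personal information (SPI) are withheld from the agent and routed to the organization instead. The regex detector below is a toy stand-in for the patent's ASR/NLP detection components.

```python
import re

# Toy SPI detector: card-number and SSN shapes only.
SPI_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b|\b\d{3}-\d{2}-\d{4}\b")

def route_caller_segment(segment: str) -> tuple[str, str | None]:
    """Return (what_the_agent_hears, spi_sent_to_organization)."""
    if SPI_PATTERN.search(segment):
        return "[masked]", segment          # agent never receives the SPI
    return segment, None

for seg in ["my card number is 4111 1111 1111 1111", "I'd like to pay my bill"]:
    print(route_caller_segment(seg))
```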
  • Patent number: 10891435
    Abstract: Machine translation is used to leverage the semantic properties (e.g., intent) already known for one natural language for use in another natural language. In a first embodiment, the corpus of a first language is translated to each other language of interest using machine translation, and the corresponding semantic properties are transferred to the translated corpuses. Semantic models can then be generated from the translated corpuses and the transferred semantic properties. In a second embodiment, given a first language for which there is a semantic model, if a query is received in a second, different language lacking its own semantic model, machine translation is used to translate the query into the first language. Then, the semantic model for the first language is applied to the translated query, thereby obtaining the semantic properties for the query, even though no semantic model existed for the language in which the query was specified.
    Type: Grant
    Filed: February 20, 2018
    Date of Patent: January 12, 2021
    Assignee: INTERACTIONS LLC
    Inventors: Nicholas Ruiz, John Chen, Srinivas Bangalore
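The second embodiment described for US 10,891,435 above (translate the query into the language that has a semantic model, then apply that model) can be sketched as below. The phrase-table "translator" and keyword intent model are placeholders, not real MT or NLU components.

```python
def translate(text: str, source: str, target: str) -> str:
    """Placeholder MT: a tiny phrase table standing in for a real translation service."""
    phrase_table = {("es", "en"): {"quiero pagar mi factura": "i want to pay my bill"}}
    return phrase_table.get((source, target), {}).get(text.lower(), text)

def english_semantic_model(utterance: str) -> str:
    """Placeholder English-only intent model."""
    return "pay_bill" if "pay" in utterance and "bill" in utterance else "unknown"

def classify(query: str, language: str) -> str:
    if language != "en":
        query = translate(query, source=language, target="en")   # bridge to the modeled language
    return english_semantic_model(query)

print(classify("Quiero pagar mi factura", language="es"))   # -> pay_bill
```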
  • Patent number: 10810997
    Abstract: An interactive response system directs input to a software-based router, which is able to intelligently respond to the input by drawing on a combination of human agents, advanced recognition and expert systems. The system utilizes human “intent analysts” for purposes of interpreting customer input. Automated recognition subsystems are trained by coupling customer input with IA-selected intent corresponding to the input, using model-updating subsystems to develop the training information for the automated recognition subsystems.
    Type: Grant
    Filed: October 9, 2018
    Date of Patent: October 20, 2020
    Assignee: Interactions LLC
    Inventors: Yoryos Yeracaris, Larissa Lapshina, Alwin B. Carus
  • Patent number: 10796100
    Abstract: A natural language processing system has a hierarchy of user intents related to a domain of interest, the hierarchy having specific intents corresponding to leaf nodes of the hierarchy, and more general intents corresponding to ancestor nodes of the leaf nodes. The system also has a trained understanding model that can classify natural language utterances according to user intent. When the understanding model cannot determine with sufficient confidence that a natural language utterance corresponds to one of the specific intents, the natural language processing system traverses the hierarchy of intents to find a more general user intent that is related to the most applicable specific intent of the utterance and for which there is sufficient confidence. The general intent can then be used to prompt the user with questions applicable to the general intent to obtain the missing information needed for a specific intent.
    Type: Grant
    Filed: January 15, 2019
    Date of Patent: October 6, 2020
    Assignee: Interactions LLC
    Inventors: Srinivas Bangalore, John Chen
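The hierarchy fallback in US 10,796,100 above amounts to: if no leaf intent is confident enough, aggregate confidence at an ancestor node and, if that is sufficient, prompt with a question tied to the general intent. The toy hierarchy, thresholds, and prompts below are invented for the example.

```python
PARENT = {                       # leaf intent -> ancestor (general) intent
    "book_flight": "travel",
    "book_hotel":  "travel",
    "pay_bill":    "billing",
}
FOLLOW_UP = {"travel": "Are you booking a flight or a hotel?",
             "billing": "Which account would you like to pay?"}

def resolve_intent(leaf_scores: dict[str, float], threshold: float = 0.7):
    best_leaf = max(leaf_scores, key=leaf_scores.get)
    if leaf_scores[best_leaf] >= threshold:
        return ("specific", best_leaf, None)
    # Not confident in any leaf: aggregate confidence at the ancestor of the best leaf.
    ancestor = PARENT[best_leaf]
    ancestor_score = sum(s for leaf, s in leaf_scores.items() if PARENT.get(leaf) == ancestor)
    if ancestor_score >= threshold:
        return ("general", ancestor, FOLLOW_UP[ancestor])
    return ("unknown", None, "Could you tell me more about what you need?")

# 0.45 + 0.40 = 0.85 confidence that the user wants *some* travel booking.
print(resolve_intent({"book_flight": 0.45, "book_hotel": 0.40, "pay_bill": 0.15}))
```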
  • Patent number: 10789943
    Abstract: An interactive response system combines human intelligence (HI) subsystems with artificial intelligence (AI) subsystems to facilitate overall capability of multi-channel user interfaces. The system permits imperfect AI subsystems to nonetheless lessen the burden on HI subsystems. A combined AI and HI proxy is used to implement an interactive omnichannel system, and the proxy dynamically determines how many AI and HI subsystems are to perform recognition for any particular utterance, based on factors such as confidence thresholds of the AI recognition and availability of HI resources. Furthermore, the system uses information from prior recognitions to automatically build, test, predict confidence, and maintain AI models and HI models for system recognition improvements.
    Type: Grant
    Filed: August 31, 2018
    Date of Patent: September 29, 2020
    Assignee: Interactions LLC
    Inventors: Larissa Lapshina, Mahnoosh Mehrabani Sharifbad, David Thomson, Yoryos Yeracaris
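A rough sketch of the proxy decision described for US 10,789,943 above: accept the AI recognition when its confidence clears a threshold, escalate to a human intelligence (HI) resource when confidence is low and an agent is available, and otherwise fall back to the automated hypothesis. The threshold, availability check, and data shapes are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Recognition:
    text: str
    confidence: float

def ai_recognize(audio_id: str) -> Recognition:
    """Stand-in for an AI recognizer returning a hypothesis with a confidence score."""
    return Recognition(text="pay my bill", confidence=0.55)

def proxy_route(audio_id: str, hi_agents_available: int,
                confidence_threshold: float = 0.75) -> dict:
    ai_result = ai_recognize(audio_id)
    if ai_result.confidence >= confidence_threshold:
        return {"source": "AI", "result": ai_result.text, "hi_used": 0}
    if hi_agents_available > 0:
        # Low confidence and a human is free: send to one HI subsystem for review/correction.
        return {"source": "HI", "result": f"(agent verifies: {ai_result.text!r})", "hi_used": 1}
    # No humans free: fall back to the best automated hypothesis.
    return {"source": "AI-fallback", "result": ai_result.text, "hi_used": 0}

print(proxy_route("utt-001", hi_agents_available=2))
```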
  • Patent number: 10621282
    Abstract: A computer-implemented method for providing agent-assisted transcriptions of user utterances. A user utterance is received in response to a prompt provided to the user at a remote client device. An automatic transcription is generated from the utterance using a language model based upon an application or context, and presented to a human agent. The agent reviews the transcription and may replace at least a portion of the transcription with a corrected transcription. As the agent inputs the corrected transcription, accelerants comprising suggested text to be inputted are presented to the agent. The accelerants may be determined based upon an agent input, an application or context of the transcription, the portion of the transcription being replaced, or any combination thereof. In some cases, the user provides textual input, for which the agent transcribes an associated intent with the aid of one or more accelerants.
    Type: Grant
    Filed: April 26, 2018
    Date of Patent: April 14, 2020
    Assignee: Interactions LLC
    Inventors: Ethan Selfridge, Michael Johnston, Robert Lifgren, James Dreher, John Leonard
  • Patent number: 10482876
    Abstract: A speech interpretation module interprets the audio of user utterances as sequences of words. To do so, the speech interpretation module parameterizes a literal corpus of expressions by identifying portions of the expressions that correspond to known concepts, and generates a parameterized statistical model from the resulting parameterized corpus. When speech is received, the speech interpretation module uses a hierarchical speech recognition decoder that uses both the parameterized statistical model and language sub-models that specify how to recognize a sequence of words. The separation of the language sub-models from the statistical model beneficially reduces the size of the literal corpus needed for training, reduces the size of the resulting model, provides more fine-grained interpretation of concepts, and improves computational efficiency by allowing run-time incorporation of the language sub-models.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: November 19, 2019
    Assignee: Interactions LLC
    Inventors: Ethan Selfridge, Michael Johnston
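The corpus parameterization behind US 10,482,876 above can be pictured as replacing concept spans in literal expressions with concept tokens, so one parameterized pattern covers many literal sentences and the concept's language sub-model is consulted at run time. The phrase lists and matching below are invented stand-ins for the statistical model and decoder.

```python
SUB_MODELS = {"CITY": {"boston", "new york", "austin"}}   # toy "language sub-model": a phrase list

def parameterize(utterance: str) -> str:
    """Replace known-concept spans with their concept token."""
    text = utterance.lower()
    for concept, phrases in SUB_MODELS.items():
        for phrase in sorted(phrases, key=len, reverse=True):   # longest match first
            text = text.replace(phrase, concept)
    return text

PARAMETERIZED_CORPUS = {"i want to fly to CITY", "book a hotel in CITY"}

def interpret(utterance: str) -> bool:
    """The parameterized pattern must exist and the concept span must be licensed
    by the concept's sub-model (both handled inside parameterize here)."""
    return parameterize(utterance) in PARAMETERIZED_CORPUS

print(parameterize("I want to fly to New York"))   # 'i want to fly to CITY'
print(interpret("Book a hotel in Austin"))         # True
print(interpret("Book a hotel in Gotham"))         # False (no sub-model entry)
```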
  • Patent number: 10216832
    Abstract: A natural language processing system has a hierarchy of user intents related to a domain of interest, the hierarchy having specific intents corresponding to leaf nodes of the hierarchy, and more general intents corresponding to ancestor nodes of the leaf nodes. The system also has a trained understanding model that can classify natural language utterances according to user intent. When the understanding model cannot determine with sufficient confidence that a natural language utterance corresponds to one of the specific intents, the natural language processing system traverses the hierarchy of intents to find a more general user intent that is related to the most applicable specific intent of the utterance and for which there is sufficient confidence. The general intent can then be used to prompt the user with questions applicable to the general intent to obtain the missing information needed for a specific intent.
    Type: Grant
    Filed: December 19, 2016
    Date of Patent: February 26, 2019
    Assignee: Interactions LLC
    Inventors: Srinivas Bangalore, John Chen
  • Patent number: 10147419
    Abstract: An interactive response system directs input to a software-based router, which is able to intelligently respond to the input by drawing on a combination of human agents, advanced recognition and expert systems. The system utilizes human “intent analysts” for purposes of interpreting customer input. Automated recognition subsystems are trained by coupling customer input with IA-selected intent corresponding to the input, using model-updating subsystems to develop the training information for the automated recognition subsystems.
    Type: Grant
    Filed: August 30, 2016
    Date of Patent: December 4, 2018
    Assignee: INTERACTIONS LLC
    Inventors: Yoryos Yeracaris, Larissa Lapshina, Alwin B. Carus
  • Patent number: 10122712
    Abstract: A request from a party is received by a receiver from a remote system. The request from the party is received when the party attempts to obtain a service using the remote system. A selective determination is made to request, over a network, authentication of the party by a remote biometric system. A request is sent to the remote system for the party to provide a biometric sample responsive to determining to request authentication of the party. The service is provided contingent upon authentication of the party by the remote biometric system.
    Type: Grant
    Filed: February 22, 2017
    Date of Patent: November 6, 2018
    Assignee: INTERACTIONS LLC
    Inventors: Brian M. Novack, Daniel Larry Madsen, Timothy R. Thompson
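The contingent-service flow in US 10,122,712 above, reduced to a schematic with invented risk rules and stubbed remote calls: decide selectively whether the request needs biometric authentication, ask for a sample if so, and provide the service only if the (stubbed) remote biometric system authenticates the party.

```python
HIGH_RISK_SERVICES = {"wire_transfer", "change_pin"}   # invented risk rule

def needs_biometric_auth(service: str) -> bool:
    """Selective determination: only higher-risk services trigger authentication."""
    return service in HIGH_RISK_SERVICES

def remote_biometric_system_verify(party_id: str, sample: bytes) -> bool:
    """Stub for the remote biometric system's authentication decision."""
    return bool(sample)            # pretend any non-empty sample verifies

def handle_request(party_id: str, service: str, get_sample) -> str:
    if needs_biometric_auth(service):
        sample = get_sample()                                   # ask the remote system for a sample
        if not remote_biometric_system_verify(party_id, sample):
            return f"{service}: denied (authentication failed)"
    return f"{service}: provided to {party_id}"

print(handle_request("caller-42", "check_balance", get_sample=lambda: b""))
print(handle_request("caller-42", "wire_transfer", get_sample=lambda: b"\x01voiceprint"))
```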
  • Patent number: 10096317
    Abstract: A speech interpretation module interprets the audio of user utterances as sequences of words. To do so, the speech interpretation module parameterizes a literal corpus of expressions by identifying portions of the expressions that correspond to known concepts, and generates a parameterized statistical model from the resulting parameterized corpus. When speech is received, the speech interpretation module uses a hierarchical speech recognition decoder that uses both the parameterized statistical model and language sub-models that specify how to recognize a sequence of words. The separation of the language sub-models from the statistical model beneficially reduces the size of the literal corpus needed for training, reduces the size of the resulting model, provides more fine-grained interpretation of concepts, and improves computational efficiency by allowing run-time incorporation of the language sub-models.
    Type: Grant
    Filed: April 18, 2016
    Date of Patent: October 9, 2018
    Assignee: INTERACTIONS LLC
    Inventors: Ethan Selfridge, Michael Johnston
  • Patent number: 10049676
    Abstract: An interactive response system mixes HSR subsystems with ASR subsystems to facilitate overall capability of user interfaces. The system permits imperfect ASR subsystems to nonetheless relieve burden on HSR subsystems. An ASR proxy is used to implement an IVR system, and the proxy dynamically selects one or more recognizers from a language model and a human agent to recognize user input. Selection of the one or more recognizers is based on factors such as confidence thresholds of the ASRs and availability of human resources for HSRs.
    Type: Grant
    Filed: July 12, 2017
    Date of Patent: August 14, 2018
    Assignee: INTERACTIONS LLC
    Inventors: Yoryos Yeracaris, Alwin B. Carus, Larissa Lapshina
  • Patent number: 9842587
    Abstract: A system and method is provided for combining active and unsupervised learning for automatic speech recognition. This process enables a reduction in the amount of human supervision required for training acoustic and language models and an increase in performance given the transcribed and un-transcribed data.
    Type: Grant
    Filed: May 31, 2016
    Date of Patent: December 12, 2017
    Assignee: Interactions LLC
    Inventors: Dilek Zeynep Hakkani-Tur, Giuseppe Riccardi
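The combination of active and unsupervised learning summarized for US 9,842,587 above can be sketched as a confidence-based split of un-transcribed data: low-confidence recognizer output goes to human transcription (active learning), while high-confidence output is self-labeled with the machine hypothesis (unsupervised learning). The thresholds and data format below are assumptions.

```python
def split_for_training(utterances, low=0.4, high=0.9):
    """utterances: iterable of (audio_id, hypothesis, confidence)."""
    to_transcribe, self_labeled = [], []
    for audio_id, hypothesis, confidence in utterances:
        if confidence < low:
            to_transcribe.append(audio_id)                 # humans label the informative, uncertain cases
        elif confidence >= high:
            self_labeled.append((audio_id, hypothesis))    # trust the machine label
        # mid-confidence utterances are left out of this round
    return to_transcribe, self_labeled

batch = [("u1", "pay my bill", 0.95), ("u2", "???", 0.2), ("u3", "agent please", 0.6)]
print(split_for_training(batch))   # (['u2'], [('u1', 'pay my bill')])
```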
  • Patent number: 9741347
    Abstract: An interactive response system mixes HSR subsystems with ASR subsystems to facilitate overall capability of voice user interfaces. The system permits imperfect ASR subsystems to nonetheless relieve burden on HSR subsystems. An ASR proxy is used to implement an IVR system, and the proxy dynamically determines how many ASR and HSR subsystems are to perform recognition for any particular utterance, based on factors such as confidence thresholds of the ASRs and availability of human resources for HSRs. In some embodiments, the ASR proxy dynamically selects one or more recognizers based at least in part on the identified grammar and the time length of the utterance.
    Type: Grant
    Filed: December 3, 2015
    Date of Patent: August 22, 2017
    Assignee: Interactions LLC
    Inventors: Yoryos Yeracaris, Alwin B. Carus, Larissa Lapshina