Patents by Inventor Dimitra Vergyri

Dimitra Vergyri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230260510
    Abstract: An automated interactive voice dialogue system using a human-in-the-loop design may enable human-supported interventions, where the automated system still conducts most of the interaction but enables a human agent to assume control of the dialogue and assist, if deemed necessary, so that the user may continue the interaction with little interruption or frustration. In some examples, the user of the dialogue system of this disclosure may not realize that there was a problem, and that the interaction is being or has been switched from an automated dialogue system to a human. In some examples, the automated dialogue system of this disclosure may also automatically switch back to machine interaction when the human agent has resolved the situation.
    Type: Application
    Filed: December 20, 2022
    Publication date: August 17, 2023
    Inventors: Dimitra Vergyri, Harry Bratt
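
The handoff described in the entry above can be illustrated with a minimal sketch: an automated policy answers turns until its confidence stays low, then silently routes the dialogue to a human agent and hands control back once the agent marks the issue resolved. The confidence threshold, the DialogueState fields, and the agent-side helpers are hypothetical placeholders, not the patented design.

```python
"""Minimal sketch of a human-in-the-loop dialogue handoff (illustration only)."""

from dataclasses import dataclass, field


@dataclass
class DialogueState:
    transcript: list = field(default_factory=list)  # shared history a human agent can read
    human_in_control: bool = False                  # True while an agent holds the dialogue
    failed_turns: int = 0                           # consecutive low-confidence turns


def automated_reply(user_utterance: str) -> tuple[str, float]:
    """Stand-in for the automated dialogue policy: returns (reply, confidence)."""
    if "refund" in user_utterance.lower():
        return "I can help with refunds. What is your order number?", 0.92
    return "Sorry, could you rephrase that?", 0.30


def human_agent_reply(state: DialogueState) -> str:
    """Placeholder for the human agent console (the agent sees state.transcript)."""
    return "Let me take a closer look at that for you."


def agent_marked_resolved(state: DialogueState) -> bool:
    """Placeholder for the agent's 'resolved' signal."""
    return False


def handle_turn(state: DialogueState, user_utterance: str,
                escalation_threshold: float = 0.5, max_failed_turns: int = 2) -> str:
    """Route one turn to the machine or to a human agent, invisibly to the user."""
    state.transcript.append(("user", user_utterance))

    if state.human_in_control:
        reply = human_agent_reply(state)
        if agent_marked_resolved(state):            # switch back to machine interaction
            state.human_in_control = False
            state.failed_turns = 0
    else:
        reply, confidence = automated_reply(user_utterance)
        state.failed_turns = state.failed_turns + 1 if confidence < escalation_threshold else 0
        if state.failed_turns >= max_failed_turns:
            state.human_in_control = True           # silent handoff; user keeps the same channel
            reply = human_agent_reply(state)

    state.transcript.append(("system", reply))
    return reply


if __name__ == "__main__":
    state = DialogueState()
    for utterance in ["hello?", "what??", "I said hello"]:
        print(handle_turn(state, utterance))
    print("human in control:", state.human_in_control)
```
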
  • Publication number: 20220115001
    Abstract: A voice-based digital assistant (VDA) uses a conversation intelligence (CI) manager module with a rule-based engine to process information from one or more modules and make determinations on both i) understanding human conversational cues and ii) generating human conversational cues, including at least understanding and generating a backchannel utterance, in the flow and exchange of human communication in order to grab or yield the conversational floor between a user and the VDA. The CI manager module uses the rule-based engine to analyze and make a determination on a conversational cue of, at least, prosody in the user's flow of speech, and to generate the backchannel utterance to signal any of i) an understanding, ii) a correction, iii) a confirmation, or iv) a questioning of the verbal communications conveyed by the user during a time frame when the user still holds the conversational floor.
    Type: Application
    Filed: May 7, 2020
    Publication date: April 14, 2022
    Inventors: Harry Bratt, Kristin Precoda, Dimitra Vergyri
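
A toy version of the rule-based backchannel decision described in this entry might look like the following; the prosodic features, thresholds, and utterance inventory are assumptions for illustration, not the rules used by the patented CI manager.

```python
"""Toy rule-based backchannel decision driven by prosodic cues (illustration only)."""

from dataclasses import dataclass


@dataclass
class ProsodyFrame:
    pause_ms: float        # silence since the user's last word
    pitch_slope: float     # rising (> 0) or falling (< 0) intonation
    energy_drop: float     # relative drop in speaking energy
    asr_confidence: float  # confidence of the running recognition hypothesis


def backchannel(frame: ProsodyFrame) -> str | None:
    """Return a backchannel utterance, or None to stay silent.

    The user keeps the conversational floor in every branch below;
    the assistant only signals understanding, confirmation, or a question.
    """
    # No cue yet: the user is mid-phrase, do not interject.
    if frame.pause_ms < 200:
        return None

    # Falling pitch with an energy drop at a short pause: acknowledge.
    if frame.pitch_slope < 0 and frame.energy_drop > 0.3:
        return "mm-hmm"

    # Rising pitch suggests the user is checking in: confirm.
    if frame.pitch_slope > 0.5:
        return "right"

    # Low recognition confidence: signal a questioning backchannel.
    if frame.asr_confidence < 0.4:
        return "sorry, say that again?"

    return None


if __name__ == "__main__":
    print(backchannel(ProsodyFrame(pause_ms=350, pitch_slope=-0.4,
                                   energy_drop=0.5, asr_confidence=0.9)))  # -> "mm-hmm"
```
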
  • Patent number: 11217228
    Abstract: Systems and methods for speech recognition are provided. In some aspects, the method comprises receiving, using an input, an audio signal. The method further comprises splitting the audio signal into auditory test segments. The method further comprises extracting, from each of the auditory test segments, a set of acoustic features. The method further comprises applying the set of acoustic features to a deep neural network to produce a hypothesis for the corresponding auditory test segment. The method further comprises selectively performing one or more of: indirect adaptation of the deep neural network and direct adaptation of the deep neural network.
    Type: Grant
    Filed: March 22, 2017
    Date of Patent: January 4, 2022
    Assignee: SRI International
    Inventors: Vikramjit Mitra, Horacio E. Franco, Chris D. Bartels, Dimitra Vergyri, Julien van Hout, Martin Graciarena
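
The segmentation, feature extraction, hypothesis, and adaptation steps named in this abstract can be sketched roughly as below. The frame sizes, the stand-in feature extractor, and the tiny linear "DNN" are placeholders chosen for illustration, not the models in the patent.

```python
"""Sketch of the segment -> features -> DNN -> hypothesis flow with optional adaptation."""

import numpy as np


def split_into_segments(audio: np.ndarray, segment_len: int = 16000) -> list[np.ndarray]:
    """Cut the signal into fixed-length test segments (1 s at 16 kHz here)."""
    return [audio[i:i + segment_len] for i in range(0, len(audio), segment_len)]


def acoustic_features(segment: np.ndarray, n_feats: int = 40) -> np.ndarray:
    """Stand-in feature extractor (filterbank energies would go here in practice)."""
    usable = segment[: len(segment) // 160 * 160]     # keep whole 10 ms frames only
    frames = usable.reshape(-1, 160)[:, :n_feats]     # fake framing for illustration
    return np.log1p(np.abs(frames))


class TinyDNN:
    """Placeholder acoustic model: a single linear layer over the features."""

    def __init__(self, n_feats: int = 40, n_units: int = 30):
        rng = np.random.default_rng(0)
        self.w = rng.normal(size=(n_feats, n_units))

    def hypothesize(self, feats: np.ndarray) -> np.ndarray:
        """Return per-frame unit scores standing in for a recognition hypothesis."""
        return feats @ self.w

    def direct_adaptation(self, feats: np.ndarray) -> None:
        """Directly nudge the model weights toward the test-data statistics."""
        self.w += 1e-3 * feats.mean(axis=0, keepdims=True).T

    def indirect_adaptation(self, feats: np.ndarray) -> np.ndarray:
        """Adapt the features instead of the model (mean normalization here)."""
        return feats - feats.mean(axis=0, keepdims=True)


def recognize(audio: np.ndarray, adapt: str = "indirect") -> list[np.ndarray]:
    """Produce one hypothesis per auditory test segment, selectively adapting."""
    model = TinyDNN()
    hypotheses = []
    for segment in split_into_segments(audio):
        feats = acoustic_features(segment)
        if adapt == "indirect":
            feats = model.indirect_adaptation(feats)
        elif adapt == "direct":
            model.direct_adaptation(feats)
        hypotheses.append(model.hypothesize(feats))
    return hypotheses


if __name__ == "__main__":
    print(len(recognize(np.random.randn(48000))))  # three one-second segments
```
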
  • Patent number: 10977452
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Grant
    Filed: July 11, 2019
    Date of Patent: April 13, 2021
    Assignee: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
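
The chained ASR, machine-translation, and natural-language-understanding stages with confidence values described above can be sketched as simple glue code; the engine stubs, confidence threshold, and intent format below are assumptions, not the patented components.

```python
"""Sketch of an ASR -> MT -> NLU chain that passes confidence values along."""

from dataclasses import dataclass


@dataclass
class ScoredText:
    text: str
    confidence: float  # 0.0 - 1.0, produced by the stage that emitted the text


def speech_recognizer(audio: bytes, language: str) -> ScoredText:
    """Placeholder ASR engine: text in the user's first natural language."""
    return ScoredText(text="quiero pagar mi factura", confidence=0.88)


def machine_translator(src: ScoredText, target_language: str = "en") -> ScoredText:
    """Placeholder MT engine: translate into the second natural language."""
    return ScoredText(text="I want to pay my bill", confidence=0.81)


def natural_language_understanding(utterance: ScoredText) -> dict:
    """Placeholder NLU: map the second language into a computer-readable intent."""
    return {"intent": "pay_bill", "slots": {}}


def handle_audio(audio: bytes, language: str, min_confidence: float = 0.5) -> dict:
    """Run the full chain, falling back to a reprompt on low confidence."""
    recognized = speech_recognizer(audio, language)
    if recognized.confidence < min_confidence:
        return {"intent": "reprompt", "reason": "low ASR confidence"}

    translated = machine_translator(recognized)
    if translated.confidence < min_confidence:
        return {"intent": "reprompt", "reason": "low translation confidence"}

    return natural_language_understanding(translated)


if __name__ == "__main__":
    print(handle_audio(b"", language="es"))  # {'intent': 'pay_bill', 'slots': {}}
```
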
  • Patent number: 10915570
    Abstract: In general, the disclosure describes techniques for personalizing a meeting summary according to the relevance of different meeting items within a meeting to different users. In some examples, a computing system for automatically providing personalized summaries of meetings comprises a memory configured to store information describing a meeting; and processing circuitry configured to receive a plurality of meeting item summaries of respective meeting items included in the transcript of the meeting; determine, by applying a model of meeting item relevance to the meeting item summaries, a corresponding relevance to a user of each of the meeting item summaries; and output respective indications of relevance to the user for one or more of the meeting item summaries to provide a personalized summary of the meeting to the user.
    Type: Grant
    Filed: March 26, 2019
    Date of Patent: February 9, 2021
    Assignee: SRI International
    Inventors: Bhaskar Ramamurthy, Rajan Singh, Dimitra Vergyri, Jagjit Singh Srawan, Rolf Joseph Rando
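
A minimal sketch of scoring meeting-item summaries for a particular user follows; the keyword-overlap scorer stands in for the relevance model named in the abstract, and the user-profile format is an assumption.

```python
"""Sketch of personalizing a meeting summary by scoring item relevance per user."""

from dataclasses import dataclass


@dataclass
class MeetingItemSummary:
    title: str
    text: str


def relevance_model(summary: MeetingItemSummary, user_interests: set[str]) -> float:
    """Toy relevance model: fraction of the user's interest terms mentioned in the item."""
    words = set(summary.text.lower().split())
    if not user_interests:
        return 0.0
    return len(words & user_interests) / len(user_interests)


def personalized_summary(items: list[MeetingItemSummary],
                         user_interests: set[str],
                         top_k: int = 3) -> list[tuple[MeetingItemSummary, float]]:
    """Return the top-k meeting items with their relevance scores for this user."""
    scored = [(item, relevance_model(item, user_interests)) for item in items]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]


if __name__ == "__main__":
    items = [
        MeetingItemSummary("Budget", "q3 budget approved for the speech team"),
        MeetingItemSummary("Hiring", "two engineer roles opened in the vision group"),
    ]
    for item, score in personalized_summary(items, {"speech", "budget"}):
        print(f"{item.title}: relevance {score:.2f}")
```
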
  • Publication number: 20200311122
    Abstract: In general, the disclosure describes techniques for personalizing a meeting summary according to the relevance of different meeting items within a meeting to different users. In some examples, a computing system for automatically providing personalized summaries of meetings comprises a memory configured to store information describing a meeting; and processing circuitry configured to receive a plurality of meeting item summaries of respective meeting items included in the transcript of the meeting; determine, by applying a model of meeting item relevance to the meeting item summaries, a corresponding relevance to a user of each of the meeting item summaries; and output respective indications of relevance to the user for one or more of the meeting item summaries to provide a personalized summary of the meeting to the user.
    Type: Application
    Filed: March 26, 2019
    Publication date: October 1, 2020
    Inventors: Bhaskar Ramamurthy, Rajan Singh, Dimitra Vergyri, Jagjit Singh Srawan, Rolf Joseph Rando
  • Patent number: 10726846
    Abstract: An electronic device for providing health information or assistance includes: an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof; a communication module configured to send information related to at least one user and his or her environment, including the sound signals, non-verbal signals, and verbal signals, to a remote device, the remote device being configured to analyze a condition of the user and communicate condition signals back to the electronic device; a processing module configured to receive the condition signals and to cause the electronic device to engage in a passive monitoring mode or an active engagement and monitoring mode, the active engagement and monitoring mode including, but not limited to, verbal communication with the user; and an output configured to engage the user in verbal communication.
    Type: Grant
    Filed: June 3, 2017
    Date of Patent: July 28, 2020
    Assignee: SRI International
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
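
The switch between a passive monitoring mode and an active engagement-and-monitoring mode, driven by condition signals from a remote device, can be sketched as a small state machine; the signal names and the escalation rule are assumptions for illustration, not the patented logic.

```python
"""Sketch of a passive-monitoring vs. active-engagement mode switch."""

from enum import Enum, auto


class Mode(Enum):
    PASSIVE_MONITORING = auto()   # listen and forward signals, stay silent
    ACTIVE_ENGAGEMENT = auto()    # speak to the user while continuing to monitor


class HealthAssistant:
    def __init__(self):
        self.mode = Mode.PASSIVE_MONITORING

    def send_to_remote(self, signals: dict) -> str:
        """Stand-in for the communication module: returns the remote condition signal."""
        if signals.get("distress_keywords") or signals.get("fall_detected"):
            return "needs_attention"
        return "normal"

    def on_signals(self, signals: dict) -> str | None:
        """Forward signals, update the mode, and return a spoken prompt when engaging."""
        condition = self.send_to_remote(signals)
        if condition == "needs_attention":
            self.mode = Mode.ACTIVE_ENGAGEMENT
            return "Are you okay? Would you like me to call someone?"
        self.mode = Mode.PASSIVE_MONITORING
        return None                # stay quiet while passively monitoring


if __name__ == "__main__":
    assistant = HealthAssistant()
    print(assistant.on_signals({"fall_detected": True}))  # active engagement prompt
    print(assistant.on_signals({}))                        # None: back to passive mode
```
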
  • Publication number: 20200168208
    Abstract: Systems and methods for speech recognition are provided. In some aspects, the method comprises receiving, using an input, an audio signal. The method further comprises splitting the audio signal into auditory test segments. The method further comprises extracting, from each of the auditory test segments, a set of acoustic features. The method further comprises applying the set of acoustic features to a deep neural network to produce a hypothesis for the corresponding auditory test segment. The method further comprises selectively performing one or more of: indirect adaptation of the deep neural network and direct adaptation of the deep neural network.
    Type: Application
    Filed: March 22, 2017
    Publication date: May 28, 2020
    Inventors: Vikramjit Mitra, Horacio E. Franco, Chris D. Bartels, Dimitra Vergyri, Julien van Hout, Martin Graciarena
  • Patent number: 10478111
    Abstract: A computer-implemented method can include a speech collection module collecting a speech pattern from a patient, a speech feature computation module computing at least one speech feature from the collected speech pattern, a mental health determination module determining a state-of-mind of the patient based at least in part on the at least one computed speech feature, and an output module providing an indication of a diagnosis with regard to a possibility that the patient is suffering from a certain condition such as depression or Post-Traumatic Stress Disorder (PTSD).
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: November 19, 2019
    Assignee: SRI International
    Inventors: Bruce Knoth, Dimitra Vergyri, Elizabeth Shriberg, Vikramjit Mitra, Mitchell McLaren, Andreas Kathol, Colleen Richey, Martin Graciarena
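
The speech-collection, feature-computation, and state-of-mind determination flow in this entry can be sketched as below; the two toy features and the logistic score are placeholders, not the clinically validated features or model behind the patent.

```python
"""Sketch of the speech collection -> feature computation -> state-of-mind flow."""

import numpy as np


def collect_speech(seconds: float = 5.0, sample_rate: int = 16000) -> np.ndarray:
    """Stand-in for the speech collection module (random signal for the demo)."""
    return np.random.default_rng(0).normal(size=int(seconds * sample_rate))


def compute_speech_features(speech: np.ndarray) -> dict:
    """Toy feature computation: overall energy and a crude pause ratio."""
    energy = float(np.mean(speech ** 2))
    pause_ratio = float(np.mean(np.abs(speech) < 0.1))  # fraction of low-energy samples
    return {"energy": energy, "pause_ratio": pause_ratio}


def state_of_mind_score(features: dict) -> float:
    """Toy risk score in [0, 1]; higher suggests more follow-up is warranted."""
    raw = 2.0 * features["pause_ratio"] - 0.5 * features["energy"]
    return float(1.0 / (1.0 + np.exp(-raw)))            # squash with a logistic


def screening_indication(score: float, threshold: float = 0.6) -> str:
    """Output module: an indication to support clinicians, not a diagnosis."""
    if score >= threshold:
        return "elevated indicators; recommend clinical follow-up"
    return "no elevated indicators in this sample"


if __name__ == "__main__":
    feats = compute_speech_features(collect_speech())
    print(screening_indication(state_of_mind_score(feats)))
```
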
  • Publication number: 20190332680
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Application
    Filed: July 11, 2019
    Publication date: October 31, 2019
    Applicant: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
  • Patent number: 10402501
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Grant
    Filed: June 21, 2018
    Date of Patent: September 3, 2019
    Assignee: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
  • Publication number: 20190108841
    Abstract: An electronic device for providing health information or assistance includes: an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof; a communication module configured to send information related to at least one user and his or her environment, including the sound signals, non-verbal signals, and verbal signals, to a remote device, the remote device being configured to analyze a condition of the user and communicate condition signals back to the electronic device; a processing module configured to receive the condition signals and to cause the electronic device to engage in a passive monitoring mode or an active engagement and monitoring mode, the active engagement and monitoring mode including, but not limited to, verbal communication with the user; and an output configured to engage the user in verbal communication.
    Type: Application
    Filed: June 3, 2017
    Publication date: April 11, 2019
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
  • Publication number: 20180314689
    Abstract: Provided are systems, computer-implemented methods, and computer-program products for a multi-lingual device, capable of receiving verbal input in multiple languages, and further capable of providing conversational responses in multiple languages. In various implementations, the multi-lingual device includes an automatic speech recognition engine capable of receiving verbal input in a first natural language and providing a textual representation of the input and a confidence value for the recognition. The multi-lingual device can also include a machine translation engine, capable of translating textual input from the first natural language into a second natural language. The machine translation engine can output a confidence value for the translation. The multi-lingual device can further include natural language processing, capable of translating from the second natural language to a computer-based language.
    Type: Application
    Filed: June 21, 2018
    Publication date: November 1, 2018
    Applicant: SRI International
    Inventors: Wen Wang, Dimitra Vergyri, Girish Acharya
  • Publication number: 20180214061
    Abstract: A computer-implemented method can include a speech collection module collecting a speech pattern from a patient, a speech feature computation module computing at least one speech feature from the collected speech pattern, a mental health determination module determining a state-of-mind of the patient based at least in part on the at least one computed speech feature, and an output module providing an indication of a diagnosis with regard to a possibility that the patient is suffering from a certain condition such as depression or Post-Traumatic Stress Disorder (PTSD).
    Type: Application
    Filed: August 5, 2015
    Publication date: August 2, 2018
    Inventors: Bruce Knoth, Dimitra Vergyri, Elizabeth Shriberg, Vikramjit Mitra, Mitchell McLaren, Andreas Kathol, Colleen Richey, Martin Graciarena