Patents by Inventor Elizabeth Shriberg

Elizabeth Shriberg has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240170109
    Abstract: The present disclosure provides systems and methods for assessing a mental state of a subject in a single session or over multiple different sessions, using, for example, an automated module to present and/or formulate at least one query based in part on one or more target mental states to be assessed. The query may be configured to elicit at least one response from the subject and may be transmitted to the subject in an audio, visual, and/or textual format. Data comprising the response from the subject can be received. The data can be processed using one or more individual, joint, or fused models. One or more assessments of the mental state associated with the subject can be generated for the single session, for each of the multiple different sessions, or upon completion of one or more sessions of the multiple different sessions.
    Type: Application
    Filed: February 1, 2024
    Publication date: May 23, 2024
    Inventors: Elizabeth Shriberg, Michael Aratow, Mainul Islam, Amir Hossein Harati, Tomasz Rutowski, David Lin, Yang Lu, Farshid Haque, Robert D. Rogers
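    Illustrative sketch: the abstract describes scoring responses with individual, joint, or fused models and reporting assessments per session or across sessions. The minimal Python sketch below shows one plausible late-fusion reading; the weighted-average rule and the names fuse_scores and aggregate_sessions are assumptions, not the patent's method.

      from statistics import fmean

      def fuse_scores(modality_scores: dict[str, float],
                      weights: dict[str, float] | None = None) -> float:
          """Late-fuse per-modality scores (e.g. audio, text) into one session score."""
          if weights is None:
              weights = {m: 1.0 for m in modality_scores}
          total = sum(weights[m] for m in modality_scores)
          return sum(s * weights[m] for m, s in modality_scores.items()) / total

      def aggregate_sessions(session_scores: list[float]) -> dict[str, float]:
          """Summarize fused assessments over multiple sessions."""
          return {"latest": session_scores[-1], "mean": fmean(session_scores)}

      # One fused assessment per session, then a cross-session summary.
      sessions = [fuse_scores({"audio": 0.62, "text": 0.55}),
                  fuse_scores({"audio": 0.70, "text": 0.66})]
      print(aggregate_sessions(sessions))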
  • Publication number: 20220270716
    Abstract: The present disclosure provides systems, methods, and computer program products for assessing the reliability of health survey results. An example can comprise: (a) determining that each of a predetermined pair of events is present in response data generated by a subject in response to prompts presented to the subject in administration of a health survey, wherein the pair of events includes a conditioning event and a conditioned event; (b) determining a probability that the conditioned event is present in the response data given the presence of the conditioning event in the response data; (c) repeating steps (a) and (b) for each of two or more predetermined pairs of events; and (d) combining the probabilities to form a confidence vector that represents a measure of confidence in the reliability of the subject in generating the response data.
    Type: Application
    Filed: October 4, 2021
    Publication date: August 25, 2022
    Inventors: Elizabeth Shriberg, Yang Lu, Amir Harati
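    Illustrative sketch: steps (a)-(d) of the abstract amount to estimating P(conditioned event | conditioning event) for each predetermined pair and stacking the estimates into a vector. A minimal Python sketch of that reading; representing responses as sets of detected events and the add-alpha smoothing are assumptions.

      def conditional_prob(responses: list[set[str]], conditioning: str,
                           conditioned: str, alpha: float = 1.0) -> float:
          """Estimate P(conditioned | conditioning) over response records,
          with add-alpha smoothing to avoid division by zero."""
          given = [r for r in responses if conditioning in r]
          hits = sum(1 for r in given if conditioned in r)
          return (hits + alpha) / (len(given) + 2 * alpha)

      def confidence_vector(responses, event_pairs):
          """Steps (a)-(c): one conditional probability per predetermined pair."""
          return [conditional_prob(responses, a, b) for a, b in event_pairs]

      # Each record holds the events detected in one set of survey responses.
      records = [{"reports_insomnia", "reports_fatigue"},
                 {"reports_insomnia"},
                 {"reports_fatigue", "reports_low_mood"}]
      pairs = [("reports_insomnia", "reports_fatigue"),
               ("reports_low_mood", "reports_fatigue")]
      print(confidence_vector(records, pairs))  # step (d) would combine these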
  • Publication number: 20210081056
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: December 1, 2020
    Publication date: March 18, 2021
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
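    Illustrative sketch: the abstract names a pipeline from multimodal sensory input to semantic information, then a current intent via a context-specific framework and a current input state via behavioral models. The toy Python skeleton below mirrors that flow; the classes, the keyword-to-intent framework, and the repeat-detection rule are all invented for illustration.

      import re
      from dataclasses import dataclass, field

      @dataclass
      class Turn:
          """One multimodal input: at least two kinds of sensory information."""
          speech_text: str
          gaze_target: str

      @dataclass
      class Assistant:
          history: list = field(default_factory=list)  # stand-in for behavioral models

          def semantic_info(self, turn: Turn) -> dict:
              return {"words": re.findall(r"[a-z]+", turn.speech_text.lower()),
                      "gaze": turn.gaze_target}

          def current_intent(self, semantics: dict, framework: dict) -> str:
              # Context-specific framework: keyword -> intent for one domain.
              return next((framework[w] for w in semantics["words"] if w in framework),
                          "unknown")

          def current_input_state(self, semantics: dict) -> str:
              # Interpret this turn against previously provided semantic information.
              repeated = bool(self.history) and self.history[-1] == semantics["words"]
              self.history.append(semantics["words"])
              return "repeated-request" if repeated else "neutral"

      banking = {"balance": "check_balance", "transfer": "make_transfer"}
      vpa = Assistant()
      sem = vpa.semantic_info(Turn("What is my balance?", gaze_target="screen"))
      print(vpa.current_intent(sem, banking), vpa.current_input_state(sem))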
  • Patent number: 10884503
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Grant
    Filed: October 24, 2016
    Date of Patent: January 5, 2021
    Assignee: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Patent number: 10726846
    Abstract: An electronic device for providing health information or assistance includes an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof. A communication module sends information related to the at least one user and his or her environment, including the sound, non-verbal, and verbal signals, to a remote device; the remote device analyzes a condition of the user and communicates condition signals back to the electronic device. A processing module receives the condition signals and causes the electronic device to engage in either a passive monitoring mode or an active engagement and monitoring mode, the latter including verbal communication with the user. An output engages the at least one user in verbal communication.
    Type: Grant
    Filed: June 3, 2017
    Date of Patent: July 28, 2020
    Assignee: SRI International
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
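    Illustrative sketch: the central control decision in the abstract is switching between a passive monitoring mode and an active engagement and monitoring mode based on condition signals from the remote device. A minimal Python sketch; the signal vocabulary and the spoken prompt are invented.

      from enum import Enum

      class Mode(Enum):
          PASSIVE = "passive monitoring"
          ACTIVE = "active engagement and monitoring"

      ALERT_SIGNALS = {"distress", "fall_detected"}  # assumed signal vocabulary

      def select_mode(condition_signal: str) -> Mode:
          """Map a condition signal from the remote device to an operating mode."""
          return Mode.ACTIVE if condition_signal in ALERT_SIGNALS else Mode.PASSIVE

      def handle(condition_signal: str) -> str:
          if select_mode(condition_signal) is Mode.ACTIVE:
              return "speak: 'Are you all right?'"  # verbal engagement with the user
          return "listen only"                      # passive monitoring

      for sig in ("normal", "fall_detected"):
          print(sig, "->", handle(sig))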
  • Patent number: 10706873
    Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
    Type: Grant
    Filed: June 10, 2016
    Date of Patent: July 7, 2020
    Assignee: SRI International
    Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
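    Illustrative sketch: the abstract covers analyzing an audio input and predicting a speaker state. The toy Python version below uses two stand-in acoustic features and an assumed linear scorer; real systems apply trained models to much richer feature sets.

      import numpy as np

      def speaker_state_features(samples: np.ndarray) -> np.ndarray:
          """Two toy acoustic features: log energy and zero-crossing rate."""
          energy = np.log(np.mean(samples ** 2) + 1e-10)
          zcr = np.mean(np.abs(np.diff(np.sign(samples)))) / 2
          return np.array([energy, zcr])

      def predict_state(features: np.ndarray, w: np.ndarray, b: float) -> str:
          """Linear scorer standing in for a trained model; weights are assumed."""
          return "stressed" if features @ w + b > 0 else "calm"

      audio = np.random.default_rng(0).standard_normal(16000) * 0.1  # 1 s stand-in signal
      print(predict_state(speaker_state_features(audio), w=np.array([0.5, 2.0]), b=-1.0))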
  • Patent number: 10679063
    Abstract: A computing system for recognizing salient events depicted in a video utilizes learning algorithms to detect audio and visual features of the video. The computing system identifies one or more salient events depicted in the video based on the audio and visual features.
    Type: Grant
    Filed: September 4, 2015
    Date of Patent: June 9, 2020
    Assignee: SRI International
    Inventors: Hui Cheng, Ajay Divakaran, Elizabeth Shriberg, Harpreet Singh Sawhney, Jingen Liu, Ishani Chakraborty, Omar Javed, David Chisolm, Behjat Siddiquie, Steven S. Weiner
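    Illustrative sketch: one simple reading of the abstract is fusing per-frame audio and visual saliency scores and thresholding the result to find salient segments. A minimal Python sketch; the mean fusion and fixed threshold stand in for the patented learning algorithms.

      def salient_segments(audio_scores, visual_scores, threshold=0.7):
          """Flag frames whose fused audio+visual score exceeds a threshold,
          then merge consecutive flagged frames into (start, end) segments."""
          fused = [(a + v) / 2 for a, v in zip(audio_scores, visual_scores)]
          segments, start = [], None
          for i, s in enumerate(fused + [0.0]):  # sentinel closes a trailing run
              if s >= threshold and start is None:
                  start = i
              elif s < threshold and start is not None:
                  segments.append((start, i - 1))
                  start = None
          return segments

      print(salient_segments([0.2, 0.8, 0.9, 0.3], [0.4, 0.9, 0.8, 0.2]))  # [(1, 2)]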
  • Patent number: 10529321
    Abstract: Prosodic features are used for discriminating computer-directed speech from human-directed speech. Statistics and models describing energy/intensity patterns over time, speech/pause distributions, pitch patterns, vocal effort features, and speech segment duration patterns may be used for prosodic modeling. The prosodic features for at least a portion of an utterance are monitored over a period of time to determine a shape associated with the utterance. A score may be determined to assist in classifying the current utterance as human-directed or computer-directed without relying on knowledge of the utterances preceding or following it. Outside data may be used for training lexical addressee detection systems for the human-human-computer (H-H-C) scenario. H-C training data can be obtained from a single-user H-C collection, and H-H speech can be modeled using general conversational speech. H-C and H-H language models may also be adapted using interpolation with small amounts of matched H-H-C data.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: January 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elizabeth Shriberg, Andreas Stolcke, Dilek Hakkani-Tur, Larry Heck, Heeyoung Lee
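    Illustrative sketch: a toy Python version of scoring an utterance as human-directed or computer-directed from utterance-level prosodic statistics (pitch mean and range, energy trend). The feature set, weights, and decision threshold are assumptions standing in for models trained on matched H-C/H-H data.

      import numpy as np

      def prosodic_stats(pitch_hz: np.ndarray, energy: np.ndarray) -> np.ndarray:
          """Utterance-level statistics of the kind the abstract names:
          pitch mean, pitch range, and the energy trend ("shape") over time."""
          slope = np.polyfit(np.arange(len(energy)), energy, 1)[0]
          return np.array([pitch_hz.mean(), np.ptp(pitch_hz), slope])

      def addressee_score(stats: np.ndarray, w: np.ndarray) -> float:
          """Higher score = more computer-directed (weights are assumed)."""
          return float(stats @ w)

      pitch = np.array([180.0, 200.0, 220.0, 210.0, 190.0])
      energy = np.array([0.5, 0.6, 0.7, 0.65, 0.6])
      score = addressee_score(prosodic_stats(pitch, energy), w=np.array([0.01, 0.02, 5.0]))
      print("computer-directed" if score > 2.5 else "human-directed")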
  • Patent number: 10478111
    Abstract: A computer-implemented method can include a speech collection module collecting a speech pattern from a patient, a speech feature computation module computing at least one speech feature from the collected speech pattern, a mental health determination module determining a state of mind of the patient based at least in part on the at least one computed speech feature, and an output module providing an indication of a possible diagnosis of a condition such as depression or Post-Traumatic Stress Disorder (PTSD).
    Type: Grant
    Filed: August 5, 2015
    Date of Patent: November 19, 2019
    Assignee: SRI International
    Inventors: Bruce Knoth, Dimitra Vergyri, Elizabeth Shriberg, Vikramjit Mitra, Mitchell McLaren, Andreas Kathol, Colleen Richey, Martin Graciarena
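    Illustrative sketch: the abstract names four modules (speech collection, feature computation, state determination, output). The toy Python pipeline below mirrors those modules; the features, the linear risk score, and the threshold are invented, and the output is framed as an indication rather than a diagnosis.

      import numpy as np

      def collect_speech() -> np.ndarray:
          """Speech collection module stand-in: one 'recorded' utterance."""
          return np.random.default_rng(1).standard_normal(16000) * 0.05

      def compute_features(speech: np.ndarray) -> dict:
          """Speech feature computation module: two toy acoustic features."""
          return {"log_energy": float(np.log(np.mean(speech ** 2) + 1e-10)),
                  "variability": float(np.std(np.abs(speech)))}

      def determine_state(features: dict) -> float:
          """Mental state determination module: assumed linear risk score."""
          return -0.3 * features["log_energy"] + 10.0 * features["variability"]

      def output_indication(score: float, threshold: float = 2.0) -> str:
          """Output module: an indication to follow up, not a clinical diagnosis."""
          return ("elevated-risk indication; refer for clinical assessment"
                  if score > threshold else "no indication")

      print(output_indication(determine_state(compute_features(collect_speech()))))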
  • Patent number: 10303768
    Abstract: Technologies to detect persuasive multimedia content by using affective and semantic concepts extracted from the audio-visual content as well as the sentiment of associated comments are disclosed. The multimedia content is analyzed and compared with a persuasiveness model.
    Type: Grant
    Filed: October 2, 2015
    Date of Patent: May 28, 2019
    Assignee: SRI International
    Inventors: Ajay Divakaran, Behjat Siddiquie, David Chisholm, Elizabeth Shriberg
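    Illustrative sketch: the two evidence sources in the abstract are concepts extracted from the audio-visual content and the sentiment of associated comments, compared against a persuasiveness model. A minimal Python combination rule; the concept names, model weights, and scoring are assumptions.

      def persuasiveness_score(concept_scores: dict[str, float],
                               comment_sentiments: list[float],
                               model_weights: dict[str, float],
                               sentiment_weight: float = 0.5) -> float:
          """Combine content-concept evidence with mean comment sentiment."""
          content = sum(model_weights.get(c, 0.0) * s for c, s in concept_scores.items())
          sentiment = sum(comment_sentiments) / len(comment_sentiments)
          return content + sentiment_weight * abs(sentiment)  # strong feelings either way

      concepts = {"urgency": 0.8, "authority": 0.6, "scenery": 0.9}
      weights = {"urgency": 1.0, "authority": 0.7}  # assumed persuasiveness model
      print(persuasiveness_score(concepts, [-0.9, -0.7, 0.8], weights))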
  • Publication number: 20190108841
    Abstract: An electronic device for providing health information or assistance includes an input configured to receive at least one type of signal selected from sound signals, verbal signals, non-verbal signals, and combinations thereof. A communication module sends information related to the at least one user and his or her environment, including the sound, non-verbal, and verbal signals, to a remote device; the remote device analyzes a condition of the user and communicates condition signals back to the electronic device. A processing module receives the condition signals and causes the electronic device to engage in either a passive monitoring mode or an active engagement and monitoring mode, the latter including verbal communication with the user. An output engages the at least one user in verbal communication.
    Type: Application
    Filed: June 3, 2017
    Publication date: April 11, 2019
    Inventors: Dimitra Vergyri, Diego Castan Lavilla, Girish Acharya, David Sahner, Elizabeth Shriberg, Joseph B. Rogers, Bruce H. Knoth
  • Publication number: 20190051293
    Abstract: Prosodic features are used for discriminating computer-directed speech from human-directed speech. Statistics and models describing energy/intensity patterns over time, speech/pause distributions, pitch patterns, vocal effort features, and speech segment duration patterns may be used for prosodic modeling. The prosodic features for at least a portion of an utterance are monitored over a period of time to determine a shape associated with the utterance. A score may be determined to assist in classifying the current utterance as human-directed or computer-directed without relying on knowledge of the utterances preceding or following it. Outside data may be used for training lexical addressee detection systems for the human-human-computer (H-H-C) scenario. H-C training data can be obtained from a single-user H-C collection, and H-H speech can be modeled using general conversational speech. H-C and H-H language models may also be adapted using interpolation with small amounts of matched H-H-C data.
    Type: Application
    Filed: August 7, 2017
    Publication date: February 14, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Elizabeth Shriberg, Andreas Stolcke, Dilek Hakkani-Tur, Larry Heck, Heeyoung Lee
  • Publication number: 20180214061
    Abstract: A computer-implemented method can include a speech collection module collecting a speech pattern from a patient, a speech feature computation module computing at least one speech feature from the collected speech pattern, a mental health determination module determining a state of mind of the patient based at least in part on the at least one computed speech feature, and an output module providing an indication of a possible diagnosis of a condition such as depression or Post-Traumatic Stress Disorder (PTSD).
    Type: Application
    Filed: August 5, 2015
    Publication date: August 2, 2018
    Inventors: Bruce Knoth, Dimitra Vergyri, Elizabeth Shriberg, Vikramjit Mitra, Mitchell McLaren, Andreas Kathol, Colleen Richey, Martin Graciarena
  • Patent number: 9761247
    Abstract: Prosodic features are used for discriminating computer-directed speech from human-directed speech. Statistics and models describing energy/intensity patterns over time, speech/pause distributions, pitch patterns, vocal effort features, and speech segment duration patterns may be used for prosodic modeling. The prosodic features for at least a portion of an utterance are monitored over a period of time to determine a shape associated with the utterance. A score may be determined to assist in classifying the current utterance as human-directed or computer-directed without relying on knowledge of the utterances preceding or following it. Outside data may be used for training lexical addressee detection systems for the human-human-computer (H-H-C) scenario. H-C training data can be obtained from a single-user H-C collection, and H-H speech can be modeled using general conversational speech. H-C and H-H language models may also be adapted using interpolation with small amounts of matched H-H-C data.
    Type: Grant
    Filed: January 31, 2013
    Date of Patent: September 12, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Elizabeth Shriberg, Andreas Stolcke, Dilek Hakkani-Tur, Larry Heck, Heeyoung Lee
  • Publication number: 20170160813
    Abstract: Methods, computing devices, and computer-program products are provided for implementing a virtual personal assistant. In various implementations, a virtual personal assistant can be configured to receive sensory input, including at least two different types of information. The virtual personal assistant can further be configured to determine semantic information from the sensory input, and to identify a context-specific framework. The virtual personal assistant can further be configured to determine a current intent. Determining the current intent can include using the semantic information and the context-specific framework. The virtual personal assistant can further be configured to determine a current input state. Determining the current input state can include using the semantic information and one or more behavioral models. The behavioral models can include one or more interpretations of previously-provided semantic information.
    Type: Application
    Filed: October 24, 2016
    Publication date: June 8, 2017
    Applicant: SRI International
    Inventors: Ajay Divakaran, Amir Tamrakar, Girish Acharya, William Mark, Greg Ho, Jihua Huang, David Salter, Edgar Kalns, Michael Wessel, Min Yin, James Carpenter, Brent Mombourquette, Kenneth Nitz, Elizabeth Shriberg, Eric Law, Michael Frandsen, Hyong-Gyun Kim, Cory Albright, Andreas Tsiartas
  • Publication number: 20170084295
    Abstract: Disclosed are machine learning-based technologies that analyze an audio input and provide speaker state predictions in response to the audio input. The speaker state predictions can be selected and customized for each of a variety of different applications.
    Type: Application
    Filed: June 10, 2016
    Publication date: March 23, 2017
    Inventors: Andreas Tsiartas, Elizabeth Shriberg, Cory Albright, Michael W. Frandsen
  • Publication number: 20170061316
    Abstract: The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
    Type: Application
    Filed: November 16, 2016
    Publication date: March 2, 2017
    Inventors: Gokhan Tur, Horacio E. Franco, Elizabeth Shriberg, Gregory K. Myers, William S. Mark, Norman D. Winarsky, Andreas Stolcke, Bart Peintner, Michael J. Wolverton, Luciana Ferrer, Martin Graciarena, Neil Yorke-Smith, Harry Bratt
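    Illustrative sketch: the loop in the abstract is sense, infer, tailor. The toy Python version below shows one pass; the sensor fields, inference rules, and tailoring choices are invented for illustration.

      def infer_user_traits(sensor_data: dict) -> dict:
          """Make inferences from multimodal sensor readings (rules are invented)."""
          return {"speaking_fast": sensor_data["speech_rate_wps"] > 3.0,
                  "looking_away": sensor_data["gaze"] != "device"}

      def tailor_output(message: str, inferences: dict) -> str:
          """Adapt one reply to the inferred user state."""
          if inferences["speaking_fast"]:
              message = message.split(".")[0] + "."  # hurried user: keep it short
          if inferences["looking_away"]:
              message = "(spoken aloud) " + message  # eyes elsewhere: use audio
          return message

      sensors = {"speech_rate_wps": 3.5, "gaze": "road"}
      print(tailor_output("Turn left in 200 m. Then continue for 2 km.",
                          infer_user_traits(sensors)))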
  • Patent number: 9564134
    Abstract: The present invention relates to a method and apparatus for speaker-calibrated speaker detection. One embodiment of a method for generating a speaker model for use in detecting a speaker of interest includes identifying one or more speech features that best distinguish the speaker of interest from a plurality of impostor speakers, and then incorporating those speech features into the speaker model.
    Type: Grant
    Filed: September 28, 2015
    Date of Patent: February 7, 2017
    Assignee: SRI International
    Inventors: Elizabeth Shriberg, Luciana Ferrer, Andreas Stolcke, Martin Graciarena, Nicolas Scheffer
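    Illustrative sketch: choosing the speech features that best separate the speaker of interest from impostor speakers can be read as feature ranking. A minimal Python sketch using a Fisher-style separation score; the score, the frame representation, and the synthetic data are assumptions, not the patent's method.

      import numpy as np

      def most_discriminative_features(target: np.ndarray, impostors: np.ndarray,
                                       k: int = 2) -> np.ndarray:
          """Rank features by between-speaker separation over within-speaker
          spread, then keep the indices of the top k."""
          num = (target.mean(axis=0) - impostors.mean(axis=0)) ** 2
          den = target.var(axis=0) + impostors.var(axis=0) + 1e-9
          return np.argsort(num / den)[::-1][:k]

      rng = np.random.default_rng(2)
      target = rng.normal([5.0, 0.0, 1.0], 0.5, size=(50, 3))   # target-speaker frames
      impost = rng.normal([0.0, 0.0, 1.1], 0.5, size=(200, 3))  # impostor frames
      print("build speaker model on features:", most_discriminative_features(target, impost))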
  • Patent number: 9501743
    Abstract: The present invention relates to a method and apparatus for tailoring the output of an intelligent automated assistant. One embodiment of a method for conducting an interaction with a human user includes collecting data about the user using a multimodal set of sensors positioned in a vicinity of the user, making a set of inferences about the user in accordance with the data, and tailoring an output to be delivered to the user in accordance with the set of inferences.
    Type: Grant
    Filed: December 2, 2015
    Date of Patent: November 22, 2016
    Assignee: SRI International
    Inventors: Gokhan Tur, Horacio E. Franco, Elizabeth Shriberg, Gregory K. Myers, William S. Mark, Norman D. Winarsky, Andreas Stolcke, Bart Peintner, Michael J. Wolverton, Luciana Ferrer, Martin Graciarena, Neil Yorke-Smith, Harry Bratt
  • Publication number: 20160328384
    Abstract: Technologies to detect persuasive multimedia content by using affective and semantic concepts extracted from the audio-visual content as well as the sentiment of associated comments are disclosed. The multimedia content is analyzed and compared with a persuasiveness model.
    Type: Application
    Filed: October 2, 2015
    Publication date: November 10, 2016
    Inventors: Ajay Divakaran, Behjat Siddiquie, David Chisholm, Elizabeth Shriberg