Patents by Inventor Emily Mower Provost

Emily Mower Provost has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11645473
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
    Type: Grant
    Filed: December 23, 2020
    Date of Patent: May 9, 2023
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
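The turn-taking idea in the abstract above lends itself to a short illustration. The sketch below is not the patented system: it assumes a simple multi-task network in which a shared encoder over per-utterance features feeds three heads, so the dialogue act and transition type are modeled concurrently with the prediction of who speaks next. All dimensions, label inventories, and names (e.g. NextSpeakerPredictor) are invented for this sketch.

```python
# Hypothetical illustration only: a multi-task network that jointly models the
# dialogue act and transition type of the current utterance and predicts the
# source of the subsequent spoken dialogue (same speaker vs. other party).
import torch
import torch.nn as nn

class NextSpeakerPredictor(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=64,
                 n_dialogue_acts=10, n_transition_types=3):
        super().__init__()
        # Shared encoder over per-utterance acoustic/lexical features.
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Auxiliary heads: dialogue act and transition type are modeled
        # concurrently with the main prediction (multi-task learning).
        self.dialogue_act_head = nn.Linear(hidden_dim, n_dialogue_acts)
        self.transition_head = nn.Linear(hidden_dim, n_transition_types)
        # Main head: source of the subsequent spoken dialogue
        # (0 = same speaker continues, 1 = the other party speaks next).
        self.next_speaker_head = nn.Linear(hidden_dim, 2)

    def forward(self, utterance_features):
        # utterance_features: (batch, time, feat_dim)
        _, last_hidden = self.encoder(utterance_features)
        h = last_hidden.squeeze(0)  # (batch, hidden_dim)
        return (self.next_speaker_head(h),
                self.dialogue_act_head(h),
                self.transition_head(h))

if __name__ == "__main__":
    model = NextSpeakerPredictor()
    dummy = torch.randn(2, 50, 40)  # two utterances, 50 frames each
    next_spk, act, trans = model(dummy)
    print(next_spk.shape, act.shape, trans.shape)  # (2, 2) (2, 10) (2, 3)
```

In this reading, "concurrently process" is interpreted as multi-task learning with a shared encoder; the abstract does not commit to that particular mechanism.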
  • Patent number: 11545173
    Abstract: A method of predicting a mood state of a user may include recording an audio sample via a microphone of a mobile computing device of the user based on the occurrence of an event, extracting a set of acoustic features from the audio sample, generating one or more emotion values by analyzing the set of acoustic features using a trained machine learning model, and determining the mood state of the user, based on the one or more emotion values. In some embodiments, the audio sample may be ambient audio recorded periodically, and/or call data of the user recorded during clinical calls or personal calls.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: January 3, 2023
    Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Emily Mower Provost, Melvin McInnis, John Henry Gideon, Katherine Anne Matton, Soheil Khorram
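As a rough illustration of the pipeline this abstract describes (acoustic features extracted from an audio sample, emotion values from a trained model, then a mood state), the sketch below uses toy acoustic statistics, a ridge regressor standing in for the trained machine learning model, and invented thresholds for the mood mapping; none of it reflects the actual features, model, or thresholds of the patent.

```python
# A minimal sketch, not the patented implementation: extract simple acoustic
# features from an audio sample, score them with a trained model to obtain
# emotion (valence/arousal) values, and threshold those values into a coarse
# mood state. Feature set, model, and thresholds are all assumptions.
import numpy as np
from sklearn.linear_model import Ridge

def extract_acoustic_features(signal: np.ndarray, sr: int) -> np.ndarray:
    """Toy stand-in for an acoustic front end (energy and zero-crossing statistics)."""
    frame = sr // 100  # 10 ms frames
    frames = signal[: len(signal) // frame * frame].reshape(-1, frame)
    energy = np.log(np.mean(frames ** 2, axis=1) + 1e-8)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    return np.array([energy.mean(), energy.std(), zcr.mean(), zcr.std()])

def mood_state(emotion_values: np.ndarray) -> str:
    """Map predicted (valence, arousal) to a coarse mood label (thresholds are illustrative)."""
    valence, arousal = emotion_values
    if valence < -0.25:
        return "low"
    if arousal > 0.25:
        return "elevated"
    return "euthymic"

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sr = 16000
    # Pretend these are features and emotion labels from previously annotated recordings.
    train_X = rng.normal(size=(100, 4))
    train_y = rng.normal(size=(100, 2))  # (valence, arousal) targets
    model = Ridge().fit(train_X, train_y)

    sample = rng.normal(size=sr * 3)  # stand-in for a 3-second recording
    feats = extract_acoustic_features(sample, sr)
    emotion = model.predict(feats.reshape(1, -1))[0]
    print("emotion values:", emotion, "->", mood_state(emotion))
```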
  • Patent number: 11072344
    Abstract: A method includes receiving acoustic features and phonetic features associated with an utterance from a driver in a vehicle, providing the acoustic features and the phonetic features to a feature fusion sub-network, receiving a feature fusion utterance representation from the feature fusion sub-network, providing one of the acoustic features or the phonetic features to a non-fusion sub-network trained using supervised learning, receiving a non-fusion utterance representation from the non-fusion sub-network, generating an intermediate utterance representation based on the feature fusion utterance representation and the non-fusion utterance representation, providing at least a portion of the intermediate utterance representation to a fully-connected sub-network trained using supervised learning, receiving a valence vector from the fully-connected sub-network, and causing a vehicle control system to perform a vehicle maneuver based on the valence vector.
    Type: Grant
    Filed: March 18, 2019
    Date of Patent: July 27, 2021
    Assignee: The Regents of the University of Michigan
    Inventors: Emily Mower Provost, Biqiao Zhang, Soheil Khorram
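A minimal sketch of the architecture the abstract walks through, with invented dimensions and layer choices: a feature-fusion sub-network over concatenated acoustic and phonetic features, a non-fusion sub-network over the acoustic features alone, concatenation into an intermediate representation, and a fully-connected sub-network producing a valence vector. This is an interpretation for illustration, not the disclosed design.

```python
# Illustrative sketch only (dimensions and layer choices are assumptions):
# fuse acoustic and phonetic utterance features in one sub-network, encode the
# acoustic features alone in a second ("non-fusion") sub-network, combine the
# two representations, and map them to a valence vector with a fully-connected
# sub-network.
import torch
import torch.nn as nn

class ValenceNet(nn.Module):
    def __init__(self, acoustic_dim=40, phonetic_dim=30, hidden=64, valence_dim=3):
        super().__init__()
        # Feature-fusion sub-network: sees acoustic + phonetic features together.
        self.fusion = nn.Sequential(
            nn.Linear(acoustic_dim + phonetic_dim, hidden), nn.ReLU())
        # Non-fusion sub-network: sees only one modality (acoustic here).
        self.non_fusion = nn.Sequential(
            nn.Linear(acoustic_dim, hidden), nn.ReLU())
        # Fully-connected sub-network: intermediate representation -> valence vector.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, valence_dim))

    def forward(self, acoustic, phonetic):
        fused = self.fusion(torch.cat([acoustic, phonetic], dim=-1))
        single = self.non_fusion(acoustic)
        intermediate = torch.cat([fused, single], dim=-1)
        return self.head(intermediate)  # e.g. scores for low/neutral/high valence

if __name__ == "__main__":
    net = ValenceNet()
    valence = net(torch.randn(4, 40), torch.randn(4, 30))
    print(valence.shape)  # (4, 3); a downstream controller could act on this vector
```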
  • Publication number: 20210110829
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
    Type: Application
    Filed: December 23, 2020
    Publication date: April 15, 2021
    Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
  • Patent number: 10957320
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: March 23, 2021
    Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
  • Publication number: 20200298873
    Abstract: The present disclosure provides a method that includes receiving acoustic features and phonetic features associated with an utterance from a driver in a vehicle with a vehicle control system, providing the plurality of acoustic features and the plurality of phonetic features to a feature fusion sub-network, receiving a feature fusion utterance representation from the feature fusion sub-network, providing one of the plurality of acoustic features or the plurality of phonetic features to a non-fusion sub-network trained using supervised learning, receiving a non-fusion utterance representation from the non-fusion sub-network, generating an intermediate utterance representation based on the feature fusion utterance representation and the non-fusion utterance representation, providing at least a portion of the intermediate utterance representation to a fully-connected sub-network trained using supervised learning, receiving a valence vector from the fully-connected sub-network, and causing the vehicle control system to perform a vehicle maneuver based on the valence vector.

    Type: Application
    Filed: March 18, 2019
    Publication date: September 24, 2020
    Inventors: Emily Mower Provost, Biqiao Zhang, Soheil Khorram
  • Publication number: 20200258616
    Abstract: Embodiments described herein relate, inter alia, to receiving one or more segments of a digital recording, wherein the one or more segments include video and/or audio data of a surgical procedure; analyzing, via a video/audio understanding model, the one or more segments to (i) characterize a plurality of independent features associated with a technical skill and/or a non-technical practice that are evident in the one or more segments and (ii) determine a higher-order pattern based upon analyzing a group of at least two of the plurality of independent features; comparing the higher-order pattern to ratings data associated with outcomes following one or more surgical procedures; and automatically generating a quality score based upon the comparing, wherein the quality score is predictive of an assessment of the technical skill and/or non-technical practice.
    Type: Application
    Filed: December 6, 2019
    Publication date: August 13, 2020
    Inventors: Donald Likosky, Steven Yule, Francis D. Pagani, Michael R. Mathias, Jason J. Corso, Roger Daglius Dias, Emily Mower Provost
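A very rough sketch of the scoring flow this abstract outlines, under loose assumptions: independent per-segment feature scores, a "higher-order pattern" modeled here as a simple pairwise interaction of two features, and a regression against historical outcome ratings that yields a quality score. The feature meanings, the chosen interaction, and the regressor are placeholders, not the disclosed video/audio understanding model.

```python
# Rough illustration under stated assumptions: segments yield scores for several
# independent features (e.g. instrument handling, team communication); a
# higher-order pattern is modeled as an interaction of two feature scores; a
# regression against historical outcome ratings produces the quality score.
import numpy as np
from sklearn.linear_model import LinearRegression

def higher_order_pattern(features: np.ndarray) -> np.ndarray:
    """Append a pairwise interaction of the first two features to each feature vector."""
    interaction = (features[:, 0] * features[:, 1]).reshape(-1, 1)
    return np.hstack([features, interaction])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Per-segment independent feature scores from prior procedures (rows = segments).
    past_features = rng.uniform(0, 1, size=(200, 3))
    # Ratings associated with outcomes following those procedures (synthetic here).
    past_ratings = past_features @ np.array([0.5, 0.3, 0.2]) + rng.normal(0, 0.05, 200)

    model = LinearRegression().fit(higher_order_pattern(past_features), past_ratings)

    new_segment = rng.uniform(0, 1, size=(1, 3))  # features from a new recording
    quality_score = model.predict(higher_order_pattern(new_segment))[0]
    print(f"predicted quality score: {quality_score:.2f}")
```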
  • Publication number: 20200243073
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
    Type: Application
    Filed: January 25, 2019
    Publication date: July 30, 2020
    Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
  • Publication number: 20200075040
    Abstract: A method of predicting a mood state of a user may include recording an audio sample via a microphone of a mobile computing device of the user based on the occurrence of an event, extracting a set of acoustic features from the audio sample, generating one or more emotion values by analyzing the set of acoustic features using a trained machine learning model, and determining the mood state of the user, based on the one or more emotion values. In some embodiments, the audio sample may be ambient audio recorded periodically, and/or call data of the user recorded during clinical calls or personal calls.
    Type: Application
    Filed: August 30, 2019
    Publication date: March 5, 2020
    Inventors: Emily Mower Provost, Melvin McInnis, John Henry Gideon, Katherine Anne Matton, Soheil Khorram
  • Patent number: 9685174
    Abstract: A system that monitors and assesses the moods of subjects with neurological disorders, like bipolar disorder, by analyzing normal conversational speech to identify speech data that is then analyzed through an automated speech data classifier. The classifier may be based on a vector, separator, hyperplane, decision boundary, or other set of rules to classify one or more mood states of a subject. The system classifier is used to assess current mood state, predicted instability, and/or a change in future mood state, in particular for subjects with bipolar disorder.
    Type: Grant
    Filed: May 1, 2015
    Date of Patent: June 20, 2017
    Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Zahi N. Karam, Satinder Singh Baveja, Melvin McInnis, Emily Mower Provost
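Since this abstract mentions classification by a vector, separating hyperplane, or decision boundary, the sketch below shows one minimal reading: a linear support-vector classifier over per-call speech feature vectors, trained on synthetic placeholder data. It illustrates the general idea only and is not the patented classifier or its feature set.

```python
# Minimal sketch, not the patented system: a linear support-vector classifier
# (a separating hyperplane, as the abstract describes) trained on per-call
# speech feature vectors to distinguish mood states. Features, labels, and
# data are synthetic placeholders.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(2)
# Feature vectors summarising speech from labelled calls (e.g. pitch, energy,
# speaking-rate statistics); labels: 0 = stable mood, 1 = mood episode.
X = np.vstack([rng.normal(0.0, 1.0, size=(80, 6)),
               rng.normal(1.2, 1.0, size=(80, 6))])
y = np.array([0] * 80 + [1] * 80)

clf = LinearSVC(C=1.0).fit(X, y)  # learns a separating hyperplane

new_call = rng.normal(1.0, 1.0, size=(1, 6))
print("predicted mood state:", clf.predict(new_call)[0])
print("distance from decision boundary:", clf.decision_function(new_call)[0])
```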