Patents by Inventor Emily Mower Provost
Emily Mower Provost has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11645473
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
Type: Grant
Filed: December 23, 2020
Date of Patent: May 9, 2023
Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
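The abstract describes a network that jointly handles the transition type and dialogue act of the current utterance while predicting who produces the next one. The sketch below is one plausible reading of that idea as a multi-task PyTorch model; the layer sizes, label sets, and feature inputs are assumptions for illustration, not details taken from the patent.

```python
# Hypothetical sketch of a multi-task model that jointly ("concurrently")
# processes transition type and dialogue act to predict who speaks next.
# Dimensions and label sets are illustrative, not taken from the patent.
import torch
import torch.nn as nn

class NextSpeakerPredictor(nn.Module):
    def __init__(self, feat_dim=40, hidden=128,
                 n_transition_types=4, n_dialogue_acts=10, n_sources=2):
        super().__init__()
        # Shared encoder over per-frame features of the received utterance.
        self.encoder = nn.GRU(feat_dim, hidden, batch_first=True)
        # Auxiliary heads: transition type and dialogue act of the current utterance.
        self.transition_head = nn.Linear(hidden, n_transition_types)
        self.dialogue_act_head = nn.Linear(hidden, n_dialogue_acts)
        # Main head: source (e.g. user vs. agent) of the subsequent spoken dialogue.
        self.source_head = nn.Linear(hidden, n_sources)

    def forward(self, utterance):               # utterance: (batch, time, feat_dim)
        _, h = self.encoder(utterance)          # h: (1, batch, hidden)
        h = h.squeeze(0)
        return (self.transition_head(h),
                self.dialogue_act_head(h),
                self.source_head(h))

model = NextSpeakerPredictor()
logits_transition, logits_act, logits_source = model(torch.randn(8, 100, 40))
next_speaker = logits_source.argmax(dim=-1)     # predicted source of the next turn
```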
-
Patent number: 11545173
Abstract: A method of predicting a mood state of a user may include recording an audio sample via a microphone of a mobile computing device of the user based on the occurrence of an event, extracting a set of acoustic features from the audio sample, generating one or more emotion values by analyzing the set of acoustic features using a trained machine learning model, and determining the mood state of the user, based on the one or more emotion values. In some embodiments, the audio sample may be ambient audio recorded periodically, and/or call data of the user recorded during clinical calls or personal calls.
Type: Grant
Filed: August 30, 2019
Date of Patent: January 3, 2023
Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Emily Mower Provost, Melvin McInnis, John Henry Gideon, Katherine Anne Matton, Soheil Khorram
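The claimed method is a pipeline: record an audio sample, extract acoustic features, score emotion values with a trained model, and map those values to a mood state. A minimal sketch of such a pipeline follows, assuming librosa for feature extraction and a previously trained scikit-learn-style regressor at a hypothetical path; the specific features and thresholds are placeholders, not the patented design.

```python
# Illustrative pipeline: audio sample -> acoustic features -> emotion values -> mood state.
# librosa/joblib, the MFCC summary features, and the thresholds are assumptions for the sketch.
import numpy as np
import librosa
import joblib

def extract_acoustic_features(wav_path):
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)
    # Summarize frame-level features into a fixed-length utterance vector.
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def predict_mood_state(wav_path, model_path="emotion_model.joblib"):
    features = extract_acoustic_features(wav_path).reshape(1, -1)
    emotion_model = joblib.load(model_path)          # previously trained emotion regressor
    valence, activation = emotion_model.predict(features)[0]
    # Toy mapping from emotion values to a coarse mood state.
    if valence < -0.5:
        return "depressed"
    if valence > 0.5 and activation > 0.5:
        return "elevated"
    return "euthymic"
```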
-
Patent number: 11072344
Abstract: A method includes receiving acoustic features and phonetic features associated with an utterance from a driver in a vehicle, providing the acoustic features and the phonetic features to a feature fusion sub-network, receiving a feature fusion utterance representation from the feature fusion sub-network, providing one of the acoustic features or the phonetic features to a non-fusion sub-network trained using supervised learning, receiving a non-fusion utterance representation from the non-fusion sub-network, generating an intermediate utterance representation based on the feature fusion utterance representation and the non-fusion utterance representation, providing at least a portion of the intermediate utterance representation to a fully-connected sub-network trained using supervised learning, receiving a valence vector from the fully-connected sub-network, and causing a vehicle control system to perform a vehicle maneuver based on the valence vector.
Type: Grant
Filed: March 18, 2019
Date of Patent: July 27, 2021
Assignee: The Regents of the University of Michigan
Inventors: Emily Mower Provost, Biqiao Zhang, Soheil Khorram
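The abstract lays out a specific wiring: a feature fusion sub-network over acoustic plus phonetic features, a non-fusion sub-network over a single modality, concatenation into an intermediate utterance representation, and a fully-connected sub-network that outputs a valence vector used to trigger a vehicle maneuver. Below is a hedged PyTorch sketch of that structure; the dimensions, number of valence classes, and maneuver rule are invented for the example rather than taken from the patent.

```python
# Sketch of the fusion/non-fusion architecture described in the abstract.
# All sizes, the three-way valence vector, and the maneuver rule are assumptions.
import torch
import torch.nn as nn

class ValenceNet(nn.Module):
    def __init__(self, acoustic_dim=40, phonetic_dim=30, hidden=64, n_valence=3):
        super().__init__()
        # Fusion sub-network over the concatenated acoustic + phonetic features.
        self.fusion = nn.Sequential(nn.Linear(acoustic_dim + phonetic_dim, hidden), nn.ReLU())
        # Non-fusion sub-network over a single modality (acoustic features here).
        self.non_fusion = nn.Sequential(nn.Linear(acoustic_dim, hidden), nn.ReLU())
        # Fully-connected sub-network producing the valence vector.
        self.classifier = nn.Sequential(nn.Linear(2 * hidden, hidden), nn.ReLU(),
                                        nn.Linear(hidden, n_valence))

    def forward(self, acoustic, phonetic):
        fused = self.fusion(torch.cat([acoustic, phonetic], dim=-1))
        single = self.non_fusion(acoustic)                  # non-fusion representation
        intermediate = torch.cat([fused, single], dim=-1)   # intermediate utterance representation
        return self.classifier(intermediate)                # valence vector

valence = ValenceNet()(torch.randn(1, 40), torch.randn(1, 30))
if valence.argmax(dim=-1).item() == 0:                      # hypothetical "low valence" class
    print("request vehicle control system intervention")    # e.g. reduce speed
```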
-
Publication number: 20210110829
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
Type: Application
Filed: December 23, 2020
Publication date: April 15, 2021
Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
-
Patent number: 10957320
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
Type: Grant
Filed: January 25, 2019
Date of Patent: March 23, 2021
Assignees: INTERNATIONAL BUSINESS MACHINES CORPORATION, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
-
Publication number: 20200298873
Abstract: The present disclosure provides a method that includes receiving acoustic features and phonetic features associated with an utterance from a driver in a vehicle with a vehicle control system, providing the plurality of acoustic features and the plurality of phonetic features to a feature fusion sub-network, receiving a feature fusion utterance representation from the feature fusion sub-network, providing one of the plurality of acoustic features or the plurality of phonetic features to a non-fusion sub-network trained using supervised learning, receiving a non-fusion utterance representation from the non-fusion sub-network, generating an intermediate utterance representation based on the feature fusion utterance representation and the non-fusion utterance representation, providing at least a portion of the intermediate utterance representation to a fully-connected sub-network trained using supervised learning, receiving a valence vector from the fully-connected sub-network, and causing the vehicle control system to perform a vehicle maneuver based on the valence vector.
Type: Application
Filed: March 18, 2019
Publication date: September 24, 2020
Inventors: Emily Mower Provost, Biqiao Zhang, Soheil Khorram
-
Publication number: 20200258616
Abstract: Embodiments described herein relate, inter alia, to receiving one or more segments of a digital recording, wherein the one or more segments include video and/or audio data of a surgical procedure; analyzing, via a video/audio understanding model, the one or more segments to (i) characterize a plurality of independent features associated with a technical skill and/or a non-technical practice that are evident in the one or more segments and (ii) determine a higher-order pattern based upon analyzing a group of at least two of the plurality of independent features; comparing the higher-order pattern to ratings data associated with outcomes following one or more surgical procedures; and automatically generating a quality score based upon the comparing, wherein the quality score is predictive of an assessment of the technical skill and/or non-technical practice.
Type: Application
Filed: December 6, 2019
Publication date: August 13, 2020
Inventors: Donald Likosky, Steven Yule, Francis D. Pagani, Michael R. Mathias, Jason J. Corso, Roger Daglius Dias, Emily Mower Provost
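The described system extracts independent features from segments of a procedure recording, combines groups of them into a higher-order pattern, compares that pattern against ratings data tied to prior surgical outcomes, and emits a quality score. The sketch below shows one simplistic way such a comparison could be computed; the pattern definition, the nearest-neighbour comparison, and the inputs are assumptions, not the disclosed model.

```python
# Hypothetical scoring flow: per-segment features (assumed to come from a video/audio
# understanding model) -> higher-order pattern -> comparison with rated procedures -> score.
import numpy as np

def higher_order_pattern(segment_features):
    # segment_features: (n_segments, n_features). The "pattern" here is just the feature
    # means plus pairwise feature correlations, as a stand-in for the patented notion.
    mean = segment_features.mean(axis=0)
    corr = np.corrcoef(segment_features, rowvar=False)
    return np.concatenate([mean, corr[np.triu_indices_from(corr, k=1)]])

def quality_score(segment_features, rated_patterns, rated_scores, k=3):
    # Compare this case's pattern with patterns from previously rated procedures and
    # average the ratings of the nearest neighbours as a predicted quality score.
    pattern = higher_order_pattern(segment_features)
    dists = np.linalg.norm(rated_patterns - pattern, axis=1)
    nearest = np.argsort(dists)[:k]
    return float(np.mean(rated_scores[nearest]))
```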
-
Publication number: 20200243073
Abstract: Systems, computer-implemented methods, and computer program products that can facilitate predicting a source of a subsequent spoken dialogue are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise a speech receiving component that can receive a spoken dialogue from a first entity. The computer executable components can further comprise a speech processing component that can employ a network that can concurrently process a transition type and a dialogue act of the spoken dialogue to predict a source of a subsequent spoken dialogue.
Type: Application
Filed: January 25, 2019
Publication date: July 30, 2020
Inventors: Lazaros Polymenakos, Dimitrios B. Dimitriadis, Zakaria Aldeneh, Emily Mower Provost
-
Publication number: 20200075040
Abstract: A method of predicting a mood state of a user may include recording an audio sample via a microphone of a mobile computing device of the user based on the occurrence of an event, extracting a set of acoustic features from the audio sample, generating one or more emotion values by analyzing the set of acoustic features using a trained machine learning model, and determining the mood state of the user, based on the one or more emotion values. In some embodiments, the audio sample may be ambient audio recorded periodically, and/or call data of the user recorded during clinical calls or personal calls.
Type: Application
Filed: August 30, 2019
Publication date: March 5, 2020
Inventors: Emily Mower Provost, Melvin McInnis, John Henry Gideon, Katherine Anne Matton, Soheil Khorram
-
Patent number: 9685174
Abstract: A system that monitors and assesses the moods of subjects with neurological disorders, like bipolar disorder, by analyzing normal conversational speech to identify speech data that is then analyzed through an automated speech data classifier. The classifier may be based on a vector, separator, hyperplane, decision boundary, or other set of rules to classify one or more mood states of a subject. The system classifier is used to assess current mood state, predicted instability, and/or a change in future mood state, in particular for subjects with bipolar disorder.
Type: Grant
Filed: May 1, 2015
Date of Patent: June 20, 2017
Assignee: THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Zahi N. Karam, Satinder Singh Baveja, Melvin McInnis, Emily Mower Provost
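The classifier in this patent is characterized as a vector, separator, hyperplane, or decision boundary over speech-derived data. A linear support-vector machine is a natural stand-in for that description; the sketch below trains one on placeholder acoustic features and mood labels purely to illustrate the idea, and is not the patented system.

```python
# Minimal sketch: a hyperplane-based classifier (linear SVM) over utterance-level
# acoustic features, predicting coarse mood states. Data, features, and labels are
# random placeholders standing in for real speech data and clinician ratings.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 60))                 # utterance-level acoustic feature vectors
y = rng.integers(0, 3, size=300)               # e.g. 0 = euthymic, 1 = manic, 2 = depressed

classifier = make_pipeline(StandardScaler(), SVC(kernel="linear"))
classifier.fit(X, y)

# Assess a new call: classify its utterances and report the dominant predicted mood state.
new_call_utterances = rng.normal(size=(20, 60))
predicted = classifier.predict(new_call_utterances)
print("estimated mood state:", np.bincount(predicted).argmax())
```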