Patents by Inventor Gus Cooney
Gus Cooney has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240062547
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
Type: Application
Filed: October 31, 2023
Publication date: February 22, 2024
Applicant: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
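The workflow this abstract describes — segmenting a recording into per-speaker utterances and storing label data with each segment — could be sketched roughly as follows. This is a minimal illustration only, not the patented implementation; the data shapes (diarized `(speaker, start, end)` turns, a label dict keyed by segment index) are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    speaker: str
    start: float  # seconds into the recording
    end: float
    labels: dict = field(default_factory=dict)  # conversation features

def segment_by_speaker(diarized_turns):
    """Turn diarized (speaker, start, end) tuples into utterance segments."""
    return [Utterance(s, a, b) for s, a, b in diarized_turns]

def attach_labels(utterances, label_data):
    """Store label data (segment index -> feature dict) with its segment."""
    for idx, features in label_data.items():
        utterances[idx].labels.update(features)
    return utterances

turns = [("coach", 0.0, 4.2), ("mentee", 4.2, 9.8)]
utts = attach_labels(segment_by_speaker(turns), {1: {"sentiment": "positive"}})
```

In practice the diarized turns would come from a speech-diarization step over the video's audio track, and the label data from human annotators or a classifier.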
-
Patent number: 11810357
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
Type: Grant
Filed: February 21, 2020
Date of Patent: November 7, 2023
Assignee: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20230083298
Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
Type: Application
Filed: November 1, 2022
Publication date: March 16, 2023
Applicant: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
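The flow here — extract per-modality features for an utterance, then feed them (together with previously synthesized features) to a model that emits a new synthesized feature — could be illustrated like this. The extractors and the averaging "model" are deliberately trivial stand-ins, not the patented technique.

```python
def extract_features(utterance):
    # Hypothetical per-modality extractors; real systems would use
    # acoustic, video, and text models here.
    return {
        "acoustic": [utterance["pitch"]],
        "video": [utterance["gaze"]],
        "text": [float(len(utterance["words"]))],
    }

def synthesize(extracted, prior_features):
    # Stand-in "machine learning model": averages all inputs into
    # one new synthesized conversation feature.
    values = [v for feats in extracted.values() for v in feats]
    values += prior_features
    return sum(values) / len(values)

utt = {"pitch": 0.4, "gaze": 0.6, "words": ["hello", "there"]}
feature = synthesize(extract_features(utt), prior_features=[0.5])
```

The key structural point the abstract makes is that the model's input can include its own earlier outputs, so synthesized features can build on one another across utterances.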
-
Patent number: 11521620
Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
Type: Grant
Filed: February 21, 2020
Date of Patent: December 6, 2022
Assignee: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20220343899
Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation, such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, a conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, or used to provide real-time alerts to signify how the conversation is going.
Type: Application
Filed: July 11, 2022
Publication date: October 27, 2022
Applicant: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
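The last step this abstract describes — a mapping from conversation analysis indicators to actions or inferences — could be sketched as a simple lookup over thresholded scores. The indicator names, thresholds, and action strings below are hypothetical placeholders, not taken from the patent.

```python
# Hypothetical mapping from low-scoring indicators to actions.
ACTION_MAP = {
    "goal_progress": "suggest_goal_review_resources",
    "ownership": "send_realtime_alert_to_coach",
}

def indicators_to_actions(indicators, threshold=0.5):
    """Return the action for each indicator scoring below the threshold."""
    return [
        ACTION_MAP[name]
        for name, score in indicators.items()
        if name in ACTION_MAP and score < threshold
    ]

actions = indicators_to_actions({"goal_progress": 0.2, "ownership": 0.9})
```

A declarative mapping like this keeps the policy (which indicator triggers which intervention) separate from the model that produces the scores, which matches the abstract's description of applying a mapping rather than hard-coding responses.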
-
Publication number: 20220343911
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of the machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
Type: Application
Filed: July 11, 2022
Publication date: October 27, 2022
Applicant: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
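The architecture this abstract outlines — per-modality outputs combined across speakers, then fed with a previous state into a sequential model — could be sketched as below. The modality subnetworks and the recurrent update are toy stand-ins (simple concatenation and blending), chosen only to show the data flow the abstract describes.

```python
def utterance_output(video, acoustic, text):
    # Stand-in for the per-modality processing parts: in a real system
    # each argument would pass through its own subnetwork first.
    return video + acoustic + text  # feature-vector concatenation

def sequential_step(state, combined_speaker_features, decay=0.5):
    # Stand-in recurrent update: blend the previous state of the
    # sequential system with the new combined speaker features.
    return [decay * s + (1 - decay) * f
            for s, f in zip(state, combined_speaker_features)]

u1 = utterance_output([0.2], [0.4], [0.6])  # first speaker's utterance
u2 = utterance_output([0.8], [0.6], [0.4])  # second speaker's utterance
combined = [a + b for a, b in zip(u1, u2)]  # combined speaker features
state = sequential_step([0.0, 0.0, 0.0], combined)
```

Repeating `sequential_step` over successive utterances, carrying `state` forward, is what lets the indicators reflect the conversation's history rather than any single utterance.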
-
Patent number: 11417330
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of the machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
Type: Grant
Filed: February 21, 2020
Date of Patent: August 16, 2022
Assignee: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Kellerman, Ryan Sonnek
-
Patent number: 11417318
Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation, such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, a conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, or used to provide real-time alerts to signify how the conversation is going.
Type: Grant
Filed: February 21, 2020
Date of Patent: August 16, 2022
Assignee: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20210264162
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20210264900
Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation, such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, a conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, or used to provide real-time alerts to signify how the conversation is going.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20210264921
Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
-
Publication number: 20210264909
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users, and each utterance representation is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective processing parts of the machine learning system to generate video-, text-, and acoustic-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
Type: Application
Filed: February 21, 2020
Publication date: August 26, 2021
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek