Patents by Inventor Casey Fitzpatrick

Casey Fitzpatrick has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062547
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Application
    Filed: October 31, 2023
    Publication date: February 22, 2024
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
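The segmentation-and-labeling flow this abstract describes can be sketched roughly as follows. This is an illustrative reconstruction, not the patented implementation; the data shapes and function names are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Utterance:
    """One per-speaker segment of a recorded multi-user interaction."""
    speaker: str
    start: float                      # seconds into the recording
    end: float
    labels: dict = field(default_factory=dict)

def segment_by_speaker(turns):
    """Turn diarization output [(speaker, start, end), ...] into utterance
    segments, merging consecutive turns by the same speaker."""
    utterances = []
    for speaker, start, end in turns:
        if utterances and utterances[-1].speaker == speaker:
            utterances[-1].end = end  # extend the current utterance
        else:
            utterances.append(Utterance(speaker, start, end))
    return utterances

def attach_labels(utterances, label_data):
    """Store label data (conversation features, e.g. {'sentiment': ...})
    in association with the utterance segment at each index."""
    for index, labels in label_data.items():
        utterances[index].labels.update(labels)
    return utterances

# Example: two consecutive coach turns merge into one utterance segment.
turns = [("coach", 0.0, 2.1), ("coach", 2.1, 4.0), ("member", 4.0, 7.5)]
utts = attach_labels(segment_by_speaker(turns), {0: {"sentiment": "positive"}})
```

Merging adjacent same-speaker turns is one common convention for defining "utterances" from diarization output; the patent itself does not commit to this choice.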
  • Patent number: 11810357
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: November 7, 2023
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20230083298
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Application
    Filed: November 1, 2022
    Publication date: March 16, 2023
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
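The per-utterance, per-modality flow this abstract describes can be sketched as below. The extractors and the "model" are toy stand-ins for illustration only; the real system's feature definitions and machine learning model are not specified here.

```python
def extract_acoustic(samples):
    """Toy acoustic feature: mean of the audio samples."""
    return [sum(samples) / len(samples)]

def extract_video(frames):
    """Toy video feature: number of frames in the utterance."""
    return [float(len(frames))]

def extract_text(text):
    """Toy text feature: word count of the transcript."""
    return [float(len(text.split()))]

def synthesize(features, prior=None):
    """Stand-in for the ML model: combine extracted features and any
    previously synthesized features into a new synthesized feature."""
    combined = features + (prior or [])
    return {"engagement": sum(combined) / len(combined)}

# One utterance with data for all three modalities.
utterance = {"acoustic": [0.2, 0.4], "video": [1, 2, 3], "text": "how are you"}
features = (extract_acoustic(utterance["acoustic"])
            + extract_video(utterance["video"])
            + extract_text(utterance["text"]))
synth = synthesize(features)
```

The key structural point matching the abstract is that features are extracted per modality for each utterance, then fed (optionally with prior synthesized features) to a model that produces additional conversation features.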
  • Patent number: 11521620
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: December 6, 2022
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20220343899
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Application
    Filed: July 11, 2022
    Publication date: October 27, 2022
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
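The indicator-to-action mapping this abstract describes might look roughly like the sketch below. The indicator names, thresholds, and action names are all invented for illustration; the abstract does not specify them.

```python
def map_indicators_to_actions(indicators):
    """Apply a mapping of conversation analysis indicators to actions and
    inferences for a multiparty (e.g. coach/mentee) conversation."""
    actions = []
    if indicators.get("goal_progress", 0.0) < 0.3:
        actions.append("suggest_resources")        # select resources for the coach or mentee
    if indicators.get("ownership", 0.0) > 0.7:
        actions.append("surface_ownership_score")  # show in a UI with benchmark indicators
    if indicators.get("impact", 0.0) < 0.2:
        actions.append("real_time_alert")          # signal how the conversation is going
    return actions

actions = map_indicators_to_actions(
    {"goal_progress": 0.2, "ownership": 0.8, "impact": 0.5})
```

In the described system the indicators would come from a machine learning system applied to extracted conversation features; here they are passed in directly to isolate the mapping step.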
  • Publication number: 20220343911
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users and is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective video, acoustic, and text processing parts of a machine learning system, producing video-, acoustic-, and text-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Application
    Filed: July 11, 2022
    Publication date: October 27, 2022
    Applicant: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
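The sequential step this abstract describes can be sketched as follows. The update rule here is a toy exponential moving average standing in for the actual sequential machine learning system; all names and numbers are illustrative.

```python
def utterance_output(video_out, acoustic_out, text_out):
    """Combine the per-modality outputs into a single utterance output."""
    return video_out + acoustic_out + text_out

def sequential_step(combined_features, prev_state, decay=0.5):
    """One step of a (toy) sequential model: fold the combined speaker
    features into the previous model state."""
    signal = sum(combined_features) / len(combined_features)
    return decay * prev_state + (1 - decay) * signal

# One conversational turn: utterance outputs for the first and second speaker.
first_out = utterance_output([0.2], [0.4], [0.6])
second_out = utterance_output([0.1], [0.1], [0.1])

state = 0.0  # initial state of the sequential model
state = sequential_step(first_out + second_out, state)
```

The structural point matching the abstract is that each update consumes both the combined speaker features and the previous state, so indicators can reflect the conversation's history rather than a single utterance.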
  • Patent number: 11417330
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users and is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective video, acoustic, and text processing parts of a machine learning system, producing video-, acoustic-, and text-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: August 16, 2022
    Assignee: BetterUp, Inc.
Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Patent number: 11417318
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Grant
    Filed: February 21, 2020
    Date of Patent: August 16, 2022
    Assignee: BetterUp, Inc.
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264162
Abstract: Technology is provided for generating conversation features for recorded conversations. The technology includes receiving videos depicting a multiple-user interaction, segmenting the videos into multiple utterances based on identifying utterances from individual users, receiving label data for the utterance segments specifying conversation features, and storing the label data in association with the utterance segments.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264900
    Abstract: Technology is provided for causing a computing system to extract conversation features from a multiparty conversation (e.g., between a coach and mentee), apply the conversation features to a machine learning system to generate conversation analysis indicators, and apply a mapping of conversation analysis indicators to actions and inferences to determine actions to take or inferences to make for the multiparty conversation. In various implementations, the actions and inferences can include determining scores for the multiparty conversation such as a score for progress toward a coaching goal, instant scores for various points throughout the conversation, conversation impact score, ownership scores, etc. These scores can be, e.g., surfaced in various user interfaces along with context and benchmark indicators, used to select resources for the coach or mentee, used to update coach/mentee matchings, used to provide real-time alerts to signify how the conversation is going, etc.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264921
    Abstract: Technology is provided for identifying synthesized conversation features from recorded conversations. The technology can identify, for each of one or more utterances, data for multiple modalities, such as acoustic data, video data, and text data. The technology can extract features, for each particular utterance of the one or more utterances, from each of the data for the multiple modalities associated with that particular utterance. The technology can also apply a machine learning model that receives the extracted features and/or previously synthesized conversation features and produces one or more additional synthesized conversation features.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek
  • Publication number: 20210264909
Abstract: Technology is provided for conversation analysis. The technology includes receiving multiple utterance representations, where each utterance representation represents a portion of a conversation performed by at least two users and is associated with video data, acoustic data, and text data. The technology further includes generating a first utterance output by applying the video data, acoustic data, and text data of the first utterance representation to respective video, acoustic, and text processing parts of a machine learning system, producing video-, acoustic-, and text-based outputs. A second utterance output is further generated for a second user. Conversation analysis indicators are generated by applying, to a sequential machine learning system, the combined speaker features and a previous state of the sequential machine learning system.
    Type: Application
    Filed: February 21, 2020
    Publication date: August 26, 2021
    Inventors: Andrew Reece, Peter Bull, Gus Cooney, Casey Fitzpatrick, Gabriella Rosen Kellerman, Ryan Sonnek