Patents by Inventor Rosalin PARIDA

Rosalin PARIDA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12380915
Abstract: A method and system for emotion recognition and forecasting are disclosed. The method may include obtaining audio data of a conversation involving a plurality of speakers and identifying a plurality of turns of the conversation from a plurality of utterances. The method may further include extracting audio embedding features from the plurality of turns, obtaining a plurality of text segments associated with the audio data, extracting text embedding features from the plurality of text segments, obtaining and concatenating speaker embedding features associated with the audio data, and obtaining and concatenating a plurality of emotion features corresponding to the plurality of turns. The method may further include executing a tree-based prediction model to predict emotion features of the plurality of speakers for a subsequent turn of the ongoing conversation based on the audio embedding features, the text embedding features, the concatenated speaker embedding features, and the concatenated emotion features.
    Type: Grant
    Filed: November 30, 2022
    Date of Patent: August 5, 2025
    Assignee: ACCENTURE GLOBAL SOLUTIONS LIMITED
    Inventors: Rosalin Parida, Bhushan Gurmukhdas Jagyasi, Surajit Sen, Aditi Debsharma, Gopali Raval Contractor
  • Publication number: 20240177729
Abstract: A method and system for emotion recognition and forecasting are disclosed. The method may include obtaining audio data of a conversation involving a plurality of speakers and identifying a plurality of turns of the conversation from a plurality of utterances. The method may further include extracting audio embedding features from the plurality of turns, obtaining a plurality of text segments associated with the audio data, extracting text embedding features from the plurality of text segments, obtaining and concatenating speaker embedding features associated with the audio data, and obtaining and concatenating a plurality of emotion features corresponding to the plurality of turns. The method may further include executing a tree-based prediction model to predict emotion features of the plurality of speakers for a subsequent turn of the ongoing conversation based on the audio embedding features, the text embedding features, the concatenated speaker embedding features, and the concatenated emotion features.
    Type: Application
    Filed: November 30, 2022
    Publication date: May 30, 2024
    Applicant: Accenture Global Solutions Limited
    Inventors: Rosalin PARIDA, Bhushan Gurmukhdas JAGYASI, Surajit SEN, Aditi DEBSHARMA, Gopali Raval CONTRACTOR
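The abstracts above describe a pipeline that concatenates per-turn audio, text, speaker, and emotion features and feeds them into a tree-based model to forecast the next turn's emotion. A minimal sketch of that idea follows; the feature dimensions, the `assemble_turn_features` helper, the toy training data, and the depth-1 regression stump (standing in for whatever tree-based model the patent actually uses) are all illustrative assumptions, not the claimed implementation.

```python
def assemble_turn_features(audio_emb, text_emb, speaker_emb, emotion_feats):
    """Concatenate one turn's feature vectors into a single input vector,
    mirroring the concatenation step described in the abstract."""
    return audio_emb + text_emb + speaker_emb + emotion_feats


class RegressionStump:
    """Depth-1 regression tree: a minimal stand-in for a tree-based
    prediction model. Finds the single (feature, threshold) split that
    minimizes squared error, then predicts the leaf mean."""

    def fit(self, X, y):
        best = (float("inf"), 0, 0.0, 0.0, 0.0)  # (error, feature, threshold, left mean, right mean)
        for j in range(len(X[0])):
            for row in X:  # candidate thresholds: observed feature values
                t = row[j]
                left = [yi for xi, yi in zip(X, y) if xi[j] <= t]
                right = [yi for xi, yi in zip(X, y) if xi[j] > t]
                if not left or not right:
                    continue
                lm = sum(left) / len(left)
                rm = sum(right) / len(right)
                err = (sum((yi - lm) ** 2 for yi in left)
                       + sum((yi - rm) ** 2 for yi in right))
                if err < best[0]:
                    best = (err, j, t, lm, rm)
        _, self.j, self.t, self.lm, self.rm = best
        return self

    def predict(self, x):
        return self.lm if x[self.j] <= self.t else self.rm


# Toy example: two past turns per sample, each reduced to a 2-d
# concatenated feature vector, with a scalar "emotion score" target
# for the subsequent turn. Real embeddings would be far larger.
X = [
    assemble_turn_features([0.1], [0.0], [], []),
    assemble_turn_features([0.2], [0.0], [], []),
    assemble_turn_features([0.9], [1.0], [], []),
    assemble_turn_features([0.8], [1.0], [], []),
]
y = [0.0, 0.0, 1.0, 1.0]

model = RegressionStump().fit(X, y)
print(model.predict([0.15, 0.0]))  # low-arousal history -> 0.0
print(model.predict([0.85, 1.0]))  # high-arousal history -> 1.0
```

In practice the stump would be replaced by a full tree ensemble (e.g. gradient-boosted trees), but the data flow, per-turn feature extraction, concatenation, then a tree-based regressor over the combined vector, is the part the abstract specifies.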