Patents by Inventor Trisha Mittal

Trisha Mittal has filed patent applications to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861940
    Abstract: Systems, methods, apparatuses, and computer program products for recognizing human emotion in images or video. A method for recognizing perceived human emotion may include receiving a raw input. The raw input may be processed to generate input data corresponding to at least one context. Features may be extracted from the raw input data to obtain a plurality of feature vectors and inputs. The plurality of feature vectors and the inputs may be transmitted to a respective neural network. At least some of the plurality of feature vectors may be fused to obtain a feature encoding. Additional feature encodings may be computed from the plurality of feature vectors via the respective neural network. A multi-label emotion classification of a primary agent in the raw input may be performed based on the feature encoding and the additional feature encodings.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: January 2, 2024
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Pooja Guhan, Dinesh Manocha
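    The pipeline this abstract describes — per-context feature extraction, fusing some feature vectors into one encoding, computing additional encodings via networks, then multi-label classification — can be sketched minimally as below. Every concrete choice (the three context names, all dimensions, concatenation as the fusion operator, the random untrained linear maps, the 0.5 threshold) is an illustrative assumption, not the patented method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Stand-ins for per-context feature vectors (e.g. face, background, depth).
    face = rng.standard_normal(16)
    background = rng.standard_normal(16)
    depth = rng.standard_normal(16)

    # Fuse some of the feature vectors into a single feature encoding
    # (concatenation here; the abstract does not specify the operator).
    fused_encoding = np.concatenate([face, background])

    # Compute an additional feature encoding via a random linear map standing
    # in for the "respective neural network" of the abstract.
    W = rng.standard_normal((8, 16))
    extra_encoding = np.tanh(W @ depth)

    # Multi-label classification over, say, 6 emotion labels: independent
    # sigmoid scores thresholded per label, so several labels can be active
    # at once (unlike a single softmax choice).
    combined = np.concatenate([fused_encoding, extra_encoding])
    W_cls = rng.standard_normal((6, combined.size)) * 0.1
    scores = sigmoid(W_cls @ combined)
    labels = (scores > 0.5).astype(int)
    print(labels.shape, scores.round(2))
    ```

    The sigmoid-per-label head is the standard way to realize "multi-label" classification: each emotion is an independent binary decision rather than one class competing in a softmax.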
  • Publication number: 20230410505
    Abstract: Techniques for video manipulation detection are described to detect one or more manipulations present in digital content such as a digital video. A detection system, for instance, receives a frame of a digital video that depicts at least one entity. Coordinates of the frame that correspond to a gaze location of the entity are determined, and the detection system determines whether the coordinates correspond to a portion of an object depicted in the frame to calculate a gaze confidence score. A manipulation score is generated that indicates whether the digital video has been manipulated based on the gaze confidence score. In some examples, the manipulation score is based on at least one additional confidence score.
    Type: Application
    Filed: June 21, 2022
    Publication date: December 21, 2023
    Applicant: Adobe Inc.
    Inventors: Ritwik Sinha, Viswanathan Swaminathan, Trisha Mittal, John Philip Collomosse
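    The scoring described in this abstract — a per-frame gaze confidence from whether gaze coordinates land on a depicted object, aggregated into a manipulation score — might be sketched as follows. The soft exponential score, the 50-pixel decay scale, and averaging across frames are all assumptions for illustration; the patent does not specify these functions:

    ```python
    import math

    def gaze_confidence(gaze_xy, obj_box):
        """Score in [0, 1]: 1.0 when the gaze point falls inside the object's
        bounding box, decaying with pixel distance outside it (assumed form)."""
        x, y = gaze_xy
        x0, y0, x1, y1 = obj_box
        dx = max(x0 - x, 0.0, x - x1)   # horizontal distance outside the box
        dy = max(y0 - y, 0.0, y - y1)   # vertical distance outside the box
        return math.exp(-math.hypot(dx, dy) / 50.0)

    def manipulation_score(per_frame_confidences):
        """Higher when gaze rarely lands on a plausible object, suggesting
        the video may be manipulated (averaging is an assumed aggregation)."""
        avg = sum(per_frame_confidences) / len(per_frame_confidences)
        return 1.0 - avg

    box = (100, 60, 200, 160)
    confs = [gaze_confidence((120, 80), box),    # gaze on the object
             gaze_confidence((400, 300), box)]   # gaze far from it
    print(round(manipulation_score(confs), 3))
    ```

    In the patent's terms, this per-frame gaze confidence would be one of several confidence scores combined into the final manipulation score.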
  • Patent number: 11830291
    Abstract: Systems, methods, apparatuses, and computer program products for providing multimodal emotion recognition. The method may include receiving raw input from an input source. The method may also include extracting one or more feature vectors from the raw input. The method may further include determining an effectiveness of the one or more feature vectors. Further, the method may include performing, based on the determination, multiplicative fusion processing on the one or more feature vectors. The method may also include predicting, based on results of the multiplicative fusion processing, one or more emotions of the input source.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: November 28, 2023
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Dinesh Manocha
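    One way to read "multiplicative fusion weighted by per-modality effectiveness" is a log-linear pool: combine each modality's class distribution as a weighted product, so an ineffective modality contributes little. The sketch below uses that reading; the modality names, effectiveness values, and random stand-in scores are assumptions, and the exact operator in the patent may differ:

    ```python
    import numpy as np

    def softmax(z):
        e = np.exp(z - z.max())
        return e / e.sum()

    rng = np.random.default_rng(1)

    # Random stand-ins for per-modality class scores (e.g. face, speech, text)
    # over 4 emotion classes.
    modality_logits = [rng.standard_normal(4) for _ in range(3)]
    probs = [softmax(z) for z in modality_logits]

    # "Effectiveness" per modality, e.g. from a data-quality check (assumed).
    effectiveness = np.array([1.0, 0.8, 0.2])

    # Multiplicative fusion as a weighted product of class distributions,
    # computed in log space for numerical stability.
    log_fused = sum(w * np.log(p) for w, p in zip(effectiveness, probs))
    fused = softmax(log_fused)
    predicted = int(np.argmax(fused))
    print(predicted, fused.round(3))
    ```

    Compared with additive fusion (averaging distributions), the multiplicative form lets a confident, effective modality veto classes that other modalities weakly favor.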
  • Publication number: 20230139824
    Abstract: Various disclosed embodiments are directed to using one or more algorithms or models to select a suitable or optimal variation, among multiple variations, of a given content item based on feedback. Such feedback guides the algorithm or model toward a suitable variation, which is then produced as the output for consumption by users. Further, various embodiments reduce tedious manual user input requirements and computing resource consumption, among other things.
    Type: Application
    Filed: November 4, 2021
    Publication date: May 4, 2023
    Inventors: Trisha Mittal, Viswanathan Swaminathan, Ritwik Sinha, Saayan Mitra, David Arbour, Somdeb Sarkhel
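    The abstract does not name a selection algorithm, but "feedback-guided selection among variations" is the classic multi-armed-bandit setting, so an epsilon-greedy selector is one plausible sketch. The click-through rates, epsilon value, and trial count below are all invented for illustration:

    ```python
    import random

    def select_variation(counts, rewards, rng, epsilon=0.1):
        """Epsilon-greedy choice among content-item variations: usually pick
        the variation with the best observed mean reward, occasionally explore."""
        if rng.random() < epsilon:
            return rng.randrange(len(counts))          # explore
        means = [r / c if c else 0.0 for r, c in zip(rewards, counts)]
        return max(range(len(means)), key=means.__getitem__)  # exploit

    # Simulated feedback: variation 2 has the highest true click-through rate.
    true_ctr = [0.02, 0.05, 0.12]
    counts, rewards = [0, 0, 0], [0.0, 0.0, 0.0]
    rng = random.Random(0)
    for _ in range(5000):
        arm = select_variation(counts, rewards, rng)
        counts[arm] += 1
        rewards[arm] += 1.0 if rng.random() < true_ctr[arm] else 0.0

    best = max(range(3), key=lambda i: counts[i])
    print(best, counts)  # the selector typically concentrates on the best variation
    ```

    Any feedback signal (clicks, dwell time, conversions) can play the role of the reward; the point is that selection improves from observed outcomes rather than manual curation.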
  • Publication number: 20220138472
    Abstract: A video is classified as real or fake by extracting facial features, including facial modalities and facial emotions, and speech features, including speech modalities and speech emotions, from the video. The facial and speech modalities are passed through first and second neural networks, respectively, to generate facial and speech modality embeddings. The facial and speech emotions are passed through third and fourth neural networks, respectively, to generate facial and speech emotion embeddings. A first distance, d1, between the facial modality embedding and the speech modality embedding is generated, together with a second distance, d2, between the facial emotion embedding and the speech emotion embedding. The video is classified as fake if a sum of the first distance and the second distance exceeds a threshold distance. The networks may be trained using real and fake video pairs for multiple subjects.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha
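    The decision rule in this abstract is concrete: classify as fake when d1 + d2 exceeds a threshold, where d1 compares the facial and speech modality embeddings and d2 the facial and speech emotion embeddings. The sketch below assumes Euclidean distance, 8-dimensional embeddings, and a threshold of 1.0, and omits the four networks that would produce the embeddings:

    ```python
    import numpy as np

    def classify_video(face_mod, speech_mod, face_emo, speech_emo, threshold=1.0):
        """Flag a video as fake when the combined modality and emotion
        embedding distances exceed a threshold (distance metric and
        threshold value are assumptions for this sketch)."""
        d1 = np.linalg.norm(face_mod - speech_mod)   # modality mismatch
        d2 = np.linalg.norm(face_emo - speech_emo)   # perceived-emotion mismatch
        return "fake" if d1 + d2 > threshold else "real"

    rng = np.random.default_rng(2)
    e = rng.standard_normal(8)
    # Real video: facial and speech cues agree, so the embeddings nearly coincide.
    print(classify_video(e, e + 0.01, e, e + 0.01))  # -> real
    # Fake video: the speech track drifts from the face in both modality and emotion.
    print(classify_video(e, e + 2.0, e, -e))         # -> fake
    ```

    Training (per the abstract, on real/fake video pairs for multiple subjects) would push the paired embeddings together for real videos and apart for fakes, so the simple summed-distance test becomes discriminative.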
  • Publication number: 20210390288
    Abstract: Systems, methods, apparatuses, and computer program products for recognizing human emotion in images or video. A method for recognizing perceived human emotion may include receiving a raw input. The raw input may be processed to generate input data corresponding to at least one context. Features may be extracted from the raw input data to obtain a plurality of feature vectors and inputs. The plurality of feature vectors and the inputs may be transmitted to a respective neural network. At least some of the plurality of feature vectors may be fused to obtain a feature encoding. Additional feature encodings may be computed from the plurality of feature vectors via the respective neural network. A multi-label emotion classification of a primary agent in the raw input may be performed based on the feature encoding and the additional feature encodings.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 16, 2021
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Pooja Guhan, Dinesh Manocha
  • Publication number: 20210342656
    Abstract: Systems, methods, apparatuses, and computer program products for providing multimodal emotion recognition. The method may include receiving raw input from an input source. The method may also include extracting one or more feature vectors from the raw input. The method may further include determining an effectiveness of the one or more feature vectors. Further, the method may include performing, based on the determination, multiplicative fusion processing on the one or more feature vectors. The method may also include predicting, based on results of the multiplicative fusion processing, one or more emotions of the input source.
    Type: Application
    Filed: February 10, 2021
    Publication date: November 4, 2021
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Dinesh Manocha