Patents by Inventor Rohan CHANDRA

Rohan CHANDRA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11861940
    Abstract: Systems, methods, apparatuses, and computer program products for recognizing human emotion in images or video. A method for recognizing perceived human emotion may include receiving a raw input. The raw input may be processed to generate input data corresponding to at least one context. Features may be extracted from the raw input data to obtain a plurality of feature vectors and inputs. The plurality of feature vectors and the inputs may be transmitted to a respective neural network. At least some of the plurality of feature vectors may be fused to obtain a feature encoding. Additional feature encodings may be computed from the plurality of feature vectors via the respective neural network. A multi-label emotion classification of a primary agent in the raw input may be performed based on the feature encoding and the additional feature encodings.
    Type: Grant
    Filed: June 16, 2021
    Date of Patent: January 2, 2024
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Pooja Guhan, Dinesh Manocha
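The pipeline in the abstract above (per-context encodings, fusion, multi-label classification) can be sketched in miniature. This is an illustrative toy, not the patented implementation: the linear "networks", the averaging fusion, and the label weights are all invented stand-ins.

```python
import math

def encode(features):
    # Stand-in for a per-context neural network: a fixed linear map.
    return [2.0 * f + 0.5 for f in features]

def fuse(encodings):
    # Fuse per-context encodings by element-wise averaging.
    length = len(encodings[0])
    return [sum(e[i] for e in encodings) / len(encodings) for i in range(length)]

def classify_multilabel(encoding, label_weights, threshold=0.5):
    # Multi-label classification: each label fires independently when its
    # sigmoid-squashed score exceeds the threshold.
    labels = {}
    for label, weights in label_weights.items():
        score = sum(w * x for w, x in zip(weights, encoding))
        labels[label] = 1.0 / (1.0 + math.exp(-score)) > threshold
    return labels

# Example: two contexts (e.g. a face crop and the surrounding scene),
# three features each, two candidate emotion labels.
face = encode([0.9, 0.1, 0.3])
scene = encode([0.2, 0.8, 0.4])
fused = fuse([face, scene])
weights = {"happy": [1.0, -1.0, 0.2], "excited": [0.5, 0.5, 0.5]}
print(classify_multilabel(fused, weights))
```

Because each label is thresholded independently, an input can carry several emotions at once, which is the point of the multi-label formulation.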
  • Patent number: 11830291
    Abstract: Systems, methods, apparatuses, and computer program products for providing multimodal emotion recognition. The method may include receiving raw input from an input source. The method may also include extracting one or more feature vectors from the raw input. The method may further include determining an effectiveness of the one or more feature vectors. Further, the method may include performing, based on the determination, multiplicative fusion processing on the one or more feature vectors. The method may also include predicting, based on results of the multiplicative fusion processing, one or more emotions of the input source.
    Type: Grant
    Filed: February 10, 2021
    Date of Patent: November 28, 2023
    Assignee: UNIVERSITY OF MARYLAND, COLLEGE PARK
    Inventors: Trisha Mittal, Aniket Bera, Uttaran Bhattacharya, Rohan Chandra, Dinesh Manocha
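The multiplicative fusion step described above can be illustrated with a toy sketch: per-modality class probabilities are combined multiplicatively, so a class must be supported by every modality to score well. This is a simplified illustration, not the patented method; the modality names and probabilities are invented.

```python
def multiplicative_fusion(modality_probs):
    """Combine per-modality probability dicts by element-wise product,
    then renormalize. modality_probs: list of {emotion: prob} dicts."""
    emotions = modality_probs[0].keys()
    scores = {e: 1.0 for e in emotions}
    for probs in modality_probs:
        for e in emotions:
            scores[e] *= probs[e]
    total = sum(scores.values())
    return {e: s / total for e, s in scores.items()}

# Three modalities from one input source; each votes softly per emotion.
face  = {"happy": 0.7, "sad": 0.2, "neutral": 0.1}
voice = {"happy": 0.6, "sad": 0.1, "neutral": 0.3}
text  = {"happy": 0.5, "sad": 0.2, "neutral": 0.3}
fused = multiplicative_fusion([face, voice, text])
print(max(fused, key=fused.get))
```

Multiplication penalizes disagreement sharply: a single near-zero modality probability suppresses that class, which is one rationale for multiplicative over additive fusion when some modalities are unreliable.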
  • Publication number: 20230020678
    Abstract: In some implementations, a computing device may generate a day view, month views, and year view that show cards specific to each view. The cards include images, videos, and/or other assets from a media library that reflect the corresponding time frame of the card on which the assets are displayed. A selected view is presented in a graphical user interface (GUI) for interaction with a user of the media library. Upon selection of an asset displayed in the GUI, the view is switched to show more assets from a time frame similar to that of the selected asset while maintaining focus on the selected asset.
    Type: Application
    Filed: August 1, 2022
    Publication date: January 19, 2023
    Applicant: Apple Inc.
    Inventors: Eric M. Circlaeys, Guillaume Vergnaud, Samantha E. Fierro, Kevin Aujoulet, Benedikt M. Hirmer, Alexandre N. Lopoukhine, Kevin Bessiere, Vignesh Jagadeesh, Rohan Chandra
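The view-and-card behavior described above can be modeled in a few lines: each view groups assets into cards by a time-frame key, and selecting an asset drills into a finer view while keeping focus on that asset. Class names, field names, and the view hierarchy below are invented for the sketch; the real GUI logic is far richer.

```python
from datetime import date

ASSETS = [
    {"id": "a", "captured": date(2022, 7, 4)},
    {"id": "b", "captured": date(2022, 7, 4)},
    {"id": "c", "captured": date(2022, 8, 1)},
]

def cards_for_view(view, assets):
    # Group assets into cards keyed by the view's time frame.
    keyfns = {
        "year":  lambda d: (d.year,),
        "month": lambda d: (d.year, d.month),
        "day":   lambda d: (d.year, d.month, d.day),
    }
    key = keyfns[view]
    cards = {}
    for a in assets:
        cards.setdefault(key(a["captured"]), []).append(a)
    return cards

def select_asset(current_view, asset):
    # Selecting an asset switches to the next finer view and returns the
    # card containing that asset, so focus is maintained on it.
    finer = {"year": "month", "month": "day", "day": "day"}
    new_view = finer[current_view]
    frame = cards_for_view(new_view, ASSETS)
    focused_card = next(c for c in frame.values() if asset in c)
    return new_view, focused_card

view, card = select_asset("month", ASSETS[0])
print(view, [a["id"] for a in card])
```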
  • Patent number: 11520465
    Abstract: In some implementations, a computing device may generate a day view, month views, and year view that show cards specific to each view. The cards include images, videos, and/or other assets from a media library that reflect the corresponding time frame of the card on which the assets are displayed. A selected view is presented in a graphical user interface (GUI) for interaction with a user of the media library. Upon selection of an asset displayed in the GUI, the view is switched to show more assets from a time frame similar to that of the selected asset while maintaining focus on the selected asset.
    Type: Grant
    Filed: September 4, 2019
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Eric Circlaeys, Guillaume Vergnaud, Samantha E. Fierro, Kevin Aujoulet, Benedikt M. Hirmer, Alexandre N. Lopoukhine, Kevin Bessiere, Vignesh Jagadeesh, Rohan Chandra
  • Publication number: 20220382811
    Abstract: Techniques for digital asset classification are described. One or more holiday scores are assigned to a set of digital media assets. The one or more holiday scores are calculated based on weighted holiday metrics determined from characteristics of the set of digital media assets. A holiday classification is assigned based on the one or more holiday scores, and the set of digital media assets is provided for presentation in accordance with the holiday classification.
    Type: Application
    Filed: April 12, 2022
    Publication date: December 1, 2022
    Inventors: Sabrine Rekik, Alexa Rockwell, Rohan Chandra, Sowmya Gopalan, Emma D. Clark, Kevin Bessiere
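The scoring scheme above (weighted metrics over asset characteristics, thresholded into a classification) can be sketched as follows. The metric names, weights, and threshold are invented placeholders, not the patent's actual criteria.

```python
# Hypothetical per-asset holiday metrics, each valued in [0, 1].
HOLIDAY_METRIC_WEIGHTS = {
    "taken_on_holiday_date": 0.5,   # capture date falls on a known holiday
    "holiday_scene_content": 0.3,   # e.g. detected tree, candles, fireworks
    "social_gathering":      0.2,   # many faces detected in the asset
}

def holiday_score(metrics):
    # Weighted sum of an asset's metric values.
    return sum(HOLIDAY_METRIC_WEIGHTS[name] * value
               for name, value in metrics.items())

def classify(assets, threshold=0.5):
    # Score the set and assign a classification by thresholding the mean.
    scores = [holiday_score(a) for a in assets]
    mean = sum(scores) / len(scores)
    return "holiday" if mean >= threshold else "non-holiday"

assets = [
    {"taken_on_holiday_date": 1.0, "holiday_scene_content": 0.8, "social_gathering": 0.6},
    {"taken_on_holiday_date": 1.0, "holiday_scene_content": 0.2, "social_gathering": 0.9},
]
print(classify(assets))
```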
  • Publication number: 20220382803
    Abstract: This disclosure relates to systems, methods, and computer-readable media for identifying digital assets on the end-user device that were received from another device as secondary digital assets; adding the secondary digital assets to a syndication library separate from the primary photo library; and applying eligibility filters to the secondary digital assets in the syndication library, yielding a set of eligible secondary digital assets and a set of ineligible secondary digital assets in the syndication library. The set of eligible secondary digital assets in the syndication library is linked with the primary photo library.
    Type: Application
    Filed: March 29, 2022
    Publication date: December 1, 2022
    Inventors: Kalu O. Kalu, Akarsh Simha, Kevin Aujoulet, Marcelo Lotif Araujo, Rohan Chandra, Pamela Chen, Elisa Y. Cui, Kamal Benkiran
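The flow described above (received assets go into a separate syndication library, eligibility filters split them, and eligible assets are linked into the primary library) can be sketched as below. The specific filter criteria are invented examples; the patent does not disclose these rules here.

```python
def is_eligible(asset):
    # Example eligibility filters: drop screenshots and very small images.
    if asset.get("is_screenshot"):
        return False
    if asset.get("pixel_count", 0) < 10_000:
        return False
    return True

def syndicate(received_assets, primary_library):
    # Secondary assets are kept in a syndication library, separate from
    # the primary photo library.
    syndication_library = list(received_assets)
    eligible   = [a for a in syndication_library if is_eligible(a)]
    ineligible = [a for a in syndication_library if not is_eligible(a)]
    # Eligible secondary assets are linked (not copied) into the primary
    # library's view.
    primary_library["linked_secondary"] = eligible
    return eligible, ineligible

received = [
    {"id": 1, "is_screenshot": False, "pixel_count": 12_000_000},
    {"id": 2, "is_screenshot": True,  "pixel_count": 2_000_000},
]
library = {"owned": []}
eligible, ineligible = syndicate(received, library)
print([a["id"] for a in eligible])
```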
  • Publication number: 20220138472
    Abstract: A video is classified as real or fake by extracting facial features, including facial modalities and facial emotions, and speech features, including speech modalities and speech emotions, from the video. The facial and speech modalities are passed through first and second neural networks, respectively, to generate facial and speech modality embeddings. The facial and speech emotions are passed through third and fourth neural networks, respectively, to generate facial and speech emotion embeddings. A first distance, d1, between the facial modality embedding and the speech modality embedding is generated, together with a second distance, d2, between the facial emotion embedding and the speech emotion embedding. The video is classified as fake if a sum of the first distance and the second distance exceeds a threshold distance. The networks may be trained using real and fake video pairs for multiple subjects.
    Type: Application
    Filed: November 1, 2021
    Publication date: May 5, 2022
    Inventors: Trisha Mittal, Uttaran Bhattacharya, Rohan Chandra, Aniket Bera, Dinesh Manocha
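The decision rule in the abstract above reduces to a simple test: the video is labeled fake when d1 + d2 exceeds a threshold, where d1 and d2 are distances between the facial/speech modality embeddings and the facial/speech emotion embeddings respectively. The sketch below uses short placeholder vectors in place of real network outputs, and Euclidean distance as an assumed metric.

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify_video(face_mod, speech_mod, face_emo, speech_emo, threshold):
    d1 = euclidean(face_mod, speech_mod)   # modality mismatch
    d2 = euclidean(face_emo, speech_emo)   # emotion mismatch
    return "fake" if d1 + d2 > threshold else "real"

# In a genuine video, face and speech embeddings tend to agree...
print(classify_video([0.1, 0.2], [0.1, 0.25],
                     [0.9, 0.1], [0.85, 0.1], threshold=0.5))
# ...while a manipulated face drifts away from the speech track.
print(classify_video([0.9, 0.2], [0.1, 0.8],
                     [0.1, 0.9], [0.9, 0.2], threshold=0.5))
```

The intuition is that a face swap perturbs the visual stream but not the audio, so cross-modal agreement, both in raw modality features and in the emotions they express, breaks down for fakes.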
  • Publication number: 20210390288
    Abstract: Systems, methods, apparatuses, and computer program products for recognizing human emotion in images or video. A method for recognizing perceived human emotion may include receiving a raw input. The raw input may be processed to generate input data corresponding to at least one context. Features may be extracted from the raw input data to obtain a plurality of feature vectors and inputs. The plurality of feature vectors and the inputs may be transmitted to a respective neural network. At least some of the plurality of feature vectors may be fused to obtain a feature encoding. Additional feature encodings may be computed from the plurality of feature vectors via the respective neural network. A multi-label emotion classification of a primary agent in the raw input may be performed based on the feature encoding and the additional feature encodings.
    Type: Application
    Filed: June 16, 2021
    Publication date: December 16, 2021
    Inventors: Trisha MITTAL, Aniket BERA, Uttaran BHATTACHARYA, Rohan CHANDRA, Pooja GUHAN, Dinesh MANOCHA
  • Publication number: 20210342656
    Abstract: Systems, methods, apparatuses, and computer program products for providing multimodal emotion recognition. The method may include receiving raw input from an input source. The method may also include extracting one or more feature vectors from the raw input. The method may further include determining an effectiveness of the one or more feature vectors. Further, the method may include performing, based on the determination, multiplicative fusion processing on the one or more feature vectors. The method may also include predicting, based on results of the multiplicative fusion processing, one or more emotions of the input source.
    Type: Application
    Filed: February 10, 2021
    Publication date: November 4, 2021
    Inventors: Trisha MITTAL, Aniket BERA, Uttaran BHATTACHARYA, Rohan CHANDRA, Dinesh MANOCHA
  • Publication number: 20200356227
    Abstract: In some implementations, a computing device may generate a day view, month views, and year view that show cards specific to each view. The cards include images, videos, and/or other assets from a media library that reflect the corresponding time frame of the card on which the assets are displayed. A selected view is presented in a graphical user interface (GUI) for interaction with a user of the media library. Upon selection of an asset displayed in the GUI, the view is switched to show more assets from a time frame similar to that of the selected asset while maintaining focus on the selected asset.
    Type: Application
    Filed: September 4, 2019
    Publication date: November 12, 2020
    Applicant: Apple Inc.
    Inventors: Eric Circlaeys, Guillaume Vergnaud, Samantha E. Fierro, Kevin Aujoulet, Benedikt M. Hirmer, Alexandre N. Lopoukhine, Kevin Bessiere, Vignesh Jagadeesh, Rohan Chandra
  • Patent number: 9602796
    Abstract: Technologies for improving the accuracy of depth camera images include a computing device to generate a foreground mask and a background mask for an image generated by a depth camera. The computing device identifies areas of a depth image of a depth channel of the generated image having unknown depth values as one of interior depth holes or exterior depth holes based on the foreground and background masks. The computing device fills at least a portion of the interior depth holes of the depth image based on depth values of areas of the depth image within a threshold distance of the corresponding portion of the interior depth holes. Similarly, the computing device fills at least a portion of the exterior depth holes of the depth image based on depth values of areas of the depth image within the threshold distance of the corresponding portion of the exterior depth holes.
    Type: Grant
    Filed: May 20, 2013
    Date of Patent: March 21, 2017
    Assignee: Intel Corporation
    Inventors: Rohan Chandra, Abhishek Ranjan, Shahzad A. Malik
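The hole-filling step above (unknown depth pixels filled from known depth values within a threshold distance) can be illustrated in one dimension. The 1-D "image" row and the mean-of-neighbors fill are deliberate simplifications of the 2-D method the patent describes; the interior/exterior distinction from the masks is also omitted here.

```python
UNKNOWN = None  # sentinel for a depth hole (no valid depth reading)

def fill_holes(depth_row, max_dist):
    """Fill unknown depths from known values within max_dist pixels."""
    filled = list(depth_row)
    for i, d in enumerate(depth_row):
        if d is not UNKNOWN:
            continue
        # Gather known depth values within the threshold distance.
        neighbors = [depth_row[j]
                     for j in range(max(0, i - max_dist),
                                    min(len(depth_row), i + max_dist + 1))
                     if depth_row[j] is not UNKNOWN]
        if neighbors:
            # Fill the hole with the mean of nearby known depths.
            filled[i] = sum(neighbors) / len(neighbors)
    return filled

row = [1.0, 1.2, UNKNOWN, UNKNOWN, 1.4, 1.5]
print(fill_holes(row, max_dist=2))
```

Holes with no known depth inside the window are left unfilled, mirroring the idea that only areas within the threshold distance contribute to the fill.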
  • Publication number: 20160065930
    Abstract: Technologies for improving the accuracy of depth camera images include a computing device to generate a foreground mask and a background mask for an image generated by a depth camera. The computing device identifies areas of a depth image of a depth channel of the generated image having unknown depth values as one of interior depth holes or exterior depth holes based on the foreground and background masks. The computing device fills at least a portion of the interior depth holes of the depth image based on depth values of areas of the depth image within a threshold distance of the corresponding portion of the interior depth holes. Similarly, the computing device fills at least a portion of the exterior depth holes of the depth image based on depth values of areas of the depth image within the threshold distance of the corresponding portion of the exterior depth holes.
    Type: Application
    Filed: May 20, 2013
    Publication date: March 3, 2016
    Inventors: Rohan CHANDRA, Abhishek RANJAN, Shahzad A. MALIK