Patents by Inventor Jeffrey F. Cohn

Jeffrey F. Cohn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230029505
    Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target Action Unit (AU) combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image, with the target AU combinations and categories of intensity, such that the new images have AU combinations different from those of the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
    Type: Application
    Filed: August 2, 2021
    Publication date: February 2, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Koichiro NIINUMA, Jeffrey F. COHN, Laszlo A. JENI
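The augmentation pipeline this abstract describes can be sketched as a short loop. This is a minimal illustration, not the patented method: `synthesize_face` and `train_model` are hypothetical stand-ins for an AU-conditioned face generator and an expression classifier, and their names and signatures are assumptions.

```python
def augment_and_train(base_image, target_au_combos, intensity_categories,
                      dataset, synthesize_face, train_model):
    """Synthesize new images from one base face, add them to the
    dataset, and train on the result."""
    new_images = []
    for aus in target_au_combos:            # each target AU combination
        for level in intensity_categories:  # e.g. FACS-style A/B/C levels
            # Generate a face showing `aus` at intensity `level`,
            # using the subject's own image as the base.
            img = synthesize_face(base_image, aus, level)
            new_images.append((img, aus, level))
    dataset.extend(new_images)              # add synthesized samples
    return train_model(dataset)             # train the expression classifier

# Tiny stub demo: string placeholders instead of real synthesis.
demo_dataset = []
model = augment_and_train(
    base_image="face.png",
    target_au_combos=[("AU6", "AU12"), ("AU4",)],
    intensity_categories=["A", "B", "C"],
    dataset=demo_dataset,
    synthesize_face=lambda img, aus, lvl: f"{img}|{'+'.join(aus)}|{lvl}",
    train_model=lambda ds: {"n_train": len(ds)},
)
```

With two AU combinations and three intensity categories, the loop yields six synthesized samples before training.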
  • Patent number: 11557149
    Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
    Type: Grant
    Filed: August 14, 2020
    Date of Patent: January 17, 2023
    Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
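The core of this abstract is a class-balancing step: count labeled images per intensity category for each AU, then synthesize enough new images to bring under-represented categories up to the largest one. A minimal sketch, assuming a hypothetical `synthesize(au, category)` generator in place of the patent's image synthesis:

```python
from collections import Counter

def balance_intensity_categories(labels, synthesize, dataset):
    """For each AU, synthesize images until every intensity category
    matches the count of that AU's most-populated category.
    `labels` is a list of (au, intensity_category) pairs."""
    counts = Counter(labels)
    for au in {a for a, _ in labels}:
        per_cat = {cat: n for (a, cat), n in counts.items() if a == au}
        target = max(per_cat.values())          # most-populated category
        for cat, n in per_cat.items():
            for _ in range(target - n):         # number of new images needed
                dataset.append(synthesize(au, cat))
    return dataset

# Demo: 3 "high" images but only 1 "low" image for AU12.
labels = [("AU12", "high")] * 3 + [("AU12", "low")]
new = balance_intensity_categories(
    labels,
    synthesize=lambda au, cat: (au, cat, "synth"),
    dataset=[],
)
```

Here two synthetic "low"-intensity images are generated so both categories end up with three samples.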
  • Publication number: 20220051003
    Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 17, 2022
    Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
  • Patent number: 11244206
    Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: February 8, 2022
    Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
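The key idea in this abstract is that each AU gets its own normalization of the analysis image, driven by a different set of base facial features, before that AU's probability is predicted. A minimal sketch with hypothetical `normalize` and per-AU classifier callables standing in for the patent's alignment and prediction steps:

```python
def predict_au_probabilities(analysis_image, base_feature_sets,
                             normalize, classifiers):
    """Apply an AU-specific normalization before each AU's prediction.
    `base_feature_sets` maps each AU to the base-image features used
    to align the analysis image for that AU."""
    probs = {}
    for au, base_feats in base_feature_sets.items():
        aligned = normalize(analysis_image, base_feats)  # AU-specific alignment
        probs[au] = classifiers[au](aligned)             # probability of this AU
    return probs

# Stub demo: brow landmarks normalize for AU1, mouth landmarks for AU12.
probs = predict_au_probabilities(
    "frame.png",
    base_feature_sets={"AU1": "brow_landmarks", "AU12": "mouth_landmarks"},
    normalize=lambda img, feats: f"{img}@{feats}",
    classifiers={"AU1": lambda x: 0.2, "AU12": lambda x: 0.9},
)
```

The point of the two normalizations is that features useful for stabilizing the brow region (AU1) differ from those useful for the mouth region (AU12), so each classifier sees an image aligned for its own AU.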
  • Publication number: 20210073600
    Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
    Type: Application
    Filed: September 6, 2019
    Publication date: March 11, 2021
    Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
    Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
  • Patent number: 10335045
    Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues still remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the case of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to be used for estimation.
    Type: Grant
    Filed: June 23, 2017
    Date of Patent: July 2, 2019
    Assignees: Università degli Studi di Trento, Fondazione Bruno Kessler, The Research Foundation for the State University of New York, University of Pittsburgh of the Commonwealth of Higher Education
    Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
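The region-selection idea in this abstract can be illustrated with a simplified sketch. Note the assumption: instead of the matrix-completion formulation the patent describes, this toy version ranks face regions by how concentrated their spectrum is in the plausible heart-rate band, keeps the best regions, and reads the HR off their averaged spectrum.

```python
import numpy as np

def estimate_hr(region_traces, fps, keep_frac=0.5):
    """Estimate heart rate (bpm) from per-region skin-color traces.
    `region_traces` has shape (n_regions, n_frames). Regions whose
    spectral energy is most concentrated at one in-band frequency are
    kept; the rest (noisy regions) are discarded."""
    traces = region_traces - region_traces.mean(axis=1, keepdims=True)
    freqs = np.fft.rfftfreq(traces.shape[1], d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)        # 42-240 bpm
    spectra = np.abs(np.fft.rfft(traces, axis=1))
    # Score each region: in-band peak power over total in-band power.
    scores = spectra[:, band].max(axis=1) / (spectra[:, band].sum(axis=1) + 1e-9)
    n_keep = max(1, int(len(scores) * keep_frac))
    best = np.argsort(scores)[-n_keep:]           # dynamically selected regions
    mean_spec = spectra[best][:, band].mean(axis=0)
    hr_hz = freqs[band][np.argmax(mean_spec)]
    return hr_hz * 60.0                           # beats per minute

# Demo: 8 synthetic regions; only half carry a 1.2 Hz (72 bpm) pulse.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0                         # 10 s at 30 fps
pulse = np.sin(2 * np.pi * 1.2 * t)
traces = rng.normal(0.0, 0.1, (8, 300))
traces[:4] += pulse                               # pulse-bearing regions
bpm = estimate_hr(traces, fps=30)
```

The selection step is what gives robustness: regions corrupted by motion or lighting score low and are excluded before the HR is read off.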
  • Publication number: 20170367590
    Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured on face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, still many issues remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g. in case of spontaneous movements, facial expressions, or illumination changes). Opposite to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances on matrix completion theory, allows us to predict the HR while simultaneously discover the best regions of the face to be used for estimation.
    Type: Application
    Filed: June 23, 2017
    Publication date: December 28, 2017
    Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
  • Patent number: 9799096
    Abstract: A system and method for real-time image and video face de-identification that removes the identity of the subject while preserving the facial behavior is described. The facial features of the source face are replaced with those of the target face while preserving the facial actions of the source face on the target face. The facial actions of the source face are transferred to the target face using personalized Facial Action Transfer (FAT), and the color and illumination are adapted. Finally, the source image or video containing the target facial features is output for display. The system can also run in real time.
    Type: Grant
    Filed: July 8, 2015
    Date of Patent: October 24, 2017
    Assignee: CARNEGIE MELLON UNIVERSITY
    Inventors: Fernando De la Torre, Jeffrey F. Cohn, Dong Huang
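The per-frame pipeline this abstract describes (transfer the source subject's facial actions onto the target identity, adapt color and illumination, composite back) can be sketched as three chained steps. The three callables are hypothetical stand-ins for personalized Facial Action Transfer, color adaptation, and blending; their names are assumptions, not the patent's API.

```python
def deidentify_frame(source_frame, target_face,
                     transfer_actions, adapt_color, composite):
    """One frame of the de-identification pipeline."""
    # Animate the target identity with the source subject's expression.
    animated = transfer_actions(target_face, source_frame)
    # Match the target face's color/illumination to the source frame.
    matched = adapt_color(animated, source_frame)
    # Replace the source face region with the adapted target face.
    return composite(source_frame, matched)

# Stub demo with string placeholders for images.
out = deidentify_frame(
    source_frame="src_frame",
    target_face="target_id",
    transfer_actions=lambda tgt, src: f"{tgt}+expr({src})",
    adapt_color=lambda face, ref: f"{face}|lit_as({ref})",
    composite=lambda frame, face: f"{frame}<-{face}",
)
```

Running this per frame of a video stream is what makes the real-time mode mentioned in the abstract possible: each step is local to a single frame.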