Patents by Inventor Jeffrey F. Cohn
Jeffrey F. Cohn has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12073655
Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target Action Unit (AU) combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image, with the number of target AU combinations and categories of intensity, such that the new images have different AU combinations than the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
Type: Grant
Filed: August 2, 2021
Date of Patent: August 27, 2024
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Jeffrey F. Cohn, Laszlo A. Jeni
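The augmentation step this abstract describes (one new image per target AU combination and intensity category, each synthesized from the subject's base image) can be sketched as a planning loop. The function name and job format below are illustrative assumptions, not from the patent:

```python
from itertools import product

# Hypothetical sketch: enumerate the synthesis jobs implied by the claim --
# one new image per (target AU combination, intensity category) pair,
# each using the subject's facial image as the base image.
def plan_synthesis_jobs(base_image_id, target_au_combinations, intensity_categories):
    """Return one synthesis job per AU-combination/intensity pair."""
    return [
        {"base": base_image_id, "aus": aus, "intensity": level}
        for aus, level in product(target_au_combinations, intensity_categories)
    ]

jobs = plan_synthesis_jobs(
    "subject_01.png",
    target_au_combinations=[(1, 2), (4,), (6, 12)],  # e.g. AU6+AU12 = Duchenne smile
    intensity_categories=["A", "B", "C"],            # FACS-style intensity levels
)
print(len(jobs))  # 3 combinations x 3 categories = 9 jobs
```

Each job would then be passed to an image synthesizer and the results added to the training dataset, as the abstract describes.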
-
Publication number: 20230029505
Abstract: A method may include obtaining a facial image of a subject and identifying a number of new images to be synthesized with target AU combinations and categories of intensity. The method may also include synthesizing the number of new images, using the facial image of the subject as the base image, with the number of target AU combinations and categories of intensity, such that the new images have different AU combinations than the facial image of the subject. The method may additionally include adding the number of new images to a dataset and training a machine learning system using the dataset to identify a facial expression of the subject.
Type: Application
Filed: August 2, 2021
Publication date: February 2, 2023
Applicant: FUJITSU LIMITED
Inventors: Koichiro NIINUMA, Jeffrey F. COHN, Laszlo A. JENI
-
Patent number: 11557149
Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
Type: Grant
Filed: August 14, 2020
Date of Patent: January 17, 2023
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
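The balancing logic in this abstract (synthesize new images for whichever intensity category of an AU has fewer labeled images) can be sketched as a counting pass over the dataset labels. This is a simplified reading, with illustrative names; the abstract does not specify the exact target count, so the sketch assumes each category is raised to the largest category's size:

```python
from collections import Counter

# Hedged sketch of the balancing step: for each AU, count the labeled images
# in each intensity category, then schedule enough synthetic images to bring
# every underrepresented category up to the largest category's count.
def images_to_synthesize(labels):
    """labels: list of (au, intensity_category) pairs from the dataset."""
    counts = Counter(labels)
    plan = {}
    for au in {a for a, _ in counts}:
        per_cat = {cat: n for (a, cat), n in counts.items() if a == au}
        target = max(per_cat.values())
        for cat, n in per_cat.items():
            if n < target:
                plan[(au, cat)] = target - n  # number of new images needed
    return plan

# 40 images of AU12 at intensity B, but only 10 at intensity D:
labels = [("AU12", "B")] * 40 + [("AU12", "D")] * 10
print(images_to_synthesize(labels))  # {('AU12', 'D'): 30}
```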
-
Publication number: 20220051003
Abstract: A method may include obtaining a dataset including a target Action Unit (AU) combination and labeled images of the target AU combination with at least a first category of intensity for each AU of the target AU combination and a second category of intensity for each AU of the target AU combination. The method may also include determining that the first category of intensity for a first AU has a higher number of labeled images than the second category of intensity for the first AU, and based on the determination, identifying a number of new images to be synthesized in the second category of intensity for the first AU. The method may additionally include synthesizing the number of new images with the second category of intensity for the first AU, and adding the new images to the dataset.
Type: Application
Filed: August 14, 2020
Publication date: February 17, 2022
Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
-
Patent number: 11244206
Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
Type: Grant
Filed: September 6, 2019
Date of Patent: February 8, 2022
Assignees: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro Niinuma, Laszlo A. Jeni, Itir Onal Ertugrul, Jeffrey F. Cohn
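The key idea here is that the same analysis image is normalized twice, once per AU, each time anchored to a different set of base facial features. A minimal sketch, assuming a simple translate-and-scale normalization over landmark coordinates (the patent does not specify the transform):

```python
import numpy as np

# Assumed sketch: AU-specific normalization, where each AU gets its own
# reference feature set from the base image. "Normalization" here is a
# simple centering and scaling of landmarks relative to those references.
def normalize_for_au(landmarks, base_features):
    ref = np.asarray(base_features, dtype=float)
    pts = np.asarray(landmarks, dtype=float)
    center = ref.mean(axis=0)
    scale = np.linalg.norm(ref - center, axis=1).mean()
    return (pts - center) / scale

eye_features = [(30.0, 40.0), (70.0, 40.0)]    # e.g. features for an upper-face AU
mouth_features = [(40.0, 80.0), (60.0, 80.0)]  # e.g. features for a lower-face AU
analysis_point = [(50.0, 60.0)]

# The same analysis landmark is normalized differently per AU to be detected.
upper = normalize_for_au(analysis_point, eye_features)
lower = normalize_for_au(analysis_point, mouth_features)
```

Each normalized version would then feed the predictor for its corresponding AU, as the abstract describes.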
-
Publication number: 20210073600
Abstract: A method may include obtaining a base facial image, and obtaining a first set of base facial features within the base facial image, the first set of base facial features associated with a first facial AU to be detected in an analysis facial image. The method may also include obtaining a second set of base facial features within the base facial image, the second set of facial features associated with a second facial AU to be detected. The method may include obtaining the analysis facial image, and applying a first image normalization to the analysis facial image using the first set of base facial features to facilitate prediction of a probability of the first facial AU. The method may include applying a second image normalization to the analysis facial image using the second set of base facial features to facilitate prediction of a probability of the second facial AU.
Type: Application
Filed: September 6, 2019
Publication date: March 11, 2021
Applicants: FUJITSU LIMITED, CARNEGIE MELLON UNIVERSITY
Inventors: Koichiro NIINUMA, Laszlo A. JENI, Itir Onal ERTUGRUL, Jeffrey F. COHN
-
Patent number: 10335045
Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the case of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to be used for estimation.
Type: Grant
Filed: June 23, 2017
Date of Patent: July 2, 2019
Assignees: Universita degli Studi Di Trento, Fondazione Bruno Kessler, The Research Foundation for the State University of New York, University of Pittsburgh of the Commonwealth of Higher Education
Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
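The core idea (dynamically keep only the face regions whose temporal color signal is useful for HR estimation) can be illustrated with a much simpler stand-in: score each region's trace by its spectral peak in the plausible heart-rate band and estimate HR from the best regions. The patent's actual formulation uses matrix completion; the band-power selection below is only an assumed simplification:

```python
import numpy as np

# Simplified stand-in for the region-selection idea: score each face region's
# color trace by how strongly its spectrum peaks in the heart-rate band,
# keep the best-scoring regions, and estimate HR from their mean trace.
def estimate_hr(region_traces, fps, hr_band=(0.7, 4.0), keep=3):
    traces = np.asarray(region_traces, dtype=float)
    traces = traces - traces.mean(axis=1, keepdims=True)
    freqs = np.fft.rfftfreq(traces.shape[1], d=1.0 / fps)
    band = (freqs >= hr_band[0]) & (freqs <= hr_band[1])
    spectra = np.abs(np.fft.rfft(traces, axis=1))
    scores = spectra[:, band].max(axis=1)   # peak power in HR band per region
    best = np.argsort(scores)[-keep:]       # dynamically selected regions
    combined = traces[best].mean(axis=0)
    peak = np.abs(np.fft.rfft(combined))[band].argmax()
    return 60.0 * freqs[band][peak]         # beats per minute

# Synthetic example: four regions carrying a 1.2 Hz (72 bpm) pulse plus
# noise, and four pure-noise regions that should be rejected.
fps, t = 30.0, np.arange(300) / 30.0
pulse = np.sin(2 * np.pi * 1.2 * t)
rng = np.random.default_rng(0)
regions = [pulse + 0.1 * rng.standard_normal(t.size) for _ in range(4)]
regions += [rng.standard_normal(t.size) for _ in range(4)]
print(round(estimate_hr(regions, fps)))  # 72
```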
-
Publication number: 20170367590
Abstract: Recent studies in computer vision have shown that, while practically invisible to a human observer, skin color changes due to blood flow can be captured in face videos and, surprisingly, be used to estimate the heart rate (HR). While considerable progress has been made in the last few years, many issues remain open. In particular, state-of-the-art approaches are not robust enough to operate in natural conditions (e.g., in the case of spontaneous movements, facial expressions, or illumination changes). In contrast to previous approaches that estimate the HR by processing all the skin pixels inside a fixed region of interest, we introduce a strategy to dynamically select face regions useful for robust HR estimation. The present approach, inspired by recent advances in matrix completion theory, allows us to predict the HR while simultaneously discovering the best regions of the face to be used for estimation.
Type: Application
Filed: June 23, 2017
Publication date: December 28, 2017
Inventors: Niculae Sebe, Xavier Alameda-Pineda, Sergey Tulyakov, Elisa Ricci, Lijun Yin, Jeffrey F. Cohn
-
Patent number: 9799096
Abstract: A system and method for real-time image and video face de-identification that removes the identity of the subject while preserving the facial behavior is described. The facial features of the source face are replaced with those of the target face while preserving the facial actions of the source face on the target face. The facial actions of the source face are transferred to the target face using personalized Facial Action Transfer (FAT), and the color and illumination are adapted. Finally, the source image or video containing the target facial features is output for display. The system can also run in real time.
Type: Grant
Filed: July 8, 2015
Date of Patent: October 24, 2017
Assignee: CARNEGIE MELLON UNIVERSITY
Inventors: Fernando De la Torre, Jeffrey F. Cohn, Dong Huang
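The color and illumination adaptation mentioned in this abstract can be sketched as per-channel statistic matching: after the target's features replace the source's, the target pixels are adjusted to the source frame's mean and standard deviation. This simple form is an assumption for illustration; the patent does not disclose its exact adaptation:

```python
import numpy as np

# Hedged sketch of the color/illumination adaptation step: match the
# target face pixels to the source frame's per-channel mean and standard
# deviation, so the swapped-in face blends into the source video.
def adapt_color(target_pixels, source_pixels):
    t = np.asarray(target_pixels, dtype=float)
    s = np.asarray(source_pixels, dtype=float)
    t_mu, t_sd = t.mean(axis=0), t.std(axis=0) + 1e-8
    s_mu, s_sd = s.mean(axis=0), s.std(axis=0)
    return (t - t_mu) / t_sd * s_sd + s_mu  # take on source statistics

# Toy RGB pixel sets (rows are pixels, columns are R, G, B):
source = np.array([[100.0, 80.0, 60.0], [120.0, 90.0, 70.0]])
target = np.array([[200.0, 10.0, 50.0], [220.0, 30.0, 90.0]])
adapted = adapt_color(target, source)
# adapted now has the source's per-channel mean and standard deviation
```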