Patents by Inventor Fernando De La Torre
Fernando De La Torre has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12159339
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system itself, are also provided.
Type: Grant
Filed: September 6, 2023
Date of Patent: December 3, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
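A minimal Python/NumPy sketch of the training loop this abstract describes (audio feature, lower-face mesh, threshold-gated update of the correlation value). The spectral feature, the fixed random mesh basis, and the gradient step are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def extract_facial_feature(audio_waveform: np.ndarray) -> np.ndarray:
    """Toy spectral feature from an audio waveform (any per-frame feature could stand in)."""
    spectrum = np.abs(np.fft.rfft(audio_waveform))
    return spectrum / (spectrum.sum() + 1e-8)

def generate_lower_face_mesh(feature: np.ndarray, correlation: float, n_vertices: int = 64) -> np.ndarray:
    """Map the audio feature to lower-face vertex offsets, scaled by the correlation value."""
    rng = np.random.default_rng(0)                        # fixed basis, purely for the sketch
    basis = rng.standard_normal((n_vertices, feature.size))
    return correlation * (basis @ feature)

def train_step(audio, ground_truth_mesh, correlation, lr=0.1, threshold=0.05):
    """One update of the correlation value, applied only when the error exceeds the threshold."""
    feature = extract_facial_feature(audio)
    response = generate_lower_face_mesh(feature, 1.0, ground_truth_mesh.size)   # unscaled basis response
    mesh = correlation * response
    difference = np.mean((mesh - ground_truth_mesh) ** 2)   # stand-in for the mesh-vs-ground-truth difference
    if difference > threshold:                               # update only above the pre-selected threshold
        grad = 2.0 * np.mean((mesh - ground_truth_mesh) * response)
        correlation -= lr * grad
    return correlation, difference
```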
-
Publication number: 20230419579
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system itself, are also provided.
Type: Application
Filed: September 6, 2023
Publication date: December 28, 2023
Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
-
Patent number: 11756250
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system itself, are also provided.
Type: Grant
Filed: February 10, 2022
Date of Patent: September 12, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
-
Patent number: 11734888
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Grant
Filed: August 6, 2021
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
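A hedged sketch of the three fitting pieces in this abstract: a blendshape-style mesh driven by expression and head pose, a simple illumination-based texture transformation, and a loss over selected points between a test image and the model's rendition. The linear basis and per-channel gain are assumptions for illustration only:

```python
import numpy as np

def build_mesh(expression: np.ndarray, head_pose: np.ndarray,
               basis: np.ndarray, mean_shape: np.ndarray) -> np.ndarray:
    """Linear expression model, rigidly rotated by an assumed 3x3 head-pose matrix."""
    shape = mean_shape + basis @ expression           # (n_vertices * 3,)
    return (head_pose @ shape.reshape(-1, 3).T).T     # (n_vertices, 3)

def apply_illumination(texture: np.ndarray, illumination: np.ndarray) -> np.ndarray:
    """Per-channel gain as a stand-in for the texture transformation."""
    return np.clip(texture * illumination, 0.0, 1.0)

def landmark_loss(rendered_points: np.ndarray, image_points: np.ndarray) -> float:
    """Mean distance over selected points between the model's rendition and the test image."""
    return float(np.mean(np.linalg.norm(rendered_points - image_points, axis=-1)))
```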
-
Publication number: 20220358719
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Application
Filed: August 6, 2021
Publication date: November 10, 2022
Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
-
Publication number: 20220309724
Abstract: A method for training a three-dimensional face animation model from speech is provided. The method includes determining a first correlation value for a facial feature based on an audio waveform from a first subject, generating a first mesh for a lower portion of a human face based on the facial feature and the first correlation value, updating the first correlation value when a difference between the first mesh and a ground truth image of the first subject is greater than a pre-selected threshold, and providing a three-dimensional model of the human face animated by speech to an immersive reality application accessed by a client device, based on the difference between the first mesh and the ground truth image of the first subject. A non-transitory, computer-readable medium storing instructions to cause a system to perform the above method, and the system itself, are also provided.
Type: Application
Filed: February 10, 2022
Publication date: September 29, 2022
Inventors: Alexander Richard, Michael Zollhoefer, Fernando De la Torre, Yaser Sheikh
-
Patent number: 11010944
Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image depicting a face of a user. A plurality of images depicting the face of the user can be identified. A second image of the plurality of images can be identified based on one or more factors. The face or a portion of the face of the user in the first image can be replaced with the face or a portion of the face of the user in the second image.
Type: Grant
Filed: December 28, 2017
Date of Patent: May 18, 2021
Assignee: Facebook, Inc.
Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
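An illustrative sketch of the replacement step: score candidate images of the face, pick the best one, and blend it over the face region of the first image. The sharpness factor and alpha blending are assumptions; the abstract only requires selection based on one or more factors:

```python
import numpy as np

def sharpness(gray: np.ndarray) -> float:
    """Variance of a Laplacian approximation, used here as one possible selection factor."""
    lap = (np.roll(gray, 1, 0) + np.roll(gray, -1, 0) +
           np.roll(gray, 1, 1) + np.roll(gray, -1, 1) - 4 * gray)
    return float(lap.var())

def select_replacement(candidates: list) -> np.ndarray:
    """Choose the candidate face image with the highest score."""
    return max(candidates, key=lambda img: sharpness(img.mean(axis=-1)))

def replace_face(first_image: np.ndarray, face_image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Alpha-blend the selected face over the masked region (assumes face_image is already aligned)."""
    alpha = mask[..., None].astype(float)             # (H, W, 1) weights in [0, 1]
    out = (1.0 - alpha) * first_image.astype(float) + alpha * face_image.astype(float)
    return out.astype(first_image.dtype)
```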
-
Patent number: 10573349
Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image of a first user depicting a face of the first user with a neutral expression or position. A first image of a second user depicting a face of the second user with a neutral expression or position can be identified, wherein the face of the second user is similar to the face of the first user based on satisfaction of a threshold value. A second image of the first user depicting the face of the first user with an expression different from the neutral expression or position can be generated based on a second image of the second user depicting the face of the second user with an expression or position different from the neutral expression or position.
Type: Grant
Filed: December 28, 2017
Date of Patent: February 25, 2020
Assignee: Facebook, Inc.
Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
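A sketch of the matching-and-transfer idea as described: find a second user whose neutral face is similar enough to satisfy a threshold, then apply that user's neutral-to-expression displacement to the first user. The embeddings, landmarks, and cosine-similarity threshold are assumed inputs, not the patented pipeline:

```python
import numpy as np

def find_similar_user(first_neutral_emb: np.ndarray, other_neutral_embs: list, threshold: float = 0.8):
    """Return the index of the most similar user if similarity satisfies the threshold, else None."""
    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    sims = [cosine(first_neutral_emb, e) for e in other_neutral_embs]
    best = int(np.argmax(sims))
    return best if sims[best] >= threshold else None

def transfer_expression(first_neutral_pts: np.ndarray,
                        second_neutral_pts: np.ndarray,
                        second_expression_pts: np.ndarray) -> np.ndarray:
    """Add the second user's neutral-to-expression displacement to the first user's neutral landmarks."""
    return first_neutral_pts + (second_expression_pts - second_neutral_pts)
```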
-
Publication number: 20190303909
Abstract: An authorization system comprises a point-of-sale (POS) terminal having a magnetic stripe reader. An image scanner is located in proximity to the magnetic stripe reader of the POS terminal. The image scanner is a self-contained unit comprising an image sensor to capture an image of a matrix code carrying coded payment credentials unique to a particular payment transaction, wherein the matrix code is displayed on any of: a display of a mobile device, a display of a wearable device, a printed code, or a printed gift or loyalty card. A processor decodes the coded payment credentials and formats the decoded payment credentials into magnetic stripe formatted data according to international standards. An antenna wirelessly transmits the magnetic stripe formatted data via magnetic pulses to the magnetic stripe reader of the POS terminal.
Type: Application
Filed: March 20, 2019
Publication date: October 3, 2019
Inventor: Fernando De la Torre
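For context, a rough sketch of the "format into magnetic stripe data" step, using the Track 2 layout from ISO/IEC 7813 (start sentinel, PAN, separator, expiry, service code, end sentinel). The field values and PAN are illustrative; a real terminal integration would follow the card network's exact specification and include error checking:

```python
def format_track2(pan: str, expiry_yymm: str, service_code: str = "101", discretionary: str = "") -> str:
    """Assemble a Track-2-style string from decoded payment credentials (sketch only)."""
    body = f"{pan}={expiry_yymm}{service_code}{discretionary}"
    return f";{body}?"

# Example with a hypothetical one-time PAN decoded from the matrix code:
print(format_track2("4111111111111111", "2712"))   # ;4111111111111111=2712101?
```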
-
Publication number: 20190206441
Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image of a first user depicting a face of the first user with a neutral expression or position. A first image of a second user depicting a face of the second user with a neutral expression or position can be identified, wherein the face of the second user is similar to the face of the first user based on satisfaction of a threshold value. A second image of the first user depicting the face of the first user with an expression different from the neutral expression or position can be generated based on a second image of the second user depicting the face of the second user with an expression or position different from the neutral expression or position.
Type: Application
Filed: December 28, 2017
Publication date: July 4, 2019
Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
-
Publication number: 20190205627
Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image of a user depicting a face of the user with a particular expression. Key points of the particular expression can be determined. The key points of the particular expression can be amplified. A second image of the user depicting the face of the user with an amplified version of the particular expression can be generated based on the amplified key points.
Type: Application
Filed: December 28, 2017
Publication date: July 4, 2019
Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
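A minimal sketch of key-point amplification, assuming the amplification is taken relative to a neutral reference: each key point is pushed further from its neutral position by a gain greater than one, and the amplified points would then drive image generation (not shown):

```python
import numpy as np

def amplify_key_points(expression_pts: np.ndarray, neutral_pts: np.ndarray, gain: float = 1.5) -> np.ndarray:
    """Scale each key point's displacement from neutral by the given gain (> 1 amplifies the expression)."""
    return neutral_pts + gain * (expression_pts - neutral_pts)
```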
-
Publication number: 20190206101
Abstract: Systems, methods, and non-transitory computer readable media can obtain a first image depicting a face of a user. A plurality of images depicting the face of the user can be identified. A second image of the plurality of images can be identified based on one or more factors. The face or a portion of the face of the user in the first image can be replaced with the face or a portion of the face of the user in the second image.
Type: Application
Filed: December 28, 2017
Publication date: July 4, 2019
Inventors: Fernando De la Torre, Dong Huang, Francisco Vicente Carrasco
-
Patent number: 9928405
Abstract: The present invention relates to a system for detecting and tracking facial features in images and can be used in conjunction with a camera. Given a camera, the system will detect facial landmarks in images. The present invention includes software for real-time, accurate facial feature detection and tracking in unconstrained images and videos. The present invention is more robust and faster than existing approaches and can be implemented very efficiently, allowing real-time processing even on low-power devices such as mobile phones.
Type: Grant
Filed: May 27, 2016
Date of Patent: March 27, 2018
Assignee: CARNEGIE MELLON UNIVERSITY
Inventors: Fernando De la Torre, Xuehan Xiong
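For flavor, the cascaded-regression update commonly used for real-time landmark alignment (x_{k+1} = x_k + R_k * phi(x_k)); shown as an illustration of this family of methods, not as the patented algorithm. The feature function and learned regressors are assumed inputs:

```python
import numpy as np

def refine_landmarks(image_features, landmarks: np.ndarray, regressors: list) -> np.ndarray:
    """Refine 2D landmarks with a cascade of learned linear updates.

    image_features: callable returning an appearance feature vector at the current landmarks.
    regressors: list of (R, b) pairs, one per cascade stage.
    """
    x = landmarks.reshape(-1)            # flatten (n_points, 2) -> (2 * n_points,)
    for R, b in regressors:
        phi = image_features(x)          # local features around the current estimate
        x = x + R @ phi + b              # learned descent-direction step
    return x.reshape(-1, 2)
```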
-
Patent number: 9799096
Abstract: A system and method for real-time image and video face de-identification that removes the identity of the subject while preserving the facial behavior is described. The facial features of the source face are replaced with those of the target face while preserving the facial actions of the source face on the target face. The facial actions of the source face are transferred to the target face using personalized Facial Action Transfer (FAT), and the color and illumination are adapted. Finally, the source image or video containing the target facial features is output for display. Alternatively, the system can run in real time.
Type: Grant
Filed: July 8, 2015
Date of Patent: October 24, 2017
Assignee: CARNEGIE MELLON UNIVERSITY
Inventors: Fernando De la Torre, Jeffrey F. Cohn, Dong Huang
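A hedged sketch of the de-identification flow in this abstract: keep the source face's action parameters, drive the target identity with them, and adapt color statistics. The linear action model and the mean/std color matching are assumptions for illustration:

```python
import numpy as np

def transfer_action(target_neutral_shape: np.ndarray, action_basis: np.ndarray,
                    source_action_params: np.ndarray) -> np.ndarray:
    """Drive the target face shape with the source's facial action parameters."""
    return target_neutral_shape + action_basis @ source_action_params

def match_color(target_texture: np.ndarray, source_texture: np.ndarray) -> np.ndarray:
    """Shift and scale the target texture to the source's per-channel mean and standard deviation."""
    t = target_texture.astype(float)
    s = source_texture.astype(float)
    t_norm = (t - t.mean(axis=(0, 1))) / (t.std(axis=(0, 1)) + 1e-8)
    return t_norm * s.std(axis=(0, 1)) + s.mean(axis=(0, 1))
```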
-
Patent number: 9773179
Abstract: A method for monitoring a vehicle operator can be executed by a controller and includes the following steps: (a) receiving image data of a vehicle operator's head; (b) tracking facial feature points of the vehicle operator based on the image data; (c) creating a 3D model of the vehicle operator's head based on the facial feature points in order to determine a 3D position of the vehicle operator's head; (d) determining a gaze direction of the vehicle operator based on a position of the facial feature points and the 3D model of the vehicle operator's head; (e) determining a gaze vector based on the gaze direction and the 3D position of the vehicle operator's head; and (f) commanding an indicator to activate when the gaze vector is outside a predetermined parameter.
Type: Grant
Filed: January 26, 2016
Date of Patent: September 26, 2017
Assignee: GM Global Technology Operations LLC
Inventors: Francisco Vicente, Zehua Huang, Xuehan Xiong, Fernando De La Torre, Wende Zhang, Dan Levi, Debbie E. Nachtegall
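A sketch of steps (d) through (f): combine the gaze direction with the 3D head position into a gaze vector and command the indicator when that vector leaves an allowed cone around the road axis. The cone half-angle and road direction are illustrative stand-ins for the "predetermined parameter":

```python
import numpy as np

def gaze_vector(gaze_direction: np.ndarray, head_position_3d: np.ndarray):
    """Ray with origin at the operator's 3D head position and a unit gaze direction."""
    d = gaze_direction / np.linalg.norm(gaze_direction)
    return head_position_3d, d

def should_activate_indicator(gaze_dir: np.ndarray, road_dir: np.ndarray, max_angle_deg: float = 20.0) -> bool:
    """True when the gaze deviates from the road axis by more than the allowed angle."""
    cos_angle = float(gaze_dir @ road_dir / (np.linalg.norm(gaze_dir) * np.linalg.norm(road_dir)))
    return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))) > max_angle_deg
```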
-
Patent number: 9659210
Abstract: The present invention relates to a system for detecting and tracking facial features in images and can be used in conjunction with a camera. Given a camera, the system will detect facial landmarks in images. The present invention includes software for real-time, accurate facial feature detection and tracking in unconstrained images and videos. The present invention is more robust and faster than existing approaches and can be implemented very efficiently, allowing real-time processing even on low-power devices such as mobile phones.
Type: Grant
Filed: January 13, 2015
Date of Patent: May 23, 2017
Assignee: Carnegie Mellon University
Inventors: Fernando De la Torre, Xuehan Xiong
-
Publication number: 20160275339
Abstract: The present invention relates to a system for detecting and tracking facial features in images and can be used in conjunction with a camera. Given a camera, the system will detect facial landmarks in images. The present invention includes software for real-time, accurate facial feature detection and tracking in unconstrained images and videos. The present invention is more robust and faster than existing approaches and can be implemented very efficiently, allowing real-time processing even on low-power devices such as mobile phones.
Type: Application
Filed: May 27, 2016
Publication date: September 22, 2016
Inventors: Fernando De la Torre, Xuehan Xiong
-
Publication number: 20160224852
Abstract: A method for monitoring a vehicle operator can be executed by a controller and includes the following steps: (a) receiving image data of a vehicle operator's head; (b) tracking facial feature points of the vehicle operator based on the image data; (c) creating a 3D model of the vehicle operator's head based on the facial feature points in order to determine a 3D position of the vehicle operator's head; (d) determining a gaze direction of the vehicle operator based on a position of the facial feature points and the 3D model of the vehicle operator's head; (e) determining a gaze vector based on the gaze direction and the 3D position of the vehicle operator's head; and (f) commanding an indicator to activate when the gaze vector is outside a predetermined parameter.
Type: Application
Filed: January 26, 2016
Publication date: August 4, 2016
Applicants: GM GLOBAL TECHNOLOGY OPERATIONS LLC, Carnegie Mellon University
Inventors: Francisco Vicente, Zehua Huang, Xuehan Xiong, Fernando De La Torre, Wende Zhang, Dan Levi, Debbie E. Nachtegall
-
Patent number: 9405982
Abstract: A method for detecting an eyes-off-the-road condition based on an estimated gaze direction of a driver of a vehicle includes monitoring facial feature points of the driver within image input data captured by an in-vehicle camera device. A location for each of a plurality of eye features for an eyeball of the driver is detected based on the monitored facial feature points. A head pose of the driver is estimated based on the monitored facial feature points. The gaze direction of the driver is estimated based on the detected location for each of the plurality of eye features and the estimated head pose.
Type: Grant
Filed: September 30, 2013
Date of Patent: August 2, 2016
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Wende Zhang, Dan Levi, Debbie E. Nachtegall, Fernando De La Torre, Dong Huang
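An illustrative way to combine detected eye features with the estimated head pose into a gaze direction: a vector from the eyeball center to the iris center, rotated into the camera frame by the head-pose matrix. The geometry is simplified and the inputs are assumed already estimated:

```python
import numpy as np

def estimate_gaze(eyeball_center: np.ndarray, iris_center: np.ndarray, head_rotation: np.ndarray) -> np.ndarray:
    """Unit gaze direction in the camera frame from eye features and a 3x3 head-pose rotation."""
    gaze_head_frame = iris_center - eyeball_center          # gaze in the head's coordinate frame
    gaze_camera_frame = head_rotation @ gaze_head_frame     # express it in the camera frame
    return gaze_camera_frame / np.linalg.norm(gaze_camera_frame)
```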
-
Patent number: 9230180
Abstract: A method for determining whether an Eyes-Off-The-Road (EOTR) condition exists includes capturing image data corresponding to a driver from a monocular camera device. Whether the driver is wearing eyeglasses is detected based on the image data using an eyeglasses classifier. When it is detected that the driver is wearing eyeglasses, a driver face location is detected from the captured image data, and it is determined whether the EOTR condition exists based on the driver face location using an EOTR classifier.
Type: Grant
Filed: September 30, 2013
Date of Patent: January 5, 2016
Assignees: GM GLOBAL TECHNOLOGY OPERATIONS LLC, CARNEGIE MELLON UNIVERSITY
Inventors: Wende Zhang, Dan Levi, Debbie E. Nachtegall, Fernando De la Torre, Francisco Vicente
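A sketch of the branching this abstract describes: an eyeglasses classifier gates whether the EOTR decision is made from the detected face location. The classifiers and detector are assumed callables; the no-glasses branch is not specified in the abstract and is left as a placeholder:

```python
def eotr_condition_exists(image, glasses_classifier, face_detector, eotr_classifier) -> bool:
    """Return True when an Eyes-Off-The-Road condition is detected (illustrative control flow only)."""
    if glasses_classifier(image):                 # driver appears to be wearing eyeglasses
        face_location = face_detector(image)      # detect the driver's face location in the image
        return bool(eotr_classifier(face_location))
    return False                                  # placeholder: the abstract does not describe this branch
```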