Patents by Inventor Elif Albuz

Elif Albuz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
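Short, illustrative code sketches of several of the techniques described in these abstracts follow the listing.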

  • Publication number: 20240078754
    Abstract: In one embodiment, a computing system may access a first image of a first portion of a face of a user captured by a first camera from a first viewpoint and a second image of a second portion of the face captured by a second camera from a second viewpoint. The system may generate, using a machine-learning model and the first and second images, a synthesized image corresponding to a third portion of the face of the user as viewed from a third viewpoint. The system may access a three-dimensional (3D) facial model representative of the face and generate a texture image for the face by projecting at least the synthesized image onto the 3D facial model from a predetermined camera pose corresponding to the third viewpoint. The system may cause an output image to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: October 31, 2023
    Publication date: March 7, 2024
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11842442
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Grant
    Filed: December 22, 2022
    Date of Patent: December 12, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11734952
    Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
    Type: Grant
    Filed: August 22, 2022
    Date of Patent: August 22, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
  • Publication number: 20230206560
    Abstract: In one embodiment, one or more computing systems may access a plurality of images corresponding to a portion of a face of a user. The plurality of images is captured from different viewpoints by a plurality of cameras coupled to an artificial-reality system worn by the user. The one or more computing systems may use a machine-learning model to generate a synthesized image corresponding to the portion of the face of the user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the user and generate a texture image by projecting the synthesized image onto the 3D facial model from a specific camera pose. The one or more computing systems may cause an output image of a facial representation of the user to be rendered using at least the 3D facial model and the texture image.
    Type: Application
    Filed: December 22, 2022
    Publication date: June 29, 2023
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11562535
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: January 24, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11423692
    Abstract: Examples described herein include systems for reconstructing facial image data from partial frame data and landmark data. Systems for generating the partial frame data and landmark data are described. Neural networks may be used to reconstruct the facial image data and/or generate the partial frame data. In this manner, compression of facial image data may be achieved in some examples.
    Type: Grant
    Filed: October 24, 2019
    Date of Patent: August 23, 2022
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Elif Albuz, Bryan Michael Anenberg, Peihong Guo, Ronit Kassis
  • Publication number: 20220092853
    Abstract: In one embodiment, one or more computing systems may receive an image of a portion of a face of a first user. The one or more computing systems may access a three-dimensional (3D) facial model representative of the face of the first user. The one or more computing systems may identify one or more facial features captured in the image and determine a camera pose relative to the 3D facial model based on the identified one or more facial features in the image and predetermined feature locations on the 3D facial model. The one or more computing systems may determine a mapping relationship between the image and the 3D facial model by projecting the image of the portion of the face of the first user onto the 3D facial model from the camera pose and cause an output image of a facial representation of the first user to be rendered.
    Type: Application
    Filed: September 22, 2020
    Publication date: March 24, 2022
    Inventors: James Allan Booth, Elif Albuz, Peihong Guo, Tong Xiao
  • Patent number: 11217036
    Abstract: An avatar personalization engine can generate personalized avatars for a user by creating a 3D user model based on one or more images of the user. The avatar personalization engine can compute a delta between the 3D user model and an average person model, which is a model created based on the average measurements from multiple people. The avatar personalization engine can then apply the delta to a generic avatar model by changing measurements of particular features of the generic avatar model by amounts specified for corresponding features identified in the delta. This personalizes the generic avatar model to resemble the user. Additional features matching characteristics of the user can be added to further personalize the avatar model, such as a hairstyle, eyebrow geometry, facial hair, glasses, etc.
    Type: Grant
    Filed: October 7, 2019
    Date of Patent: January 4, 2022
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Chad Vernon, Shu Liang, Peihong Guo
  • Patent number: 10970907
    Abstract: Disclosed herein are a system, a method, and a non-transitory computer readable medium for applying an expression to an avatar. In one aspect, a class of an expression of a face can be determined according to a set of attributes indicating states of portions of the face. In one aspect, a set of blendshapes with respective weights corresponding to the expression of the face can be determined according to the class of the expression of the face. In one aspect, the set of blendshapes with respective weights can be provided as an input to train a machine learning model. In one aspect, the machine learning model can be configured, via training, to generate an output set of blendshapes with respective weights, according to an input image. An image of an avatar may be rendered according to the output set of blendshapes with respective weights.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: April 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Melinda Ozel, Tong Xiao, Sidi Fu
  • Patent number: 10755463
    Abstract: In one embodiment, a method includes receiving an audio signal comprising a plurality of speech units, processing the audio signal to associate each of the speech units with a corresponding lip animation, determining pitch information associated with each of the plurality of speech units, processing the pitch information of each of the plurality of speech units to associate at least one of the speech units with a facial-component animation, and presenting the audio signal with a displayed animation of a face, wherein the animation of the face displays the lip animation associated with each of the speech units and the facial-component animation associated with the at least one speech unit. The animation of the face may be displayed in real time with the audio signal. The facial-component animation may include animation of the lips, eyebrows, eyelids, and other portions of the upper face.
    Type: Grant
    Filed: July 20, 2018
    Date of Patent: August 25, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Elif Albuz, Bryan Michael Anenberg, Colin Lea, Samuel Alan Johnson, Nikita Shulga, Xingze He
  • Patent number: 9971959
    Abstract: In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the parallel processing unit (PPU) than conventional approaches. In particular, the PPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: May 15, 2018
    Assignee: NVIDIA Corporation
    Inventors: Mateusz Jerzy Baranowski, Shalini Gupta, Elif Albuz
  • Patent number: 9747527
    Abstract: In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the PPU than conventional approaches. In particular, the PPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier.
    Type: Grant
    Filed: September 17, 2013
    Date of Patent: August 29, 2017
    Assignee: NVIDIA Corporation
    Inventors: Mateusz Jerzy Baranowski, Shalini Gupta, Elif Albuz
  • Publication number: 20140369554
    Abstract: A face beautification system and a method of face beautification. One embodiment of the face beautification system includes: (1) a coarse feature detector configured to generate an approximation of facial features in an image, (2) an edge-preserving filter configured to reduce distortions in the approximation, and (3) a feature enhancer operable to selectively filter a facial feature from said approximation and carry out an enhancement.
    Type: Application
    Filed: September 19, 2013
    Publication date: December 18, 2014
    Applicant: Nvidia Corporation
    Inventors: Elif Albuz, Colin Tracey, Navjot Garg, Yun-Ta Tsai, Dawid Pajak
  • Publication number: 20140270364
    Abstract: In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the PPU than conventional approaches. In particular, the PPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier.
    Type: Application
    Filed: September 17, 2013
    Publication date: September 18, 2014
    Applicant: NVIDIA Corporation
    Inventors: Mateusz Jerzy Baranowski, Shalini Gupta, Elif Albuz
  • Publication number: 20140270551
    Abstract: In one embodiment of the present invention, a graphics processing unit (GPU) is configured to detect an object in an image using a random forest classifier that includes multiple, identically structured decision trees. Notably, the application of each of the decision trees is independent of the application of the other decision trees. In operation, the GPU partitions the image into subsets of pixels, and associates an execution thread with each of the pixels in the subset of pixels. The GPU then causes each of the execution threads to apply the random forest classifier to the associated pixel, thereby determining a likelihood that the pixel corresponds to the object. Advantageously, such a distributed approach to object detection more fully leverages the parallel architecture of the PPU than conventional approaches. In particular, the PPU performs object detection more efficiently using the random forest classifier than using a cascaded classifier.
    Type: Application
    Filed: September 17, 2013
    Publication date: September 18, 2014
    Applicant: NVIDIA Corporation
    Inventors: Mateusz Jerzy Baranowski, Shalini Gupta, Elif Albuz
  • Patent number: 8442327
    Abstract: A method for more efficiently detecting faces in images is disclosed. The integral image of an image may be calculated. The integral image may be sub-sampled to generate one or more sub-sampled integral images. A plurality of classifiers may be applied in one or more stages to regions of each sub-sampled integral image, where the application of the classifiers may produce classification data. The classification data may be used to determine if a face is associated with any of the regions of each sub-sampled integral image. The face determination results may be used to modify the original image such that, when rendered, the image is displayed with a graphical object identifying the face in the image. Accordingly, face detection processing efficiency may be increased by reducing the number of integral image calculations and processing localized data through application of classifiers to sub-sampled integral images.
    Type: Grant
    Filed: November 21, 2008
    Date of Patent: May 14, 2013
    Assignee: Nvidia Corporation
    Inventors: Ismail Oner Sebe, Elif Albuz
  • Patent number: 8411751
    Abstract: A method includes projecting motion vectors describing a transformation from a previous video frame to a future video frame onto a plane between the previous video frame and the future video frame, detecting potential artifacts at the plane based on an intersection of a cover region and an uncover region on the plane, and analyzing a dissimilarity between a trial video frame and both the previous video frame and the future video frame. The trial video frame is generated between the previous video frame and the future video frame based on a frame rate conversion ratio derived from a source frame rate and a desired frame rate. The method also includes estimating reliability of the projected motion vectors based on the potential artifact detection and the dissimilarity analysis.
    Type: Grant
    Filed: December 15, 2009
    Date of Patent: April 2, 2013
    Assignee: Nvidia Corporation
    Inventors: Elif Albuz, Tarik Arici, Robert Jan Schutten, Santanu Dutta
  • Patent number: 8175160
    Abstract: A system, method, and computer program product are provided for refining motion vectors. In operation, a plurality of motion vectors associated with a current frame and a first resolution are created. Furthermore, the motion vectors are refined utilizing information including at least one of first information describing motion vectors associated with a previous frame and second information describing motion vectors associated with the current frame and a second resolution.
    Type: Grant
    Filed: June 9, 2008
    Date of Patent: May 8, 2012
    Assignee: NVIDIA Corporation
    Inventors: Tarik Arici, Elif Albuz
  • Publication number: 20110141349
    Abstract: A method includes projecting motion vectors describing a transformation from a previous video frame to a future video frame onto a plane between the previous video frame and the future video frame, detecting potential artifacts at the plane based on an intersection of a cover region and an uncover region on the plane, and analyzing a dissimilarity between a trial video frame and both the previous video frame and the future video frame. The trial video frame is generated between the previous video frame and the future video frame based on a frame rate conversion ratio derived from a source frame rate and a desired frame rate. The method also includes estimating reliability of the projected motion vectors based on the potential artifact detection and the dissimilarity analysis.
    Type: Application
    Filed: December 15, 2009
    Publication date: June 16, 2011
    Inventors: Elif Albuz, Tarik Arici, Robert Jan Schutten, Santanu Dutta
  • Publication number: 20100128993
    Abstract: A method for more efficiently detecting faces in images is disclosed. The integral image of an image may be calculated. The integral image may be sub-sampled to generate one or more sub-sampled integral images. A plurality of classifiers may be applied in one or more stages to regions of each sub-sampled integral image, where the application of the classifiers may produce classification data. The classification data may be used to determine if a face is associated with any of the regions of each sub-sampled integral image. The face determination results may be used to modify the original image such that, when rendered, the image is displayed with a graphical object identifying the face in the image. Accordingly, face detection processing efficiency may be increased by reducing the number of integral image calculations and processing localized data through application of classifiers to sub-sampled integral images.
    Type: Application
    Filed: November 21, 2008
    Publication date: May 27, 2010
    Applicant: NVIDIA Corporation
    Inventors: Ismail Oner Sebe, Elif Albuz
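
Code sketches for selected techniques

The sketches below are loose, simplified readings of the abstracts above, not the patented implementations; every function name, parameter, and data layout they introduce is an assumption made for illustration.

Patent 11842442 and the related applications 20240078754 and 20230206560 generate a texture image by projecting a synthesized face image onto a 3D facial model from a known camera pose. A minimal sketch of that projection step, assuming a pinhole camera, known intrinsics and extrinsics, and hypothetical per-vertex position and UV arrays:

```python
import numpy as np

def project_to_texture(image, vertices, uvs, K, R, t, tex_size=256):
    """Project a camera image onto a 3D face mesh to build a UV texture image.

    image    : (H, W, 3) captured or synthesized image
    vertices : (N, 3) mesh vertex positions in world space
    uvs      : (N, 2) per-vertex texture coordinates in [0, 1]
    K        : (3, 3) camera intrinsics; R (3, 3) and t (3,) camera extrinsics
    """
    H, W, _ = image.shape
    cam = vertices @ R.T + t                       # vertices in camera space
    pix = cam @ K.T                                # pinhole projection
    pix = pix[:, :2] / pix[:, 2:3]                 # perspective divide
    # Keep vertices that project inside the image and lie in front of the camera.
    ok = (cam[:, 2] > 0) & (pix[:, 0] >= 0) & (pix[:, 0] < W - 1) \
         & (pix[:, 1] >= 0) & (pix[:, 1] < H - 1)
    texture = np.zeros((tex_size, tex_size, 3), dtype=image.dtype)
    u = (uvs[ok, 0] * (tex_size - 1)).astype(int)
    v = (uvs[ok, 1] * (tex_size - 1)).astype(int)
    # Nearest-neighbour splat: each visible vertex writes the colour it sees.
    texture[v, u] = image[pix[ok, 1].astype(int), pix[ok, 0].astype(int)]
    return texture
```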
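
Patent 11562535 (published as 20220092853) determines a camera pose relative to the 3D facial model from facial features identified in an image and predetermined feature locations on the model. As a simplified stand-in for that step, the sketch below rigidly aligns known model-space feature points to the corresponding points expressed in camera space using the Kabsch / orthogonal Procrustes solution, rather than a full perspective pose solver:

```python
import numpy as np

def estimate_pose_rigid(model_pts, camera_pts):
    """Rotation R and translation t such that camera_pts ~= model_pts @ R.T + t.

    model_pts  : (K, 3) predetermined feature locations on the facial model
    camera_pts : (K, 3) the same features observed in camera space
    """
    mu_m, mu_c = model_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (model_pts - mu_m).T @ (camera_pts - mu_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection sneaking into the least-squares rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_c - R @ mu_m
    return R, t
```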
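
Patents 11734952 and 11423692 compress facial video by transmitting partial frame data together with landmark data and reconstructing the full facial image, with neural networks used for reconstruction and/or for producing the partial frames. A toy encode/decode split showing the data flow; the crop size, the (row, column) landmark layout, and the `reconstructor` callable standing in for the reconstruction network are all assumptions:

```python
import numpy as np

def encode(frame, landmarks, crop=64):
    """Sender side: keep a small crop centred on the landmarks plus the
    landmark coordinates themselves, instead of the full frame."""
    cy, cx = np.maximum(landmarks.mean(axis=0).astype(int), crop // 2)
    partial = frame[cy - crop // 2: cy + crop // 2,
                    cx - crop // 2: cx + crop // 2]
    return partial, landmarks              # far less data than the full frame

def decode(partial, landmarks, reconstructor):
    """Receiver side: a reconstruction network (here any callable stand-in)
    rebuilds the full facial image from the partial frame and landmarks."""
    return reconstructor(partial, landmarks)
```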
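
Patent 11217036 personalizes a generic avatar by computing a delta between a 3D model of the user and an average-person model, then applying that delta to the corresponding features of the generic avatar. The feature names and scalar measurements below are illustrative only:

```python
def personalize_avatar(user, average, generic_avatar):
    """Apply the user-vs-average delta to the generic avatar's measurements.
    All three arguments are hypothetical {feature_name: measurement} dicts."""
    avatar = dict(generic_avatar)
    for feature, user_value in user.items():
        delta = user_value - average[feature]          # how the user differs
        avatar[feature] = avatar.get(feature, 0.0) + delta
    return avatar

# Example: the user's jaw is 1.0 unit wider than average, so widen the avatar's.
print(personalize_avatar({"jaw_width": 13.5}, {"jaw_width": 12.5},
                         {"jaw_width": 12.0}))         # {'jaw_width': 13.0}
```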
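
Patent 10970907 classifies a facial expression from a set of attributes, maps the class to a set of blendshape weights, and uses those weighted blendshapes as training targets for a model that predicts blendshapes from an input image. A toy version of the attribute-to-class and class-to-blendshape lookups; the attribute names, classes, and weights are made up for illustration:

```python
def classify_expression(attributes):
    """Pick an expression class from simple facial-state attributes."""
    if attributes.get("mouth_corners") == "up" and attributes.get("eyes") == "open":
        return "smile"
    if attributes.get("brows") == "raised":
        return "surprise"
    return "neutral"

# Per-class blendshape weights of the kind that could serve as training targets.
BLENDSHAPE_TARGETS = {
    "smile":    {"mouthSmile": 0.8, "cheekRaise": 0.4},
    "surprise": {"browUp": 0.9, "jawOpen": 0.5},
    "neutral":  {},
}

label = classify_expression({"mouth_corners": "up", "eyes": "open"})
print(label, BLENDSHAPE_TARGETS[label])   # smile {'mouthSmile': 0.8, 'cheekRaise': 0.4}
```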
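
Patent 10755463 associates each speech unit in an audio signal with a lip animation and uses the pitch of the speech units to drive facial-component animation such as the eyebrows. The phoneme-to-viseme table and the single pitch threshold below are stand-ins for whatever mapping the actual system uses:

```python
# Hypothetical phoneme-to-viseme table; real speech units could differ.
VISEMES = {"AA": "open", "M": "closed", "F": "lip_bite", "S": "narrow"}

def animate(speech_units, pitches_hz, raised_pitch_hz=220.0):
    """Produce one animation frame per speech unit: a lip shape from the
    viseme table plus an upper-face component driven by pitch."""
    frames = []
    for unit, pitch in zip(speech_units, pitches_hz):
        frame = {"lips": VISEMES.get(unit, "neutral")}
        if pitch > raised_pitch_hz:        # higher pitch -> raise the eyebrows
            frame["brows"] = "raised"
        frames.append(frame)
    return frames

print(animate(["M", "AA", "S"], [180.0, 240.0, 200.0]))
```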
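
Patents 9971959 and 9747527 (and the corresponding applications 20140270364 and 20140270551) evaluate a random forest of identically structured decision trees at every pixel, with one GPU execution thread per pixel. The per-thread work is sketched below as an ordinary Python function over a hypothetical flat-array tree layout; on the GPU each call would be one thread rather than part of a loop over pixels:

```python
import numpy as np

def eval_forest_at_pixel(img, x, y, feat_off, thresh, left, right, leaf_prob):
    """Return the forest's object probability at pixel (x, y).

    Hypothetical layout, one row per tree t and one column per node n:
      feat_off[t, n]  : (dy, dx) pixel offset probed at node n
      thresh[t, n]    : split threshold on the probed pixel value
      left/right[t, n]: child node indices, -1 when node n is a leaf
      leaf_prob[t, n] : object probability stored at leaf n
    """
    h, w = img.shape
    total = 0.0
    for t in range(feat_off.shape[0]):
        node = 0
        while left[t, node] >= 0:                        # descend to a leaf
            dy, dx = feat_off[t, node]
            px = img[min(max(y + dy, 0), h - 1), min(max(x + dx, 0), w - 1)]
            node = left[t, node] if px < thresh[t, node] else right[t, node]
        total += leaf_prob[t, node]
    return total / feat_off.shape[0]                     # average over trees
```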
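
Application 20140369554 structures face beautification as a coarse feature detector, an edge-preserving filter, and a feature enhancer that selectively processes each detected feature. The skeleton below only wires those three stages together; the three callables, and the assumption that features come back as boolean masks over a numpy image, are placeholders:

```python
def beautify(image, detect_features, edge_preserving_filter, enhance_feature):
    """Wire up the three stages named in the abstract."""
    features = detect_features(image)          # {name: boolean mask} approximation
    smoothed = edge_preserving_filter(image)   # e.g. bilateral-style smoothing
    out = smoothed.copy()
    for name, mask in features.items():        # selectively enhance each feature
        out[mask] = enhance_feature(name, image[mask], smoothed[mask])
    return out
```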
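
Patent 8442327 (application 20100128993) computes the integral image once, sub-samples it, and evaluates cascaded classifiers against rectangle sums taken from the sub-sampled integral images. The sketch below shows the integral image, the four-lookup rectangle sum those classifiers rely on, and a strided sub-sampling; the classifier stages themselves are omitted:

```python
import numpy as np

def integral_image(img):
    """Cumulative sum over rows then columns; one pass over the image."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of the w x h window at (x, y) from four integral-image lookups."""
    a = ii[y - 1, x - 1] if x > 0 and y > 0 else 0
    b = ii[y - 1, x + w - 1] if y > 0 else 0
    c = ii[y + h - 1, x - 1] if x > 0 else 0
    return ii[y + h - 1, x + w - 1] - b - c + a

img = np.random.rand(64, 64)
ii = integral_image(img)
ii_sub = ii[::2, ::2]          # sub-sampled integral image reused by classifiers
print(np.isclose(rect_sum(ii, 8, 8, 16, 16), img[8:24, 8:24].sum()))   # True
```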
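
Patents 8411751 and 20110141349 project motion vectors from the previous frame onto an interpolation plane whose temporal position comes from the frame rate conversion ratio, and flag artifact-prone regions where projected cover and uncover areas interact. A block-level sketch of the projection and the occupancy test; the block size, the vector units, and the simple hit-count rule are assumptions:

```python
import numpy as np

def project_vectors(mv, alpha, frame_shape, block=8):
    """Project per-block motion vectors (previous -> future frame, in pixels)
    onto the interpolated plane at temporal position alpha in (0, 1).

    Returns the hit count per block plus cover (>1 hit) and uncover (0 hits)
    masks, which mark regions where interpolation artifacts are likely."""
    hb, wb = frame_shape[0] // block, frame_shape[1] // block
    hits = np.zeros((hb, wb), dtype=int)
    for by in range(mv.shape[0]):
        for bx in range(mv.shape[1]):
            dy, dx = mv[by, bx]
            ty = int(round(by + alpha * dy / block))   # landing block row
            tx = int(round(bx + alpha * dx / block))   # landing block column
            if 0 <= ty < hb and 0 <= tx < wb:
                hits[ty, tx] += 1
    return hits, hits > 1, hits == 0

# Converting 24 fps to 60 fps places trial frames at alpha = 0.4, 0.8, ...
```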
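
Patent 8175160 refines motion vectors computed for the current frame at one resolution using information from the previous frame's vectors and from the current frame at a second resolution. The sketch below treats the per-block refinement as choosing, among such candidate vectors, the one with the lowest sum-of-absolute-differences cost; the candidate sources and the SAD criterion are assumptions consistent with common motion estimation practice:

```python
import numpy as np

def refine_vector(cur, prev, by, bx, candidates, block=8):
    """Pick the candidate motion vector with the lowest SAD for one block.

    cur, prev  : current and previous frames as 2-D arrays
    by, bx     : block indices
    candidates : iterable of (dy, dx) vectors, e.g. the previous frame's
                 vector and an upscaled vector from a coarser-resolution
                 estimate of the current frame."""
    y0, x0 = by * block, bx * block
    patch = cur[y0:y0 + block, x0:x0 + block]
    best, best_cost = (0, 0), np.inf
    for dy, dx in candidates:
        ys, xs = y0 + dy, x0 + dx
        if 0 <= ys <= prev.shape[0] - block and 0 <= xs <= prev.shape[1] - block:
            cost = np.abs(patch - prev[ys:ys + block, xs:xs + block]).sum()
            if cost < best_cost:
                best, best_cost = (dy, dx), cost
    return best
```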