Patents by Inventor Oran Gafni

Oran Gafni has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240155071
    Abstract: A method and system for text-to-video generation. The method includes receiving a text input, generating a representation frame based on the text input using a model trained on text-image pairs, generating a set of frames based on the representation frame and a first frame rate, interpolating the set of frames to a higher frame rate, generating a first video based on the interpolated set of frames, increasing a resolution of the first video based on a first and second super-resolution model, and generating an output video based on a result of the super-resolution models.
    Type: Application
    Filed: September 29, 2023
    Publication date: May 9, 2024
    Inventors: Sonal Gupta, Adam Polyak, Thomas Falstad Hayes, Xi Yin, Jie An, Chao Yang, Oron Ashual, Oran Gafni, Devi Niru Parikh, Yaniv Nechemia Taigman, Uriel Singer, Songyang Zhang, Qiyuan Hu
  • Patent number: 11854203
    Abstract: In one embodiment, a method includes receiving a first image depicting a context including one or more persons having one or more respective poses, receiving a second image depicting a target person having an original pose, where the target person is to be inserted into the context depicted in the first image, generating a target segmentation mask specifying a new pose for the target person in the context of the first image based on the first image, generating a third image depicting the target person having the new pose based on the second image and the target segmentation mask, and generating an output image based on the first image and the third image, the output image depicting the one or more persons having the one or more respective poses and the target person having the new pose.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: December 26, 2023
    Assignee: Meta Platforms, Inc.
    Inventors: Oran Gafni, Lior Wolf
  • Patent number: 11727596
    Abstract: A video generation system is described that extracts one or more characters or other objects from a video, re-animates them, and generates a new video that includes the extracted character(s). The system enables the extracted character(s) to be positioned and controlled within a new background scene different from the original background scene of the source video. In one example, the video generation system comprises a pose prediction neural network having a pose model trained with (i) a set of character pose training images extracted from an input video of the character and (ii) a simulated motion control signal generated from the input video. In operation, the pose prediction neural network generates, in response to a motion control input from a user, a sequence of images representing poses of a character. A frame generation neural network generates output video frames that render the character within a scene.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: August 15, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Oran Gafni, Lior Wolf, Yaniv Nechemia Taigman
  • Patent number: 11373352
    Abstract: In one embodiment, a method includes generating a keypoint pose and a dense pose for a first person in a first pose based on a first image comprising the first person in the first pose, generating an input semantic segmentation map corresponding to a second person in a second pose based on a second image comprising the second person in the second pose, generating a target semantic segmentation map corresponding to the second person in the first pose by processing the keypoint pose, the dense pose, and the input segmentation map using a first machine-learning model, generating an encoding vector representing the second person based on the second image, and generating a target image of the second person in the first pose by processing the encoding vector and the target segmentation map using a second machine-learning model.
    Type: Grant
    Filed: March 4, 2021
    Date of Patent: June 28, 2022
    Assignee: Meta Platforms, Inc.
    Inventors: Oran Gafni, Oron Ashual, Lior Wolf
  • Publication number: 20220198617
    Abstract: In one embodiment, a method includes generating a first identity encoding representing a first facial identity of the person based on an image of a person, generating a second identity encoding representing a second facial identity different from the first facial identity of the person based on the first identity encoding, generating a source encoding by using an encoder to process a source image of the person having an expression, generating an intermediate image by using a decoder to process the source encoding and the second identity encoding, the intermediate image including a face having the second facial identity and the expression of the person in the source image, and generating an output image by blending the source image with facial features of the face in the intermediate image.
    Type: Application
    Filed: December 18, 2020
    Publication date: June 23, 2022
    Inventors: Oran Gafni, Lior Wolf
  • Publication number: 20220156981
    Abstract: In one embodiment, a first device may receive, from a second device, a reference landmark map identifying locations of facial features of a user of the second device depicted in a reference image and a feature map, generated based on the reference image, representing an identity of the user. The first device may receive, from the second device, a current compressed landmark map based on a current image of the user and decompress the current compressed landmark map to generate a current landmark map. The first device may update the feature map based on a motion field generated using the reference landmark map and the current landmark map. The first device may generate scaling factors based on a normalization facial mask of pre-determined facial features of the user. The first device may generate an output image of the user by decoding the updated feature map using the scaling factors.
    Type: Application
    Filed: April 6, 2021
    Publication date: May 19, 2022
    Inventors: Maxime Mohamad Oquab, Pierre Stock, Oran Gafni, Daniel Raynald David Haziza, Tao Xu, Peizhao Zhang, Onur Çelebi, Patrick Labatut, Thibault Michel Max Peyronel, Camille Couprie
  • Patent number: 11017560
    Abstract: A video generation system is described that extracts one or more characters or other objects from a video, re-animates them, and generates a new video that includes the extracted character(s). The system enables the extracted character(s) to be positioned and controlled within a new background scene different from the original background scene of the source video. In one example, the video generation system comprises a pose prediction neural network having a pose model trained with (i) a set of character pose training images extracted from an input video of the character and (ii) a simulated motion control signal generated from the input video. In operation, the pose prediction neural network generates, in response to a motion control input from a user, a sequence of images representing poses of a character. A frame generation neural network generates output video frames that render the character within a scene.
    Type: Grant
    Filed: April 15, 2019
    Date of Patent: May 25, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Oran Gafni, Lior Wolf, Yaniv Taigman
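
The sketches below are rough, non-authoritative illustrations of the methods summarized in the abstracts above. Every function, module, tensor shape, and parameter name in them is an assumption made for illustration only and is not taken from the patents themselves.

For publication 20240155071 (text-to-video generation), the abstract describes a staged pipeline: a representation frame produced by a model trained on text-image pairs, a low-frame-rate set of frames, interpolation to a higher frame rate, and two super-resolution passes. A minimal sketch of that staging, with stand-in functions, might look like this:

```python
# Illustrative staging of the text-to-video pipeline in publication 20240155071.
# All functions below are stand-ins; shapes and names are assumptions.
import torch
import torch.nn.functional as F

def text_to_image(text_embedding):
    # Stand-in for a text-to-image prior trained on text-image pairs:
    # returns a single "representation frame" (3 x 64 x 64 here).
    return torch.randn(1, 3, 64, 64)

def generate_keyframes(rep_frame, num_frames=8):
    # Stand-in temporal decoder: expands the representation frame into a
    # low-frame-rate clip (B x T x C x H x W).
    return rep_frame.unsqueeze(1).repeat(1, num_frames, 1, 1, 1)

def interpolate_frames(frames, factor=4):
    # Frame interpolation to a higher frame rate along the time axis.
    b, t, c, h, w = frames.shape
    x = frames.permute(0, 2, 1, 3, 4)            # B x C x T x H x W
    x = F.interpolate(x, size=(t * factor, h, w), mode="trilinear",
                      align_corners=False)
    return x.permute(0, 2, 1, 3, 4)

def super_resolve(frames, scale):
    # Stand-in for a spatial super-resolution model applied per frame.
    b, t, c, h, w = frames.shape
    x = F.interpolate(frames.reshape(b * t, c, h, w), scale_factor=scale,
                      mode="bilinear", align_corners=False)
    return x.reshape(b, t, c, h * scale, w * scale)

def text_to_video(text_embedding):
    rep = text_to_image(text_embedding)          # representation frame
    low_fps = generate_keyframes(rep)            # low-frame-rate clip
    high_fps = interpolate_frames(low_fps)       # higher frame rate
    video = super_resolve(high_fps, scale=2)     # first super-resolution model
    video = super_resolve(video, scale=4)        # second super-resolution model
    return video

print(text_to_video(torch.randn(1, 512)).shape)  # torch.Size([1, 32, 3, 512, 512])
```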
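For patent 11854203, the abstract describes inserting a target person into a context image by first predicting a target segmentation mask for a new pose, then rendering the person in that pose, then compositing. A hedged sketch of that flow, using toy stand-in functions, could be:

```python
# Illustrative data flow for patent 11854203; the real networks and mask
# format are not specified here, and these stand-ins are assumptions.
import torch

def predict_target_mask(context_image):
    # Stand-in for the model that proposes a segmentation mask giving the
    # target person a new pose consistent with the context image.
    return (torch.rand(1, 1, 256, 256) > 0.5).float()

def render_person(person_image, target_mask):
    # Stand-in generator: repaints the target person into the new pose
    # described by the mask.
    return person_image * target_mask

def insert_person(context_image, person_image):
    mask = predict_target_mask(context_image)     # new pose for the target
    rendered = render_person(person_image, mask)  # target person in new pose
    # Composite: rendered person where the mask is on, context elsewhere.
    return rendered * mask + context_image * (1.0 - mask)

context = torch.rand(1, 3, 256, 256)
person = torch.rand(1, 3, 256, 256)
print(insert_person(context, person).shape)       # torch.Size([1, 3, 256, 256])
```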
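Patents 11727596 and 11017560 share an abstract describing two cooperating networks: a pose-prediction network driven by a motion control signal and a frame-generation network that renders the character. A toy sketch of that two-network structure, under assumed dimensions, might be:

```python
# Illustrative two-network structure for patents 11727596 / 11017560.
# Both networks are toy stand-ins; dimensions are assumptions.
import torch
import torch.nn as nn

class PosePredictor(nn.Module):
    # Maps the previous pose plus a 2-D motion control input to the next pose.
    def __init__(self, pose_dim=34, control_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pose_dim + control_dim, 128),
                                 nn.ReLU(), nn.Linear(128, pose_dim))

    def forward(self, pose, control):
        return self.net(torch.cat([pose, control], dim=-1))

class FrameGenerator(nn.Module):
    # Renders a frame of the character (a 3 x 64 x 64 image here) from a pose.
    def __init__(self, pose_dim=34):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(pose_dim, 3 * 64 * 64), nn.Tanh())

    def forward(self, pose):
        return self.net(pose).view(-1, 3, 64, 64)

pose_net, frame_net = PosePredictor(), FrameGenerator()
pose = torch.zeros(1, 34)                      # initial pose (e.g. 17 keypoints)
frames = []
for step in range(16):                         # user-driven motion control loop
    control = torch.tensor([[1.0, 0.0]])       # e.g. "move right"
    pose = pose_net(pose, control)             # next predicted pose
    frames.append(frame_net(pose))             # rendered frame for that pose
video = torch.stack(frames, dim=1)
print(video.shape)                             # torch.Size([1, 16, 3, 64, 64])
```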
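Patent 11373352 describes a two-model pipeline: the first model maps a keypoint pose, a dense pose, and an input segmentation map to a target segmentation map; the second model renders the target image from an appearance encoding and that target map. A stand-in sketch of the pipeline wiring could be:

```python
# Illustrative wiring of the two-model pipeline in patent 11373352.
# The stand-in models and tensor shapes are assumptions.
import torch

def segmentation_model(keypoint_pose, dense_pose, input_seg):
    # First machine-learning model: predicts the target semantic segmentation
    # map of the second person rendered in the first person's pose.
    return torch.softmax(torch.randn(1, 8, 128, 128), dim=1)

def appearance_encoder(person_image):
    # Encodes the second person's appearance into a vector.
    return torch.randn(1, 256)

def image_generator(encoding, target_seg):
    # Second machine-learning model: renders the second person in the first
    # person's pose from the encoding and the target segmentation map.
    return torch.rand(1, 3, 128, 128)

keypoints = torch.rand(1, 17, 128, 128)     # keypoint pose of the first person
dense = torch.rand(1, 3, 128, 128)          # dense pose of the first person
seg_in = torch.rand(1, 8, 128, 128)         # segmentation of the second person
person2 = torch.rand(1, 3, 128, 128)        # image of the second person

target_seg = segmentation_model(keypoints, dense, seg_in)
encoding = appearance_encoder(person2)
output = image_generator(encoding, target_seg)
print(output.shape)                         # torch.Size([1, 3, 128, 128])
```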
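Publication 20220198617 describes deriving a second facial identity from a first identity encoding, decoding an intermediate face that combines the new identity with the source expression, and blending it back into the source image. A hedged sketch of that encode/re-identify/decode/blend flow might be:

```python
# Illustrative flow for publication 20220198617; all functions are stand-ins
# and the blending mask is a made-up placeholder.
import torch

def identity_encoder(image):
    # First identity encoding for the person in the image.
    return torch.randn(1, 512)

def perturb_identity(identity):
    # Derives a second, different facial identity from the first encoding,
    # e.g. by moving it within the identity embedding space.
    return identity + 0.5 * torch.randn_like(identity)

def expression_encoder(source_image):
    # Source encoding capturing the pose/expression of the source frame.
    return torch.randn(1, 256)

def decoder(source_encoding, identity_encoding):
    # Produces an intermediate face with the new identity and the
    # source expression.
    return torch.rand(1, 3, 256, 256)

def blend(source_image, intermediate, face_mask):
    # Keeps the source image outside the face region, swapped face inside.
    return face_mask * intermediate + (1.0 - face_mask) * source_image

source = torch.rand(1, 3, 256, 256)
mask = torch.zeros(1, 1, 256, 256)
mask[..., 64:192, 64:192] = 1.0                  # placeholder face region
id1 = identity_encoder(source)
id2 = perturb_identity(id1)
intermediate = decoder(expression_encoder(source), id2)
print(blend(source, intermediate, mask).shape)   # torch.Size([1, 3, 256, 256])
```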
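Publication 20220156981 describes a receiver that gets a reference landmark map and feature map once, then only compressed landmark maps per frame, and reconstructs each frame by warping the feature map with a motion field and decoding it with scaling factors derived from a normalization face mask. A toy sketch of that receiver-side flow, under assumed shapes, could be:

```python
# Illustrative receiver-side flow for publication 20220156981, with toy
# stand-ins for decompression, warping, normalization, and decoding.
import torch
import torch.nn.functional as F

def decompress(compressed_landmarks):
    # Stand-in for decompressing the per-frame compressed landmark map.
    return compressed_landmarks.float()

def motion_field(reference_landmarks, current_landmarks):
    # Stand-in dense motion field from landmark displacement (B x H x W x 2).
    return torch.zeros(1, 32, 32, 2)

def warp_features(feature_map, flow):
    # Warps the reference feature map toward the current pose.
    base = F.affine_grid(torch.tensor([[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]]),
                         size=feature_map.shape, align_corners=False)
    return F.grid_sample(feature_map, base + flow, align_corners=False)

def scaling_factors(face_mask):
    # Per-channel scaling factors derived from the normalization face mask.
    return face_mask.mean(dim=(2, 3), keepdim=True) + 0.5

def decode(features, scales):
    # Stand-in decoder producing the output image from scaled features.
    return torch.tanh(F.interpolate(features * scales, scale_factor=8,
                                    mode="bilinear", align_corners=False))[:, :3]

ref_landmarks = torch.rand(1, 68, 2)          # sent once per call
feature_map = torch.rand(1, 64, 32, 32)       # sent once per call
cur_compressed = torch.randint(0, 255, (1, 68, 2), dtype=torch.uint8)  # per frame

cur_landmarks = decompress(cur_compressed)
flow = motion_field(ref_landmarks, cur_landmarks)
warped = warp_features(feature_map, flow)
scales = scaling_factors(torch.ones(1, 64, 32, 32))
print(decode(warped, scales).shape)           # torch.Size([1, 3, 256, 256])
```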