Patents by Inventor Sofien Bouaziz

Sofien Bouaziz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954899
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: April 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
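The geodesic-loss idea in the abstract above can be sketched in a few lines. This is a minimal illustration, not the patented training procedure: the precomputed lookup table `geo_dist` and the helper `geodesic_loss` are assumptions made for clarity. In practice the distances would be derived from the 3D model surfaces (e.g. by a fast-marching solver) and the loss would drive a gradient-based optimizer.

```python
import numpy as np

def geodesic_loss(pred_vertex_ids, true_vertex_ids, geo_dist):
    """Mean geodesic distance between predicted and ground-truth
    surface points, used as a training loss.

    geo_dist[i, j] holds the precomputed geodesic distance between
    template vertices i and j on the 3D model surface.
    """
    d = geo_dist[pred_vertex_ids, true_vertex_ids]
    return float(np.mean(d))

# Toy 3-vertex "surface": a symmetric geodesic distance table.
geo = np.array([[0.0, 1.0, 2.0],
                [1.0, 0.0, 1.5],
                [2.0, 1.5, 0.0]])
pred = np.array([0, 2, 1])   # predicted correspondences
true = np.array([0, 1, 1])   # ground-truth correspondences
print(geodesic_loss(pred, true, geo))  # (0.0 + 1.5 + 0.0) / 3 = 0.5
```

Using geodesic rather than Euclidean distance penalizes predictions that are close in 3D space but far apart along the surface (e.g. one hand versus the other).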
  • Patent number: 11948238
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
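A common reading of a "dynamic expression model" is a neutral face plus a weighted blendshape basis. The regularized least-squares fit below is a stand-in sketch of the tracking-parameter estimation step; the function name, the linear model, and the Tikhonov term are assumptions for illustration, not the estimator claimed in the patent.

```python
import numpy as np

def estimate_tracking_params(neutral, blendshapes, observed, reg=1e-3):
    """Least-squares fit of blendshape weights w so that
    neutral + B @ w approximates the observed tracking data.
    A small Tikhonov term keeps the solve well-conditioned."""
    B = blendshapes                    # (n_coords, n_expressions)
    rhs = observed - neutral
    A = B.T @ B + reg * np.eye(B.shape[1])
    return np.linalg.solve(A, B.T @ rhs)

# Toy model: 6 mesh coordinates, 2 expression blendshapes.
neutral = np.zeros(6)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0],
              [0.0, 0.0],
              [0.5, 0.0],
              [0.0, 0.5]])
observed = neutral + B @ np.array([0.8, 0.2])  # synthetic tracking data
w = estimate_tracking_params(neutral, B, observed)
print(np.round(w, 2))  # ~[0.8 0.2]
```

The "refining the dynamic expression model" step in the abstract would then update `B` itself from accumulated tracking data, adapting the generic model to the specific user.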
  • Publication number: 20240046618
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 11, 2021
    Publication date: February 8, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Patent number: 11868523
    Abstract: Techniques of tracking a user's gaze include identifying a region of a display at which a gaze of a user is directed, the region including a plurality of pixels. By determining a region rather than a point, when the regions correspond to elements of a user interface, the improved technique enables a system to activate the element corresponding to the determined region. In some implementations, the system makes the determination using a classification engine including a convolutional neural network; such an engine takes as input images of the user's eye and outputs a list of probabilities that the gaze is directed to each of the regions.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Ivana Tosic Rodgers, Sean Ryan Francesco Fanello, Sofien Bouaziz, Rohit Kumar Pandey, Eric Aboussouan, Adarsh Prakash Murthy Kowdle
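The final selection step of the region-based gaze technique can be sketched as below. The convolutional network itself is replaced by raw logits here, and all names (`select_region`, the region labels) are illustrative assumptions; only the softmax-over-regions idea comes from the abstract.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = logits - np.max(logits)
    e = np.exp(z)
    return e / e.sum()

def select_region(logits, regions):
    """Map classifier logits (one per screen region) to the UI
    element whose region most likely contains the gaze."""
    probs = softmax(np.asarray(logits, dtype=float))
    return regions[int(np.argmax(probs))], probs

regions = ["menu", "canvas", "toolbar"]
elem, probs = select_region([0.2, 2.1, -0.5], regions)
print(elem)  # canvas
```

Because the classifier outputs a probability per region rather than a gaze point, activating the most probable region's element tolerates the noise inherent in point-wise gaze estimates.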
  • Publication number: 20240005590
    Abstract: Techniques of image synthesis using a neural radiance field (NeRF) include generating a deformation model of movement experienced by a subject in a non-rigidly deforming scene. For example, when an image synthesis system uses NeRFs, the system takes as input multiple poses of subjects for training data. In contrast to conventional NeRFs, the technical solution first expresses the positions of the subjects from various perspectives in an observation frame. The technical solution then involves deriving a deformation model, i.e., a mapping between the observation frame and a canonical frame in which the subject's movements are taken into account. This mapping is accomplished using latent deformation codes for each pose that are determined using a multilayer perceptron (MLP). A NeRF is then derived from positions and cast ray directions in the canonical frame using another MLP. New poses for the subject may then be derived using the NeRF.
    Type: Application
    Filed: January 14, 2021
    Publication date: January 4, 2024
    Inventors: Ricardo Martin Brualla, Keunhong Park, Utkarsh Sinha, Sofien Bouaziz, Daniel Goldman, Jonathan Tilton Barron, Steven Maxwell Seitz
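The observation-to-canonical mapping described above can be sketched with a tiny numpy MLP standing in for the trained deformation network. The architecture, dimensions, and random weights here are assumptions for illustration; the real system would use trained MLPs and a learned latent deformation code per pose.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    """Minimal feed-forward MLP with ReLU hidden activations."""
    h = x
    for i, (W, b) in enumerate(weights):
        h = h @ W + b
        if i < len(weights) - 1:
            h = np.maximum(h, 0.0)
    return h

def to_canonical(point_obs, latent_code, weights):
    """Deformation field: maps a point in the observation frame,
    conditioned on a per-pose latent code, into the canonical
    frame by adding a predicted offset."""
    inp = np.concatenate([point_obs, latent_code])
    return point_obs + mlp(inp, weights)

# Tiny random deformation MLP: (3 + 4)-dim input -> 3-dim offset.
dims = [7, 16, 3]
weights = [(0.1 * rng.standard_normal((a, b)), np.zeros(b))
           for a, b in zip(dims[:-1], dims[1:])]
p_canon = to_canonical(np.zeros(3), rng.standard_normal(4), weights)
print(p_canon.shape)  # (3,)
```

The NeRF is then queried with points and ray directions expressed in this canonical frame, so that all poses share one radiance field.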
  • Publication number: 20230419600
    Abstract: Example embodiments relate to techniques for volumetric performance capture with neural rendering. A technique may involve initially obtaining images that depict a subject from multiple viewpoints and under various lighting conditions using a light stage, as well as depth data for the subject captured using infrared cameras. A neural network may extract features of the subject from the images based on the depth data and map the features into a texture space (e.g., the UV texture space). A neural renderer can be used to generate an output image depicting the subject from a target view such that illumination of the subject in the output image aligns with the target view. The neural renderer may resample the features of the subject from the texture space to an image space to generate the output image.
    Type: Application
    Filed: November 5, 2020
    Publication date: December 28, 2023
    Inventors: Sean Ryan Francesco Fanello, Abhi Meka, Rohit Kumar Pandey, Christian Haene, Sergio Orts Escolano, Christoph Rhemann, Paul Debevec, Sofien Bouaziz, Thabo Beeler, Ryan Overbeck, Peter Barnum, Daniel Erickson, Philip Davidson, Yinda Zhang, Jonathan Taylor, Chloe LeGendre, Shahram Izadi
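The texture-space-to-image-space resampling mentioned in the abstract can be sketched with nearest-neighbour sampling. The function name and the single-channel toy feature map are assumptions; a real renderer would use learned multi-channel features and differentiable (e.g. bilinear) sampling.

```python
import numpy as np

def resample_texture(features_uv, uv_map):
    """Warp features from texture (UV) space into image space:
    each output pixel looks up the texel named by its UV
    coordinates (nearest-neighbour sampling for simplicity)."""
    h, w = features_uv.shape[:2]
    u = np.clip((uv_map[..., 0] * (w - 1)).round().astype(int), 0, w - 1)
    v = np.clip((uv_map[..., 1] * (h - 1)).round().astype(int), 0, h - 1)
    return features_uv[v, u]

tex = np.arange(16.0).reshape(4, 4)  # 4x4 single-channel feature map
uv = np.zeros((2, 2, 2))             # 2x2 output image of UV coords
uv[0, 0] = [0.0, 0.0]                # top-left texel
uv[1, 1] = [1.0, 1.0]                # bottom-right texel
img_feats = resample_texture(tex, uv)
print(img_feats)
```

Keeping features in a shared UV space lets the same learned texture be rendered from any target viewpoint by changing only the per-pixel UV lookup.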
  • Publication number: 20230393665
    Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signal samples are then sent to processing circuitry for classification to identify the wrist movement as a gesture.
    Type: Application
    Filed: February 24, 2023
    Publication date: December 7, 2023
    Inventors: Dongeek Shin, Shahram Izadi, David Kim, Sofien Bouaziz, Steven Benjamin Goldberg, Ivan Poupyrev, Shwetak N. Patel
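The classification step on the reflected radar samples can be sketched as below. The spectral band-energy features and the nearest-centroid classifier are stand-ins chosen for illustration; the patent does not specify this pipeline, and the gesture names are invented.

```python
import numpy as np

def band_energy_features(samples, n_bands=4):
    """Crude spectral features for a radar frame: the magnitude
    spectrum split into n_bands bands, with energy summed per band."""
    spec = np.abs(np.fft.rfft(samples))
    bands = np.array_split(spec, n_bands)
    return np.array([b.sum() for b in bands])

def classify(samples, centroids):
    """Nearest-centroid stand-in for the gesture classifier."""
    f = band_energy_features(samples)
    names = list(centroids)
    dists = [np.linalg.norm(f - centroids[n]) for n in names]
    return names[int(np.argmin(dists))]

# Synthetic frames: slow vs. fast oscillation stand in for two
# different wrist-muscle motion patterns.
t = np.linspace(0.0, 1.0, 64, endpoint=False)
slow = np.sin(2 * np.pi * 2 * t)
fast = np.sin(2 * np.pi * 20 * t)
centroids = {"pinch": band_energy_features(slow),
             "flick": band_energy_features(fast)}
print(classify(slow, centroids))  # pinch
```

Millimetre-scale muscle motion shifts energy between frequency bands of the reflected signal, which is why a frequency-domain feature is a natural starting point.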
  • Patent number: 11836838
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate the digital character so that it mimics the face of the user.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
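Step (c) above, mapping expression parameters through an animation prior, can be sketched as a nearest-neighbour lookup into the prior sequence. This is a deliberately simplified stand-in; the method's actual use of the prior (e.g. as a statistical model) is not specified here, and all names are illustrative.

```python
import numpy as np

def animation_params_from_prior(expr_params, prior_expr, prior_anim):
    """Pick the animation parameters whose associated expression
    in the prior sequence best matches the tracked expression."""
    d = np.linalg.norm(prior_expr - expr_params, axis=1)
    return prior_anim[int(np.argmin(d))]

# Prior: pairs of (expression parameters, animation parameters).
prior_expr = np.array([[0.0, 0.0],    # neutral
                       [1.0, 0.0],    # smile
                       [0.0, 1.0]])   # frown
prior_anim = np.array([[0.0], [1.0], [-1.0]])
a = animation_params_from_prior(np.array([0.9, 0.1]),
                                prior_expr, prior_anim)
print(a)  # [1.] -- closest prior expression is the smile
```

Routing through a prior constrains the character to animations an artist has authored, so tracking noise cannot produce implausible poses.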
  • Publication number: 20230360182
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 9, 2023
    Inventors: Sean Ryan Francesco Fanello, Yun-Ta Tsai, Rohit Kumar Pandey, Paul Debevec, Michael Milne, Chloe LeGendre, Jonathan Tilton Barron, Christoph Rhemann, Sofien Bouaziz, Navin Padman Sarma
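The final enhancement step, applying the quotient image, is a per-pixel multiplication. The sketch below assumes single-channel images in [0, 1] and an optional mask for enhancing only a portion of the input; the helper names are illustrative.

```python
import numpy as np

def apply_quotient(image, quotient, mask=None):
    """Per-pixel relight: multiply each pixel by the quotient image
    (the ratio of desired to current light energy). An optional
    boolean mask limits the enhancement to part of the input."""
    out = image * quotient
    if mask is not None:
        out = np.where(mask, out, image)
    return np.clip(out, 0.0, 1.0)

img = np.full((2, 2), 0.4)          # flat grey 2x2 input
q = np.array([[1.5, 1.0],
              [0.5, 2.0]])          # per-pixel light-energy ratios
out = apply_quotient(img, q)
print(out)  # [[0.6 0.4] [0.2 0.8]]
```

Predicting a ratio rather than absolute pixel values keeps the subject's albedo and texture intact while only the lighting changes.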
  • Patent number: 11810313
    Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo data, where the stereo data includes a first input image and a second input image, and a depth sensing computing system configured to predict a depth map. The depth sensing computing system includes a feature extractor configured to extract features from the first and second images at a plurality of resolutions, an initialization engine configured to generate a plurality of depth estimations, where each of the plurality of depth estimations corresponds to a different resolution, and a propagation engine configured to iteratively refine the plurality of depth estimations based on image warping and spatial propagation.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: November 7, 2023
    Assignee: GOOGLE LLC
    Inventors: Vladimir Tankovich, Christian Haene, Sean Ryan Francesco Fanello, Yinda Zhang, Shahram Izadi, Sofien Bouaziz, Adarsh Prakash Murthy Kowdle, Sameh Khamis
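The coarse-to-fine structure of the depth pipeline can be sketched as an upsample-then-refine loop. The nearest-neighbour upsampler and the placeholder correction stand in for the learned warping-and-propagation stages; both helper names are assumptions.

```python
import numpy as np

def upsample2x(disp):
    """Nearest-neighbour 2x upsampling; disparities scale with
    image resolution, so values are doubled as well."""
    return 2.0 * np.repeat(np.repeat(disp, 2, axis=0), 2, axis=1)

def refine(disp, correction):
    """Stand-in for warp-and-propagate refinement: apply a small
    per-pixel correction predicted at this resolution."""
    return disp + correction

# Coarse 2x2 disparity estimate, refined at 4x4.
coarse = np.array([[1.0, 2.0],
                   [3.0, 4.0]])
fine = upsample2x(coarse)
fine = refine(fine, np.zeros_like(fine))
print(fine.shape)  # (4, 4)
```

Initializing at several resolutions and refining iteratively is what lets such systems reach real-time rates: most work happens on small images, and the full-resolution pass only corrects residual errors.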
  • Publication number: 20230343013
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate the digital character so that it mimics the face of the user.
    Type: Application
    Filed: May 9, 2023
    Publication date: October 26, 2023
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Patent number: 11710287
    Abstract: Systems and methods are described for generating a plurality of three-dimensional (3D) proxy geometries of an object, generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object, providing the plurality of neural textures to a neural renderer, receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object, and generating a composite image based on the pose, the color image, and the alpha mask.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: July 25, 2023
    Assignee: GOOGLE LLC
    Inventors: Ricardo Martin Brualla, Daniel Goldman, Sofien Bouaziz, Rohit Kumar Pandey, Matthew Brown
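The last step in the abstract, combining the color image and alpha mask into a composite, is standard over-compositing. The sketch below assumes float RGB images in [0, 1] and a per-pixel alpha; it illustrates the compositing step only, not the neural-texture rendering itself.

```python
import numpy as np

def composite(color, alpha, background):
    """Over-composite the rendered object onto a background using
    the predicted alpha mask (per-pixel opacity in [0, 1])."""
    a = alpha[..., None]  # broadcast alpha over the RGB channels
    return a * color + (1.0 - a) * background

color = np.ones((2, 2, 3))        # white object render
alpha = np.array([[1.0, 0.5],
                  [0.0, 0.25]])   # predicted opacity mask
bg = np.zeros((2, 2, 3))          # black background
out = composite(color, alpha, bg)
print(out[0, 1])  # [0.5 0.5 0.5]
```

Predicting alpha alongside color lets the renderer handle soft boundaries (hair, fuzzy silhouettes) that a hard segmentation mask would clip.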
  • Publication number: 20230154051
    Abstract: Systems and methods are directed to encoding and/or decoding of the textures/geometry of a three-dimensional volumetric representation. An encoding computing system can obtain voxel blocks from a three-dimensional volumetric representation of an object. The encoding computing system can encode voxel blocks with a machine-learned voxel encoding model to obtain encoded voxel blocks. The encoding computing system can decode the encoded voxel blocks with a machine-learned voxel decoding model to obtain reconstructed voxel blocks. The encoding computing system can generate a reconstructed mesh representation of the object based at least in part on the one or more reconstructed voxel blocks. The encoding computing system can encode textures associated with the voxel blocks according to an encoding scheme and based at least in part on the reconstructed mesh representation of the object to obtain encoded textures.
    Type: Application
    Filed: April 17, 2020
    Publication date: May 18, 2023
    Inventors: Danhang Tang, Saurabh Singh, Cem Keskin, Phillip Andrew Chou, Christian Haene, Mingsong Dou, Sean Ryan Francesco Fanello, Jonathan Taylor, Andrea Tagliasacchi, Philip Lindsley Davidson, Yinda Zhang, Onur Gonen Guleryuz, Shahram Izadi, Sofien Bouaziz
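The encode/decode round trip for voxel blocks can be sketched with uniform quantization standing in for the machine-learned codec. This is an assumption-laden toy: the learned models would achieve far better rate-distortion, but the interface (encode to a compact code, decode to a reconstruction, then mesh the reconstruction) is the same.

```python
import numpy as np

def encode_block(block, bits=4):
    """Stand-in for the learned voxel encoder: quantize a
    TSDF-style voxel block (values in [-1, 1]) to integer codes."""
    levels = 2 ** bits - 1
    return np.round((block + 1.0) / 2.0 * levels).astype(np.uint8)

def decode_block(code, bits=4):
    """Stand-in for the learned decoder: map codes back to [-1, 1]."""
    levels = 2 ** bits - 1
    return code.astype(float) / levels * 2.0 - 1.0

block = np.array([[-1.0, -0.5],
                  [ 0.5,  1.0]])
recon = decode_block(encode_block(block))
print(np.max(np.abs(recon - block)) < 0.1)  # True
```

Encoding textures against the *reconstructed* mesh (rather than the original) is the key trick in the abstract: it guarantees the texture lines up with the geometry the decoder will actually produce.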
  • Publication number: 20230110964
    Abstract: A wearable computing device includes a frame, a camera mounted on the frame so as to capture images of an environment outside of the wearable computing device, a display device mounted on the frame so as to display the images captured by the camera, and at least one eye gaze tracking device mounted on the frame so as to track a gaze directed at the images displayed by the display device. In response to the detection of a fixation of the gaze on the display of images, the system may identify a pixel area corresponding to a fixation point of the fixation gaze on the display of images. The system may identify an object in the ambient environment corresponding to the identified pixel area, and set the identified object as a selected object for user interaction.
    Type: Application
    Filed: December 13, 2022
    Publication date: April 13, 2023
    Inventors: Jason Todd Spencer, Seth Raphael, Sofien Bouaziz
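The fixation-to-object mapping can be sketched as a hit test from the fixation pixel into per-object display regions. The bounding-box representation and all names here are illustrative assumptions; the device would derive the regions from its camera imagery.

```python
def select_object(fixation_px, object_regions):
    """Map a fixation point (pixel coordinates) to the object whose
    on-display bounding box contains it; None if nothing matches."""
    x, y = fixation_px
    for name, (x0, y0, x1, y1) in object_regions.items():
        if x0 <= x < x1 and y0 <= y < y1:
            return name
    return None

# Hypothetical objects detected in the displayed camera image.
regions = {"lamp": (0, 0, 100, 80),
           "door": (100, 0, 200, 80)}
print(select_object((120, 40), regions))  # door
```

Requiring a *fixation* (a sustained gaze) before selecting avoids triggering interactions on every transient saccade.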
  • Publication number: 20230078756
    Abstract: Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for generating convex decomposition of objects using neural network models. One of the methods includes receiving an input that depicts an object. The input is processed using a neural network to generate an output that defines a convex representation of the object. The output includes, for each of a plurality of convex elements, respective parameters that define a position of the convex element in the convex representation of the object.
    Type: Application
    Filed: November 18, 2022
    Publication date: March 16, 2023
    Inventors: Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey E. Hinton, Andrea Tagliasacchi
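A convex representation of the kind described above can be sketched as a union of convex elements, each defined by half-space parameters. The indicator-function formulation below is an illustration of the representation only; the neural network that predicts the parameters is omitted, and the helper names are assumptions.

```python
import numpy as np

def convex_indicator(point, planes):
    """Signed value for one convex element defined by half-spaces
    n.x <= d: the max over all constraints (<= 0 means inside)."""
    normals, offsets = planes
    return float(np.max(normals @ point - offsets))

def in_decomposition(point, elements):
    """A point is inside the shape if it is inside any element."""
    return any(convex_indicator(point, e) <= 0.0 for e in elements)

# One convex element: the unit cube [0, 1]^3 as six half-spaces.
normals = np.array([[ 1, 0, 0], [-1, 0, 0],
                    [ 0, 1, 0], [ 0, -1, 0],
                    [ 0, 0, 1], [ 0, 0, -1]], dtype=float)
offsets = np.array([1.0, 0.0, 1.0, 0.0, 1.0, 0.0])
cube = (normals, offsets)
print(in_decomposition(np.array([0.5, 0.5, 0.5]), [cube]))  # True
```

Because each element is an intersection of half-spaces, the representation is directly usable for collision checks and physics, which dense occupancy grids or raw meshes are not.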
  • Patent number: 11592908
    Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signal samples are then sent to processing circuitry for classification to identify the wrist movement as a gesture.
    Type: Grant
    Filed: March 19, 2021
    Date of Patent: February 28, 2023
    Assignee: GOOGLE LLC
    Inventors: Dongeek Shin, Shahram Izadi, David Kim, Sofien Bouaziz, Steven Benjamin Goldberg, Ivan Poupyrev, Shwetak N. Patel
  • Patent number: 11567569
    Abstract: A wearable computing device includes a frame, a camera mounted on the frame so as to capture images of an environment outside of the wearable computing device, a display device mounted on the frame so as to display the images captured by the camera, and at least one eye gaze tracking device mounted on the frame so as to track a gaze directed at the images displayed by the display device. In response to the detection of a fixation of the gaze on the display of images, the system may identify a pixel area corresponding to a fixation point of the fixation gaze on the display of images. The system may identify an object in the ambient environment corresponding to the identified pixel area, and set the identified object as a selected object for user interaction.
    Type: Grant
    Filed: April 8, 2021
    Date of Patent: January 31, 2023
    Assignee: Google LLC
    Inventors: Jason Todd Spencer, Seth Raphael, Sofien Bouaziz
  • Publication number: 20230024768
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Application
    Filed: May 27, 2022
    Publication date: January 26, 2023
    Inventors: Sofien Bouaziz, Mark Pauly
  • Publication number: 20230004216
    Abstract: Techniques of tracking a user's gaze include identifying a region of a display at which a gaze of a user is directed, the region including a plurality of pixels. By determining a region rather than a point, when the regions correspond to elements of a user interface, the improved technique enables a system to activate the element corresponding to the determined region. In some implementations, the system makes the determination using a classification engine including a convolutional neural network; such an engine takes as input images of the user's eye and outputs a list of probabilities that the gaze is directed to each of the regions.
    Type: Application
    Filed: July 1, 2021
    Publication date: January 5, 2023
    Inventors: Ivana Tosic Rodgers, Sean Ryan Francesco Fanello, Sofien Bouaziz, Rohit Kumar Pandey, Eric Aboussouan, Adarsh Prakash Murthy Kowdle
  • Patent number: 11508167
    Abstract: Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for generating convex decomposition of objects using neural network models. One of the methods includes receiving an input that depicts an object. The input is processed using a neural network to generate an output that defines a convex representation of the object. The output includes, for each of a plurality of convex elements, respective parameters that define a position of the convex element in the convex representation of the object.
    Type: Grant
    Filed: April 13, 2020
    Date of Patent: November 22, 2022
    Assignee: Google LLC
    Inventors: Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey E. Hinton, Andrea Tagliasacchi