Patents by Inventor Sofien Bouaziz

Sofien Bouaziz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240331250
    Abstract: A method for predicting a lower body motion of an avatar is provided. The method includes generating an upper body avatar for a user of a headset, tracking a lower body posture of the user of the headset, retargeting the lower body posture to a lower body model, and merging the lower body model with the upper body avatar to form a full-body avatar for the user of the headset. A system including a memory storing instructions and a processor configured to execute the instructions and cause the system to perform the above method is also provided.
    Type: Application
    Filed: March 30, 2023
    Publication date: October 3, 2024
    Inventors: Jean-Charles Bazin, Sofien Bouaziz
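    The merge step described in this abstract can be sketched as follows — a minimal NumPy toy with hypothetical joint counts, where a naive linear blend stands in for the patent's retargeting step:

```python
import numpy as np

def retarget_lower_body(tracked_pose, model_rest_pose, blend=0.5):
    """Map a tracked lower-body posture onto a lower-body model by
    blending tracked joint positions toward the model's rest pose
    (an illustrative stand-in for the patent's retargeting step)."""
    return blend * tracked_pose + (1.0 - blend) * model_rest_pose

def merge_full_body(upper_joints, lower_joints):
    """Concatenate the upper-body avatar's joints with the retargeted
    lower-body joints to form a full-body joint set."""
    return np.vstack([upper_joints, lower_joints])

# Hypothetical joint sets: 10 upper-body and 8 lower-body 3D joints.
upper = np.random.rand(10, 3)
tracked = np.random.rand(8, 3)
rest = np.zeros((8, 3))

lower = retarget_lower_body(tracked, rest)
full_body = merge_full_body(upper, lower)
```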
  • Publication number: 20240303908
    Abstract: A method including generating a first vector based on a first grid and a three-dimensional (3D) position associated with a first implicit representation (IR) of a 3D object, generating at least one second vector based on at least one second grid and an upsampled first grid, decoding the first vector to generate a second IR of the 3D object, decoding the at least one second vector to generate at least one third IR of the 3D object, generating a composite IR of the 3D object based on the second IR of the 3D object and the at least one third IR of the 3D object, and generating a reconstructed volume representing the 3D object based on the composite IR of the 3D object.
    Type: Application
    Filed: April 30, 2021
    Publication date: September 12, 2024
    Inventors: Yinda Zhang, Danhang Tang, Ruofei Du, Zhang Chen, Kyle Genova, Sofien Bouaziz, Thomas Allen Funkhouser, Sean Ryan Francesco Fanello, Christian Haene
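    The coarse-plus-residual composition described above can be illustrated with a toy NumPy sketch — nearest-neighbor grid sampling and a single linear "decoder" are simplifications; grid sizes and feature dimensions are hypothetical:

```python
import numpy as np

def sample_grid(grid, pos):
    """Nearest-neighbor lookup of a feature vector from a dense 3D
    feature grid at a normalized position in [0, 1]^3 (a simple
    stand-in for trilinear interpolation)."""
    res = grid.shape[0]
    idx = np.clip((pos * res).astype(int), 0, res - 1)
    return grid[idx[0], idx[1], idx[2]]

def upsample(grid):
    """Double the grid resolution by repeating cells along each axis."""
    return grid.repeat(2, axis=0).repeat(2, axis=1).repeat(2, axis=2)

rng = np.random.default_rng(0)
coarse = rng.standard_normal((4, 4, 4, 8))    # 4^3 grid, 8-dim features
fine = rng.standard_normal((8, 8, 8, 8))      # 8^3 second grid
decoder = rng.standard_normal(8)              # toy linear "decoder"

pos = np.array([0.3, 0.6, 0.9])
v1 = sample_grid(coarse, pos)                  # first vector
v2 = sample_grid(fine + upsample(coarse), pos) # second vector, from the
                                               # second grid + upsampled first grid
sdf_coarse = decoder @ v1          # second IR of the 3D object (coarse)
sdf_residual = decoder @ v2        # third IR (fine residual)
composite = sdf_coarse + sdf_residual   # composite IR value at pos
```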
  • Patent number: 12073028
    Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signal, or signal samples, is then sent to processing circuitry for classification to identify the wrist movement as a gesture.
    Type: Grant
    Filed: February 24, 2023
    Date of Patent: August 27, 2024
    Assignee: GOOGLE LLC
    Inventors: Dongeek Shin, Shahram Izadi, David Kim, Sofien Bouaziz, Steven Benjamin Goldberg, Ivan Poupyrev, Shwetak N. Patel
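    The classification step can be sketched as follows — a toy nearest-centroid classifier over per-receiver signal energy, with the three-receiver layout from the abstract; the features and centroids are hypothetical, not the patent's classifier:

```python
import numpy as np

def classify_gesture(signal, centroids):
    """Classify a window of received radar signal samples by nearest
    centroid in feature space. Features here are simply the mean
    energy per receiver channel."""
    features = (signal ** 2).mean(axis=1)            # energy per receiver
    dists = np.linalg.norm(centroids - features, axis=1)
    return int(np.argmin(dists))

rng = np.random.default_rng(1)
# Three receivers, 64 samples each, as in the chip described above.
signal = rng.standard_normal((3, 64)) * 2.0
centroids = np.array([[1.0, 1.0, 1.0],    # gesture 0: low-energy motion
                      [4.0, 4.0, 4.0]])   # gesture 1: high-energy motion
gesture = classify_gesture(signal, centroids)
```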
  • Patent number: 12026833
    Abstract: Systems and methods are described for utilizing an image processing system with at least one processing device to perform operations including receiving a plurality of input images of a user, generating a three-dimensional mesh proxy based on a first set of features extracted from the plurality of input images and a second set of features extracted from the plurality of input images. The method may further include generating a neural texture based on a three-dimensional mesh proxy and the plurality of input images, generating a representation of the user including at least a neural texture, and sampling at least one portion of the neural texture from the three-dimensional mesh proxy. In response to providing the at least one sampled portion to a neural renderer, the method may include receiving, from the neural renderer, a synthesized image of the user that was not previously captured by the image processing system.
    Type: Grant
    Filed: October 28, 2020
    Date of Patent: July 2, 2024
    Assignee: Google LLC
    Inventors: Ricardo Martin Brualla, Moustafa Meshry, Daniel Goldman, Rohit Kumar Pandey, Sofien Bouaziz, Ke Li
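    The texture-sampling step can be illustrated with a minimal NumPy sketch — nearest-neighbor UV lookup and a single linear layer standing in for the neural renderer; all shapes are hypothetical:

```python
import numpy as np

def sample_neural_texture(texture, uv):
    """Sample feature vectors from a neural texture at the mesh
    proxy's UV coordinates (nearest-neighbor lookup for brevity)."""
    h, w, _ = texture.shape
    ij = np.clip((uv * [h - 1, w - 1]).astype(int), 0, [h - 1, w - 1])
    return texture[ij[:, 0], ij[:, 1]]

rng = np.random.default_rng(2)
texture = rng.standard_normal((16, 16, 8))   # 16x16 texture, 8-dim features
uv = rng.uniform(size=(100, 2))              # UVs from the 3D mesh proxy
render_weights = rng.standard_normal((8, 3)) # toy "neural renderer"

sampled = sample_neural_texture(texture, uv)
synthesized_rgb = sampled @ render_weights   # per-point RGB output
```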
  • Publication number: 20240212325
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 6, 2024
    Publication date: June 27, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
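    The geodesic loss idea can be sketched in a few lines — here a tiny precomputed geodesic distance matrix over surface vertices stands in for distances derived from a real 3D model:

```python
import numpy as np

def geodesic_loss(pred_vertices, true_vertices, geodesic):
    """Dense-correspondence loss: for each prediction, penalize the
    geodesic distance (along the model's surface) between the
    predicted and ground-truth vertices, not plain Euclidean error."""
    return geodesic[pred_vertices, true_vertices].mean()

# Toy geodesic distance matrix over 5 surface vertices (symmetric,
# zero diagonal), as would be precomputed from a synthetic 3D model.
geodesic = np.array([[0., 1., 2., 3., 4.],
                     [1., 0., 1., 2., 3.],
                     [2., 1., 0., 1., 2.],
                     [3., 2., 1., 0., 1.],
                     [4., 3., 2., 1., 0.]])

pred = np.array([0, 2, 4])
true = np.array([1, 2, 3])
loss = geodesic_loss(pred, true, geodesic)   # (1 + 0 + 1) / 3
```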
  • Publication number: 20240212251
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Application
    Filed: February 29, 2024
    Publication date: June 27, 2024
    Inventors: Sofien Bouaziz, Mark Pauly
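    The parameter-estimation step can be sketched with a blendshape model — a least-squares fit is a common simplification and stands in here for the patent's solver; vertex and blendshape counts are hypothetical:

```python
import numpy as np

def estimate_tracking_parameters(blendshapes, neutral, observed):
    """Estimate expression weights w so that neutral + sum_k w_k * B_k
    best matches the tracked face geometry (least-squares fit)."""
    A = blendshapes.reshape(blendshapes.shape[0], -1).T
    b = (observed - neutral).ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

rng = np.random.default_rng(3)
neutral = rng.standard_normal((20, 3))         # 20-vertex neutral face
blendshapes = rng.standard_normal((4, 20, 3))  # 4 expression deltas
true_w = np.array([0.5, 0.0, 0.2, 0.0])
observed = neutral + np.tensordot(true_w, blendshapes, axes=1)

w = estimate_tracking_parameters(blendshapes, neutral, observed)
```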
  • Publication number: 20240212106
    Abstract: Apparatus and methods related to applying lighting models to images are provided. An example method includes receiving, via a computing device, an image comprising a subject. The method further includes relighting, via a neural network, a foreground of the image to maintain a consistent lighting of the foreground with a target illumination. The relighting is based on a per-pixel light representation indicative of a surface geometry of the foreground. The light representation includes a specular component and a diffuse component of surface reflection. The method additionally includes predicting, via the neural network, an output image comprising the subject in the relit foreground. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
    Type: Application
    Filed: April 28, 2021
    Publication date: June 27, 2024
    Inventors: Chloe LeGendre, Paul Debevec, Sean Ryan Francesco Fanello, Rohit Kumar Pandey, Sergio Orts Escolano, Christian Haene, Sofien Bouaziz
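    The per-pixel light representation with diffuse and specular components can be illustrated with a classical shading sketch — Lambertian diffuse plus Blinn-Phong specular, which is an assumed stand-in for the representation the patent learns:

```python
import numpy as np

def per_pixel_light(normals, light_dir, view_dir, shininess=16):
    """Per-pixel diffuse (Lambertian) and specular (Blinn-Phong)
    components of surface reflection, driven by the foreground's
    surface geometry (its normal map)."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    view_dir = view_dir / np.linalg.norm(view_dir)
    half = light_dir + view_dir
    half /= np.linalg.norm(half)
    diffuse = np.clip(normals @ light_dir, 0.0, None)
    specular = np.clip(normals @ half, 0.0, None) ** shininess
    return diffuse, specular

# Toy normal map for a 4-pixel foreground, all pixels facing the camera.
normals = np.tile([0.0, 0.0, 1.0], (4, 1))
diffuse, specular = per_pixel_light(normals,
                                    light_dir=np.array([0.0, 0.0, 1.0]),
                                    view_dir=np.array([0.0, 0.0, 1.0]))
```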
  • Patent number: 11995899
    Abstract: A head-mounted device (HMD) can be configured to determine a request for recognizing at least one content item included within content framed within a display of the HMD. The HMD can be configured to initiate a head-tracking process that maintains a coordinate system with respect to the content, and a pointer-tracking process that tracks a pointer that is visible together with the content within the display. The HMD can be configured to capture a first image of the content and a second image of the content, the second image including the pointer. The HMD can be configured to map a location of the pointer within the second image to a corresponding image location within the first image, using the coordinate system, and provide the at least one content item from the corresponding image location.
    Type: Grant
    Filed: April 29, 2021
    Date of Patent: May 28, 2024
    Assignee: Google LLC
    Inventors: Qinge Wu, Grant Yoshida, Catherine Boulanger, Erik Hubert Dolly Goossens, Cem Keskin, Sofien Bouaziz, Jonathan James Taylor, Nidhi Rathi, Seth Raphael
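    The mapping step — taking the pointer's location in the second image back to the first image via the maintained coordinate system — can be sketched with a planar homography, which is an assumption here (the patent only says "coordinate system"):

```python
import numpy as np

def map_pointer(H, point):
    """Map a pointer location in the second image to the corresponding
    location in the first image using a homography H relating the two
    captures of the content."""
    p = np.array([point[0], point[1], 1.0])
    q = H @ p
    return q[:2] / q[2]

# Hypothetical homography: the head moved so the framed content
# shifted by (5, -3) pixels between the first and second capture.
H = np.array([[1.0, 0.0,  5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0,  1.0]])

pointer_in_second = (120.0, 80.0)
location_in_first = map_pointer(H, pointer_in_second)
```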
  • Patent number: 11978268
    Abstract: Methods, systems, and apparatus including computer programs encoded on a computer storage medium, for generating convex decomposition of objects using neural network models. One of the methods includes receiving an input that depicts an object. The input is processed using a neural network to generate an output that defines a convex representation of the object. The output includes, for each of a plurality of convex elements, respective parameters that define a position of the convex element in the convex representation of the object.
    Type: Grant
    Filed: November 18, 2022
    Date of Patent: May 7, 2024
    Assignee: Google LLC
    Inventors: Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey E. Hinton, Andrea Tagliasacchi
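    A convex element parameterized by half-space constraints — the core representation the abstract describes — can be sketched as a soft intersection test; the smooth-max trick and all shapes here are illustrative, not the patent's exact formulation:

```python
import numpy as np

def convex_indicator(points, normals, offsets, smoothness=10.0):
    """Soft inside/outside test for a convex element defined by
    half-space constraints n·x + d <= 0: a point is inside when
    every constraint is satisfied; a log-sum-exp over planes
    approximates the hard intersection differentiably."""
    sd = points @ normals.T + offsets          # (points, planes) signed dists
    h = np.log(np.exp(smoothness * sd).sum(axis=1)) / smoothness
    return h <= 0.0

# A unit cube centered at the origin, as six half-spaces.
normals = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]], dtype=float)
offsets = -0.5 * np.ones(6)

inside = convex_indicator(np.array([[0.0, 0.0, 0.0]]), normals, offsets)
outside = convex_indicator(np.array([[2.0, 0.0, 0.0]]), normals, offsets)
```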
  • Patent number: 11954899
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: April 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Patent number: 11948238
    Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
    Type: Grant
    Filed: May 27, 2022
    Date of Patent: April 2, 2024
    Assignee: Apple Inc.
    Inventors: Sofien Bouaziz, Mark Pauly
  • Publication number: 20240046618
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 11, 2021
    Publication date: February 8, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Patent number: 11868523
    Abstract: Techniques of tracking a user's gaze include identifying a region of a display at which a gaze of a user is directed, the region including a plurality of pixels. By determining a region rather than a point, when the regions correspond to elements of a user interface, the improved technique enables a system to activate the element corresponding to the selected region. In some implementations, the system makes the determination using a classification engine including a convolutional neural network; such an engine takes as input images of the user's eye and outputs a list of probabilities that the gaze is directed to each of the regions.
    Type: Grant
    Filed: July 1, 2021
    Date of Patent: January 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Ivana Tosic Rodgers, Sean Ryan Francesco Fanello, Sofien Bouaziz, Rohit Kumar Pandey, Eric Aboussouan, Adarsh Prakash Murthy Kowdle
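    The classifier's final stage — turning eye-image features into one probability per screen region — can be sketched with a softmax head; the feature dimension, region count, and weights are hypothetical:

```python
import numpy as np

def gaze_region_probabilities(eye_features, weights):
    """Map eye-image features to a probability per screen region with
    a softmax head (the last layer of a CNN classifier like the one
    the abstract describes)."""
    logits = eye_features @ weights
    e = np.exp(logits - logits.max())   # subtract max for stability
    return e / e.sum()

rng = np.random.default_rng(4)
eye_features = rng.standard_normal(32)   # features from the eye images
weights = rng.standard_normal((32, 6))   # 6 UI regions on the display

probs = gaze_region_probabilities(eye_features, weights)
activated_region = int(np.argmax(probs))  # UI element to activate
```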
  • Publication number: 20240005590
    Abstract: Techniques of image synthesis using a neural radiance field (NeRF) include generating a deformation model of movement experienced by a subject in a non-rigidly deforming scene. For example, when an image synthesis system uses NeRFs, the system takes as input multiple poses of subjects for training data. In contrast to conventional NeRFs, the technical solution first expresses the positions of the subjects from various perspectives in an observation frame. The technical solution then involves deriving a deformation model, i.e., a mapping between the observation frame and a canonical frame in which the subject's movements are taken into account. This mapping is accomplished using latent deformation codes for each pose that are determined using a multilayer perceptron (MLP). A NeRF is then derived from positions and cast ray directions in the canonical frame using another MLP. New poses for the subject may then be derived using the NeRF.
    Type: Application
    Filed: January 14, 2021
    Publication date: January 4, 2024
    Inventors: Ricardo Martin Brualla, Keunhong Park, Utkarsh Sinha, Sofien Bouaziz, Daniel Goldman, Jonathan Tilton Barron, Steven Maxwell Seitz
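    The observation-to-canonical mapping can be sketched as a tiny offset network conditioned on a per-pose latent code — one hand-rolled layer stands in for the deformation MLP, and all dimensions are hypothetical:

```python
import numpy as np

def deform_to_canonical(x_obs, latent_code, W1, W2):
    """Deformation model: map a point in the observation frame into
    the canonical frame, conditioned on a per-pose latent deformation
    code. The NeRF would then be queried at the canonical point."""
    h = np.tanh(W1 @ np.concatenate([x_obs, latent_code]))
    offset = W2 @ h
    return x_obs + offset   # canonical-frame position

rng = np.random.default_rng(5)
latent_code = rng.standard_normal(8)            # latent code for this pose
W1 = rng.standard_normal((16, 3 + 8)) * 0.1     # toy MLP weights
W2 = rng.standard_normal((3, 16)) * 0.1

x_obs = np.array([0.1, -0.2, 0.3])
x_canonical = deform_to_canonical(x_obs, latent_code, W1, W2)
```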
  • Publication number: 20230419600
    Abstract: Example embodiments relate to techniques for volumetric performance capture with neural rendering. A technique may involve initially obtaining images that depict a subject from multiple viewpoints and under various lighting conditions using a light stage and depth data corresponding to the subject using infrared cameras. A neural network may extract features of the subject from the images based on the depth data and map the features into a texture space (e.g., the UV texture space). A neural renderer can be used to generate an output image depicting the subject from a target view such that illumination of the subject in the output image aligns with the target view. The neural renderer may resample the features of the subject from the texture space to an image space to generate the output image.
    Type: Application
    Filed: November 5, 2020
    Publication date: December 28, 2023
    Inventors: Sean Ryan Francesco Fanello, Abhi Meka, Rohit Kumar Pandey, Christian Haene, Sergio Orts Escolano, Christoph Rhemann, Paul Debevec, Sofien Bouaziz, Thabo Beeler, Ryan Overbeck, Peter Barnum, Daniel Erickson, Philip Davidson, Yinda Zhang, Jonathan Taylor, Chloe LeGendre, Shahram Izadi
  • Publication number: 20230393665
    Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signal, or signal samples, is then sent to processing circuitry for classification to identify the wrist movement as a gesture.
    Type: Application
    Filed: February 24, 2023
    Publication date: December 7, 2023
    Inventors: Dongeek Shin, Shahram Izadi, David Kim, Sofien Bouaziz, Steven Benjamin Goldberg, Ivan Poupyrev, Shwetak N. Patel
  • Patent number: 11836838
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Grant
    Filed: December 3, 2020
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
  • Publication number: 20230360182
    Abstract: Apparatus and methods related to applying lighting models to images of objects are provided. An example method includes applying a geometry model to an input image to determine a surface orientation map indicative of a distribution of lighting on an object based on a surface geometry. The method further includes applying an environmental light estimation model to the input image to determine a direction of synthetic lighting to be applied to the input image. The method also includes applying, based on the surface orientation map and the direction of synthetic lighting, a light energy model to determine a quotient image indicative of an amount of light energy to be applied to each pixel of the input image. The method additionally includes enhancing, based on the quotient image, a portion of the input image. One or more neural networks can be trained to perform one or more of the aforementioned aspects.
    Type: Application
    Filed: May 17, 2021
    Publication date: November 9, 2023
    Inventors: Sean Ryan Francesco Fanello, Yun-Ta Tsai, Rohit Kumar Pandey, Paul Debevec, Michael Milne, Chloe LeGendre, Jonathan Tilton Barron, Christoph Rhemann, Sofien Bouaziz, Navin Padman Sarma
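    The quotient-image idea — a per-pixel multiplier derived from the surface orientation map and the synthetic light direction — can be sketched with a simple Lambertian shading model; the ambient term and normalization are illustrative assumptions:

```python
import numpy as np

def quotient_image(orientation_map, light_dir, ambient=0.2):
    """Light-energy model: per-pixel quotient of synthetic shading
    over a fully lit baseline, computed from the surface orientation
    map and the chosen synthetic light direction."""
    shading = ambient + np.clip(orientation_map @ light_dir, 0.0, None)
    return shading / (ambient + 1.0)   # normalize against full exposure

# Toy 2x2 surface orientation (normal) map, every pixel facing forward.
orientation_map = np.tile([0.0, 0.0, 1.0], (2, 2, 1))
light_dir = np.array([0.0, 0.0, 1.0])   # synthetic light from the front

q = quotient_image(orientation_map, light_dir)
input_pixels = np.full((2, 2), 0.5)
enhanced = input_pixels * q   # enhanced portion of the input image
```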
  • Patent number: 11810313
    Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo data, where the stereo data includes a first input image and a second input image, and a depth sensing computing system configured to predict a depth map. The depth sensing computing system includes a feature extractor configured to extract features from the first and second images at a plurality of resolutions, an initialization engine configured to generate a plurality of depth estimations, where each of the plurality of depth estimations corresponds to a different resolution, and a propagation engine configured to iteratively refine the plurality of depth estimations based on image warping and spatial propagation.
    Type: Grant
    Filed: February 19, 2021
    Date of Patent: November 7, 2023
    Assignee: GOOGLE LLC
    Inventors: Vladimir Tankovich, Christian Haene, Sean Ryan Francesco Fanello, Yinda Zhang, Shahram Izadi, Sofien Bouaziz, Adarsh Prakash Murthy Kowdle, Sameh Khamis
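    One refinement step of the propagation engine — warp by candidate disparities, keep the one with the lowest photometric error — can be sketched on a 1-D scanline; the toy stereo pair and candidate offsets are illustrative, not the patent's network:

```python
import numpy as np

def refine_disparity(left, right, disparity, offsets=(-1, 0, 1)):
    """One propagation step: for each pixel, warp the right scanline
    by candidate disparities (current estimate plus small offsets)
    and keep the candidate with the lowest photometric error."""
    n = left.shape[0]
    refined = disparity.copy()
    for i in range(n):
        best_err = np.inf
        for off in offsets:
            d = disparity[i] + off
            j = i - d                      # warped sample location
            if 0 <= j < n:
                err = abs(left[i] - right[j])
                if err < best_err:
                    best_err, refined[i] = err, d
    return refined

# 1-D toy stereo pair: the right scanline is the left shifted by 2.
left = np.array([0., 1., 2., 3., 4., 5., 6., 7.])
right = np.roll(left, -2)
init = np.full(8, 1, dtype=int)   # coarse initial disparity estimate
refined = refine_disparity(left, right, init)
```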
  • Publication number: 20230343013
    Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; and (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
    Type: Application
    Filed: May 9, 2023
    Publication date: October 26, 2023
    Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly