Patents by Inventor Sofien Bouaziz
Sofien Bouaziz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20220350997
Abstract: A head-mounted device (HMD) can be configured to determine a request for recognizing at least one content item included within content framed within a display of the HMD. The HMD can be configured to initiate a head-tracking process that maintains a coordinate system with respect to the content, and a pointer-tracking process that tracks a pointer that is visible together with the content within the display. The HMD can be configured to capture a first image of the content and a second image of the content, the second image including the pointer. The HMD can be configured to map a location of the pointer within the second image to a corresponding image location within the first image, using the coordinate system, and provide the at least one content item from the corresponding image location.
Type: Application
Filed: April 29, 2021
Publication date: November 3, 2022
Inventors: Qinge Wu, Grant Yoshida, Catherine Boulanger, Erik Hubert Dolly Goossens, Cem Keskin, Sofien Bouaziz, Jonathan James Taylor, Nidhi Rathi, Seth Raphael
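The mapping step in this abstract, from a pointer pixel in the second image to the corresponding location in the first, can be illustrated with plane homographies against the tracked content frame. This is a minimal sketch: the homographies, point values, and the `map_pointer` helper are illustrative assumptions, not the patented method.

```python
import numpy as np

def to_content(H, pt):
    """Project an image pixel (x, y) into the shared content coordinate
    system using a 3x3 homography H (image -> content plane)."""
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

def map_pointer(H_first, H_second, pointer_px):
    """Map a pointer location observed in the second image to the
    corresponding location in the first image via the content frame."""
    content_pt = to_content(H_second, pointer_px)          # second image -> content
    return to_content(np.linalg.inv(H_first), content_pt)  # content -> first image

# With identical viewpoints the pointer maps to the same pixel.
same = map_pointer(np.eye(3), np.eye(3), (120.0, 80.0))

# If the first frame's coordinates sit 10 px to the left of the content
# frame, the pointer lands 10 px to the left in the first image.
H_shift = np.array([[1.0, 0.0, 10.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
shifted = map_pointer(H_shift, np.eye(3), (120.0, 80.0))
```

Keeping a head-tracked coordinate system between captures is what makes this composition valid even when the two images were taken from different head poses.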
-
Publication number: 20220326766
Abstract: A wearable computing device includes a frame, a camera mounted on the frame so as to capture images of an environment outside of the wearable computing device, a display device mounted on the frame so as to display the images captured by the camera, and at least one eye gaze tracking device mounted on the frame so as to track a gaze directed at the images displayed by the display device. In response to the detection of a fixation of the gaze on the display of images, the system may identify a pixel area corresponding to a fixation point of the fixation gaze on the display of images. The system may identify an object in the ambient environment corresponding to the identified pixel area, and set the identified object as a selected object for user interaction.
Type: Application
Filed: April 8, 2021
Publication date: October 13, 2022
Inventors: Jason Todd Spencer, Seth Raphael, Sofien Bouaziz
-
Publication number: 20220300082
Abstract: Techniques of identifying gestures include detecting and classifying inner-wrist muscle motions at a user's wrist using micron-resolution radar sensors. For example, a user of an AR system may wear a band around their wrist. When the user makes a gesture to manipulate a virtual object in the AR system as seen in a head-mounted display (HMD), muscles and ligaments in the user's wrist make small movements on the order of 1-3 mm. The band contains a small radar device that has a transmitter and a number of receivers (e.g., three) of electromagnetic (EM) radiation on a chip (e.g., a Soli chip). This radiation reflects off the wrist muscles and ligaments and is received by the receivers on the chip in the band. The received reflected signals, or signal samples, are then sent to processing circuitry for classification to identify the wrist movement as a gesture.
Type: Application
Filed: March 19, 2021
Publication date: September 22, 2022
Inventors: Dongeek Shin, Shahram Izadi, David Kim, Sofien Bouaziz, Steven Benjamin Goldberg, Ivan Poupyrev, Shwetak N. Patel
-
Patent number: 11348299
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Grant
Filed: January 27, 2020
Date of Patent: May 31, 2022
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Mark Pauly
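The parameter-estimation step described here is, at its core, a fit of expression weights to tracking data. A hedged sketch of that idea, where the flattened-vertex representation, the linear blendshape model, and the clipping to [0, 1] are simplifying assumptions rather than the patented implementation:

```python
import numpy as np

def estimate_tracking_parameters(neutral, blendshapes, target):
    """Fit expression weights w so that neutral + blendshapes @ w best
    matches the tracked target geometry (vertices flattened to a vector),
    then clip to the usual valid blendshape range [0, 1]."""
    w, *_ = np.linalg.lstsq(blendshapes, target - neutral, rcond=None)
    return np.clip(w, 0.0, 1.0)

# Toy model: a single blendshape that raises the first "vertex" coordinate.
neutral = np.zeros(4)
basis = np.array([[1.0], [0.0], [0.0], [0.0]])   # (4 coords, 1 blendshape)
target = np.array([0.5, 0.0, 0.0, 0.0])          # halfway toward that shape
weights = estimate_tracking_parameters(neutral, basis, target)
```

The "refining the dynamic expression model" step of the abstract would then adapt the basis itself to the user over time, which a fixed `basis` matrix like the one above does not capture.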
-
Patent number: 11347320
Abstract: A computing device, such as a wearable device, may include a gesture sensor that generates a gesture signal in response to a gesture of a user performed while the computing device is being worn or held by the user. A calibration sensor may generate a calibration signal characterizing a degree of tightness with which the computing device is being worn or held by the user. The gesture signal may be calibrated using the calibration signal, to obtain a calibrated gesture signal that is calibrated with respect to the degree of tightness. At least one function of the at least one computing device may be implemented, based on the calibrated gesture signal.
Type: Grant
Filed: June 29, 2021
Date of Patent: May 31, 2022
Assignee: Google LLC
Inventors: Dongeek Shin, David Kim, Sofien Bouaziz
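The calibration described here amounts to compensating the gesture signal for how tightly the device is worn. A minimal sketch, assuming a linear gain model and a 0..1 tightness scale (both illustrative assumptions, not the patented calibration):

```python
def calibrate_gesture(gesture_signal, tightness, nominal_tightness=0.8):
    """Rescale a raw gesture signal by how tightly the device is worn:
    a loose fit attenuates the signal, so the gain compensates for it."""
    gain = nominal_tightness / max(tightness, 1e-6)  # guard against divide-by-zero
    return [sample * gain for sample in gesture_signal]

snug = calibrate_gesture([1.0, 2.0], tightness=0.8)   # worn at nominal tightness
loose = calibrate_gesture([1.0, 2.0], tightness=0.4)  # worn loosely: gain of 2
```

The point of calibrating before classification is that the same physical gesture then produces a comparable signal regardless of strap fit.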
-
Patent number: 11335023
Abstract: According to an aspect, a method for pose estimation using a convolutional neural network includes extracting features from an image, downsampling the features to a lower resolution, arranging the features into sets of features, where each set of features corresponds to a separate keypoint of a pose of a subject, updating, by at least one convolutional block, each set of features based on features of one or more neighboring keypoints using a kinematic structure, and predicting the pose of the subject using the updated sets of features.
Type: Grant
Filed: May 22, 2020
Date of Patent: May 17, 2022
Assignee: Google LLC
Inventors: Sameh Khamis, Christian Haene, Hossam Isack, Cem Keskin, Sofien Bouaziz, Shahram Izadi
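The "update each keypoint's features from its kinematic neighbours" step can be sketched with plain matrix arithmetic. The five-keypoint skeleton, the mean aggregation, and the single mixing matrix below are illustrative stand-ins for the learned convolutional block, not the patented network:

```python
import numpy as np

# Hypothetical kinematic structure over five keypoints:
# 0=head, 1=neck, 2=left shoulder, 3=right shoulder, 4=hip.
KINEMATIC_NEIGHBOURS = {0: [1], 1: [0, 2, 3, 4], 2: [1], 3: [1], 4: [1]}

def refine_features(features, mix, neighbours=KINEMATIC_NEIGHBOURS):
    """One refinement step: each keypoint's feature vector is updated by
    adding a mixing matrix applied to the mean of its kinematic
    neighbours' features."""
    refined = {}
    for k, feat in features.items():
        agg = np.mean([features[j] for j in neighbours[k]], axis=0)
        refined[k] = feat + mix @ agg
    return refined

feats = {k: np.full(2, float(k)) for k in range(5)}
unchanged = refine_features(feats, mix=np.zeros((2, 2)))  # zero mixing: identity
```

Tying the update pattern to the skeleton means, for example, that evidence about the neck can disambiguate an occluded shoulder, which a per-keypoint predictor could not do.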
-
Publication number: 20220130111
Abstract: Systems and methods are described for utilizing an image processing system with at least one processing device to perform operations including receiving a plurality of input images of a user, generating a three-dimensional mesh proxy based on a first set of features extracted from the plurality of input images and a second set of features extracted from the plurality of input images. The method may further include generating a neural texture based on a three-dimensional mesh proxy and the plurality of input images, generating a representation of the user including at least a neural texture, and sampling at least one portion of the neural texture from the three-dimensional mesh proxy. In response to providing the at least one sampled portion to a neural renderer, the method may include receiving, from the neural renderer, a synthesized image of the user that is previously not captured by the image processing system.
Type: Application
Filed: October 28, 2020
Publication date: April 28, 2022
Inventors: Ricardo Martin Brualla, Moustafa Meshry, Daniel Goldman, Rohit Kumar Pandey, Sofien Bouaziz, Ke Li
-
Publication number: 20220051485
Abstract: Systems and methods are described for generating a plurality of three-dimensional (3D) proxy geometries of an object, generating, based on the plurality of 3D proxy geometries, a plurality of neural textures of the object, the neural textures defining a plurality of different shapes and appearances representing the object, providing the plurality of neural textures to a neural renderer, receiving, from the neural renderer and based on the plurality of neural textures, a color image and an alpha mask representing an opacity of at least a portion of the object, and generating a composite image based on the pose, the color image, and the alpha mask.
Type: Application
Filed: August 4, 2020
Publication date: February 17, 2022
Inventors: Ricardo Martin Brualla, Daniel Goldman, Sofien Bouaziz, Rohit Kumar Pandey, Matthew Brown
-
Publication number: 20210366146
Abstract: According to an aspect, a method for pose estimation using a convolutional neural network includes extracting features from an image, downsampling the features to a lower resolution, arranging the features into sets of features, where each set of features corresponds to a separate keypoint of a pose of a subject, updating, by at least one convolutional block, each set of features based on features of one or more neighboring keypoints using a kinematic structure, and predicting the pose of the subject using the updated sets of features.
Type: Application
Filed: May 22, 2020
Publication date: November 25, 2021
Inventors: Sameh Khamis, Christian Haene, Hossam Isack, Cem Keskin, Sofien Bouaziz, Shahram Izadi
-
Publication number: 20210319209
Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for generating convex decompositions of objects using neural network models. One of the methods includes receiving an input that depicts an object. The input is processed using a neural network to generate an output that defines a convex representation of the object. The output includes, for each of a plurality of convex elements, respective parameters that define a position of the convex element in the convex representation of the object.
Type: Application
Filed: April 13, 2020
Publication date: October 14, 2021
Inventors: Boyang Deng, Kyle Genova, Soroosh Yazdani, Sofien Bouaziz, Geoffrey E. Hinton, Andrea Tagliasacchi
-
Publication number: 20210264632
Abstract: According to an aspect, a real-time active stereo system includes a capture system configured to capture stereo data, where the stereo data includes a first input image and a second input image, and a depth sensing computing system configured to predict a depth map. The depth sensing computing system includes a feature extractor configured to extract features from the first and second images at a plurality of resolutions, an initialization engine configured to generate a plurality of depth estimations, where each of the plurality of depth estimations corresponds to a different resolution, and a propagation engine configured to iteratively refine the plurality of depth estimations based on image warping and spatial propagation.
Type: Application
Filed: February 19, 2021
Publication date: August 26, 2021
Inventors: Vladimir Tankovich, Christian Haene, Sean Ryan Francesco Fanello, Yinda Zhang, Shahram Izadi, Sofien Bouaziz, Adarsh Prakash Murthy Kowdle, Sameh Khamis
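The initialize-then-propagate design described in the abstract can be caricatured as coarse-to-fine refinement of per-resolution depth estimates. The 2x nearest-neighbour upsampling and equal-weight blend below are placeholder choices, not the system's image warping or learned spatial propagation:

```python
import numpy as np

def coarse_to_fine(initial_estimates):
    """Refine depth across resolutions: start from the coarsest estimate,
    upsample it 2x (nearest neighbour via np.kron), and blend it with the
    initialization at the next finer resolution."""
    depth = initial_estimates[0]
    for finer_init in initial_estimates[1:]:
        depth = np.kron(depth, np.ones((2, 2)))  # naive 2x upsample
        depth = 0.5 * (depth + finer_init)       # placeholder propagation
    return depth

coarse = np.ones((1, 1))            # 1x1 estimate: depth 1 everywhere
fine_init = np.full((2, 2), 3.0)    # 2x2 initialization: depth 3
refined = coarse_to_fine([coarse, fine_init])
```

Starting from coarse estimates keeps the expensive matching work at low resolution, which is what makes this family of methods viable in real time.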
-
Patent number: 11068698
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Grant
Filed: September 27, 2019
Date of Patent: July 20, 2021
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Publication number: 20210174567
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Application
Filed: December 3, 2020
Publication date: June 10, 2021
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
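Step (c), using an animation prior to turn expression parameters into animation parameters, can be sketched as a lookup against a bank of valid animations. Representing the prior as (expression, animation) pairs and using a nearest-neighbour match are illustrative simplifications, not the claimed method:

```python
import numpy as np

def retarget_with_prior(expression_params, animation_prior):
    """Choose animation parameters by finding the prior entry whose
    expression parameters are closest to the tracked ones."""
    distances = [np.linalg.norm(expression_params - expr)
                 for expr, _animation in animation_prior]
    _, animation = animation_prior[int(np.argmin(distances))]
    return animation

# Toy prior: two expression vectors paired with named animation states.
prior = [(np.array([0.0, 0.0]), "neutral"),
         (np.array([1.0, 1.0]), "smile")]
chosen = retarget_with_prior(np.array([0.9, 1.1]), prior)
```

The benefit of routing through a prior, rather than driving the character directly from raw expression parameters, is that the character can only ever produce animations an artist has sanctioned.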
-
Patent number: 10997457
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Grant
Filed: October 16, 2019
Date of Patent: May 4, 2021
Assignee: Google LLC
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Patent number: 10861211
Abstract: A method of animating a digital character according to facial expressions of a user, comprising the steps of: (a) obtaining a 2D image and a 3D depth map of the face of the user; (b) determining expression parameters for a user-specific expression model so that a facial expression of the user-specific expression model represents the face of the user shown in the 2D image and 3D depth map; (c) using the expression parameters and an animation prior to determine animation parameters usable to animate a digital character, wherein the animation prior is a sequence of animation parameters which represent predefined animations of a digital character; (d) using the animation parameters to animate a digital character so that the digital character mimics the face of the user.
Type: Grant
Filed: July 2, 2018
Date of Patent: December 8, 2020
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Hao Li, Mark Pauly
-
Publication number: 20200372284
Abstract: Methods, systems, and media for relighting images using predicted deep reflectance fields are provided.
Type: Application
Filed: October 16, 2019
Publication date: November 26, 2020
Inventors: Christoph Rhemann, Abhimitra Meka, Matthew Whalen, Jessica Lynn Busch, Sofien Bouaziz, Geoffrey Douglas Harvey, Andrea Tagliasacchi, Jonathan Taylor, Paul Debevec, Peter Joseph Denny, Sean Ryan Francesco Fanello, Graham Fyffe, Jason Angelo Dourgarian, Xueming Yu, Adarsh Prakash Murthy Kowdle, Julien Pascal Christophe Valentin, Peter Christopher Lincoln, Rohit Kumar Pandey, Christian Häne, Shahram Izadi
-
Publication number: 20200160582
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Application
Filed: January 27, 2020
Publication date: May 21, 2020
Inventors: Sofien Bouaziz, Mark Pauly
-
Publication number: 20200125835
Abstract: A three-dimensional model (e.g., motion capture model) of a user is generated from captured images or captured video of the user. A machine learning network may track poses and expressions of the user to generate and refine the three-dimensional model. Refinement of the three-dimensional model may provide more accurate tracking of the user's face. Refining of the three-dimensional model may include refining the determinations of poses and expressions at defined locations (e.g., eye corners and/or nose) in the three-dimensional model. The refining may occur in an iterative process. Tracking of the three-dimensional model over time (e.g., during video capture) may be used to generate an animated three-dimensional model (e.g., an animated puppet) of the user that simulates the user's poses and expressions.
Type: Application
Filed: September 27, 2019
Publication date: April 23, 2020
Applicant: Apple Inc.
Inventors: Sofien Bouaziz, Brian Amberg, Thibaut Weise, Patrick Snape, Stefan Brugger, Alex Mansfield, Reinhard Knothe, Thomas Kiser
-
Patent number: 10586372
Abstract: Embodiments relate to a method for real-time facial animation, and a processing device for real-time facial animation. The method includes providing a dynamic expression model, receiving tracking data corresponding to a facial expression of a user, estimating tracking parameters based on the dynamic expression model and the tracking data, and refining the dynamic expression model based on the tracking data and estimated tracking parameters. The method may further include generating a graphical representation corresponding to the facial expression of the user based on the tracking parameters. Embodiments pertain to a real-time facial animation system.
Type: Grant
Filed: January 28, 2019
Date of Patent: March 10, 2020
Assignee: Apple Inc.
Inventors: Sofien Bouaziz, Mark Pauly
-
Patent number: 10452896
Abstract: Techniques are disclosed for creating avatars from image data of a person. According to these techniques, spatial facial attributes of a subject may be measured from an image representing the subject. The measured facial attributes may be matched to a three-dimensional avatar template. Other attributes of the subject may be identified, such as hair type, hair color, eye color, skin color and the like. For hair type, hair may be generated for the avatar by measuring spatial locations of hair of the subject and comparing the measured hair locations to locations of hair represented by a plurality of hair templates. A matching hair template may be selected from the plurality of hair templates, which may be used in generating the avatar. An avatar may be generated from the matching avatar template, which may be deformed according to the measured spatial facial attributes, and from the other attributes.
Type: Grant
Filed: September 6, 2017
Date of Patent: October 22, 2019
Assignee: Apple Inc.
Inventors: Thibaut Weise, Sofien Bouaziz, Atulit Kumar, Sarah Amsellem
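The hair-template selection step described here is a nearest-match search over stored point locations. A minimal sketch, where the template names, the point sets, and the mean-squared-distance criterion are illustrative assumptions rather than the patented matching procedure:

```python
import numpy as np

def match_hair_template(measured_points, templates):
    """Return the name of the hair template whose stored point locations
    are closest (mean squared distance) to the measured hair locations."""
    errors = {name: float(np.mean((measured_points - points) ** 2))
              for name, points in templates.items()}
    return min(errors, key=errors.get)

# Toy templates: three sampled hair locations each, in 2D for brevity.
templates = {"short": np.zeros((3, 2)),
             "long": np.full((3, 2), 5.0)}
measured = np.array([[0.1, 0.0], [0.0, 0.2], [0.1, 0.1]])
best = match_hair_template(measured, templates)
```

Once a template is chosen, the abstract's deformation step would then warp the templated avatar toward the measured facial attributes rather than using the template verbatim.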