Patents by Inventor Jason Saragih
Jason Saragih has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240061499
Abstract: A method for updating a gaze direction for a transmitter avatar in a receiver headset is provided. The method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar comprises estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in the direction of the fixation point. A headset, a memory in the headset storing instructions, and a processor configured to execute the instructions to perform the above method, are also provided.
Type: Application
Filed: December 16, 2022
Publication date: February 22, 2024
Inventors: Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Stephen Anthony Lombardi, Gabriel Bailowitz Schwartz, Shih-En Wei
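The eyeball-rotation step in this abstract — turning the avatar's eyes toward an estimated fixation point — can be sketched as a look-at rotation. This is an illustrative reconstruction, not code from the patent; the function name, the canonical forward axis, and the use of Rodrigues' formula are all assumptions.

```python
import numpy as np

def gaze_rotation(eyeball_center, fixation_point,
                  forward=np.array([0.0, 0.0, 1.0])):
    """Return a 3x3 rotation matrix turning an eyeball's canonical
    forward axis toward a fixation point (Rodrigues' formula)."""
    target = np.asarray(fixation_point, float) - np.asarray(eyeball_center, float)
    target /= np.linalg.norm(target)
    axis = np.cross(forward, target)
    s = np.linalg.norm(axis)             # sin of the rotation angle
    c = float(np.dot(forward, target))   # cos of the rotation angle
    if s < 1e-8:
        # Already aligned (the 180-degree case is not handled in this sketch).
        return np.eye(3)
    axis /= s
    # Cross-product (skew-symmetric) matrix of the rotation axis.
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    return np.eye(3) + s * K + (1.0 - c) * (K @ K)
```

Applying the returned matrix to the forward axis yields a unit vector pointing from the eyeball center toward the fixation point.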
-
Publication number: 20240064413
Abstract: A method to generate relightable avatars with an arbitrary illumination configuration is provided. The method includes selecting a lighting configuration for collecting a sequence of pictures of a subject, the lighting configuration including a spatial pattern with multiple lights surrounding the subject and a time lapse pattern with multiple camera exposure windows, modifying the spatial pattern and the time lapse pattern based on an average illumination intensity provided to the subject, activating the lights in a sequence based on the spatial pattern and the time lapse pattern, and collecting multiple pictures from multiple cameras surrounding the subject at each of the camera exposure windows. A memory storing instructions, a processor configured to execute the instructions, and a system which is caused, upon the executed instructions, to perform the above method, are also provided.
Type: Application
Filed: December 16, 2022
Publication date: February 22, 2024
Inventors: Jason Saragih, Tomas Simon Kreuz, Kevyn Alex Anthony McPhail, María Murcia López
-
Publication number: 20230326112
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
-
Patent number: 11734888
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Grant
Filed: August 6, 2021
Date of Patent: August 22, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
-
Publication number: 20230254300
Abstract: A method for authenticating an avatar for use in a virtual reality/augmented reality (VR/AR) application is provided. The method includes receiving, from a client device, a request for authenticating an identity of a user of an immersive reality application running in the client device, wherein the user is associated with a subject-based avatar in the immersive reality application. The method also includes verifying, in a server, a public key provided by the client device against a private key stored in the server, the private key associated with the subject-based avatar, providing, to the client device, a certificate of validity of the identity of the user when the public key matches the private key, and storing an encrypted version of the certificate of validity in a memory. A memory storing instructions and a system to perform the above method are also provided.
Type: Application
Filed: February 4, 2022
Publication date: August 10, 2023
Inventors: Barry David Silverstein, Tomas Simon Kreuz, Jason Saragih
-
Publication number: 20230245365
Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method, are also provided.
Type: Application
Filed: December 2, 2022
Publication date: August 3, 2023
Inventors: Chen Cao, Stuart Anderson, Tomas Simon Kreuz, Jin Kyu Kim, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Anthony Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, Jason Saragih
-
Patent number: 11715248
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Grant
Filed: January 20, 2022
Date of Patent: August 1, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
-
Publication number: 20220358719
Abstract: A method for providing real-time three-dimensional facial animation from video is provided. The method includes collecting images of a subject, and forming a three-dimensional mesh for the subject based on a facial expression factor and a head pose of the subject extracted from the images of the subject. The method also includes forming a texture transformation based on an illumination parameter associated with an illumination configuration for the images from the subject, forming a three-dimensional model for the subject based on the three-dimensional mesh and the texture transformation, determining a loss factor based on selected points in a test image from the subject and a rendition of the test image by the three-dimensional model, and updating the three-dimensional model according to the loss factor. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Application
Filed: August 6, 2021
Publication date: November 10, 2022
Inventors: Chen Cao, Vasu Agrawal, Fernando De la Torre, Lele Chen, Jason Saragih, Tomas Simon Kreuz, Yaser Sheikh
-
Patent number: 11423616
Abstract: In one embodiment, a system may access an input image of an object captured by cameras, and the input image depicts appearance information associated with the object. The system may generate a first mesh of the object based on features identified from the input image of the object. The system may generate, by processing the first mesh using a machine-learning model, a position map that defines a contour of the object. Each pixel in the position map corresponds to a three-dimensional coordinate. The system may further generate a second mesh based on the position map, wherein the second mesh has a higher resolution than the first mesh. The system may render an output image of the object based on the second mesh. The disclosed system can thus render a dense, higher-resolution mesh that provides details which cannot be recovered from texture information alone.
Type: Grant
Filed: March 27, 2020
Date of Patent: August 23, 2022
Assignee: Facebook Technologies, LLC
Inventors: Tomas Simon Kreuz, Jason Saragih, Stephen Anthony Lombardi, Shugao Ma, Gabriel Bailowitz Schwartz
-
Publication number: 20220245910
Abstract: A method for training a real-time model for animating an avatar for a subject is provided. The method includes collecting multiple images of a subject. The method also includes selecting a plurality of vertex positions in a guide mesh, indicative of a volumetric primitive enveloping the subject, determining a geometric attribute for the volumetric primitive including a position, a rotation, and a scale factor of the volumetric primitive, determining a payload attribute for each volumetric primitive, the payload attribute including a color value and an opacity value for each voxel in a voxel grid defining the volumetric primitive, determining a loss factor for each point in the volumetric primitive based on the geometric attribute, the payload attribute and a ground truth value, and updating a three-dimensional model for the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Application
Filed: December 17, 2021
Publication date: August 4, 2022
Inventors: Stephen Anthony Lombardi, Tomas Simon Kreuz, Jason Saragih, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Yaser Sheikh
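The geometric attribute (position, rotation, scale) and payload attribute (per-voxel color and opacity) this abstract assigns to each volumetric primitive map naturally onto a small data structure. The sketch below is hypothetical: it assumes an N×N×N RGBA voxel grid per primitive and uses a plain mean-squared-error loss against a ground-truth grid, which is a simplification of whatever loss the patent actually uses.

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class VolumetricPrimitive:
    position: np.ndarray   # (3,) center of the primitive
    rotation: np.ndarray   # (3, 3) orientation
    scale: np.ndarray      # (3,) per-axis extent
    rgba: np.ndarray       # (N, N, N, 4) color (RGB) + opacity per voxel

def primitive_loss(prim: VolumetricPrimitive, target_rgba: np.ndarray) -> float:
    """Mean squared error between a primitive's color/opacity payload
    and a ground-truth voxel grid of the same shape."""
    return float(np.mean((prim.rgba - target_rgba) ** 2))
```

In a training loop, the geometric and payload attributes would be predicted by a network and updated by backpropagating this kind of loss; the sketch only fixes the shapes involved.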
-
Publication number: 20220237843
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Application
Filed: January 20, 2022
Publication date: July 28, 2022
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
-
Publication number: 20220207831
Abstract: A method for simulating a solid body animation of a subject includes retrieving a first frame that includes a body image of a subject. The method also includes selecting, from the first frame, multiple key points within the body image of the subject that define a hull of a body part and multiple joint points that define a joint between two body parts, identifying a geometry, a speed, and a mass of the body part to include in a dynamic model of the subject, based on the key points and the joint points, determining, based on the dynamic model of the subject, a pose of the subject in a second frame after the first frame in a video stream, and providing the video stream to an immersive reality application running on a client device.
Type: Application
Filed: December 20, 2021
Publication date: June 30, 2022
Inventors: Jason Saragih, Shih-En Wei, Tomas Simon Kreuz, Kris Makoto Kitani, Ye Yuan
-
Publication number: 20220198731
Abstract: A method of forming a pixel-aligned volumetric avatar includes receiving multiple two-dimensional images having at least two or more fields of view of a subject. The method also includes extracting multiple image features from the two-dimensional images using a set of learnable weights, projecting the image features along a direction between a three-dimensional model of the subject and a selected observation point for a viewer, and providing, to the viewer, an image of the three-dimensional model of the subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method, are also provided.
Type: Application
Filed: December 20, 2021
Publication date: June 23, 2022
Inventors: Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
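The feature-projection step described here — relating image features to points on the three-dimensional model — resembles the common pixel-aligned feature pattern: project a 3D point into each image with the camera intrinsics and bilinearly interpolate the feature map at the resulting pixel. The sketch below is illustrative only; the function name, the pinhole intrinsics assumption, and the (H, W, C) feature-map layout are not taken from the patent.

```python
import numpy as np

def sample_pixel_aligned(feature_map, point_3d, K):
    """Project a 3D point with pinhole intrinsics K and bilinearly
    sample an (H, W, C) feature map at the resulting pixel location."""
    uvw = K @ np.asarray(point_3d, float)
    u, v = uvw[0] / uvw[2], uvw[1] / uvw[2]   # perspective divide
    H, W, _ = feature_map.shape
    # Integer corner of the 2x2 neighborhood, clamped inside the map.
    x0 = int(np.clip(np.floor(u), 0, W - 2))
    y0 = int(np.clip(np.floor(v), 0, H - 2))
    fx, fy = u - x0, v - y0                   # fractional offsets
    f = feature_map
    return ((1 - fx) * (1 - fy) * f[y0, x0] + fx * (1 - fy) * f[y0, x0 + 1]
            + (1 - fx) * fy * f[y0 + 1, x0] + fx * fy * f[y0 + 1, x0 + 1])
```

Sampling one such feature vector per camera and fusing them (e.g. by averaging) gives a per-point descriptor conditioned on the input views.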
-
Publication number: 20220201273
Abstract: A device for providing a reverse pass-through view of a user of a headset display to an onlooker includes an eyepiece comprising an optical surface configured to provide an image to a user on a first side of the optical surface. The device also includes a first camera configured to collect an image of a portion of a face of the user reflected from the optical surface in a first field of view, a display adjacent to the optical surface and configured to project forward an image of the face of the user, and a screen configured to receive light from the display and provide the image of the face of the user to an onlooker.
Type: Application
Filed: December 17, 2021
Publication date: June 23, 2022
Inventors: Nathan Matsuda, Brian Wheelwright, Joel Hegland, Stephen Anthony Lombardi, Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Michael Zollhoefer, Amit Raj, James Henry Hays
-
Patent number: 11182947
Abstract: In one embodiment, a system may access a codec that encodes an appearance associated with a subject and comprises codec portions that respectively correspond to body parts of the subject. The system may generate a training codec that comprises a first subset of the codec portions (a first set of body parts) and a modified second subset of the codec portions (muted body parts). The system may decode the training codec using a machine-learning model to generate a mesh of the subject. The system may transform the mesh of the subject based on a predetermined pose. The system may update the machine-learning model based on a comparison between the transformed mesh and a target mesh of the subject having the predetermined pose. The disclosed system can train a machine-learning model to render an avatar with a pose using uncorrelated codec portions corresponding to different body parts.
Type: Grant
Filed: April 17, 2020
Date of Patent: November 23, 2021
Assignee: Facebook Technologies, LLC
Inventors: Chenglei Wu, Jason Saragih, Tomas Simon Kreuz, Takaaki Shiratori
-
Patent number: 11087521
Abstract: The disclosed computer system may include an input module, an autoencoder, and a rendering module. The input module may receive geometry information and images of a subject. The geometry information may be indicative of variation in geometry of the subject over time. Each image may be associated with a respective viewpoint and may include a view-dependent texture map of the subject. The autoencoder may jointly encode texture information and the geometry information to provide a latent vector. The autoencoder may infer, using the latent vector, an inferred geometry and an inferred view-dependent texture of the subject for a predicted viewpoint. The rendering module may be configured to render a reconstructed image of the subject for the predicted viewpoint using the inferred geometry and the inferred view-dependent texture. Various other systems and methods are also disclosed.
Type: Grant
Filed: January 29, 2020
Date of Patent: August 10, 2021
Assignee: Facebook Technologies, LLC
Inventors: Stephen Anthony Lombardi, Jason Saragih, Yaser Sheikh, Takaaki Shiratori, Shoou-I Yu, Tomas Simon Kreuz, Chenglei Wu
-
Patent number: 11062502
Abstract: In one embodiment, a method includes accessing a number of pictures of an object, constructing a modeling volume for three-dimensional modeling of the object by processing the number of pictures using a machine-learning framework, where the modeling volume is associated with a number of color and opacity information that are associated with a number of regions in the modeling volume, and rendering an image of the object from a view-point using the modeling volume, where each pixel of the image is rendered by projecting a virtual ray from the view-point and through the modeling volume, determining one or more of the number of regions in the modeling volume intersected by the virtual ray, and determining a color and an opacity of the pixel based on an accumulation of the color and opacity information associated with the one or more of the number of regions intersected by the virtual ray.
Type: Grant
Filed: April 9, 2019
Date of Patent: July 13, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jason Saragih, Stephen Anthony Lombardi, Tomas Simon Kreuz, Gabriel Bailowitz Schwartz
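The per-pixel accumulation of color and opacity along a virtual ray described in this abstract is, in its simplest form, the classic front-to-back alpha-compositing rule from volume rendering. A minimal sketch, assuming per-sample colors and opacities have already been queried from the modeling volume at points along the ray (the patent's actual accumulation may differ):

```python
import numpy as np

def composite_ray(colors, opacities):
    """Front-to-back alpha compositing of per-sample colors (N, 3) and
    opacities (N,) collected along one virtual ray; returns the
    accumulated pixel color and accumulated alpha."""
    pixel = np.zeros(3)
    transmittance = 1.0                   # fraction of light still unblocked
    for c, a in zip(colors, opacities):
        pixel += transmittance * a * c    # contribution of this sample
        transmittance *= (1.0 - a)        # light absorbed so far
    return pixel, 1.0 - transmittance
```

A fully opaque sample terminates the ray (transmittance drops to zero), so everything behind it contributes nothing — which is exactly the occlusion behavior a renderer needs.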
-
Patent number: 11010951
Abstract: In one embodiment, a system may capture one or more images of a user using one or more cameras, the one or more images depicting at least an eye and a face of the user. The system may determine a direction of a gaze of the user based on the eye depicted in the one or more images. The system may generate a facial mesh based on depth measurements of one or more features of the face depicted in the one or more images. The system may generate an eyeball texture for an eyeball mesh by processing the direction of the gaze and the facial mesh using a machine-learning model. The system may render an avatar of the user based on the eyeball mesh, the eyeball texture, the facial mesh, and a facial texture.
Type: Grant
Filed: January 9, 2020
Date of Patent: May 18, 2021
Assignee: Facebook Technologies, LLC
Inventors: Gabriel Bailowitz Schwartz, Jason Saragih, Tomas Simon Kreuz, Shih-En Wei, Stephen Anthony Lombardi
-
Patent number: 10885693
Abstract: In one embodiment, a computing system may access a plurality of first captured images that are captured in a first spectral domain, generate, using a first machine-learning model, a plurality of first domain-transferred images based on the first captured images, wherein the first domain-transferred images are in a second spectral domain, render, based on a first avatar, a plurality of first rendered images comprising views of the first avatar, and update the first machine-learning model based on comparisons between the first domain-transferred images and the first rendered images, wherein the first machine-learning model is configured to translate images in the first spectral domain to the second spectral domain. The system may also generate, using a second machine-learning model, the first avatar based on the first captured images. The first avatar may be rendered using a parametric face model based on a plurality of avatar parameters.
Type: Grant
Filed: June 21, 2019
Date of Patent: January 5, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jason Saragih, Shih-En Wei
-
Publication number: 20200402284
Abstract: In one embodiment, a computing system may access a plurality of first captured images that are captured in a first spectral domain, generate, using a first machine-learning model, a plurality of first domain-transferred images based on the first captured images, wherein the first domain-transferred images are in a second spectral domain, render, based on a first avatar, a plurality of first rendered images comprising views of the first avatar, and update the first machine-learning model based on comparisons between the first domain-transferred images and the first rendered images, wherein the first machine-learning model is configured to translate images in the first spectral domain to the second spectral domain. The system may also generate, using a second machine-learning model, the first avatar based on the first captured images. The first avatar may be rendered using a parametric face model based on a plurality of avatar parameters.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Jason Saragih, Shih-En Wei