Patents by Inventor Shih-En Wei
Shih-En Wei has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240061499
Abstract: A method for updating a gaze direction for a transmitter avatar in a receiver headset is provided. The method includes verifying, in a receiver device, that a visual tracking of a transmitter avatar is active in a transmitter device, and adjusting, in the receiver device, a gaze direction of the transmitter avatar to a fixation point. Adjusting the gaze direction of the transmitter avatar comprises estimating a coordinate of the fixation point in a receiver frame at a later time, and rotating, in the receiver device, two eyeballs of the transmitter avatar to point in a direction of the fixation point. A headset, a memory in the headset storing instructions, and a processor configured to execute the instructions to perform the above method are also provided.
Type: Application
Filed: December 16, 2022
Publication date: February 22, 2024
Inventors: Jason Saragih, Tomas Simon Kreuz, Shunsuke Saito, Stephen Anthony Lombardi, Gabriel Bailowitz Schwartz, Shih-En Wei
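The core adjustment step the abstract describes, rotating both avatar eyeballs toward an estimated fixation point, can be sketched roughly as follows. This is a minimal illustration, not the patented implementation: the function name, the fallback direction, and the toy coordinates are all assumptions.

```python
import numpy as np

def gaze_direction(eyeball_center, fixation_point):
    """Hypothetical helper: unit gaze direction from an avatar eyeball
    center toward an estimated fixation point in the receiver frame."""
    direction = np.asarray(fixation_point, float) - np.asarray(eyeball_center, float)
    norm = np.linalg.norm(direction)
    if norm == 0.0:
        return np.array([0.0, 0.0, 1.0])  # degenerate case: look straight ahead
    return direction / norm

# Rotate both eyeballs of the transmitter avatar toward one fixation point.
left_eye = np.array([-0.03, 0.0, 0.0])    # toy eyeball centers (meters)
right_eye = np.array([0.03, 0.0, 0.0])
fixation = np.array([0.0, 0.0, 1.0])      # estimated fixation point
left_gaze = gaze_direction(left_eye, fixation)
right_gaze = gaze_direction(right_eye, fixation)
```

Because each eyeball center differs slightly, the two unit vectors converge on the fixation point rather than staying parallel, which is what produces a natural-looking vergence on the receiver side.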
-
Publication number: 20230326112
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Application
Filed: June 13, 2023
Publication date: October 12, 2023
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
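The idea of combining an expression-dependent and a view-dependent texture map and then lighting the result can be sketched with a toy fixed blend. In the patent family the combination is learned, not a constant mix; the function, the blend weight, and the light color here are illustrative assumptions only.

```python
import numpy as np

def relit_texture(expr_tex, view_tex, light_rgb, blend=0.5):
    """Hypothetical sketch: mix an expression-dependent and a
    view-dependent texture map, then modulate by a light source
    selected from the environment. The real model is learned."""
    base = blend * expr_tex + (1.0 - blend) * view_tex
    return np.clip(base * light_rgb, 0.0, 1.0)

expr = np.full((4, 4, 3), 0.6)           # toy expression-dependent texture
view = np.full((4, 4, 3), 0.4)           # toy view-dependent texture
warm_light = np.array([1.0, 0.9, 0.7])   # light picked from the environment
out = relit_texture(expr, view, warm_light)
```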
-
Publication number: 20230245365
Abstract: A method for generating a subject avatar using a mobile phone scan is provided. The method includes receiving, from a mobile device, multiple images of a first subject, extracting multiple image features from the images of the first subject based on a set of learnable weights, inferring a three-dimensional model of the first subject from the image features and an existing three-dimensional model of a second subject, animating the three-dimensional model of the first subject based on an immersive reality application running on a headset used by a viewer, and providing, to a display on the headset, an image of the three-dimensional model of the first subject. A system and a non-transitory, computer-readable medium storing instructions to perform the above method are also provided.
Type: Application
Filed: December 2, 2022
Publication date: August 3, 2023
Inventors: Chen Cao, Stuart Anderson, Tomas Simon Kreuz, Jin Kyu Kim, Gabriel Bailowitz Schwartz, Michael Zollhoefer, Shunsuke Saito, Stephen Anthony Lombardi, Shih-En Wei, Danielle Belko, Shoou-I Yu, Yaser Sheikh, Jason Saragih
-
Patent number: 11715248
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Grant
Filed: January 20, 2022
Date of Patent: August 1, 2023
Assignee: Meta Platforms Technologies, LLC
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
-
Publication number: 20220237843
Abstract: A method for providing a relightable avatar of a subject to a virtual reality application is provided. The method includes retrieving multiple images including multiple views of a subject and generating an expression-dependent texture map and a view-dependent texture map for the subject, based on the images. The method also includes generating, based on the expression-dependent texture map and the view-dependent texture map, a view of the subject illuminated by a light source selected from an environment in an immersive reality application, and providing the view of the subject to an immersive reality application running in a client device. A non-transitory, computer-readable medium storing instructions and a system that executes the instructions to perform the above method are also provided.
Type: Application
Filed: January 20, 2022
Publication date: July 28, 2022
Inventors: Jason Saragih, Stephen Anthony Lombardi, Shunsuke Saito, Tomas Simon Kreuz, Shih-En Wei, Kevyn Alex Anthony McPhail, Yaser Sheikh, Sai Bi
-
Publication number: 20220207831
Abstract: A method for simulating a solid body animation of a subject includes retrieving a first frame that includes a body image of a subject. The method also includes selecting, from the first frame, multiple key points within the body image of the subject that define a hull of a body part and multiple joint points that define a joint between two body parts, identifying a geometry, a speed, and a mass of the body part to include in a dynamic model of the subject, based on the key points and the joint points, determining, based on the dynamic model of the subject, a pose of the subject in a second frame after the first frame in a video stream, and providing the video stream to an immersive reality application running on a client device.
Type: Application
Filed: December 20, 2021
Publication date: June 30, 2022
Inventors: Jason Saragih, Shih-En Wei, Tomas Simon Kreuz, Kris Makoto Kitani, Ye Yuan
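The step of predicting a pose in the next frame from a dynamic model can be sketched, in its simplest possible form, as one Euler integration of key-point positions. This is only an illustration under stated assumptions: the patent's dynamic model also uses geometry and mass, which are omitted here, and the names and values are invented.

```python
import numpy as np

def predict_next_pose(key_points, velocities, dt=1.0 / 30.0):
    """Hypothetical one-step Euler prediction of body key points for the
    second frame from per-point velocity estimates (geometry and mass
    terms of the full dynamic model are omitted)."""
    return np.asarray(key_points, float) + dt * np.asarray(velocities, float)

# One elbow key point moving right at 0.3 m/s between two 30 fps frames.
points = np.array([[0.1, 1.2, 0.0]])
vels = np.array([[0.3, 0.0, 0.0]])
next_points = predict_next_pose(points, vels)
```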
-
Patent number: 11010951
Abstract: In one embodiment, a system may capture one or more images of a user using one or more cameras, the one or more images depicting at least an eye and a face of the user. The system may determine a direction of a gaze of the user based on the eye depicted in the one or more images. The system may generate a facial mesh based on depth measurements of one or more features of the face depicted in the one or more images. The system may generate an eyeball texture for an eyeball mesh by processing the direction of the gaze and the facial mesh using a machine-learning model. The system may render an avatar of the user based on the eyeball mesh, the eyeball texture, the facial mesh, and a facial texture.
Type: Grant
Filed: January 9, 2020
Date of Patent: May 18, 2021
Assignee: Facebook Technologies, LLC
Inventors: Gabriel Bailowitz Schwartz, Jason Saragih, Tomas Simon Kreuz, Shih-En Wei, Stephen Anthony Lombardi
-
Patent number: 10885693
Abstract: In one embodiment, a computing system may access a plurality of first captured images that are captured in a first spectral domain, generate, using a first machine-learning model, a plurality of first domain-transferred images based on the first captured images, wherein the first domain-transferred images are in a second spectral domain, render, based on a first avatar, a plurality of first rendered images comprising views of the first avatar, and update the first machine-learning model based on comparisons between the first domain-transferred images and the first rendered images, wherein the first machine-learning model is configured to translate images in the first spectral domain to the second spectral domain. The system may also generate, using a second machine-learning model, the first avatar based on the first captured images. The first avatar may be rendered using a parametric face model based on a plurality of avatar parameters.
Type: Grant
Filed: June 21, 2019
Date of Patent: January 5, 2021
Assignee: Facebook Technologies, LLC
Inventors: Jason Saragih, Shih-En Wei
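The training loop the abstract outlines, updating a transfer model by comparing domain-transferred images against rendered avatar images, can be sketched with a deliberately tiny stand-in. Here the "first machine-learning model" is reduced to a linear map over pixels and the update is a plain least-squares gradient step; both are assumptions for illustration, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the first machine-learning model: a linear map from
# pixels in the first spectral domain (e.g. IR) to the second (e.g. RGB).
W = rng.normal(size=(3, 3)) * 0.1

def train_step(W, captured, rendered, lr=0.1):
    """One gradient step that updates the transfer model based on the
    comparison between domain-transferred pixels and rendered-avatar
    pixels (illustrative least-squares sketch)."""
    transferred = captured @ W.T            # domain-transferred output
    err = transferred - rendered            # comparison with the renders
    grad = err.T @ captured / len(captured)
    return W - lr * grad, float(np.mean(err ** 2))

captured = rng.normal(size=(64, 3))         # flattened first-domain pixels
rendered = 0.5 * captured                   # pretend second-domain renders

W, first_loss = train_step(W, captured, rendered)
for _ in range(300):
    W, loss = train_step(W, captured, rendered)
```

After the loop the transferred pixels closely match the renders, which is the sense in which the comparison drives the model toward translating the first spectral domain into the second.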
-
Publication number: 20200402284
Abstract: In one embodiment, a computing system may access a plurality of first captured images that are captured in a first spectral domain, generate, using a first machine-learning model, a plurality of first domain-transferred images based on the first captured images, wherein the first domain-transferred images are in a second spectral domain, render, based on a first avatar, a plurality of first rendered images comprising views of the first avatar, and update the first machine-learning model based on comparisons between the first domain-transferred images and the first rendered images, wherein the first machine-learning model is configured to translate images in the first spectral domain to the second spectral domain. The system may also generate, using a second machine-learning model, the first avatar based on the first captured images. The first avatar may be rendered using a parametric face model based on a plurality of avatar parameters.
Type: Application
Filed: June 21, 2019
Publication date: December 24, 2020
Inventors: Jason Saragih, Shih-En Wei
-
Patent number: 10795436
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within the HMD to capture portions of a face of a user wearing the HMD. Images from an image capture device include a user's eye, while additional images from another image capture device include the user's other eye. The images and the additional images are provided to a controller, which applies a trained model to the images and the additional images to generate a vector identifying a position of the user's head and positions of the user's eye and fixation of each of the user's eyes. Additionally, illumination sources illuminating portions of the user's face included in the images and in the additional images are configured when the user wears the HMD to prevent over-saturation or under-saturation of the images and the additional images.
Type: Grant
Filed: November 13, 2019
Date of Patent: October 6, 2020
Assignee: Facebook Technologies, LLC
Inventors: Shih-En Wei, Jason Saragih, Hernan Badino, Alexander Trenor Hypes, Mohsen Shahmohammadi, Dawei Wang, Michal Perdoch
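The configuration step at the end of the abstract, adjusting illumination sources so the eye images are neither over- nor under-saturated, amounts to a small exposure-control loop. A hedged sketch follows; the thresholds, step size, and function name are illustrative assumptions, not values from the patent.

```python
def adjust_illumination(power, mean_intensity, low=60, high=190, step=0.1):
    """Hypothetical control step: nudge an illumination source's power
    (0..1) so captured eye images stay between under- and
    over-saturation thresholds on a 0..255 intensity scale."""
    if mean_intensity > high:            # over-saturated: dim the source
        return max(0.0, power - step)
    if mean_intensity < low:             # under-saturated: brighten it
        return min(1.0, power + step)
    return power                         # exposure already acceptable

# Example: a too-bright frame (mean pixel value 220) dims the LED.
new_power = adjust_illumination(power=0.8, mean_intensity=220)
```

Run once per captured frame, such a loop settles each source at a power where the corresponding eye region stays inside the usable intensity band.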
-
Publication number: 20200201430
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within the HMD to capture portions of a face of a user wearing the HMD. Images from an image capture device include a user's eye, while additional images from another image capture device include the user's other eye. The images and the additional images are provided to a controller, which applies a trained model to the images and the additional images to generate a vector identifying a position of the user's head and positions of the user's eye and fixation of each of the user's eyes. Additionally, illumination sources illuminating portions of the user's face included in the images and in the additional images are configured when the user wears the HMD to prevent over-saturation or under-saturation of the images and the additional images.
Type: Application
Filed: November 13, 2019
Publication date: June 25, 2020
Inventors: Shih-En Wei, Jason Saragih, Hernan Badino, Alexander Trenor Hypes, Mohsen Shahmohammadi, Dawei Wang, Michal Perdoch
-
Patent number: 10636192
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various facial sensors, such as cameras, that capture images of portions of the user's face outside of the HMD. For example, multiple facial sensors capture images of a portion of the user's face below the HMD. Through image analysis, points of the portion of the user's face are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face. Additionally, a parametric representation of the user's face is determined for each captured image, resulting in various representations indicating the user's facial expressions. From the parametric representations and transforms mapping the captured images to three dimensions, a rendering model is applied to the three dimensional model of the face to render the user's facial expressions.
Type: Grant
Filed: June 29, 2018
Date of Patent: April 28, 2020
Assignee: Facebook Technologies, LLC
Inventors: Jason Saragih, Hernan Badino, Shih-En Wei
-
Patent number: 10636193
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various image capture devices that capture images of portions of the user's face and body. Through image analysis, points of each portion of the user's face and body are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face and to a three dimensional model of a body. From the identified points, animation parameters describing positioning of various points of the user's face and body are determined for each captured image. From the animation parameters and transforms mapping the captured images to three dimensions, the three dimensional model of the face and the three dimensional model of the body are altered to render movement of the user's face and body.
Type: Grant
Filed: June 29, 2018
Date of Patent: April 28, 2020
Assignee: Facebook Technologies, LLC
Inventors: Yaser Sheikh, Hernan Badino, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch, Jason Saragih, Shih-En Wei
-
Patent number: 10529113
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes various image capture devices that capture images of portions of the user's face. Through image analysis, points of each portion of the user's face are identified from the images and their movement is tracked. The identified points are mapped to a three dimensional model of a face. From the identified points, a blendshape vector is determined for each captured image, resulting in various vectors indicating the user's facial expressions. A direct expression model that directly maps images to blendshape coefficients for a set of facial expressions based on captured information from a set of users may augment the blendshape vector in various embodiments. From the blendshape vectors and transforms mapping the captured images to three dimensions, the three dimensional model of the face is altered to render the user's facial expressions.
Type: Grant
Filed: January 4, 2019
Date of Patent: January 7, 2020
Assignee: Facebook Technologies, LLC
Inventors: Yaser Sheikh, Hernan Badino, Jason Saragih, Shih-En Wei, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch
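The blendshape vector the abstract refers to drives the standard linear blendshape formula: the deformed face is the base mesh plus a coefficient-weighted sum of per-expression offsets. The formula itself is standard; the toy mesh, offsets, and names below are invented for illustration.

```python
import numpy as np

def apply_blendshapes(base_mesh, deltas, coeffs):
    """Linear blendshape deformation:
    vertices = base + sum_i coeff_i * delta_i."""
    return base_mesh + np.tensordot(coeffs, deltas, axes=1)

base = np.zeros((5, 3))                      # 5-vertex toy face mesh
deltas = np.zeros((2, 5, 3))                 # two expression offset shapes
deltas[0, 0] = [0.0, 0.01, 0.0]              # "smile" lifts vertex 0
deltas[1, 1] = [0.0, -0.02, 0.0]             # "jaw open" drops vertex 1
coeffs = np.array([1.0, 0.5])                # blendshape vector for one frame
mesh = apply_blendshapes(base, deltas, coeffs)
```

Each captured image yields one such coefficient vector, so re-evaluating the formula per frame animates the three dimensional face model with the user's expressions.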
-
Patent number: 10509467
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within the HMD to capture portions of a face of a user wearing the HMD. Images from an image capture device include a user's eye, while additional images from another image capture device include the user's other eye. The images and the additional images are provided to a controller, which applies a trained model to the images and the additional images to generate a vector identifying a position of the user's head and positions of the user's eye and fixation of each of the user's eyes. Additionally, illumination sources illuminating portions of the user's face included in the images and in the additional images are configured when the user wears the HMD to prevent over-saturation or under-saturation of the images and the additional images.
Type: Grant
Filed: June 1, 2018
Date of Patent: December 17, 2019
Assignee: Facebook Technologies, LLC
Inventors: Shih-En Wei, Jason Saragih, Hernan Badino, Alexander Trenor Hypes, Mohsen Shahmohammadi, Dawei Wang, Michal Perdoch
-
Publication number: 20190369718
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within the HMD to capture portions of a face of a user wearing the HMD. Images from an image capture device include a user's eye, while additional images from another image capture device include the user's other eye. The images and the additional images are provided to a controller, which applies a trained model to the images and the additional images to generate a vector identifying a position of the user's head and positions of the user's eye and fixation of each of the user's eyes. Additionally, illumination sources illuminating portions of the user's face included in the images and in the additional images are configured when the user wears the HMD to prevent over-saturation or under-saturation of the images and the additional images.
Type: Application
Filed: June 1, 2018
Publication date: December 5, 2019
Inventors: Shih-En Wei, Jason Saragih, Hernan Badino, Alexander Trenor Hypes, Mohsen Shahmohammadi, Dawei Wang, Michal Perdoch
-
Patent number: 10495882
Abstract: A virtual reality (VR) or augmented reality (AR) head mounted display (HMD) includes multiple image capture devices positioned within and on the HMD to capture portions of a face of a user wearing the HMD. Multiple image capture devices are included within the HMD to capture different portions of the face of the user within the HMD, and one or more other image capture devices are positioned to capture portions of the face of the user external to the HMD. Captured images from various image capture devices may be communicated to a console or a controller that generates a graphical representation of the user's face based on the captured images.
Type: Grant
Filed: June 4, 2018
Date of Patent: December 3, 2019
Assignee: Facebook Technologies, LLC
Inventors: Hernan Badino, Yaser Sheikh, Alexander Trenor Hypes, Dawei Wang, Mohsen Shahmohammadi, Michal Perdoch, Jason Saragih, Shih-En Wei