Patents by Inventor Danhang TANG

Danhang Tang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250124634
    Abstract: A method includes determining a trajectory of an object based on a mass of the object, and determining a motion of a hand based on the mass of the object and the trajectory of the object. The method can further include generating an animation of the hand interacting with the object based on the trajectory of the object and the motion of the hand.
    Type: Application
    Filed: October 16, 2024
    Publication date: April 17, 2025
    Inventors: Thabo Beeler, Jan Bednarík, Bardia Doosti, Bernd Bickel, Danhang Tang, Jonathan James Taylor, Soshi Shimada, Franziska Müller
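The claimed pipeline (mass → object trajectory → hand motion → animation) can be sketched with simple ballistic integration. Everything below — the function names, the impulse model, the release-step heuristic — is an illustrative assumption, not the patented method:

```python
import numpy as np

def object_trajectory(mass_kg, throw_force_n, p0, dt=0.01, steps=100):
    """Determine a trajectory from the object's mass: the same throw
    force accelerates a heavier object less (a = F/m), then gravity
    takes over after release."""
    g = np.array([0.0, -9.81, 0.0])
    v = np.asarray(throw_force_n, float) / mass_kg * dt  # impulse at release
    pos = [np.asarray(p0, float)]
    for _ in range(steps):
        v = v + g * dt
        pos.append(pos[-1] + v * dt)
    return np.stack(pos)

def hand_motion(trajectory, release_step=30):
    """Determine the hand's motion from the trajectory: the hand carries
    the object to the release point, then holds its pose."""
    hand = trajectory.copy()
    hand[release_step:] = trajectory[release_step]
    return hand

def animate(trajectory, hand):
    """Pair object and hand positions per frame for an animation."""
    return list(zip(trajectory, hand))
```

Note how mass enters twice, as in the claim: it shapes the trajectory (a lighter object flies farther under the same force) and, through the trajectory, the hand's motion.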
  • Publication number: 20250045968
    Abstract: Nonlinear peri-codec optimization for image and video coding includes obtaining a source image with pixel values expressed in a first defined image sample space, then generating a neuralized image that represents the source image and whose pixel values are expressed as neural latent space values. The input image is encoded, with the neural latent space values used as pixel values in a second defined image sample space and the input image in an operative image format of the encoder. A decoder decodes the encoded image to obtain a reconstructed image in the second defined image sample space; this reconstructed image is a reconstructed neuralized image containing reconstructed neural latent space values. A nonlinear post-codec image processor then obtains a deneuralized reconstructed image, corresponding to the source image, in the first defined image sample space.
    Type: Application
    Filed: June 16, 2021
    Publication date: February 6, 2025
    Inventors: Onur G. Guleryuz, Ruofei Du, Hugues H. Hoppe, Sean Ryan Francesco Fanello, Philip Andrew Chou, Danhang Tang, Philip Davidson
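A toy version of the peri-codec idea: map pixels into a latent space before a standard codec runs and invert afterwards. The learned neuralizer/deneuralizer networks are replaced here by an invertible linear (YCbCr-like) map and the codec by 8-bit quantization — all illustrative assumptions, not the patented system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the learned neuralizer/deneuralizer: an invertible
# linear map per pixel; a real system would use trained nonlinear networks.
W = np.array([[ 0.299,  0.587,  0.114],
              [-0.169, -0.331,  0.500],
              [ 0.500, -0.419, -0.081]])
W_inv = np.linalg.inv(W)

def neuralize(img):
    """First sample space -> neural latent space, per pixel."""
    return img @ W.T

def simulated_codec(latent):
    """Encoder + decoder operating in the second sample space, reduced
    here to 8-bit quantization of the latent values."""
    return np.clip(np.round(latent * 255.0), -255, 255) / 255.0

def deneuralize(latent):
    """Nonlinear post-codec processor (linear in this toy)."""
    return latent @ W_inv.T

img = rng.random((4, 4, 3))   # source image in the first sample space
recon = deneuralize(simulated_codec(neuralize(img)))
```

The codec itself never sees the first sample space; it compresses latent values as if they were ordinary pixels, which is the point of the peri-codec arrangement.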
  • Publication number: 20240303908
    Abstract: A method includes generating a first vector based on a first grid and a three-dimensional (3D) position associated with a first implicit representation (IR) of a 3D object, and generating at least one second vector based on at least one second grid and an upsampled first grid. The first vector is decoded to generate a second IR of the 3D object, and the at least one second vector is decoded to generate at least one third IR of the 3D object. A composite IR of the 3D object is generated based on the second IR and the at least one third IR, and a reconstructed volume representing the 3D object is generated based on the composite IR.
    Type: Application
    Filed: April 30, 2021
    Publication date: September 12, 2024
    Inventors: Yinda Zhang, Danhang Tang, Ruofei Du, Zhang Chen, Kyle Genova, Sofien Bouaziz, Thomas Allen Funkhouser, Sean Ryan Francesco Fanello, Christian Haene
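One reading of this multi-grid scheme: sample a coarse feature grid and a finer residual grid at a 3D query point, decode each sample, and composite the results into one implicit value. The random linear decoders below stand in for learned networks; all names and shapes are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def trilinear_sample(grid, p):
    """Sample a feature grid of shape (D, D, D, C) at a continuous
    position p in [0, 1]^3 via trilinear interpolation."""
    D = grid.shape[0]
    x = np.clip(np.asarray(p, float) * (D - 1), 0, D - 1 - 1e-6)
    i = x.astype(int)
    f = x - i
    out = np.zeros(grid.shape[-1])
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                out += w * grid[i[0] + dx, i[1] + dy, i[2] + dz]
    return out

coarse = rng.standard_normal((4, 4, 4, 8))        # first grid
fine   = rng.standard_normal((8, 8, 8, 8)) * 0.1  # finer residual grid
dec_c  = rng.standard_normal(8)                   # stand-in decoders
dec_f  = rng.standard_normal(8)

def composite_sdf(p):
    v_coarse = trilinear_sample(coarse, p) @ dec_c  # second IR value
    v_detail = trilinear_sample(fine, p) @ dec_f    # third IR value
    return v_coarse + v_detail                      # composite IR
```

The coarse grid carries the overall shape and the finer grid adds detail, so a reconstructed volume can be built by evaluating `composite_sdf` over a dense set of query points.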
  • Publication number: 20240290025
    Abstract: A method comprises receiving a first sequence of images of a portion of a user, the first sequence of images being monocular images; generating an avatar based on the first sequence of images, the avatar being based on a model including a feature vector associated with a vertex; receiving a second sequence of images of the portion of the user; and based on the second sequence of images, modifying the avatar with a displacement of the vertex to represent a gesture of the avatar.
    Type: Application
    Filed: February 27, 2024
    Publication date: August 29, 2024
    Inventors: Yinda Zhang, Sean Ryan Francesco Fanello, Ziqian Bai, Feitong Tan, Zeng Huang, Kripasindhu Sarkar, Danhang Tang, Di Qiu, Abhimitra Meka, Ruofei Du, Mingsong Dou, Sergio Orts Escolano, Rohit Kumar Pandey, Thabo Beeler
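The vertex-displacement step can be sketched as: concatenate each vertex's feature vector with a gesture code derived from the second image sequence, and map the result to a 3D offset added to the base mesh. The linear map below stands in for the learned displacement decoder; every name and dimension here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

def deform_avatar(base_verts, vert_features, gesture_code, W):
    """Per-vertex displacement: each vertex feature, conditioned on a
    gesture code, maps to a 3D offset added to the base avatar mesh."""
    n = len(base_verts)
    cond = np.hstack([vert_features,
                      np.tile(gesture_code, (n, 1))])  # (n, F + G)
    return base_verts + cond @ W                       # (n, 3)

verts = rng.random((100, 3))             # base avatar vertices
feats = rng.random((100, 16))            # per-vertex feature vectors
gesture = rng.random(8)                  # e.g. a smile code
W = rng.standard_normal((24, 3)) * 0.01  # stand-in displacement decoder
deformed = deform_avatar(verts, feats, gesture, W)
```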
  • Patent number: 12066282
    Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of an object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: August 20, 2024
    Assignee: GOOGLE LLC
    Inventors: Sean Ryan Francesco Fanello, Kaiwen Guo, Peter Christopher Lincoln, Philip Lindsley Davidson, Jessica L. Busch, Xueming Yu, Geoffrey Harvey, Sergio Orts Escolano, Rohit Kumar Pandey, Jason Dourgarian, Danhang Tang, Adarsh Prakash Murthy Kowdle, Emily B. Cooper, Mingsong Dou, Graham Fyffe, Christoph Rhemann, Jonathan James Taylor, Shahram Izadi, Paul Ernest Debevec
  • Publication number: 20240212184
    Abstract: A method includes predicting a stereo depth associated with a first panoramic image and a second panoramic image, the two images being captured with a time interval between them. A first mesh representation is generated based on the first panoramic image and its corresponding stereo depth, and a second mesh representation is generated based on the second panoramic image and its corresponding stereo depth. A third panoramic image is then synthesized based on fusing the first mesh representation with the second mesh representation.
    Type: Application
    Filed: April 30, 2021
    Publication date: June 27, 2024
    Inventors: Ruofei Du, David Li, Danhang Tang, Yinda Zhang
  • Publication number: 20240212325
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 6, 2024
    Publication date: June 27, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
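The geodesic loss described in this family of filings can be sketched by precomputing graph-geodesic distances over the mesh surface (Dijkstra on edge lengths) and penalizing each predicted correspondence by how far it lands, along the surface, from the ground-truth vertex. A minimal sketch; function names are illustrative, not from the patent:

```python
import heapq
import numpy as np

def geodesic_distances(n_verts, edges, src):
    """Dijkstra over the mesh edge graph: graph-geodesic distance from
    vertex `src` to every vertex. `edges` holds (a, b, length) tuples."""
    adj = [[] for _ in range(n_verts)]
    for a, b, w in edges:
        adj[a].append((b, w))
        adj[b].append((a, w))
    dist = [float("inf")] * n_verts
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return np.array(dist)

def geodesic_loss(pred_verts, true_verts, geo):
    """Training loss: mean geodesic distance between each predicted
    correspondence vertex and its ground-truth vertex."""
    return float(np.mean([geo[t][p] for p, t in zip(pred_verts, true_verts)]))
```

Unlike a Euclidean penalty, this loss treats a prediction on the wrong finger as far from the truth even when the two points are close in 3D space, which is why surface distance is the natural supervision signal for dense human correspondences.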
  • Patent number: 11954899
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Grant
    Filed: March 11, 2021
    Date of Patent: April 9, 2024
    Assignee: GOOGLE LLC
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Publication number: 20240046618
    Abstract: Systems and methods for training models to predict dense correspondences across images such as human images. A model may be trained using synthetic training data created from one or more 3D computer models of a subject. In addition, one or more geodesic distances derived from the surfaces of one or more of the 3D models may be used to generate one or more loss values, which may in turn be used in modifying the model's parameters during training.
    Type: Application
    Filed: March 11, 2021
    Publication date: February 8, 2024
    Inventors: Yinda Zhang, Feitong Tan, Danhang Tang, Mingsong Dou, Kaiwen Guo, Sean Ryan Francesco Fanello, Sofien Bouaziz, Cem Keskin, Ruofei Du, Rohit Kumar Pandey, Deqing Sun
  • Publication number: 20240020915
    Abstract: Techniques include introducing a neural generator configured to produce novel faces that can be rendered at free camera viewpoints (e.g., at any angle with respect to the camera) and relit under an arbitrary high dynamic range (HDR) light map. A neural implicit intrinsic field takes a randomly sampled latent vector as input and produces as output per-point albedo, volume density, and reflectance properties for any queried 3D location. These outputs are aggregated via a volumetric rendering to produce low resolution albedo, diffuse shading, specular shading, and neural feature maps. The low resolution maps are then upsampled to produce high resolution maps and input into a neural renderer to produce relit images.
    Type: Application
    Filed: July 17, 2023
    Publication date: January 18, 2024
    Inventors: Yinda Zhang, Feitong Tan, Sean Ryan Francesco Fanello, Abhimitra Meka, Sergio Orts Escolano, Danhang Tang, Rohit Kumar Pandey, Jonathan James Taylor
  • Patent number: 11868583
    Abstract: Systems and methods are provided in which physical objects in the ambient environment can function as user interface implements in an augmented reality environment. A physical object detected within a field of view of a camera of a computing device may be designated as a user interface implement in response to a user command. User interfaces may be attached to the designated physical object, to provide a tangible user interface implement for user interaction with the augmented reality environment.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: January 9, 2024
    Assignee: Google LLC
    Inventors: Ruofei Du, Alex Olwal, Mathieu Simon Le Goc, David Kim, Danhang Tang
  • Publication number: 20230305672
    Abstract: Systems and methods are provided in which physical objects in the ambient environment can function as user interface implements in an augmented reality environment. A physical object detected within a field of view of a camera of a computing device may be designated as a user interface implement in response to a user command. User interfaces may be attached to the designated physical object, to provide a tangible user interface implement for user interaction with the augmented reality environment.
    Type: Application
    Filed: March 28, 2022
    Publication date: September 28, 2023
    Inventors: Ruofei Du, Alex Olwal, Mathieu Simon Le Goc, David Kim, Danhang Tang
  • Publication number: 20230154051
    Abstract: Systems and methods are directed to encoding and/or decoding of the textures/geometry of a three-dimensional volumetric representation. An encoding computing system can obtain voxel blocks from a three-dimensional volumetric representation of an object. The encoding computing system can encode voxel blocks with a machine-learned voxel encoding model to obtain encoded voxel blocks. The encoding computing system can decode the encoded voxel blocks with a machine-learned voxel decoding model to obtain reconstructed voxel blocks. The encoding computing system can generate a reconstructed mesh representation of the object based at least in part on the one or more reconstructed voxel blocks. The encoding computing system can encode textures associated with the voxel blocks according to an encoding scheme and based at least in part on the reconstructed mesh representation of the object to obtain encoded textures.
    Type: Application
    Filed: April 17, 2020
    Publication date: May 18, 2023
    Inventors: Danhang Tang, Saurabh Singh, Cem Keskin, Phillip Andrew Chou, Christian Haene, Mingsong Dou, Sean Ryan Francesco Fanello, Jonathan Taylor, Andrea Tagliasacchi, Philip Lindsley Davidson, Yinda Zhang, Onur Gonen Guleryuz, Shahram Izadi, Sofien Bouaziz
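The encode/decode round trip for voxel blocks can be illustrated with simple stand-ins for the machine-learned models: mean-pooling plus quantization as the "encoder" and nearest-neighbor upsampling as the "decoder". This is only a shape-compatible sketch of the pipeline, not the learned codec in the filing:

```python
import numpy as np

def encode_block(block):
    """Stand-in for the learned voxel encoder: 2x mean-pool an 8^3
    occupancy block to a 4^3 code and quantize to 8 bits."""
    pooled = block.reshape(4, 2, 4, 2, 4, 2).mean(axis=(1, 3, 5))
    return np.round(pooled * 255).astype(np.uint8)

def decode_block(code):
    """Stand-in for the learned decoder: dequantize, nearest-neighbor
    upsample back to 8^3, and threshold to occupancy."""
    b = code.astype(float) / 255.0
    up = b.repeat(2, 0).repeat(2, 1).repeat(2, 2)
    return (up > 0.5).astype(float)

rng = np.random.default_rng(2)
block = (rng.random((8, 8, 8)) > 0.5).astype(float)
recon = decode_block(encode_block(block))
```

The key structural point survives even in this toy: textures are encoded against the *reconstructed* geometry (here `recon`, not `block`), so the texture coder sees exactly what the decoder side will see.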
  • Publication number: 20220065620
    Abstract: A lighting stage includes a plurality of lights that project alternating spherical color gradient illumination patterns onto an object or human performer at a predetermined frequency. The lighting stage also includes a plurality of cameras that capture images of an object or human performer corresponding to the alternating spherical color gradient illumination patterns. The lighting stage also includes a plurality of depth sensors that capture depth maps of the object or human performer at the predetermined frequency. The lighting stage also includes (or is associated with) one or more processors that implement a machine learning algorithm to produce a three-dimensional (3D) model of the object or human performer. The 3D model includes relighting parameters used to relight the 3D model under different lighting conditions.
    Type: Application
    Filed: November 11, 2020
    Publication date: March 3, 2022
    Inventors: Sean Ryan Francesco Fanello, Kaiwen Guo, Peter Christopher Lincoln, Philip Lindsley Davidson, Jessica L. Busch, Xueming Yu, Geoffrey Harvey, Sergio Orts Escolano, Rohit Kumar Pandey, Jason Dourgarian, Danhang Tang, Adarsh Prakash Murthy Kowdle, Emily B. Cooper, Mingsong Dou, Graham Fyffe, Christoph Rhemann, Jonathan James Taylor, Shahram Izadi, Paul Ernest Debevec
  • Patent number: 11030773
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: June 8, 2021
    Assignee: Google LLC
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
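The fit-by-energy-minimization idea in this patent family can be shown on a toy problem: minimize the sum of squared signed distances from observed points to a posed shape. A rigid 2D circle stands in for the skinned tetrahedral mesh, and the pose is just a translation — assumptions made purely for illustration:

```python
import numpy as np

def sdf_circle(points, center, radius=1.0):
    """Signed distance from 2D points to a circle (negative inside)."""
    return np.linalg.norm(points - center, axis=1) - radius

def fit_pose(points, init_center, radius=1.0, lr=0.1, iters=200):
    """Minimize the energy E(c) = mean_i sdf(p_i; c)^2 by gradient
    descent, locating the local minimum that corresponds to the pose."""
    c = np.asarray(init_center, float)
    for _ in range(iters):
        d = points - c
        r = np.linalg.norm(d, axis=1) + 1e-9   # avoid divide-by-zero
        sdf = r - radius
        # dE/dc uses d(sdf_i)/dc = -(p_i - c) / |p_i - c|
        grad = (-2.0 * sdf[:, None] * d / r[:, None]).mean(axis=0)
        c = c - lr * grad
    return c
```

In the patented tracker the "pose" is a full articulated hand configuration, the SDF is warped through a skinned tetrahedral mesh, and the points come from a depth image; the energy-descent structure is the same.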
  • Patent number: 10839539
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: November 17, 2020
    Assignee: GOOGLE LLC
    Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim
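The binary-code matching cost can be sketched with a census-style code and Hamming-distance comparison over candidate disparities. The learned binary projection, the temporal patch "cubes", and the decision-tree refinement from the patent are omitted; what remains is the core cost structure, with illustrative names:

```python
import numpy as np

def binary_code(patch):
    """Census-style binarization: each pixel vs. the patch mean. A
    stand-in for the learned binary projection in the patent."""
    return (patch.ravel() > patch.mean()).astype(np.uint8)

def matching_cost(code_a, code_b):
    """Cost between two patches = Hamming distance of their codes."""
    return int(np.count_nonzero(code_a != code_b))

def best_disparity(left, right, y, x, patch=5, max_disp=8):
    """Scan candidate disparities for the left pixel (y, x) and keep
    the one with the lowest binary matching cost."""
    h = patch // 2
    ref = binary_code(left[y - h:y + h + 1, x - h:x + h + 1])
    costs = []
    for d in range(max_disp + 1):
        if x - d - h < 0:
            break
        cand = binary_code(right[y - h:y + h + 1, x - d - h:x - d + h + 1])
        costs.append(matching_cost(ref, cand))
    return int(np.argmin(costs))
```

Because the codes are bit vectors, the per-candidate cost reduces to a popcount, which is what makes this formulation cheap enough for real-time depth estimation.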
  • Publication number: 20200193638
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Application
    Filed: February 24, 2020
    Publication date: June 18, 2020
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
  • Patent number: 10614591
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Grant
    Filed: May 31, 2018
    Date of Patent: April 7, 2020
    Assignee: GOOGLE LLC
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
  • Publication number: 20180350105
    Abstract: An electronic device estimates a pose of a hand by volumetrically deforming a signed distance field using a skinned tetrahedral mesh to locate a local minimum of an energy function, wherein the local minimum corresponds to the hand pose. The electronic device identifies a pose of the hand by fitting an implicit surface model of a hand to the pixels of a depth image that correspond to the hand. The electronic device uses a skinned tetrahedral mesh to warp space from a base pose to a deformed pose to define an articulated signed distance field from which the hand tracking module derives candidate poses of the hand. The electronic device then minimizes an energy function based on the distance of each corresponding pixel to identify the candidate pose that most closely approximates the pose of the hand.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
    Inventors: Jonathan James Taylor, Vladimir Tankovich, Danhang Tang, Cem Keskin, Adarsh Prakash Murthy Kowdle, Philip L. Davidson, Shahram Izadi, David Kim
  • Publication number: 20180350087
    Abstract: An electronic device estimates a depth map of an environment based on stereo depth images captured by depth cameras having exposure times that are offset from each other in conjunction with illuminators pulsing illumination patterns into the environment. A processor of the electronic device matches small sections of the depth images from the cameras to each other and to corresponding patches of immediately preceding depth images (e.g., a spatio-temporal image patch “cube”). The processor computes a matching cost for each spatio-temporal image patch cube by converting each spatio-temporal image patch into binary codes and defining a cost function between two stereo image patches as the difference between the binary codes. The processor minimizes the matching cost to generate a disparity map, and optimizes the disparity map by rejecting outliers using a decision tree with learned pixel offsets and refining subpixels to generate a depth map of the environment.
    Type: Application
    Filed: May 31, 2018
    Publication date: December 6, 2018
    Inventors: Adarsh Prakash Murthy Kowdle, Vladimir Tankovich, Danhang Tang, Cem Keskin, Jonathan James Taylor, Philip L. Davidson, Shahram Izadi, Sean Ryan Fanello, Julien Pascal Christophe Valentin, Christoph Rhemann, Mingsong Dou, Sameh Khamis, David Kim