Patents by Inventor Huixuan Tang

Huixuan Tang has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11740075
    Abstract: A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease the amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on parameters received from a mapping server that includes a virtual model of the local area. The DCA may selectively increase or decrease the amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
    Type: Grant
    Filed: October 18, 2021
    Date of Patent: August 29, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su, Zihe Gao
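    The abstract above does not say how regions of interest are identified; the sketch below illustrates one plausible contrast-based selection, assuming a per-tile standard-deviation metric with an illustrative tile size and threshold (none of these choices come from the patent).

```python
# Hypothetical sketch: flag low-contrast tiles that could benefit from added
# structured-light texture. Tile size, metric, and threshold are assumptions.
import numpy as np

def low_contrast_tiles(image, tile=32, threshold=8.0):
    """Return (row, col) indices of tiles whose intensity standard deviation
    falls below `threshold`."""
    rows, cols = image.shape[0] // tile, image.shape[1] // tile
    flagged = []
    for r in range(rows):
        for c in range(cols):
            patch = image[r * tile:(r + 1) * tile, c * tile:(c + 1) * tile]
            if patch.std() < threshold:
                flagged.append((r, c))
    return flagged

# A flat synthetic frame is flagged everywhere, so the projector would be
# asked to add texture to every tile.
frame = np.full((128, 128), 100.0)
print(len(low_contrast_tiles(frame)))  # 16
```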
  • Patent number: 11703323
    Abstract: A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
    Type: Grant
    Filed: April 14, 2021
    Date of Patent: July 18, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su
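    As a rough illustration of the census-transform comparison described in the preceding abstract, the sketch below uses a single gray channel, a 3×3 census window, and a brute-force disparity search along each scanline; the window size, search range, and cost function are assumptions rather than the patented method.

```python
# Illustrative census-transform stereo matching (single channel, 3x3 window,
# brute-force per-scanline disparity search). Image borders wrap for brevity.
import numpy as np

def census_3x3(img):
    """Encode each pixel as an 8-bit signature of comparisons with its 3x3 neighbours."""
    sig = np.zeros(img.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    for bit, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        sig |= (shifted < img).astype(np.uint8) << bit
    return sig

def hamming(a, b):
    return bin(int(a) ^ int(b)).count("1")

def scanline_disparity(left, right, max_disp=16):
    """Pick, per pixel, the disparity with the lowest census Hamming cost."""
    cl, cr = census_3x3(left), census_3x3(right)
    h, w = left.shape
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(h):                      # one scanline at a time
        for x in range(w):
            costs = [hamming(cl[y, x], cr[y, x - d]) if x - d >= 0 else 255
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```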
  • Patent number: 11587254
    Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
    Type: Grant
    Filed: June 17, 2020
    Date of Patent: February 21, 2023
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
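    The ray mapping described above chains refraction and reflection through every optical surface before the calibration parameters are optimized. As a hedged sketch of one building block only, the function below refracts a ray at a single surface using the vector form of Snell's law; the combiner geometry, surface shapes, and the optimization loop are deliberately omitted.

```python
# Minimal sketch: refract a ray at one optical surface (vector Snell's law).
# A full raycast calibration would chain such interactions through all
# surfaces and fit surface poses/indices to the target-pixel correspondences.
import numpy as np

def refract(direction, normal, n1, n2):
    """Refract a ray going from refractive index n1 into n2 at a surface with
    the given normal. Returns None on total internal reflection."""
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    if cos_i < 0:                 # make the normal face the incident side
        n, cos_i = -n, -cos_i
    eta = n1 / n2
    k = 1.0 - eta ** 2 * (1.0 - cos_i ** 2)
    if k < 0:
        return None               # total internal reflection
    return eta * d + (eta * cos_i - np.sqrt(k)) * n

# A ray hitting glass (n = 1.5) at 45 degrees bends toward the normal.
ray = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
print(refract(ray, np.array([0.0, 1.0, 0.0]), 1.0, 1.5))
```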
  • Publication number: 20220036571
    Abstract: A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease the amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on parameters received from a mapping server that includes a virtual model of the local area. The DCA may selectively increase or decrease the amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
    Type: Application
    Filed: October 18, 2021
    Publication date: February 3, 2022
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su, Zihe Gao
  • Publication number: 20220028103
    Abstract: A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. A scan direction is selected from a plurality of scan directions. For the selected scan direction, along each of a plurality of scanlines, the system compares pixels from the first image to pixels from the second image. The comparison is based on calculating a census transform for each pixel in the first image and a census transform for each pixel in the second image. This comparison is used to determine a stereo correspondence between the pixels in the first image and the pixels in the second image. The system generates a depth map based on the stereo correspondence.
    Type: Application
    Filed: April 14, 2021
    Publication date: January 27, 2022
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su
  • Patent number: 11195291
    Abstract: A depth camera assembly (DCA) optimizes illumination and image capture of a local area to generate depth information of the local area. The DCA determines depth information for a first portion of the local area viewable at a first pose. The DCA is moved from the first pose to a second pose, at which a second portion of the local area is viewable and overlaps with the first portion. The DCA does not illuminate the overlapping region; it illuminates only the non-overlapping part of the second portion, captures images of it, and determines depth information for it.
    Type: Grant
    Filed: February 19, 2020
    Date of Patent: December 7, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Fengqiang Li, Zihe Gao, Michael Hall, Zhaoming Zhu, Shuochen Su, Huixuan Tang, Xinqiao Liu, Nicholas Daniel Trail
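    A toy sketch of the overlap-aware illumination idea is shown below; representing the viewable portions as sets of tile identifiers is purely an illustrative assumption, since the patent derives the overlap from the tracked pose change.

```python
# Hypothetical sketch: skip tiles already covered at the first pose and
# illuminate only the newly visible tiles at the second pose.
def tiles_to_illuminate(covered_at_pose1, visible_at_pose2):
    """Return only the newly visible tiles, which still need illumination."""
    return sorted(set(visible_at_pose2) - set(covered_at_pose1))

print(tiles_to_illuminate({(0, 0), (0, 1), (1, 0)},
                          {(0, 1), (1, 0), (1, 1), (2, 1)}))  # [(1, 1), (2, 1)]
```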
  • Patent number: 11182914
    Abstract: A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease the amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on parameters received from a mapping server that includes a virtual model of the local area. The DCA may selectively increase or decrease the amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
    Type: Grant
    Filed: May 17, 2019
    Date of Patent: November 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su, Zihe Gao
  • Publication number: 20210183102
    Abstract: Raycast-based calibration techniques are described for determining calibration parameters associated with components of a head mounted display (HMD) of an augmented reality (AR) system having one or more off-axis reflective combiners. In an example, a system comprises an image capture device and a processor executing a calibration engine. The calibration engine is configured to determine correspondences between target points and camera pixels based on images of the target acquired through an optical system, the optical system including optical surfaces and an optical combiner. Each optical surface is defined by a difference of optical index on opposing sides of the surface. At least one calibration parameter for the optical system is determined by mapping rays from each camera pixel to each target point via raytracing through the optical system, the raytracing being based on the index differences, shapes, and positions of the optical surfaces relative to the one or more cameras.
    Type: Application
    Filed: June 17, 2020
    Publication date: June 17, 2021
    Inventors: Huixuan Tang, Hauke Malte Strasdat, Qi Guo, Steven John Lovegrove
  • Patent number: 11010911
    Abstract: A depth estimation system is described capable of determining depth information using two images from two cameras. A first camera captures a first image and a second camera captures a second image, both images including a plurality of light channels. In a first light channel of the plurality of light channels, the system calculates a census transform for each pixel of the first image and a census transform for each pixel of the second image. In a second light channel of the plurality of light channels, the system calculates a census transform for each pixel of the first image and a census transform for each pixel of the second image. The system generates a depth map based in part on the census transforms for each pixel of the first image and the second image in the first light channel and in the second light channel.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: May 18, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su
  • Patent number: 10972715
    Abstract: A depth camera assembly (DCA) determines depth information within a local area. The DCA may selectively process, for depth information, a subset of the data obtained from an imaging sensor, such as pixels corresponding to a region of interest. Alternatively, the DCA may limit retrieval of data from the imaging sensor to pixels corresponding to the region of interest. The depth processing may include a semi-global matching (SGM) algorithm, and the DCA adjusts the number of neighboring pixels used for determining depth information for a specific pixel based on one or more criteria. In some embodiments, the DCA performs the depth processing by analyzing images from different image sensors using left-to-right and right-to-left correspondence checks performed in parallel.
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: April 6, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su
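    For context on the semi-global matching step mentioned in the abstract above, the sketch below aggregates a matching-cost volume along a single left-to-right path. The penalties P1/P2 and the random cost volume are illustrative, and the region-of-interest restriction and adjustable neighbor count described in the patent are not modeled.

```python
# Illustrative SGM cost aggregation along one path (left to right).
import numpy as np

def aggregate_left_to_right(cost, p1=10.0, p2=120.0):
    """cost: (H, W, D) matching-cost volume. Returns the volume aggregated
    along the left-to-right scan direction."""
    h, w, d = cost.shape
    agg = cost.astype(np.float64).copy()
    for x in range(1, w):
        prev = agg[:, x - 1, :]                      # (H, D) costs at x-1
        prev_min = prev.min(axis=1, keepdims=True)   # (H, 1)
        same = prev                                  # same disparity
        plus = np.pad(prev[:, :-1], ((0, 0), (1, 0)), constant_values=np.inf) + p1
        minus = np.pad(prev[:, 1:], ((0, 0), (0, 1)), constant_values=np.inf) + p1
        best = np.minimum(np.minimum(same, plus),
                          np.minimum(minus, prev_min + p2))
        agg[:, x, :] = cost[:, x, :] + best - prev_min
    return agg

# Winner-take-all disparities from the aggregated costs of a random volume.
disparity = np.argmin(aggregate_left_to_right(np.random.rand(4, 8, 16)), axis=2)
```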
  • Patent number: 10929997
    Abstract: A depth camera assembly (DCA) determines depth information within a local area by capturing images of the local area, including a local region, using a plurality of imaging sensors. The local region is represented by a first set of pixels in each captured image. For each image, the DCA identifies the first set of pixels corresponding to a surface in the local region and determines a depth measurement from the DCA to the local region by comparing the first sets of pixels from images captured by different imaging sensors. To determine depth measurements for second sets of pixels neighboring the first set of pixels, the DCA selectively propagates depth information from the first set of pixels to second sets of pixels satisfying one or more criteria (e.g., satisfying a threshold saturation measurement or a threshold contrast measurement).
    Type: Grant
    Filed: May 21, 2019
    Date of Patent: February 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su, Zihe Gao
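    The selective propagation described above can be pictured as a flood fill that copies a seed depth into neighboring pixels passing simple checks; the saturation and contrast thresholds in the sketch below are assumptions, not values from the patent.

```python
# Illustrative propagation of depth from seed pixels to 4-connected
# neighbours that are neither saturated nor high-contrast.
import numpy as np
from collections import deque

def propagate_depth(depth, image, seeds, sat_limit=250.0, contrast_limit=5.0):
    """Flood-fill depth outward from `seeds` (list of (y, x) with known depth)
    into neighbours that satisfy the criteria."""
    h, w = image.shape
    visited = np.zeros((h, w), dtype=bool)
    for y, x in seeds:
        visited[y, x] = True
    queue = deque(seeds)
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and not visited[ny, nx]:
                patch = image[max(ny - 1, 0):ny + 2, max(nx - 1, 0):nx + 2]
                if image[ny, nx] < sat_limit and patch.std() < contrast_limit:
                    depth[ny, nx] = depth[y, x]   # copy the measurement over
                    visited[ny, nx] = True
                    queue.append((ny, nx))
    return depth

# On a flat, unsaturated image the seed depth spreads to every pixel.
img, dep = np.full((5, 5), 10.0), np.zeros((5, 5))
dep[2, 2] = 1.5
print(propagate_depth(dep, img, [(2, 2)])[0, 0])  # 1.5
```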
  • Patent number: 10848681
    Abstract: Methods and systems for reconstructing images from sensor data are provided. In one example, a method comprises: receiving input data generated by photodiodes, each associated with a channel having a target wavelength range for photon-to-charge conversion; obtaining, for each channel, a plurality of channel coefficients configured to, when combined with the input data to generate channel output data for that channel, increase a main component of the channel output data contributed by the part of the incident light within the channel's target wavelength range relative to a crosstalk component contributed by the part of the incident light outside the target wavelength range; and generating, for each channel, the channel output data by combining the input data with the plurality of channel coefficients to reconstruct an image for that channel.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: November 24, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Huixuan Tang, Song Chen, Xinqiao Liu
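    Conceptually, the per-channel reconstruction combines the raw photodiode channels with a coefficient matrix that boosts the in-band component relative to out-of-band crosstalk. The sketch below illustrates the idea with a made-up two-channel mixing matrix; the patent does not prescribe this particular way of obtaining the coefficients.

```python
# Sketch: undo simulated 10% mutual crosstalk between two channels by
# combining the raw data with per-channel coefficients (here, the inverse of
# the assumed mixing matrix).
import numpy as np

def reconstruct_channels(raw, coeffs):
    """raw: (H, W, C) raw channel data; coeffs: (C, C), row c holding the
    coefficients for output channel c. Returns the unmixed (H, W, C) image."""
    return np.einsum('hwk,ck->hwc', raw, coeffs)

mixing = np.array([[1.0, 0.1],
                   [0.1, 1.0]])        # assumed crosstalk between 2 channels
orig = np.random.rand(4, 4, 2)
raw = orig @ mixing.T                  # simulate the crosstalk
clean = reconstruct_channels(raw, np.linalg.inv(mixing))
print(np.allclose(clean, orig))        # True
```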
  • Publication number: 20190355138
    Abstract: A depth camera assembly (DCA) determines depth information. The DCA projects a dynamic structured light pattern into a local area and captures images including a portion of the dynamic structured light pattern. The DCA determines regions of interest in which it may be beneficial to increase or decrease the amount of texture added to the region of interest using the dynamic structured light pattern. For example, the DCA may identify the regions of interest based on contrast values calculated using a contrast algorithm, or based on parameters received from a mapping server that includes a virtual model of the local area. The DCA may selectively increase or decrease the amount of texture added by the dynamic structured light pattern in portions of the local area. By selectively controlling portions of the dynamic structured light pattern, the DCA may decrease power consumption and/or increase the accuracy of depth sensing measurements.
    Type: Application
    Filed: May 17, 2019
    Publication date: November 21, 2019
    Inventors: Michael Hall, Xinqiao Liu, Zhaoming Zhu, Rajesh Lachhmandas Chhabria, Huixuan Tang, Shuochen Su, Zihe Gao
  • Publication number: 20190320105
    Abstract: Methods and systems for reconstructing images from sensor data are provided. In one example, a method comprises: receiving input data generated by photodiodes, each associated with a channel having a target wavelength range for photon-to-charge conversion; obtaining, for each channel, a plurality of channel coefficients configured to, when combined with the input data to generate channel output data for that channel, increase a main component of the channel output data contributed by the part of the incident light within the channel's target wavelength range relative to a crosstalk component contributed by the part of the incident light outside the target wavelength range; and generating, for each channel, the channel output data by combining the input data with the plurality of channel coefficients to reconstruct an image for that channel.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 17, 2019
    Inventors: Huixuan Tang, Song Chen, Xinqiao Liu
  • Patent number: 9521391
    Abstract: Systems and methods are disclosed for identifying depth refinement image capture instructions for capturing images that may be used to refine existing depth maps. The depth refinement image capture instructions are determined by evaluating, at each image patch in an existing image corresponding to the existing depth map, a range of possible depth values over a set of configuration settings. Each range of possible depth values corresponds to an existing depth estimate of the existing depth map. This evaluation enables selection of one or more configuration settings in a manner such that there will be additional depth information derivable from one or more additional images captured with the selected configuration settings. When a refined depth map is generated using the one or more additional images, this additional depth information is used to increase the depth precision for at least one depth estimate from the existing depth map.
    Type: Grant
    Filed: February 29, 2016
    Date of Patent: December 13, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
  • Patent number: 9479754
    Abstract: Depth maps are generated from two or more images captured with a conventional digital camera from the same viewpoint using different configuration settings, which may be arbitrarily selected for each image. The configuration settings may include aperture and focus settings and/or other configuration settings capable of introducing blur into an image. The depth of a selected image patch is evaluated over a set of discrete depth hypotheses using a depth likelihood function modeled to analyze corresponding image patches convolved with blur kernels using a flat prior in the frequency domain. In this way, the depth likelihood function may be evaluated without first reconstructing an all-in-focus image. Blur kernels used in the depth likelihood function are identified from a mapping of depths and configuration settings to the blur kernels. This mapping is determined from calibration data for the digital camera used to capture the two or more images.
    Type: Grant
    Filed: February 17, 2016
    Date of Patent: October 25, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
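    To make the frequency-domain evaluation concrete: for the correct depth hypothesis, two defocused observations of the same patch satisfy K2·I1 = K1·I2, so a normalized residual can score depth hypotheses without reconstructing the all-in-focus patch. The sketch below uses Gaussian kernels as stand-ins for the calibrated blur kernels and omits the noise terms, so it approximates the flat-prior likelihood described above rather than reproducing its exact form.

```python
# Approximate frequency-domain depth cost for one image patch observed twice
# with different (assumed Gaussian) blur. Lower cost = more plausible depth.
import numpy as np

def gaussian_kernel(size, sigma):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def depth_cost(patch1, patch2, sigma1, sigma2, eps=1e-6):
    """Residual of K2*I1 - K1*I2, normalised by |K1|^2 + |K2|^2, where sigma1
    and sigma2 are the blur radii a depth hypothesis implies for each shot."""
    size = patch1.shape[0]                       # square patches assumed
    K1 = np.fft.fft2(gaussian_kernel(size, sigma1))
    K2 = np.fft.fft2(gaussian_kernel(size, sigma2))
    I1, I2 = np.fft.fft2(patch1), np.fft.fft2(patch2)
    resid = np.abs(K2 * I1 - K1 * I2) ** 2
    return float(np.sum(resid / (np.abs(K1) ** 2 + np.abs(K2) ** 2 + eps)))

# The selected depth is the hypothesis with the lowest cost, e.g.
# best = min(hypotheses, key=lambda d: depth_cost(p1, p2, blur1(d), blur2(d)))
# where blur1/blur2 map depth to blur radius via the camera calibration.
```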
  • Publication number: 20160182880
    Abstract: Systems and methods are disclosed for identifying depth refinement image capture instructions for capturing images that may be used to refine existing depth maps. The depth refinement image capture instructions are determined by evaluating, at each image patch in an existing image corresponding to the existing depth map, a range of possible depth values over a set of configuration settings. Each range of possible depth values corresponds to an existing depth estimate of the existing depth map. This evaluation enables selection of one or more configuration settings in a manner such that there will be additional depth information derivable from one or more additional images captured with the selected configuration settings. When a refined depth map is generated using the one or more additional images, this additional depth information is used to increase the depth precision for at least one depth estimate from the existing depth map.
    Type: Application
    Filed: February 29, 2016
    Publication date: June 23, 2016
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
  • Publication number: 20160163053
    Abstract: Depth maps are generated from two or more images captured with a conventional digital camera from the same viewpoint using different configuration settings, which may be arbitrarily selected for each image. The configuration settings may include aperture and focus settings and/or other configuration settings capable of introducing blur into an image. The depth of a selected image patch is evaluated over a set of discrete depth hypotheses using a depth likelihood function modeled to analyze corresponding image patches convolved with blur kernels using a flat prior in the frequency domain. In this way, the depth likelihood function may be evaluated without first reconstructing an all-in-focus image. Blur kernels used in the depth likelihood function are identified from a mapping of depths and configuration settings to the blur kernels. This mapping is determined from calibration data for the digital camera used to capture the two or more images.
    Type: Application
    Filed: February 17, 2016
    Publication date: June 9, 2016
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
  • Patent number: 9307221
    Abstract: Systems and methods are disclosed for identifying depth refinement image capture instructions for capturing images that may be used to refine existing depth maps. The depth refinement image capture instructions are determined by evaluating, at each image patch in an existing image corresponding to the existing depth map, a range of possible depth values over a set of configuration settings. Each range of possible depth values corresponds to an existing depth estimate of the existing depth map. This evaluation enables selection of one or more configuration settings in a manner such that there will be additional depth information derivable from one or more additional images captured with the selected configuration settings. When a refined depth map is generated using the one or more additional images, this additional depth information is used to increase the depth precision for at least one depth estimate from the existing depth map.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: April 5, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
  • Patent number: 9307222
    Abstract: Systems and methods are disclosed for identifying image capture instructions for capturing images that may be used to generate quality depth maps. In some examples, the image capture instructions are generated by predictively determining in a scene-independent manner configuration settings to be used by a camera to capture a minimal quantity of images for generating the quality depth map. The image capture instructions may thus indicate a quantity of images to be captured and the aperture and focus settings to be used when capturing the images. The image capture instructions may be determined based in part on a distance estimate, camera calibration information and a predetermined range of optimal blur radii. The range of optimal blur radii ensures that there will be sufficient depth information for generating a depth map of a particular quality from the yet-to-be-captured images.
    Type: Grant
    Filed: December 19, 2014
    Date of Patent: April 5, 2016
    Assignee: Adobe Systems Incorporated
    Inventors: Huixuan Tang, Scott Cohen, Stephen Schiller, Brian Price
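    As a rough illustration of keeping blur radii inside a target range, the sketch below evaluates candidate (f-number, focus distance) settings with the thin-lens blur-circle formula for a given distance estimate; the lens parameters and the acceptable range are assumptions, not values from the patent.

```python
# Hypothetical sketch: filter capture settings so the predicted defocus blur
# at the estimated subject distance stays inside an assumed usable range.
def blur_circle_mm(focal_mm, f_number, focus_mm, subject_mm):
    """Thin-lens blur-circle diameter (mm) at the sensor for an object at
    subject_mm when the lens is focused at focus_mm."""
    aperture = focal_mm / f_number
    return (aperture * (focal_mm / (focus_mm - focal_mm))
            * abs(subject_mm - focus_mm) / subject_mm)

def usable_settings(settings, subject_mm, focal_mm=50.0, blur_range=(0.02, 0.2)):
    """Keep (f_number, focus_mm) pairs whose blur falls inside blur_range."""
    lo, hi = blur_range
    return [(n, fo) for n, fo in settings
            if lo <= blur_circle_mm(focal_mm, n, fo, subject_mm) <= hi]

print(usable_settings([(2.0, 1500.0), (8.0, 1500.0), (2.0, 2500.0)], 2000.0))
# -> [(8.0, 1500.0), (2.0, 2500.0)]
```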