Patents by Inventor Chenyang Ge

Chenyang Ge has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11143880
    Abstract: The present disclosure relates to a ToF depth sensor based on laser speckle projection. The ToF depth sensor includes a laser projector for projecting periodic infrared laser signals carrying phase information into the detected space; a diffractive optical element (DOE) for uniformly splitting a beam of incident infrared laser signals into L beams of emergent infrared laser signals, such that each emergent beam carries its own phase information and retains the beam diameter, divergence angle and wavefront of the incident beam, with only the propagation direction changed, thereby controlling the coded pattern projected by the laser speckles; and an image sensor for calculating depth information of a measured object by matching the laser speckles to pixel points of the image sensor (a brief phase-to-depth sketch follows this entry).
    Type: Grant
    Filed: December 18, 2019
    Date of Patent: October 12, 2021
    Assignee: XI'AN JIAOTONG UNIVERSITY
    Inventors: Chenyang Ge, Xin Qiao, Huimin Yao, Yanhui Zhou, Pengchao Deng
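The abstract above describes phase-based ToF depth recovery from projected periodic infrared signals. As a rough illustration only, here is the generic four-sample indirect ToF depth estimate; the patent does not disclose its computation, and the sample names, sign convention, and modulation frequency below are assumptions.

```python
# Illustrative only: generic four-sample indirect ToF depth estimate. The
# modulation frequency, sample names and sign convention are assumptions,
# not values taken from the patent.
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, mod_freq_hz=60e6):
    """Estimate depth (metres) from four phase-shifted correlation samples."""
    phase = np.arctan2(q90 - q270, q0 - q180)   # returned-signal phase, (-pi, pi]
    phase = np.mod(phase, 2 * np.pi)            # wrap to [0, 2*pi)
    # Round-trip distance is proportional to phase; halve it for one-way depth.
    return C * phase / (4 * np.pi * mod_freq_hz)

# Example with random per-pixel samples standing in for a sensor readout.
q = [np.random.rand(4, 4) for _ in range(4)]
depth_m = itof_depth(*q)
```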
  • Patent number: 11037313
    Abstract: Disclosed are a self-correction method and device for a structured light depth camera of a smartphone. The self-correction device consists of an infrared laser speckle projector, an image receiving sensor, a self-correction module, a depth calculating module and a mobile phone application processor (AP).
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: June 15, 2021
    Assignee: Ningbo YX Information Technology Co., Ltd.
    Inventors: Chenyang Ge, Yanmei Xie, Yanhui Zhou
  • Patent number: 10764559
    Abstract: A depth information acquisition method and device are provided. The method includes: determining the relative geometric position relationship and internal parameters between a ToF camera and the left and right cameras of a binocular camera; collecting the depth map generated by the ToF camera and the images of the two cameras; converting the depth map into binocular disparity values between corresponding pixels in the images of the two cameras; mapping, by using the converted binocular disparity values, any pixel in the ToF depth map to the corresponding pixel coordinates in the images of the two cameras to obtain a sparse disparity map; and performing the calculation on all pixels in the ToF depth map to obtain a dense disparity map, thereby obtaining more accurate and denser depth information; or inversely calibrating the depth map collected by the ToF camera with the sparse disparity map (the depth-to-disparity relation is sketched after this entry).
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: September 1, 2020
    Assignee: XI'AN JIAOTONG UNIVERSITY
    Inventors: Chenyang Ge, Huimin Yao, Yanmei Xie, Yanhui Zhou
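The conversion step in the abstract above relies on the standard rectified-stereo relation between depth and disparity, d = f·B/Z. Below is a minimal sketch assuming rectified cameras; the focal length and baseline values are illustrative, not from the patent.

```python
# Minimal sketch of mapping ToF depth to binocular disparity via d = f * B / Z.
# Assumes rectified cameras; focal length and baseline are illustrative values.
import numpy as np

def tof_depth_to_disparity(depth_map_m, focal_px, baseline_m):
    """Convert a metric ToF depth map to disparity (pixels) between two rectified cameras."""
    depth = np.asarray(depth_map_m, dtype=np.float64)
    disparity = np.zeros_like(depth)
    valid = depth > 0                       # skip holes / invalid ToF pixels
    disparity[valid] = focal_px * baseline_m / depth[valid]
    return disparity

# A flat 1.5 m scene maps to a constant (sparse) disparity of ~19.3 px here.
sparse_disp = tof_depth_to_disparity(np.full((480, 640), 1.5), focal_px=580.0, baseline_m=0.05)
```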
  • Patent number: 10740917
    Abstract: The present disclosure provides an automatic correction method and device for a structured-light 3D depth camera. When the optical axis of the laser encoded pattern projector or of the image receiving sensor changes, the offset of the input encoded image relative to an image block in the reference encoded image is acquired, and the position of the reference encoded image is then adjusted upwards or downwards in the opposite direction according to the offset change, forming a closed-loop self-feedback regulation between the center of the input encoded image and the center of the reference encoded image, so that the optimal matching relation can always be determined even when the optical axes drift substantially. Depth calculation is then carried out according to the corrected offset (a sketch of this feedback loop follows this entry).
    Type: Grant
    Filed: December 18, 2018
    Date of Patent: August 11, 2020
    Assignee: XI'AN JIAOTONG UNIVERSITY
    Inventors: Chenyang Ge, Yanmei Xie, Huimin Yao, Bing Zhou, Kangduo Zhang, Long Zuo
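A rough sketch of the closed-loop idea in the abstract above: estimate how far the live encoded image has drifted vertically against the stored reference, then move the reference so the residual offset is driven back to zero. The phase-correlation estimator used here is an assumption, not the patented method.

```python
# Sketch of the self-feedback correction loop: measure the vertical drift of
# the input encoded image against the reference, then shift the reference so
# the residual offset goes to zero. Phase correlation is an assumed estimator.
import numpy as np

def vertical_offset(input_img, reference_img):
    """Estimate vertical shift (rows) of input_img relative to reference_img."""
    f1, f2 = np.fft.fft2(input_img), np.fft.fft2(reference_img)
    spectrum = f1 * np.conj(f2)
    peak = np.fft.ifft2(spectrum / (np.abs(spectrum) + 1e-9))
    dy = int(np.unravel_index(np.argmax(np.abs(peak)), peak.shape)[0])
    h = input_img.shape[0]
    return dy if dy <= h // 2 else dy - h    # wrap to a signed shift

def correct_reference(reference_img, dy):
    """Move the reference by the measured drift so the next match is centred."""
    return np.roll(reference_img, dy, axis=0)

ref = np.random.rand(128, 128)
drifted = np.roll(ref, 5, axis=0)            # input drifted by 5 rows
assert vertical_offset(drifted, ref) == 5
```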
  • Publication number: 20200201064
    Abstract: The present disclosure relates to a ToF depth sensor based on laser speckle projection. The ToF depth sensor includes a laser projector for projecting periodic infrared laser signals carrying phase information into the detected space; a diffractive optical element (DOE) for uniformly splitting a beam of incident infrared laser signals into L beams of emergent infrared laser signals, such that each emergent beam carries its own phase information and retains the beam diameter, divergence angle and wavefront of the incident beam, with only the propagation direction changed, thereby controlling the coded pattern projected by the laser speckles; and an image sensor for calculating depth information of a measured object by matching the laser speckles to pixel points of the image sensor.
    Type: Application
    Filed: December 18, 2019
    Publication date: June 25, 2020
    Inventors: Chenyang Ge, Xin Qiao, Huimin Yao, Yanhui Zhou, Pengchao Deng
  • Patent number: 10655955
    Abstract: The present disclosure provides a time-space coding method and apparatus for generating a structured light coded pattern. Different coded patterns are output in time division by driving different light-emitting points of a regular light-emitting lattice so as to time-space label a target object or a projection space. This overcomes the prior-art limitation that multiple projectors are needed to project different patterns in time division when time-space labels of multiple patterns are required: by driving different light-emitting elements on the light-emitting substrate, different coded patterns are formed, and with time-division output, time-space labeling of an object can be accomplished with only one projector. Moreover, time-space coding combined with a depth decoding algorithm can significantly improve the accuracy and robustness of three-dimensional depth measurement (a frame-sequencing sketch follows this entry).
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: May 19, 2020
    Assignee: NINGBO YINGXIN INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Chenyang Ge, Huimin Yao, Yanhui Zhou
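As a toy illustration of the time-division idea described above, the sketch below drives a different subset of a regular emitter lattice in each frame, so successive frames project distinct coded patterns and each emitter position receives a temporal code. The lattice size and the grouping rule are invented for illustration and are not the patented drive scheme.

```python
# Toy sketch of time-division coding on a regular light-emitting lattice:
# each frame drives a different subset of emitters, so the on/off sequence at
# each position forms a temporal code. Sizes and the grouping rule are
# illustrative assumptions.
import numpy as np

def frame_pattern(rows, cols, frame_idx, num_frames):
    """Boolean drive mask for one frame of a time-coded projection sequence."""
    r, c = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
    return (r + c) % num_frames == frame_idx   # diagonal grouping of emitters

patterns = [frame_pattern(16, 16, k, num_frames=4) for k in range(4)]
temporal_code = np.stack(patterns).astype(int)  # shape (4, 16, 16): code per emitter
```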
  • Publication number: 20200128225
    Abstract: A depth information acquisition method and device are provided. The method includes: determining the relative geometric position relationship and internal parameters between a ToF camera and the left and right cameras of a binocular camera; collecting the depth map generated by the ToF camera and the images of the two cameras; converting the depth map into binocular disparity values between corresponding pixels in the images of the two cameras; mapping, by using the converted binocular disparity values, any pixel in the ToF depth map to the corresponding pixel coordinates in the images of the two cameras to obtain a sparse disparity map; and performing the calculation on all pixels in the ToF depth map to obtain a dense disparity map, thereby obtaining more accurate and denser depth information; or inversely calibrating the depth map collected by the ToF camera with the sparse disparity map.
    Type: Application
    Filed: January 30, 2019
    Publication date: April 23, 2020
    Inventors: Chenyang GE, Huimin YAO, Yanmei XIE, Yanhui ZHOU
  • Patent number: 10381805
    Abstract: The present disclosure provides a VCSEL regular-lattice-based laser speckle projector comprising a VCSEL regular light-emitting lattice, a collimator, a diffractive optical element, an X-Y direction driving circuit, and a lattice display control module. Multiple frames of different coded patterns are output by driving the light-emitting elements of the VCSEL lattice, thereby implementing time-space labeling of the target object or projection space, and three-dimensional depth measurement of the target object is finally obtained through a depth decoding algorithm. Compared with traditional structured-light spatial coding, the projector facilitates time-space coding, so that higher precision, robustness, and anti-interference capability can be achieved during depth decoding for three-dimensional measurement.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: August 13, 2019
    Assignee: NINGBO YINGXIN INFORMATION TECHNOLOGY CO., LTD.
    Inventors: Chenyang Ge, Huimin Yao, Yanhui Zhou
  • Publication number: 20190188873
    Abstract: The present disclosure provides an automatic correction method and device for a structured-light 3D depth camera. When the optical axis of the laser encoded pattern projector or of the image receiving sensor changes, the offset of the input encoded image relative to an image block in the reference encoded image is acquired, and the position of the reference encoded image is then adjusted upwards or downwards in the opposite direction according to the offset change, forming a closed-loop self-feedback regulation between the center of the input encoded image and the center of the reference encoded image, so that the optimal matching relation can always be determined even when the optical axes drift substantially. Depth calculation is then carried out according to the corrected offset.
    Type: Application
    Filed: December 18, 2018
    Publication date: June 20, 2019
    Inventors: Chenyang GE, Yanmei XIE, Huimin YAO, Bing ZHOU, Kangduo ZHANG, Long ZUO
  • Publication number: 20190188874
    Abstract: Disclosed are a self-correction method and device for a structured light depth camera of a smartphone. The self-correction device consists of an infrared laser speckle projector, an image receiving sensor, a self-correction module, a depth calculating module and a mobile phone application processor (AP).
    Type: Application
    Filed: December 18, 2018
    Publication date: June 20, 2019
    Inventors: Chenyang GE, Yanmei XIE, Yanhui ZHOU
  • Publication number: 20190178635
    Abstract: The present disclosure provides a time-space coding method and apparatus for generating a structured light coded pattern. Different coded patterns are output in time division by driving different light-emitting points of a regular light-emitting lattice so as to time-space label a target object or a projection space. This overcomes the prior-art limitation that multiple projectors are needed to project different patterns in time division when time-space labels of multiple patterns are required: by driving different light-emitting elements on the light-emitting substrate, different coded patterns are formed, and with time-division output, time-space labeling of an object can be accomplished with only one projector. Moreover, time-space coding combined with a depth decoding algorithm can significantly improve the accuracy and robustness of three-dimensional depth measurement.
    Type: Application
    Filed: August 13, 2018
    Publication date: June 13, 2019
    Applicant: Ningbo Yingxin Information Technology Co., Ltd.
    Inventors: Chenyang GE, Huimin YAO, Yanhui ZHOU
  • Publication number: 20190181618
    Abstract: The present disclosure provides a VCSEL regular-lattice-based laser speckle projector comprising a VCSEL regular light-emitting lattice, a collimator, a diffractive optical element, an X-Y direction driving circuit, and a lattice display control module. Multiple frames of different coded patterns are output by driving the light-emitting elements of the VCSEL lattice, thereby implementing time-space labeling of the target object or projection space, and three-dimensional depth measurement of the target object is finally obtained through a depth decoding algorithm. Compared with traditional structured-light spatial coding, the projector facilitates time-space coding, so that higher precision, robustness, and anti-interference capability can be achieved during depth decoding for three-dimensional measurement.
    Type: Application
    Filed: August 13, 2018
    Publication date: June 13, 2019
    Inventors: Chenyang GE, Huimin YAO, Yanhui ZHOU
  • Patent number: 10194138
    Abstract: The present invention discloses a structured light encoding-based vertical depth perception apparatus comprising a laser pattern projector, an infrared receiving camera, an RGB receiving camera, and a depth perception module. The laser pattern projector, the RGB receiving camera, and the infrared receiving camera are arranged along a straight line perpendicular to the horizontal plane; the laser pattern projector projects a laser encoded pattern; the infrared receiving camera consecutively acquires the encoded patterns and generates an input encoded image sequence; the RGB receiving camera acquires an RGB video stream; and the depth perception module generates a depth map sequence.
    Type: Grant
    Filed: June 6, 2016
    Date of Patent: January 29, 2019
    Assignee: RGBDSENSE INFORMATION TECHNOLOGY LTD.
    Inventors: Yanhui Zhou, Chenyang Ge
  • Patent number: 10194135
    Abstract: A three-dimensional depth perception apparatus includes a synchronized trigger module, an MIPI receiving/transmitting module, a multiplexing core computing module, a storage controller module, a memory, and a MUX selecting module. The synchronized trigger module generates a synchronized trigger signal transmitted to an image acquiring module; the MIPI receiving/transmitting module supports input/output of MIPI video streams and video streams in other formats; the multiplexing core computing module selects a monocular or binocular structured-light depth perception working mode. The apparatus flexibly adopts monocular or binocular structured-light depth sensing to leverage the advantages of each mode, and its MIPI-in, MIPI-out operation is nearly transparent to the user, who directly obtains the depth map.
    Type: Grant
    Filed: July 20, 2016
    Date of Patent: January 29, 2019
    Inventors: Chenyang Ge, Nanning Zheng, Yanhui Zhou
  • Publication number: 20180347967
    Abstract: The present disclosure discloses a method of generating a random coded pattern for coding structured light. Under a certain distribution rule, coding primitives are added one by one using a random probability distribution map, thereby generating a random coded pattern that satisfies a window-uniqueness distribution requirement. The random coded pattern may be used independently as the structured light coded pattern, or basic image elements may be spliced and expanded into the structured light coded pattern. The coded pattern obtained by the method is projected by a projecting device over a certain field of view, performing spatial coding and feature calibration of the three-dimensional space or a target object for depth identification (a toy window-uniqueness sketch follows this entry). The present disclosure also discloses an apparatus for generating a random coded pattern for coding structured light.
    Type: Application
    Filed: July 31, 2017
    Publication date: December 6, 2018
    Applicant: RGBDsense Information Technology Ltd.
    Inventors: Chenyang GE, Huimin Yao, Yanhui ZHOU
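A toy sketch related to the window-uniqueness requirement mentioned above: it places dots at random and then verifies that every fixed-size window is unique within its row band (the direction a stereo search would scan). This is a verification-only simplification; the incremental, probability-map-driven construction in the abstract is not reproduced, and all sizes and the density are assumptions.

```python
# Toy sketch: random dot pattern plus a row-wise window-uniqueness check.
# This only verifies the property; it is not the incremental probability-map
# construction described in the abstract. Sizes and density are assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W, WIN, DENSITY = 64, 256, 9, 0.08

pattern = (rng.random((H, W)) < DENSITY).astype(np.uint8)

def row_windows_unique(img, win=WIN):
    """True if every win x win window is unique among the windows on its row band."""
    for y in range(img.shape[0] - win + 1):
        seen = set()
        for x in range(img.shape[1] - win + 1):
            key = img[y:y + win, x:x + win].tobytes()
            if key in seen:
                return False
            seen.add(key)
    return True

print("window-unique along rows:", row_windows_unique(pattern))
```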
  • Patent number: 10142612
    Abstract: The present invention provides a method of binocular depth perception based on active structured light. A coded pattern projector projects a coded pattern to code the projection space or target object with structured light (characteristic calibration). The coded pattern is then captured by two cameras on the same baseline, located symmetrically on either side of the projector. After preprocessing and projection shadow detection, block-matching motion estimation is performed in two modes (binocular block matching and automatic matching) to obtain the offset of the optimal matching block. Finally, the depth value is computed according to the depth calculation formula and the depth of the projection shadows is compensated, generating high-resolution, high-precision depth information (a block-matching sketch follows this entry).
    Type: Grant
    Filed: January 7, 2015
    Date of Patent: November 27, 2018
    Inventors: Chenyang Ge, Nanning Zheng
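A minimal sketch of the block-matching and depth steps named in the abstract above: sum-of-absolute-differences matching along the epipolar line of two rectified images, followed by the standard relation Z = f·B/d. The window size, search range and calibration values are assumptions, and shadow detection and compensation are not shown.

```python
# Minimal SAD block-matching sketch for two rectified encoded images, plus the
# standard depth relation Z = f * B / d. Window size, search range and
# calibration values are illustrative; shadow handling is omitted.
import numpy as np

def sad_disparity(left, right, win=9, max_disp=64):
    """Per-pixel disparity (left relative to right) by SAD block matching."""
    h, w = left.shape
    half = win // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            block = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [
                np.abs(block - right[y - half:y + half + 1,
                                     x - d - half:x - d + half + 1].astype(np.int32)).sum()
                for d in range(max_disp)
            ]
            disp[y, x] = float(np.argmin(costs))   # offset of the best-matching block
    return disp

def disparity_to_depth(disp, focal_px=580.0, baseline_m=0.05):
    """Apply Z = f * B / d, leaving zero-disparity pixels at depth 0."""
    depth = np.zeros_like(disp)
    depth[disp > 0] = focal_px * baseline_m / disp[disp > 0]
    return depth
```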
  • Patent number: 9995578
    Abstract: Based on an active vision model using structured light, a hardware structure of a high-precision depth perception device (a chip or an IP core) is disclosed. The module can serve either as an independent chip or as an embedded IP core. Its main principle is as follows: a speckle image sequence (obtained from an external image sensor, with unknown depth information) is processed by an adaptive and uniform pre-processing sub-module and compared with a standard speckle image (with known depth information); motion-vector information of the input speckle image is obtained through block pattern matching (similarity calculation) in the block-matching motion estimation sub-module; a depth image is obtained by depth calculation; and a high-resolution depth image sequence is finally output after post-processing.
    Type: Grant
    Filed: September 29, 2013
    Date of Patent: June 12, 2018
    Inventors: Chenyang Ge, Yanhui Zhou
  • Patent number: 9959626
    Abstract: The present invention discloses a three-dimensional depth perception method and apparatus with an adjustable working range. The method comprises: setting a working range mode externally or by adaptive adjustment; projecting encoded patterns into the corresponding working range by adjusting the driving current of the laser pattern projector driving circuit, the receiving camera focal length and the baseline distance; collecting the sequence of projected encoded images and feeding it into a depth perception module that adjusts the image preprocessing control parameters according to the working range mode; selecting, from a group of reference encoded images, the one matching the working range mode to perform block-matching-based disparity computation and depth computation on the input encoded image sequence; and outputting a depth image sequence. A three-dimensional depth perception apparatus with an adjustable working range is implemented based on the method.
    Type: Grant
    Filed: February 19, 2016
    Date of Patent: May 1, 2018
    Inventors: Yanhui Zhou, Chenyang Ge
  • Patent number: 9888257
    Abstract: The present invention discloses a storage control method and apparatus for depth perception computation. The method comprises: sequentially reading each part of image data, originating from each frame of a group of binarized structured-light encoded image sequences, and splicing it into a binarized spliced image according to a preset write mapping rule; writing, by a read/write controller, each spliced part into a memory for storage so as to generate one complete frame of binarized spliced image; then, by changing the address mapping, fixing the generated binarized spliced image at a certain position within the memory; and, in use, reading out one or more frames of binarized spliced images in sequence as reference encoded images for depth perception computation (a bit-packing sketch follows this entry).
    Type: Grant
    Filed: June 2, 2016
    Date of Patent: February 6, 2018
    Assignee: RGBDSENSE INFORMATION TECHNOLOGY LTD.
    Inventors: Chenyang Ge, Yanhui Zhou
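A rough sketch of the splicing and storage idea described above: the 1-bit pixels of several binarized reference frames are packed into one contiguous buffer, and any frame can be read back by index as a reference encoded image. The packing rule and memory layout here are assumptions, not the patented write-mapping rule.

```python
# Sketch of splicing binarized reference frames into one packed buffer and
# reading a frame back by index. The packing rule and memory layout are
# assumptions, not the patented write-mapping rule.
import numpy as np

def splice(frames):
    """Pack equally-sized binary images into one packed byte buffer."""
    stack = np.stack([f.astype(np.uint8) & 1 for f in frames])   # (n, h, w)
    return np.packbits(stack, axis=None), stack.shape

def read_frame(buffer, shape, idx):
    """Recover binarized frame `idx` from the packed buffer."""
    bits = np.unpackbits(buffer, count=int(np.prod(shape)))
    return bits.reshape(shape)[idx]

frames = [np.random.randint(0, 2, (480, 640), dtype=np.uint8) for _ in range(4)]
buf, shp = splice(frames)
assert np.array_equal(read_frame(buf, shp, 2), frames[2])
```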
  • Patent number: 9829309
    Abstract: The present invention discloses a depth sensing method, device and system based on symbol-array planar structured light. The coded symbol array pattern is projected by the laser pattern projector onto the target object or space, and the image sensor collects the successive sequence of input symbol-encoded images. First, the input symbol-encoded image is decoded; the decoding process includes preprocessing, symbol location, symbol recognition and symbol correction. Second, the disparity of the decoded symbols is calculated by symbol matching between the decoded input symbol image (with symbol recognition completed) and the decoded reference symbol image at a known distance. Finally, the depth calculation formula is applied to generate depth point cloud information of the target object or projection space, represented in the form of a grid (a symbol-recognition fragment is sketched after this entry).
    Type: Grant
    Filed: July 23, 2015
    Date of Patent: November 28, 2017
    Assignee: RGBDsense Information Technology Ltd.
    Inventors: Chenyang Ge, Yanhui Zhou
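The abstract above outlines a decode pipeline (preprocessing, symbol location, recognition, correction, matching, depth). The fragment below illustrates only the recognition step, scoring an image patch against a small symbol alphabet with normalised cross-correlation; the alphabet, patch size and scoring are assumptions, and the other pipeline stages are not shown.

```python
# Illustrative fragment of the recognition stage only: pick the best-matching
# symbol for an image patch by normalised cross-correlation. The alphabet and
# scoring are assumptions; location, correction and matching are not shown.
import numpy as np

def recognise_symbol(patch, templates):
    """Return the index of the template that best matches the patch."""
    p = (patch - patch.mean()) / (patch.std() + 1e-9)
    scores = [float((p * ((t - t.mean()) / (t.std() + 1e-9))).mean()) for t in templates]
    return int(np.argmax(scores))

# Hypothetical 3-symbol alphabet; a noisy "diagonal" patch is recognised as symbol 0.
rng = np.random.default_rng(1)
templates = [np.eye(5), np.fliplr(np.eye(5)), np.ones((5, 5))]
patch = np.eye(5) + 0.05 * rng.random((5, 5))
assert recognise_symbol(patch, templates) == 0
```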