Patents by Inventor Yebin Liu

Yebin Liu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954870
    Abstract: Provided are a three-dimensional reconstruction method, apparatus and system of a dynamic scene, a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; obtaining target observation points of the array of drones through calculation according to the three-dimensional reconstruction model and current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture depth images, and updating the three-dimensional reconstruction model according to multiple continuous depth image sequences captured by the array of drones at the target observation points.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: April 9, 2024
    Assignee: Tsinghua University
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
  • Publication number: 20210074012
    Abstract: Provided are a three-dimensional reconstruction method, apparatus and system of a dynamic scene, a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; obtaining target observation points of the array of drones through calculation according to the three-dimensional reconstruction model and current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture depth images, and updating the three-dimensional reconstruction model according to multiple continuous depth image sequences captured by the array of drones at the target observation points.
    Type: Application
    Filed: April 23, 2019
    Publication date: March 11, 2021
    Applicant: Tsinghua University
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
  • Patent number: 9472021
    Abstract: A method and a system for three-dimensionally reconstructing a non-rigid body based on a multi-depth-map are provided. The method comprises: obtaining a plurality of depth maps by shooting the non-rigid body in different postures and from different angles; transforming each depth map into one group of three-dimensional point clouds and obtaining a plurality of matching point pairs among a plurality of groups of three-dimensional point clouds; conducting a position transformation for each matching point and obtaining a transformation parameter corresponding to each matching point after the position transformation; mosaicing all transformation parameters to obtain a mosaicing result and constructing an energy function according to the mosaicing result; and solving the energy function to obtain a solution result and reconstructing a three-dimensional model of the non-rigid body according to the solution result.
    Type: Grant
    Filed: April 25, 2014
    Date of Patent: October 18, 2016
    Assignee: Tsinghua University
    Inventors: Qionghai Dai, Genzhi Ye, Yebin Liu
  • Publication number: 20140320491
    Abstract: A method and a system for three-dimensionally reconstructing a non-rigid body based on a multi-depth-map are provided. The method comprises: obtaining a plurality of depth maps by shooting the non-rigid body in different postures and from different angles; transforming each depth map into one group of three-dimensional point clouds and obtaining a plurality of matching point pairs among a plurality of groups of three-dimensional point clouds; conducting a position transformation for each matching point and obtaining a transformation parameter corresponding to each matching point after the position transformation; mosaicing all transformation parameters to obtain a mosaicing result and constructing an energy function according to the mosaicing result; and solving the energy function to obtain a solution result and reconstructing a three-dimensional model of the non-rigid body according to the solution result.
    Type: Application
    Filed: April 25, 2014
    Publication date: October 30, 2014
    Applicant: Tsinghua University
    Inventors: Qionghai Dai, Genzhi Ye, Yebin Liu
  • Patent number: 8335371
    Abstract: A method for vision field computing may comprise the following steps of: forming a sampling system for a multi-view dynamic scene; controlling cameras in the sampling system for the multi-view dynamic scene to perform spatial interleaved sampling, temporal interleaved exposure sampling and exposure-variant sampling; performing spatial intersection on the sampling information in the view subspace of the dynamic scene and temporal intersection on the sampling information in the time subspace of the dynamic scene to reconstruct a dynamic scene geometry model; performing silhouette back projection based on the dynamic scene geometry model to obtain silhouette motion constraints for the view angles of the cameras; performing temporal decoupling for motion de-blurring with the silhouette motion constraints; and reconstructing a dynamic scene 3D model with a resolution larger than the nominal resolution of each camera by a 3D reconstructing algorithm.
    Type: Grant
    Filed: December 27, 2010
    Date of Patent: December 18, 2012
    Assignee: Tsinghua University
    Inventors: Qionghai Dai, Di Wu, Yebin Liu
  • Publication number: 20110158507
    Abstract: A method for vision field computing may comprise the following steps of: forming a sampling system for a multi-view dynamic scene; controlling cameras in the sampling system for the multi-view dynamic scene to perform spatial interleaved sampling, temporal interleaved exposure sampling and exposure-variant sampling; performing spatial intersection on the sampling information in the view subspace of the dynamic scene and temporal intersection on the sampling information in the time subspace of the dynamic scene to reconstruct a dynamic scene geometry model; performing silhouette back projection based on the dynamic scene geometry model to obtain silhouette motion constraints for the view angles of the cameras; performing temporal decoupling for motion de-blurring with the silhouette motion constraints; and reconstructing a dynamic scene 3D model with a resolution larger than the nominal resolution of each camera by a 3D reconstructing algorithm.
    Type: Application
    Filed: December 27, 2010
    Publication date: June 30, 2011
    Applicant: Tsinghua University
    Inventors: Qionghai Dai, Di Wu, Yebin Liu
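The fusion step described in patent 11954870 — merging continuous depth image sequences into a single three-dimensional reconstruction model — is commonly implemented with volumetric (truncated signed distance function, TSDF) fusion. The sketch below is a minimal, hypothetical illustration of that general technique, not the patented method itself; the function name, parameters, and the simple averaging scheme are all assumptions.

```python
import numpy as np

def fuse_depth_frame(tsdf, weights, depth, K, cam_pose, voxel_size, trunc=0.05):
    """Fuse one depth image into a TSDF voxel grid (simplified sketch).

    tsdf, weights : (X, Y, Z) arrays of truncated signed distances and
                    per-voxel fusion weights, updated in place.
    depth         : (H, W) depth image in metres; 0 marks missing pixels.
    K             : 3x3 pinhole camera intrinsics.
    cam_pose      : 4x4 camera-to-world transform.
    """
    X, Y, Z = tsdf.shape
    # World coordinates of every voxel centre (grid anchored at the origin).
    ii, jj, kk = np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z),
                             indexing="ij")
    pts_w = np.stack([ii, jj, kk], axis=-1).reshape(-1, 3) * voxel_size
    # Transform voxel centres into the camera frame.
    world_to_cam = np.linalg.inv(cam_pose)
    pts_c = pts_w @ world_to_cam[:3, :3].T + world_to_cam[:3, 3]
    z = pts_c[:, 2]
    z_safe = np.where(z > 0, z, np.inf)       # avoid division by zero
    # Project into the image plane.
    u = np.round(pts_c[:, 0] * K[0, 0] / z_safe + K[0, 2]).astype(int)
    v = np.round(pts_c[:, 1] * K[1, 1] / z_safe + K[1, 2]).astype(int)
    H, W = depth.shape
    valid = (z > 0) & (u >= 0) & (u < W) & (v >= 0) & (v < H)
    d = np.zeros_like(z)
    d[valid] = depth[v[valid], u[valid]]
    valid &= d > 0
    # Truncated signed distance along the viewing ray; skip voxels far
    # behind the observed surface (occluded space).
    sdf = np.clip(d - z, -trunc, trunc)
    upd = valid & (d - z > -trunc)
    ft = tsdf.reshape(-1)                     # views into the grids
    fw = weights.reshape(-1)
    # Weighted running average, as in standard volumetric fusion.
    ft[upd] = (ft[upd] * fw[upd] + sdf[upd]) / (fw[upd] + 1.0)
    fw[upd] += 1.0
```

In a multi-drone setting such as the one the abstract describes, each drone's depth stream would call a routine like this against a shared grid; the per-voxel weights then indicate which regions remain poorly observed, which is one plausible input for choosing the next observation points.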
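Patent 9472021's pipeline starts by transforming each depth map into a group of three-dimensional point clouds and then aligning matching point pairs. The sketch below illustrates those two generic building blocks — pinhole back-projection and a closed-form rigid fit of matched pairs (Kabsch/SVD). It is an illustrative sketch only: the patent's energy function handles non-rigid deformation, which this rigid special case does not, and all names here are assumptions.

```python
import numpy as np

def depth_to_pointcloud(depth, fx, fy, cx, cy):
    """Back-project a depth image into a camera-frame point cloud using
    the pinhole model; pixels with depth 0 are treated as missing."""
    H, W = depth.shape
    v, u = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    pts = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]

def fit_rigid(src, dst):
    """Closed-form least-squares rigid transform (R, t) aligning matched
    point pairs: minimises sum ||R @ src_i + t - dst_i||**2 via SVD."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)      # cross-covariance of the pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:               # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

A non-rigid method along the lines of the abstract would estimate one such local transformation per matching point and couple them through a regularised energy; the rigid fit above is only the per-pair core of that idea.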
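The "spatial intersection" and "silhouette back projection" steps in patent 8335371 relate to the classic visual-hull idea: a world point can belong to the scene geometry only if it projects inside the foreground silhouette of every camera. The sketch below shows that standard carving test, not the patented sampling scheme; the function name and setup are illustrative assumptions.

```python
import numpy as np

def visual_hull(silhouettes, projections, grid_pts):
    """Keep the candidate points whose projection lands inside every
    camera's foreground silhouette (simplified spatial intersection).

    silhouettes : list of (H, W) boolean foreground masks.
    projections : list of 3x4 camera projection matrices (world -> pixels).
    grid_pts    : (N, 3) candidate points in world coordinates.
    """
    N = grid_pts.shape[0]
    hom = np.hstack([grid_pts, np.ones((N, 1))])   # homogeneous coordinates
    inside = np.ones(N, dtype=bool)
    for sil, P in zip(silhouettes, projections):
        H, W = sil.shape
        uvw = hom @ P.T
        z = uvw[:, 2]
        ok = z > 0                                 # in front of the camera
        u = np.full(N, -1)
        v = np.full(N, -1)
        u[ok] = np.round(uvw[ok, 0] / z[ok]).astype(int)
        v[ok] = np.round(uvw[ok, 1] / z[ok]).astype(int)
        ok &= (u >= 0) & (u < W) & (v >= 0) & (v < H)
        hit = np.zeros(N, dtype=bool)
        hit[ok] = sil[v[ok], u[ok]]
        inside &= hit                              # must be seen by all views
    return grid_pts[inside]
```

In a multi-view rig like the one the abstract samples, the surviving points approximate the scene geometry; back-projecting the silhouettes onto that geometry is then what yields the per-view motion constraints the patent uses for de-blurring.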