Patents by Inventor Mengqi JI

Mengqi JI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11954870
    Abstract: Provided are a three-dimensional reconstruction method, apparatus and system of a dynamic scene, a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; calculating target observation points for the array of drones according to the three-dimensional reconstruction model and the current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture images, and updating the three-dimensional reconstruction model according to the multiple continuous depth image sequences captured by the array of drones at the target observation points.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: April 9, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
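The abstract above describes a capture-fuse-replan loop in which new observation points are computed from the current reconstruction and drone poses. A minimal greedy next-best-view sketch of that planning step is shown below; the candidate grid, sensing radius, and coverage criterion are illustrative assumptions, not the patented algorithm.

```python
# Illustrative sketch (not the patented method): for each drone, pick the
# neighboring candidate viewpoint that covers the most voxels not yet
# present in the reconstruction model. Radius and step sizes are assumed.
from itertools import product

def visible_unknown(viewpoint, unknown_voxels, radius=2.0):
    """Count unreconstructed voxels within sensing range of a viewpoint."""
    vx, vy, vz = viewpoint
    return sum(
        1 for (x, y, z) in unknown_voxels
        if (x - vx) ** 2 + (y - vy) ** 2 + (z - vz) ** 2 <= radius ** 2
    )

def target_observation_points(current_poses, unknown_voxels, step=1.0):
    """Greedy next-best-view: evaluate a 3x3x3 grid of moves around each
    drone's current pose and return the best target per drone."""
    targets = []
    for (px, py, pz) in current_poses:
        candidates = [
            (px + dx * step, py + dy * step, pz + dz * step)
            for dx, dy, dz in product((-1, 0, 1), repeat=3)
        ]
        targets.append(max(candidates,
                           key=lambda v: visible_unknown(v, unknown_voxels)))
    return targets
```

In a full system this selection would run once per fusion cycle, with the reconstruction model updated from the depth sequences captured at the chosen points.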
  • Publication number: 20240078734
    Abstract: An information interaction method and apparatus, an electronic device and a storage medium are provided, the method comprising: displaying a preset first special effect element in a virtual reality space; determining whether a spatial relationship between the first special effect element and a target control part, which is controlled by a user and displayed in the virtual reality space, meets a preset condition; and in response to determining that the preset condition is met, displaying a second special effect associated with the first special effect element.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 7, 2024
    Inventors: Peipei WU, Wenhui ZHAO, Keda FANG, Tan HE, Liyue JI, Mengqi TU, Weicheng ZHANG
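The abstract leaves the "spatial relationship" and "preset condition" unspecified; a common concrete choice is a distance threshold between the user-controlled part and the effect element. The sketch below assumes that reading, with all names and the trigger distance being illustrative.

```python
# Hypothetical sketch of the interaction flow: when the user-controlled
# part (e.g. a virtual hand) comes within a preset trigger distance of the
# first special effect element, the associated second effect is displayed.
import math

def meets_condition(control_pos, effect_pos, trigger_distance=0.3):
    """Assumed spatial condition: Euclidean distance below a threshold."""
    return math.dist(control_pos, effect_pos) <= trigger_distance

def update_effects(control_pos, first_effect):
    """Return the names of the effects to display this frame."""
    shown = [first_effect["name"]]
    if meets_condition(control_pos, first_effect["position"],
                       first_effect["trigger_distance"]):
        shown.append(first_effect["second_effect"])
    return shown
```

Other conditions (intersection with a bounding volume, gaze direction, gesture state) would slot into `meets_condition` the same way.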
  • Patent number: 11782285
    Abstract: The present disclosure provides a material identification method and a device based on laser speckle and modal fusion, an electronic device and a non-transitory computer readable storage medium. The method includes: performing data acquisition on an object by using a structured light camera to obtain a color modal image, a depth modal image and an infrared modal image; preprocessing the color modal image, the depth modal image and the infrared modal image; inputting the preprocessed color, depth and infrared modal images into a preset deep neural network for training, to learn material characteristics from the speckle structure and the coupling relation between the color and depth modalities and to generate a material classification model for classifying materials; and, at test time, generating a material prediction result for the object using the material classification model.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: October 10, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Shi Mao, Qionghai Dai
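Before the three modalities can be fed to a network, they must be brought to a common scale and fused. The snippet below is a minimal sketch of one plausible preprocessing step (per-modality min-max normalization and channel stacking); the patent does not disclose these details, so the choices here are assumptions.

```python
# Illustrative preprocessing (assumed details, not the patented model):
# normalize each modality to [0, 1] and stack the color, depth, and
# infrared images per pixel into one multi-channel network input.

def normalize(image):
    """Min-max scale a 2-D image (list of rows) to the [0, 1] range."""
    flat = [v for row in image for v in row]
    lo, hi = min(flat), max(flat)
    scale = (hi - lo) or 1.0  # avoid division by zero on flat images
    return [[(v - lo) / scale for v in row] for row in image]

def fuse_modalities(color, depth, infrared):
    """Stack per-pixel channels: three (H, W) images -> (H, W, 3) input."""
    c, d, i = normalize(color), normalize(depth), normalize(infrared)
    return [
        [[c[y][x], d[y][x], i[y][x]] for x in range(len(color[0]))]
        for y in range(len(color))
    ]
```

The fused (H, W, 3) array would then be the training input of the classification network described in the abstract.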
  • Publication number: 20210118123
    Abstract: The present disclosure provides a material identification method and a device based on laser speckle and modal fusion, an electronic device and a non-transitory computer readable storage medium. The method includes: performing data acquisition on an object by using a structured light camera to obtain a color modal image, a depth modal image and an infrared modal image; preprocessing the color modal image, the depth modal image and the infrared modal image; inputting the preprocessed color, depth and infrared modal images into a preset deep neural network for training, to learn material characteristics from the speckle structure and the coupling relation between the color and depth modalities and to generate a material classification model for classifying materials; and, at test time, generating a material prediction result for the object using the material classification model.
    Type: Application
    Filed: September 29, 2020
    Publication date: April 22, 2021
    Inventors: Lu Fang, Mengqi Ji, Shi Mao, Qionghai Dai
  • Publication number: 20210110599
    Abstract: Provided are a depth camera based three-dimensional reconstruction method and apparatus, a device and a storage medium. The method includes: acquiring at least two frames of images obtained by capturing a target scenario with a depth camera; determining, according to the at least two frames of images, the relative camera poses at which the depth camera captured the target scenario; determining at least one feature voxel from each frame of image through at least two levels of nested screening, where each level of screening adopts a respective voxel partitioning rule; fusing the at least one feature voxel of each frame of image according to the respective relative camera pose of each frame to obtain a grid voxel model of the target scenario; and generating an isosurface of the grid voxel model to obtain a three-dimensional reconstruction model of the target scenario.
    Type: Application
    Filed: April 28, 2019
    Publication date: April 15, 2021
    Applicant: Tsinghua University
    Inventors: Lu Fang, Mengqi Ji, Lei Han, Zhuo Su, Qionghai Dai
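The "nested screening" in the abstract uses a different voxel partitioning rule per level. One simple two-level instance is sketched below: a coarse grid first rejects sparsely supported blocks, and only the surviving blocks are subdivided into fine feature voxels. The grid sizes and support threshold are illustrative assumptions, not the patent's rules.

```python
# Sketch of two-level nested screening (illustrative partitioning rules):
# level 1 keeps only coarse blocks with enough measured points (rejecting
# stray noise); level 2 subdivides surviving blocks into fine voxels.

def voxel_of(point, size):
    """Integer grid index of a 3-D point at the given voxel size."""
    return tuple(int(c // size) for c in point)

def nested_screen(points, coarse=4.0, fine=1.0, min_points=2):
    """Return the set of fine feature voxels from points that survive
    the coarse screening level."""
    counts = {}
    for p in points:
        block = voxel_of(p, coarse)
        counts[block] = counts.get(block, 0) + 1
    kept = {b for b, n in counts.items() if n >= min_points}
    return {voxel_of(p, fine) for p in points if voxel_of(p, coarse) in kept}
```

The payoff of the nesting is that the fine grid is only ever materialized inside occupied coarse blocks, keeping per-frame voxel extraction cheap.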
  • Publication number: 20210074012
    Abstract: Provided are a three-dimensional reconstruction method, apparatus and system of a dynamic scene, a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; calculating target observation points for the array of drones according to the three-dimensional reconstruction model and the current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture images, and updating the three-dimensional reconstruction model according to the multiple continuous depth image sequences captured by the array of drones at the target observation points.
    Type: Application
    Filed: April 23, 2019
    Publication date: March 11, 2021
    Applicant: Tsinghua University
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
  • Patent number: 9704261
    Abstract: Embodiments of the present invention provide an image processing method and apparatus, where the method includes: determining a Gaussian mixture model of a first area in a first image, where the first area belongs to a background area of the first image, and the first image is the first frame of a video; determining a distance from a first pixel in a second image to the Gaussian mixture model; and when the distance from the first pixel to the Gaussian mixture model is less than or equal to a first preset threshold, determining that the first pixel belongs to a background area of the second image. In these embodiments, the area to which a pixel belongs can be determined directly from the pixel's distance to a Gaussian mixture model, so that image foreground segmentation can be completed efficiently.
    Type: Grant
    Filed: November 13, 2015
    Date of Patent: July 11, 2017
    Assignee: Huawei Technologies Co., Ltd.
    Inventors: Mengqi Ji, Shan Gao, Haiyan Yang
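The classification rule in the abstract (pixel-to-mixture distance versus a preset threshold) can be sketched concretely. The snippet below assumes diagonal-covariance components and takes the mixture distance to be the distance to the nearest component; the patent does not fix these details, so they are illustrative choices.

```python
# Simplified sketch of the background test: a pixel whose normalized
# distance to the closest Gaussian component (fitted beforehand on the
# first frame's background area) is below a threshold is background.
import math

def component_distance(pixel, mean, std):
    """Mahalanobis-style distance to one diagonal-covariance Gaussian."""
    return math.sqrt(sum(((p - m) / s) ** 2
                         for p, m, s in zip(pixel, mean, std)))

def is_background(pixel, components, threshold=2.5):
    """Distance to the mixture, taken here as the nearest component."""
    d = min(component_distance(pixel, mean, std) for mean, std in components)
    return d <= threshold
```

Because each pixel needs only this distance check against a small, fixed set of components, the per-frame segmentation cost stays low, which is the efficiency claim the abstract makes.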
  • Publication number: 20160140724
    Abstract: Embodiments of the present invention provide an image processing method and apparatus, where the method includes: determining a Gaussian mixture model of a first area in a first image, where the first area belongs to a background area of the first image, and the first image is the first frame of a video; determining a distance from a first pixel in a second image to the Gaussian mixture model; and when the distance from the first pixel to the Gaussian mixture model is less than or equal to a first preset threshold, determining that the first pixel belongs to a background area of the second image. In these embodiments, the area to which a pixel belongs can be determined directly from the pixel's distance to a Gaussian mixture model, so that image foreground segmentation can be completed efficiently.
    Type: Application
    Filed: November 13, 2015
    Publication date: May 19, 2016
    Inventors: Mengqi JI, Shan GAO, Haiyan YANG