Patents by Inventor Lu Fang

Lu Fang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11972525
    Abstract: An example operation may include one or more of generating a three-dimensional (3D) model of an object via execution of a machine learning model on one or more images of the object, capturing a plurality of snapshots of the 3D model of the object at different angles to generate a plurality of snapshot images of the object, fusing a feature into each of the plurality of snapshots to generate a plurality of fused snapshots of the 3D model of the object, and storing the plurality of fused snapshots of the 3D model of the object in memory.
    Type: Grant
    Filed: February 21, 2022
    Date of Patent: April 30, 2024
    Assignee: International Business Machines Corporation
    Inventors: Kun Yan Yin, Zhong Fang Yuan, Yi Chen Zhong, Lu Yu, Tong Liu
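
The snapshot-and-fuse flow in the abstract above can be sketched in a few lines; everything below (array "snapshots", alpha blending as the fusion step) is a hypothetical stand-in for the actual rendering and feature-fusion machinery, not the patented implementation.

```python
import numpy as np

def capture_snapshots(model_views):
    # Stand-in for rendering the 3D model at different camera angles;
    # here each "snapshot" is just a pre-made 2D array.
    return [np.asarray(v, dtype=float) for v in model_views]

def fuse_feature(snapshot, feature, alpha=0.2):
    # Blend a feature image (e.g., a watermark pattern) into one snapshot.
    return (1.0 - alpha) * snapshot + alpha * feature

# Toy data: three 4x4 "snapshots" of the model and one feature pattern.
views = [np.full((4, 4), v) for v in (10.0, 20.0, 30.0)]
feature = np.eye(4) * 100.0

fused = [fuse_feature(s, feature) for s in capture_snapshots(views)]
print(len(fused))  # one fused snapshot per captured angle
```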
  • Patent number: 11954870
    Abstract: Provided are a three-dimensional reconstruction method, apparatus and system of a dynamic scene, a server and a medium. The method includes: acquiring multiple continuous depth image sequences of the dynamic scene, where the multiple continuous depth image sequences are captured by an array of drones equipped with depth cameras; fusing the multiple continuous depth image sequences to establish a three-dimensional reconstruction model of the dynamic scene; obtaining target observation points of the array of drones through calculation according to the three-dimensional reconstruction model and current poses of the array of drones; and instructing the array of drones to move to the target observation points to capture, and updating the three-dimensional reconstruction model according to multiple continuous depth image sequences captured by the array of drones at the target observation points.
    Type: Grant
    Filed: April 23, 2019
    Date of Patent: April 9, 2024
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Yebin Liu, Lan Xu, Wei Cheng, Qionghai Dai
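
The capture-fuse-plan-move loop described above can be sketched with scalar "poses" and a scalar "model" standing in for real drone poses and a fused volume; all function names and numbers are illustrative assumptions.

```python
import numpy as np

def fuse_sequences(model, depth_sequences):
    # Stand-in for fusing the captured depth image sequences into the model.
    return model + sum(seq.mean() for seq in depth_sequences)

def plan_observation_points(model, poses):
    # Stand-in for computing target observation points from the current
    # reconstruction and the drones' current poses.
    return [p + 0.1 for p in poses]

poses = [0.0, 1.0, 2.0]   # toy 1-D "poses" for three drones
model = 0.0               # toy scalar "reconstruction"
for step in range(2):     # capture -> fuse -> plan -> move, repeated
    sequences = [np.full(5, p) for p in poses]  # fake depth captures
    model = fuse_sequences(model, sequences)
    poses = plan_observation_points(model, poses)
print(round(model, 2))
```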
  • Publication number: 20240086625
    Abstract: Provided are an information processing method and apparatus, a terminal, and a storage medium. The information processing method comprises: determining first content in response to a first operation event of a first control in a first document (S11); and adding the first content to the first document on the basis of content information and type information of the first content (S12). The type information comprises first type information and/or second type information, the second type information having an association with the first type information. In this method, the first content can be added to the first document according to its content information and type information, so as to distinguish different ways of adding the first content.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Inventors: Lu ZHANG, Wenzong MA, Xinlei GUO, Xiaolin FANG, Hao HUANG, Liang CHEN, Lanjin ZHOU, Linghui ZHOU, Yingtao LIU, Dirun HUANG, Xuebing ZENG, Zejian LIN, Yingjie YOU, Yunzhao TONG, Yuxiang CHEN, Jiawei CHEN
  • Publication number: 20240078744
    Abstract: A method includes: acquiring a semantic primitive set of a multi-view image set; acquiring a coordinate offset by inputting coordinate information and a feature vector corresponding to a first grid sampling point of the semantic primitive set into a first network model, and acquiring a second grid of the semantic primitive set based on the coordinate offset and geometric attribute information of the semantic primitive set; acquiring first feature information of a second grid sampling point by inputting coordinate information and a feature vector corresponding to the second grid sampling point, and an observation angle value into a second network model, and acquiring second feature information of the semantic primitive set based on the first feature information; and acquiring a light field reconstruction result of the multi-view image set based on an observation angle value of the semantic primitive set and third feature information extracted from the second feature information.
    Type: Application
    Filed: August 23, 2023
    Publication date: March 7, 2024
    Inventors: Lu FANG, Haiyang YING, Jinzhi ZHANG
  • Patent number: 11914584
    Abstract: Embodiments of the present disclosure provide a method and apparatus for reset command configuration, a device, and a storage medium. The method, applied to an editor of target software, includes: starting a command group storage unit and starting a snapshot session through the inputted startGroup command; directly performing the reset command configuration on the target software in a command-type manner through the inputted operation execution command; and converting change information of an object in the snapshot session into a command pair through the snapshot capture command, pressing the command pair into the command group storage unit, and using all command groups of the command pair as a reset command, so as to perform the reset command configuration on the target software in the snapshot-type manner. The target software thereby supports both command-type and snapshot-type reset command configuration.
    Type: Grant
    Filed: November 14, 2022
    Date of Patent: February 27, 2024
    Assignee: LEMON INC.
    Inventors: Hai Quang Kim, Cheng Fang, Lu Tao
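
The dual command-type/snapshot-type undo design can be illustrated with a minimal editor model; `Editor`, `CommandGroup`, and the dictionary state below are hypothetical simplifications, not the patented implementation.

```python
class CommandGroup:
    # One undo unit holding revert callables, newest last.
    def __init__(self):
        self.undo_ops = []

    def undo(self):
        for op in reversed(self.undo_ops):
            op()

class Editor:
    def __init__(self):
        self.state = {}
        self.groups = []

    def start_group(self):
        # startGroup: open a command group and begin a snapshot session.
        self.groups.append(CommandGroup())
        self._snapshot = dict(self.state)

    def execute(self, key, value):
        # Command-type change: record its inverse immediately.
        old = self.state.get(key)
        self.state[key] = value
        self._snapshot[key] = value  # already covered; exclude from the diff
        self.groups[-1].undo_ops.append(lambda k=key, v=old: self._restore(k, v))

    def capture_snapshot(self):
        # Snapshot-type change: diff current state against the session
        # snapshot and convert each change into an undo command.
        for k, v in self.state.items():
            if self._snapshot.get(k) != v:
                old = self._snapshot.get(k)
                self.groups[-1].undo_ops.append(lambda k=k, v=old: self._restore(k, v))

    def _restore(self, key, value):
        if value is None:
            self.state.pop(key, None)
        else:
            self.state[key] = value

    def undo_last_group(self):
        self.groups.pop().undo()

ed = Editor()
ed.start_group()
ed.execute("x", 1)      # command-type edit
ed.state["y"] = 2       # direct mutation, caught by the snapshot diff
ed.capture_snapshot()
ed.undo_last_group()
print(ed.state)         # {}
```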
  • Patent number: 11908067
    Abstract: A method and a device for gigapixel-level light field intelligent reconstruction of a large-scale scene are provided. The method includes: obtaining a coarse three-dimensional geometric model based on a multi-view three-dimensional reconstruction system; constructing an implicit representation of the meta-deformed manifold on the coarse three-dimensional geometric model; and optimizing the implicit representation of the meta-deformed manifold to obtain the light field reconstruction in the form of free viewpoint rendering of the large-scale scene.
    Type: Grant
    Filed: July 13, 2023
    Date of Patent: February 20, 2024
    Assignee: Tsinghua University
    Inventors: Lu Fang, Guangyu Wang, Jinzhi Zhang
  • Publication number: 20230334682
    Abstract: An intelligent understanding apparatus for real-time reconstruction of a large-scale scene light field includes the following. A data obtaining module obtains a 3D instance depth map, and obtains 3D voxels and voxel color information through simultaneous positioning and map generation. A model constructing module constructs and trains a real-time light field reconstruction network model using the ScanNet dataset. The real-time light field reconstruction network model extracts features of the 3D voxels and voxel color information, and obtains a semantic segmentation result and an instance segmentation result. A semantic segmentation module inputs the 3D voxels and voxel color information corresponding to the 3D instance depth map into the trained real-time light field reconstruction network model, and determines its output as the semantic segmentation result and instance segmentation result corresponding to the 3D instance depth map.
    Type: Application
    Filed: August 4, 2022
    Publication date: October 19, 2023
    Inventors: Lu FANG, Leyao LIU, Tian ZHENG, Ping LIU
  • Patent number: 11782285
    Abstract: The present disclosure provides a material identification method and a device based on laser speckle and modal fusion, an electronic device and a non-transitory computer readable storage medium. The method includes: performing data acquisition on an object by using a structured light camera to obtain a color modal image, a depth modal image and an infrared modal image; preprocessing the color modal image, the depth modal image and the infrared modal image; and inputting the preprocessed color modal image, depth modal image and infrared modal image into a preset deep neural network for training, to learn a material characteristic from the speckle structure and the coupling relation between the color modal and depth modal, to generate a material classification model for classifying materials, and to generate, during testing, a material prediction result of the object by the material classification model.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: October 10, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Mengqi Ji, Shi Mao, Qionghai Dai
  • Publication number: 20230316730
    Abstract: A method for processing three-dimensional (3D) point cloud data based on incremental sparse 3D convolution is provided. A computer device obtains 3D point cloud data and forms a training set by processing the 3D point cloud data. The computer device constructs and trains a sparse 3D convolutional network model by inputting the training set. The computer device constructs an incremental sparse 3D convolutional network model by performing incremental replacement of sparse convolutional layers of the trained sparse 3D convolutional network model. The computer device inputs real-time 3D point cloud data into the incremental sparse 3D convolutional network model, and determines an output result as a result of processing the real-time 3D point cloud data. Processing of the 3D point cloud data at least includes 3D semantic segmentation, target detection, 3D classification and video processing.
    Type: Application
    Filed: July 27, 2022
    Publication date: October 5, 2023
    Inventors: Lu FANG, Leyao LIU, Tian ZHENG
  • Patent number: 11763471
    Abstract: A method for large scene elastic semantic representation and self-supervised light field reconstruction is provided. The method includes acquiring a first depth map set corresponding to a target scene, in which the first depth map set includes a first depth map corresponding to at least one angle of view; inputting the first depth map set into a target elastic semantic reconstruction model to obtain a second depth map set, in which the second depth map set includes a second depth map corresponding to the at least one angle of view; and fusing the second depth map corresponding to the at least one angle of view to obtain a target scene point cloud corresponding to the target scene.
    Type: Grant
    Filed: April 14, 2023
    Date of Patent: September 19, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Jinzhi Zhang, Ruofan Tang
  • Patent number: 11715186
    Abstract: The present disclosure provides a multi-image-based image enhancement method and device, an electronic device and a non-transitory computer readable storage medium. The method includes: aligning a low-resolution target image and a reference image in an image domain; performing an alignment in a feature domain; and synthesizing features corresponding to the low-resolution target image and features corresponding to the reference image to generate a final output.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: August 1, 2023
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Yinheng Zhu, Qionghai Dai
  • Publication number: 20230177278
    Abstract: The present disclosure relates to a method and device of generating an extended pre-trained language model and a natural language processing method. The method of generating an extended pre-trained language model comprises training the extended pre-trained language model in an iterative manner. Training the extended pre-trained language model comprises: generating, based on a mask for randomly hiding a word in a sample sentence containing an unregistered word, an encoding feature of the sample sentence; generating a predicted hidden word based on the encoding feature; and adjusting the extended pre-trained language model based on the predicted hidden word.
    Type: Application
    Filed: November 17, 2022
    Publication date: June 8, 2023
    Applicant: Fujitsu Limited
    Inventors: Zhongguang ZHENG, Lu FANG, Yiling CAO, Jun SUN
  • Patent number: 11636268
    Abstract: The present disclosure relates to a method and device for generating a finite state automata for recognizing a chemical name in a text, and a recognition method. According to an embodiment of the present disclosure, the method comprises substituting representation constants of categories of character segments appearing in an organic compound name set into the organic compound name set to obtain a conversion name set; updating the conversion name set based on a conversion name segment which repeatedly appears in the conversion name set; and generating the finite state automata based on the updated conversion name set.
    Type: Grant
    Filed: May 20, 2020
    Date of Patent: April 25, 2023
    Assignee: FUJITSU LIMITED
    Inventors: Lu Fang, Zhongguang Zheng, Yingju Xia, Jun Sun
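
The substitution step above (representation constants standing in for categories of character segments) can be sketched with a regex; the category table and tags below are invented for illustration only.

```python
import re

# Hypothetical table: character segments -> category representation constants.
CATEGORIES = {"meth": "<ALKYL>", "eth": "<ALKYL>", "prop": "<ALKYL>",
              "anol": "<SUFFIX>", "ane": "<SUFFIX>"}
# Longest segments first so e.g. "meth" wins over "eth".
_PATTERN = re.compile("|".join(sorted(CATEGORIES, key=len, reverse=True)))

def to_conversion_name(name):
    # Replace each known character segment with its category constant.
    return _PATTERN.sub(lambda m: CATEGORIES[m.group(0)], name)

names = {"methane", "ethane", "propanol"}
conversion_set = {to_conversion_name(n) for n in names}
print(sorted(conversion_set))  # many names collapse to one pattern
```

Collapsing many concrete names onto few conversion patterns is what keeps the resulting finite state automata compact.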
  • Publication number: 20230086928
    Abstract: The light field reconstruction method includes: obtaining a human segmentation result via a pre-trained semantic segmentation network, and obtaining an object segmentation result according to a pre-obtained scene background; fusing multiple frames of depth maps to obtain a geometric model, obtaining a complete human model according to a pre-trained human model completion network, and registering the models by point cloud registration and fusing the registered models to obtain an object model, so as to obtain a complete human model with geometric details and the object model; tracking motion of a rigid object through point cloud registration; reconstructing the complete human model with geometric details through human skeleton tracking and non-rigid tracking of human surface nodes; and performing a fusion operation in time sequence to obtain a reconstructed human model and a reconstructed rigid object model.
    Type: Application
    Filed: September 15, 2022
    Publication date: March 23, 2023
    Inventors: Lu FANG, Dawei ZHONG
  • Publication number: 20230004714
    Abstract: A method of presenting prompt information includes: generating a mask vector for an entity, which identifies the position of the entity in a statement composed of the entity and its context; generating a first vector and a second vector based on the entity and the context; generating a third vector based on the mask vector and the second vector; concatenating the first vector and the third vector to generate a fourth vector; predicting which concept of multiple predefined concepts the entity corresponds to, based on the fourth vector by a first classifier; predicting which type of multiple predefined types the entity corresponds to, based on the fourth vector by a second classifier; jointly training the first and second classifiers; determining a concept to which the entity corresponds based on a prediction result of the trained first classifier; and generating the prompt information based on the determined concept.
    Type: Application
    Filed: June 28, 2022
    Publication date: January 5, 2023
    Applicant: FUJITSU LIMITED
    Inventors: Yiling CAO, Zhongguang ZHENG, Lu FANG, Jun SUN
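
The vector plumbing in the abstract above (mask, elementwise selection, concatenation) reduces to a few array operations; the tiny dimensions and values below are purely illustrative.

```python
import numpy as np

first = np.array([1.0, 2.0])             # entity representation
second = np.array([0.5, 0.5, 3.0, 4.0])  # per-position context representation
mask = np.array([0.0, 0.0, 1.0, 1.0])    # 1 at the entity's positions

third = mask * second                    # keep only the entity's positions
fourth = np.concatenate([first, third])  # joint feature fed to both classifiers
print(fourth.tolist())
```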
  • Patent number: 11514607
    Abstract: The disclosure provides 3D reconstruction methods and devices. The method includes: obtaining data captured by the camera and data captured by the inertial measurement unit; obtaining a pose of the camera based on the data; obtaining an adjustment value of the pose of the camera and an adjustment value of bias of the inertial measurement unit; updating the pose of the camera based on the adjustment value of the pose of the camera; determining whether the adjustment value of bias of the inertial measurement unit is less than a preset value; in response to the adjustment value of bias of the inertial measurement unit being greater than or equal to the preset value, determining that a current loop for 3D reconstruction is an error loop; removing the error loop; and constructing a 3D model of the surroundings of the camera based on the updated pose of the camera and the remaining loops.
    Type: Grant
    Filed: September 29, 2020
    Date of Patent: November 29, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Dawei Zhong, Qionghai Dai
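
The error-loop test above is essentially a threshold check on the IMU bias adjustment; the loop records and preset value below are made up for the sketch.

```python
def filter_loops(loops, bias_preset=0.05):
    # Keep loop closures whose IMU-bias adjustment stayed below the preset;
    # a large adjustment suggests the loop contradicts the inertial data.
    kept, removed = [], []
    for loop_id, bias_adjustment in loops:
        (removed if bias_adjustment >= bias_preset else kept).append(loop_id)
    return kept, removed

loops = [("A", 0.01), ("B", 0.12), ("C", 0.03)]
kept, removed = filter_loops(loops)
print(kept, removed)  # B is flagged as an error loop and dropped
```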
  • Patent number: 11514667
    Abstract: A method and an apparatus for camera-free light field video processing with an all-optical neural network are disclosed. The method includes: mapping, by a digital micro-mirror device (DMD) and an optical fiber coupler, a two-dimensional (2D) spatial optical signal of the light field video into a one-dimensional (1D) input optical signal; realizing a multiply-accumulate computing model in an all-optical recurrent neural network structure, and processing the 1D input signal to obtain a processed signal; and receiving the processed signal and outputting an electronic signal by a photodetector, or receiving the processed signal by a relay optical fiber for relay transmission of the processed signal. The method and system realize light field video processing without the use of a camera, and the whole system is all-optical, offering advantages in computing speed and energy efficiency.
    Type: Grant
    Filed: March 25, 2022
    Date of Patent: November 29, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Tiankuang Zhou, Siyuan Gu, Xiaoyun Yuan, Qionghai Dai
  • Patent number: 11461911
    Abstract: A depth information calculation method and device based on a light-field-binocular system. The method includes: obtaining a far-distance disparity map based on binocular information of calibrated input images, setting respective first confidences for pixels in the disparity map, and obtaining a first target confidence; detecting that the first confidence of a pixel is smaller than a preset value and responsively determining a new disparity value based on light field information of the input images, determining an updated depth value based on the new disparity value, and obtaining a second target confidence of the pixel; and combining the far-distance disparity map and a disparity map formed by the new disparity values on the same unit into an index map, combining the first confidence and the first target confidence into a confidence map, and optimizing the index and confidence maps to obtain a final disparity map, which is converted to a final depth map.
    Type: Grant
    Filed: September 28, 2020
    Date of Patent: October 4, 2022
    Assignee: Tsinghua University
    Inventors: Lu Fang, Dingjian Jin, Anke Zhang, Qionghai Dai
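
The disparity-to-depth conversion at the end of the method above follows the standard rectified-stereo relation depth = f * B / d; the camera parameters below are arbitrary example values.

```python
def disparity_to_depth(disparity_px, focal_px, baseline_m):
    # Rectified stereo: depth (m) = focal length (px) * baseline (m) / disparity (px).
    return focal_px * baseline_m / disparity_px

# 700 px focal length, 10 cm baseline, 35 px disparity
print(disparity_to_depth(35.0, 700.0, 0.10))
```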
  • Patent number: 11450017
    Abstract: A method for intelligent light field depth classification based on optoelectronic computing includes capturing and identifying binocular images of a scene within a depth range through a pair of binocular cameras; mapping each depth value in the depth range to a disparity value between the binocular images, to obtain a disparity range of the scene within the depth range; labeling training data based on the disparity range to obtain a pre-trained diffraction neural network model; loading a respective weight for each layer of a network obtained after training into a corresponding optical element based on the pre-trained diffraction neural network model; and after the respective weight for each layer of the network is loaded, performing forward propagation inference on new input data of the scene, and outputting a depth classification result corresponding to each pixel in the binocular images of the scene.
    Type: Grant
    Filed: May 12, 2022
    Date of Patent: September 20, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Zhihao Xu, Xiaoyun Yuan, Tiankuang Zhou, Qionghai Dai
  • Patent number: 11425292
    Abstract: A method and an apparatus for camera-free light field imaging with optoelectronic intelligent computing are provided. The method includes: obtaining an optical computing result by an optical computing module in response to receiving a light signal of an object to be imaged, in which the optical computing result includes light field imaging of the object to be imaged; computing by an electronic computing module the optical computing result to obtain an electronic computing result; and in response to determining based on the electronic computing result that cascading is required, forming a cascade structure by taking the electronic computing result at a previous level as an input of the optical computing module at a current level, and in response to determining that cascading is not required, outputting a final result.
    Type: Grant
    Filed: February 23, 2022
    Date of Patent: August 23, 2022
    Assignee: TSINGHUA UNIVERSITY
    Inventors: Lu Fang, Yong Wang, Xiaoyun Yuan, Tiankuang Zhou, Qionghai Dai