Patents by Inventor Dingfu Zhou

Dingfu Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11841921
    Abstract: The present application provides a model training method and apparatus and a prediction method and apparatus, relating to the fields of artificial intelligence, deep learning, image processing, and autonomous driving. The model training method includes: inputting a first sample image of the sample images into a depth information prediction model and acquiring depth information of the first sample image; acquiring inter-image posture information based on a second sample image of the sample images and the first sample image; acquiring a projection image corresponding to the first sample image based at least on the inter-image posture information and the depth information; and acquiring a loss function by determining a function for calculating a similarity between the second sample image and the projection image, and training the depth information prediction model using the loss function.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: December 12, 2023
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xibin Song, Dingfu Zhou, Jin Fang, Liangjun Zhang
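The self-supervised scheme this abstract describes (predict depth for one frame, obtain the inter-frame pose, project using the depth and pose, and train with a similarity loss against the neighbouring frame) can be sketched roughly as below. This is an illustrative sketch only, not the patented implementation: the pinhole intrinsics K, the pose matrix T_ab, and the plain L1 comparison are assumptions, and it uses the common inverse-warping formulation in which the neighbouring frame is warped into the reference view before comparison.

```python
import torch
import torch.nn.functional as F


def reprojection_loss(img_a, img_b, depth_a, T_ab, K):
    """L1 photometric loss between img_a and img_b warped into img_a's view.

    img_a, img_b: (B, 3, H, W) frames; depth_a: (B, 1, H, W) depth predicted
    for img_a; T_ab: (B, 4, 4) relative pose from frame a to frame b;
    K: (3, 3) pinhole intrinsics. All names are illustrative.
    """
    b, _, h, w = depth_a.shape
    # Pixel grid in homogeneous coordinates.
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=torch.float32),
        torch.arange(w, dtype=torch.float32),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)]).view(3, -1)       # (3, H*W)
    # Back-project each pixel to a 3D point using the predicted depth.
    cam_a = depth_a.view(b, 1, -1) * (torch.linalg.inv(K) @ pix)       # (B, 3, H*W)
    cam_a = torch.cat([cam_a, torch.ones(b, 1, h * w)], dim=1)         # homogeneous
    # Transform into frame b with the inter-image pose, then project.
    cam_b = (T_ab @ cam_a)[:, :3]
    uv = K @ cam_b
    uv = uv[:, :2] / uv[:, 2:].clamp(min=1e-6)
    # Normalise to [-1, 1] and sample img_b at the projected locations.
    grid = torch.stack(
        [uv[:, 0] / (w - 1) * 2 - 1, uv[:, 1] / (h - 1) * 2 - 1], dim=-1
    ).view(b, h, w, 2)
    warped = F.grid_sample(img_b, grid, align_corners=True)
    # Similarity-based loss; the depth network is trained by minimising it.
    return (warped - img_a).abs().mean()
```

In practice such losses usually add an SSIM term and masking for occluded pixels; the plain L1 comparison above is only the simplest choice.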
  • Patent number: 11796670
    Abstract: A radar point cloud data processing method, device, apparatus, and storage medium are provided, relating to the technical fields of radar point clouds, autonomous driving, and deep learning. An implementation includes: determining a target location area where a target object is located by using a target detection box in the radar point cloud data; removing each point of the target object in the target location area from the radar point cloud data; and adding an object model to the target location area. By applying embodiments of the present disclosure, richer radar point cloud data may be obtained by removing the target object from the radar point cloud data and adding the needed three-dimensional model to the target location area in the radar point cloud data.
    Type: Grant
    Filed: May 20, 2021
    Date of Patent: October 24, 2023
    Assignee: Baidu USA LLC
    Inventors: Jin Fang, Dingfu Zhou, Xibin Song, Liangjun Zhang
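The point cloud editing step in this entry's abstract (clear the points inside a target detection box, then place an object model in the cleared area) can be illustrated as follows. The sketch assumes an axis-aligned box given as centre and size and invents the helper names; real detection boxes usually also carry a yaw angle.

```python
import numpy as np


def replace_object(points, box_center, box_size, model_points):
    """points: (N, 3) radar/LiDAR points; box_center, box_size: (3,) arrays
    describing an axis-aligned detection box; model_points: (M, 3) points of
    the object model to insert. Returns the edited point cloud."""
    box_center = np.asarray(box_center, dtype=float)
    half = np.asarray(box_size, dtype=float) / 2.0
    inside = np.all(np.abs(points - box_center) <= half, axis=1)
    background = points[~inside]            # scene with the target object removed
    inserted = model_points + box_center    # drop the replacement model into the area
    return np.vstack([background, inserted])


# Example: clear a 4 m x 2 m x 1.5 m box at the origin and insert a dummy model.
cloud = np.random.uniform(-10.0, 10.0, size=(5000, 3))
dummy_model = np.random.uniform(-1.0, 1.0, size=(200, 3)) * [2.0, 1.0, 0.75]
edited = replace_object(cloud, np.zeros(3), [4.0, 2.0, 1.5], dummy_model)
```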
  • Publication number: 20230184564
    Abstract: Provided are a high-precision map construction method, an electronic device, and a storage medium, relating to the field of high-precision map technology and, in particular, to autonomous driving technology. The implementation solution includes: calculating a pose of a camera at each position point according to a pre-acquired video; calculating an absolute depth of each keypoint in the pre-acquired video according to the pose of the camera at each position point; constructing, according to the absolute depth of each keypoint in the video, a corresponding three-dimensional point cloud of each pixel point in the pre-acquired video; and constructing, according to the corresponding three-dimensional point cloud of each pixel point in the pre-acquired video, a high-precision map corresponding to the pre-acquired video.
    Type: Application
    Filed: December 8, 2022
    Publication date: June 15, 2023
    Inventors: Shougang SHEN, Kai ZHONG, Dingfu ZHOU, Junjie CAI, Jianzhong YANG, Zhen LU, Tongbin ZHANG
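A rough sketch of the final construction step in the abstract above: once a pose and a per-pixel absolute depth are available for each frame, every pixel is lifted into world coordinates and the per-frame clouds are merged into the map. A pinhole camera model and the frames iterable are assumptions, not details from the application.

```python
import numpy as np


def pixels_to_world(depth, K, T_world_cam):
    """depth: (H, W) absolute depth map; K: (3, 3) intrinsics;
    T_world_cam: (4, 4) camera-to-world pose. Returns (H*W, 3) world points."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)]).reshape(3, -1).astype(np.float64)
    cam = (np.linalg.inv(K) @ pix) * depth.reshape(1, -1)   # camera-frame 3D points
    cam = np.vstack([cam, np.ones((1, cam.shape[1]))])      # homogeneous coordinates
    return (T_world_cam @ cam)[:3].T                        # world-frame points


def build_map(frames, K):
    """frames: iterable of (depth_map, camera_pose) pairs from the earlier
    pose/depth stages (assumed). Accumulates all frames into one map cloud."""
    return np.vstack([pixels_to_world(d, K, T) for d, T in frames])
```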
  • Patent number: 11282164
    Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
    Type: Grant
    Filed: May 26, 2020
    Date of Patent: March 22, 2022
    Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Ruigang Yang
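The fill-and-blend stages of the abstract above can be sketched as follows, under several assumptions: a pinhole camera, a known relative pose and depth map for the first frame, 8-bit 3-channel images, and OpenCV's seamlessClone standing in for the Poisson image editing step. The helper names are hypothetical, not the patented pipeline.

```python
import cv2
import numpy as np


def fill_from_other_frame(img_a, img_b, depth_a, mask_a, K, T_ab):
    """Fill the masked pixels of img_a with colours fetched from img_b.

    img_a, img_b: (H, W, 3) uint8 frames; depth_a: (H, W) depth for img_a;
    mask_a: (H, W) uint8 mask (non-zero inside the target inpainting region);
    K: (3, 3) intrinsics; T_ab: (4, 4) relative pose from frame a to frame b.
    """
    h, w = depth_a.shape
    ys, xs = np.nonzero(mask_a)                         # target inpainting pixels
    pix = np.stack([xs, ys, np.ones_like(xs)]).astype(np.float64)
    cam = (np.linalg.inv(K) @ pix) * depth_a[ys, xs]    # back-project with depth
    cam = np.vstack([cam, np.ones((1, cam.shape[1]))])
    uvw = K @ (T_ab @ cam)[:3]                          # reproject into frame b
    u = np.clip(np.round(uvw[0] / uvw[2]).astype(int), 0, w - 1)
    v = np.clip(np.round(uvw[1] / uvw[2]).astype(int), 0, h - 1)
    filled = img_a.copy()
    filled[ys, xs] = img_b[v, u]                        # candidate colours
    return filled


def blend(filled, img_a, mask_a):
    """Poisson blending for colour consistency at the region boundary."""
    mask_u8 = (mask_a > 0).astype(np.uint8) * 255
    ys, xs = np.nonzero(mask_u8)
    center = (int((xs.min() + xs.max()) // 2), int((ys.min() + ys.max()) // 2))
    return cv2.seamlessClone(filled, img_a, mask_u8, center, cv2.NORMAL_CLONE)
```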
  • Publication number: 20210406599
    Abstract: The present application provides a model training method and apparatus and a prediction method and apparatus, relating to the fields of artificial intelligence, deep learning, image processing, and autonomous driving. The model training method includes: inputting a first sample image of the sample images into a depth information prediction model and acquiring depth information of the first sample image; acquiring inter-image posture information based on a second sample image of the sample images and the first sample image; acquiring a projection image corresponding to the first sample image based at least on the inter-image posture information and the depth information; and acquiring a loss function by determining a function for calculating a similarity between the second sample image and the projection image, and training the depth information prediction model using the loss function.
    Type: Application
    Filed: December 4, 2020
    Publication date: December 30, 2021
    Inventors: Xibin SONG, Dingfu ZHOU, Jin FANG, Liangjun ZHANG
  • Publication number: 20210374904
    Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
    Type: Application
    Filed: May 26, 2020
    Publication date: December 2, 2021
    Inventors: Miao LIAO, Feixiang LU, Dingfu ZHOU, Sibo ZHANG, Ruigang YANG
  • Publication number: 20210270958
    Abstract: A radar point cloud data processing method, device, apparatus, and storage medium are provided, relating to the technical fields of radar point clouds, autonomous driving, and deep learning. An implementation includes: determining a target location area where a target object is located by using a target detection box in the radar point cloud data; removing each point of the target object in the target location area from the radar point cloud data; and adding an object model to the target location area. By applying embodiments of the present disclosure, richer radar point cloud data may be obtained by removing the target object from the radar point cloud data and adding the needed three-dimensional model to the target location area in the radar point cloud data.
    Type: Application
    Filed: May 20, 2021
    Publication date: September 2, 2021
    Inventors: Jin Fang, Dingfu Zhou, Xibin Song, Liangjun Zhang
  • Patent number: 10685215
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing a face. A specific embodiment of the method includes: acquiring at least two facial images of a to-be-recognized face under different illuminations using a near-infrared photographing device; generating at least one difference image based on a brightness difference between each two of the at least two facial images; determining a facial contour image of the to-be-recognized face based on the at least one difference image; inputting the at least two facial images, the at least one difference image, and the facial contour image into a pre-trained real face prediction value calculation model to obtain a real face prediction value of the to-be-recognized face; and outputting prompt information indicating successful recognition of a real face, in response to determining that the obtained real face prediction value is greater than a preset threshold.
    Type: Grant
    Filed: September 14, 2018
    Date of Patent: June 16, 2020
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Dingfu Zhou, Ruigang Yang, Yanfu Zhang, Zhibin Hong
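The input preparation and decision rule described in the abstract can be sketched as below. The Sobel edge map standing in for the facial contour image and the liveness model callable are assumptions; the patent does not specify either.

```python
from itertools import combinations

import cv2
import numpy as np


def build_inputs(nir_images):
    """nir_images: list of (H, W) uint8 near-infrared captures taken under
    different illuminations. Returns a (C, H, W) float32 stack of the raw
    images, their pairwise difference images, and a contour image."""
    diffs = [cv2.absdiff(a, b) for a, b in combinations(nir_images, 2)]
    # Contour image derived from the difference images (a Sobel edge map is
    # only an illustrative stand-in for whatever the patent actually uses).
    contour = cv2.Sobel(np.mean(diffs, axis=0).astype(np.float32), cv2.CV_32F, 1, 1)
    return np.stack(list(nir_images) + diffs + [contour]).astype(np.float32)


def is_real_face(model, nir_images, threshold=0.5):
    """model: any pre-trained callable returning a real-face score (assumed)."""
    score = float(model(build_inputs(nir_images)[None]))  # add a batch dimension
    return score > threshold
```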
  • Publication number: 20190163959
    Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing a face. A specific embodiment of the method includes: acquiring at least two facial images of a to-be-recognized face under different illuminations using a near-infrared photographing device; generating at least one difference image based on a brightness difference between each two of the at least two facial images; determining a facial contour image of the to-be-recognized face based on the at least one difference image; inputting the at least two facial images, the at least one difference image, and the facial contour image into a pre-trained real face prediction value calculation model to obtain a real face prediction value of the to-be-recognized face; and outputting prompt information indicating successful recognition of a real face, in response to determining that the obtained real face prediction value is greater than a preset threshold.
    Type: Application
    Filed: September 14, 2018
    Publication date: May 30, 2019
    Inventors: Dingfu Zhou, Ruigang Yang, Yanfu Zhang, Zhibin Hong