Patents by Inventor Dingfu Zhou
Dingfu Zhou has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11841921
Abstract: The present application provides a model training method and apparatus, and a prediction method and apparatus, relating to the fields of artificial intelligence, deep learning, image processing, and autonomous driving. The model training method includes: inputting a first sample image of sample images into a depth information prediction model, and acquiring depth information of the first sample image; acquiring inter-image posture information based on a second sample image of the sample images and the first sample image; acquiring a projection image corresponding to the first sample image, at least according to the inter-image posture information and the depth information; and acquiring a loss function by determining a function for calculating a similarity between the second sample image and the projection image, and training the depth information prediction model using the loss function.
Type: Grant
Filed: December 4, 2020
Date of Patent: December 12, 2023
Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
Inventors: Xibin Song, Dingfu Zhou, Jin Fang, Liangjun Zhang
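The loss described in this abstract is a self-supervised photometric consistency term: the first image is warped into a projection image and compared against the second image. A minimal sketch, assuming grayscale images as nested lists and reducing the full depth-and-pose reprojection to an integer pixel shift (the patent publishes no code; all names here are illustrative):

```python
def project(image, shift):
    """Warp an image by an integer pixel shift: a stand-in for the full
    depth-and-pose reprojection described in the abstract."""
    h, w = len(image), len(image[0])
    dy, dx = shift
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            sy, sx = y - dy, x - dx  # source pixel under the warp
            if 0 <= sy < h and 0 <= sx < w:
                out[y][x] = image[sy][sx]
    return out

def photometric_loss(img_a, img_b):
    """Mean absolute intensity difference: the similarity term of the loss."""
    total, n = 0.0, 0
    for row_a, row_b in zip(img_a, img_b):
        for a, b in zip(row_a, row_b):
            total += abs(a - b)
            n += 1
    return total / n
```

In the real method the warp would come from per-pixel predicted depth, the inter-image pose, and camera intrinsics, and the loss would be backpropagated into the depth prediction model.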
-
Patent number: 11796670
Abstract: A radar point cloud data processing method and device, an apparatus, and a storage medium are provided, relating to the technical fields of radar point clouds, automatic driving, and deep learning. An implementation includes: determining a target location area where a target object is located by utilizing a target detection box in the radar point cloud data; removing each point of the target object in the target location area from the radar point cloud data; and adding an object model to the target location area. By applying embodiments of the present disclosure, richer radar point cloud data may be obtained by removing the target object from the radar point cloud data and adding the needed three-dimensional model to the target location area in the radar point cloud data.
Type: Grant
Filed: May 20, 2021
Date of Patent: October 24, 2023
Assignee: Baidu USA LLC
Inventors: Jin Fang, Dingfu Zhou, Xibin Song, Liangjun Zhang
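The implementation steps in this abstract amount to a point cloud augmentation: delete the points inside a detection box, then place a model's points there. A minimal sketch, assuming points as (x, y, z) tuples, an axis-aligned box, and centre-of-box placement of the model (all simplifications not specified by the patent):

```python
def in_box(point, box):
    """Axis-aligned containment test for a 3D point."""
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box
    x, y, z = point
    return xmin <= x <= xmax and ymin <= y <= ymax and zmin <= z <= zmax

def augment(cloud, box, model_points):
    """Remove points inside the target detection box, then add the object
    model's points translated to the box centre (illustrative placement)."""
    kept = [p for p in cloud if not in_box(p, box)]
    (xmin, ymin, zmin), (xmax, ymax, zmax) = box
    cx, cy, cz = (xmin + xmax) / 2, (ymin + ymax) / 2, (zmin + zmax) / 2
    placed = [(x + cx, y + cy, z + cz) for x, y, z in model_points]
    return kept + placed
```

A production version would also handle box orientation, ground alignment, and occlusion consistency of the inserted model.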
-
Publication number: 20230184564
Abstract: Provided are a high-precision map construction method, an electronic device, and a storage medium, relating to the field of high-precision map technology and, in particular, to autonomous driving technology. The implementation solution includes: calculating a pose of a camera at each position point according to a pre-acquired video; calculating an absolute depth of each keypoint in the pre-acquired video according to the pose of the camera at each position point; constructing, according to the absolute depth of each keypoint in the video, a corresponding three-dimensional point cloud of each pixel point in the pre-acquired video; and constructing, according to the corresponding three-dimensional point cloud of each pixel point in the pre-acquired video, a high-precision map corresponding to the pre-acquired video.
Type: Application
Filed: December 8, 2022
Publication date: June 15, 2023
Inventors: Shougang Shen, Kai Zhong, Dingfu Zhou, Junjie Cai, Jianzhong Yang, Zhen Lu, Tongbin Zhang
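The core geometric step in this abstract — turning a pixel with known absolute depth into a 3D point for the map — is standard pinhole back-projection followed by a pose transform. A minimal sketch under the usual pinhole model assumptions (focal lengths fx, fy and principal point cx, cy; none of these names come from the patent):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection of pixel (u, v) with absolute depth into a
    3D point in the camera frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

def to_world(point_cam, pose):
    """Move a camera-frame point into the map frame using the camera pose,
    given as (rotation R as three row tuples, translation t)."""
    R, t = pose
    x, y, z = point_cam
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
                 for i in range(3))
```

Repeating this for every pixel of every frame yields the per-pixel point cloud the abstract describes; the map itself is then built from the aggregated clouds.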
-
Patent number: 11282164
Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
Type: Grant
Filed: May 26, 2020
Date of Patent: March 22, 2022
Assignees: BAIDU USA LLC, BAIDU.COM TIMES TECHNOLOGY (BEIJING) CO., LTD.
Inventors: Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Ruigang Yang
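The per-pixel step in this abstract — map each target pixel to a candidate pixel in another frame and copy its colour — can be sketched minimally. Here the depth-guided mapping is assumed to be precomputed as a dictionary, and the final Poisson blending step is omitted (both simplifications of mine, not the patent's):

```python
def fill_from_candidate(frame_a, frame_b, mask, mapping):
    """For each masked (target) pixel in frame_a, copy the colour of the
    candidate pixel in frame_b given by the depth-guided mapping."""
    out = [row[:] for row in frame_a]  # leave pixels outside the mask intact
    for (y, x) in mask:
        cy, cx = mapping[(y, x)]
        out[y][x] = frame_b[cy][cx]
    return out
```

In the full method the mapping comes from projecting the stitched 3D map into both frames, and Poisson image editing then smooths colour seams at the inpainting boundary.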
-
Publication number: 20210406599
Abstract: The present application provides a model training method and apparatus, and a prediction method and apparatus, relating to the fields of artificial intelligence, deep learning, image processing, and autonomous driving. The model training method includes: inputting a first sample image of sample images into a depth information prediction model, and acquiring depth information of the first sample image; acquiring inter-image posture information based on a second sample image of the sample images and the first sample image; acquiring a projection image corresponding to the first sample image, at least according to the inter-image posture information and the depth information; and acquiring a loss function by determining a function for calculating a similarity between the second sample image and the projection image, and training the depth information prediction model using the loss function.
Type: Application
Filed: December 4, 2020
Publication date: December 30, 2021
Inventors: Xibin Song, Dingfu Zhou, Jin Fang, Liangjun Zhang
-
Publication number: 20210374904
Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
Type: Application
Filed: May 26, 2020
Publication date: December 2, 2021
Inventors: Miao Liao, Feixiang Lu, Dingfu Zhou, Sibo Zhang, Ruigang Yang
-
Publication number: 20210270958
Abstract: A radar point cloud data processing method and device, an apparatus, and a storage medium are provided, relating to the technical fields of radar point clouds, automatic driving, and deep learning. An implementation includes: determining a target location area where a target object is located by utilizing a target detection box in the radar point cloud data; removing each point of the target object in the target location area from the radar point cloud data; and adding an object model to the target location area. By applying embodiments of the present disclosure, richer radar point cloud data may be obtained by removing the target object from the radar point cloud data and adding the needed three-dimensional model to the target location area in the radar point cloud data.
Type: Application
Filed: May 20, 2021
Publication date: September 2, 2021
Inventors: Jin Fang, Dingfu Zhou, Xibin Song, Liangjun Zhang
-
Patent number: 10685215
Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing a face. A specific embodiment of the method includes: acquiring at least two facial images of a to-be-recognized face under different illuminations using a near-infrared photographing device; generating at least one difference image based on a brightness difference between each two of the at least two facial images; determining a facial contour image of the to-be-recognized face based on the at least one difference image; inputting the at least two facial images, the at least one difference image, and the facial contour image into a pre-trained real face prediction value calculation model to obtain a real face prediction value of the to-be-recognized face; and outputting prompt information for indicating successful recognition of a real face, in response to determining the obtained real face prediction value being greater than a preset threshold.
Type: Grant
Filed: September 14, 2018
Date of Patent: June 16, 2020
Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
Inventors: Dingfu Zhou, Ruigang Yang, Yanfu Zhang, Zhibin Hong
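Two of the steps in this abstract are simple enough to sketch directly: forming pairwise brightness-difference images from the near-infrared captures, and thresholding the model's real-face prediction value. A minimal sketch, assuming grayscale images as nested lists and leaving the prediction model itself as an opaque score (the threshold value below is an arbitrary placeholder, not from the patent):

```python
from itertools import combinations

def difference_images(images):
    """Pairwise absolute brightness differences between the NIR captures."""
    diffs = []
    for a, b in combinations(images, 2):
        diffs.append([[abs(pa - pb) for pa, pb in zip(ra, rb)]
                      for ra, rb in zip(a, b)])
    return diffs

def is_real_face(prediction_value, threshold=0.5):
    """Accept as a real face when the model's score exceeds the preset
    threshold, as in the final step of the method."""
    return prediction_value > threshold
```

The anti-spoofing intuition is that a flat photo and a real 3D face shade differently under changing NIR illumination, so the difference images carry the contour cue the model consumes.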
-
Publication number: 20190163959
Abstract: Embodiments of the present disclosure disclose a method and apparatus for recognizing a face. A specific embodiment of the method includes: acquiring at least two facial images of a to-be-recognized face under different illuminations using a near-infrared photographing device; generating at least one difference image based on a brightness difference between each two of the at least two facial images; determining a facial contour image of the to-be-recognized face based on the at least one difference image; inputting the at least two facial images, the at least one difference image, and the facial contour image into a pre-trained real face prediction value calculation model to obtain a real face prediction value of the to-be-recognized face; and outputting prompt information for indicating successful recognition of a real face, in response to determining the obtained real face prediction value being greater than a preset threshold.
Type: Application
Filed: September 14, 2018
Publication date: May 30, 2019
Inventors: Dingfu Zhou, Ruigang Yang, Yanfu Zhang, Zhibin Hong