Patents by Inventor Ruigang Yang

Ruigang Yang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11205289
    Abstract: A data augmentation method and device are provided according to embodiments of the present application. The method includes: acquiring a point cloud of a frame, the point cloud comprising a plurality of original obstacles; obtaining a plurality of position voids by removing the original obstacles from the point cloud, and filling the position voids to obtain a real background of the point cloud; arranging a plurality of new obstacles, labeled by labeling data, in the real background of the point cloud; and adjusting the new obstacles based on the labeling data of the new obstacles to obtain layout data of the new obstacles. This increases the amount of real data and improves its diversity.
    Type: Grant
    Filed: July 17, 2019
    Date of Patent: December 21, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Jin Fang, Feilong Yan, Ruigang Yang, Liang Wang, Yu Ma
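    The remove-then-place pipeline in the abstract above can be sketched roughly as follows. This is an illustrative NumPy sketch, not the patented implementation: the axis-aligned obstacle boxes and the helper names `remove_obstacles` / `place_new_obstacles` are assumptions for illustration.

    ```python
    import numpy as np

    def remove_obstacles(points, boxes):
        """Drop points inside any axis-aligned obstacle box (points: N x 3,
        boxes: list of (min_xyz, max_xyz) pairs), leaving position voids."""
        keep = np.ones(len(points), dtype=bool)
        for lo, hi in boxes:
            inside = np.all((points >= lo) & (points <= hi), axis=1)
            keep &= ~inside
        return points[keep]

    def place_new_obstacles(background, obstacle_clouds, positions):
        """Insert labeled obstacle point clouds at given positions in the
        reconstructed background to form an augmented frame."""
        parts = [background]
        for cloud, pos in zip(obstacle_clouds, positions):
            parts.append(cloud + np.asarray(pos, dtype=float))  # translate cloud
        return np.vstack(parts)
    ```

    Void filling and label-driven adjustment are omitted; the sketch only shows the data flow of removing original obstacles and arranging new ones.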
  • Publication number: 20210390748
    Abstract: Presented herein are novel embodiments for converting given speech audio or text into a photo-realistic speaking video of a person with synchronized, realistic, and expressive body dynamics. In one or more embodiments, 3D skeleton movements are generated from the audio sequence using a recurrent neural network, and an output video is synthesized via a conditional generative adversarial network. To make movements realistic and expressive, the knowledge of an articulated 3D human skeleton and a learned dictionary of personal speech iconic gestures may be embedded into the generation process in both learning and testing pipelines. The former prevents the generation of unreasonable body distortion, while the latter helps the model quickly learn meaningful body movements from a few videos. To produce photo-realistic and high-resolution video with motion details, a part-attention mechanism is inserted in the conditional GAN, where each detailed part is automatically zoomed in and given its own discriminator.
    Type: Application
    Filed: June 12, 2020
    Publication date: December 16, 2021
    Applicants: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
    Inventors: Miao LIAO, Sibo ZHANG, Peng WANG, Ruigang YANG
  • Publication number: 20210374904
    Abstract: Systems and methods of video inpainting for autonomous driving are disclosed. For example, the method stitches a multiplicity of depth frames into a 3D map, where one or more objects in the depth frames have previously been removed. The method further projects the 3D map onto a first image frame to generate a corresponding depth map, where the first image frame includes a target inpainting region. For each target pixel within the target inpainting region of the first image frame, based on the corresponding depth map, the method further maps the target pixel within the target inpainting region of the first image frame to a candidate pixel in a second image frame. The method further determines a candidate color to fill the target pixel. The method further performs Poisson image editing on the first image frame to achieve color consistency at a boundary and between inside and outside of the target inpainting region of the first image frame.
    Type: Application
    Filed: May 26, 2020
    Publication date: December 2, 2021
    Inventors: Miao LIAO, Feixiang LU, Dingfu ZHOU, Sibo ZHANG, Ruigang YANG
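    The per-pixel mapping step described above, which carries a target pixel into a second frame via its depth, can be illustrated with a small pinhole-camera sketch. The function name and the use of a single 4x4 relative transform `T_1_to_2` are assumptions for illustration, not the patented method:

    ```python
    import numpy as np

    def map_pixel(u, v, depth, K, T_1_to_2):
        """Back-project pixel (u, v) with its depth into 3D using intrinsics K,
        transform the point into the second camera frame, and re-project it
        to find the candidate pixel in the second image."""
        p_cam1 = np.linalg.inv(K) @ np.array([u, v, 1.0]) * depth  # 3D in frame 1
        p_h = np.append(p_cam1, 1.0)                               # homogeneous
        p_cam2 = (T_1_to_2 @ p_h)[:3]                              # 3D in frame 2
        uv2 = K @ (p_cam2 / p_cam2[2])                             # re-project
        return uv2[0], uv2[1]
    ```

    With an identity transform the pixel maps to itself, which is a quick sanity check on the projection round trip.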
  • Patent number: 11182928
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a rotation angle of an engineering mechanical device, an electronic device, and a computer readable medium. The method may include: acquiring a depth image sequence captured by a binocular camera disposed at a rotating portion of the engineering mechanical device during rotation of the rotating portion; converting the depth image sequence into a three-dimensional point cloud sequence; determining matching points between three-dimensional point cloud frames in the three-dimensional point cloud sequence; and determining, based on the matching points, a rotation angle of the binocular camera during the rotation of the rotating portion as the rotation angle of the engineering mechanical device.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: November 23, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xinjing Cheng, Ruigang Yang, Feixiang Lu, Yajue Yang, Hao Xu
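    Estimating a rotation angle from matched points in two point-cloud frames is commonly done with the Kabsch algorithm; the sketch below is a generic illustration under that assumption, not the claimed method:

    ```python
    import numpy as np

    def rotation_angle(P, Q):
        """Estimate the rotation between two matched point sets (each N x 3,
        row i of P matched to row i of Q) via the Kabsch algorithm, and
        return the rotation angle in radians."""
        P0 = P - P.mean(axis=0)          # center both sets
        Q0 = Q - Q.mean(axis=0)
        U, _, Vt = np.linalg.svd(P0.T @ Q0)
        d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # rotation mapping P to Q
        # angle from the trace of the rotation matrix: tr(R) = 1 + 2 cos(theta)
        return np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    ```

    The recovered camera rotation angle then serves directly as the device's rotation angle, per the abstract.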
  • Publication number: 20210358151
    Abstract: A method for generating simulated point cloud data, a device, and a storage medium are provided. The method includes: acquiring at least one frame of point cloud data collected by a road collecting device in an actual environment without a dynamic obstacle as static scene point cloud data; setting at least one dynamic obstacle in a coordinate system matching the static scene point cloud data; simulating, in the coordinate system, a plurality of simulated scanning lights emitted by a virtual scanner located at an origin of the coordinate system; updating the static scene point cloud data according to intersections of the plurality of simulated scanning lights and the at least one dynamic obstacle to obtain the simulated point cloud data comprising point cloud data of the dynamic obstacle; and at least one of adding a set noise to the simulated point cloud data and deleting point cloud data corresponding to the dynamic obstacle according to a set ratio.
    Type: Application
    Filed: July 27, 2021
    Publication date: November 18, 2021
    Inventors: Feilong YAN, Jin FANG, Tongtong ZHAO, Chi ZHANG, Liang WANG, Yu MA, Ruigang YANG
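    The virtual-scanner update hinges on intersecting simulated scanning rays with the inserted dynamic obstacles. A minimal illustration for a spherical obstacle (a simplifying assumption; real obstacles would be meshes or boxes) is:

    ```python
    import numpy as np

    def ray_sphere_hit(origin, direction, center, radius):
        """Return the distance along a scan ray (from the virtual scanner at
        `origin`) to a spherical obstacle, or None if the ray misses."""
        d = direction / np.linalg.norm(direction)
        oc = origin - center
        b = 2.0 * (d @ oc)
        c = oc @ oc - radius ** 2
        disc = b * b - 4.0 * c                 # discriminant of the quadratic
        if disc < 0:
            return None                        # ray misses the sphere
        t = (-b - np.sqrt(disc)) / 2.0         # nearer intersection
        return t if t > 0 else None
    ```

    If the hit distance is shorter than the ray's static-scene return, the static point along that ray would be replaced by a point on the obstacle, yielding simulated point cloud data that includes the dynamic obstacle.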
  • Patent number: 11131084
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for selecting a target excavating point. The method includes: acquiring a height map of a material pile; discretizing the height map to obtain an excavating point set; acquiring an excavating trajectory set of an excavating point in the excavating point set; and selecting a target excavating point based on the excavating trajectory set of the excavating point in the excavating point set.
    Type: Grant
    Filed: November 7, 2019
    Date of Patent: September 28, 2021
    Inventors: Xinjing Cheng, Ruigang Yang, Yajue Yang, Feixiang Lu, Hao Xu
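    Discretizing a material-pile height map into candidate excavating points and picking the best-scoring one can be sketched as below; `scoring_fn` is a hypothetical stand-in for the patented trajectory-set evaluation, not part of the claim:

    ```python
    import numpy as np

    def select_target_point(height_map, cell_size, scoring_fn):
        """Treat each cell of the discretized height map as a candidate
        excavating point (x, y, z) and return the highest-scoring one."""
        best, best_score = None, -np.inf
        rows, cols = height_map.shape
        for i in range(rows):
            for j in range(cols):
                x, y, z = i * cell_size, j * cell_size, height_map[i, j]
                score = scoring_fn(x, y, z)   # e.g. evaluate its trajectories
                if score > best_score:
                    best, best_score = (x, y, z), score
        return best, best_score
    ```

    A trivial scoring function that favors the tallest point illustrates the selection loop; the real method scores each point's excavating trajectory set.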
  • Patent number: 11113830
    Abstract: Embodiments of the present disclosure are directed to a method for generating simulated point cloud data, a device, and a storage medium. The method includes: acquiring at least one frame of point cloud data collected by a road collecting device in an actual environment without a dynamic obstacle as static scene point cloud data; setting, according to set position association information, at least one dynamic obstacle in a coordinate system matching the static scene point cloud data; simulating in the coordinate system, according to the static scene point cloud data, a plurality of simulated scanning lights emitted by a virtual scanner located at an origin of the coordinate system; and updating the static scene point cloud data according to intersections of the plurality of simulated scanning lights and the at least one dynamic obstacle to obtain the simulated point cloud data comprising point cloud data of the dynamic obstacle.
    Type: Grant
    Filed: August 26, 2019
    Date of Patent: September 7, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Feilong Yan, Jin Fang, Tongtong Zhao, Chi Zhang, Liang Wang, Yu Ma, Ruigang Yang
  • Patent number: 11087474
    Abstract: A method, an apparatus, a device, and a medium for calibrating a posture of a moving obstacle are provided. The method includes: obtaining a 3D map, the 3D map including first static obstacles; selecting a target frame of data, the target frame of data including second static obstacles and one or more moving obstacles; determining posture information of each of the one or more moving obstacles in a coordinate system of the 3D map; registering the target frame of data with the 3D map; determining posture offset information of the target frame of data in the coordinate system according to a registration result; calibrating the posture information of each of the one or more moving obstacles according to the posture offset information; and adding each of the one or more moving obstacles after the calibrating into the 3D map.
    Type: Grant
    Filed: August 30, 2019
    Date of Patent: August 10, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Feilong Yan, Jin Fang, Tongtong Zhao, Liang Wang, Yu Ma, Ruigang Yang
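    Calibrating the moving obstacles' poses by the frame's registration offset, as described above, amounts to composing rigid transforms. The sketch below assumes the registration itself (rotation `R`, translation `t`) is already solved elsewhere and only illustrates the correction step:

    ```python
    import numpy as np

    def make_offset(R, t):
        """Build a 4x4 pose-offset matrix from a registration rotation R (3x3)
        and translation t (3,)."""
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def calibrate_poses(obstacle_poses, R, t):
        """Apply the frame-to-map registration offset to every moving
        obstacle's pose (each a 4x4 homogeneous matrix in map coordinates)."""
        offset = make_offset(R, t)
        return [offset @ pose for pose in obstacle_poses]
    ```

    The corrected obstacles can then be added into the 3D map in consistent coordinates.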
  • Patent number: 11069133
    Abstract: The present disclosure provides a method and a device for generating a 3D scene map, a related apparatus, and a storage medium. The method includes the following. At least two frames of point cloud data collected by a collection device are obtained. Data registration is performed on the at least two frames of point cloud data. A first type of point cloud data, corresponding to a movable obstacle, is deleted from each frame of point cloud data, and the frames are merged to obtain an initial scene map. A second type of point cloud data, corresponding to a regularly shaped object, is replaced with model data of a geometry model matching the regularly shaped object in the initial scene map to obtain the 3D scene map.
    Type: Grant
    Filed: August 20, 2019
    Date of Patent: July 20, 2021
    Assignee: BAIDU ONLINE NETWORK TECHNOLOGY (BEIJING) CO., LTD.
    Inventors: Feilong Yan, Tongtong Zhao, Jin Fang, Liang Wang, Yu Ma, Ruigang Yang
  • Publication number: 20210174524
    Abstract: Presented are systems and methods for improving speed and quality of real-time per-pixel depth estimation of scene layouts from a single image by using an end-to-end Convolutional Spatial Propagation Network (CSPN). An efficient linear propagation model performs propagation using a recurrent convolutional operation. The affinity among neighboring pixels may be learned through a deep convolutional neural network (CNN). The CSPN may be applied to two depth estimation tasks, given a single image: (1) to refine the depth output of existing methods, and (2) to convert sparse depth samples to a dense depth map, e.g., by embedding the depth samples within the propagation procedure. The conversion ensures that the sparse input depth values are preserved in the final depth map. It runs in real-time and is thus well suited for robotics and autonomous driving applications, where sparse but accurate depth measurements, e.g., from LiDAR, can be fused with image data.
    Type: Application
    Filed: June 29, 2018
    Publication date: June 10, 2021
    Applicants: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
    Inventors: Peng WANG, Xinjing CHENG, Ruigang YANG
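    One linear propagation step of the kind this abstract describes, with a learned per-pixel affinity over the 8-neighborhood and sparse depths re-imposed so they are preserved, might look like the rough NumPy sketch below. Edge handling via wrap-around `np.roll` is a simplification, and the affinity layout is an assumption for illustration:

    ```python
    import numpy as np

    def cspn_step(depth, affinity, sparse_depth=None):
        """One spatial propagation step: each pixel becomes an affinity-
        weighted blend of its 8 neighbors plus a residual weight on itself.
        `affinity` has shape (H, W, 8) with non-negative neighbor weights
        summing to at most 1 per pixel; known sparse depths are re-imposed."""
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
                   (0, 1), (1, -1), (1, 0), (1, 1)]
        w_self = np.clip(1.0 - affinity.sum(axis=2), 0.0, 1.0)  # residual weight
        out = w_self * depth
        for k, (di, dj) in enumerate(offsets):
            shifted = np.roll(np.roll(depth, di, axis=0), dj, axis=1)
            out += affinity[:, :, k] * shifted
        if sparse_depth is not None:
            mask = sparse_depth > 0
            out[mask] = sparse_depth[mask]   # embed the sparse measurements
        return out
    ```

    Iterating this step diffuses depth along high-affinity directions while the sparse LiDAR samples stay fixed, which is the preservation property the abstract highlights.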
  • Patent number: 11030525
    Abstract: Presented are deep learning-based systems and methods for fusing sensor data, such as camera images, motion sensors (GPS/IMU), and a 3D semantic map to achieve robustness, real-time performance, and accuracy of camera localization and scene parsing useful for applications such as robotic navigation and augmented reality. In embodiments, a unified framework accomplishes this by jointly using camera poses and scene semantics in training and testing. To evaluate the presented methods and systems, embodiments use a novel dataset that is created from real scenes and comprises dense 3D semantically labeled point clouds, ground truth camera poses obtained from high-accuracy motion sensors, and pixel-level semantic labels of video camera images. As demonstrated by experimental results, the presented systems and methods are mutually beneficial for both camera poses and scene semantics.
    Type: Grant
    Filed: February 9, 2018
    Date of Patent: June 8, 2021
    Assignees: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
    Inventors: Peng Wang, Ruigang Yang, Binbin Cao, Wei Xu
  • Patent number: 11004235
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining position and orientation of a bucket of an excavator, an electronic device and a computer readable medium. The method may include: acquiring an image of a bucket of an excavator collected by a camera provided on an excavator body, the image of the bucket including a preset marker provided on the bucket; determining position and orientation information of the camera relative to the bucket on the basis of the image of the bucket and pre-acquired three-dimensional feature information of the preset marker; and converting the position and orientation information of the camera relative to the bucket into position and orientation information of the bucket relative to the excavator body.
    Type: Grant
    Filed: November 6, 2019
    Date of Patent: May 11, 2021
    Assignee: BEIJING BAIDU NETCOM SCIENCE AND TECHNOLOGY CO., LTD.
    Inventors: Xinjing Cheng, Ruigang Yang, Feixiang Lu, Yajue Yang, Hao Xu
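    The final conversion step, from a camera-relative bucket pose to the bucket's pose in the excavator body frame, is a composition of rigid transforms. The sketch below assumes the camera's fixed mounting transform `T_body_cam` is known and that the camera-relative pose comes from, e.g., solving PnP on the preset marker; both names are illustrative assumptions:

    ```python
    import numpy as np

    def invert_pose(T):
        """Invert a 4x4 rigid transform without a general matrix inverse."""
        R, t = T[:3, :3], T[:3, 3]
        Ti = np.eye(4)
        Ti[:3, :3] = R.T
        Ti[:3, 3] = -R.T @ t
        return Ti

    def bucket_in_body(T_cam_bucket, T_body_cam):
        """Convert the bucket's pose in the camera frame into its pose in
        the excavator body frame via the camera mounting transform."""
        return T_body_cam @ T_cam_bucket
    ```

    Chaining the two transforms expresses the bucket's position and orientation directly relative to the excavator body, as the abstract describes.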
  • Publication number: 20210118286
    Abstract: Embodiments of the present disclosure relate to a method and apparatus for updating information. The method may include: acquiring road network structure information of a target road network and vehicle information of a target number of vehicles in the target road network, the vehicle information including initial state information, perception information and positioning information, and the vehicle information being constrained by the road network structure information; selecting a target vehicle from the target number of vehicles; determining, based on a vehicle dynamics model, a reference speed at which the target vehicle passes a preset time step; and updating vehicle information of a vehicle in the target road network based on the reference speed of the target vehicle.
    Type: Application
    Filed: June 9, 2020
    Publication date: April 22, 2021
    Inventors: He JIANG, Jinxin ZHAO, Ruigang YANG
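    The per-time-step update toward a dynamics-model reference speed can be illustrated with a simple acceleration-limited kinematic step. This is a 1D arc-length simplification for illustration; the patented vehicle dynamics model and road-network constraints are richer:

    ```python
    def update_vehicle(position, speed, ref_speed, dt, max_accel=2.0):
        """Advance one simulation time step: move the current speed toward
        the dynamics-model reference speed under an acceleration limit,
        then integrate position along the lane."""
        dv = max(-max_accel * dt, min(max_accel * dt, ref_speed - speed))
        speed += dv
        position += speed * dt
        return position, speed
    ```

    Repeating this update for every vehicle at each preset time step keeps the simulated traffic consistent with the road network's constraints.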
  • Patent number: 10984588
    Abstract: An obstacle distribution simulation method, device, and terminal based on multiple models are provided. The method can include: acquiring a point cloud, the point cloud including a plurality of obstacles labeled with real labeling data; extracting the real labeling data of the obstacles, and training a plurality of neural network models based on the real labeling data; extracting unlabeled data in the point cloud, inputting the unlabeled data into the neural network models, and outputting a plurality of prediction results, the prediction results including a plurality of simulated obstacles with attribute data; selecting at least one simulated obstacle based on the plurality of prediction results; and inputting the attribute data of the selected simulated obstacle into the neural network models to obtain position coordinates of the simulated obstacle and, further, a position distribution of the simulated obstacle.
    Type: Grant
    Filed: July 15, 2019
    Date of Patent: April 20, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd
    Inventors: Jin Fang, Feilong Yan, Feihu Zhang, Ruigang Yang, Liang Wang, Yu Ma
  • Patent number: 10970938
    Abstract: Embodiments of the present disclosure provide a method and apparatus for generating information. A method may include: selecting a three-dimensional object model from a preset three-dimensional object model set based on a to-be-matched object image in a target two-dimensional image; determining, based on a normal vector of a ground plane of the target two-dimensional image, a plane equation of ground corresponding to the normal vector of the ground plane in a three-dimensional space; adjusting a rotation parameter and a translation parameter of the three-dimensional object model in the plane characterized by the plane equation; and generating, in response to determining that a contour of the adjusted three-dimensional object model matches a contour of the to-be-matched object image in the target two-dimensional image, three-dimensional information of an object corresponding to the to-be-matched object image based on the adjusted three-dimensional object model.
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: April 6, 2021
    Assignee: Baidu Online Network Technology (Beijing) Co., Ltd.
    Inventors: Xibin Song, He Jiang, Ruigang Yang
  • Publication number: 20200364554
    Abstract: Presented are deep learning-based systems and methods for fusing sensor data, such as camera images, motion sensors (GPS/IMU), and a 3D semantic map to achieve robustness, real-time performance, and accuracy of camera localization and scene parsing useful for applications such as robotic navigation and augmented reality. In embodiments, a unified framework accomplishes this by jointly using camera poses and scene semantics in training and testing. To evaluate the presented methods and systems, embodiments use a novel dataset that is created from real scenes and comprises dense 3D semantically labeled point clouds, ground truth camera poses obtained from high-accuracy motion sensors, and pixel-level semantic labels of video camera images. As demonstrated by experimental results, the presented systems and methods are mutually beneficial for both camera poses and scene semantics.
    Type: Application
    Filed: February 9, 2018
    Publication date: November 19, 2020
    Applicants: Baidu USA LLC, Baidu.com Times Technology (Beijing) Co., Ltd.
    Inventors: Peng WANG, Ruigang YANG, Binbin CAO, Wei XU
  • Patent number: 10839543
    Abstract: Presented are systems and methods for improving speed and quality of real-time per-pixel depth estimation of scene layouts from a single image by using a 3D end-to-end Convolutional Spatial Propagation Network (CSPN). An efficient linear propagation model performs propagation using a recurrent convolutional operation. The affinity among neighboring pixels may be learned through a deep convolutional neural network (CNN). The CSPN may be applied to two depth estimation tasks, given a single image: (1) to refine the depth output of existing methods, and (2) to convert sparse depth samples to a dense depth map, e.g., by embedding the depth samples within the propagation procedure. For stereo depth estimation, the 3D CSPN is applied to stereo matching by adding a diffusion dimension over discrete disparity space and feature scale space. This helps the recovered stereo depth retain more detail and avoid mismatches caused by noisy appearance from sunlight, shadow, and similar effects.
    Type: Grant
    Filed: February 26, 2019
    Date of Patent: November 17, 2020
    Assignee: Baidu USA LLC
    Inventors: Xinjing Cheng, Peng Wang, Ruigang Yang
  • Publication number: 20200334890
    Abstract: A method for texturing an un-textured computer-generated or scanned 3D model that is a polygon mesh of a plurality of triangles is disclosed. A set of input images and the un-textured 3D model are provided to an image space optimization module, which determines texture coordinates for each triangle of the polygon mesh. Each texture coordinate associates the triangle with an area of a source image from the set of input images. Determining the texture coordinates includes locally optimizing the texture coordinates along texture seams, followed by globally optimizing the texture coordinates in each source image over the entire source image. A textured 3D model is generated from the determined texture coordinates after the local-then-global optimization.
    Type: Application
    Filed: April 22, 2019
    Publication date: October 22, 2020
    Inventors: Wei LI, Ruigang YANG
  • Publication number: 20200279397
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining position and orientation of a bucket of an excavator, an electronic device and a computer readable medium. The method may include: acquiring an image of a bucket of an excavator collected by a camera provided on an excavator body, the image of the bucket including a preset marker provided on the bucket; determining position and orientation information of the camera relative to the bucket on the basis of the image of the bucket and pre-acquired three-dimensional feature information of the preset marker; and converting the position and orientation information of the camera relative to the bucket into position and orientation information of the bucket relative to the excavator body.
    Type: Application
    Filed: November 6, 2019
    Publication date: September 3, 2020
    Inventors: Xinjing CHENG, Ruigang YANG, Feixiang LU, Yajue YANG, Hao XU
  • Publication number: 20200279402
    Abstract: Embodiments of the present disclosure provide a method and apparatus for determining a rotation angle of an engineering mechanical device, an electronic device, and a computer readable medium. The method may include: acquiring a depth image sequence captured by a binocular camera disposed at a rotating portion of the engineering mechanical device during rotation of the rotating portion; converting the depth image sequence into a three-dimensional point cloud sequence; determining matching points between three-dimensional point cloud frames in the three-dimensional point cloud sequence; and determining, based on the matching points, a rotation angle of the binocular camera during the rotation of the rotating portion as the rotation angle of the engineering mechanical device.
    Type: Application
    Filed: November 6, 2019
    Publication date: September 3, 2020
    Inventors: Xinjing CHENG, Ruigang YANG, Feixiang LU, Yajue YANG, Hao XU