Patents by Inventor Yuliang Guo

Yuliang Guo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250244137
    Abstract: Methods and systems for generating an HD map and lane trajectory for an autonomous vehicle based on an SD map. Images from one or more image sensors mounted on a vehicle are received. Via a vehicle processor, perception data is generated based on the received images, wherein the perception data provides a representation of an environment proximate to the vehicle. A standard definition (SD) map corresponding with the environment proximate to the vehicle is received. The vehicle processor generates a high definition (HD) map corresponding with the environment proximate to the vehicle based on the SD map and the perception data. The vehicle processor also generates a lane-level trajectory associated with a planned route for the vehicle utilizing the HD map.
    Type: Application
    Filed: January 31, 2024
    Publication date: July 31, 2025
    Inventors: David Fernando PAZ RUIZ, Yuliang GUO, Hengyuan ZHANG, Arun DAS, Xinyu HUANG, Liu REN
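The lane-level trajectory step can be pictured as resampling an HD-map lane centerline into evenly spaced waypoints. The sketch below is a hypothetical illustration only (function name and polyline representation are assumptions); the patented method fuses SD-map topology with camera perception, which is not shown:

```python
import math

def resample_centerline(waypoints, spacing):
    """Resample a polyline of (x, y) lane-centerline points at a fixed
    spacing, yielding evenly spaced trajectory waypoints."""
    out = [waypoints[0]]
    carry = 0.0  # distance already traveled since the last emitted waypoint
    for (x0, y0), (x1, y1) in zip(waypoints, waypoints[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - carry
        while d <= seg:
            t = d / seg
            out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        carry = seg - (d - spacing)
    return out
```

A 10 m straight centerline resampled at 2.5 m spacing yields five waypoints, including both endpoints.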
  • Publication number: 20250218113
    Abstract: A computer-implemented method and system relate to generating a three-dimensional (3D) layout model. Segmentation masks are generated using a digital image. The segmentation masks identify architectural elements in the digital image. Depth data is generated for each segmentation mask. A set of planes is generated using the depth data and the segmentation masks. Boundary estimate data is generated for the set of planes using boundary data of the segmentation masks. A set of plane segments is generated by bounding the set of planes using the boundary estimate data. Boundary tolerance data is generated for each boundary estimate data. A 3D layout model is constructed by generating at least a boundary segment that connects a first bounded plane and a second bounded plane at an intersection, which is located using the boundary estimate data and the boundary tolerance data.
    Type: Application
    Filed: December 27, 2023
    Publication date: July 3, 2025
    Inventors: Ruoyu Wang, Yuliang Guo, Cheng Zhao, Xinyu Huang, Liu Ren
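The plane-generation step of this application can be illustrated by an ordinary least-squares fit of a plane to the depth points of one segmentation mask. This is a minimal pure-Python sketch (assuming non-degenerate input); the application's plane, boundary, and tolerance estimation is considerably richer:

```python
def fit_plane(points):
    """Fit z = a*x + b*y + c to 3D points by least squares, solving the
    3x3 normal equations with Gaussian elimination."""
    sxx = syy = sxy = sx = sy = n = 0.0
    sxz = syz = sz = 0.0
    for x, y, z in points:
        sxx += x * x; syy += y * y; sxy += x * y
        sx += x; sy += y; n += 1
        sxz += x * z; syz += y * z; sz += z
    # Augmented matrix [A^T A | A^T z] for rows [x, y, 1].
    m = [[sxx, sxy, sx, sxz],
         [sxy, syy, sy, syz],
         [sx,  sy,  n,  sz]]
    # Forward elimination with partial pivoting.
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            m[r] = [v - f * w for v, w in zip(m[r], m[col])]
    # Back substitution.
    coeffs = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        coeffs[r] = (m[r][3] - sum(m[r][c] * coeffs[c]
                                   for c in range(r + 1, 3))) / m[r][r]
    return tuple(coeffs)  # (a, b, c)
```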
  • Publication number: 20250153736
    Abstract: Methods and systems for training an autonomous driving system using a vision-language planning (VLP) model. Image data is obtained from a vehicle-mounted camera, encompassing details about agents situated within the external environment. Via image processing, the system identifies these agents within the environment. A Bird's Eye View (BEV) representation of the surroundings is then generated, encapsulating the spatiotemporal information linked to the vehicle and the recognized agents. Execution of the VLP machine learning model begins by extracting vision-based planning features from the BEV, and receiving or generating textual information characterizing various attributes of the vehicle within the environment. Text-based planning features are extracted from this textual information. To enhance model performance, a contrastive learning model is engaged to establish similarities between the vision-based and text-based planning features, and a predicted trajectory is output based on the similarities.
    Type: Application
    Filed: November 10, 2023
    Publication date: May 15, 2025
    Inventors: Chenbin PAN, Burhaneddin YAMAN, Tommaso NESTI, Abhirup MALLIK, Yuliang GUO, Liu REN
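The contrastive-learning step of the abstract can be sketched as an InfoNCE-style loss that pulls each vision planning feature toward its paired text planning feature. This is a generic stand-in (the temperature value and pairing convention are assumptions); extraction of features from the BEV and text is not shown:

```python
import math

def cosine_sim(u, v):
    du = math.sqrt(sum(x * x for x in u))
    dv = math.sqrt(sum(x * x for x in v))
    return sum(a * b for a, b in zip(u, v)) / (du * dv)

def contrastive_loss(vision_feats, text_feats, temperature=0.1):
    """InfoNCE-style loss: the i-th vision feature is assumed to pair with
    the i-th text feature; all other text features act as negatives."""
    n = len(vision_feats)
    loss = 0.0
    for i in range(n):
        logits = [cosine_sim(vision_feats[i], t) / temperature
                  for t in text_feats]
        m = max(logits)  # max-subtraction for numerical stability
        log_denom = m + math.log(sum(math.exp(lg - m) for lg in logits))
        loss += -(logits[i] - log_denom)
    return loss / n
```

Perfectly aligned pairs give a near-zero loss; mismatched pairs give a larger one.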
  • Patent number: 12299837
    Abstract: A system and method are disclosed herein for developing a machine perception model in the omnidirectional image domain. The system and method utilize the knowledge distillation process to transfer and adapt knowledge from the perspective projection image domain to the omnidirectional image domain. A teacher model is pre-trained to perform the machine perception task in the perspective projection image. A student model is trained by adapting the pre-existing knowledge of the teacher model from the perspective projection image domain to the omnidirectional image domain. By way of this training, the student model learns to perform the same machine perception task, except in the omnidirectional image domain, using limited or no suitably labeled training data in the omnidirectional image domain.
    Type: Grant
    Filed: December 8, 2021
    Date of Patent: May 13, 2025
    Assignee: Robert Bosch GmbH
    Inventors: Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren
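The teacher-to-student transfer this patent builds on is the standard knowledge-distillation objective: the student matches the teacher's temperature-softened output distribution. A generic sketch of that principle (the perspective-to-omnidirectional domain adaptation itself is not modeled here):

```python
import math

def softened(logits, T):
    """Temperature-softened softmax."""
    m = max(logits)
    exps = [math.exp((lg - m) / T) for lg in logits]
    s = sum(exps)
    return [e / s for e in exps]

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between softened teacher and student distributions,
    scaled by T^2 as in standard distillation."""
    p = softened(teacher_logits, T)
    q = softened(student_logits, T)
    return T * T * sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
```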
  • Patent number: 12145592
    Abstract: A method for performing at least one perception task associated with autonomous vehicle control includes receiving a first dataset comprising a plurality of images and identifying a first object category of objects associated with the plurality of images, the first object category including a plurality of object types. The method also includes identifying a current statistical distribution of a first object type of the plurality of object types and determining a first distribution difference between the current statistical distribution of the first object type and a standard statistical distribution associated with the first object category. The method also includes, in response to a determination that the first distribution difference is greater than a threshold, generating first object type data corresponding to the first object type, configuring at least one attribute of the first object type data, and generating a second dataset by augmenting the first dataset using the first object type data.
    Type: Grant
    Filed: March 23, 2022
    Date of Patent: November 19, 2024
    Assignee: Robert Bosch GmbH
    Inventors: Yiqi Zhong, Xinyu Huang, Yuliang Guo, Liang Gou, Liu Ren
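The distribution-difference check at the heart of this method can be sketched as comparing per-type frequencies against a standard distribution and flagging the types that drift past a threshold. Function name, dictionary layout, and the default threshold are illustrative assumptions; generating and configuring the synthetic object data for flagged types is not shown:

```python
def needs_augmentation(counts, standard, threshold=0.1):
    """Return the object types whose current frequency differs from the
    standard distribution by more than `threshold`.

    counts:   {object_type: occurrence count in the current dataset}
    standard: {object_type: expected fraction in a balanced dataset}
    """
    total = sum(counts.values())
    flagged = []
    for obj_type, std_frac in standard.items():
        cur_frac = counts.get(obj_type, 0) / total
        if abs(cur_frac - std_frac) > threshold:
            flagged.append(obj_type)
    return flagged
```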
  • Publication number: 20240203152
    Abstract: A method for identifying human poses in an image, a computer system, and a non-transitory computer-readable medium are provided. The method includes obtaining the image, encoding the image into a feature map, converting the feature map into a plurality of TPDF maps corresponding to a plurality of predefined part types, and generating a global pose map based on the feature map and the plurality of TPDF maps corresponding to the predefined part types. Each TPDF map corresponds to a respective predefined part type and represents a plurality of vectors connecting each position on the image to a closest body part among the one or more body parts of the respective predefined part type. A global pose map is generated based on the feature map and the plurality of TPDF maps, identifying one or more human bodies having a plurality of body parts in a scene of the image.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 20, 2024
    Applicant: INNOPEAK TECHNOLOGY, INC.
    Inventors: Yuliang GUO, Zhong LI, Xiangyu DU, Yi XU, Shuxue QUAN
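The per-part-type maps of vectors "connecting each position on the image to a closest body part" can be pictured as a dense offset field. The toy version below computes such a field from known keypoints of one part type; in the application the maps are predicted by a network, so this is only a loose analogue:

```python
def offset_field(width, height, keypoints):
    """For every pixel (x, y), store the 2D offset to the closest keypoint
    of a single predefined part type (ties go to the first keypoint)."""
    field = []
    for y in range(height):
        row = []
        for x in range(width):
            kx, ky = min(keypoints,
                         key=lambda k: (k[0] - x) ** 2 + (k[1] - y) ** 2)
            row.append((kx - x, ky - y))
        field.append(row)
    return field
```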
  • Publication number: 20230303084
    Abstract: A method for performing at least one perception task associated with autonomous vehicle control includes receiving a first dataset comprising a plurality of images and identifying a first object category of objects associated with the plurality of images, the first object category including a plurality of object types. The method also includes identifying a current statistical distribution of a first object type of the plurality of object types and determining a first distribution difference between the current statistical distribution of the first object type and a standard statistical distribution associated with the first object category. The method also includes, in response to a determination that the first distribution difference is greater than a threshold, generating first object type data corresponding to the first object type, configuring at least one attribute of the first object type data, and generating a second dataset by augmenting the first dataset using the first object type data.
    Type: Application
    Filed: March 23, 2022
    Publication date: September 28, 2023
    Inventors: Yiqi Zhong, Xinyu Huang, Yuliang Guo, Liang Gou, Liu Ren
  • Publication number: 20230281864
    Abstract: A computer-implemented system and method for semantic localization of various objects includes obtaining an image from a camera. The image displays a scene with a first object and a second object. A first set of 2D keypoints are generated with respect to the first object. First object pose data is generated based on the first set of 2D keypoints. Camera pose data is generated based on the first object pose data. A keypoint heatmap is generated using the camera pose data. A second set of 2D keypoints is generated with respect to the second object based on the keypoint heatmap. Second object pose data is generated based on the second set of 2D keypoints. First coordinate data of the first object is generated in world coordinates using the first object pose data and the camera pose data. Second coordinate data of the second object is generated in the world coordinates using the second object pose data and the camera pose data. The first object is tracked based on the first coordinate data.
    Type: Application
    Filed: March 4, 2022
    Publication date: September 7, 2023
    Inventors: Yuliang Guo, Xinyu Huang, Liu Ren
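The final step of this abstract, mapping object coordinates through the camera pose into world coordinates, is a homogeneous transform. A minimal helper (keypoint detection and the pose estimation itself are not shown):

```python
def transform_point(pose, point):
    """Apply a 4x4 homogeneous pose matrix (row-major, e.g. a camera-to-world
    pose) to a 3D point, returning the transformed 3D coordinates."""
    x, y, z = point
    v = (x, y, z, 1.0)
    return tuple(sum(pose[r][c] * v[c] for c in range(4)) for r in range(3))
```

With a pure-translation pose of (1, 2, 3), the camera-frame point (1, 1, 1) lands at (2, 3, 4) in world coordinates.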
  • Publication number: 20230245448
    Abstract: A method and system for an augmented reality assistant that recognizes a step in a product assembly process and assists in the installation of a constituent component into a base component. The system has a prepopulated database of templates, the templates being generated from two-dimensional images and the related three-dimensional models. The template database is used to train a first machine learning model configured to identify the step in the product assembly process based on an image captured from an image capture device. That identification is verified by a second machine learning model, and an AR assistant is presented to the user to assist with the step based on the related template.
    Type: Application
    Filed: January 28, 2022
    Publication date: August 3, 2023
    Inventors: Yuliang GUO, Xinyu HUANG, Liu REN
  • Publication number: 20230244835
    Abstract: Methods and systems for determining a 6D pose of an object in an image are disclosed. In embodiments, an input image is received from a sensor, wherein the input image includes an object in the image. A trained image encoder transforms the input image into a normal map and an instance segmentation map. The normal map is encoded with pointwise 2D features. A 3D CAD model is selected from memory that resembles the object in the image. The 3D CAD model is encoded with pointwise 3D features. The pointwise 2D features are matched with the pointwise 3D features to obtain correspondences between the 2D features and the 3D features. The 6D pose of the object is then determined based on the correspondences.
    Type: Application
    Filed: January 31, 2022
    Publication date: August 3, 2023
    Inventors: Yuliang GUO, Xinyu HUANG, Liu REN
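The 2D-to-3D correspondence step can be sketched as nearest-neighbor matching between the pointwise 2D features of the normal map and the pointwise 3D features of the CAD model. Brute-force cosine matching stands in for the learned matching here, and the final pose solve from correspondences is omitted:

```python
import math

def match_features(feats_2d, feats_3d):
    """For each pointwise 2D feature, return the index of the 3D CAD-model
    feature with the highest cosine similarity."""
    def cos(u, v):
        nu = math.sqrt(sum(a * a for a in u))
        nv = math.sqrt(sum(b * b for b in v))
        return sum(a * b for a, b in zip(u, v)) / (nu * nv)

    return [max(range(len(feats_3d)), key=lambda j: cos(f, feats_3d[j]))
            for f in feats_2d]
```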
  • Patent number: 11715300
    Abstract: A method and system for an augmented reality assistant that recognizes a step in a product assembly process and assists in the installation of a constituent component into a base component. The system has a prepopulated database of templates, the templates being generated from two-dimensional images and the related three-dimensional models. The template database is used to train a first machine learning model configured to identify the step in the product assembly process based on an image captured from an image capture device. That identification is verified by a second machine learning model, and an AR assistant is presented to the user to assist with the step based on the related template.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: August 1, 2023
    Assignee: Robert Bosch GMBH
    Inventors: Yuliang Guo, Xinyu Huang, Liu Ren
  • Patent number: 11704913
    Abstract: A list of images is received. The images were captured by a sensor of an ADV chronologically while driving through a driving environment. A first image of the images is identified that includes a first object in a first dimension (e.g., larger size) detected by an object detector using an object detection algorithm. In response to the detection of the first object, the images in the list are traversed backwardly in time from the first image to identify a second image that includes a second object in a second dimension (e.g., smaller size) based on a moving trail of the ADV represented by the list of images. The second object is then labeled or annotated in the second image equivalent to the first object in the first image. The list of images having the labeled second image can be utilized for subsequent object detection during autonomous driving.
    Type: Grant
    Filed: July 2, 2019
    Date of Patent: July 18, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
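The backward traversal in this abstract can be sketched as: find the first frame with a confident detection, then walk earlier frames to propagate the label onto smaller, more distant instances of the same object. The helpers `detect` and `is_same_object` are hypothetical placeholders for the detector and the trail-based re-localization:

```python
def propagate_label_backward(frames, detect, is_same_object):
    """Label earlier frames of a chronological list by traversing backward
    from the first confident detection.

    detect(frame)              -> bounding box or None
    is_same_object(box, frame) -> the object's box in an earlier frame, or None
    """
    labels = [None] * len(frames)
    anchor = None
    for i, frame in enumerate(frames):
        box = detect(frame)
        if box is not None:
            anchor = (i, box)
            break
    if anchor is None:
        return labels
    i, box = anchor
    labels[i] = box
    for j in range(i - 1, -1, -1):  # traverse backward in time
        earlier = is_same_object(box, frames[j])
        if earlier is None:
            break
        labels[j] = earlier
    return labels
```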
  • Patent number: 11679764
    Abstract: During autonomous driving, the movement trails or moving history of obstacles, as well as of an autonomous driving vehicle (ADV), may be maintained in a corresponding buffer. For the obstacles and the ADV, the vehicle states at different points in time are maintained and stored in one or more buffers. The vehicle states representing the moving trails or moving history of the obstacles and the ADV may be utilized to reconstruct a history trajectory of the obstacles and the ADV, which may be used for a variety of purposes. For example, the moving trails or history of obstacles may be utilized to determine the lane configuration of one or more lanes of a road, particularly in a rural area where the lane markings are unclear. The moving history of the obstacles may also be utilized to predict the future movement of the obstacles, tailgate an obstacle, and infer a lane line.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: June 20, 2023
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Guang Chen, Weide Zhang, Yuliang Guo, Ka Wai Tsoi
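The per-obstacle state buffers described above map naturally onto fixed-length ring buffers keyed by agent. A minimal sketch (class name and buffer length are assumptions; trajectory reconstruction and lane inference from the history are not shown):

```python
from collections import deque

class MovingHistory:
    """Fixed-length buffer of timestamped vehicle states, one per agent."""

    def __init__(self, maxlen=100):
        self.buffers = {}
        self.maxlen = maxlen

    def record(self, agent_id, t, state):
        """Append (t, state); the oldest entry drops out when full."""
        self.buffers.setdefault(
            agent_id, deque(maxlen=self.maxlen)).append((t, state))

    def trajectory(self, agent_id):
        """Return the retained history for one agent, oldest first."""
        return list(self.buffers.get(agent_id, ()))
```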
  • Publication number: 20230186590
    Abstract: A method and device for performing a perception task are disclosed. The method and device incorporate a dense regression model. The dense regression model advantageously incorporates a distortion-free convolution technique that is designed to accommodate and appropriately handle the varying levels of distortion in omnidirectional images across different regions. In addition to distortion-free convolution, the dense regression model further utilizes a transformer that incorporates a spherical self-attention, which uses distortion-free image embeddings to compute an appearance attention and uses spherical distance to compute a positional attention.
    Type: Application
    Filed: December 13, 2021
    Publication date: June 15, 2023
    Inventors: Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren
  • Publication number: 20230184949
    Abstract: A system and method are disclosed herein for developing robust semantic mapping models for estimating semantic maps from LiDAR scans. In particular, the system and method enable the generation of realistic simulated LiDAR scans based on two-dimensional (2D) floorplans, for the purpose of providing a much larger set of training data that can be used to train robust semantic mapping models. These simulated LiDAR scans, as well as real LiDAR scans, are annotated using automated and manual processes with a rich set of semantic labels. Based on the annotated LiDAR scans, one or more semantic mapping models can be trained to estimate the semantic map for new LiDAR scans. The trained semantic mapping model can be deployed in robot vacuum cleaners, as well as similar devices that must interpret LiDAR scans of an environment to perform a task.
    Type: Application
    Filed: December 9, 2021
    Publication date: June 15, 2023
    Inventors: Xinyu Huang, Sharath Gopal, Lincan Zou, Yuliang Guo, Liu Ren
  • Publication number: 20230177637
    Abstract: A system and method are disclosed herein for developing a machine perception model in the omnidirectional image domain. The system and method utilize the knowledge distillation process to transfer and adapt knowledge from the perspective projection image domain to the omnidirectional image domain. A teacher model is pre-trained to perform the machine perception task in the perspective projection image. A student model is trained by adapting the pre-existing knowledge of the teacher model from the perspective projection image domain to the omnidirectional image domain. By way of this training, the student model learns to perform the same machine perception task, except in the omnidirectional image domain, using limited or no suitably labeled training data in the omnidirectional image domain.
    Type: Application
    Filed: December 8, 2021
    Publication date: June 8, 2023
    Inventors: Yuliang Guo, Zhixin Yan, Yuyan Li, Xinyu Huang, Liu Ren
  • Patent number: 11227167
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) may be located. The image comprises a plurality of line indicators. The plurality of line indicators represent one or more lanes in the environment. The image is part of training data for a neural network. The method also includes determining a plurality of line segments based on the plurality of line indicators. The method further includes determining a vanishing point within the image based on the plurality of line segments. The method further includes updating one or more of the image or metadata associated with the image to indicate a location of the vanishing point within the image.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: January 18, 2022
    Assignee: BAIDU USA LLC
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
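Determining a vanishing point from lane-line segments can be sketched as a least-squares intersection: find the image point minimizing the summed squared perpendicular distance to the extended lines. A geometric sketch only (the patent derives the segments from labeled lane indicators in training images, and degenerate parallel-line input is not handled):

```python
import math

def vanishing_point(segments):
    """Least-squares intersection of 2D line segments, each given as
    ((x0, y0), (x1, y1)). Solves the 2x2 normal equations built from the
    unit normals of the (extended) lines."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x0, y0), (x1, y1) in segments:
        dx, dy = x1 - x0, y1 - y0
        norm = math.hypot(dx, dy)
        nx, ny = dy / norm, -dx / norm  # unit normal to the segment
        d = nx * x0 + ny * y0           # line equation: nx*x + ny*y = d
        a11 += nx * nx; a12 += nx * ny; a22 += ny * ny
        b1 += nx * d; b2 += ny * d
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

Two segments crossing at (1, 1) recover exactly that point.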
  • Patent number: 11120566
    Abstract: In some implementations, a method is provided. The method includes obtaining an image depicting an environment where an autonomous driving vehicle (ADV) is located. The method also includes determining, using a first neural network, a plurality of line indicators based on the image. The plurality of line indicators represent one or more lanes in the environment. The method further includes determining, using a second neural network, a vanishing point within the image based on the plurality of line indicators. The second neural network is communicatively coupled to the first neural network. The plurality of line indicators is determined simultaneously with the vanishing point. The method further includes calibrating one or more sensors of the autonomous driving vehicle based on the vanishing point.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: September 14, 2021
    Assignee: BAIDU USA LLC
    Inventors: Yuliang Guo, Tae Eun Choe, Ka Wai Tsoi, Guang Chen, Weide Zhang
  • Patent number: 11055540
    Abstract: In one embodiment, a set of bounding box candidates are plotted onto a 2D space based on their respective dimension (e.g., widths and heights). The bounding box candidates are clustered on the 2D space based on the distribution density of the bounding box candidates. For each of the clusters of the bounding box candidates, an anchor box is determined to represent the corresponding cluster. A neural network model is trained based on the anchor boxes representing the clusters. The neural network model is utilized to detect or recognize objects based on images and/or point clouds captured by a sensor (e.g., camera, LIDAR, and/or RADAR) of an autonomous driving vehicle.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: July 6, 2021
    Assignee: BAIDU USA LLC
    Inventors: Ka Wai Tsoi, Tae Eun Choe, Yuliang Guo, Guang Chen, Weide Zhang
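Clustering bounding-box dimensions into representative anchor boxes can be sketched with plain k-means over (width, height) pairs. Note this is a standard stand-in: the patent clusters by distribution density on the 2D space, which may differ from k-means, and the naive first-k initialization is an assumption:

```python
def anchor_boxes(dims, k, iters=20):
    """Cluster (width, height) pairs with k-means; the cluster centers
    serve as anchor boxes for the detector."""
    centers = dims[:k]  # naive init: first k boxes as seeds
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for w, h in dims:
            j = min(range(k), key=lambda c: (centers[c][0] - w) ** 2
                                            + (centers[c][1] - h) ** 2)
            groups[j].append((w, h))
        centers = [
            (sum(w for w, _ in g) / len(g), sum(h for _, h in g) / len(g))
            if g else centers[j]  # keep an empty cluster's old center
            for j, g in enumerate(groups)
        ]
    return centers
```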
  • Patent number: 10915766
    Abstract: In one embodiment, in addition to detecting or recognizing an actual lane, a virtual lane is determined based on the current state or motion prediction of an ADV. A virtual lane may or may not be identical or similar to the actual lane. A virtual lane may represent the likely movement of the ADV in a next time period given the current speed and heading direction of the vehicle. If an object is detected that may cross a lane line of the virtual lane and is the closest object to the ADV, the object is considered a CIPO (closest in-path object), and an emergency operation may be activated. That is, even though an object may not be in the path of an actual lane, if the object is in the path of a virtual lane of an ADV, the object may be considered a CIPO and subject to a special operation.
    Type: Grant
    Filed: June 28, 2019
    Date of Patent: February 9, 2021
    Assignee: BAIDU USA LLC
    Inventors: Tae Eun Choe, Yuliang Guo, Guang Chen, Weide Zhang, Ka Wai Tsoi
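The virtual-lane test can be pictured as a corridor projected along the current heading for a short horizon, with an object flagged if it falls inside. Toy geometry in ego-centric coordinates; the lane half-width and time horizon are illustrative parameters, not values from the patent:

```python
import math

def in_virtual_lane(ego_heading, ego_speed, obj_x, obj_y,
                    lane_half_width=1.8, horizon_s=3.0):
    """Check whether an object at (obj_x, obj_y) in the ego frame lies inside
    the virtual lane: within lane_half_width laterally of the heading axis,
    and ahead of the vehicle within speed * horizon_s meters."""
    # Rotate the object into a frame whose x-axis is the heading direction.
    along = obj_x * math.cos(ego_heading) + obj_y * math.sin(ego_heading)
    lateral = -obj_x * math.sin(ego_heading) + obj_y * math.cos(ego_heading)
    return 0.0 <= along <= ego_speed * horizon_s and abs(lateral) <= lane_half_width
```

An object 5 m ahead and 0.5 m off-center is inside the corridor of a 10 m/s ego vehicle; the same object 3 m off-center, or behind the vehicle, is not.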