Patents by Inventor Panqu Wang

Panqu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200160067
    Abstract: A system and method for image localization based on semantic segmentation are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on an autonomous vehicle; performing semantic segmentation or other object detection on the received image data to identify and label objects in the image data and produce semantic label image data; identifying extraneous objects in the semantic label image data; removing the extraneous objects from the semantic label image data; comparing the semantic label image data to a baseline semantic label map; and determining a vehicle location of the autonomous vehicle based on information in a matching baseline semantic label map.
    Type: Application
    Filed: January 25, 2020
    Publication date: May 21, 2020
    Inventors: Zehua HUANG, Pengfei CHEN, Panqu WANG, Ke XU
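The localization pipeline in this abstract can be sketched in a few lines: blank out dynamic ("extraneous") object classes from the semantic label image, then pick the baseline semantic label map whose labels agree best with it. This is a minimal illustration, not the patented implementation; the class ids, the per-pixel agreement score, and the function names are all assumptions.

```python
import numpy as np

# Hypothetical class ids; the patent does not specify a label set.
DYNAMIC_CLASSES = {3, 4}   # e.g. "car", "pedestrian" - extraneous for localization
UNLABELED = 0

def remove_extraneous(sem_img: np.ndarray) -> np.ndarray:
    """Blank out dynamic (extraneous) objects from a semantic label image."""
    cleaned = sem_img.copy()
    cleaned[np.isin(cleaned, list(DYNAMIC_CLASSES))] = UNLABELED
    return cleaned

def match_score(sem_img: np.ndarray, baseline: np.ndarray) -> float:
    """Fraction of labeled pixels that agree with a baseline semantic map."""
    labeled = sem_img != UNLABELED
    if not labeled.any():
        return 0.0
    return float((sem_img[labeled] == baseline[labeled]).mean())

def localize(sem_img: np.ndarray, baseline_maps: dict) -> str:
    """Return the location key of the best-matching baseline semantic map."""
    cleaned = remove_extraneous(sem_img)
    return max(baseline_maps, key=lambda loc: match_score(cleaned, baseline_maps[loc]))
```

A production system would match against georeferenced map tiles rather than a small dictionary, but the structure of the comparison is the same.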
  • Publication number: 20200126179
    Abstract: A system and method for fisheye image processing is disclosed.
    Type: Application
    Filed: October 19, 2018
    Publication date: April 23, 2020
    Inventors: Zhipeng YAN, Pengfei CHEN, Panqu WANG
  • Publication number: 20200082180
    Abstract: A system and method for three-dimensional (3D) object detection is disclosed. A particular embodiment can be configured to: receive image data from at least one camera associated with an autonomous vehicle, the image data representing at least one image frame; use a trained deep learning module to determine pixel coordinates of a two-dimensional (2D) bounding box around an object detected in the image frame; use the trained deep learning module to determine vertices of a three-dimensional (3D) bounding box around the object; use a fitting module to obtain geological information related to a particular environment associated with the image frame and to obtain camera calibration information associated with the at least one camera; and use the fitting module to determine 3D attributes of the object using the 3D bounding box, the geological information, and the camera calibration information.
    Type: Application
    Filed: September 12, 2018
    Publication date: March 12, 2020
    Inventor: Panqu WANG
  • Patent number: 10586456
    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
    Type: Grant
    Filed: April 27, 2018
    Date of Patent: March 10, 2020
    Assignee: TuSimple
    Inventor: Panqu Wang
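The core measurement in this patent, the distance between a segmented wheel and a nearby segmented lane, can be illustrated with a brute-force pixel computation over the two segmentation maps. This is a hypothetical sketch: the boolean masks, the broadcasting approach, and the use of pixel-space (rather than calibrated metric) distance are simplifications.

```python
import numpy as np

def min_pixel_distance(wheel_mask: np.ndarray, lane_mask: np.ndarray) -> float:
    """Smallest Euclidean pixel distance between a wheel blob and a lane blob.

    Both inputs are boolean segmentation maps of the same shape. A real system
    would convert this pixel distance to metric units via camera calibration.
    """
    wheel_pts = np.argwhere(wheel_mask)   # (N, 2) row/col coordinates
    lane_pts = np.argwhere(lane_mask)     # (M, 2)
    if len(wheel_pts) == 0 or len(lane_pts) == 0:
        return float("inf")
    # Pairwise differences via broadcasting: (N, 1, 2) - (1, M, 2)
    diffs = wheel_pts[:, None, :] - lane_pts[None, :, :]
    return float(np.sqrt((diffs ** 2).sum(axis=-1)).min())
```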
  • Patent number: 10558864
    Abstract: A system and method for image localization based on semantic segmentation are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on an autonomous vehicle; performing semantic segmentation or other object detection on the received image data to identify and label objects in the image data and produce semantic label image data; identifying extraneous objects in the semantic label image data; removing the extraneous objects from the semantic label image data; comparing the semantic label image data to a baseline semantic label map; and determining a vehicle location of the autonomous vehicle based on information in a matching baseline semantic label map.
    Type: Grant
    Filed: May 18, 2017
    Date of Patent: February 11, 2020
    Assignee: TuSimple
    Inventors: Zehua Huang, Pengfei Chen, Panqu Wang, Ke Xu
  • Patent number: 10528851
    Abstract: A system and method for drivable road surface representation generation using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle and receiving three dimensional (3D) point cloud data from a distance measuring device mounted on the vehicle; projecting the 3D point cloud data onto the 2D image data to produce mapped image and point cloud data; performing post-processing operations on the mapped image and point cloud data; and performing a smoothing operation on the processed mapped image and point cloud data to produce a drivable road surface map or representation.
    Type: Grant
    Filed: November 27, 2017
    Date of Patent: January 7, 2020
    Assignee: TuSimple
    Inventors: Ligeng Zhu, Panqu Wang, Pengfei Chen
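The projection step above, mapping 3D point cloud data onto the 2D image, is the standard pinhole camera projection. A minimal sketch, assuming the points are already expressed in the camera frame and the 3x3 intrinsic matrix K is known from calibration (the patent does not specify the calibration model):

```python
import numpy as np

def project_points(points_3d: np.ndarray, K: np.ndarray) -> np.ndarray:
    """Project 3D points in the camera frame onto the 2D image plane.

    points_3d: (N, 3) array of (x, y, z) with z > 0 in front of the camera.
    K: 3x3 camera intrinsic matrix.
    Returns (N, 2) pixel coordinates (u, v).
    """
    proj = (K @ points_3d.T).T            # (N, 3) homogeneous pixel coordinates
    return proj[:, :2] / proj[:, 2:3]     # perspective divide by depth
```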
  • Publication number: 20190370574
    Abstract: A system and method for taillight signal recognition using a convolutional neural network is disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
    Type: Application
    Filed: August 16, 2019
    Publication date: December 5, 2019
    Inventors: Panqu WANG, Tian LI
  • Patent number: 10481267
    Abstract: A method of generating a ground truth dataset for motion planning of a vehicle is disclosed. The method includes: obtaining undistorted LiDAR scans; identifying, for a pair of undistorted LiDAR scans, points belonging to a static object in an environment; aligning the close points based on pose estimates; and transforming a reference scan that is close in time to a target undistorted LiDAR scan so as to align the reference scan with the target undistorted LiDAR scan.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: November 19, 2019
    Assignee: TuSimple
    Inventors: Yi Wang, Yi Luo, Wentao Zhu, Panqu Wang
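The final step of the method above, transforming a reference scan into alignment with a target scan using pose estimates, amounts to applying a relative rigid-body transform. A sketch under the assumption that each scan carries a 4x4 world-frame pose matrix; the function names are illustrative:

```python
import numpy as np

def relative_transform(pose_ref: np.ndarray, pose_tgt: np.ndarray) -> np.ndarray:
    """4x4 transform taking points from the reference scan's frame into the
    target scan's frame, given each scan's 4x4 world pose estimate."""
    return np.linalg.inv(pose_tgt) @ pose_ref

def align_scan(ref_points: np.ndarray, pose_ref: np.ndarray,
               pose_tgt: np.ndarray) -> np.ndarray:
    """Transform an (N, 3) reference scan into the target scan's frame."""
    T = relative_transform(pose_ref, pose_tgt)
    homo = np.hstack([ref_points, np.ones((len(ref_points), 1))])  # (N, 4)
    return (T @ homo.T).T[:, :3]
```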
  • Patent number: 10471963
    Abstract: A system and method for transitioning between an autonomous and manual driving mode based on detection of a driver's capacity to control a vehicle are disclosed. A particular embodiment includes: receiving sensor data related to a vehicle driver's capacity to take manual control of an autonomous vehicle; determining, based on the sensor data, if the driver has the capacity to take manual control of the autonomous vehicle, the determining including prompting the driver to perform an action or provide an input; and outputting a vehicle control transition signal to a vehicle subsystem to cause the vehicle subsystem to take action based on the driver's capacity to take manual control of the autonomous vehicle.
    Type: Grant
    Filed: April 7, 2017
    Date of Patent: November 12, 2019
    Assignee: TuSimple
    Inventors: Zehua Huang, Panqu Wang, Pengfei Chen
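The decision logic described in this abstract, prompt the driver, judge capacity from the response, then emit a control transition signal, can be caricatured as a small function. The signal names, the response-time threshold, and the fallback action below are hypothetical, not taken from the patent:

```python
def transition_signal(handover_requested: bool, driver_responded: bool,
                      response_time_s: float, max_response_s: float = 3.0) -> str:
    """Emit a vehicle control transition signal from a driver-capacity check.

    The driver is deemed to have capacity only if they responded to the
    prompt within the (illustrative) time limit.
    """
    if not handover_requested:
        return "REMAIN_AUTONOMOUS"
    if driver_responded and response_time_s <= max_response_s:
        return "TRANSFER_TO_MANUAL"
    # Driver lacks capacity: keep autonomous control and take a safe action.
    return "SAFE_STOP"
```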
  • Publication number: 20190333389
    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
    Type: Application
    Filed: April 27, 2018
    Publication date: October 31, 2019
    Inventor: Panqu Wang
  • Publication number: 20190286916
    Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
    Type: Application
    Filed: March 18, 2018
    Publication date: September 19, 2019
    Inventors: Zhipeng YAN, Lingting GE, Pengfei CHEN, Panqu WANG
  • Patent number: 10410055
    Abstract: A system and method for aerial video traffic analysis are disclosed. A particular embodiment is configured to: receive a captured video image sequence from an unmanned aerial vehicle (UAV); clip the video image sequence by removing unnecessary images; stabilize the video image sequence by choosing a reference image and adjusting other images to the reference image; extract a background image of the video image sequence for vehicle segmentation; perform vehicle segmentation to identify vehicles in the video image sequence on a pixel by pixel basis; determine a centroid, heading, and rectangular shape of each identified vehicle; perform vehicle tracking to detect a same identified vehicle in multiple image frames of the video image sequence; and produce output and visualization of the video image sequence including a combination of the background image and the images of each identified vehicle.
    Type: Grant
    Filed: October 5, 2017
    Date of Patent: September 10, 2019
    Assignee: TuSimple
    Inventors: Yijie Wang, Panqu Wang, Pengfei Chen
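Two of the steps above, background extraction and pixel-wise vehicle segmentation, have a classic minimal form: a temporal median over the stabilized frames (a moving vehicle occupies any given pixel only briefly, so the median recovers the static road), followed by thresholded background subtraction. The threshold value and function names are illustrative assumptions:

```python
import numpy as np

def extract_background(frames: np.ndarray) -> np.ndarray:
    """Per-pixel temporal median over a (T, H, W) stack of stabilized frames."""
    return np.median(frames, axis=0)

def segment_vehicles(frame: np.ndarray, background: np.ndarray,
                     threshold: float = 25.0) -> np.ndarray:
    """Pixel-wise foreground mask: large deviation from background => vehicle."""
    return np.abs(frame.astype(float) - background) > threshold
```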
  • Publication number: 20190272433
    Abstract: A system and method for vehicle occlusion detection is disclosed.
    Type: Application
    Filed: May 19, 2019
    Publication date: September 5, 2019
    Inventors: Hongkai YU, Zhipeng YAN, Panqu WANG, Pengfei CHEN
  • Publication number: 20190266420
    Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame compared with object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide as an output object tracking output data corresponding to the states of the finite state machines for each object.
    Type: Application
    Filed: February 27, 2018
    Publication date: August 29, 2019
    Inventors: Lingting GE, Pengfei CHEN, Panqu WANG
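The tracking loop above, a similarity matrix, a best matching, and per-object finite state machines, can be sketched with IoU similarity, greedy association, and a toy state machine. The state names and the greedy (rather than globally optimal, e.g. Hungarian) matching are assumptions for illustration:

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def associate(prev_boxes, det_boxes, min_iou=0.3):
    """Greedy best-first matching on the IoU similarity matrix.
    Returns (matches, unmatched_prev, unmatched_det) as index lists."""
    sim = np.array([[iou(p, d) for d in det_boxes] for p in prev_boxes])
    matches, used_p, used_d = [], set(), set()
    for p, d in sorted(np.ndindex(*sim.shape), key=lambda pd: -sim[pd]):
        if p in used_p or d in used_d or sim[p, d] < min_iou:
            continue
        matches.append((p, d))
        used_p.add(p); used_d.add(d)
    unmatched_prev = [p for p in range(len(prev_boxes)) if p not in used_p]
    unmatched_det = [d for d in range(len(det_boxes)) if d not in used_d]
    return matches, unmatched_prev, unmatched_det

def fsm_step(state: str, matched: bool) -> str:
    """Minimal per-object state machine (state names are illustrative)."""
    if state == "tentative":
        return "confirmed" if matched else "deleted"
    if state == "confirmed":
        return "confirmed" if matched else "lost"
    if state == "lost":
        return "confirmed" if matched else "deleted"
    return "deleted"
```

The data association results drive one `fsm_step` per tracked object; the set of non-deleted states is the tracking output.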
  • Patent number: 10387736
    Abstract: A system and method for detecting taillight signals of a vehicle using a convolutional neural network is disclosed.
    Type: Grant
    Filed: September 20, 2017
    Date of Patent: August 20, 2019
    Assignee: TuSimple
    Inventors: Yijie Wang, Ligeng Zhu, Panqu Wang, Pengfei Chen
  • Patent number: 10360686
    Abstract: A system for generating a ground truth dataset for motion planning of a vehicle is disclosed. The system includes an internet server that further includes an I/O port, configured to transmit and receive electrical signals to and from a client device; a memory; one or more processing units; and one or more programs stored in the memory and configured for execution by the one or more processing units, the one or more programs including instructions for: a corresponding module configured to correspond, for each pair of images, a first image of the pair to a LiDAR static-scene point cloud; and a computing module configured to compute a camera pose associated with the pair of images in the coordinate of the point cloud.
    Type: Grant
    Filed: June 13, 2017
    Date of Patent: July 23, 2019
    Assignee: TuSimple
    Inventors: Yi Wang, Yi Luo, Wentao Zhu, Panqu Wang
  • Patent number: 10311312
    Abstract: A system and method for vehicle occlusion detection is disclosed.
    Type: Grant
    Filed: October 28, 2017
    Date of Patent: June 4, 2019
    Assignee: TuSimple
    Inventors: Hongkai Yu, Zhipeng Yan, Panqu Wang, Pengfei Chen
  • Publication number: 20190164018
    Abstract: A system and method for drivable road surface representation generation using multimodal sensor data are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on a vehicle and receiving three dimensional (3D) point cloud data from a distance measuring device mounted on the vehicle; projecting the 3D point cloud data onto the 2D image data to produce mapped image and point cloud data; performing post-processing operations on the mapped image and point cloud data; and performing a smoothing operation on the processed mapped image and point cloud data to produce a drivable road surface map or representation.
    Type: Application
    Filed: November 27, 2017
    Publication date: May 30, 2019
    Inventors: Ligeng ZHU, Panqu WANG, Pengfei CHEN
  • Patent number: 10303956
    Abstract: A system and method for using triplet loss for proposal free instance-wise semantic segmentation for lane detection are disclosed. A particular embodiment includes: receiving image data from an image generating device mounted on an autonomous vehicle; performing a semantic segmentation operation or other object detection on the received image data to identify and label objects in the image data with object category labels on a per-pixel basis and producing corresponding semantic segmentation prediction data; performing a triplet loss calculation operation using the semantic segmentation prediction data to identify different instances of objects with similar object category labels found in the image data; and determining an appropriate vehicle control action for the autonomous vehicle based on the different instances of objects identified in the image data.
    Type: Grant
    Filed: August 23, 2017
    Date of Patent: May 28, 2019
    Assignee: TuSimple
    Inventors: Zehua Huang, Panqu Wang, Pengfei Chen, Tian Li
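The triplet loss at the heart of this approach pulls embeddings of pixels from the same object instance together and pushes embeddings from different instances apart by a margin. A minimal sketch on single embedding vectors; the squared-distance metric and margin value are standard choices, assumed here rather than taken from the patent:

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                 negative: np.ndarray, margin: float = 1.0) -> float:
    """Hinge-style triplet loss: zero once the negative is farther from the
    anchor than the positive by at least the margin."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return float(max(0.0, d_pos - d_neg + margin))
```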
  • Publication number: 20190108641
    Abstract: A system and method for semantic segmentation using hybrid dilated convolution (HDC) are disclosed. A particular embodiment includes: receiving an input image; producing a feature map from the input image; performing a convolution operation on the feature map and producing multiple convolution layers; grouping the multiple convolution layers into a plurality of groups; applying different dilation rates for different convolution layers in a single group of the plurality of groups; and applying a same dilation rate setting across all groups of the plurality of groups.
    Type: Application
    Filed: December 4, 2018
    Publication date: April 11, 2019
    Inventors: Zehua HUANG, Pengfei CHEN, Panqu WANG
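The HDC scheme above can be illustrated directly: within each group the dilation rate varies from layer to layer (a constant rate causes the well-known "gridding" artifact), and the same rate pattern is then reused for every group. The default (1, 2, 5) pattern below is a common choice in the HDC literature; treat the function and its defaults as illustrative:

```python
def hdc_dilation_rates(num_layers: int, group_pattern=(1, 2, 5)) -> list:
    """Assign a dilation rate to each convolution layer under the HDC scheme:
    rates differ within a group, and the identical pattern repeats across all
    groups of layers."""
    g = len(group_pattern)
    return [group_pattern[i % g] for i in range(num_layers)]
```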