Patents by Inventor Panqu Wang

Panqu Wang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12276982
    Abstract: A system installed in a vehicle includes a first group of sensing devices configured to allow a first level of autonomous operation of the vehicle; a second group of sensing devices configured to allow a second level of autonomous operation of the vehicle, the second group of sensing devices including primary sensing devices and backup sensing devices; a third group of sensing devices configured to allow the vehicle to perform a safe stop maneuver; and a control element communicatively coupled to the first group of sensing devices, the second group of sensing devices, and the third group of sensing devices. The control element is configured to: receive data from at least one of the first group, the second group, or the third group of sensing devices, and provide a control signal to a sensing device based on categorization information indicating a group to which the sensing device belongs.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: April 15, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Xiaoling Han, Chenzhe Qian, Chiyu Zhang, Charles A. Price, Joshua Miguel Rodriguez, Lei Nie, Lingting Ge, Panqu Wang, Pengfei Chen, Shuhan Yang, Xiangchen Zhao, Xiaodi Hou, Zehua Huang
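The group-based control described above can be sketched as a lookup from sensor to group to control policy. The group names and policies below are illustrative assumptions, not the patent's actual categorization:

```python
# Hypothetical sensor-to-group categorization and per-group control policy.
SENSOR_GROUPS = {
    "front_cam": "group1",       # first group: basic autonomous operation
    "main_lidar": "group2_primary",   # second group: higher-level autonomy
    "backup_lidar": "group2_backup",
    "stop_radar": "group3",      # third group: safe stop maneuver
}

GROUP_POLICY = {
    "group1": "standard_rate",
    "group2_primary": "high_rate",
    "group2_backup": "standby",
    "group3": "always_on",
}

def control_signal(sensor_id: str) -> str:
    """Select a control signal based on the group the sensor belongs to."""
    group = SENSOR_GROUPS[sensor_id]
    return GROUP_POLICY[group]
```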
  • Publication number: 20250086802
    Abstract: A method of processing point cloud information includes converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of input data; and applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module, and an M-stage decoder that performs a dense-to-sparse feature generation operation. The GCP module bridges an output of a last stage of the N stages with an input of a first stage of the M stages, where N and M are positive integers, and comprises a multi-scale feature extractor. The method further includes performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module.
    Type: Application
    Filed: February 6, 2024
    Publication date: March 13, 2025
    Inventors: Dongqiangzi YE, Zixiang ZHOU, Weijia CHEN, Yufei XIE, Yu WANG, Panqu WANG, Lingting GE
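The first two steps, voxelizing lidar points and pooling per-voxel features, can be sketched as below. The voxel size is an assumed parameter, and a coordinate-wise max stands in for the patent's MLP plus max-pooling layers:

```python
from collections import defaultdict

def voxelize(points, voxel_size=0.5):
    """Map each (x, y, z) point to a voxel index; return a sparse grid."""
    voxels = defaultdict(list)
    for x, y, z in points:
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    return voxels

def max_pool_features(voxels):
    """Stand-in for the MLP + max-pooling step: per-voxel coordinate-wise max."""
    return {k: tuple(max(p[i] for p in pts) for i in range(3))
            for k, pts in voxels.items()}
```

Only occupied voxels appear in the output, which is what makes the features sparse.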
  • Patent number: 12243428
    Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
    Type: Grant
    Filed: March 14, 2023
    Date of Patent: March 4, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Zhipeng Yan, Lingting Ge, Pengfei Chen, Panqu Wang
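The final step, applying bounding boxes around extracted objects, amounts to taking the axis-aligned extent of each object's pixels. A minimal sketch, assuming objects are given as lists of (x, y) pixel coordinates:

```python
def bounding_box(points):
    """Axis-aligned box (x_min, y_min, x_max, y_max) around extracted pixels."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    return (min(xs), min(ys), max(xs), max(ys))
```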
  • Patent number: 12242967
    Abstract: A system and method for instance-level roadway feature detection for autonomous vehicle control are disclosed.
    Type: Grant
    Filed: December 12, 2023
    Date of Patent: March 4, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Tian Li, Panqu Wang, Pengfei Chen
  • Publication number: 20250050913
    Abstract: Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
    Type: Application
    Filed: October 13, 2023
    Publication date: February 13, 2025
    Inventors: Zhe CHEN, Lingting GE, Joshua Miguel RODRIGUEZ, Ji HAN, Panqu WANG, Junjun XIN, Xiaoling HAN, Yizhe ZHAO
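The geometric core of the method above, finding a point in between the two detected points and testing it against the lane, can be sketched as follows. The lane test here is a simplified one-dimensional interval check, an assumption for illustration:

```python
def midpoint(p1, p2):
    """Second set of coordinates: a point in between the two detected points."""
    return ((p1[0] + p2[0]) / 2, (p1[1] + p2[1]) / 2)

def in_lane(point, lane_min_x, lane_max_x):
    """Toy check that the point falls within the lane's lateral extent."""
    return lane_min_x <= point[0] <= lane_max_x
```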
  • Publication number: 20250046075
    Abstract: A unified framework for detecting perception anomalies in autonomous driving systems is described. The perception anomaly detection framework takes an input image from a camera in or on a vehicle and identifies anomalies as belonging to one of three categories. Lens anomalies are associated with poor sensor conditions, such as water, dirt, or overexposure. Environment anomalies are associated with unfamiliar changes to an environment. Finally, object anomalies are associated with unknown objects. After perception anomalies are detected, the results are sent downstream to cause a behavior change of the vehicle.
    Type: Application
    Filed: November 22, 2023
    Publication date: February 6, 2025
    Inventors: Long SHA, Junliang ZHANG, Rundong GE, Xiangchen ZHAO, Fangjun ZHANG, Yizhe ZHAO, Panqu WANG
  • Publication number: 20250042369
    Abstract: Techniques are described for determining a set of pose information for an object when multiple sets of pose information are determined for a same object from multiple images. An example driving operation method includes obtaining, by a computer located in a vehicle, at least two sets of pose information related to an object located on a road on which the vehicle is operating, where each set of pose information includes characteristic(s) about the object, and where each set of pose information is determined from an image obtained by a camera; determining at least two weighted output vectors; determining, for the object, a set of pose information that are based on a combined weighted output vector that is obtained by combining the at least two weighted output vectors; and causing the vehicle to perform a driving-related operation using the set of pose information for the object.
    Type: Application
    Filed: October 13, 2023
    Publication date: February 6, 2025
    Inventors: Yizhe ZHAO, Zhe CHEN, Lingting GE, Panqu WANG
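Combining weighted output vectors into a single pose, as described above, can be sketched as a weighted average. The normalization scheme is an assumption; the abstract only states that the weighted vectors are combined:

```python
def fuse_poses(poses, weights):
    """Combine per-image pose vectors into one weighted-average pose vector."""
    total = sum(weights)
    n = len(poses[0])
    return tuple(sum(w * p[i] for p, w in zip(poses, weights)) / total
                 for i in range(n))
```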
  • Publication number: 20250029274
    Abstract: The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame, generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
    Type: Application
    Filed: October 17, 2023
    Publication date: January 23, 2025
    Inventors: Yizhe ZHAO, Zhe CHEN, Ye FAN, Lingting GE, Zhe HUANG, Panqu WANG, Xue MEI
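The evaluation step, comparing hypothesis projections against the two-dimensional bounding box, can be sketched with intersection-over-union as the comparison metric. IoU is an assumed scoring choice; the abstract does not name the metric:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def best_hypothesis(projections, detected_box):
    """Index of the pose hypothesis whose projection best matches the box."""
    return max(range(len(projections)),
               key=lambda i: iou(projections[i], detected_box))
```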
  • Patent number: 12190465
    Abstract: A system and method for fisheye image processing can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image
    Type: Grant
    Filed: February 9, 2024
    Date of Patent: January 7, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Zhipeng Yan, Pengfei Chen, Panqu Wang
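Mapping an arc of the camera's projected view onto an orthogonal line, per image portion, resembles a rectilinear (gnomonic-style) unwarp. The one-dimensional model below is an illustrative stand-in, not the patented warp:

```python
import math

def arc_to_line(theta, radius):
    """Project an arc point at angle theta (radians from the center ray)
    onto the tangent line orthogonal to that ray, at distance `radius`."""
    return radius * math.tan(theta)
```

Points near the center ray map almost unchanged, while points toward the edge of each portion are stretched, which is what reduces fisheye distortion within that portion.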
  • Publication number: 20240379004
    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
    Type: Application
    Filed: July 25, 2024
    Publication date: November 14, 2024
    Inventor: Panqu WANG
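The wheel-to-lane distance at the heart of this method can be sketched as a perpendicular point-to-line distance in pixel space. The patent operates on segmentation maps; representing the lane by two boundary points is a simplifying assumption:

```python
import math

def point_to_line_distance(px, py, x1, y1, x2, y2):
    """Perpendicular distance from a wheel point (px, py) to the lane line
    through (x1, y1) and (x2, y2)."""
    num = abs((y2 - y1) * px - (x2 - x1) * py + x2 * y1 - y2 * x1)
    den = math.hypot(y2 - y1, x2 - x1)
    return num / den
```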
  • Publication number: 20240346815
    Abstract: A system and method for vehicle wheel detection is disclosed. A particular embodiment can be configured to: receive training image data from a training image data collection system; obtain ground truth data corresponding to the training image data; perform a training phase to train one or more classifiers for processing images of the training image data to detect vehicle wheel objects in the images of the training image data; receive operational image data from an image data collection system associated with an autonomous vehicle; and perform an operational phase including applying the trained one or more classifiers to extract vehicle wheel objects from the operational image data and produce vehicle wheel object data.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 17, 2024
    Inventors: Panqu WANG, Pengfei CHEN
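The two phases above, training classifiers on annotated images and then applying them to operational images, can be sketched with a toy threshold classifier on a single scalar feature. The feature and classifier are illustrative stand-ins for the patent's trained classifiers:

```python
def train_classifier(features, labels):
    """Training phase: learn a threshold separating wheel (True) from
    non-wheel (False) examples on one scalar feature."""
    pos = [f for f, is_wheel in zip(features, labels) if is_wheel]
    neg = [f for f, is_wheel in zip(features, labels) if not is_wheel]
    return (min(pos) + max(neg)) / 2

def detect_wheels(threshold, features):
    """Operational phase: apply the trained classifier to new features."""
    return [f >= threshold for f in features]
```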
  • Publication number: 20240320990
    Abstract: Techniques are described for performing an image processing on frames of a camera located on or in a vehicle. An example technique includes receiving, by a computer located in a vehicle, a first image and a second image from a camera; determining a first set of characteristics about a first set of pixels in the first image and a second set of characteristics about a second set of pixels in the second image; obtaining a motion information for each pixel in the second set by comparing the second set of characteristics with the first set of characteristics; generating, using the motion information for each pixel in the second set, a combined set of characteristics; determining attributes of a road using at least some of the combined set of characteristics; and causing the vehicle to perform a driving related operation in response to the determining the attributes of the road.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 26, 2024
    Inventors: Rundong GE, Long SHA, Haiping WU, Xiangchen ZHAO, Fangjun ZHANG, Zilong GUO, Hongyuan DU, Pengfei CHEN, Panqu WANG
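The temporal fusion step, using per-pixel motion to combine characteristics from two frames, can be sketched in one dimension: each second-frame pixel is averaged with the first-frame pixel its motion vector points back to. The averaging rule is an assumption for illustration:

```python
def combine_with_motion(first, second, flow):
    """Combine per-pixel values from two frames using per-pixel motion.
    flow[i] is how far pixel i moved between the frames (toy 1-D model)."""
    out = []
    for i, value in enumerate(second):
        j = i - flow[i]                       # where this pixel came from
        prev = first[j] if 0 <= j < len(first) else value
        out.append((value + prev) / 2)
    return out
```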
  • Publication number: 20240320987
    Abstract: Techniques are described for performing an image processing technique on frames of a camera located on or in a vehicle. An example technique includes receiving, by a computer located in a vehicle, a first image frame from a camera located on or in the vehicle; obtaining a first combined set of information by combining a first set of information about an object detected from the first image frame and a second set of information about a set of objects detected from a second image frame, where the set of objects includes the object; obtaining, by using the first combined set of information, a second combined set of information about the object from the first image frame and from the second image frame; and causing the vehicle to perform a driving related operation in response to determining a characteristic of the object using the second combined set of information.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 26, 2024
    Inventors: Haiping WU, Long SHA, Hongyuan DU, Zilong GUO, Yihe TANG, Tingyu MAO, Pengfei CHEN, Panqu WANG, Rundong GE
  • Publication number: 20240320988
    Abstract: Techniques are described for performing image processing on images of cameras located on or in a vehicle. An example technique includes receiving a first set of images obtained by a first camera and a second set of images obtained by a second camera; determining, for each image in the first set, a first set of features of a first object; determining, for each image in the second set, a second set of features of a second object; obtaining a third set of features of an object by combining the first set of features and the second set of features; obtaining a fourth set of features of the object by including one or more features of a light signal of the object; determining characteristic(s) indicated by the light signal; and causing a vehicle to perform a driving related operation based on the characteristic(s) of the object.
    Type: Application
    Filed: March 7, 2024
    Publication date: September 26, 2024
    Inventors: Long SHA, Lezhou FENG, Pengfei CHEN, Panqu WANG
  • Publication number: 20240311954
    Abstract: A system and method for fisheye image processing can be configured to: receive fisheye image data from at least one fisheye lens camera associated with an autonomous vehicle, the fisheye image data representing at least one fisheye image frame; partition the fisheye image frame into a plurality of image portions representing portions of the fisheye image frame; warp each of the plurality of image portions to map an arc of a camera projected view into a line corresponding to a mapped target view, the mapped target view being generally orthogonal to a line between a camera center and a center of the arc of the camera projected view; combine the plurality of warped image portions to form a combined resulting fisheye image data set representing recovered or distortion-reduced fisheye image data corresponding to the fisheye image frame; generate auto-calibration data representing a correspondence between pixels in the at least one fisheye image frame and corresponding pixels in the combined resulting fisheye image
    Type: Application
    Filed: February 9, 2024
    Publication date: September 19, 2024
    Inventors: Zhipeng YAN, Pengfei CHEN, Panqu WANG
  • Patent number: 12073724
    Abstract: A system and method for determining car to lane distance is provided. In one aspect, the system includes a camera configured to generate an image, a processor, and a computer-readable memory. The processor is configured to receive the image from the camera, generate a wheel segmentation map representative of one or more wheels detected in the image, and generate a lane segmentation map representative of one or more lanes detected in the image. For at least one of the wheels in the wheel segmentation map, the processor is also configured to determine a distance between the wheel and at least one nearby lane in the lane segmentation map. The processor is further configured to determine a distance between a vehicle in the image and the lane based on the distance between the wheel and the lane.
    Type: Grant
    Filed: June 15, 2023
    Date of Patent: August 27, 2024
    Assignee: TUSIMPLE, INC.
    Inventor: Panqu Wang
  • Patent number: 12073324
    Abstract: A system and method for taillight signal recognition using a convolutional neural network is disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: August 27, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Panqu Wang, Tian Li
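The multi-frame stage, recognizing a taillight status across image frames in temporal succession, can be sketched as a vote over per-frame predictions. Majority voting is an assumed aggregation rule, not the patent's network:

```python
def temporal_taillight_status(frame_statuses):
    """Aggregate per-frame illumination predictions (True = lit) into a
    single multi-frame status by majority vote."""
    lit = sum(frame_statuses)
    return lit * 2 > len(frame_statuses)
```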
  • Publication number: 20240265710
    Abstract: The present disclosure provides methods and systems for operating an autonomous vehicle. In some embodiments, the system may obtain, by a camera associated with an autonomous vehicle, an image of an environment of the autonomous vehicle, the environment including a road on which the autonomous vehicle is operating and an occlusion on the road. The system may identify the occlusion in the image based on map information of the environment and at least one camera parameter of the camera for obtaining the image. The system may identify an object represented in the image, and determine a confidence score relating to the object. The confidence score may indicate a likelihood a representation of the object in the image is impacted by the occlusion. The system may determine an operation algorithm based on the confidence score; and cause the autonomous vehicle to operate based on the operation algorithm.
    Type: Application
    Filed: September 27, 2023
    Publication date: August 8, 2024
    Inventors: Zhe CHEN, Yizhe ZHAO, Lingting GE, Panqu WANG
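The confidence score, indicating how likely the object's representation is impacted by the occlusion, can be sketched as the unoccluded fraction of the object's image box. The scoring formula is an assumption for illustration:

```python
def occlusion_confidence(obj_box, occ_box):
    """Confidence that a detection is unaffected by an occlusion: one minus
    the fraction of the object box (x1, y1, x2, y2) covered by occ_box."""
    ix1, iy1 = max(obj_box[0], occ_box[0]), max(obj_box[1], occ_box[1])
    ix2, iy2 = min(obj_box[2], occ_box[2]), min(obj_box[3], occ_box[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = (obj_box[2] - obj_box[0]) * (obj_box[3] - obj_box[1])
    return 1 - inter / area
```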
  • Patent number: 12033396
    Abstract: A system and method for three-dimensional (3D) object detection is disclosed. A particular embodiment can be configured to: receive image data from a camera associated with a vehicle, the image data representing an image frame; use a machine learning module to determine at least one pixel coordinate of a two-dimensional (2D) bounding box around an object in the image frame; use the machine learning module to determine at least one vertex of a three-dimensional (3D) bounding box around the object; obtain camera calibration information associated with the camera; and determine 3D attributes of the object using the 3D bounding box and the camera calibration information.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: July 9, 2024
    Assignee: TUSIMPLE, INC.
    Inventor: Panqu Wang
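Using camera calibration information to lift 2D detections toward 3D attributes typically involves back-projecting pixels through the pinhole model. A minimal sketch, assuming standard intrinsics (fx, fy focal lengths; cx, cy principal point):

```python
def pixel_to_camera_ray(u, v, fx, fy, cx, cy):
    """Back-project pixel (u, v) into a 3D viewing ray in camera coordinates,
    normalized so the depth component is 1."""
    return ((u - cx) / fx, (v - cy) / fy, 1.0)
```

Intersecting such rays with the 3D bounding box vertices is one way to recover metric attributes like object size and position.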
  • Publication number: 20240203135
    Abstract: Techniques are described for autonomous driving operation that includes receiving, by a computer located in a vehicle, an image from a camera located on the vehicle while the vehicle is operating on a road, wherein the image includes a plurality of lanes of the road; for each of the plurality of lanes: obtaining, from a map database stored in the computer, a set of values that describe locations of boundaries of a lane; dividing the lane into a plurality of polygons; rendering the plurality of polygons onto the image; and determining identifiers of lane segments of the lane; determining one or more characteristics of a lane segment on which the vehicle is operating based on an identifier of the lane segment; and causing the vehicle to perform a driving related operation in response to the one or more characteristics of the lane segment on which the vehicle is operating.
    Type: Application
    Filed: September 26, 2023
    Publication date: June 20, 2024
    Inventors: Yizhe ZHAO, Lingting GE, Panqu WANG
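Dividing a lane into polygons from its boundary values, as described above, can be sketched by pairing consecutive left and right boundary points into quadrilaterals. Representing boundaries as paired point lists is an assumption about the map data:

```python
def lane_to_polygons(left_boundary, right_boundary):
    """Divide a lane given by paired left/right boundary points into
    quadrilateral segments, one per consecutive point pair."""
    polys = []
    for i in range(len(left_boundary) - 1):
        polys.append((left_boundary[i], left_boundary[i + 1],
                      right_boundary[i + 1], right_boundary[i]))
    return polys
```

Each quadrilateral can then be rendered onto the camera image and tagged with its lane-segment identifier.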