Patents by Inventor Lingting GE

Lingting GE has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12276982
    Abstract: A system installed in a vehicle includes a first group of sensing devices configured to allow a first level of autonomous operation of the vehicle; a second group of sensing devices configured to allow a second level of autonomous operation of the vehicle, the second group of sensing devices including primary sensing devices and backup sensing devices; a third group of sensing devices configured to allow the vehicle to perform a safe stop maneuver; and a control element communicatively coupled to the first group of sensing devices, the second group of sensing devices, and the third group of sensing devices. The control element is configured to: receive data from at least one of the first group, the second group, or the third group of sensing devices, and provide a control signal to a sensing device based on categorization information indicating a group to which the sensing device belongs.
    Type: Grant
    Filed: February 13, 2023
    Date of Patent: April 15, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Xiaoling Han, Chenzhe Qian, Chiyu Zhang, Charles A. Price, Joshua Miguel Rodriguez, Lei Nie, Lingting Ge, Panqu Wang, Pengfei Chen, Shuhan Yang, Xiangchen Zhao, Xiaodi Hou, Zehua Huang
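    To make the group-based control element above concrete, the following is a minimal Python sketch of routing a control signal by sensor categorization. The group names, sensor identifiers, and mode strings are illustrative assumptions, not the patented implementation.
    ```python
    from enum import Enum, auto

    class SensorGroup(Enum):
        FIRST = auto()            # first group: enables a first level of autonomous operation
        SECOND_PRIMARY = auto()   # second group: primary sensing devices
        SECOND_BACKUP = auto()    # second group: backup sensing devices
        THIRD_SAFE_STOP = auto()  # third group: enables a safe stop maneuver

    # Hypothetical categorization information: sensor id -> group it belongs to.
    CATEGORIZATION = {
        "front_radar": SensorGroup.FIRST,
        "roof_lidar": SensorGroup.SECOND_PRIMARY,
        "side_lidar": SensorGroup.SECOND_BACKUP,
        "rear_camera": SensorGroup.THIRD_SAFE_STOP,
    }

    def control_signal_for(sensor_id: str, vehicle_mode: str) -> str:
        """Pick a control signal for a sensing device based on its group."""
        group = CATEGORIZATION[sensor_id]
        if vehicle_mode == "safe_stop":
            # During a safe-stop maneuver only the third group stays fully active.
            return "activate" if group is SensorGroup.THIRD_SAFE_STOP else "standby"
        if group is SensorGroup.SECOND_BACKUP:
            return "standby"  # backups idle until a primary device degrades
        return "activate"

    print(control_signal_for("side_lidar", "normal"))      # standby
    print(control_signal_for("rear_camera", "safe_stop"))  # activate
    ```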
  • Publication number: 20250086802
    Abstract: A method of processing point cloud information includes converting points in a point cloud obtained from a lidar sensor into a voxel grid; generating, from the voxel grid, sparse voxel features by applying a multi-layer perceptron and one or more max pooling layers that reduce the dimension of the input data; applying a cascade of an encoder that performs an N-stage sparse-to-dense feature operation, a global context pooling (GCP) module comprising a multi-scale feature extractor, and an M-stage decoder that performs a dense-to-sparse feature generation operation, where the GCP module bridges an output of the last of the N stages with an input of the first of the M stages, and N and M are positive integers; and performing one or more perception operations on an output of the M-stage decoder and/or an output of the GCP module.
    Type: Application
    Filed: February 6, 2024
    Publication date: March 13, 2025
    Inventors: Dongqiangzi YE, Zixiang ZHOU, Weijia CHEN, Yufei XIE, Yu WANG, Panqu WANG, Lingting GE
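    The voxelize-then-pool step in the abstract above lends itself to a short sketch. Below is a NumPy-only illustration in which random, untrained weights stand in for the multi-layer perceptron and the encoder/GCP/decoder cascade is omitted; the voxel size and feature dimensions are assumptions for illustration.
    ```python
    import numpy as np

    def voxelize_and_pool(points, voxel_size=0.5, feat_dim=16, seed=0):
        """Sparse voxel features: hash points into voxels, lift each point with a
        tiny MLP, then max-pool the point features inside every occupied voxel."""
        rng = np.random.default_rng(seed)
        w1 = rng.standard_normal((points.shape[1], 32))
        w2 = rng.standard_normal((32, feat_dim))       # untrained stand-in MLP
        feats = np.maximum(points @ w1, 0.0) @ w2      # per-point features

        coords = np.floor(points[:, :3] / voxel_size).astype(np.int64)
        voxels = {}
        for c, f in zip(map(tuple, coords), feats):
            # dimension-reducing max pooling over all points in the voxel
            voxels[c] = np.maximum(voxels.get(c, f), f)
        return voxels  # sparse: only occupied voxel coordinates are kept

    cloud = np.random.default_rng(1).uniform(-5, 5, size=(1000, 4))  # x, y, z, intensity
    sparse = voxelize_and_pool(cloud)
    print(len(sparse), "occupied voxels, feature dim", next(iter(sparse.values())).shape[0])
    ```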
  • Publication number: 20250086828
    Abstract: An image processing method includes performing, using images obtained from one or more sensors onboard a vehicle, a 2-dimensional (2D) feature extraction; performing a 3-dimensional (3D) feature extraction on the images; and detecting objects in the images by fusing detection results from the 2D feature extraction and the 3D feature extraction.
    Type: Application
    Filed: August 29, 2024
    Publication date: March 13, 2025
    Inventors: Dongqiangzi YE, Yufei XIE, Weijia CHEN, Zixiang ZHOU, Lingting GE
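    The abstract above does not specify how the 2D and 3D detection results are fused; one common late-fusion strategy is to match projected 3D detections against 2D detections by overlap and combine their scores. The greedy IoU matching below is an assumption used only to illustrate that idea.
    ```python
    def iou(a, b):
        """IoU of two [x1, y1, x2, y2] boxes."""
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    def fuse_detections(boxes_2d, boxes_3d_projected, iou_thresh=0.5):
        """Greedy late fusion: pair each projected 3D detection with the best
        overlapping 2D detection and average their confidence scores."""
        fused, used = [], set()
        for b3, s3 in boxes_3d_projected:
            best, best_iou = None, iou_thresh
            for i, (b2, s2) in enumerate(boxes_2d):
                if i not in used and iou(b2, b3) > best_iou:
                    best, best_iou = i, iou(b2, b3)
            if best is not None:
                used.add(best)
                fused.append((boxes_2d[best][0], 0.5 * (boxes_2d[best][1] + s3)))
        return fused

    dets_2d = [([100, 80, 180, 200], 0.9)]
    dets_3d = [([105, 85, 175, 205], 0.7)]  # 3D box already projected to the image
    print(fuse_detections(dets_2d, dets_3d))
    ```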
  • Publication number: 20250085115
    Abstract: A computer-implemented method of trajectory prediction includes obtaining a first cross-attention between a vectorized representation of a road map near a vehicle and information obtained from a rasterized representation of an environment near the vehicle by processing through a first cross-attention stage; obtaining a second cross-attention between a vectorized representation of a vehicle history and information obtained from the rasterized representation by processing through a second cross-attention stage; operating a scene encoder on the first cross-attention and the second cross-attention; operating a trajectory decoder on an output of the scene encoder; and obtaining one or more trajectory predictions by performing one or more queries on the trajectory decoder.
    Type: Application
    Filed: November 3, 2023
    Publication date: March 13, 2025
    Inventors: Hao XIAO, Yiqian GAN, Ethan ZHANG, Xin YE, Yizhe ZHAO, Zhe HUANG, Lingting GE, Robert August ROSSI, JR.
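    A single cross-attention stage of the kind named in the abstract above can be sketched in a few lines of NumPy: vectorized tokens act as queries that attend over features taken from a rasterized scene. The dimensions and random weights below are illustrative stand-ins for the trained stages, not the application's networks.
    ```python
    import numpy as np

    def cross_attention(queries, keys_values, d_k=32, seed=0):
        """Single-head cross-attention: vectorized tokens (queries) attend over
        features pooled from a rasterized scene representation (keys/values)."""
        rng = np.random.default_rng(seed)
        wq = rng.standard_normal((queries.shape[1], d_k))
        wk = rng.standard_normal((keys_values.shape[1], d_k))
        wv = rng.standard_normal((keys_values.shape[1], d_k))
        q, k, v = queries @ wq, keys_values @ wk, keys_values @ wv
        scores = q @ k.T / np.sqrt(d_k)
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)        # softmax over raster cells
        return attn @ v

    map_tokens = np.random.default_rng(1).standard_normal((20, 8))     # polyline segments
    raster_feats = np.random.default_rng(2).standard_normal((64, 16))  # flattened BEV cells
    print(cross_attention(map_tokens, raster_feats).shape)             # (20, 32)
    ```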
  • Publication number: 20250074463
    Abstract: A method of predicting vehicle trajectory includes operating a scene encoder on an environmental representation surrounding a vehicle; concatenating an output of the scene encoder with a history trajectory; applying a sequence encoder to a result of the concatenating; refining an output of the sequence encoder based on the history trajectory; and generating one or more predicted future trajectories by operating a decoder on an output of the refining.
    Type: Application
    Filed: February 6, 2024
    Publication date: March 6, 2025
    Inventors: Ethan ZHANG, Hao XIAO, Yiqian GAN, Yizhe ZHAO, Zhe HUANG, Lingting GE
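    The concatenate → sequence-encode → refine → decode flow described above can be traced with a toy NumPy sketch. The stand-in linear layers, feature sizes, and trajectory counts below are assumptions made only to show the data flow, not the trained models of the application.
    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    linear = lambda x, n: x @ rng.standard_normal((x.shape[-1], n))  # untrained stand-in layer

    scene = rng.standard_normal((1, 64))    # scene encoder output (one agent, 64-d)
    history = rng.standard_normal((10, 4))  # 10 past steps: x, y, vx, vy

    # Concatenate the scene context onto every history step, run a stand-in
    # sequence encoder, then refine its output against the raw history again.
    tokens = np.concatenate([history, np.repeat(scene, len(history), axis=0)], axis=1)
    seq_feat = np.tanh(linear(tokens, 128)).mean(axis=0, keepdims=True)  # sequence encoder
    refined = np.tanh(linear(np.concatenate([seq_feat, history[-1:, :]], axis=1), 128))

    # Decoder emits K candidate future trajectories of T steps (x, y).
    K, T = 3, 12
    futures = linear(refined, K * T * 2).reshape(K, T, 2)
    print(futures.shape)  # (3, 12, 2)
    ```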
  • Patent number: 12243428
    Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
    Type: Grant
    Filed: March 14, 2023
    Date of Patent: March 4, 2025
    Assignee: TUSIMPLE, INC.
    Inventors: Zhipeng Yan, Lingting Ge, Pengfei Chen, Panqu Wang
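    The warping step in the abstract above amounts to a perspective rectification of the side camera's view. Here is a short sketch assuming OpenCV is available; the quadrilateral corners standing in for the strip along the line parallel to the vehicle's side are made-up values.
    ```python
    import cv2
    import numpy as np

    def warp_lateral_view(image, roadside_quad, out_size=(400, 600)):
        """Rectify a side-camera image so that a strip along a line parallel to
        the vehicle's side maps to an upright rectangle."""
        w, h = out_size
        dst = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
        H = cv2.getPerspectiveTransform(np.float32(roadside_quad), dst)
        return cv2.warpPerspective(image, H, out_size)

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)           # placeholder camera frame
    quad = [(300, 200), (1000, 220), (1200, 700), (100, 680)]  # assumed road-strip corners
    warped = warp_lateral_view(frame, quad)
    print(warped.shape)  # (600, 400, 3)
    ```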
  • Publication number: 20250050913
    Abstract: Techniques are described for operating a vehicle using sensor data provided by one or more ultrasonic sensors located on or in the vehicle. An example method includes receiving, by a computer located in a vehicle, data from an ultrasonic sensor located on the vehicle, where the data includes a first set of coordinates of two points associated with a location where an object is detected by the ultrasonic sensor; determining a second set of coordinates associated with a point in between the two points; performing a first determination that the second set of coordinates is associated with a lane or a road on which the vehicle is operating; performing a second determination that the object is movable; and sending, in response to the first determination and the second determination, a message that causes the vehicle to perform a driving related operation while the vehicle is operating on the road.
    Type: Application
    Filed: October 13, 2023
    Publication date: February 13, 2025
    Inventors: Zhe CHEN, Lingting GE, Joshua Miguel RODRIGUEZ, Ji HAN, Panqu WANG, Junjun XIN, Xiaoling HAN, Yizhe ZHAO
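    The midpoint-and-lane test described above can be illustrated with a small, self-contained sketch; the lane polygon, coordinates, and returned message format are hypothetical, and the point-in-polygon check is a generic ray-casting test rather than anything claimed by the application.
    ```python
    def ultrasonic_alert(p1, p2, lane_polygon, object_is_movable):
        """Take the two reported points, check their midpoint against the lane,
        and only then issue a driving-related message."""
        mid = ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
        if point_in_polygon(mid, lane_polygon) and object_is_movable:
            return {"type": "slow_down", "object_at": mid}
        return None

    def point_in_polygon(pt, poly):
        """Generic ray-casting point-in-polygon test."""
        x, y, inside = pt[0], pt[1], False
        for (x1, y1), (x2, y2) in zip(poly, poly[1:] + poly[:1]):
            if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
        return inside

    lane = [(0, 0), (3.5, 0), (3.5, 50), (0, 50)]  # assumed lane corners in meters
    print(ultrasonic_alert((1.0, 10.0), (2.0, 10.5), lane, object_is_movable=True))
    ```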
  • Publication number: 20250054286
    Abstract: An image processing method includes performing, using images obtained from one or more sensors onboard a vehicle, a 2-dimensional (2D) feature extraction; performing a 3-dimensional (3D) feature extraction on the images; and detecting objects in the images by fusing detection results from the 2D feature extraction and the 3D feature extraction.
    Type: Application
    Filed: September 27, 2023
    Publication date: February 13, 2025
    Inventors: Zhe HUANG, Lingting GE, Yizhe ZHAO
  • Publication number: 20250042369
    Abstract: Techniques are described for determining a set of pose information for an object when multiple sets of pose information are determined for a same object from multiple images. An example driving operation method includes obtaining, by a computer located in a vehicle, at least two sets of pose information related to an object located on a road on which the vehicle is operating, where each set of pose information includes characteristic(s) about the object, and where each set of pose information is determined from an image obtained by a camera; determining at least two weighted output vectors; determining, for the object, a set of pose information that are based on a combined weighted output vector that is obtained by combining the at least two weighted output vectors; and causing the vehicle to perform a driving-related operation using the set of pose information for the object.
    Type: Application
    Filed: October 13, 2023
    Publication date: February 6, 2025
    Inventors: Yizhe ZHAO, Zhe CHEN, Lingting GE, Panqu WANG
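    Combining weighted output vectors, as the abstract above describes, can be shown with a minimal sketch. The pose layout and weights below are illustrative assumptions; in particular, averaging a heading this way is only reasonable away from the angle wrap-around.
    ```python
    import numpy as np

    def fuse_poses(pose_vectors, weights):
        """Combine per-image pose estimates of the same object into one set of
        pose information via a normalized weighted sum of the output vectors."""
        w = np.asarray(weights, dtype=float)
        w = w / w.sum()
        return (w[:, None] * np.asarray(pose_vectors, dtype=float)).sum(axis=0)

    # Two cameras saw the same object; vectors are [x, y, heading] (illustrative).
    pose_a = [12.1, 3.4, 0.05]  # weight could come from a detection score
    pose_b = [11.8, 3.6, 0.09]
    print(fuse_poses([pose_a, pose_b], weights=[0.8, 0.4]))
    ```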
  • Publication number: 20250029274
    Abstract: The present disclosure provides methods and systems of sampling-based object pose determination. An example method includes obtaining, for a time frame, sensor data of the object acquired by a plurality of sensors; generating a two-dimensional bounding box of the object in a projection plane based on the sensor data of the time frame; generating a three-dimensional pose model of the object based on the sensor data of the time frame and a model reconstruction algorithm; generating, based on the sensor data, the pose model, and multiple sampling techniques, a plurality of pose hypotheses of the object corresponding to the time frame; generating a hypothesis projection of the object for each of the pose hypotheses by projecting the pose hypothesis onto the projection plane; determining evaluation results by comparing the hypothesis projections with the bounding box; and determining, based on the evaluation results, an object pose for the time frame.
    Type: Application
    Filed: October 17, 2023
    Publication date: January 23, 2025
    Inventors: Yizhe ZHAO, Zhe CHEN, Ye FAN, Lingting GE, Zhe HUANG, Panqu WANG, Xue MEI
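    The hypothesize-project-compare loop above can be sketched briefly. The toy projection function, the corner-difference score, and the hypothesis grid below are assumptions used only to show how sampled poses could be evaluated against a detected 2D box.
    ```python
    def box_error(a, b):
        """Sum of absolute corner differences between two [x1, y1, x2, y2] boxes."""
        return sum(abs(ai - bi) for ai, bi in zip(a, b))

    def pick_pose(hypotheses, project, observed_box):
        """Evaluate each sampled pose hypothesis by projecting it into the image
        and comparing against the detected 2D bounding box; keep the closest."""
        return min(hypotheses, key=lambda h: box_error(project(h), observed_box))

    # Toy projection: a hypothesis (dx, scale) maps to a shifted/scaled image box.
    project = lambda h: [100 + h[0], 80, 100 + h[0] + 60 * h[1], 80 + 40 * h[1]]
    hypotheses = [(dx, s) for dx in (-10, 0, 10) for s in (0.9, 1.0, 1.1)]
    print(pick_pose(hypotheses, project, observed_box=[100, 80, 160, 120]))  # (0, 1.0)
    ```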
  • Publication number: 20250014305
    Abstract: Image processing techniques are described to obtain an image from a camera located on a vehicle while the vehicle is being driven, crop a portion of the obtained image corresponding to a region of interest, detect an object in the cropped portion, add a bounding box around the detected object, determine position(s) of reference point(s) on the bounding box, and determine a location of the detected object in a spatial region where the vehicle is being driven based on the determined position(s) of the reference point(s) on the bounding box.
    Type: Application
    Filed: September 20, 2024
    Publication date: January 9, 2025
    Inventors: Siyuan LIU, Lingting GE, Chenzhe QIAN, Zehua HUANG, Xiaodi HOU
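    One way to go from a reference point on a bounding box to a location in the driving space is flat-ground back-projection through the camera intrinsics; the sketch below assumes that approach, a made-up intrinsic matrix, a fixed camera height, and a bottom-center reference point, none of which are confirmed by the abstract.
    ```python
    import numpy as np

    def ground_point_from_bbox(bbox, K, cam_height=1.6):
        """Back-project the bottom-center reference point of a bounding box onto a
        flat ground plane to localize the object (camera looking along +Z)."""
        u = 0.5 * (bbox[0] + bbox[2])  # bottom-center pixel of the box
        v = bbox[3]
        ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
        if ray[1] <= 0:
            return None                # ray does not hit the ground ahead
        scale = cam_height / ray[1]    # intersect with the plane y = cam_height
        return ray * scale             # (x right, y down, z forward), meters

    K = np.array([[1000.0, 0.0, 640.0],  # assumed intrinsics for a 1280x720 camera
                  [0.0, 1000.0, 360.0],
                  [0.0, 0.0, 1.0]])
    crop_offset = (200, 100)             # top-left of the cropped region of interest
    box_in_crop = [240, 200, 360, 380]   # detection in crop coordinates
    box_full = [box_in_crop[0] + crop_offset[0], box_in_crop[1] + crop_offset[1],
                box_in_crop[2] + crop_offset[0], box_in_crop[3] + crop_offset[1]]
    print(ground_point_from_bbox(box_full, K))  # roughly 13 m ahead, slightly left
    ```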
  • Patent number: 12131499
    Abstract: Techniques are described to estimate orientation of one or more cameras located on a vehicle. The orientation estimation technique can include obtaining an image from a camera located on a vehicle while the vehicle is being driven on a road, determining, from a terrain map, a location of a landmark located at a distance from a location of the vehicle on the road, determining, in the image, pixel locations of the landmark, selecting one pixel location from the determined pixel locations; and calculating values that describe an orientation of the camera using at least an intrinsic matrix and a previously known extrinsic matrix of the camera, where the intrinsic matrix is characterized based on at least the one pixel location and the location of the landmark.
    Type: Grant
    Filed: June 22, 2023
    Date of Patent: October 29, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Yijie Wang, Lingting Ge, Yiqian Gan, Xiaodi Hou
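    A stripped-down version of the single-landmark idea above compares the viewing ray of the observed pixel with the map-derived direction to the landmark. The intrinsics, landmark coordinates, and the yaw/pitch-only parameterization below are illustrative assumptions, not the patented calculation.
    ```python
    import numpy as np

    def camera_yaw_pitch(pixel, landmark_cam, K):
        """Estimate yaw/pitch error of a camera from one landmark by comparing the
        ray through the observed pixel with the expected direction to the landmark."""
        ray = np.linalg.inv(K) @ np.array([pixel[0], pixel[1], 1.0])
        tgt = np.asarray(landmark_cam, dtype=float)
        # Angles of each direction (x right, y down, z forward); the difference is
        # the orientation correction to fold back into the extrinsic matrix.
        yaw = np.arctan2(tgt[0], tgt[2]) - np.arctan2(ray[0], ray[2])
        pitch = np.arctan2(tgt[1], tgt[2]) - np.arctan2(ray[1], ray[2])
        return np.degrees(yaw), np.degrees(pitch)

    K = np.array([[1000.0, 0.0, 640.0], [0.0, 1000.0, 360.0], [0.0, 0.0, 1.0]])
    landmark_cam = [2.0, -1.0, 50.0]  # landmark position in the camera frame (meters)
    pixel = (700.0, 345.0)            # where the landmark appears in the image
    print(camera_yaw_pitch(pixel, landmark_cam, K))
    ```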
  • Patent number: 12100190
    Abstract: Image processing techniques are described to obtain an image from a camera located on a vehicle while the vehicle is being driven, crop a portion of the obtained image corresponding to a region of interest, detect an object in the cropped portion, add a bounding box around the detected object, determine position(s) of reference point(s) on the bounding box, and determine a location of the detected object in a spatial region where the vehicle is being driven based on the determined position(s) of the reference point(s) on the bounding box.
    Type: Grant
    Filed: June 15, 2023
    Date of Patent: September 24, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Siyuan Liu, Lingting Ge, Chenzhe Qian, Zehua Huang, Xiaodi Hou
  • Publication number: 20240265710
    Abstract: The present disclosure provides methods and systems for operating an autonomous vehicle. In some embodiments, the system may obtain, by a camera associated with an autonomous vehicle, an image of an environment of the autonomous vehicle, the environment including a road on which the autonomous vehicle is operating and an occlusion on the road. The system may identify the occlusion in the image based on map information of the environment and at least one camera parameter of the camera for obtaining the image. The system may identify an object represented in the image, and determine a confidence score relating to the object. The confidence score may indicate a likelihood a representation of the object in the image is impacted by the occlusion. The system may determine an operation algorithm based on the confidence score; and cause the autonomous vehicle to operate based on the operation algorithm.
    Type: Application
    Filed: September 27, 2023
    Publication date: August 8, 2024
    Inventors: Zhe CHEN, Yizhe ZHAO, Lingting GE, Panqu WANG
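    One simple proxy for the confidence score described above is the fraction of the object's bounding box that falls inside the mapped occlusion. The box-overlap test, the threshold, and the two operating modes in this sketch are assumptions for illustration only.
    ```python
    def overlap_fraction(obj_box, occ_box):
        """Fraction of an object's [x1, y1, x2, y2] box covered by an occlusion box."""
        x1 = max(obj_box[0], occ_box[0]); y1 = max(obj_box[1], occ_box[1])
        x2 = min(obj_box[2], occ_box[2]); y2 = min(obj_box[3], occ_box[3])
        inter = max(0, x2 - x1) * max(0, y2 - y1)
        area = (obj_box[2] - obj_box[0]) * (obj_box[3] - obj_box[1])
        return inter / area if area else 0.0

    def pick_operation_algorithm(obj_box, occ_box, low=0.3):
        """Lower the confidence when the detection overlaps the occlusion, and
        switch to a more cautious operating mode below a threshold."""
        confidence = 1.0 - overlap_fraction(obj_box, occ_box)
        return ("cautious" if confidence < low else "nominal", confidence)

    print(pick_operation_algorithm(obj_box=[100, 100, 200, 200], occ_box=[150, 90, 400, 260]))
    ```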
  • Publication number: 20240203135
    Abstract: Techniques are described for autonomous driving operation that includes receiving, by a computer located in a vehicle, an image from a camera located on the vehicle while the vehicle is operating on a road, wherein the image includes a plurality of lanes of the road; for each of the plurality of lanes: obtaining, from a map database stored in the computer, a set of values that describe locations of boundaries of a lane; dividing the lane into a plurality of polygons; rendering the plurality of polygons onto the image; and determining identifiers of lane segments of the lane; determining one or more characteristics of a lane segment on which the vehicle is operating based on an identifier of the lane segment; and causing the vehicle to perform a driving related operation in response to the one or more characteristics of the lane segment on which the vehicle is operating.
    Type: Application
    Filed: September 26, 2023
    Publication date: June 20, 2024
    Inventors: Yizhe ZHAO, Lingting GE, Panqu WANG
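    Dividing a lane into polygons and rendering them onto the image, as described above, can be sketched with OpenCV. The already-projected boundary points, the two-segment lane, and the shading scheme below are made-up values; a real pipeline would project map coordinates through the camera model first.
    ```python
    import cv2
    import numpy as np

    def render_lane_segments(image, lane_boundaries, segment_ids):
        """Split a lane into quadrilateral segments from its two boundary polylines
        and rasterize each onto the camera image, keyed by its segment identifier."""
        left, right = lane_boundaries  # boundary polylines already in pixel coordinates
        overlay = image.copy()
        for i, seg_id in enumerate(segment_ids):
            quad = np.array([left[i], left[i + 1], right[i + 1], right[i]], np.int32)
            shade = 60 + (seg_id % 3) * 60  # distinguishable fill per segment
            cv2.fillPoly(overlay, [quad], color=(0, shade, 0))
        return overlay

    frame = np.zeros((720, 1280, 3), dtype=np.uint8)
    left = [(500, 700), (560, 500), (600, 380)]   # assumed projected boundary points
    right = [(800, 700), (740, 500), (700, 380)]
    print(render_lane_segments(frame, (left, right), segment_ids=[101, 102]).sum() > 0)
    ```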
  • Publication number: 20240182081
    Abstract: Autonomous vehicles must accommodate various road configurations such as straight roads, curved roads, controlled intersections, uncontrolled intersections, and many others. Autonomous driving systems must make decisions about the speed and distance of traffic and about obstacles, including obstacles that obstruct the view of the autonomous vehicle's sensors. For example, at intersections, the autonomous driving system must identify vehicles in the path of the autonomous vehicle, or potentially in the path based on a planned path, estimate the distance to those vehicles, and estimate the speeds of those vehicles. Then, based on those estimates together with the road configuration and environmental conditions, the autonomous driving system must decide whether it is safe to proceed along the planned path, and when it is safe to proceed.
    Type: Application
    Filed: February 13, 2024
    Publication date: June 6, 2024
    Inventors: Yiqian GAN, Yijie WANG, Xiaodi HOU, Lingting GE
  • Patent number: 11912310
    Abstract: A method includes receiving a series of road images from a side-view camera sensor of an autonomous driving vehicle. For each object from objects captured in the series of road images, a series of bounding boxes in the series of road images is generated, and a direction of travel or stationarity of the object is determined. The method also includes determining a speed of each object for which the direction of travel has been determined and determining, based on the directions of travel, speeds, or stationarity of the objects, whether the autonomous driving vehicle can safely move in a predetermined direction. Furthermore, one or more control signals are sent to the autonomous driving vehicle to cause the autonomous driving vehicle to move or to remain stationary based on determining whether the autonomous driving vehicle can safely move in the predetermined direction.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: February 27, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Yiqian Gan, Yijie Wang, Xiaodi Hou, Lingting Ge
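    The direction/speed/stationarity logic described above can be sketched from bounding-box centers across frames. The pixel-space speed, the "dx < 0 means approaching" convention, and the safety threshold below are assumptions tied to an imagined camera orientation, not the patented decision rule.
    ```python
    def track_motion(centers, dt=0.1, still_px=2.0):
        """Direction of travel, stationarity, and pixel speed of one object from
        the centers of its bounding boxes across consecutive side-view frames."""
        dx = centers[-1][0] - centers[0][0]
        dy = centers[-1][1] - centers[0][1]
        speed = (dx ** 2 + dy ** 2) ** 0.5 / (dt * (len(centers) - 1))
        if abs(dx) < still_px and abs(dy) < still_px:
            return "stationary", 0.0
        return ("approaching" if dx < 0 else "receding"), speed

    def safe_to_move(objects, max_approach_speed=50.0):
        """Allow the maneuver only if no tracked object is approaching too fast."""
        return all(not (d == "approaching" and s > max_approach_speed)
                   for d, s in (track_motion(c) for c in objects))

    tracks = [[(900, 400), (860, 401), (815, 399)],  # approaching object
              [(300, 380), (301, 380), (300, 381)]]  # parked vehicle
    print(safe_to_move(tracks))  # False: the first object closes in too quickly
    ```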
  • Publication number: 20240046654
    Abstract: Devices, systems and methods for fusing scenes from real-time image feeds from on-vehicle cameras in autonomous vehicles to reduce redundancy of the information processed to enable real-time autonomous operation are described. One example of a method for improving perception in an autonomous vehicle includes receiving a plurality of cropped images, wherein each of the plurality of cropped images comprises one or more bounding boxes that correspond to one or more objects in a corresponding cropped image; identifying, based on the metadata in the plurality of cropped images, a first bounding box in a first cropped image and a second bounding box in a second cropped image, wherein the first and second bounding boxes correspond to a common object; and fusing the metadata corresponding to the common object from the first cropped image and the second cropped image to generate an output result for the common object.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: Yijie WANG, Siyuan LIU, Lingting GE, Zehua HUANG
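    A tiny sketch of the bounding-box-plus-metadata fusion described above: it assumes the crops carry their crop origin as metadata and come from the same full camera frame, which is an illustrative simplification of the multi-camera setting in the abstract.
    ```python
    def to_full_image(box, crop_origin):
        """Shift a crop-local bounding box back into full-image coordinates."""
        ox, oy = crop_origin
        return [box[0] + ox, box[1] + oy, box[2] + ox, box[3] + oy]

    def fuse_common_object(det_a, det_b, match_px=30.0):
        """If two detections from different crops land on (nearly) the same spot in
        the full image, merge their metadata into one output result."""
        ba = to_full_image(det_a["box"], det_a["crop_origin"])
        bb = to_full_image(det_b["box"], det_b["crop_origin"])
        ca = ((ba[0] + ba[2]) / 2, (ba[1] + ba[3]) / 2)
        cb = ((bb[0] + bb[2]) / 2, (bb[1] + bb[3]) / 2)
        if ((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5 > match_px:
            return None  # not the same object
        return {"box": [min(ba[0], bb[0]), min(ba[1], bb[1]),
                        max(ba[2], bb[2]), max(ba[3], bb[3])],
                "score": max(det_a["score"], det_b["score"])}

    a = {"box": [10, 20, 90, 120], "crop_origin": (400, 300), "score": 0.81}
    b = {"box": [210, 25, 295, 130], "crop_origin": (200, 295), "score": 0.77}
    print(fuse_common_object(a, b))
    ```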
  • Publication number: 20240046489
    Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame and object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking output data corresponding to the states of the finite state machines for each object.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Inventors: Lingting GE, Pengfei CHEN, Panqu WANG
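    The similarity → association → state-transition loop above maps naturally onto a short sketch. The center-distance similarity, greedy association, and three-state machine below are simple stand-ins; the claimed system may use different similarity data and states.
    ```python
    from dataclasses import dataclass

    @dataclass
    class Track:
        box: list
        state: str = "tentative"  # finite states: tentative -> tracked -> lost
        misses: int = 0

    def similarity(a, b):
        """Crude similarity between boxes: negative center distance (higher is better)."""
        ca = ((a[0] + a[2]) / 2, (a[1] + a[3]) / 2)
        cb = ((b[0] + b[2]) / 2, (b[1] + b[3]) / 2)
        return -((ca[0] - cb[0]) ** 2 + (ca[1] - cb[1]) ** 2) ** 0.5

    def step(tracks, detections, min_sim=-50.0):
        """One tracking step: greedy association on the similarity data, then
        drive each track's finite state machine from the association result."""
        unmatched = list(range(len(detections)))
        for t in tracks:
            best = max(unmatched, key=lambda i: similarity(t.box, detections[i]), default=None)
            if best is not None and similarity(t.box, detections[best]) >= min_sim:
                t.box, t.state, t.misses = detections[best], "tracked", 0
                unmatched.remove(best)
            else:
                t.misses += 1
                t.state = "lost" if t.misses > 2 else t.state
        tracks.extend(Track(detections[i]) for i in unmatched)  # spawn tentative tracks
        return tracks

    tracks = step([Track([100, 100, 150, 160], state="tracked")],
                  [[104, 102, 153, 161], [400, 90, 460, 170]])
    print([(t.state, t.box) for t in tracks])
    ```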
  • Patent number: 11865967
    Abstract: A system comprises a headlight mounted on an autonomous vehicle. The headlight is configured to illuminate at least a portion of a road the autonomous vehicle is on. The system further comprises a control device, associated with the autonomous vehicle, that includes a processor. The processor obtains information about an environment around the autonomous vehicle. The processor determines that at least a portion of the road should be illuminated if the information indicates that an illumination level of the portion of the road is less than a threshold illumination level. The processor adjusts the headlight to illuminate at least the portion of the road in response to determining that at least the portion of the road should be illuminated.
    Type: Grant
    Filed: January 4, 2023
    Date of Patent: January 9, 2024
    Assignee: TUSIMPLE, INC.
    Inventors: Yu-Ju Hsu, Xiaoling Han, Yijing Li, Zehua Huang, Lingting Ge, Panqu Wang, Shuhan Yang
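    The threshold comparison described above reduces to a small control function. The lux values, 0-100 output scale, and proportional step-up policy below are illustrative assumptions, not the patented adjustment logic.
    ```python
    def adjust_headlight(illumination_lux, threshold_lux=40.0, current_level=0):
        """Raise the headlight output when the measured illumination of the road
        portion falls below a threshold; otherwise leave it unchanged."""
        if illumination_lux < threshold_lux:
            # Step the beam up proportionally to the shortfall (illustrative policy).
            deficit = (threshold_lux - illumination_lux) / threshold_lux
            return min(100, current_level + int(round(100 * deficit)))
        return current_level

    print(adjust_headlight(illumination_lux=12.0, current_level=30))  # brighter
    print(adjust_headlight(illumination_lux=85.0, current_level=30))  # unchanged
    ```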