Abstract: Techniques are described for compensating for movements of sensors on a vehicle. A method includes receiving two sets of sensor data from two sets of sensors, where a first set of sensors is located on a roof of a cab of the semi-trailer truck and a second set of sensors is located on a hood of the semi-trailer truck. The method also receives from a height sensor a measured value indicative of a height of a rear portion of the cab of the semi-trailer truck relative to a chassis of the semi-trailer truck, determines two correction values, one for each of the two sets of sensor data, and compensates for the movement of the two sets of sensors by generating two sets of compensated sensor data. The two sets of compensated sensor data are generated by adjusting the two sets of sensor data based on the two correction values.
Type:
Grant
Filed:
September 9, 2019
Date of Patent:
October 6, 2020
Assignee:
TUSIMPLE, INC.
Inventors:
Alan Camyre, Todd Skinner, Juexiao Ning, Qiwei Li, Yishi Liu
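The compensation described in the first abstract above can be sketched in code. This is a minimal illustration, not the patented method: it assumes the height deviation at the rear of the cab maps to a pitch rotation of each sensor set, and all function and parameter names are hypothetical.

```python
import math

def compensate_sensor_data(points, measured_height, nominal_height, sensor_offset):
    """Rotate 3D sensor points to compensate for cab pitch inferred from a
    height-sensor reading (illustrative sketch; the pitch model is an assumption)."""
    # Approximate the cab's pitch angle from the height deviation at the rear
    # of the cab relative to the sensor's longitudinal offset from that point.
    pitch = math.atan2(measured_height - nominal_height, sensor_offset)
    cos_p, sin_p = math.cos(pitch), math.sin(pitch)
    compensated = []
    for x, y, z in points:
        # Rotate each point about the lateral (y) axis by -pitch.
        compensated.append((x * cos_p + z * sin_p, y, -x * sin_p + z * cos_p))
    return compensated
```

The two correction values of the abstract would correspond to calling this with different `sensor_offset` values for the roof-mounted and hood-mounted sensor sets, since the two mounting points move differently as the cab pitches.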
Abstract: The present application describes a computer server. The computer server includes a plurality of layers of fixed plates, each having at least one corresponding component provided thereon. An air inlet and an air outlet are provided on side panels of an outer shell of the computer server. A first set of fans is provided on an inward-facing side of the air inlet, and a second set of fans is provided on an inward-facing side of the air outlet. The first set of fans and the second set of fans generate a high-pressure airflow from the air inlet to the air outlet. The computer server further comprises at least one first heat sink and a second heat sink, wherein the at least one first heat sink is connected to a heat-generating component on the plurality of layers of fixed plates.
Abstract: The present application discloses a method and device for labeling a laser point cloud. The method comprises: receiving data of a laser point cloud; constructing a 3D scene and establishing a 3D coordinate system corresponding to the 3D scene; converting the coordinates of each laser point in the laser point cloud into 3D coordinates in the 3D coordinate system; mapping the laser points included in the laser point cloud into the 3D scene according to their respective 3D coordinates; and labeling the laser points in the 3D scene.
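The coordinate conversion and labeling steps above can be sketched as follows. This is a simplified model assuming the scene coordinate system is a translation and uniform scale of the raw laser coordinates; the function names and the predicate-based labeling are illustrative assumptions.

```python
def to_scene_coordinates(points, origin, scale=1.0):
    """Convert raw laser-point coordinates into a 3D scene coordinate system
    defined by an origin translation and a uniform scale (both hypothetical)."""
    ox, oy, oz = origin
    return [((x - ox) * scale, (y - oy) * scale, (z - oz) * scale)
            for x, y, z in points]

def label_points(scene_points, is_in_region, label):
    """Attach a label to every scene point for which the predicate holds;
    unmatched points keep a None label."""
    return [(p, label if is_in_region(p) else None) for p in scene_points]
```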
Abstract: A system and method for providing multiple agents for decision making, trajectory planning, and control for autonomous vehicles are disclosed. A particular embodiment includes: partitioning a multiple agent autonomous vehicle control module for an autonomous vehicle into a plurality of subsystem agents, the plurality of subsystem agents including a deep computing vehicle control subsystem and a fast response vehicle control subsystem; receiving a task request from a vehicle subsystem; dispatching the task request to the deep computing vehicle control subsystem or the fast response vehicle control subsystem based on content of the task request or a context of the autonomous vehicle; causing execution of the deep computing vehicle control subsystem or the fast response vehicle control subsystem by use of a data processor to produce a vehicle control output; and providing the vehicle control output to a vehicle control subsystem of the autonomous vehicle.
Type:
Grant
Filed:
September 30, 2017
Date of Patent:
September 8, 2020
Assignee:
TUSIMPLE, INC.
Inventors:
Xing Sun, Yufei Zhao, Wutu Lin, Zijie Xuan, Liu Liu, Kai-Chieh Ma
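The dispatch step of the multi-agent abstract above, routing a task request to either the deep-computing or fast-response subsystem based on the request's content or the vehicle's context, can be sketched as a simple heuristic. The keyword list and dictionary fields are assumptions for illustration, not the patented dispatch logic.

```python
def dispatch_task(task, vehicle_context, deep_agent, fast_agent,
                  urgent_keywords=("brake", "collision", "swerve")):
    """Route a task request to the fast-response agent when its content or
    the vehicle context signals urgency, otherwise to the deep-computing
    agent (illustrative heuristic; keyword list is an assumption)."""
    content = task.get("content", "").lower()
    urgent = (any(k in content for k in urgent_keywords)
              or vehicle_context.get("emergency", False))
    agent = fast_agent if urgent else deep_agent
    # Cause execution of the selected subsystem to produce a control output.
    return agent(task)
```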
Abstract: The present application discloses an image transmission device and method. The image transmission device includes a receiver configured to receive pixel data in image data from a camera in sequence and buffer the pixel data into a memory, and determine, upon reception of a line of pixel data, a line number of the line of pixel data in the image data and a frame number of the image data; and a processor configured to obtain the line of pixel data, the line number of the line of pixel data and the frame number of the image data from the receiver, package the obtained line of pixel data, line number of the line of pixel data and frame number of the image data into a data packet, and transmit the data packet to a server.
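The packaging step in the image-transmission abstract above can be sketched with a fixed binary header. The header layout (frame number, line number, payload length) is an assumption for illustration, not the patented packet format.

```python
import struct

def package_line(frame_number, line_number, pixel_bytes):
    """Package one line of pixel data with its frame and line numbers into a
    binary packet. Assumed header: frame number (uint32), line number
    (uint16), payload length (uint16), all big-endian."""
    header = struct.pack(">IHH", frame_number, line_number, len(pixel_bytes))
    return header + pixel_bytes

def unpack_line(packet):
    """Inverse of package_line: recover frame number, line number, and pixels."""
    frame_number, line_number, length = struct.unpack(">IHH", packet[:8])
    return frame_number, line_number, packet[8:8 + length]
```

A receiver-side server would call `unpack_line` on each arriving packet to reassemble image frames line by line.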
Abstract: A system and method for actively selecting and labeling images for semantic segmentation are disclosed. A particular embodiment includes: receiving image data from an image generating device; performing semantic segmentation or other object detection on the received image data to identify and label objects in the image data and produce semantic label image data; determining the quality of the semantic label image data based on prediction probabilities associated with regions or portions of the image; and identifying a region or portion of the image for manual labeling if an associated prediction probability is below a pre-determined threshold.
Type:
Grant
Filed:
June 14, 2017
Date of Patent:
September 1, 2020
Assignee:
TUSIMPLE, INC.
Inventors:
Zhipeng Yan, Zehua Huang, Pengfei Chen, Panqu Wang
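The final step of the active-labeling abstract above, flagging image regions for manual labeling when their prediction probability falls below a pre-determined threshold, can be sketched in a few lines. The threshold value and data layout are assumptions.

```python
def regions_needing_manual_labels(region_probs, threshold=0.7):
    """Return identifiers of image regions whose prediction probability falls
    below the threshold, flagging them for manual labeling (the threshold
    value here is an illustrative assumption)."""
    return [region for region, prob in region_probs.items() if prob < threshold]
```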
Abstract: A method of localization, embodied on a non-transitory computer readable storage medium storing one or more programs, is disclosed. The one or more programs comprise instructions which, when executed by a computing device, cause the computing device, by way of one or more autonomous vehicle driving modules, to process images from a camera and data from a LiDAR using the following steps: voxelizing a 3D submap and a global map into voxels; estimating the distribution of 3D points within the voxels using a probabilistic model; extracting features from the 3D submap and the global map; and classifying the extracted features into classes.
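The voxelization and distribution-estimation steps above can be sketched as follows. Per-voxel mean and per-axis variance stand in for the probabilistic model of the abstract; the grid keying and function names are illustrative assumptions.

```python
import math
from collections import defaultdict

def voxelize(points, voxel_size):
    """Group 3D points into cubic voxels keyed by integer grid indices."""
    voxels = defaultdict(list)
    for p in points:
        key = tuple(int(math.floor(c / voxel_size)) for c in p)
        voxels[key].append(p)
    return voxels

def voxel_statistics(voxels):
    """Per-voxel mean and per-axis variance: a simple stand-in for estimating
    the distribution of 3D points within each voxel."""
    stats = {}
    for key, pts in voxels.items():
        n = len(pts)
        mean = tuple(sum(p[i] for p in pts) / n for i in range(3))
        var = tuple(sum((p[i] - mean[i]) ** 2 for p in pts) / n for i in range(3))
        stats[key] = (mean, var)
    return stats
```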
Abstract: A system and method for adaptive cruise control with proximate vehicle detection are disclosed. The example embodiment can be configured for: receiving input object data from a subsystem of a host vehicle, the input object data including distance data and velocity data relative to detected target vehicles; detecting the presence of any target vehicles within a sensitive zone in front of the host vehicle, to the left of the host vehicle, and to the right of the host vehicle; determining a relative speed and a separation distance between each of the detected target vehicles and the host vehicle; and generating a velocity command to adjust a speed of the host vehicle based on the relative speeds and separation distances between the host vehicle and the detected target vehicles to maintain a safe separation between the host vehicle and the target vehicles.
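The sensitive-zone detection step above can be sketched from longitudinal/lateral offsets. The zone dimensions and the rectangular zone shapes are assumptions for illustration, not the patented geometry.

```python
def detect_in_zones(targets, front_range=60.0, side_range=20.0, lane_half_width=1.8):
    """Classify detected target vehicles into front/left/right sensitive zones
    from their longitudinal (x, positive ahead) and lateral (y, positive left)
    offsets relative to the host vehicle. Zone dimensions are assumptions."""
    zones = {"front": [], "left": [], "right": []}
    for tid, (x, y) in targets.items():
        if 0.0 < x <= front_range and abs(y) <= lane_half_width:
            zones["front"].append(tid)
        elif 0.0 < x <= side_range and y > lane_half_width:
            zones["left"].append(tid)
        elif 0.0 < x <= side_range and y < -lane_half_width:
            zones["right"].append(tid)
    return zones
```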
Abstract: A system and method for adaptive cruise control for low speed following are disclosed. A particular embodiment includes: receiving input object data from a subsystem of an autonomous vehicle, the input object data including distance data and velocity data relative to a lead vehicle; generating a weighted distance differential corresponding to a weighted difference between an actual distance between the autonomous vehicle and the lead vehicle and a desired distance between the autonomous vehicle and the lead vehicle; generating a weighted velocity differential corresponding to a weighted difference between a velocity of the autonomous vehicle and a velocity of the lead vehicle; combining the weighted distance differential and the weighted velocity differential with the velocity of the lead vehicle to produce a velocity command for the autonomous vehicle; adjusting the velocity command using a dynamic gain; and controlling the autonomous vehicle to conform to the adjusted velocity command.
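The low-speed-following command described above combines a weighted distance differential and a weighted velocity differential with the lead vehicle's velocity, then adjusts the result with a dynamic gain. A minimal sketch of that arithmetic, with illustrative weight and gain values (the actual values and gain schedule are not given in the abstract):

```python
def velocity_command(d_actual, d_desired, v_ego, v_lead,
                     w_dist=0.5, w_vel=0.8, dynamic_gain=1.0):
    """Velocity command for low-speed following: lead-vehicle speed plus
    weighted distance and velocity differentials, scaled by a dynamic gain
    (weights and gain values here are assumptions)."""
    dist_term = w_dist * (d_actual - d_desired)  # weighted distance differential
    vel_term = w_vel * (v_lead - v_ego)          # weighted velocity differential
    return dynamic_gain * (v_lead + dist_term + vel_term)
```

With the vehicle at the desired gap and matching the lead's speed, both differentials vanish and the command simply tracks the lead vehicle's velocity.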
Abstract: A system and method for taillight signal recognition using a convolutional neural network are disclosed. An example embodiment includes: receiving a plurality of image frames from one or more image-generating devices of an autonomous vehicle; using a single-frame taillight illumination status annotation dataset and a single-frame taillight mask dataset to recognize a taillight illumination status of a proximate vehicle identified in an image frame of the plurality of image frames, the single-frame taillight illumination status annotation dataset including one or more taillight illumination status conditions of a right or left vehicle taillight signal, the single-frame taillight mask dataset including annotations to isolate a taillight region of a vehicle; and using a multi-frame taillight illumination status dataset to recognize a taillight illumination status of the proximate vehicle in multiple image frames of the plurality of image frames, the multiple image frames being in temporal succession.
Abstract: A system and method for path planning of autonomous vehicles based on gradient are disclosed. A particular embodiment includes: generating and scoring a first suggested trajectory for an autonomous vehicle; generating a trajectory gradient based on the first suggested trajectory; generating and scoring a second suggested trajectory for the autonomous vehicle, the second suggested trajectory being based on the first suggested trajectory and a human driving model; and outputting the second suggested trajectory if the score corresponding to the second suggested trajectory is within a score differential threshold relative to the score corresponding to the first suggested trajectory.
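The acceptance test in the gradient-based planning abstract above, outputting the refined trajectory only when its score stays within a score differential threshold of the original, can be sketched directly. The threshold value and scoring interface are assumptions.

```python
def select_trajectory(first_traj, second_traj, score_fn, score_threshold=0.1):
    """Output the human-model-refined second trajectory when its score is
    within a differential threshold of the first trajectory's score;
    otherwise keep the first (threshold value is an assumption)."""
    first_score = score_fn(first_traj)
    second_score = score_fn(second_traj)
    if abs(second_score - first_score) <= score_threshold:
        return second_traj
    return first_traj
```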
Abstract: A system and method for lateral vehicle detection is disclosed. A particular embodiment can be configured to: receive lateral image data from at least one laterally-facing camera associated with an autonomous vehicle; warp the lateral image data based on a line parallel to a side of the autonomous vehicle; perform object extraction on the warped lateral image data to identify extracted objects in the warped lateral image data; and apply bounding boxes around the extracted objects.
Type:
Grant
Filed:
March 18, 2018
Date of Patent:
June 16, 2020
Assignee:
TUSIMPLE, INC.
Inventors:
Zhipeng Yan, Lingting Ge, Pengfei Chen, Panqu Wang
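The warp-then-extract pipeline in the lateral-detection abstract above can be sketched with a per-row shear standing in for warping against a line parallel to the vehicle's side, followed by a bounding box around above-threshold pixels. Both the shear model and the threshold-based extraction are simplifying assumptions, not the patented method.

```python
def shear_warp(image, shear_per_row):
    """Shear each row of a grayscale image (list of lists) horizontally: a
    minimal stand-in for warping lateral imagery against a line parallel to
    the vehicle's side (the linear shear model is an assumption)."""
    height, width = len(image), len(image[0])
    warped = [[0] * width for _ in range(height)]
    for r, row in enumerate(image):
        shift = int(r * shear_per_row)
        for c, v in enumerate(row):
            if 0 <= c + shift < width:
                warped[r][c + shift] = v
    return warped

def bounding_box(image, threshold=0):
    """Axis-aligned bounding box (rmin, cmin, rmax, cmax) around all pixels
    above threshold, or None when no object pixels are present."""
    coords = [(r, c) for r, row in enumerate(image)
              for c, v in enumerate(row) if v > threshold]
    if not coords:
        return None
    rows = [r for r, _ in coords]
    cols = [c for _, c in coords]
    return min(rows), min(cols), max(rows), max(cols)
```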
Abstract: A system and method for online real-time multi-object tracking is disclosed. A particular embodiment can be configured to: receive image frame data from at least one camera associated with an autonomous vehicle; generate similarity data corresponding to a similarity between object data in a previous image frame compared with object detection results from a current image frame; use the similarity data to generate data association results corresponding to a best matching between the object data in the previous image frame and the object detection results from the current image frame; cause state transitions in finite state machines for each object according to the data association results; and provide, as an output, object tracking data corresponding to the states of the finite state machines for each object.
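The similarity and data-association steps of the tracking abstract above can be sketched with IoU similarity and a greedy best-match pass. Greedy matching is a simplification of the "best matching" described in the abstract, and the IoU threshold is an assumption.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def associate(tracks, detections, min_iou=0.3):
    """Greedy best-match association between previous-frame track boxes and
    current-frame detection boxes using IoU similarity (a simplification of
    the patented matching). Returns (track_index, detection_index) pairs."""
    pairs = sorted(((iou(t, d), ti, di)
                    for ti, t in enumerate(tracks)
                    for di, d in enumerate(detections)), reverse=True)
    matched, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score >= min_iou and ti not in used_t and di not in used_d:
            matched.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matched
```

Downstream, a per-object finite state machine would transition matched tracks to a "tracked" state and unmatched tracks toward a "lost" state, with the machine states forming the tracking output.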