Patents by Inventor Jifei Qian
Jifei Qian has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12221115
Abstract: Techniques for updating data operations in a perception system are discussed herein. A vehicle may use a perception system to capture data about an environment proximate to the vehicle. The perception system may output the data about the environment to a system configured to determine positions of objects relative to the perception system over time. The positions of the objects may be used to estimate an object velocity and may be compared against machine learning model outputs in a self-supervised manner to train the machine learning model to output object velocities based on inputs from the perception system. The output of the machine learning model may include a two-dimensional velocity for objects in the environment. The two-dimensional velocity may be used by a vehicle system such that the vehicle can make environmentally aware operational decisions, which may improve reaction time(s) and/or safety outcomes of the vehicle.
Type: Grant
Filed: April 29, 2022
Date of Patent: February 11, 2025
Assignee: Zoox, Inc.
Inventors: Patrick Blaes, Jifei Qian
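The training signal described in this abstract can be illustrated with a short sketch: tracked positions are finite-differenced into velocity pseudo-labels, which the model's two-dimensional velocity outputs are regressed against. This is a minimal, hypothetical illustration; the array shapes, the finite-difference estimator, and the mean-squared-error objective are assumptions for clarity, not details taken from the patent.

```python
import numpy as np

def velocity_pseudo_labels(positions, timestamps):
    """Estimate per-object 2D velocities by finite-differencing tracked
    positions over time. These serve as self-supervised regression targets;
    the exact estimator used in the patent is not specified here.

    positions:  (T, N, 2) array of tracked x/y positions for N objects
    timestamps: (T,) array of capture times in seconds
    """
    dt = np.diff(timestamps)[:, None, None]          # (T-1, 1, 1)
    return np.diff(positions, axis=0) / dt           # (T-1, N, 2) velocities

def self_supervised_loss(predicted_velocities, pseudo_labels):
    """Mean-squared error between the model's 2D velocity outputs and the
    track-derived pseudo-labels (a stand-in for the training objective)."""
    return float(np.mean((predicted_velocities - pseudo_labels) ** 2))

# Hypothetical usage: two objects tracked over three frames.
ts = np.array([0.0, 0.1, 0.2])
pos = np.array([[[0.0, 0.0], [5.0, 1.0]],
                [[0.5, 0.0], [5.0, 1.2]],
                [[1.0, 0.0], [5.0, 1.4]]])
labels = velocity_pseudo_labels(pos, ts)   # each frame ~ [[5, 0], [0, 2]] m/s
model_out = labels + 0.1                   # placeholder for model predictions
print(self_supervised_loss(model_out, labels))
```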
-
Patent number: 11814084
Abstract: Techniques for determining an output from a plurality of sensor modalities are discussed herein. Features from a radar sensor, a lidar sensor, and an image sensor may be input into respective models to determine respective intermediate outputs associated with a track associated with an object, along with associated confidence levels. The intermediate outputs from a radar model, a lidar model, and a vision model may be input into a fused model to determine a fused confidence level and a fused output associated with the track. The fused confidence level and the individual confidence levels are compared to a threshold to generate the track to transmit to a planning system or prediction system of an autonomous vehicle. Additionally, a vehicle controller can control the autonomous vehicle based on the track and/or on the confidence level(s).
Type: Grant
Filed: December 17, 2021
Date of Patent: November 14, 2023
Assignee: Zoox, Inc.
Inventors: Subhasis Das, Jifei Qian, Liujiang Yan
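The thresholding step, in which the fused confidence level and the per-modality confidence levels gate whether a track is handed to planning or prediction, could look roughly like the sketch below. The `IntermediateOutput` container, the any-over-threshold gating rule, and the default threshold of 0.5 are assumptions for illustration only, not the patent's claimed logic.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class IntermediateOutput:
    track: dict            # modality-specific track estimate (e.g. position, extent)
    confidence: float      # confidence score in [0, 1]

def gate_track(radar: IntermediateOutput,
               lidar: IntermediateOutput,
               vision: IntermediateOutput,
               fused: IntermediateOutput,
               threshold: float = 0.5) -> Optional[dict]:
    """Transmit the fused track only if the fused confidence or any single-
    modality confidence clears the threshold. The exact gating rule is not
    specified by the abstract; this any-over-threshold rule is an assumption.
    """
    confidences = [fused.confidence, radar.confidence,
                   lidar.confidence, vision.confidence]
    if any(c >= threshold for c in confidences):
        return fused.track      # hand off to planning / prediction
    return None                 # suppress the low-confidence detection
```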
-
Publication number: 20230192145
Abstract: Techniques for determining an output from a plurality of sensor modalities are discussed herein. Features from a radar sensor, a lidar sensor, and an image sensor may be input into respective models to determine respective intermediate outputs associated with a track associated with an object, along with associated confidence levels. The intermediate outputs from a radar model, a lidar model, and a vision model may be input into a fused model to determine a fused confidence level and a fused output associated with the track. The fused confidence level and the individual confidence levels are compared to a threshold to generate the track to transmit to a planning system or prediction system of an autonomous vehicle. Additionally, a vehicle controller can control the autonomous vehicle based on the track and/or on the confidence level(s).
Type: Application
Filed: December 17, 2021
Publication date: June 22, 2023
Inventors: Subhasis Das, Jifei Qian, Liujiang Yan
-
Patent number: 11609321
Abstract: Some radar sensors may provide a Doppler measurement indicating a velocity of an object relative to a velocity of the radar sensor. Techniques for determining a two-or-more-dimensional velocity from one or more radar measurements associated with an object may comprise determining a data structure that comprises a yaw assumption and a set of weights to tune the influence of the yaw assumption. Determining the two-or-more-dimensional velocity may further comprise using the data structure as part of a regression algorithm to determine a velocity and/or yaw rate associated with the object.
Type: Grant
Filed: February 19, 2020
Date of Patent: March 21, 2023
Assignee: Zoox, Inc.
Inventors: Anton Mario Bongio Karrman, Michael Carsten Bosse, Subhasis Das, Francesco Papi, Jifei Qian, Shiwei Sheng, Chuang Wang
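One common way to pose this kind of regression is a weighted least-squares fit that maps per-return Doppler (range-rate) measurements to a planar velocity and yaw rate, with the yaw assumption entering as an extra weighted row. The sketch below follows that reading; the measurement model, the prior row, and the parameter names (`yaw_prior`, `yaw_weight`) are assumptions rather than the claimed data structure.

```python
import numpy as np

def fit_velocity_yaw(azimuths, offsets, dopplers, yaw_prior=0.0, yaw_weight=1.0):
    """Weighted least-squares fit of [vx, vy, yaw_rate] from radar returns.

    Each return contributes one equation
        doppler_i = (vx - w*y_i)*cos(a_i) + (vy + w*x_i)*sin(a_i)
    relating the measured range rate to the object's planar velocity (vx, vy)
    and yaw rate w. A prior row pulls w toward `yaw_prior`, with `yaw_weight`
    tuning how strongly that assumption constrains the solution. This is one
    plausible reading of a "yaw assumption plus weights" data structure, not
    the patented implementation.

    azimuths: (N,) sensor-frame azimuth of each return, radians
    offsets:  (N, 2) x/y offsets of each return from the object center
    dopplers: (N,) measured range rates
    """
    cos_a, sin_a = np.cos(azimuths), np.sin(azimuths)
    x, y = offsets[:, 0], offsets[:, 1]
    # One row per return: [cos, sin, x*sin - y*cos] @ [vx, vy, w] = doppler
    A = np.stack([cos_a, sin_a, x * sin_a - y * cos_a], axis=1)
    b = dopplers
    # Append the weighted yaw-assumption row: yaw_weight * w = yaw_weight * prior
    A = np.vstack([A, [0.0, 0.0, yaw_weight]])
    b = np.append(b, yaw_weight * yaw_prior)
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    vx, vy, yaw_rate = solution
    return vx, vy, yaw_rate
```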
-
Publication number: 20230003871
Abstract: Sensors, including radar sensors, may be used to detect objects in an environment. In an example, a vehicle may include one or more radar sensors that sense objects around the vehicle, e.g., so the vehicle can navigate relative to the objects. A plurality of radar points from one or more radar scans are associated with a sensed object, and a representation of the sensed object is determined from the plurality of radar points. The representation may be compared to track information of previously-identified, tracked objects. Based on the comparison, the sensed object may be associated with one of the tracked objects, and, in that case, the track information may be updated based on the representation. Conversely, the comparison may indicate that the sensed object is not associated with any of the tracked objects. In this instance, the representation may be used to generate a new track, e.g., for the newly-sensed object.
Type: Application
Filed: June 30, 2021
Publication date: January 5, 2023
Inventors: Jifei Qian, Joshua Kriser Cohen, Chuang Wang
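The associate-or-create logic of this abstract can be sketched as: summarize the radar points belonging to a sensed object into a representation, compare it against existing tracks, and either update the closest track or spawn a new one. The centroid representation, the Euclidean gating distance, and the dict-based track store below are hypothetical simplifications rather than the published method.

```python
import numpy as np

def summarize_points(points):
    """Collapse the radar points associated with one sensed object into a
    simple representation (here just the x/y centroid); the actual
    representation may carry more state (extent, velocity, etc.)."""
    return np.mean(points, axis=0)          # (2,) centroid

def associate_or_create(representation, tracks, gate=2.0):
    """Compare the sensed-object representation to existing tracks.

    `tracks` is a hypothetical dict mapping integer track_id -> last known
    (2,) position. If the nearest track lies within `gate` meters, update it
    with the new representation; otherwise start a new track.
    """
    if tracks:
        track_id, pos = min(tracks.items(),
                            key=lambda kv: np.linalg.norm(kv[1] - representation))
        if np.linalg.norm(pos - representation) <= gate:
            tracks[track_id] = representation       # update the matched track
            return track_id
    new_id = max(tracks, default=0) + 1             # spawn a new track
    tracks[new_id] = representation
    return new_id
```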
-
Publication number: 20230003872
Abstract: Sensors, including radar sensors, may be used to detect objects in an environment. In an example, a vehicle may include one or more radar sensors that sense objects around the vehicle, e.g., so the vehicle can navigate relative to the objects. A plurality of radar points from one or more radar scans are associated with a sensed object, and a representation of the sensed object is determined from the plurality of radar points. The representation may be compared to track information of previously-identified, tracked objects. Based on the comparison, the sensed object may be associated with one of the tracked objects, and, in that case, the track information may be updated based on the representation. Conversely, the comparison may indicate that the sensed object is not associated with any of the tracked objects. In this instance, the representation may be used to generate a new track, e.g., for the newly-sensed object.
Type: Application
Filed: June 30, 2021
Publication date: January 5, 2023
Inventors: Jifei Qian, Joshua Kriser Cohen, Chuang Wang
-
Patent number: 11520037
Abstract: Techniques for updating data operations in a perception system are discussed herein. A vehicle may use a perception system to capture data about an environment proximate to the vehicle. The perception system may receive state data stored in a cyclic buffer of globally registered detections and occasionally converted to a gridded point cloud in a local reference frame. The two-dimensional gridded point cloud may be processed using one or more neural networks to generate semantic data associated with a scene or physical environment surrounding the vehicle such that the vehicle can make environment-aware operational decisions, which may improve reaction time(s) and/or safety outcomes of the autonomous vehicle.
Type: Grant
Filed: September 30, 2019
Date of Patent: December 6, 2022
Assignee: Zoox, Inc.
Inventors: Anton Mario Bongio Karrman, Cooper Stokes Sloan, Chuang Wang, Joshua Kriser Cohen, Yassen Ivanchev Dobrev, Jifei Qian
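A rough sketch of the buffering and gridding step: detections are kept in a fixed-capacity cyclic buffer in a global frame and, on demand, transformed into the vehicle's local frame and rasterized into a two-dimensional grid suitable as neural-network input. The buffer capacity, grid size, cell resolution, and occupancy-count encoding below are illustrative assumptions, not values from the patent.

```python
import numpy as np
from collections import deque

class DetectionBuffer:
    """Cyclic buffer of detections stored in a global (map) frame, with an
    on-demand conversion to a 2D grid in the vehicle's local frame. Grid
    size, resolution, and capacity are illustrative choices."""

    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)    # oldest entries roll off

    def add(self, global_xy):
        """Store a detection as a globally registered (x, y) point."""
        self.buffer.append(np.asarray(global_xy, dtype=float))

    def to_local_grid(self, vehicle_xy, vehicle_yaw, size=64, resolution=0.5):
        """Transform buffered detections into the vehicle frame and rasterize
        them into a size x size occupancy-count grid with cells of
        `resolution` meters; this grid is what a downstream network would see."""
        grid = np.zeros((size, size), dtype=np.float32)
        c, s = np.cos(-vehicle_yaw), np.sin(-vehicle_yaw)
        half = size * resolution / 2.0
        for p in self.buffer:
            dx, dy = p - np.asarray(vehicle_xy)            # offset in global frame
            lx, ly = c * dx - s * dy, s * dx + c * dy      # rotate into local frame
            col = int((lx + half) / resolution)
            row = int((ly + half) / resolution)
            if 0 <= row < size and 0 <= col < size:
                grid[row, col] += 1.0
        return grid
```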
-
Publication number: 20210255307
Abstract: Some radar sensors may provide a Doppler measurement indicating a velocity of an object relative to a velocity of the radar sensor. Techniques for determining a two-or-more-dimensional velocity from one or more radar measurements associated with an object may comprise determining a data structure that comprises a yaw assumption and a set of weights to tune the influence of the yaw assumption. Determining the two-or-more-dimensional velocity may further comprise using the data structure as part of a regression algorithm to determine a velocity and/or yaw rate associated with the object.
Type: Application
Filed: February 19, 2020
Publication date: August 19, 2021
Inventors: Anton Mario Bongio Karrman, Michael Carsten Bosse, Subhasis Das, Francesco Papi, Jifei Qian, Shiwei Sheng, Chuang Wang
-
Publication number: 20210096241
Abstract: Techniques for updating data operations in a perception system are discussed herein. A vehicle may use a perception system to capture data about an environment proximate to the vehicle. The perception system may receive state data stored in a cyclic buffer of globally registered detections and occasionally converted to a gridded point cloud in a local reference frame. The two-dimensional gridded point cloud may be processed using one or more neural networks to generate semantic data associated with a scene or physical environment surrounding the vehicle such that the vehicle can make environment-aware operational decisions, which may improve reaction time(s) and/or safety outcomes of the autonomous vehicle.
Type: Application
Filed: September 30, 2019
Publication date: April 1, 2021
Inventors: Anton Mario Bongio Karrman, Cooper Stokes Sloan, Chuang Wang, Joshua Kriser Cohen, Yassen Ivanchev Dobrev, Jifei Qian