Patents by Inventor Ankit Laddha

Ankit Laddha has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144010
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance. (See the illustrative code sketch after this listing.)
    Type: Application
    Filed: October 30, 2023
    Publication date: May 2, 2024
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Patent number: 11960290
    Abstract: Systems and methods for trajectory prediction are provided. A method can include obtaining LIDAR data, radar data, and map data; inputting the LIDAR data, the radar data, and the map data into a network model; transforming, by the network model, the radar data into a coordinate frame associated with a most recent radar sweep in the radar data; generating, by the network model, one or more features for each of the LIDAR data, the transformed radar data, and the map data; combining, by the network model, the one or more generated features to generate fused feature data; generating, by the network model, prediction data based at least in part on the fused feature data; and receiving, as an output of the network model, the prediction data. The prediction data can include a respective predicted trajectory for a future time period for one or more detected objects. (See the illustrative code sketch after this listing.)
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: April 16, 2024
    Assignee: UATC, LLC
    Inventors: Ankit Laddha, Meet Pragnesh Shah, Zhiling Huang, Duncan Blake Barber, Matthew A. Langford, Carlos Vallespi-Gonzalez, Sida Zhang
  • Patent number: 11885910
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data. (See the illustrative code sketch after this listing.)
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: January 30, 2024
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
  • Patent number: 11836623
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Grant
    Filed: November 1, 2021
    Date of Patent: December 5, 2023
    Assignee: UATC, LLC
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Patent number: 11762094
    Abstract: Systems and methods for detecting objects and predicting their motion are provided. In particular, a computing system can obtain a plurality of sensor sweeps. The computing system can determine movement data associated with movement of an autonomous vehicle. For each sensor sweep, the computing system can generate an image associated with the sensor sweep. The computing system can extract, using the respective image as input to one or more machine-learned models, feature data from the respective image. The computing system can transform the feature data into a coordinate frame associated with a next time step. The computing system can generate a fused image. The computing system can generate a final fused image. The computing system can predict, based at least in part on the final fused representation of the plurality of sensor sweeps, movement associated with the feature data at one or more time steps in the future. (See the illustrative code sketch after this listing.)
    Type: Grant
    Filed: November 6, 2020
    Date of Patent: September 19, 2023
    Assignee: UATC, LLC
    Inventors: Ankit Laddha, Gregory P. Meyer, Jake Scott Charland, Shivam Gautam, Shreyash Pandey, Carlos Vallespi-Gonzalez, Carl Knox Wellington
  • Patent number: 11703562
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment. (See the illustrative code sketch after this listing.)
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: July 18, 2023
    Assignee: UATC, LLC
    Inventors: Ankit Laddha, Carlos Vallespi-Gonzalez, Duncan Blake Barber, Jacob White, Anurag Kumar
  • Publication number: 20220051035
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Application
    Filed: November 1, 2021
    Publication date: February 17, 2022
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Patent number: 11164016
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: November 2, 2021
    Assignee: UATC, LLC
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Publication number: 20210278539
    Abstract: Systems and methods for detecting objects and predicting their motion are provided. In particular, a computing system can obtain a plurality of sensor sweeps. The computing system can determine movement data associated with movement of an autonomous vehicle. For each sensor sweep, the computing system can generate an image associated with the sensor sweep. The computing system can extract, using the respective image as input to one or more machine-learned models, feature data from the respective image. The computing system can transform the feature data into a coordinate frame associated with a next time step. The computing system can generate a fused image. The computing system can generate a final fused image. The computing system can predict, based at least in part on the final fused representation of the plurality of sensor sweeps, movement associated with the feature data at one or more time steps in the future.
    Type: Application
    Filed: November 6, 2020
    Publication date: September 9, 2021
    Inventors: Ankit Laddha, Gregory P. Meyer, Jake Scott Charland, Shivam Gautam, Shreyash Pandey, Carlos Vallespi-Gonzalez, Carl Knox Wellington
  • Publication number: 20210025989
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Application
    Filed: October 9, 2020
    Publication date: January 28, 2021
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
  • Publication number: 20210003665
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 7, 2021
    Inventors: Ankit Laddha, Carlos Vallespi-Gonzalez, Duncan Blake Barber, Jacob White, Anurag Kumar
  • Patent number: 10809361
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: October 20, 2020
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
  • Publication number: 20190354782
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Application
    Filed: July 18, 2018
    Publication date: November 21, 2019
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Publication number: 20190310651
    Abstract: Generally, the present disclosure is directed to systems and methods for detecting objects of interest and determining location information and motion information for the detected objects based at least in part on sensor data (e.g., LIDAR data) provided from one or more sensor systems (e.g., LIDAR systems) included in an autonomous vehicle. The vehicle's perception system can include a machine-learned detector model that is configured to receive multiple time frames of sensor data and implement curve-fitting of sensor data points over the multiple time frames of sensor data. The machine-learned model can be trained to determine, in response to the multiple time frames of sensor data provided as input, location information descriptive of a location of one or more objects of interest detected within the environment at a given time and motion information descriptive of the motion of each object of interest. (See the illustrative code sketch after this listing.)
    Type: Application
    Filed: June 27, 2018
    Publication date: October 10, 2019
    Inventors: Carlos Vallespi-Gonzalez, Siheng Chen, Abhishek Sen, Ankit Laddha
  • Patent number: 10310087
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application. (See the illustrative code sketch after this listing.)
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: Ankit Laddha, J. Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
  • Publication number: 20180348346
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Application
    Filed: February 28, 2018
    Publication date: December 6, 2018
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
  • Publication number: 20180348374
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: Ankit Laddha, James Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
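
Illustrative code sketches

The short sketches below are simplified, hypothetical illustrations of several techniques described in the abstracts above. They use stand-in models, thresholds, and synthetic data, and they are not the implementations claimed in the patents.

Point classification and clustering into object instances (publication 20240144010; patents 11836623 and 11164016; publications 20220051035 and 20190354782). A minimal NumPy sketch of the described flow, assuming a toy rule-based classifier and a greedy radius clustering in place of the machine-learned components: classify each sensor data point, estimate a per-point property, cluster the classified points into object instances, and aggregate the collected per-point estimates into instance-level estimates.

    import numpy as np

    def classify_points(points):
        """Stand-in per-point classifier: label points above ground height as 'object'."""
        return np.where(points[:, 2] > 0.3, 1, 0)  # 1 = object, 0 = background

    def estimate_point_properties(points):
        """Stand-in per-point property estimate: each point votes for an object center."""
        return points[:, :2]

    def cluster_into_instances(points, labels, radius=1.0):
        """Greedy radius-based clustering of the points classified as objects."""
        idx = np.flatnonzero(labels == 1)
        instances, assigned = [], set()
        for i in idx:
            if i in assigned:
                continue
            dists = np.linalg.norm(points[idx, :2] - points[i, :2], axis=1)
            members = idx[dists < radius]
            assigned.update(members.tolist())
            instances.append(members)
        return instances

    def instance_properties(instances, point_estimates):
        """Aggregate the per-point estimates of each instance into an instance-level estimate."""
        return [point_estimates[m].mean(axis=0) for m in instances]

    points = np.random.default_rng(0).uniform(-5, 5, size=(200, 3))
    labels = classify_points(points)
    estimates = estimate_point_properties(points)
    instances = cluster_into_instances(points, labels)
    print(instance_properties(instances, estimates))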
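
LIDAR, radar, and map fusion for trajectory prediction (patent 11960290). A sketch of the described pipeline under strong simplifying assumptions: radar sweeps are transformed into the frame of the most recent sweep (here with hypothetical 3x3 homogeneous 2-D transforms), each modality is featurized into a bird's-eye-view grid, the features are fused, and a placeholder head outputs trajectories. The feature extractor and prediction head are stand-ins, not the network model described in the abstract.

    import numpy as np

    def transform_to_latest_frame(radar_sweeps, poses):
        """Transform each radar sweep into the coordinate frame of the most recent sweep.

        `poses` are hypothetical 3x3 homogeneous 2-D transforms from each sweep's
        frame into the latest sweep's frame (identity for the latest sweep itself).
        """
        out = []
        for pts, T in zip(radar_sweeps, poses):
            homog = np.c_[pts, np.ones(len(pts))]   # (N, 3) homogeneous points
            out.append((homog @ T.T)[:, :2])
        return np.vstack(out)

    def featurize(points, grid=64, extent=50.0):
        """Stand-in feature extractor: a bird's-eye-view occupancy grid."""
        ij = np.clip(((points[:, :2] + extent) / (2 * extent) * grid).astype(int), 0, grid - 1)
        bev = np.zeros((grid, grid))
        bev[ij[:, 0], ij[:, 1]] = 1.0
        return bev

    def predict_trajectories(lidar_pts, radar_sweeps, radar_poses, map_pts):
        radar_pts = transform_to_latest_frame(radar_sweeps, radar_poses)
        fused = np.stack([featurize(lidar_pts), featurize(radar_pts), featurize(map_pts)])
        # A learned prediction head would consume `fused`; as a placeholder, emit a
        # straight-line future trajectory for a few occupied cells.
        occupied = np.argwhere(fused.max(axis=0) > 0)
        return [np.stack([cell + t * np.array([1, 0]) for t in range(5)]) for cell in occupied[:3]]

    rng = np.random.default_rng(1)
    lidar = rng.uniform(-40, 40, size=(500, 3))
    radar_sweeps = [rng.uniform(-40, 40, size=(50, 2)) for _ in range(3)]
    radar_poses = [np.eye(3)] * 3
    map_pts = rng.uniform(-40, 40, size=(100, 2))
    print(len(predict_trajectories(lidar, radar_sweeps, radar_poses, map_pts)))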
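
Range-view detection with top-view bounding shapes (patents 11885910 and 10809361; publications 20210025989 and 20180348346). A sketch of the representation split described in these entries: project LIDAR points into a compact range-view image (which holds fewer total points than a dense top view), detect objects of interest in the range view with a stand-in nearness rule, then generate a bounding shape for the detections in the top view. For brevity a single box covers all detections, where the described system boxes each object.

    import numpy as np

    def range_view(points, az_bins=360, el_bins=32):
        """Project LIDAR points into an (elevation x azimuth) range image, keeping the nearest return per cell."""
        r = np.linalg.norm(points, axis=1)
        az = ((np.arctan2(points[:, 1], points[:, 0]) + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
        el = np.clip(((points[:, 2] / (r + 1e-6) + 0.5) * el_bins).astype(int), 0, el_bins - 1)
        img = np.full((el_bins, az_bins), np.inf)
        idx = np.full((el_bins, az_bins), -1)
        for i, (e, a, d) in enumerate(zip(el, az, r)):
            if d < img[e, a]:
                img[e, a], idx[e, a] = d, i
        return img, idx

    def detect_in_range_view(img, idx, max_range=20.0):
        """Stand-in detector: treat near returns in the range image as objects of interest."""
        return idx[(idx >= 0) & (img < max_range)]

    def top_view_box(points, detected_idx):
        """Generate an axis-aligned bounding shape around the detections in the top view (x, y)."""
        xy = points[detected_idx, :2]
        return xy.min(axis=0), xy.max(axis=0)

    pts = np.random.default_rng(2).uniform(-30, 30, size=(2000, 3))
    img, idx = range_view(pts)
    detections = detect_in_range_view(img, idx)
    print(top_view_box(pts, detections))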
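
Multi-sweep feature fusion for motion prediction (patent 11762094; publication 20210278539). A sketch of the described loop under simplifying assumptions: render each sensor sweep as a bird's-eye-view image, extract features (a simple box blur stands in for the machine-learned extractor), warp the accumulated features into the next time step's coordinate frame using ego-motion (here an integer cell shift), fuse by summation, and have a placeholder head predict per-cell future movement.

    import numpy as np

    def sweep_to_bev(points, grid=64, extent=30.0):
        """Render one sensor sweep as a bird's-eye-view occupancy image."""
        ij = np.clip(((points[:, :2] + extent) / (2 * extent) * grid).astype(int), 0, grid - 1)
        img = np.zeros((grid, grid))
        img[ij[:, 0], ij[:, 1]] = 1.0
        return img

    def extract_features(img):
        """Stand-in for the machine-learned feature extractor: a 3x3 box blur."""
        pad = np.pad(img, 1)
        return sum(pad[di:di + img.shape[0], dj:dj + img.shape[1]]
                   for di in range(3) for dj in range(3)) / 9.0

    def shift_to_next_frame(feat, ego_shift_cells):
        """Warp features into the next time step's coordinate frame using ego-motion (in grid cells)."""
        return np.roll(feat, shift=ego_shift_cells, axis=(0, 1))

    def fuse_and_predict(sweeps, ego_shifts):
        fused = None
        for pts, shift in zip(sweeps, ego_shifts):
            feat = extract_features(sweep_to_bev(pts))
            fused = feat if fused is None else shift_to_next_frame(fused, shift) + feat
        # A learned head would map `fused` to future motion; emit a constant
        # placeholder displacement for every sufficiently occupied cell.
        flow = np.zeros(fused.shape + (2,))
        flow[fused > 0.5] = np.array([1.0, 0.0])
        return flow

    rng = np.random.default_rng(3)
    sweeps = [rng.uniform(-25, 25, size=(300, 2)) for _ in range(5)]
    print(fuse_and_predict(sweeps, ego_shifts=[(0, 0)] + [(1, 0)] * 4).shape)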
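
Per-bin channels for sensor output segmentation (patent 11703562; publication 20210003665). A sketch of the data preparation described in these entries: associate each sensor return with an azimuth bin spanning the sensor's field of view, build per-bin channels holding range and azimuth, and feed the channels to a segmentation model. A simple range threshold stands in for the machine-learned segmentation model.

    import numpy as np

    def build_bin_channels(returns, n_bins=360):
        """Associate each return with an azimuth bin and build per-bin range/azimuth channels."""
        az = np.arctan2(returns[:, 1], returns[:, 0])
        ranges = np.linalg.norm(returns[:, :2], axis=1)
        bins = ((az + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins
        channels = np.zeros((n_bins, 2))            # channel 0: range, channel 1: azimuth
        for b, r, a in zip(bins, ranges, az):
            if channels[b, 0] == 0 or r < channels[b, 0]:
                channels[b] = (r, a)                # keep the nearest return per bin
        return channels

    def segment(channels, near=10.0):
        """Stand-in segmentation model: label bins with a near return as one semantic segment."""
        occupied = channels[:, 0] > 0
        return np.where(occupied & (channels[:, 0] < near), 1, 0)

    returns = np.random.default_rng(4).uniform(-20, 20, size=(1000, 2))
    channels = build_bin_channels(returns)
    print(int(segment(channels).sum()), "bins assigned to the segment")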
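
Curve-fitting over multiple time frames (publication 20190310651). A sketch of the idea of fitting a curve to an object's observed positions across several frames and reading off its location and motion; an ordinary polynomial fit stands in for the machine-learned detector model described in the abstract.

    import numpy as np

    def fit_motion(timestamps, positions, degree=1):
        """Fit a curve to an object's positions over multiple time frames.

        Returns the estimated location at the latest time and the velocity
        (derivative of the fitted curve) at that time.
        """
        coeffs = [np.polyfit(timestamps, positions[:, d], degree) for d in range(positions.shape[1])]
        t = timestamps[-1]
        location = np.array([np.polyval(c, t) for c in coeffs])
        velocity = np.array([np.polyval(np.polyder(c), t) for c in coeffs])
        return location, velocity

    t = np.array([0.0, 0.1, 0.2, 0.3])
    xy = np.array([[0.0, 0.0], [0.5, 0.1], [1.0, 0.2], [1.5, 0.3]])
    print(fit_motion(t, xy))  # roughly location (1.5, 0.3) and velocity (5.0, 1.0)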
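
LIDAR data matrix with per-channel predictions (patent 10310087; publication 20180348374). A sketch of the described flow: build a data matrix whose channels (here mean range, height, and intensity per azimuth bin) come from the LIDAR data, run it through a model that returns a class prediction and a property estimate per portion of the matrix, and group adjacent positive predictions into object segments for downstream classification and tracking. The model is a placeholder threshold rule, not the machine-learned model in the abstract.

    import numpy as np

    def lidar_to_data_matrix(points, intensities, az_bins=180):
        """Build a data matrix of per-bin channels (mean range, height, intensity) from LIDAR returns."""
        az = ((np.arctan2(points[:, 1], points[:, 0]) + np.pi) / (2 * np.pi) * az_bins).astype(int) % az_bins
        matrix = np.zeros((az_bins, 3))
        counts = np.zeros(az_bins)
        for a, p, intensity in zip(az, points, intensities):
            matrix[a] += (np.linalg.norm(p[:2]), p[2], intensity)
            counts[a] += 1
        nonzero = counts > 0
        matrix[nonzero] /= counts[nonzero, None]
        return matrix

    def stand_in_model(matrix):
        """Placeholder for the machine-learned model: per-bin class prediction and property estimate."""
        classes = ((matrix[:, 0] > 0) & (matrix[:, 0] < 15.0)).astype(int)  # 'object' where a near return exists
        properties = matrix[:, 1]                                           # e.g., estimated height
        return classes, properties

    def object_segments(classes):
        """Group consecutive bins with positive class predictions into object segments."""
        segments, start = [], None
        for i, c in enumerate(classes):
            if c and start is None:
                start = i
            elif not c and start is not None:
                segments.append((start, i - 1))
                start = None
        if start is not None:
            segments.append((start, len(classes) - 1))
        return segments

    rng = np.random.default_rng(5)
    pts = rng.uniform(-20, 20, size=(800, 3))
    matrix = lidar_to_data_matrix(pts, rng.uniform(0.0, 1.0, size=800))
    classes, properties = stand_in_model(matrix)
    print(object_segments(classes)[:3])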