Patents by Inventor Duncan Blake Barber

Duncan Blake Barber has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960290
    Abstract: Systems and methods for trajectory prediction are provided. A method can include obtaining LIDAR data, radar data, and map data; inputting the LIDAR data, the radar data, and the map data into a network model; transforming, by the network model, the radar data into a coordinate frame associated with a most recent radar sweep in the radar data; generating, by the network model, one or more features for each of the LIDAR data, the transformed radar data, and the map data; combining, by the network model, the one or more generated features to generate fused feature data; generating, by the network model, prediction data based at least in part on the fused feature data; and receiving, as an output of the network model, the prediction data. The prediction data can include a respective predicted trajectory for a future time period for one or more detected objects.
    Type: Grant
    Filed: November 11, 2020
    Date of Patent: April 16, 2024
    Assignee: UATC, LLC
    Inventors: Ankit Laddha, Meet Pragnesh Shah, Zhiling Huang, Duncan Blake Barber, Matthew A. Langford, Carlos Vallespi-Gonzalez, Sida Zhang
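The abstract of patent 11960290 above describes a multi-sensor fusion pipeline: radar sweeps are re-expressed in the frame of the most recent sweep, per-modality features are extracted for the LIDAR, transformed radar, and map data, the features are fused, and a prediction head emits a future trajectory per detected object. The following is a minimal sketch of that data flow only; all function names, shapes, pose representations, and the stand-in random "encoders" are hypothetical illustrations, not the patented network model.

```python
# Hypothetical sketch of the fusion-and-prediction flow described in the abstract
# of patent 11960290. Shapes and module names are illustrative only.
import numpy as np

def to_latest_sweep_frame(radar_sweeps, poses):
    """Transform each radar sweep's xy points into the coordinate frame of the
    most recent sweep, using per-sweep 3x3 homogeneous poses (assumed SE(2))."""
    latest_inv = np.linalg.inv(poses[-1])
    transformed = []
    for points, pose in zip(radar_sweeps, poses):
        homo = np.hstack([points, np.ones((points.shape[0], 1))])  # (N, 3)
        transformed.append((latest_inv @ pose @ homo.T).T[:, :2])
    return transformed

def extract_features(data, out_dim=16):
    """Stand-in per-modality feature extractor (a learned encoder in practice)."""
    flat = np.concatenate([np.asarray(d).ravel() for d in data])
    proj = np.random.default_rng(0).standard_normal((flat.size, out_dim))
    return flat @ proj

def predict_trajectories(lidar, radar_sweeps, radar_poses, map_data,
                         num_objects=2, horizon=5):
    """Fuse per-modality features and emit one (horizon, 2) trajectory per object."""
    radar_aligned = to_latest_sweep_frame(radar_sweeps, radar_poses)
    fused = np.concatenate([
        extract_features([lidar]),
        extract_features(radar_aligned),
        extract_features([map_data]),
    ])
    # Stand-in prediction head in place of the learned network output.
    head = np.random.default_rng(1).standard_normal((fused.size, num_objects * horizon * 2))
    return (fused @ head).reshape(num_objects, horizon, 2)
```

In practice each stand-in extractor and the prediction head would be learned components; the sketch only mirrors the ordering of steps recited in the abstract (transform, featurize, fuse, predict).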
  • Patent number: 11703562
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: July 18, 2023
    Assignee: UATC, LLC
    Inventors: Ankit Laddha, Carlos Vallespi-Gonzalez, Duncan Blake Barber, Jacob White, Anurag Kumar
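The abstract of patent 11703562 above describes binning sensor returns across the sensor's field of view and building per-bin channels of range and azimuth that feed a machine-learned segmentation model. Below is a minimal sketch of that binning-and-channel construction under assumed details: the bin count, field of view, channel layout, and the stand-in "model" are hypothetical, not taken from the patent.

```python
# Hypothetical sketch of the per-bin channel construction described in the
# abstract of patent 11703562. Bin count and channel layout are assumptions.
import numpy as np

def build_bin_channels(ranges, azimuths, num_bins=64, fov=(-np.pi, np.pi)):
    """Assign each sensor return to an azimuth bin over the field of view and
    build per-bin channels carrying range and azimuth."""
    lo, hi = fov
    bin_idx = np.clip(((azimuths - lo) / (hi - lo) * num_bins).astype(int),
                      0, num_bins - 1)
    channels = np.zeros((num_bins, 2))  # per bin: [mean range, mean azimuth]
    for b in range(num_bins):
        mask = bin_idx == b
        if mask.any():
            channels[b, 0] = ranges[mask].mean()
            channels[b, 1] = azimuths[mask].mean()
    return channels

def segment(channels, num_classes=3):
    """Stand-in for the machine-learned segmentation model: one label per bin."""
    weights = np.random.default_rng(0).standard_normal((channels.shape[1], num_classes))
    return (channels @ weights).argmax(axis=1)
```

The real model is trained to produce semantic segments; the stand-in linear scorer here only illustrates the input/output shape of the per-bin channels.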
  • Publication number: 20220035376
    Abstract: Systems and methods for trajectory prediction are provided. A method can include obtaining LIDAR data, radar data, and map data; inputting the LIDAR data, the radar data, and the map data into a network model; transforming, by the network model, the radar data into a coordinate frame associated with a most recent radar sweep in the radar data; generating, by the network model, one or more features for each of the LIDAR data, the transformed radar data, and the map data; combining, by the network model, the one or more generated features to generate fused feature data; generating, by the network model, prediction data based at least in part on the fused feature data; and receiving, as an output of the network model, the prediction data. The prediction data can include a respective predicted trajectory for a future time period for one or more detected objects.
    Type: Application
    Filed: November 11, 2020
    Publication date: February 3, 2022
    Inventors: Ankit Laddha, Meet Pragnesh Shah, Zhiling Huang, Duncan Blake Barber, Matthew A. Langford, Carlos Vallespi-Gonzalez, Sida Zhang

  • Publication number: 20210003665
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 7, 2021
    Inventors: Ankit Laddha, Carlos Vallespi-Gonzalez, Duncan Blake Barber, Jacob White, Anurag Kumar
  • Publication number: 20200209853
    Abstract: Systems and methods for determining degradation in perception sensors of an autonomous vehicle are provided. A method can include obtaining, by a computing system comprising one or more computing devices, first data from a first sensor of an autonomous vehicle and second data from a second sensor of the autonomous vehicle. The first data and the second data can include detection level data. The computer-implemented method can further include obtaining, by the computing system, third data from the first sensor. The third data can include processed data. The computer-implemented method can further include determining, by the computing system, a sensor degradation condition for the first sensor based at least in part on the first data, the second data, and the third data, and implementing, by the computing system, a sensor correction action for the first sensor based at least in part on the sensor degradation condition.
    Type: Application
    Filed: February 28, 2019
    Publication date: July 2, 2020
    Inventors: William M. Leach, Scott C. Poeppel, Duncan Blake Barber
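The abstract of publication 20200209853 above describes determining a sensor degradation condition by comparing detection-level data from two sensors with processed data from the first sensor, then applying a correction action. The following minimal sketch illustrates one plausible form of that comparison; the disagreement metric, thresholds, and correction actions are hypothetical placeholders, not values from the application.

```python
# Hypothetical sketch of the cross-sensor consistency check described in
# publication 20200209853. Thresholds and actions are illustrative assumptions.
import numpy as np

def degradation_condition(first_detections, second_detections, first_processed,
                          detection_tol=1.0, processed_tol=0.5):
    """Flag the first sensor as degraded when its detection-level output
    disagrees both with the second sensor and with its own processed data."""
    detection_gap = np.mean(np.abs(first_detections - second_detections))
    processed_gap = np.mean(np.abs(first_detections - first_processed))
    return detection_gap > detection_tol and processed_gap > processed_tol

def correction_action(degraded):
    """Stand-in correction action selected from the degradation condition."""
    return "recalibrate_or_deweight_sensor" if degraded else "no_action"
```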