Patents by Inventor Carlos Vallespi-Gonzalez

Carlos Vallespi-Gonzalez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220035376
    Abstract: Systems and methods for trajectory prediction are provided. A method can include obtaining LIDAR data, radar data, and map data; inputting the LIDAR data, the radar data, and the map data into a network model; transforming, by the network model, the radar data into a coordinate frame associated with a most recent radar sweep in the radar data; generating, by the network model, one or more features for each of the LIDAR data, the transformed radar data, and the map data; combining, by the network model, the one or more generated features to generate fused feature data; generating, by the network model, prediction data based at least in part on the fused feature data; and receiving, as an output of the network model, the prediction data. The prediction data can include a respective predicted trajectory for a future time period for one or more detected objects.
    Type: Application
    Filed: November 11, 2020
    Publication date: February 3, 2022
    Inventors: Ankit Laddha, Meet Pragnesh Shah, Zhiling Huang, Duncan Blake Barber, Matthew A. Langford, Carlos Vallespi-Gonzalez, Sida Zhang
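    Illustrative sketch (hypothetical Python, not the patented network): the outline below mirrors the data flow in the abstract above, transforming older radar sweeps into the frame of the most recent sweep, computing a simple per-modality feature, fusing the features, and emitting placeholder trajectories. All function names, the occupancy-grid feature, and the straight-line "prediction" are assumptions for illustration only.
      import numpy as np

      def to_latest_frame(radar_sweeps, poses):
          """Re-express each radar sweep (N x 2 points) in the coordinate frame
          of the most recent sweep, given a 2D rigid pose (R, t) per sweep."""
          R_last, t_last = poses[-1]
          aligned = []
          for pts, (R, t) in zip(radar_sweeps, poses):
              world = pts @ R.T + t                       # sweep frame -> world
              aligned.append((world - t_last) @ R_last)   # world -> latest frame
          return np.concatenate(aligned, axis=0)

      def bev_feature(points, grid=32, extent=50.0):
          """Toy per-modality feature: a bird's-eye-view occupancy grid."""
          hist, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                      bins=grid, range=[[-extent, extent]] * 2)
          return (hist > 0).astype(np.float32)

      def fuse_and_predict(lidar_pts, radar_sweeps, radar_poses, map_pts, horizon=5):
          radar_aligned = to_latest_frame(radar_sweeps, radar_poses)
          fused = np.stack([bev_feature(lidar_pts),
                            bev_feature(radar_aligned),
                            bev_feature(map_pts)])         # fused feature data
          cells = np.argwhere(fused.sum(axis=0) >= 2)[:3]  # pretend detections
          # Placeholder "prediction": a straight-line trajectory per detection.
          return {tuple(c): [(float(c[0]), float(c[1]) + t) for t in range(horizon)]
                  for c in cells}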
  • Publication number: 20210402991
    Abstract: Systems, devices, and methods for trajectory association and tracking are provided. A method can include obtaining input data indicative of a respective trajectory for each of one or more first objects for a first time step and input data indicative of a respective trajectory for each of one or more second objects for a second time step subsequent to the first time step. The method can include generating, using a machine-learned model, a temporally-consistent trajectory for at least one of the one or more first objects or the one or more second objects based at least in part on the input data and determining a third predicted trajectory for the at least one of the one or more first objects or the one or more second objects for at least the second time step based at least in part on the temporally-consistent trajectory.
    Type: Application
    Filed: July 24, 2020
    Publication date: December 30, 2021
    Inventors: Shivam Gautam, Sidney Zhang, Gregory P. Meyer, Carlos Vallespi-Gonzalez, Brian C. Becker
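    Illustrative sketch (hypothetical Python): a minimal version of the pipeline described above, matching trajectories from step t to step t+1, blending each matched pair into a temporally-consistent trajectory, and rolling it forward as a third predicted trajectory. The greedy matching, blending weight, and constant-velocity rollout are stand-ins for the machine-learned model, not the claimed method.
      import numpy as np

      def associate(prev_trajs, curr_trajs, max_dist=2.0):
          """Greedy nearest-endpoint matching; each trajectory is a (T, 2) array."""
          pairs, used = [], set()
          for i, p in enumerate(prev_trajs):
              dists = [np.linalg.norm(p[-1] - c[0]) for c in curr_trajs]
              j = int(np.argmin(dists))
              if dists[j] < max_dist and j not in used:
                  pairs.append((i, j))
                  used.add(j)
          return pairs

      def temporally_consistent(prev, curr, alpha=0.5):
          """Blend the overlapping waypoints of two trajectories into one estimate."""
          n = min(len(prev), len(curr))
          return alpha * prev[-n:] + (1 - alpha) * curr[:n]

      def third_prediction(traj, steps=3):
          """Constant-velocity rollout from the consistent trajectory."""
          v = traj[-1] - traj[-2]
          return np.stack([traj[-1] + v * (k + 1) for k in range(steps)])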
  • Patent number: 11164016
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: November 2, 2021
    Assignee: UATC, LLC
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
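    Illustrative sketch (hypothetical Python): the functions below trace the three steps named in the abstract, per-point classification and property estimation, clustering a portion of the points into an object instance, and aggregating the per-point outputs into instance-level properties. The range-based "classifier", the fixed clustering radius, and the median aggregation are toy assumptions.
      import numpy as np

      def classify_points(points):
          """Per-point class label and property estimate (here, a crude heading)."""
          labels = (np.linalg.norm(points[:, :2], axis=1) < 20.0).astype(int)
          headings = np.arctan2(points[:, 1], points[:, 0])
          return labels, headings

      def cluster_instance(points, labels, seed_idx, radius=1.5):
          """Group same-class points near a seed point into one object instance."""
          same = labels == labels[seed_idx]
          near = np.linalg.norm(points[:, :2] - points[seed_idx, :2], axis=1) < radius
          return np.where(same & near)[0]

      def instance_properties(points, headings, instance_idx):
          """Aggregate the per-point estimates for the clustered points."""
          return {"centroid": points[instance_idx, :2].mean(axis=0),
                  "heading": float(np.median(headings[instance_idx])),
                  "num_points": int(len(instance_idx))}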
  • Patent number: 11138745
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy some association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy some association criteria.
    Type: Grant
    Filed: July 18, 2018
    Date of Patent: October 5, 2021
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Shivam Gautam
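    Illustrative sketch (hypothetical Python): a compact version of the association loop described above, using a distance gate as the "association subset criteria", an inverse-distance score standing in for the learned association score, and a greedy assignment subject to a minimum-score criterion. None of these specific choices come from the patent.
      import numpy as np

      def candidate_pairs(detections, tracks, gate=5.0):
          """Subset criterion: keep detection/track pairs within a distance gate."""
          return [(i, j) for i, d in enumerate(detections)
                         for j, t in enumerate(tracks)
                         if np.linalg.norm(d - t[-1]) < gate]

      def association_scores(detections, tracks, pairs):
          """Toy score: inverse distance between detection and track endpoint."""
          return {(i, j): 1.0 / (1e-6 + np.linalg.norm(detections[i] - tracks[j][-1]))
                  for (i, j) in pairs}

      def assign(scores, min_score=0.2):
          """Greedy best-score assignment of detections to tracks."""
          out, used_d, used_t = {}, set(), set()
          for (i, j), s in sorted(scores.items(), key=lambda kv: -kv[1]):
              if s >= min_score and i not in used_d and j not in used_t:
                  out[i] = j
                  used_d.add(i)
                  used_t.add(j)
          return out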
  • Publication number: 20210278539
    Abstract: Systems and methods for detecting objects and predicting their motion are provided. In particular, a computing system can obtain a plurality of sensor sweeps. The computing system can determine movement data associated with movement of the autonomous vehicle. For each sensor sweep, the computing system can generate an image associated with the sensor sweep. The computing system can extract, using the respective image as input to one or more machine-learned models, feature data from the respective image. The computing system can transform the feature data into a coordinate frame associated with a next time step. The computing system can generate a fused image. The computing system can generate a final fused image. The computing system can predict, based at least in part on the final fused representation of the plurality of sensor sweeps, movement associated with the feature data at one or more time steps in the future.
    Type: Application
    Filed: November 6, 2020
    Publication date: September 9, 2021
    Inventors: Ankit Laddha, Gregory P. Meyer, Jake Scott Charland, Shivam Gautam, Shreyash Pandey, Carlos Vallespi-Gonzalez, Carl Knox Wellington
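    Illustrative sketch (hypothetical Python): rasterize each sensor sweep into a bird's-eye-view image, shift past feature maps to compensate for ego-motion (standing in for the transform into the next time step's coordinate frame), and fuse them into a single representation. The integer-cell shift and element-wise max fusion are simplifications, not the learned models in the abstract.
      import numpy as np

      def sweep_to_bev(points, grid=64, extent=40.0):
          """Rasterize one sweep's (x, y) points into an occupancy image."""
          img, _, _ = np.histogram2d(points[:, 0], points[:, 1],
                                     bins=grid, range=[[-extent, extent]] * 2)
          return (img > 0).astype(np.float32)

      def warp_to_latest(feat, ego_shift_cells):
          """Translate a feature map by the ego-motion between time steps."""
          return np.roll(feat, shift=ego_shift_cells, axis=(0, 1))

      def fuse_sweeps(sweeps, ego_shifts):
          """Warp every sweep's features into the latest frame and fuse them."""
          warped = [warp_to_latest(sweep_to_bev(pts), shift)
                    for pts, shift in zip(sweeps, ego_shifts)]
          return np.maximum.reduce(warped)   # final fused representation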
  • Publication number: 20210253131
    Abstract: An autonomous vehicle computing system can include a primary perception system configured to receive a plurality of sensor data points as input and generate primary perception data representing a plurality of classifiable objects and a plurality of paths representing tracked motion of the plurality of classifiable objects. The autonomous vehicle computing system can include a secondary perception system configured to receive the plurality of sensor data points as input, cluster a subset of the plurality of sensor data points of the sensor data to generate one or more sensor data point clusters representing one or more unclassifiable objects that are not classifiable by the primary perception system, and generate secondary path data representing tracked motion of the one or more unclassifiable objects. The autonomous vehicle computing system can generate fused perception data based on the primary perception data and the one or more unclassifiable objects.
    Type: Application
    Filed: April 2, 2020
    Publication date: August 19, 2021
    Inventors: Abhishek Sen, Ashton James Fagg, Brian C. Becker, Yang Xu, Nathan Nicolas Pilbrough, Carlos Vallespi-Gonzalez
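    Illustrative sketch (hypothetical Python): a primary path that returns classified objects and the points it consumed, a secondary path that clusters the leftover points into unclassifiable obstacles, and a fusion of the two outputs. The distance-based "classifier" and grid clustering are placeholders for the systems the abstract describes.
      import numpy as np

      def primary_perception(points):
          """Pretend primary system: classify nearby points as one vehicle."""
          mask = np.linalg.norm(points[:, :2], axis=1) < 15.0
          objs = ([{"class": "vehicle", "centroid": points[mask, :2].mean(axis=0)}]
                  if mask.any() else [])
          return objs, mask

      def secondary_perception(points, consumed, cell=2.0):
          """Cluster the unclassified remainder on a coarse grid."""
          rest = points[~consumed]
          clusters = {}
          for key, p in zip(map(tuple, np.floor(rest[:, :2] / cell).astype(int)), rest):
              clusters.setdefault(key, []).append(p[:2])
          return [{"class": "unknown", "centroid": np.mean(c, axis=0)}
                  for c in clusters.values()]

      def fused_perception(points):
          classified, consumed = primary_perception(points)
          return classified + secondary_perception(points, consumed)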
  • Publication number: 20210049378
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with object association and tracking are provided. Input data can be obtained. The input data can be indicative of a detected object within a surrounding environment of an autonomous vehicle and an initial object classification of the detected object at an initial time interval and object tracks at time intervals preceding the initial time interval. Association data can be generated based on the input data and a machine-learned model. The association data can indicate whether the detected object is associated with at least one of the object tracks. An object classification probability distribution can be determined based on the association data. The object classification probability distribution can indicate a probability that the detected object is associated with each respective object classification. The association data and the object classification probability distribution for the detected object can be outputted.
    Type: Application
    Filed: September 6, 2019
    Publication date: February 18, 2021
    Inventors: Shivam Gautam, Brian C. Becker, Carlos Vallespi-Gonzalez, Cole Christian Gulino
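    Illustrative sketch (hypothetical Python): compute association logits between a detected object and the existing object tracks, then turn those associations plus the detection's initial classification into a probability distribution over object classes. The softmax-over-distance logits and the blending rule are invented stand-ins for the machine-learned model.
      import numpy as np

      CLASSES = ("vehicle", "pedestrian", "bicycle")

      def association_logits(detection, tracks):
          """Higher logit for tracks whose last position is closer to the detection."""
          return np.array([-np.linalg.norm(detection["centroid"] - t["positions"][-1])
                           for t in tracks])

      def class_distribution(detection, tracks, logits):
          """Blend the initial classification with the classes of associated tracks."""
          w = np.exp(logits - logits.max())
          w /= w.sum()
          dist = np.full(len(CLASSES), 1e-3)
          dist[CLASSES.index(detection["initial_class"])] += 1.0
          for weight, t in zip(w, tracks):
              dist[CLASSES.index(t["class"])] += weight
          return dist / dist.sum()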
  • Publication number: 20210025989
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Application
    Filed: October 9, 2020
    Publication date: January 28, 2021
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
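    Illustrative sketch (hypothetical Python): build a range-view image, which keeps at most one return per (azimuth, elevation) cell and therefore holds fewer points than a dense top-view grid, and compute a top-view bounding shape for a detected object's points. The bin counts and the axis-aligned box are illustrative choices, not the patented detector.
      import numpy as np

      def range_view(points, h_bins=64, v_bins=16):
          """Project LIDAR points into an (elevation, azimuth) range image."""
          r = np.linalg.norm(points, axis=1)
          az = np.digitize(np.arctan2(points[:, 1], points[:, 0]),
                           np.linspace(-np.pi, np.pi, h_bins)) - 1
          el = np.digitize(np.arcsin(points[:, 2] / np.maximum(r, 1e-6)),
                           np.linspace(-0.4, 0.4, v_bins)) - 1
          img = np.full((v_bins, h_bins), np.inf)
          for a, e, d in zip(az, el, r):
              a = int(np.clip(a, 0, h_bins - 1))
              e = int(np.clip(e, 0, v_bins - 1))
              img[e, a] = min(img[e, a], d)   # keep the nearest return per cell
          return img

      def top_view_box(object_points):
          """Axis-aligned bounding shape [xmin, ymin, xmax, ymax] in the top view."""
          xy = object_points[:, :2]
          return np.concatenate([xy.min(axis=0), xy.max(axis=0)])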
  • Publication number: 20210003665
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices associated with sensor output segmentation are provided. For example, sensor data can be accessed. The sensor data can include sensor data returns representative of an environment detected by a sensor across the sensor's field of view. Each sensor data return can be associated with a respective bin of a plurality of bins corresponding to the field of view of the sensor. Each bin can correspond to a different portion of the sensor's field of view. Channels can be generated for each of the plurality of bins and can include data indicative of a range and an azimuth associated with a sensor data return associated with each bin. Furthermore, a semantic segment of a portion of the sensor data can be generated by inputting the channels for each bin into a machine-learned segmentation model trained to generate an output including the semantic segment.
    Type: Application
    Filed: September 19, 2019
    Publication date: January 7, 2021
    Inventors: Ankit Laddha, Carlos Vallespi-Gonzalez, Duncan Blake Barber, Jacob White, Anurag Kumar
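    Illustrative sketch (hypothetical Python): assign each sensor return to an azimuth bin spanning the field of view, build per-bin channels carrying range and azimuth, and run a threshold "segmenter" in place of the machine-learned segmentation model. The bin count, channel layout, and threshold are assumptions.
      import numpy as np

      def bin_channels(returns, n_bins=360):
          """Per-bin channels (range, azimuth); NaN marks bins with no return."""
          az = np.arctan2(returns[:, 1], returns[:, 0])
          idx = np.minimum(((az + np.pi) / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
          channels = np.full((n_bins, 2), np.nan)
          channels[idx, 0] = np.linalg.norm(returns[:, :2], axis=1)  # range
          channels[idx, 1] = az                                      # azimuth
          return channels

      def segment(channels, near=10.0):
          """Stand-in segmentation: label bins with a close return as segment 1."""
          rng = np.nan_to_num(channels[:, 0], nan=np.inf)
          return (rng < near).astype(int)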
  • Publication number: 20200394474
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
    Type: Application
    Filed: August 31, 2020
    Publication date: December 17, 2020
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, George Totolos, Jr.
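    Illustrative sketch (hypothetical Python): a cheap first-stage pass over sensor-data crops followed by a more expensive second stage that only runs on the survivors, with both sets of characteristics combined into the object output. The mean/std "characteristics" are placeholders for the two machine-learned models in the abstract.
      import numpy as np

      def first_stage(crops, threshold=0.3):
          """First-stage characteristic per crop plus a keep/reject decision."""
          scores = np.array([float(c.mean()) for c in crops])
          return scores, scores > threshold

      def second_stage(crops, keep):
          """Second-stage characteristic computed only for surviving crops."""
          return {int(i): float(crops[i].std()) for i in np.where(keep)[0]}

      def detect(crops):
          s1, keep = first_stage(crops)
          s2 = second_stage(crops, keep)
          return [{"index": i, "stage1": float(s1[i]), "stage2": s2[i]} for i in s2]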
  • Patent number: 10860896
    Abstract: Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: December 8, 2020
    Assignee: Uber Technologies, Inc.
    Inventors: George Totolos, Jr., Joshua Oren Silberman, Daniel Leland Strother, Carlos Vallespi-Gonzalez, David Bruce Parlour
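    Illustrative sketch (hypothetical Python): build a multi-scale image pyramid, aggregate simple features over sliding windows at each scale, and score the windows with a threshold classifier. Nearest-neighbour downsampling and the mean-intensity feature are software stand-ins for the FPGA pipelines described above.
      import numpy as np

      def image_pyramid(image, scales=(1.0, 0.5, 0.25)):
          """Multi-scale pyramid via strided (nearest-neighbour) downsampling."""
          return [image[::max(1, int(round(1 / s))), ::max(1, int(round(1 / s)))]
                  for s in scales]

      def window_features(level, win=8):
          """Aggregate a toy feature (mean intensity) over non-overlapping windows."""
          h, w = level.shape[:2]
          return [((y, x), float(level[y:y + win, x:x + win].mean()))
                  for y in range(0, h - win + 1, win)
                  for x in range(0, w - win + 1, win)]

      def classify(features, threshold=0.6):
          """Stand-in classification model: flag windows above a feature threshold."""
          return [pos for pos, f in features if f > threshold]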
  • Patent number: 10817731
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: October 27, 2020
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, Hilton Keith Bristow
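    Illustrative sketch (hypothetical Python): estimate an object's 3D location from its image detection using a flat-ground assumption and the camera's known mounting, then select the LIDAR points near that estimate. The pinhole/ground-plane math and the fixed radius are simplifying assumptions, not the claimed method.
      import numpy as np

      def estimate_location(rows_below_horizon, cam_height_m, focal_px):
          """Ground-plane range from the pixel row of the object's base."""
          depth = focal_px * cam_height_m / max(rows_below_horizon, 1e-6)
          return np.array([depth, 0.0, 0.0])   # straight ahead of the camera

      def lidar_points_near(location, lidar_points, radius=2.0):
          """Portion of the LIDAR point data corresponding to the object."""
          d = np.linalg.norm(lidar_points[:, :3] - location, axis=1)
          return lidar_points[d < radius]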
  • Patent number: 10809361
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains fewer total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Grant
    Filed: February 28, 2018
    Date of Patent: October 20, 2020
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
  • Patent number: 10762396
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
    Type: Grant
    Filed: May 7, 2018
    Date of Patent: September 1, 2020
    Assignee: UATC, LLC
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, George Totolos, Jr.
  • Patent number: 10664726
    Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
    Type: Grant
    Filed: October 2, 2017
    Date of Patent: May 26, 2020
    Assignee: Deere & Company
    Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-Gonzalez
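    Illustrative sketch (hypothetical Python): extract a simple texture feature at each sampling location in a bulk-grain image, score each location for the presence of a material classification, and aggregate the scores into one quality figure. The mean/std feature, the variance-based score, and the averaging rule are illustrative only.
      import numpy as np

      def patch_features(image, locations, size=16):
          """Mean and standard deviation of the patch at each sampling location."""
          return [(float(image[r:r + size, c:c + size].mean()),
                   float(image[r:r + size, c:c + size].std()))
                  for (r, c) in locations]

      def classification_scores(features, scale=40.0):
          """Toy score: high local variance suggests broken or foreign material."""
          return [min(1.0, std / scale) for _, std in features]

      def grain_quality(scores):
          """Aggregate the per-location scores (1.0 means clean grain)."""
          return 1.0 - float(np.mean(scores))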
  • Patent number: 10489686
    Abstract: An object detection system for an autonomous vehicle processes sensor data, including one or more images, obtained for a road segment on which the autonomous vehicle is being driven. The object detection system compares the images to three-dimensional (3D) environment data for the road segment to determine pixels in the images that correspond to objects not previously identified in the 3D environment data. The object detection system then analyzes the pixels to classify the objects not previously identified in the 3D environment data.
    Type: Grant
    Filed: March 3, 2017
    Date of Patent: November 26, 2019
    Assignee: UATC, LLC
    Inventor: Carlos Vallespi-Gonzalez
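    Illustrative sketch (hypothetical Python): compare a measured depth image against the depth expected from stored 3D environment data, flag pixels that the map does not explain, and group them into coarse regions for classification. The depth-difference test and column grouping are stand-ins for the comparison and classification the abstract describes.
      import numpy as np

      def novel_pixels(measured_depth, expected_depth, tol=1.0):
          """Pixels meaningfully closer than the 3D prior predicts are new objects."""
          return (expected_depth - measured_depth) > tol

      def group_and_classify(mask, min_pixels=50):
          """Group flagged pixels by image column and label the larger groups."""
          return [{"column": int(c), "pixels": int(mask[:, c].sum()),
                   "label": "unmapped_object"}
                  for c in np.where(mask.any(axis=0))[0]
                  if mask[:, c].sum() >= min_pixels]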
  • Publication number: 20190354782
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for detecting objects are provided. For example, the disclosed technology can obtain a representation of sensor data associated with an environment surrounding a vehicle. Further, the sensor data can include sensor data points. A point classification and point property estimation can be determined for each of the sensor data points and a portion of the sensor data points can be clustered into an object instance based on the point classification and point property estimation for each of the sensor data points. A collection of point classifications and point property estimations can be determined for the portion of the sensor data points clustered into the object instance. Furthermore, object instance property estimations for the object instance can be determined based on the collection of point classifications and point property estimations for the portion of the sensor data points clustered into the object instance.
    Type: Application
    Filed: July 18, 2018
    Publication date: November 21, 2019
    Inventors: Eric Randall Kee, Carlos Vallespi-Gonzalez, Gregory P. Meyer, Ankit Laddha
  • Publication number: 20190332875
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, the disclosed technology can include receiving sensor data and map data. The sensor data can include information associated with an environment detected by sensors of a vehicle. The map data can include information associated with traffic signals in the environment. Further, an input representation can be generated based on the sensor data and the map data. The input representation can include regions of interest associated with images of the traffic signals. States of the traffic signals in the environment can be determined, based on the input representation and a machine-learned model. Traffic signal state data that includes a determinative state of the traffic signals can be generated based on the states of the traffic signals.
    Type: Application
    Filed: July 18, 2018
    Publication date: October 31, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, Gregory P. Meyer
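    Illustrative sketch (hypothetical Python): crop the map-provided regions of interest around each traffic signal out of a camera image, classify each crop's state, and resolve the per-signal states into one determinative state. The colour-channel heuristic and the conservative resolution rule replace the machine-learned model named in the abstract.
      import numpy as np

      def crop_rois(image, rois):
          """Cut out each (y0, y1, x0, x1) region of interest from the image."""
          return [image[y0:y1, x0:x1] for (y0, y1, x0, x1) in rois]

      def signal_state(crop):
          """Toy classifier: call the light by its dominant colour channel."""
          r, g, _ = crop.reshape(-1, 3).mean(axis=0)
          return "red" if r > g else "green"

      def determinative_state(states):
          """Conservative resolution across all visible signals."""
          return "red" if any(s != "green" for s in states) else "green"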
  • Publication number: 20190333232
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy some association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy some association criteria.
    Type: Application
    Filed: July 18, 2018
    Publication date: October 31, 2019
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Shivam Gautam
  • Publication number: 20190310651
    Abstract: Generally, the present disclosure is directed to systems and methods for detecting objects of interest and determining location information and motion information for the detected objects based at least in part on sensor data (e.g., LIDAR data) provided from one or more sensor systems (e.g., LIDAR systems) included in the autonomous vehicle. The perception system can include a machine-learned detector model that is configured to receive multiple time frames of sensor data and implement curve-fitting of sensor data points over the multiple time frames of sensor data. The machine-learned model can be trained to determine, in response to the multiple time frames of sensor data provided as input, location information descriptive of a location of one or more objects of interest detected within the environment at a given time and motion information descriptive of the motion of each object of interest.
    Type: Application
    Filed: June 27, 2018
    Publication date: October 10, 2019
    Inventors: Carlos Vallespi-Gonzalez, Siheng Chen, Abhishek Sen, Ankit Laddha
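    Illustrative sketch (hypothetical Python): fit a low-order polynomial to an object's observed positions over several time frames; evaluating the fit and its derivative at the latest time gives the location and motion information the abstract describes. The quadratic fit is an assumption standing in for the machine-learned detector model, and it needs at least three frames.
      import numpy as np

      def fit_location_and_motion(times, positions):
          """Curve-fit (x(t), y(t)) over multiple frames of sensor data."""
          cx = np.polyfit(times, positions[:, 0], deg=2)
          cy = np.polyfit(times, positions[:, 1], deg=2)
          t = times[-1]
          location = np.array([np.polyval(cx, t), np.polyval(cy, t)])
          velocity = np.array([np.polyval(np.polyder(cx), t),
                               np.polyval(np.polyder(cy), t)])
          return location, velocity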