Patents by Inventor Carlos Vallespi-Gonzalez

Carlos Vallespi-Gonzalez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190333232
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for associating objects are provided. For example, the disclosed technology can receive sensor data associated with the detection of objects over time. An association dataset can be generated and can include information associated with object detections of the objects at a most recent time interval and object tracks of the objects at time intervals in the past. A subset of the association dataset including the object detections that satisfy some association subset criteria can be determined. Association scores for the object detections in the subset of the association dataset can be determined. Further, the object detections can be associated with the object tracks based on the association scores for each of the object detections in the subset of the association dataset that satisfy some association criteria.
    Type: Application
    Filed: July 18, 2018
    Publication date: October 31, 2019
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Shivam Gautam
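
To make the association step concrete, the following is a minimal Python sketch of scoring current object detections against existing object tracks and associating them greedily. The centroid-distance score, the gating threshold, and all function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def associate(detections, tracks, max_dist=2.0):
    """Greedily associate current detections with existing object tracks.

    detections, tracks: arrays of shape (N, 2) and (M, 2) holding x/y centroids.
    Returns a list of (detection_index, track_index) pairs.
    """
    if len(detections) == 0 or len(tracks) == 0:
        return []
    # Association score: Euclidean distance between detection and track centroids.
    dists = np.linalg.norm(detections[:, None, :] - tracks[None, :, :], axis=2)
    pairs, used_d, used_t = [], set(), set()
    # Greedy matching: repeatedly take the closest unmatched pair under the gate.
    for d, t in zip(*np.unravel_index(np.argsort(dists, axis=None), dists.shape)):
        if d in used_d or t in used_t or dists[d, t] > max_dist:
            continue
        pairs.append((int(d), int(t)))
        used_d.add(d)
        used_t.add(t)
    return pairs

# Example: two detections at the latest time interval, two existing tracks.
dets = np.array([[1.0, 1.1], [5.0, 5.2]])
trks = np.array([[0.9, 1.0], [5.1, 5.0]])
print(associate(dets, trks))  # [(0, 0), (1, 1)]
```
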
  • Publication number: 20190332875
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, the disclosed technology can include receiving sensor data and map data. The sensor data can include information associated with an environment detected by sensors of a vehicle. The map data can include information associated with traffic signals in the environment. Further, an input representation can be generated based on the sensor data and the map data. The input representation can include regions of interest associated with images of the traffic signals. States of the traffic signals in the environment can be determined, based on the input representation and a machine-learned model. Traffic signal state data that includes a determinative state of the traffic signals can be generated based on the states of the traffic signals.
    Type: Application
    Filed: July 18, 2018
    Publication date: October 31, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, Gregory P. Meyer
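
A rough Python sketch of the idea of classifying map-derived regions of interest and combining the per-region scores into a determinative signal state. The colour heuristic in `classify_roi` is only a placeholder for the machine-learned model, and every name and value here is an assumption made for illustration.

```python
import numpy as np

STATES = ["red", "yellow", "green"]

def classify_roi(roi):
    # Placeholder for the machine-learned model: score each state from the
    # mean intensity of the red and green channels of the cropped region.
    r, g = roi[..., 0].mean(), roi[..., 1].mean()
    return np.array([r, (r + g) / 2.0, g])

def traffic_signal_state(image, rois):
    """Combine per-ROI scores into one determinative state for a signal.

    image: HxWx3 array; rois: list of (x0, y0, x1, y1) boxes taken from map data.
    """
    total = np.zeros(len(STATES))
    for x0, y0, x1, y1 in rois:
        total += classify_roi(image[y0:y1, x0:x1])
    return STATES[int(np.argmax(total))]

# Toy example: a mostly green crop yields a "green" determination.
img = np.zeros((100, 100, 3))
img[40:60, 40:60, 1] = 1.0
print(traffic_signal_state(img, [(35, 35, 65, 65)]))  # green
```
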
  • Publication number: 20190310651
    Abstract: Generally, the present disclosure is directed to systems and methods for detecting objects of interest and determining location information and motion information for the detected objects based at least in part on sensor data (e.g., LIDAR data) provided from one or more sensor systems (e.g., LIDAR systems) included in the autonomous vehicle. The perception system can include a machine-learned detector model that is configured to receive multiple time frames of sensor data and implement curve-fitting of sensor data points over the multiple time frames of sensor data. The machine-learned model can be trained to determine, in response to the multiple time frames of sensor data provided as input, location information descriptive of a location of one or more objects of interest detected within the environment at a given time and motion information descriptive of the motion of each object of interest.
    Type: Application
    Filed: June 27, 2018
    Publication date: October 10, 2019
    Inventors: Carlos Vallespi-Gonzalez, Siheng Chen, Abhishek Sen, Ankit Laddha
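
The curve-fitting idea can be illustrated with a short Python sketch that fits a polynomial to an object's centroid across several time frames and reads off its location and velocity at the latest frame; the polynomial degree and input layout are assumptions made for the example.

```python
import numpy as np

def fit_motion(timestamps, centroids, degree=2):
    """Fit a polynomial to an object's centroid over several time frames and
    return its location and velocity at the most recent timestamp."""
    t = np.asarray(timestamps, dtype=float)
    xy = np.asarray(centroids, dtype=float)                 # shape (T, 2)
    coeffs = [np.polyfit(t, xy[:, k], degree) for k in range(2)]
    location = np.array([np.polyval(c, t[-1]) for c in coeffs])
    velocity = np.array([np.polyval(np.polyder(c), t[-1]) for c in coeffs])
    return location, velocity

# Five frames of a target moving roughly +1 m per 0.1 s along x.
ts = [0.0, 0.1, 0.2, 0.3, 0.4]
cs = [[0.0, 2.0], [1.0, 2.0], [2.1, 2.0], [3.0, 2.1], [4.0, 2.0]]
print(fit_motion(ts, cs))  # location near (4, 2), velocity near (10, 0) m/s
```
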
  • Patent number: 10371561
    Abstract: A system is provided that can include a 3D sensor. The 3D sensor can be configured to detect an area of an elevator on a harvester. The 3D sensor can further be configured to transmit a first signal associated with the area. The system can also include a processing device in communication with the 3D sensor. The system can further include a memory device in which instructions executable by the processing device are stored for causing the processing device to receive the first signal and determine a volume of a material on the elevator based on the first signal.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: August 6, 2019
    Assignees: Iowa State University Research Foundation, Inc., Deere & Company, Carnegie Mellon University
    Inventors: Matthew J. Darr, Daniel Joseph Corbett, Herman Herman, Carlos Vallespi-Gonzalez, Bryan E. Dugas, Hernan Badino
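
As a simple illustration of the volume determination, the sketch below integrates a 3D sensor's height map over the elevator area relative to an assumed empty-belt height; the grid resolution and values are hypothetical.

```python
import numpy as np

def material_volume(height_map, belt_height, cell_area):
    """Estimate the volume of material on an elevator from a 3D sensor's height map.

    height_map: 2D array of measured surface heights (m) over the elevator area.
    belt_height: height of the empty elevator surface (m).
    cell_area: area covered by one height-map cell (m^2).
    """
    depth = np.clip(height_map - belt_height, 0.0, None)   # material depth per cell
    return float(depth.sum() * cell_area)

# 0.05 m of grain over a 10 x 20 cell patch at 0.01 m^2 per cell -> 0.1 m^3.
hm = np.full((10, 20), 0.25)
print(material_volume(hm, belt_height=0.20, cell_area=0.01))
```
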
  • Publication number: 20190236414
    Abstract: Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
    Type: Application
    Filed: April 8, 2019
    Publication date: August 1, 2019
    Inventors: George Totolos, Jr., Joshua Oren Silberman, Daniel Leland Strother, Carlos Vallespi-Gonzalez, David Bruce Parlour
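
An illustrative, software-only Python sketch of the pipeline the abstract outlines: build a multi-scale image pyramid, aggregate a simple feature over each scale, and feed the result to a stored classification model. The gradient feature and linear classifier are stand-ins chosen for brevity, not the FPGA implementation.

```python
import numpy as np

def pyramid(image, scales=(1, 2, 4)):
    """Build a multi-scale image pyramid by subsampling at different factors."""
    return [image[::s, ::s] for s in scales]

def features(sample):
    # Aggregate a simple gradient-magnitude feature over one pyramid sample.
    gy, gx = np.gradient(sample.astype(float))
    return np.array([np.abs(gx).mean(), np.abs(gy).mean()])

def classify(feature_vec, weights, bias):
    # Stand-in for the stored classification model: a linear score
    # thresholded at zero (True = object detected).
    return float(feature_vec @ weights + bias) > 0.0

img = np.zeros((64, 64))
img[20:40, 20:40] = 1.0                          # a bright square "object"
feats = np.mean([features(s) for s in pyramid(img)], axis=0)
print(classify(feats, weights=np.array([10.0, 10.0]), bias=-0.1))  # True
```
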
  • Publication number: 20190171912
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
    Type: Application
    Filed: May 7, 2018
    Publication date: June 6, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, George Totolos, Jr.
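
A minimal sketch of a two-stage classification cascade in the spirit of the abstract: a cheap first-stage characteristic gates which sensor-data patches reach a more expensive second-stage model. Both stage functions here are illustrative stand-ins rather than the machine-learned models the application describes.

```python
import numpy as np

def first_stage(patch):
    # Cheap first-stage characteristic (the kind suited to hardware): does the
    # patch contain enough occupied cells to possibly be an object?
    return float(patch.mean())

def second_stage(patch):
    # Costlier second-stage characteristic applied only to surviving candidates:
    # how densely the occupied cells fill their bounding box.
    ys, xs = np.nonzero(patch)
    if len(xs) == 0:
        return 0.0
    area = (xs.max() - xs.min() + 1) * (ys.max() - ys.min() + 1)
    return len(xs) / float(area)

def detect(patches, gate=0.05, threshold=0.5):
    """Two-stage cascade: run the cheap stage on every patch and the refined
    stage only on patches that pass the gate; return indices of detections."""
    detections = []
    for i, patch in enumerate(patches):
        if first_stage(patch) < gate:
            continue
        if second_stage(patch) >= threshold:
            detections.append(i)
    return detections

solid = np.zeros((10, 10)); solid[2:8, 2:8] = 1      # compact object
sparse = np.zeros((10, 10)); sparse[::3, ::3] = 1    # scattered clutter
print(detect([solid, sparse, np.zeros((10, 10))]))   # [0]
```
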
  • Patent number: 10310087
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: Ankit Laddha, J. Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
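
To illustrate the data-matrix idea, the sketch below projects LIDAR points into a multi-channel matrix (range, height, and intensity channels) and applies a stand-in per-cell class prediction; the bin counts, the assumed vertical field of view, and the thresholding rule are all hypothetical.

```python
import numpy as np

def lidar_data_matrix(points, h_bins=32, v_bins=16):
    """Project LIDAR points (x, y, z, intensity) into a matrix indexed by
    azimuth/elevation with range, height and intensity data channels."""
    x, y, z, intensity = points.T
    rng = np.sqrt(x**2 + y**2 + z**2)
    az = np.digitize(np.arctan2(y, x), np.linspace(-np.pi, np.pi, h_bins)) - 1
    el = np.digitize(np.arcsin(z / np.maximum(rng, 1e-6)),
                     np.linspace(-0.3, 0.3, v_bins)) - 1   # assumed vertical FOV
    mat = np.zeros((v_bins, h_bins, 3))
    mat[el, az] = np.stack([rng, z, intensity], axis=1)
    return mat

def cell_classes(mat):
    # Stand-in per-cell class prediction: "object" wherever something was
    # measured closer than 20 m and above the assumed ground plane.
    rng, height = mat[..., 0], mat[..., 1]
    return (rng > 0) & (rng < 20.0) & (height > 0.2)

pts = np.array([[5.0, 1.0, 0.5, 0.8], [30.0, -2.0, 0.1, 0.3]])
print(int(cell_classes(lidar_data_matrix(pts)).sum()))  # 1 cell flagged
```
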
  • Patent number: 10255525
    Abstract: Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: April 9, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: George Totolos, Jr., Joshua Silberman, Daniel Strother, Carlos Vallespi-Gonzalez, David Bruce Parlour
  • Publication number: 20190079526
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, a method can include receiving object data based on one or more states of one or more objects. The object data can include information based on sensor output associated with one or more portions of the one or more objects. Characteristics of the one or more objects, including an estimated set of physical dimensions of the one or more objects, can be determined based in part on the object data and a machine-learned model. One or more orientations of the one or more objects relative to the location of the autonomous vehicle can be determined based on the estimated set of physical dimensions of the one or more objects. Vehicle systems associated with the autonomous vehicle can be activated based on the one or more orientations of the one or more objects.
    Type: Application
    Filed: October 27, 2017
    Publication date: March 14, 2019
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Wei Pu, Joseph Pilarczyk, II
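
A small Python sketch of estimating an object's footprint dimensions and its orientation relative to the vehicle from observed 2D points; the principal-axis heuristic stands in for the machine-learned model and is an assumption made only for illustration.

```python
import numpy as np

def dims_and_orientation(points, vehicle_xy):
    """Estimate an object's footprint (length, width) and its heading relative
    to the line of sight from the vehicle, given 2D points seen on the object."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    # The principal axis of the observed points approximates the object heading.
    _, vecs = np.linalg.eigh(np.cov((pts - center).T))
    heading = np.arctan2(vecs[1, -1], vecs[0, -1])
    # Rotate into the object frame and measure extents along/across that axis.
    c, s = np.cos(-heading), np.sin(-heading)
    local = (pts - center) @ np.array([[c, -s], [s, c]]).T
    length, width = local.max(axis=0) - local.min(axis=0)
    # Orientation relative to the bearing from the vehicle to the object.
    dx, dy = center - np.asarray(vehicle_xy, dtype=float)
    relative = (heading - np.arctan2(dy, dx) + np.pi) % (2 * np.pi) - np.pi
    return (float(length), float(width)), float(relative)

corners = [[10.0, 2.0], [14.0, 2.0], [14.0, 4.0], [10.0, 4.0]]  # a 4 m x 2 m box
print(dims_and_orientation(corners, vehicle_xy=[0.0, 0.0]))
```
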
  • Publication number: 20190042865
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Application
    Filed: October 8, 2018
    Publication date: February 7, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, Hilton Keith Bristow
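
Assuming the estimated 3D location has already been derived from the camera detection and the camera's known pose, the following sketch shows the remaining step of selecting the LIDAR points that correspond to the object; the radius gate and function names are illustrative assumptions.

```python
import numpy as np

def lidar_points_for_detection(lidar_xyz, estimated_location, radius=1.5):
    """Return the subset of LIDAR points lying within `radius` metres of the
    object's estimated 3D location (derived from the camera detection)."""
    d = np.linalg.norm(lidar_xyz - np.asarray(estimated_location), axis=1)
    return lidar_xyz[d <= radius]

# Estimated location 12 m ahead of the camera; two nearby LIDAR returns match.
cloud = np.array([[12.1, 0.2, 0.5], [11.8, -0.3, 0.4], [30.0, 5.0, 0.2]])
print(lidar_points_for_detection(cloud, estimated_location=[12.0, 0.0, 0.5]))
```
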
  • Publication number: 20180349746
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives LIDAR data from the sensor system and generates a top-view representation of the LIDAR data that is discretized into a grid of multiple cells, each cell representing a column in three-dimensional space. The vehicle computing system also determines one or more cell statistics characterizing the LIDAR data corresponding to each cell and/or a feature extraction vector for each cell by aggregating the one or more cell statistics of surrounding cells at one or more different scales. The vehicle computing system then determines a classification for each cell based at least in part on the feature extraction vectors.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventor: Carlos Vallespi-Gonzalez
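
The following Python sketch illustrates the top-view discretization described in the abstract: points are binned into grid cells, per-cell statistics (count, mean height, max height) are computed, and a feature extraction vector is formed by aggregating a statistic over neighbourhoods at several scales. The grid size, cell size, and scales are assumptions chosen for the example.

```python
import numpy as np

def cell_statistics(points, grid_size=50, cell=1.0):
    """Discretize LIDAR points into a top-view grid and compute per-cell
    statistics: point count, mean height and max height of each column."""
    ix = (points[:, 0] // cell).astype(int) % grid_size
    iy = (points[:, 1] // cell).astype(int) % grid_size
    count = np.zeros((grid_size, grid_size))
    zsum = np.zeros((grid_size, grid_size))
    zmax = np.full((grid_size, grid_size), -np.inf)
    for x, y, z in zip(ix, iy, points[:, 2]):
        count[y, x] += 1
        zsum[y, x] += z
        zmax[y, x] = max(zmax[y, x], z)
    zmean = np.divide(zsum, count, out=np.zeros_like(zsum), where=count > 0)
    return count, zmean, np.where(np.isinf(zmax), 0.0, zmax)

def aggregate(stat, scales=(1, 2, 4)):
    """Per-cell feature extraction vector: the mean of the statistic over the
    cell's neighbourhood at progressively larger scales."""
    h, w = stat.shape
    feats = []
    for s in scales:
        pad = np.pad(stat, s, mode="edge")
        feats.append(np.mean([pad[s + dy:s + dy + h, s + dx:s + dx + w]
                              for dy in range(-s, s + 1)
                              for dx in range(-s, s + 1)], axis=0))
    return np.stack(feats, axis=-1)                 # shape (grid, grid, scales)

pts = np.array([[10.2, 10.7, 1.5], [10.4, 10.1, 0.3], [30.0, 5.0, 0.1]])
count, zmean, zmax = cell_statistics(pts)
print(count[10, 10], zmean[10, 10], aggregate(zmax).shape)  # 2.0 0.9 (50, 50, 3)
```
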
  • Publication number: 20180348346
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains a fewer number of total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Application
    Filed: February 28, 2018
    Publication date: December 6, 2018
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
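
A compact sketch of the two-representation idea: detect candidate objects in the smaller range view as contiguous runs of close returns, then generate a bounding shape for a detection in the top view. The single-row range view and the axis-aligned box are simplifying assumptions for illustration.

```python
import numpy as np

def detect_in_range_view(ranges, max_range=50.0):
    """Detect candidate objects in the compact range view as contiguous runs
    of beams returning closer than `max_range`."""
    hit_idx = np.flatnonzero(ranges < max_range)
    if hit_idx.size == 0:
        return []
    return np.split(hit_idx, np.flatnonzero(np.diff(hit_idx) > 1) + 1)

def top_view_box(angles, ranges, beam_idx):
    """Project one detected run of beams into the top view and return an
    axis-aligned bounding box (xmin, ymin, xmax, ymax) in metres."""
    x = ranges[beam_idx] * np.cos(angles[beam_idx])
    y = ranges[beam_idx] * np.sin(angles[beam_idx])
    return float(x.min()), float(y.min()), float(x.max()), float(y.max())

angles = np.linspace(-np.pi / 4, np.pi / 4, 90)
ranges = np.full(90, 100.0)          # background: no return within 50 m
ranges[40:50] = 10.0                 # a nearby object spanning ten beams
objects = detect_in_range_view(ranges)
print(top_view_box(angles, ranges, objects[0]))
```
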
  • Publication number: 20180348374
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: Ankit Laddha, James Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
  • Publication number: 20180307921
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Application
    Filed: September 8, 2017
    Publication date: October 25, 2018
    Inventors: Carlos Vallespi-Gonzalez, Joseph Amato, Hilton Bristow
  • Patent number: 10108867
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: October 23, 2018
    Assignee: Uber Technologies, Inc.
    Inventors: Carlos Vallespi-Gonzalez, Joseph Amato, Hilton Bristow
  • Publication number: 20180025254
    Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-Gonzalez
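
A minimal sketch of the sampling-and-aggregation scheme: score many sampling locations in a bulk-grain image for the presence of a material class and aggregate the scores into one quality figure. The dark-pixel score stands in for the feature extractor and classifier, and the thresholds are hypothetical.

```python
import numpy as np

def grain_quality(image, sample_size=16, n_samples=50, threshold=0.6, seed=0):
    """Score sampling locations for the presence of a material class and
    aggregate the scores into a single quality figure for the image."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    scores = []
    for _ in range(n_samples):
        y = int(rng.integers(0, h - sample_size))
        x = int(rng.integers(0, w - sample_size))
        patch = image[y:y + sample_size, x:x + sample_size]
        # Stand-in classification score: fraction of dark pixels in the patch.
        scores.append(float((patch < 0.3).mean()))
    # Aggregate: quality is the share of sampling locations judged clean.
    return float(np.mean(np.array(scores) < threshold))

img = np.ones((128, 128))            # bright, clean bulk grain
img[60:80, 60:80] = 0.0              # one dark contaminated region
print(grain_quality(img))            # close to 1.0 (mostly clean samples)
```
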
  • Publication number: 20170359561
    Abstract: A disparity mapping system for an autonomous vehicle can include a stereoscopic camera which acquires a first image and a second image of a scene. The system generates baseline disparity data from a location and orientation of the stereoscopic camera and three-dimensional environment data for the environment around the camera. Using the first image, second image, and baseline disparity data, the system can then generate a disparity map for the scene.
    Type: Application
    Filed: June 8, 2016
    Publication date: December 14, 2017
    Inventor: Carlos Vallespi-Gonzalez
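
The sketch below shows one way the baseline disparity could be used: a plain block-matching search per pixel, restricted to a small window around the disparity predicted from the 3D environment data. The search radius, block size, and synthetic stereo pair are assumptions made for illustration.

```python
import numpy as np

def disparity_map(left, right, baseline_disp, search=3, block=5):
    """Block-matching disparity where the per-pixel search is restricted to a
    window around the baseline disparity predicted from 3D environment data."""
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_cost = 0, np.inf
            base = int(baseline_disp[y, x])
            for d in range(max(0, base - search), base + search + 1):
                if x - d - r < 0:
                    continue
                cost = np.sum((left[y - r:y + r + 1, x - r:x + r + 1] -
                               right[y - r:y + r + 1, x - d - r:x - d + r + 1]) ** 2)
                if cost < best_cost:
                    best, best_cost = d, cost
            disp[y, x] = best
    return disp

# Synthetic pair: the right image is the left shifted by 4 px; prior says ~5 px.
left = np.random.default_rng(0).random((20, 30))
right = np.roll(left, -4, axis=1)
prior = np.full((20, 30), 5.0)
print(np.median(disparity_map(left, right, prior)[3:-3, 8:-8]))  # ~4.0
```
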
  • Publication number: 20170323179
    Abstract: An object detection system for an autonomous vehicle processes sensor data, including one or more images, obtained for a road segment on which the autonomous vehicle is being driven. The object detection system compares the images to three-dimensional (3D) environment data for the road segment to determine pixels in the images that correspond to objects not previously identified in the 3D environment data. The object detection system then analyzes the pixels to classify the objects not previously identified in the 3D environment data.
    Type: Application
    Filed: March 3, 2017
    Publication date: November 9, 2017
    Inventor: Carlos Vallespi-Gonzalez
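
A very small sketch of the comparison step, under the assumption that the image has been reduced to a per-pixel measured depth that can be compared against the depth expected from the prior 3D environment data; pixels significantly closer than expected are flagged as belonging to objects not previously identified and passed on for classification.

```python
import numpy as np

def novel_object_mask(measured_depth, expected_depth, margin=1.0):
    """Flag pixels whose measured depth is significantly closer than the depth
    predicted by prior 3D environment data: candidate unmapped objects."""
    return measured_depth < (expected_depth - margin)

# The prior map expects open road 20 m away; a parked object appears at 8 m.
expected = np.full((60, 80), 20.0)
measured = expected.copy()
measured[30:45, 35:55] = 8.0
mask = novel_object_mask(measured, expected)
print(int(mask.sum()), "pixels flagged for classification")  # 300 pixels
```
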
  • Patent number: 9779330
    Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: October 3, 2017
    Assignee: Deere & Company
    Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-Gonzalez
  • Patent number: 9672446
    Abstract: An object detection system for an autonomous vehicle processes sensor data, including one or more images, obtained for a road segment on which the autonomous vehicle is being driven. The object detection system compares the images to three-dimensional (3D) environment data for the road segment to determine pixels in the images that correspond to objects not previously identified in the 3D environment data. The object detection system then analyzes the pixels to classify the objects not previously identified in the 3D environment data.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: June 6, 2017
    Assignee: Uber Technologies, Inc.
    Inventor: Carlos Vallespi-Gonzalez