Patents by Inventor Carlos Vallespi-Gonzalez

Carlos Vallespi-Gonzalez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10371561
    Abstract: A system is provided that can include a 3D sensor. The 3D sensor can be configured to detect an area of an elevator on a harvester. The 3D sensor can further be configured to transmit a first signal associated with the area. The system can also include a processing device in communication with the 3D sensor. The system can further include a memory device in which instructions executable by the processing device are stored for causing the processing device to receive the first signal and determine a volume of a material on the elevator based on the first signal.
    Type: Grant
    Filed: October 29, 2014
    Date of Patent: August 6, 2019
    Assignees: Iowa State University Research Foundation, Inc., Deere & Company, Carnegie Mellon University
    Inventors: Matthew J. Darr, Daniel Joseph Corbett, Herman Herman, Carlos Vallespi-Gonzalez, Bryan E. Dugas, Hernan Badino
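
A minimal sketch in the spirit of the abstract above: integrate a 3D sensor's height map over the elevator area to estimate the volume of material. The grid resolution, the empty-elevator baseline subtraction, and all names here are illustrative assumptions, not the patented method.

```python
# Hypothetical sketch: estimating material volume on a harvester elevator from a
# 3D sensor's height map. Names, grid resolution, and the baseline-subtraction
# step are assumptions for illustration only.
import numpy as np

def estimate_volume(height_map, empty_baseline, cell_area_m2):
    """Integrate height above an empty-elevator baseline over the sensed area.

    height_map     : 2D array of measured surface heights (meters) over the elevator area
    empty_baseline : 2D array of heights for the empty elevator (same shape)
    cell_area_m2   : area covered by one grid cell (square meters)
    """
    # Material depth is the height above the empty-elevator surface; negative
    # values (sensor noise, occlusion) are clipped to zero.
    depth = np.clip(height_map - empty_baseline, 0.0, None)
    return float(depth.sum() * cell_area_m2)

# Toy example: a 100 x 50 grid with a mound of crop about 0.1 m tall at the center.
baseline = np.zeros((100, 50))
yy, xx = np.mgrid[0:100, 0:50]
mound = 0.1 * np.exp(-(((yy - 50) / 20.0) ** 2 + ((xx - 25) / 10.0) ** 2))
print(estimate_volume(baseline + mound, baseline, cell_area_m2=0.01 * 0.01))
```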
  • Publication number: 20190236414
    Abstract: Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
    Type: Application
    Filed: April 8, 2019
    Publication date: August 1, 2019
    Inventors: George Totolos, Jr., Joshua Oren Silberman, Daniel Leland Strother, Carlos Vallespi-Gonzalez, David Bruce Parlour
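
A rough sketch of the multi-scale pyramid idea from the abstract above: build image samples at several scaling factors, aggregate simple features from each, and pass them to a classifier. The gradient-statistics features and the linear stand-in classifier are placeholders, not the FPGA pipeline the application describes.

```python
# Hypothetical sketch: multi-scale image pyramid with placeholder feature aggregation
# and a toy linear classifier standing in for the stored classification model.
import numpy as np

def image_pyramid(image, scales=(1.0, 0.5, 0.25)):
    """Return down-scaled copies of a grayscale image via simple striding."""
    samples = []
    for s in scales:
        step = max(1, int(round(1.0 / s)))
        samples.append(image[::step, ::step])
    return samples

def aggregate_features(sample):
    """Aggregate coarse gradient statistics over the sample (placeholder features)."""
    gy, gx = np.gradient(sample.astype(float))
    mag = np.hypot(gx, gy)
    return np.array([mag.mean(), mag.std(), sample.mean(), sample.std()])

def classify(features, weights, bias=0.0):
    """Stand-in linear classifier: a positive score means 'object detected'."""
    return float(features @ weights + bias) > 0.0

image = (np.random.rand(128, 128) * 255).astype(np.uint8)
weights = np.array([0.5, 0.5, -0.001, -0.001])
for sample in image_pyramid(image):
    print(sample.shape, classify(aggregate_features(sample), weights))
```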
  • Publication number: 20190171912
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for autonomous vehicle operation are provided. For example, a computing system can receive object data that includes portions of sensor data. The computing system can determine, in a first stage of a multiple stage classification using hardware components, one or more first stage characteristics of the portions of sensor data based on a first machine-learned model. In a second stage of the multiple stage classification, the computing system can determine second stage characteristics of the portions of sensor data based on a second machine-learned model. The computing system can generate an object output based on the first stage characteristics and the second stage characteristics. The object output can include indications associated with detection of objects in the portions of sensor data.
    Type: Application
    Filed: May 7, 2018
    Publication date: June 6, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, George Totolos, Jr.
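
A hypothetical two-stage cascade sketch loosely following the abstract above: a cheap first-stage check screens sensor-data portions, and a second check refines only what passes. Both "models" here are toy threshold functions, not the machine-learned models referenced in the application.

```python
# Hypothetical sketch: multiple-stage classification as a simple two-stage cascade.
import numpy as np

def first_stage(portion):
    """Fast screening characteristic: mean intensity above a coarse threshold."""
    return portion.mean() > 0.4

def second_stage(portion):
    """Slower refinement characteristic: variance check suggesting structured content."""
    return portion.var() > 0.05

def detect_objects(portions):
    """Combine both stages into an object output: indices of portions flagged by both."""
    detections = []
    for idx, portion in enumerate(portions):
        if first_stage(portion) and second_stage(portion):
            detections.append(idx)
    return detections

portions = [np.random.rand(16, 16) for _ in range(8)]
print(detect_objects(portions))
```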
  • Patent number: 10310087
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
    Type: Grant
    Filed: May 31, 2017
    Date of Patent: June 4, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: Ankit Laddha, J. Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
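
A sketch of one possible way to assemble a multi-channel data matrix from LIDAR points, in the spirit of the abstract above. The channel set (range, height, intensity), the projection, and the grid resolution are illustrative assumptions; the machine-learned model that consumes such a matrix is not reproduced here.

```python
# Hypothetical sketch: projecting LIDAR points into an H x W x 3 "data matrix"
# with range, height, and intensity channels.
import numpy as np

def lidar_to_data_matrix(points, intensities, h_bins=360, v_bins=32):
    """Project LIDAR points (N x 3, x/y/z in meters) into an H x W x 3 matrix.

    Channels: 0 = range, 1 = height (z), 2 = intensity.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    rng = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                                    # [-pi, pi)
    elevation = np.arcsin(np.clip(z / np.maximum(rng, 1e-6), -1.0, 1.0))

    col = ((azimuth + np.pi) / (2 * np.pi) * (h_bins - 1)).astype(int)
    # Assume a +/- 30 degree vertical field of view for this toy example.
    row = ((elevation + np.pi / 6) / (np.pi / 3) * (v_bins - 1)).astype(int)
    row = np.clip(row, 0, v_bins - 1)

    matrix = np.zeros((v_bins, h_bins, 3), dtype=float)
    matrix[row, col, 0] = rng
    matrix[row, col, 1] = z
    matrix[row, col, 2] = intensities
    return matrix

pts = np.random.randn(1000, 3) * [10.0, 10.0, 1.0]
print(lidar_to_data_matrix(pts, np.random.rand(1000)).shape)
```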
  • Patent number: 10255525
    Abstract: Image processing systems can include one or more cameras configured to obtain image data, one or more memory devices configured to store a classification model that classifies image features within the image data as including or not including detected objects, and a field programmable gate array (FPGA) device coupled to the one or more cameras. The FPGA device is configured to implement one or more image processing pipelines for image transformation and object detection. The one or more image processing pipelines can generate a multi-scale image pyramid of multiple image samples having different scaling factors, identify and aggregate features within one or more of the multiple image samples having different scaling factors, access the classification model, provide the features as input to the classification model, and receive an output indicative of objects detected within the image data.
    Type: Grant
    Filed: April 25, 2017
    Date of Patent: April 9, 2019
    Assignee: Uber Technologies, Inc.
    Inventors: George Totolos, Jr., Joshua Silberman, Daniel Strother, Carlos Vallespi-Gonzalez, David Bruce Parlour
  • Publication number: 20190079526
    Abstract: Systems, methods, tangible non-transitory computer-readable media, and devices for operating an autonomous vehicle are provided. For example, a method can include receiving object data based on one or more states of one or more objects. The object data can include information based on sensor output associated with one or more portions of the one or more objects. Characteristics of the one or more objects, including an estimated set of physical dimensions of the one or more objects, can be determined based in part on the object data and a machine-learned model. One or more orientations of the one or more objects relative to the location of the autonomous vehicle can be determined based on the estimated set of physical dimensions of the one or more objects. Vehicle systems associated with the autonomous vehicle can be activated based on the one or more orientations of the one or more objects.
    Type: Application
    Filed: October 27, 2017
    Publication date: March 14, 2019
    Inventors: Carlos Vallespi-Gonzalez, Abhishek Sen, Wei Pu, Joseph Pilarczyk, II
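
A sketch relating estimated object dimensions to an orientation relative to the vehicle, echoing the abstract above at a very high level. The dimension estimate here is a simple principal-axis fit, not the machine-learned model the application describes; the coordinate frames and angles are illustrative assumptions.

```python
# Hypothetical sketch: fit object extent and heading from 2D points, then compute
# an orientation relative to the vehicle's bearing to the object.
import numpy as np

def estimate_dimensions(points_xy):
    """Fit an axis of elongation to 2D object points; return (length, width, heading)."""
    centered = points_xy - points_xy.mean(axis=0)
    # Principal axis via the covariance eigenvector with the largest eigenvalue.
    evals, evecs = np.linalg.eigh(np.cov(centered.T))
    major = evecs[:, np.argmax(evals)]
    heading = np.arctan2(major[1], major[0])
    along = centered @ major
    across = centered @ np.array([-major[1], major[0]])
    return np.ptp(along), np.ptp(across), heading

def relative_orientation(heading, object_center, vehicle_xy=(0.0, 0.0)):
    """Angle between the object's major axis and the vehicle-to-object bearing."""
    bearing = np.arctan2(object_center[1] - vehicle_xy[1], object_center[0] - vehicle_xy[0])
    return float((heading - bearing + np.pi) % (2 * np.pi) - np.pi)

pts = np.random.randn(200, 2) * [2.0, 0.8] + [10.0, 5.0]
length, width, heading = estimate_dimensions(pts)
print(length, width, relative_orientation(heading, pts.mean(axis=0)))
```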
  • Publication number: 20190042865
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Application
    Filed: October 8, 2018
    Publication date: February 7, 2019
    Inventors: Carlos Vallespi-Gonzalez, Joseph Lawrence Amato, Hilton Keith Bristow
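
A sketch of associating LIDAR points with an object detected in a camera image, in the general spirit of the abstract above: project points into the image with a known camera and keep those that fall inside the detection's bounding box. The intrinsics and the box are made-up values for illustration.

```python
# Hypothetical sketch: select LIDAR points whose image projections fall inside a
# 2D detection box, given camera-frame points and pinhole intrinsics.
import numpy as np

def lidar_points_in_box(points_cam, K, box_xyxy):
    """Return the subset of 3D points (camera frame, N x 3) projecting into the box."""
    in_front = points_cam[:, 2] > 0.1            # keep only points ahead of the camera
    pts = points_cam[in_front]
    proj = (K @ pts.T).T
    uv = proj[:, :2] / proj[:, 2:3]              # perspective division to pixel coordinates
    x0, y0, x1, y1 = box_xyxy
    inside = (uv[:, 0] >= x0) & (uv[:, 0] <= x1) & (uv[:, 1] >= y0) & (uv[:, 1] <= y1)
    return pts[inside]

K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
points_cam = np.random.randn(5000, 3) * [5.0, 2.0, 10.0] + [0.0, 0.0, 15.0]
print(lidar_points_in_box(points_cam, K, (600, 300, 700, 420)).shape)
```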
  • Publication number: 20180348346
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives the LIDAR data from the sensor system. The vehicle computing system also determines at least a range-view representation of the LIDAR data and a top-view representation of the LIDAR data, wherein the range-view representation contains a fewer number of total data points than the top-view representation. The vehicle computing system further detects objects of interest in the range-view representation of the LIDAR data and generates a bounding shape for each of the detected objects of interest in the top-view representation of the LIDAR data.
    Type: Application
    Filed: February 28, 2018
    Publication date: December 6, 2018
    Inventors: Carlos Vallespi-Gonzalez, Ankit Laddha, Gregory P. Meyer, Eric Randall Kee
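
A sketch of the range-view / top-view interplay described in the abstract above: points grouped in the compact range view are mapped into the top (bird's-eye) view, where a bounding shape is drawn. The grouping and the grid resolution are illustrative assumptions; the learned detector itself is not shown.

```python
# Hypothetical sketch: map a range-view point group to an axis-aligned bounding box
# expressed in top-view grid cells.
import numpy as np

def range_view_group_to_top_view_box(points, group_mask, resolution=0.1):
    """Map a group of LIDAR points (N x 3) to a box in top-view cell coordinates."""
    group = points[group_mask]
    x_min, y_min = group[:, 0].min(), group[:, 1].min()
    x_max, y_max = group[:, 0].max(), group[:, 1].max()
    to_cell = lambda v: int(np.floor(v / resolution))   # metric -> integer cell index
    return (to_cell(x_min), to_cell(y_min), to_cell(x_max), to_cell(y_max))

points = np.random.randn(2000, 3) * [1.0, 0.5, 0.3] + [12.0, -3.0, 0.5]
mask = np.linalg.norm(points[:, :2] - [12.0, -3.0], axis=1) < 1.5   # stand-in for a range-view detection
print(range_view_group_to_top_view_box(points, mask))
```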
  • Publication number: 20180348374
    Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to an autonomous vehicle, generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data, and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventors: Ankit Laddha, James Andrew Bagnell, Varun Ramakrishna, Yimu Wang, Carlos Vallespi-Gonzalez
  • Publication number: 20180349746
    Abstract: Systems and methods for detecting and classifying objects proximate to an autonomous vehicle can include a sensor system and a vehicle computing system. The sensor system includes at least one LIDAR system configured to transmit ranging signals relative to the autonomous vehicle and to generate LIDAR data. The vehicle computing system receives LIDAR data from the sensor system and generates a top-view representation of the LIDAR data that is discretized into a grid of multiple cells, each cell representing a column in three-dimensional space. The vehicle computing system also determines one or more cell statistics characterizing the LIDAR data corresponding to each cell and/or a feature extraction vector for each cell by aggregating the one or more cell statistics of surrounding cells at one or more different scales. The vehicle computing system then determines a classification for each cell based at least in part on the feature extraction vectors.
    Type: Application
    Filed: May 31, 2017
    Publication date: December 6, 2018
    Inventor: Carlos Vallespi-Gonzalez
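
A sketch of per-cell top-view features loosely following the abstract above: LIDAR points are binned into a grid of columns, per-cell statistics are computed, and neighborhood sums at a few scales form a feature vector per cell. The particular statistics and scales are illustrative assumptions, and the per-cell classifier is omitted.

```python
# Hypothetical sketch: top-view grid statistics with multi-scale neighborhood aggregation.
import numpy as np

def box_sum(grid2d, radius):
    """Neighborhood sum over a (2*radius+1)^2 window via an integral image (zero edges)."""
    r = radius
    h, w = grid2d.shape
    ii = np.pad(grid2d, r + 1).cumsum(axis=0).cumsum(axis=1)
    return (ii[2*r+1:2*r+1+h, 2*r+1:2*r+1+w] - ii[:h, 2*r+1:2*r+1+w]
            - ii[2*r+1:2*r+1+h, :w] + ii[:h, :w])

def top_view_features(points, grid=(100, 100), extent=50.0, scales=(1, 3)):
    """Bin LIDAR points into a top-view grid and build per-cell multi-scale feature vectors."""
    h, w = grid
    ix = np.clip(((points[:, 0] + extent) / (2 * extent) * h).astype(int), 0, h - 1)
    iy = np.clip(((points[:, 1] + extent) / (2 * extent) * w).astype(int), 0, w - 1)

    count = np.zeros(grid)
    max_z = np.full(grid, -np.inf)
    np.add.at(count, (ix, iy), 1)                   # points per column
    np.maximum.at(max_z, (ix, iy), points[:, 2])    # tallest return per column
    max_z[count == 0] = 0.0

    feats = [count, max_z]
    for s in scales:                                # aggregate surrounding cells at several scales
        feats.append(box_sum(count, s))
        feats.append(box_sum(max_z, s))
    return np.stack(feats, axis=-1)

points = np.random.randn(20000, 3) * [15.0, 15.0, 1.0]
print(top_view_features(points).shape)              # (100, 100, 6)
```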
  • Publication number: 20180307921
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Application
    Filed: September 8, 2017
    Publication date: October 25, 2018
    Inventors: Carlos Vallespi-Gonzalez, Joseph Amato, Hilton Bristow
  • Patent number: 10108867
    Abstract: Object detection systems and methods can include identifying an object of interest within an image obtained from a camera, obtaining a first supplemental portion of data associated with the object of interest, determining an estimated location of the object of interest within three-dimensional space based at least in part on the first supplemental portion of data and a known relative location of the camera, determining a portion of the LIDAR point data corresponding to the object of interest based at least in part on the estimated location of the object of interest within three-dimensional space, and providing one or more of at least a portion of the image corresponding to the object of interest and the portion of LIDAR point data corresponding to the object of interest as an output.
    Type: Grant
    Filed: September 8, 2017
    Date of Patent: October 23, 2018
    Assignee: Uber Technologies, Inc.
    Inventors: Carlos Vallespi-Gonzalez, Joseph Amato, Hilton Bristow
  • Publication number: 20180025254
    Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
    Type: Application
    Filed: October 2, 2017
    Publication date: January 25, 2018
    Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-Gonzalez
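
A sketch in the spirit of the abstract above: score sampling locations across a bulk-grain image for the presence of a material class and aggregate the scores into a single quality number. The patch feature and scoring function are toy placeholders, not the feature extractor or classifier the application describes.

```python
# Hypothetical sketch: per-location classification scores aggregated into a grain-quality value.
import numpy as np

def classification_score(patch):
    """Toy score: darker, higher-contrast patches score higher (placeholder classifier)."""
    return float((1.0 - patch.mean()) * patch.std())

def grain_quality(image, patch=16, stride=32):
    """Aggregate per-location scores into a quality value in [0, 1] (higher = cleaner grain)."""
    scores = []
    h, w = image.shape
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            scores.append(classification_score(image[y:y + patch, x:x + patch]))
    # Aggregation: quality falls as the mean presence score of the material class rises.
    return 1.0 - float(np.mean(scores))

image = np.random.rand(256, 256)
print(grain_quality(image))
```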
  • Publication number: 20170359561
    Abstract: A disparity mapping system for an autonomous vehicle can include a stereoscopic camera which acquires a first image and a second image of a scene. The system generates baseline disparity data from a location and orientation of the stereoscopic camera and three-dimensional environment data for the environment around the camera. Using the first image, second image, and baseline disparity data, the system can then generate a disparity map for the scene.
    Type: Application
    Filed: June 8, 2016
    Publication date: December 14, 2017
    Inventor: Carlos Vallespi-Gonzalez
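
A sketch of the "baseline disparity" idea from the abstract above: known 3D environment data plus the stereo rig's geometry give an expected disparity per pixel, which can then seed or bound the match search between the two images. The focal length, baseline, depth map, and deviation bound are made-up values; the actual matching step is reduced to clamping around the prior.

```python
# Hypothetical sketch: baseline disparity from environment depth, used to constrain a
# measured disparity map.
import numpy as np

def baseline_disparity(depth_map, focal_px, baseline_m):
    """Expected disparity (pixels) for each pixel given environment depth (meters)."""
    return focal_px * baseline_m / np.maximum(depth_map, 0.1)

def constrained_disparity(measured, prior, max_deviation=8.0):
    """Keep measured disparity only where it stays near the prior; else fall back to the prior."""
    return np.where(np.abs(measured - prior) <= max_deviation, measured, prior)

depth = np.full((480, 640), 20.0)               # pretend environment data: a wall 20 m away
prior = baseline_disparity(depth, focal_px=1000.0, baseline_m=0.3)
measured = prior + np.random.randn(480, 640) * 12.0
print(constrained_disparity(measured, prior).mean())
```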
  • Publication number: 20170323179
    Abstract: An object detection system for an autonomous vehicle processes sensor data, including one or more images, obtained for a road segment on which the autonomous vehicle is being driven. The object detection system compares the images to three-dimensional (3D) environment data for the road segment to determine pixels in the images that correspond to objects not previously identified in the 3D environment data. The object detection system then analyzes the pixels to classify the objects not previously identified in the 3D environment data.
    Type: Application
    Filed: March 3, 2017
    Publication date: November 9, 2017
    Inventor: Carlos Vallespi-Gonzalez
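
A sketch of the map-comparison idea in the abstract above: pixels whose measured depth is meaningfully closer than the depth predicted from prior 3D environment data are flagged as belonging to objects not already in the map. The depth sources and the 1.5 m margin are illustrative assumptions; the subsequent classification of those pixels is omitted.

```python
# Hypothetical sketch: flag pixels that are closer than the prior 3D map expects.
import numpy as np

def unmapped_object_mask(measured_depth, map_depth, margin_m=1.5):
    """Boolean mask of pixels where the scene is closer than the prior map expects."""
    return (map_depth - measured_depth) > margin_m

map_depth = np.full((480, 640), 30.0)            # prior map: open road 30 m ahead
measured = map_depth.copy()
measured[200:300, 300:400] = 12.0                # something new parked 12 m ahead
mask = unmapped_object_mask(measured, map_depth)
print(mask.sum(), "pixels flagged for classification")
```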
  • Patent number: 9779330
    Abstract: A method and non-transitory computer-readable medium capture an image of bulk grain and apply a feature extractor to the image to determine a feature of the bulk grain in the image. For each of a plurality of different sampling locations in the image, based upon the feature of the bulk grain at the sampling location, a determination is made regarding a classification score for the presence of a classification of material at the sampling location. A quality of the bulk grain of the image is determined based upon an aggregation of the classification scores for the presence of the classification of material at the sampling locations.
    Type: Grant
    Filed: December 26, 2014
    Date of Patent: October 3, 2017
    Assignee: Deere & Company
    Inventors: Carl Knox Wellington, Aaron J. Bruns, Victor S. Sierra, James J. Phelan, John M. Hageman, Cristian Dima, Hanke Boesch, Herman Herman, Zachary Abraham Pezzementi, Cason Robert Male, Joan Campoy, Carlos Vallespi-Gonzalez
  • Patent number: 9672446
    Abstract: An object detection system for an autonomous vehicle processes sensor data, including one or more images, obtained for a road segment on which the autonomous vehicle is being driven. The object detection system compares the images to three-dimensional (3D) environment data for the road segment to determine pixels in the images that correspond to objects not previously identified in the 3D environment data. The object detection system then analyzes the pixels to classify the objects not previously identified in the 3D environment data.
    Type: Grant
    Filed: May 6, 2016
    Date of Patent: June 6, 2017
    Assignee: Uber Technologies, Inc.
    Inventor: Carlos Vallespi-Gonzalez
  • Patent number: 9532504
    Abstract: The present invention is an adjustable transfer device for unloading processed crop onto a container of a transport vehicle, including a control arrangement with an electronic control unit among other integrated components. The electronic control unit calculates the position of the expected point of incidence of the crop flow on the container within the field of view of an optical image capture device, displays an image of the container on a display together with a symbol representing the calculated expected point of incidence of the crop flow on the container, receives adjustment inputs from a user interface for adjusting the position of an actuator and thus of the adjustable transfer device, updates the position of the symbol in the image on the display, receives a confirmation input from the user interface once the symbol in the image on the display is in the appropriate position, derives at least one feature in the image representing the container, and tracks the container within the output signal of the image processing system based on the retrieved image feature, controlling the actuator accordingly to fill the container with crop.
    Type: Grant
    Filed: March 9, 2016
    Date of Patent: January 3, 2017
    Assignees: Carnegie Mellon University, Deere & Company
    Inventors: Herman Herman, Zachary T. Bonefas, Carlos Vallespi-Gonzalez, Johannes Josef Zametzer
  • Patent number: 9522791
    Abstract: A first imaging device collects first image data of a storage portion, and a second imaging device collects second image data of the storage portion. A container identification module identifies a container perimeter of the storage portion in at least one of the collected first image data and the collected second image data. A spout identification module is adapted to identify a spout of the transferring vehicle in the collected image data. An image data evaluator determines whether to use the first image data, the second image data, or both based on an evaluation of the intensity of pixel data or ambient light conditions. An alignment module is adapted to determine the relative position of the spout and the container perimeter and to generate command data to the propelled portion to steer the storage portion in cooperative alignment such that the spout is aligned within a central zone or a target zone of the container perimeter.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: December 20, 2016
    Inventors: Zachary T. Bonefas, Mark M. Chaney, Jeremiah K. Johnson, Aaron J. Bruns, Herman Herman, Carlos Vallespi-Gonzalez, Jose Gonzalez Mora
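
A sketch of the image-data evaluation step in the abstract above: choose the first imaging device's data, the second's, or both, based on how well each image's pixel intensities sit inside a usable exposure band. The thresholds are made-up numbers; the container and spout identification and the alignment command generation are not shown.

```python
# Hypothetical sketch: select which imaging device's data to use based on pixel intensity.
import numpy as np

def select_image_data(first_img, second_img, low=0.15, high=0.85):
    """Return which image data to use: 'first', 'second', or 'both'."""
    def usable(img):
        return low <= img.mean() <= high       # crude exposure check on normalized intensities
    first_ok, second_ok = usable(first_img), usable(second_img)
    if first_ok and second_ok:
        return "both"
    if first_ok:
        return "first"
    if second_ok:
        return "second"
    return "both"                              # fall back to both if neither looks well exposed

bright = np.clip(np.random.rand(480, 640) + 0.5, 0, 1)   # washed-out first camera
normal = np.random.rand(480, 640) * 0.6 + 0.2            # reasonably exposed second camera
print(select_image_data(bright, normal))
```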
  • Patent number: 9457971
    Abstract: A first imaging device collects first image data of a storage portion, and a second imaging device collects second image data of the storage portion. A container identification module identifies a container perimeter of the storage portion in at least one of the collected first image data and the collected second image data. A spout identification module is adapted to identify a spout of the transferring vehicle in the collected image data. An image data evaluator determines whether to use the first image data, the second image data, or both based on an evaluation of the intensity of pixel data or ambient light conditions. An alignment module is adapted to determine the relative position of the spout and the container perimeter and to generate command data to the propelled portion to steer the storage portion in cooperative alignment such that the spout is aligned within a central zone or a target zone of the container perimeter.
    Type: Grant
    Filed: February 11, 2013
    Date of Patent: October 4, 2016
    Assignees: Deere & Company, Carnegie Mellon University
    Inventors: Zachary T. Bonefas, Mark M. Chaney, Jeremiah K. Johnson, Aaron J. Bruns, Herman Herman, Carlos Vallespi-Gonzalez, Jose Gonzalez Mora