Patents by Inventor Ryan M. EUSTICE

Ryan M. EUSTICE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11393127
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 19, 2022
    Assignee: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
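The registration in the entry above couples a continuous unknown (the projection transform) with a discrete one (which 2D line matches which 3D line). A heavily reduced sketch of that coupling, with the continuous part collapsed to an angular-alignment cost and the integer part handled by brute-force enumeration rather than a true mixed integer linear program (the cost function and all names here are illustrative assumptions, not taken from the patent):

```python
import math
from itertools import permutations

def line_angle(p, q):
    """Direction of the 2D line through points p and q, folded into [0, pi)."""
    return math.atan2(q[1] - p[1], q[0] - p[0]) % math.pi

def align_lines(lines_2d, projected_3d_lines):
    """Enumerate data associations (the integer variables of the MILP) and
    keep the one with the lowest total angular misalignment."""
    best_cost, best_assoc = float("inf"), None
    for assoc in permutations(range(len(lines_2d))):
        cost = sum(
            abs(line_angle(*lines_2d[i]) - line_angle(*projected_3d_lines[j]))
            for i, j in enumerate(assoc)
        )
        if cost < best_cost:
            best_cost, best_assoc = cost, assoc
    return best_assoc, best_cost
```

With a horizontal and a vertical 2D line and the projected 3D lines listed in the opposite order, the search recovers the swapped association at zero cost; a real MILP solver would avoid the factorial enumeration.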
  • Publication number: 20210116553
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto the two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
  • Patent number: 10962630
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto the two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: March 30, 2021
    Assignees: Toyota Research Institute, Inc., The Regents of the University of Michigan
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
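The two calibration entries above search a space of rigid body transforms with branch and bound: split the parameter space into regions, lower-bound the edge-alignment cost on each region, and prune regions whose bound cannot beat the best transform found so far. A one-dimensional sketch of that loop (the Lipschitz-based bound and all names are illustrative assumptions; the actual search is over six-degree-of-freedom transforms with a cost built from projected point cloud edges):

```python
import heapq

def branch_and_bound_min(cost, lo, hi, lipschitz, tol=1e-3):
    """Minimize `cost` on [lo, hi] by branch and bound: each interval's
    lower bound is cost(mid) - lipschitz * half_width, and intervals whose
    bound cannot beat the incumbent best cost are pruned."""
    best_x, best_cost = lo, cost(lo)
    mid0 = (lo + hi) / 2
    heap = [(cost(mid0) - lipschitz * (hi - lo) / 2, lo, hi)]
    while heap:
        bound, a, b = heapq.heappop(heap)
        if bound >= best_cost:
            continue  # prune: this region cannot contain a better solution
        mid = (a + b) / 2
        c = cost(mid)
        if c < best_cost:
            best_cost, best_x = c, mid
        if (b - a) / 2 > tol:  # branch: split the region in two
            for aa, bb in ((a, mid), (mid, b)):
                m = (aa + bb) / 2
                lower = cost(m) - lipschitz * (bb - aa) / 2
                if lower < best_cost:
                    heapq.heappush(heap, (lower, aa, bb))
    return best_x, best_cost
```

On a quadratic cost centered at 0.3 the search converges to the minimizer without exhaustively sampling the interval, which is the point of the bound-and-prune structure.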
  • Publication number: 20210082148
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
    Type: Application
    Filed: March 30, 2020
    Publication date: March 18, 2021
    Applicant: TOYOTA RESEARCH INSTITUTE, INC.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
  • Patent number: 10489663
    Abstract: Systems, methods, and other embodiments described herein relate to identifying changes between models of a locality. In one embodiment, a method includes, in response to determining that a location model is available for a present environment of a vehicle, generating a current model of the present environment using at least one sensor of the vehicle. The method also includes isolating dynamic objects in the current model as a function of the location model. The method includes providing the dynamic objects to be identified and labeled.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: November 26, 2019
    Assignee: Toyota Research Institute, Inc.
    Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice, Ryan W. Wolcott
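The change detection in the entry above compares a freshly built model of the environment against a stored location model to isolate dynamic objects. A toy sketch with binary occupancy grids, assuming a cell is "dynamic" when it is occupied in the current model but free in the prior one (the grid representation is an illustrative assumption, not the patented model):

```python
def isolate_dynamic(prior_map, current_model):
    """Return cells occupied in the current model but free in the prior map;
    these are the candidate dynamic objects to be identified and labeled."""
    return [
        (r, c)
        for r, row in enumerate(current_model)
        for c, occupied in enumerate(row)
        if occupied and not prior_map[r][c]
    ]
```

Static structure present in both grids cancels out, so only newly occupied cells survive the comparison.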
  • Patent number: 10460053
    Abstract: Systems, methods, and other embodiments described herein relate to identifying surface properties of objects using a light detection and ranging (LIDAR) sensor. In one embodiment, a method includes, in response to scanning a surface of an object using the LIDAR sensor, receiving a reflected waveform as a function of attributes of the surface. The method includes analyzing the reflected waveform according to a surface property model to produce an estimate of the attributes. The surface property model characterizes relationships between reflected waveforms and different surface properties. The method includes providing the estimate as an indication of the surface of the scanned object.
    Type: Grant
    Filed: April 24, 2017
    Date of Patent: October 29, 2019
    Assignee: Toyota Research Institute, Inc.
    Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice
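The method in the entry above maps a reflected LIDAR waveform to surface attributes through a surface property model. A minimal sketch, assuming the waveform is summarized by two hand-picked features (peak amplitude and pulse width) and the "model" is a nearest-prototype lookup; both assumptions are illustrative, not the patented model:

```python
def waveform_features(samples):
    """Peak amplitude and full width at half maximum (in samples)
    of a reflected pulse."""
    peak = max(samples)
    half = peak / 2
    above = [i for i, s in enumerate(samples) if s >= half]
    return peak, above[-1] - above[0] + 1

def classify_surface(samples, model):
    """`model` maps surface labels to (amplitude, width) prototypes;
    the nearest prototype gives the estimated surface property."""
    a, w = waveform_features(samples)
    return min(model, key=lambda k: (model[k][0] - a) ** 2 + (model[k][1] - w) ** 2)
```

A sharp high-amplitude return lands on the reflective prototype and a broad dim return on the diffuse one, which is the qualitative relationship the surface property model encodes.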
  • Publication number: 20180306924
    Abstract: Systems, methods, and other embodiments described herein relate to identifying surface properties of objects using a light detection and ranging (LIDAR) sensor. In one embodiment, a method includes, in response to scanning a surface of an object using the LIDAR sensor, receiving a reflected waveform as a function of attributes of the surface. The method includes analyzing the reflected waveform according to a surface property model to produce an estimate of the attributes. The surface property model characterizes relationships between reflected waveforms and different surface properties. The method includes providing the estimate as an indication of the surface of the scanned object.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice
  • Publication number: 20180307915
    Abstract: Systems, methods, and other embodiments described herein relate to identifying changes between models of a locality. In one embodiment, a method includes, in response to determining that a location model is available for a present environment of a vehicle, generating a current model of the present environment using at least one sensor of the vehicle. The method also includes isolating dynamic objects in the current model as a function of the location model. The method includes providing the dynamic objects to be identified and labeled.
    Type: Application
    Filed: April 24, 2017
    Publication date: October 25, 2018
    Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice, Ryan W. Wolcott
  • Patent number: 9989969
    Abstract: An apparatus and method for visual localization including a visual camera system outputting real-time visual camera data and a graphics processing unit receiving the real-time visual camera data. The graphics processing unit accesses a database of prior map information and generates a synthetic image that is then compared to the real-time visual camera data to determine corrected position data. The graphics processing unit determines a camera position based on the corrected position data. In some embodiments, a corrective system can adjust navigation of the vehicle based on the determined camera position.
    Type: Grant
    Filed: January 19, 2016
    Date of Patent: June 5, 2018
    Assignee: The Regents of The University of Michigan
    Inventors: Ryan M. Eustice, Ryan W. Wolcott
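The localization in the entry above renders synthetic images from prior map data and compares them against the live camera view to correct the position estimate. A heavily reduced sketch, assuming the "map" is a 1D strip of intensities, the "synthetic image" at a candidate pose is a window of that strip, and the comparison is a sum of absolute pixel differences (all illustrative assumptions, not the patented pipeline):

```python
def localize(prior_map, camera_image):
    """Render a synthetic view at every candidate position along the map
    and return the position whose view best matches the live image."""
    n = len(camera_image)

    def sad(pos):
        synthetic = prior_map[pos:pos + n]  # synthetic image at this pose
        return sum(abs(a - b) for a, b in zip(synthetic, camera_image))

    return min(range(len(prior_map) - n + 1), key=sad)
```

The real system does this with GPU-rendered images over a 3D prior map, but the structure is the same: hypothesize a pose, synthesize what the camera should see, and score the match.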
  • Patent number: 9934688
    Abstract: A system includes a computer programmed to identify, from a first vehicle, one or more second vehicles within a specified distance to the first vehicle. The computer is further programmed to receive data about operations of each of the second vehicles, including trajectory data. Based on the data, the computer is programmed to identify, for each of the second vehicles, a distribution of probabilities of each of a set of potential planned trajectories. The computer is further programmed to determine a planned trajectory for the first vehicle, based on the respective distributions of probabilities of each of the set of potential planned trajectories for each of the second vehicles. The computer is further programmed to provide an instruction to at least one controller associated with the first vehicle based on the determined planned trajectory.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: April 3, 2018
    Assignees: FORD GLOBAL TECHNOLOGIES, LLC, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Edwin Olson, Enric Galceran, Alexander G. Cunningham, Ryan M. Eustice, James Robert McBride
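The planner in the entry above weighs each neighboring vehicle's candidate trajectories by probability before committing to its own. A grid-world sketch, where a trajectory is a tuple of (lane, position) cells over time steps and the planner minimizes the probability-weighted chance of occupying the same cell at the same step (the grid representation and conflict cost are illustrative assumptions):

```python
def conflict(a, b):
    """1.0 if two trajectories occupy the same cell at the same time step."""
    return float(any(x == y for x, y in zip(a, b)))

def plan_ego(ego_options, others):
    """others: list of {trajectory: probability} dicts, one per nearby vehicle.
    Return the ego trajectory with the lowest expected conflict cost."""
    def expected_cost(ego_traj):
        return sum(p * conflict(ego_traj, traj)
                   for dist in others for traj, p in dist.items())
    return min(ego_options, key=expected_cost)
```

When a neighbor is likely to merge into the ego lane, the expected-cost minimization prefers the lane-change option even though staying in lane is collision-free under the less likely hypothesis.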
  • Patent number: 9618938
    Abstract: A system includes a computer programmed to determine, along a nominal path to be traversed by a vehicle, a potential field representing a driving corridor for the vehicle. The computer is further programmed to identify a position of the vehicle relative to the potential field at a current time, and apply a torque to a steering column of the vehicle. The torque is based at least in part on the position. The potential field includes an attractive potential that guides the vehicle to remain within the corridor.
    Type: Grant
    Filed: July 31, 2015
    Date of Patent: April 11, 2017
    Assignees: FORD GLOBAL TECHNOLOGIES, LLC, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
    Inventors: Edwin Olson, Enric Galceran, Ryan M. Eustice, James Robert McBride
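The control law in the entry above derives a steering torque from an attractive potential over the driving corridor. A minimal sketch for the lateral offset alone, with a quadratic attractive potential and a torque taken as the clipped negative gradient (the gain, potential shape, and actuator limit are illustrative assumptions):

```python
def attractive_potential(offset, k=2.0):
    """Quadratic potential that grows with lateral offset from corridor center."""
    return k * offset ** 2

def steering_torque(offset, k=2.0, t_max=5.0):
    """Negative gradient of the potential, clipped to the actuator limit,
    so the torque always steers the vehicle back toward the corridor center."""
    tau = -2.0 * k * offset
    return max(-t_max, min(t_max, tau))
```

At the corridor center the torque vanishes, and the clipping keeps the command within what the steering column can actually apply.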
  • Publication number: 20170031362
    Abstract: A system includes a computer programmed to determine, along a nominal path to be traversed by a vehicle, a potential field representing a driving corridor for the vehicle. The computer is further programmed to identify a position of the vehicle relative to the potential field at a current time, and apply a torque to a steering column of the vehicle. The torque is based at least in part on the position. The potential field includes an attractive potential that guides the vehicle to remain within the corridor.
    Type: Application
    Filed: July 31, 2015
    Publication date: February 2, 2017
    Inventors: Edwin Olson, Enric Galceran, Ryan M. Eustice, James Robert McBride
  • Publication number: 20170031361
    Abstract: A system includes a computer programmed to identify, from a first vehicle, one or more second vehicles within a specified distance to the first vehicle. The computer is further programmed to receive data about operations of each of the second vehicles, including trajectory data. Based on the data, the computer is programmed to identify, for each of the second vehicles, a distribution of probabilities of each of a set of potential planned trajectories. The computer is further programmed to determine a planned trajectory for the first vehicle, based on the respective distributions of probabilities of each of the set of potential planned trajectories for each of the second vehicles. The computer is further programmed to provide an instruction to at least one controller associated with the first vehicle based on the determined planned trajectory.
    Type: Application
    Filed: July 31, 2015
    Publication date: February 2, 2017
    Inventors: Edwin Olson, Enric Galceran, Alexander G. Cunningham, Ryan M. Eustice, James Robert McBride
  • Publication number: 20160209846
    Abstract: An apparatus and method for visual localization including a visual camera system outputting real-time visual camera data and a graphics processing unit receiving the real-time visual camera data. The graphics processing unit accesses a database of prior map information and generates a synthetic image that is then compared to the real-time visual camera data to determine corrected position data. The graphics processing unit determines a camera position based on the corrected position data. In some embodiments, a corrective system can adjust navigation of the vehicle based on the determined camera position.
    Type: Application
    Filed: January 19, 2016
    Publication date: July 21, 2016
    Inventors: Ryan M. Eustice, Ryan W. Wolcott