Patents by Inventor Ryan M. Eustice
Ryan M. Eustice has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11393127
Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
Type: Grant
Filed: March 30, 2020
Date of Patent: July 19, 2022
Assignee: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
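The joint transform-and-association search this abstract describes can be conveyed with a deliberately tiny sketch. Instead of a mixed integer linear program, the toy below enumerates a discrete set of candidate yaw rotations and all possible 2D-3D line associations, keeping the pair with the lowest direction misalignment. The projection model, line data, and cost are illustrative assumptions, not the patented formulation.

```python
import math
from itertools import permutations

# Illustrative data: observed image line directions (2D) and map line
# directions (3D).
lines_2d = [(1.0, 0.0), (0.0, 1.0)]
lines_3d = [(0.0, 1.0, 0.0), (-1.0, 0.0, 0.0)]

def project(line, yaw):
    """Rotate a 3D direction by yaw, then drop z (a stand-in projection)."""
    x, y, _z = line
    return (math.cos(yaw) * x - math.sin(yaw) * y,
            math.sin(yaw) * x + math.cos(yaw) * y)

def direction_cost(a, b):
    """Squared difference between two 2D direction vectors."""
    return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

# Jointly search over candidate transforms AND candidate associations,
# keeping the pair with the lowest total misalignment.
best = None
for yaw in [k * math.pi / 8 for k in range(16)]:
    projected = [project(line, yaw) for line in lines_3d]
    for assoc in permutations(range(len(lines_3d))):
        cost = sum(direction_cost(lines_2d[i], projected[j])
                   for i, j in enumerate(assoc))
        if best is None or cost < best[0]:
            best = (cost, yaw, assoc)

cost, yaw, assoc = best
```

A real system would hand the same joint structure, binary association variables plus transform variables, to a MILP solver rather than enumerate.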
-
Publication number: 20210116553
Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto an edge of the two-dimensional image, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
Type: Application
Filed: October 18, 2019
Publication date: April 22, 2021
Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
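The branch-and-bound idea in this abstract can be sketched in one dimension: intervals of a rotation parameter are split only while their lower bound could still beat the best edge-alignment cost found so far. The cost function, its Lipschitz constant, and the ground-truth offset below are illustrative assumptions; real extrinsic calibration searches over full rigid-body transforms.

```python
import math

TRUE_OFFSET = 0.3   # assumed ground-truth rotation offset (radians)
LIPSCHITZ = 1.0     # assumed bound on the cost's slope over the interval

def edge_misalignment(theta):
    """Stand-in for the edge-alignment cost of a candidate transform."""
    return abs(math.sin(theta - TRUE_OFFSET))

def branch_and_bound(lo, hi, tol=1e-4):
    """Find the lowest-cost parameter by splitting only intervals whose
    Lipschitz lower bound could still beat the best cost seen so far."""
    queue = [(lo, hi)]
    best_theta, best_cost = lo, edge_misalignment(lo)
    while queue:
        a, b = queue.pop()
        mid, half = 0.5 * (a + b), 0.5 * (b - a)
        cost = edge_misalignment(mid)
        if cost < best_cost:
            best_theta, best_cost = mid, cost
        # Lower bound on the cost anywhere in [a, b]; prune if it cannot
        # improve on the incumbent, otherwise branch.
        if cost - LIPSCHITZ * half < best_cost and half > tol:
            queue.append((a, mid))
            queue.append((mid, b))
    return best_theta, best_cost

theta, cost = branch_and_bound(-1.0, 1.0)
```

The pruning step is what distinguishes branch-and-bound from plain grid search: whole regions of transform space are discarded without ever being evaluated densely.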
-
Patent number: 10962630
Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto an edge of the two-dimensional image, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
Type: Grant
Filed: October 18, 2019
Date of Patent: March 30, 2021
Assignees: Toyota Research Institute, Inc., The Regents of the University of Michigan
Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
-
Publication number: 20210082148
Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
Type: Application
Filed: March 30, 2020
Publication date: March 18, 2021
Applicant: TOYOTA RESEARCH INSTITUTE, INC.
Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
-
Patent number: 10489663
Abstract: Systems, methods, and other embodiments described herein relate to identifying changes between models of a locality. In one embodiment, a method includes, in response to determining that a location model is available for a present environment of a vehicle, generating a current model of the present environment using at least one sensor of the vehicle. The method also includes isolating dynamic objects in the current model as a function of the location model. The method includes providing the dynamic objects to be identified and labeled.
Type: Grant
Filed: April 24, 2017
Date of Patent: November 26, 2019
Assignee: Toyota Research Institute, Inc.
Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice, Ryan W. Wolcott
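Isolating dynamic objects "as a function of the location model" can be illustrated with a minimal occupancy-grid diff: cells occupied in the vehicle's current model but free in the stored prior model are flagged as dynamic. The grids below are illustrative assumptions; a real system would compare registered 3D models.

```python
# Assumed prior location model and current sensor-built model as
# occupancy grids (1 = occupied, 0 = free); data is illustrative.
prior_map = [
    [1, 1, 0, 0],
    [0, 0, 0, 0],
    [0, 0, 1, 1],
]
current_model = [
    [1, 1, 0, 0],
    [0, 1, 1, 0],   # an object present now but absent from the prior map
    [0, 0, 1, 1],
]

# Cells occupied now but free in the prior model are candidate dynamic
# objects, ready to be handed off for identification and labeling.
dynamic_cells = [
    (r, c)
    for r, row in enumerate(current_model)
    for c, occupied in enumerate(row)
    if occupied and not prior_map[r][c]
]
```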
-
Patent number: 10460053
Abstract: Systems, methods, and other embodiments described herein relate to identifying surface properties of objects using a light detection and ranging (LIDAR) sensor. In one embodiment, a method includes, in response to scanning a surface of an object using the LIDAR sensor, receiving a reflected waveform as a function of attributes of the surface. The method includes analyzing the reflected waveform according to a surface property model to produce an estimate of the attributes. The surface property model characterizes relationships between reflected waveforms and different surface properties. The method includes providing the estimate as an indication of the surface of the scanned object.
Type: Grant
Filed: April 24, 2017
Date of Patent: October 29, 2019
Assignee: Toyota Research Institute, Inc.
Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice
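A surface property model mapping reflected waveforms to surface attributes can be sketched as a nearest-prototype lookup: the waveform is reduced to a couple of features and matched against stored feature prototypes per surface class. The feature extraction, prototype values, and class names below are illustrative assumptions, not the patent's actual model.

```python
import math

# Assumed prototypes: (peak amplitude, pulse width) per surface class.
SURFACE_MODEL = {
    "retroreflective sign": (0.9, 2.0),
    "matte asphalt":        (0.2, 3.0),
    "wet road":             (0.1, 4.0),
}

def waveform_features(samples):
    """Reduce a sampled return to peak amplitude and a crude pulse width
    (number of samples above half the peak)."""
    peak = max(samples)
    above = [s for s in samples if s > 0.5 * peak]
    return peak, float(len(above))

def estimate_surface(samples):
    """Return the surface class whose prototype is nearest in feature space."""
    features = waveform_features(samples)
    return min(SURFACE_MODEL,
               key=lambda k: math.dist(SURFACE_MODEL[k], features))

reflected = [0.0, 0.1, 0.9, 0.8, 0.1, 0.0]  # a strong, narrow return
label = estimate_surface(reflected)
```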
-
Publication number: 20180306924
Abstract: Systems, methods, and other embodiments described herein relate to identifying surface properties of objects using a light detection and ranging (LIDAR) sensor. In one embodiment, a method includes, in response to scanning a surface of an object using the LIDAR sensor, receiving a reflected waveform as a function of attributes of the surface. The method includes analyzing the reflected waveform according to a surface property model to produce an estimate of the attributes. The surface property model characterizes relationships between reflected waveforms and different surface properties. The method includes providing the estimate as an indication of the surface of the scanned object.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice
-
Publication number: 20180307915
Abstract: Systems, methods, and other embodiments described herein relate to identifying changes between models of a locality. In one embodiment, a method includes, in response to determining that a location model is available for a present environment of a vehicle, generating a current model of the present environment using at least one sensor of the vehicle. The method also includes isolating dynamic objects in the current model as a function of the location model. The method includes providing the dynamic objects to be identified and labeled.
Type: Application
Filed: April 24, 2017
Publication date: October 25, 2018
Inventors: Edwin B. Olson, Michael R. James, Ryan M. Eustice, Ryan W. Wolcott
-
Patent number: 9989969
Abstract: An apparatus and method for visual localization include a visual camera system outputting real-time visual camera data and a graphics processing unit receiving the real-time visual camera data. The graphics processing unit accesses a database of prior map information and generates a synthetic image that is then compared to the real-time visual camera data to determine corrected position data. The graphics processing unit determines a camera position based on the corrected position data. In some embodiments, a corrective system adjusts navigation of the vehicle based on the determined camera position.
Type: Grant
Filed: January 19, 2016
Date of Patent: June 5, 2018
Assignee: The Regents of The University of Michigan
Inventors: Ryan M. Eustice, Ryan W. Wolcott
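The synthetic-image comparison can be conveyed with a tiny 1D sketch: for each candidate position, a synthetic view is "rendered" from prior map information and scored against the live camera image, and the best-matching position becomes the corrected position. The map, renderer, and scoring below are illustrative stand-ins for the patent's GPU-based pipeline.

```python
# Assumed prior map: appearance values along a 1D corridor (illustrative).
PRIOR_MAP = [0, 0, 5, 9, 5, 0, 0, 3, 0]

def render_synthetic(position, width=3):
    """Render the map patch a camera at `position` would observe."""
    return PRIOR_MAP[position:position + width]

def image_distance(a, b):
    """Sum of squared differences between two images."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

camera_image = [5, 9, 5]  # live view, actually taken at map position 2

# Compare the live image against a synthetic image for every candidate
# position; the best match yields the corrected position.
candidates = range(len(PRIOR_MAP) - 2)
corrected_position = min(
    candidates,
    key=lambda p: image_distance(camera_image, render_synthetic(p)),
)
```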
-
Patent number: 9934688
Abstract: A system includes a computer programmed to identify, from a first vehicle, one or more second vehicles within a specified distance of the first vehicle. The computer is further programmed to receive data about operations of each of the second vehicles, including trajectory data. Based on the data, the computer is programmed to identify, for each of the second vehicles, a distribution of probabilities of each of a set of potential planned trajectories. The computer is further programmed to determine a planned trajectory for the first vehicle, based on the respective distributions of probabilities of each of the set of potential planned trajectories for each of the second vehicles. The computer is further programmed to provide an instruction to at least one controller associated with the first vehicle based on the determined planned trajectory.
Type: Grant
Filed: July 31, 2015
Date of Patent: April 3, 2018
Assignees: FORD GLOBAL TECHNOLOGIES, LLC, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Edwin Olson, Enric Galceran, Alexander G. Cunningham, Ryan M. Eustice, James Robert McBride
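Planning against a distribution over another vehicle's trajectories can be sketched as an expected-cost decision: each candidate plan for the first vehicle is scored by its probability-weighted conflict with the second vehicle's possible trajectories. The grid trajectories, probabilities, and conflict cost below are illustrative assumptions.

```python
# Second vehicle: 70% it continues straight, 30% it cuts left.
# Each trajectory is a sequence of (time step, lane) cells; illustrative data.
other_vehicle = [
    (0.7, [(0, 1), (1, 1), (2, 1)]),  # (probability, trajectory)
    (0.3, [(0, 1), (1, 0), (2, 0)]),
]

ego_candidates = {
    "keep lane":   [(0, 0), (1, 0), (2, 0)],
    "change lane": [(0, 0), (1, 0), (2, 1)],
}

def conflict(traj_a, traj_b):
    """1 if the trajectories occupy the same cell at the same step, else 0."""
    return any(a == b for a, b in zip(traj_a, traj_b))

def expected_cost(ego_traj):
    """Probability-weighted conflict over the other vehicle's trajectories."""
    return sum(p * conflict(ego_traj, other_traj)
               for p, other_traj in other_vehicle)

planned = min(ego_candidates,
              key=lambda name: expected_cost(ego_candidates[name]))
```

The chosen plan would then be issued as an instruction to the vehicle's controller.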
-
Patent number: 9618938
Abstract: A system includes a computer programmed to determine, along a nominal path to be traversed by a vehicle, a potential field representing a driving corridor for the vehicle. The computer is further programmed to identify a position of the vehicle relative to the potential field at a current time, and apply a torque to a steering column of the vehicle. The torque is based at least in part on the position. The potential field includes an attractive potential that guides the vehicle to remain within the corridor.
Type: Grant
Filed: July 31, 2015
Date of Patent: April 11, 2017
Assignees: FORD GLOBAL TECHNOLOGIES, LLC, THE REGENTS OF THE UNIVERSITY OF MICHIGAN
Inventors: Edwin Olson, Enric Galceran, Ryan M. Eustice, James Robert McBride
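The attractive potential can be sketched as a quadratic well centered on the nominal path: the steering torque is proportional to the negative gradient of the potential at the vehicle's lateral offset, pulling it back toward the corridor center. The gain, corridor width, and units below are illustrative assumptions.

```python
GAIN = 2.0                 # assumed torque per unit of potential gradient
CORRIDOR_HALF_WIDTH = 1.5  # assumed corridor half-width (m)

def attractive_potential(lateral_offset):
    """Quadratic potential that grows as the vehicle leaves the path."""
    return 0.5 * (lateral_offset / CORRIDOR_HALF_WIDTH) ** 2

def potential_gradient(lateral_offset):
    """Derivative of the potential with respect to lateral offset."""
    return lateral_offset / CORRIDOR_HALF_WIDTH ** 2

def steering_torque(lateral_offset):
    """Torque applied to the steering column: descend the potential,
    i.e. push back toward the corridor center."""
    return -GAIN * potential_gradient(lateral_offset)

# Drifting 0.9 m right of the corridor center yields a leftward
# (negative) torque; on center, no torque is applied.
torque = steering_torque(0.9)
```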
-
Publication number: 20170031362
Abstract: A system includes a computer programmed to determine, along a nominal path to be traversed by a vehicle, a potential field representing a driving corridor for the vehicle. The computer is further programmed to identify a position of the vehicle relative to the potential field at a current time, and apply a torque to a steering column of the vehicle. The torque is based at least in part on the position. The potential field includes an attractive potential that guides the vehicle to remain within the corridor.
Type: Application
Filed: July 31, 2015
Publication date: February 2, 2017
Inventors: Edwin Olson, Enric Galceran, Ryan M. Eustice, James Robert McBride
-
Publication number: 20170031361
Abstract: A system includes a computer programmed to identify, from a first vehicle, one or more second vehicles within a specified distance of the first vehicle. The computer is further programmed to receive data about operations of each of the second vehicles, including trajectory data. Based on the data, the computer is programmed to identify, for each of the second vehicles, a distribution of probabilities of each of a set of potential planned trajectories. The computer is further programmed to determine a planned trajectory for the first vehicle, based on the respective distributions of probabilities of each of the set of potential planned trajectories for each of the second vehicles. The computer is further programmed to provide an instruction to at least one controller associated with the first vehicle based on the determined planned trajectory.
Type: Application
Filed: July 31, 2015
Publication date: February 2, 2017
Inventors: Edwin Olson, Enric Galceran, Alexander G. Cunningham, Ryan M. Eustice, James Robert McBride
-
Publication number: 20160209846
Abstract: An apparatus and method for visual localization include a visual camera system outputting real-time visual camera data and a graphics processing unit receiving the real-time visual camera data. The graphics processing unit accesses a database of prior map information and generates a synthetic image that is then compared to the real-time visual camera data to determine corrected position data. The graphics processing unit determines a camera position based on the corrected position data. In some embodiments, a corrective system adjusts navigation of the vehicle based on the determined camera position.
Type: Application
Filed: January 19, 2016
Publication date: July 21, 2016
Inventors: Ryan M. Eustice, Ryan W. Wolcott