Patents by Inventor Michael James Delp
Michael James Delp has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11727169
Abstract: System, methods, and other embodiments described herein relate to simulating sensor data. In one embodiment, a method includes, in response to receiving a request to generate simulated information corresponding to the sensor data, acquiring the sensor data that includes at least range information about a perceived environment. The simulated information includes one or more attributes of the sensor data that are absent from the sensor data in a current format. The method includes computing simulated information of the sensor data using a machine learning model that accepts the sensor data and labels as an input and produces the simulated information as an output. The labels identify at least objects in the perceived environment that are depicted by the sensor data. The method includes providing the simulated information with the sensor data.
Type: Grant
Filed: September 11, 2019
Date of Patent: August 15, 2023
Assignee: Toyota Research Institute, Inc.
Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
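The data flow in the abstract above (range-only points plus labels in, simulated attributes out) can be sketched as follows. This is a minimal, runnable stand-in, not the patented method: the per-class reflectivity table, the class names, and the range falloff are illustrative assumptions, and a trained machine learning model would take the place of the heuristic.

```python
def simulate_intensity(points, labels):
    """Attach a simulated 'intensity' attribute to range-only points.

    points: list of (x, y, z) tuples (range information)
    labels: list of class names, one per point
    Returns a list of dicts pairing each point with its simulated attribute.
    """
    # A real system would use a trained machine learning model here;
    # this per-class lookup is a stand-in so the data flow is runnable.
    base_reflectivity = {"vehicle": 0.8, "road": 0.3, "vegetation": 0.5}
    out = []
    for (x, y, z), label in zip(points, labels):
        rng = (x * x + y * y + z * z) ** 0.5
        # Crude falloff of return intensity with range.
        intensity = base_reflectivity.get(label, 0.4) / (1.0 + rng)
        out.append({"point": (x, y, z), "label": label, "intensity": intensity})
    return out
```

The key idea being illustrated is only the interface: the model consumes sensor data plus labels and emits an attribute the sensor data lacked in its current format.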
-
Patent number: 11354547
Abstract: System, methods, and other embodiments described herein relate to improving clustering of points within a point cloud. In one embodiment, a method includes grouping the points into cells of a grid. The grid divides an observed region of a surrounding environment associated with the point cloud into the cells. The method includes computing feature vectors for the cells that use cell features to characterize the points in the cells and relationships between the cells. The method includes analyzing the feature vectors according to a clustering model to identify clusters for the cells. The clustering model evaluates the cells to identify which of the cells belong to common entities. The method includes providing the clusters as assignments of the points to the entities depicted in the point cloud.
Type: Grant
Filed: March 31, 2020
Date of Patent: June 7, 2022
Assignee: Toyota Research Institute, Inc.
Inventors: Michael James Delp, Antonio Prioletti, Matthew T. Kliemann, Randall J. St. Romain, II
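The pipeline in the abstract above (points into grid cells, per-cell features, cell-level clustering) can be sketched as below. The cell size, the two features chosen (point count and mean height), and the use of connected components over adjacent occupied cells are assumptions of this sketch; the patent describes a learned clustering model in that last role.

```python
from collections import defaultdict, deque

def cluster_points(points, cell_size=1.0):
    """Group points (x, y, z) into ground-plane grid cells, compute simple
    per-cell features, then merge adjacent occupied cells into clusters.

    Connected components over the grid stand in for the clustering model
    that evaluates which cells belong to common entities.
    Returns (clusters, features): clusters are lists of point indices,
    features maps cell -> (point count, mean height).
    """
    cells = defaultdict(list)
    for i, (x, y, z) in enumerate(points):
        cells[(int(x // cell_size), int(y // cell_size))].append(i)

    # Simple per-cell feature vector: point count and mean height.
    features = {c: (len(idx), sum(points[i][2] for i in idx) / len(idx))
                for c, idx in cells.items()}

    # Connected components over 8-neighbour cell adjacency.
    seen, clusters = set(), []
    for start in cells:
        if start in seen:
            continue
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            c = queue.popleft()
            comp.extend(cells[c])
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (c[0] + dx, c[1] + dy)
                    if n in cells and n not in seen:
                        seen.add(n)
                        queue.append(n)
        clusters.append(sorted(comp))
    return clusters, features
```

Clustering at the cell level rather than the raw-point level is the structural point here: the number of items the clustering step must evaluate drops from the point count to the occupied-cell count.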
-
Publication number: 20210303916
Abstract: System, methods, and other embodiments described herein relate to improving clustering of points within a point cloud. In one embodiment, a method includes grouping the points into cells of a grid. The grid divides an observed region of a surrounding environment associated with the point cloud into the cells. The method includes computing feature vectors for the cells that use cell features to characterize the points in the cells and relationships between the cells. The method includes analyzing the feature vectors according to a clustering model to identify clusters for the cells. The clustering model evaluates the cells to identify which of the cells belong to common entities. The method includes providing the clusters as assignments of the points to the entities depicted in the point cloud.
Type: Application
Filed: March 31, 2020
Publication date: September 30, 2021
Inventors: Michael James Delp, Antonio Prioletti, Matthew T. Kliemann, Randall J. St. Romain, II
-
Patent number: 11126891
Abstract: System, methods, and other embodiments described herein relate to simulating sensor data for a scene. In one embodiment, a method includes, in response to receiving a request to generate simulated sensor data for the scene, acquiring simulation data about the scene. The simulation data includes at least simulated information about the scene that is computer-generated. The method includes computing the simulated sensor data using a generative neural network that accepts the simulation data as an input and produces the simulated sensor data as an output. The simulated sensor data is a simulated perception of the scene by a sensor. The method includes providing the simulated sensor data as part of the scene.
Type: Grant
Filed: September 11, 2019
Date of Patent: September 21, 2021
Assignee: Toyota Research Institute, Inc.
Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
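The generative step in the abstract above (computer-generated simulation data in, simulated sensor readings out) can be illustrated with a toy renderer. Everything below is an assumption of this sketch: the semantic-grid input format, the class names, and the nominal reading values; the patent's method uses a generative neural network where this lookup-plus-noise appears.

```python
import random

def render_simulated_sensor_data(semantic_grid, seed=0):
    """Toy stand-in for a generative neural network: map each
    computer-generated semantic cell to a plausible 8-bit sensor reading.

    semantic_grid: list of rows of class-name strings.
    Returns a grid of integer readings in [0, 255].
    """
    rng = random.Random(seed)  # seeded so the output is reproducible
    nominal = {"road": 30, "car": 120, "building": 200}
    out = []
    for row in semantic_grid:
        out.append([max(0, min(255, nominal.get(c, 80) + rng.randint(-10, 10)))
                    for c in row])
    return out
```

The interface, not the rendering, is the point: simulated scene description in, simulated perception of that scene out.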
-
Publication number: 20210073584
Abstract: System, methods, and other embodiments described herein relate to simulating sensor data for a scene. In one embodiment, a method includes, in response to receiving a request to generate simulated sensor data for the scene, acquiring simulation data about the scene. The simulation data includes at least simulated information about the scene that is computer-generated. The method includes computing the simulated sensor data using a generative neural network that accepts the simulation data as an input and produces the simulated sensor data as an output. The simulated sensor data is a simulated perception of the scene by a sensor. The method includes providing the simulated sensor data as part of the scene.
Type: Application
Filed: September 11, 2019
Publication date: March 11, 2021
Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
-
Publication number: 20210073345
Abstract: System, methods, and other embodiments described herein relate to simulating sensor data. In one embodiment, a method includes, in response to receiving a request to generate simulated information corresponding to the sensor data, acquiring the sensor data that includes at least range information about a perceived environment. The simulated information includes one or more attributes of the sensor data that are absent from the sensor data in a current format. The method includes computing simulated information of the sensor data using a machine learning model that accepts the sensor data and labels as an input and produces the simulated information as an output. The labels identify at least objects in the perceived environment that are depicted by the sensor data. The method includes providing the simulated information with the sensor data.
Type: Application
Filed: September 11, 2019
Publication date: March 11, 2021
Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
-
Patent number: 10846818
Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information.
Type: Grant
Filed: November 15, 2018
Date of Patent: November 24, 2020
Assignee: Toyota Research Institute, Inc.
Inventors: Yusuke Kanzawa, Michael James Delp
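Two of the building blocks in the abstract above, the speed-and-heading position adjustment and the 3D-to-2D projection, can be sketched as below. Ground-plane motion and the pinhole intrinsics are assumptions of this sketch, not values from the patent.

```python
import math

def time_shift_segment(segment, speed, heading, dt):
    """Shift a 3D segment to where the object would be after dt seconds,
    given its speed (m/s) and heading (radians in the ground plane).

    Assumes motion is parallel to the ground plane (constant z).
    """
    dx = speed * math.cos(heading) * dt
    dy = speed * math.sin(heading) * dt
    return [(x + dx, y + dy, z) for (x, y, z) in segment]

def project_to_image(points, fx=500.0, fy=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of camera-frame points (x right, y down,
    z forward) into pixel coordinates; intrinsics are illustrative.

    Points behind the camera (z <= 0) are dropped.
    """
    return [(fx * x / z + cx, fy * y / z + cy) for (x, y, z) in points if z > 0]
```

The motivation for the adjustment is the capture-time mismatch between sensors: a lidar sweep and a camera frame are not simultaneous, so a moving object's 3D segment must be moved along its estimated velocity before it can be matched against 2D boundary information.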
-
Patent number: 10846817
Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time.
Type: Grant
Filed: November 15, 2018
Date of Patent: November 24, 2020
Assignee: Toyota Research Institute, Inc.
Inventors: Yusuke Kanzawa, Michael James Delp
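The final matching step in the abstract above, registering a projected segment against classified pixels, can be sketched with a simple overlap score. The sparse `(u, v) -> class` mapping and the hit-fraction scoring are assumptions of this sketch; a real system would use a dense segmentation mask and the time-shifted projection from one of the two orderings the abstract describes.

```python
def register_against_pixels(projected_points, pixel_classes, target_class):
    """Score how well a projected 3D segment lines up with pixels
    classified as the object's class.

    projected_points: list of (u, v) pixel coordinates (floats allowed).
    pixel_classes: dict mapping integer (u, v) pixels to class names
        (a sparse stand-in for a dense per-pixel classification).
    Returns the fraction of projected points landing on target-class pixels.
    """
    if not projected_points:
        return 0.0
    hits = sum(1 for (u, v) in projected_points
               if pixel_classes.get((int(u), int(v))) == target_class)
    return hits / len(projected_points)
```

A registration pipeline could evaluate this score at candidate shifts and keep the alignment that maximizes it.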
-
Publication number: 20200160542
Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information.
Type: Application
Filed: November 15, 2018
Publication date: May 21, 2020
Inventors: Yusuke Kanzawa, Michael James Delp
-
Publication number: 20200160487
Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time.
Type: Application
Filed: November 15, 2018
Publication date: May 21, 2020
Inventors: Yusuke Kanzawa, Michael James Delp