Patents by Inventor Michael James Delp

Michael James Delp has filed for patents to protect the following inventions. This listing includes published patent applications as well as patents granted by the United States Patent and Trademark Office (USPTO). Each published application below shares its abstract and filing date with a later-granted patent in the list, so those abstracts appear twice.

  • Patent number: 11727169
    Abstract: Systems, methods, and other embodiments described herein relate to simulating sensor data. In one embodiment, a method includes, in response to receiving a request to generate simulated information corresponding to the sensor data, acquiring the sensor data that includes at least range information about a perceived environment. The simulated information includes one or more attributes of the sensor data that are absent from the sensor data in its current format. The method includes computing simulated information of the sensor data using a machine learning model that accepts the sensor data and labels as an input and produces the simulated information as an output. The labels identify at least objects in the perceived environment that are depicted by the sensor data. The method includes providing the simulated information with the sensor data. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: August 15, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
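
The abstract above describes a data flow but not the model, the sensor, or which attributes are synthesized. Purely as an illustration of that flow, the Python/NumPy sketch below assumes the missing attribute is per-point lidar intensity and substitutes a plain least-squares fit for the patent's unspecified machine learning model; every name and value in it is hypothetical.

```python
import numpy as np

# Hypothetical setup: N lidar returns, each carrying a range value and a
# semantic class label.  The "missing attribute" is assumed to be
# per-point intensity; the patent does not say what it is.
rng = np.random.default_rng(0)
N, NUM_CLASSES = 1000, 4
ranges = rng.uniform(1.0, 80.0, size=N)        # range info from the sensor
labels = rng.integers(0, NUM_CLASSES, size=N)  # labels identifying objects

def featurize(ranges, labels):
    """Model input: the sensor data and the labels, per the abstract."""
    one_hot = np.eye(NUM_CLASSES)[labels]
    return np.hstack([ranges[:, None], one_hot])

# Stand-in "machine learning model": a linear least-squares fit trained on
# synthetic ground-truth intensities.  Any regressor could replace it.
X = featurize(ranges, labels)
true_w = rng.normal(size=X.shape[1])
y = X @ true_w + 0.01 * rng.normal(size=N)
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# Model output: the simulated attribute, provided with the sensor data.
simulated_intensity = X @ w
augmented = np.hstack([ranges[:, None], simulated_intensity[:, None]])
print(augmented[:3])
```

The point is the interface, not the regressor: sensor data plus labels go in, the simulated attribute comes out and is returned alongside the original data.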
  • Patent number: 11354547
    Abstract: Systems, methods, and other embodiments described herein relate to improving clustering of points within a point cloud. In one embodiment, a method includes grouping the points into cells of a grid. The grid divides an observed region of a surrounding environment associated with the point cloud into the cells. The method includes computing feature vectors for the cells that use cell features to characterize the points in the cells and relationships between the cells. The method includes analyzing the feature vectors according to a clustering model to identify clusters for the cells. The clustering model evaluates the cells to identify which of the cells belong to common entities. The method includes providing the clusters as assignments of the points to the entities depicted in the point cloud. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: March 31, 2020
    Date of Patent: June 7, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Michael James Delp, Antonio Prioletti, Matthew T. Kliemann, Randall J. St. Romain, II
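
To make the pipeline's shape concrete, here is a hedged Python/NumPy sketch. The patent does not disclose the cell features or the clustering model; the sketch uses per-cell point membership as a stand-in feature and an 8-connected flood fill over occupied cells as a stand-in clustering model, on a 2D grid for brevity.

```python
import numpy as np
from collections import deque

# Two well-separated synthetic "entities" in a 2D point cloud (the patent
# targets 3D point clouds; 2D keeps the sketch short).
rng = np.random.default_rng(1)
points = np.vstack([
    rng.normal([0.0, 0.0], 0.5, size=(100, 2)),    # entity A
    rng.normal([10.0, 10.0], 0.5, size=(100, 2)),  # entity B
])

# Group the points into cells of a grid.
CELL = 1.0
cells = np.floor(points / CELL).astype(int)

# Per-cell "feature": the indices of its member points.  The patent's
# cell features and inter-cell relationships are unspecified stand-ins.
members = {}
for idx, cell in enumerate(map(tuple, cells)):
    members.setdefault(cell, []).append(idx)

def flood_fill_clusters(occupied):
    """Stand-in clustering model: 8-connected components of occupied
    cells, i.e., adjacent occupied cells belong to a common entity."""
    labels, current = {}, 0
    for start in occupied:
        if start in labels:
            continue
        labels[start] = current
        queue = deque([start])
        while queue:
            cx, cy = queue.popleft()
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    nb = (cx + dx, cy + dy)
                    if nb in occupied and nb not in labels:
                        labels[nb] = current
                        queue.append(nb)
        current += 1
    return labels

cell_labels = flood_fill_clusters(set(members))
# Provide the clusters as assignments of the points to entities.
point_labels = np.array([cell_labels[tuple(c)] for c in cells])
print(np.unique(point_labels))  # two entities -> two cluster ids
```

Clustering cells rather than raw points is the structural idea the abstract emphasizes: the per-point problem collapses to a much smaller per-cell one, and point assignments fall out of the cell assignments.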
  • Publication number: 20210303916
    Abstract: Systems, methods, and other embodiments described herein relate to improving clustering of points within a point cloud. In one embodiment, a method includes grouping the points into cells of a grid. The grid divides an observed region of a surrounding environment associated with the point cloud into the cells. The method includes computing feature vectors for the cells that use cell features to characterize the points in the cells and relationships between the cells. The method includes analyzing the feature vectors according to a clustering model to identify clusters for the cells. The clustering model evaluates the cells to identify which of the cells belong to common entities. The method includes providing the clusters as assignments of the points to the entities depicted in the point cloud.
    Type: Application
    Filed: March 31, 2020
    Publication date: September 30, 2021
    Inventors: Michael James Delp, Antonio Prioletti, Matthew T. Kliemann, Randall J. St. Romain, II
  • Patent number: 11126891
    Abstract: Systems, methods, and other embodiments described herein relate to simulating sensor data for a scene. In one embodiment, a method includes, in response to receiving a request to generate simulated sensor data for the scene, acquiring simulation data about the scene. The simulation data includes at least simulated information about the scene that is computer-generated. The method includes computing the simulated sensor data using a generative neural network that accepts the simulation data as an input and produces the simulated sensor data as an output. The simulated sensor data is a simulated perception of the scene by a sensor. The method includes providing the simulated sensor data as part of the scene. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: September 11, 2019
    Date of Patent: September 21, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
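
The patent leaves the generative network's architecture open. Purely as a shape-level illustration, the PyTorch sketch below maps assumed simulation inputs (a per-pixel depth channel plus a semantic channel) to an assumed output (a one-channel intensity image); the layer sizes and channel semantics are invented for the example.

```python
import torch
import torch.nn as nn

# Invented architecture and channel semantics: the input stands in for
# computer-generated simulation data (a depth channel plus a semantic
# channel); the output stands in for the simulated sensor data (a
# one-channel intensity image).
class Generator(nn.Module):
    def __init__(self, in_channels: int = 2, out_channels: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, out_channels, kernel_size=3, padding=1),
        )

    def forward(self, simulation_data: torch.Tensor) -> torch.Tensor:
        # Accepts the simulation data as input and produces the simulated
        # sensor data as output, per the abstract.
        return self.net(simulation_data)

generator = Generator()
simulation_data = torch.rand(1, 2, 64, 64)  # one 64x64 synthetic scene
simulated_sensor_data = generator(simulation_data)
print(simulated_sensor_data.shape)          # torch.Size([1, 1, 64, 64])
```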
  • Publication number: 20210073584
    Abstract: Systems, methods, and other embodiments described herein relate to simulating sensor data for a scene. In one embodiment, a method includes, in response to receiving a request to generate simulated sensor data for the scene, acquiring simulation data about the scene. The simulation data includes at least simulated information about the scene that is computer-generated. The method includes computing the simulated sensor data using a generative neural network that accepts the simulation data as an input and produces the simulated sensor data as an output. The simulated sensor data is a simulated perception of the scene by a sensor. The method includes providing the simulated sensor data as part of the scene.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 11, 2021
    Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
  • Publication number: 20210073345
    Abstract: Systems, methods, and other embodiments described herein relate to simulating sensor data. In one embodiment, a method includes, in response to receiving a request to generate simulated information corresponding to the sensor data, acquiring the sensor data that includes at least range information about a perceived environment. The simulated information includes one or more attributes of the sensor data that are absent from the sensor data in its current format. The method includes computing simulated information of the sensor data using a machine learning model that accepts the sensor data and labels as an input and produces the simulated information as an output. The labels identify at least objects in the perceived environment that are depicted by the sensor data. The method includes providing the simulated information with the sensor data.
    Type: Application
    Filed: September 11, 2019
    Publication date: March 11, 2021
    Inventors: Randall J. St. Romain, II, Hiroyuki Funaya, Michael James Delp
  • Patent number: 10846818
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
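
A toy rendition of the registration step follows, in Python/NumPy. The intrinsic matrix, time offset, speed, heading, box coordinates, and point values are all invented; the sketch only shows motion-compensating the 3D segment for the capture-time offset and comparing it, in 3D, against a ray back-projected from the 2D boundary information.

```python
import numpy as np

# Invented pinhole intrinsics and timing: dt is the offset between the
# 3D (lidar) capture and the 2D (camera) capture.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dt = 0.05                    # seconds between lidar sweep and camera frame
speed, heading = 10.0, 0.0   # object speed (m/s) and heading (rad)

# 3D segment associated with the object (camera coordinates, meters).
segment = np.array([[-0.5, 0.0, 20.0],
                    [0.5, 0.0, 20.0],
                    [0.0, -0.5, 20.5]])

# Adjust the segment's position for the object's motion over dt.
velocity = speed * np.array([np.sin(heading), 0.0, np.cos(heading)])
shifted = segment + dt * velocity

# 2D boundary information: a bounding box in pixels [u0, v0, u1, v1].
box = np.array([310.0, 228.0, 336.0, 252.0])

# "Matching, in 3D space": back-project the box center to a viewing ray
# and compare it against the shifted segment's centroid direction.
center_px = np.array([(box[0] + box[2]) / 2, (box[1] + box[3]) / 2, 1.0])
ray = np.linalg.inv(K) @ center_px
ray /= np.linalg.norm(ray)

centroid = shifted.mean(axis=0)
direction = centroid / np.linalg.norm(centroid)
angular_error = np.degrees(np.arccos(np.clip(ray @ direction, -1.0, 1.0)))
print(f"angular mismatch: {angular_error:.2f} deg")
```

A small angular mismatch would indicate that this 3D segment and this 2D boundary belong to the same object; an association step over many candidates is elided here.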
  • Patent number: 10846817
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time. (A minimal code sketch follows this entry.)
    Type: Grant
    Filed: November 15, 2018
    Date of Patent: November 24, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Yusuke Kanzawa, Michael James Delp
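
The sketch below illustrates the two alternative orderings the abstract names, shift-then-project versus project-then-shift, with the same invented camera model and motion values as the previous sketch; the pixel-classification step is elided.

```python
import numpy as np

# Invented camera model and motion, as in the previous sketch.
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])
dt = 0.05
speed, heading = 10.0, np.pi / 2             # moving along +x, camera frame
velocity = speed * np.array([np.sin(heading), 0.0, np.cos(heading)])

segment = np.array([[-0.5, 0.0, 20.0],
                    [0.5, 0.0, 20.0]])

def project(points):
    """Pinhole projection of 3D points to pixel coordinates."""
    uvw = (K @ points.T).T
    return uvw[:, :2] / uvw[:, 2:3]

# Path (1): shift the 3D segment to the image capture time, then project.
path1 = project(segment + dt * velocity)

# Path (2): project first, then shift the projection in 2D by the motion's
# image-space displacement, approximated at the segment's mean depth.
mean_depth = segment[:, 2].mean()
shift_px = (K[:2, :2] @ (dt * velocity[:2])) / mean_depth
path2 = project(segment) + shift_px

print(path1)
print(path2)  # near-identical to path1 for small dt at this depth
```

For a small dt the two paths land on nearly the same pixels; the abstract presents them as alternative orderings of the same motion compensation.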
  • Publication number: 20200160542
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; identifies 2D boundary information for the object; determines a speed and a heading for the object; and registers the 3D segment with the 2D boundary information by adjusting the relative positions of the 3D segment and the 2D boundary information based on the speed and heading of the object and matching, in 3D space, the 3D segment with projected 2D boundary information.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Yusuke Kanzawa, Michael James Delp
  • Publication number: 20200160487
    Abstract: Systems and methods described herein relate to registering three-dimensional (3D) data with two-dimensional (2D) image data. One embodiment receives 3D data from one or more sensors and 2D image data from one or more cameras; identifies a 3D segment in the 3D data and associates it with an object; classifies pixels in the 2D image data; determines a speed and a heading for the object; and registers the 3D segment with a portion of the classified pixels by either (1) shifting the 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time and projecting the time-shifted 3D segment onto 2D image space; or (2) projecting the 3D segment onto 2D image space and shifting the projected 3D segment to a position that, based on the associated object's speed and heading, corresponds to a 2D image data capture time.
    Type: Application
    Filed: November 15, 2018
    Publication date: May 21, 2020
    Inventors: Yusuke Kanzawa, Michael James Delp