Patents by Inventor Steven A. Parkison

Steven A. Parkison has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240067207
    Abstract: Systems and methods for detecting roadway lane boundaries are disclosed herein. One embodiment receives image data of a portion of a roadway; receives historical vehicle trajectory data for the portion of the roadway; generates, from the historical vehicle trajectory data, a heatmap indicating, for a given pixel in the heatmap, an extent to which the given pixel coincides spatially with vehicle trajectories in the historical vehicle trajectory data; and projects the heatmap onto the image data to generate a composite image that is used in training a neural network to detect roadway lane boundaries, the projected heatmap acting as supervisory data. The trained neural network is deployed in a vehicle to generate and save map data including detected roadway lane boundaries for use by other vehicles or to control operation of the vehicle itself based, at least in part, on roadway lane boundaries detected by the trained neural network. (A minimal sketch of the heatmap-supervision step follows this entry.)
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jeffrey M. Walls, Jie Li, Steven A. Parkison
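
The abstract above describes rasterizing historical vehicle trajectories into a per-pixel heatmap and projecting it onto camera imagery so the composite can supervise a lane-boundary detector. The sketch below is a minimal illustration under simplifying assumptions: the trajectories are already expressed in image (pixel) coordinates, so the "projection" reduces to an alpha blend, and the function names and parameters are hypothetical rather than taken from the patent.

```python
import numpy as np

def trajectories_to_heatmap(trajectories, height, width, sigma=2.0):
    """Rasterize historical trajectories (lists of (row, col) pixel points) into a
    heatmap whose value reflects how often each pixel coincides with a trajectory."""
    counts = np.zeros((height, width), dtype=np.float32)
    for traj in trajectories:
        for r, c in traj:
            if 0 <= r < height and 0 <= c < width:
                counts[int(r), int(c)] += 1.0
    # Simple separable box blur so nearby pixels also receive support
    # (keeps the example dependency-free).
    kernel = int(2 * sigma) | 1          # odd window size
    pad = kernel // 2
    padded = np.pad(counts, pad, mode="edge")
    smoothed = np.zeros_like(counts)
    for dr in range(kernel):
        for dc in range(kernel):
            smoothed += padded[dr:dr + height, dc:dc + width]
    smoothed /= kernel * kernel
    if smoothed.max() > 0:
        smoothed /= smoothed.max()       # normalize to [0, 1]
    return smoothed

def composite_supervision(image, heatmap, alpha=0.5):
    """Overlay the heatmap on an RGB image to form the composite image used as
    supervisory data when training the lane-boundary network."""
    overlay = image.astype(np.float32)
    overlay[..., 0] = (1 - alpha) * overlay[..., 0] + alpha * 255.0 * heatmap
    return overlay.astype(np.uint8)

# Toy usage: two synthetic trajectories over a 100x200 image.
image = np.zeros((100, 200, 3), dtype=np.uint8)
trajs = [[(50, c) for c in range(200)], [(52, c) for c in range(200)]]
heat = trajectories_to_heatmap(trajs, 100, 200)
supervision = composite_supervision(image, heat)
```

In the described system the heatmap would come from real driving logs and be projected through the camera model; the blend above only stands in for that projection step.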
  • Publication number: 20240037961
    Abstract: Systems, methods, and other embodiments described herein relate to the detection of lanes in a driving scene through segmenting road regions using an ontology enhanced to derive semantic context. In one embodiment, a method includes segmenting an image of a driving scene, independent of maps, by lane lines and road regions defined by an ontology, where a pixel subset from the image has semantics of lane information from the ontology. The method also includes computing pixel depth from the image for the lane lines and the road regions using a model. The method also includes deriving 3D context using relations between the semantics and the pixel depth, where the relations infer a driving lane for a vehicle from types of the lane lines and the road regions adjacent to the driving lane. The method also includes executing a task to control the vehicle on the driving lane using the 3D context. (A brief sketch of lifting segmented lane pixels into 3D follows this entry.)
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Shunsho Kaku, Jeffrey M. Walls, Jie Li, Kun-Hsin Chen, Steven A. Parkison
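
The claim above combines semantic segmentation of lane lines and road regions with per-pixel depth to derive 3D context. Below is a minimal sketch of the geometric piece only: lifting segmented lane-line pixels into 3D camera coordinates with a pinhole model. The intrinsics, the segmentation mask, and the depth map are assumed inputs, and the helper name is hypothetical.

```python
import numpy as np

def lift_lane_pixels_to_3d(lane_mask, depth, fx, fy, cx, cy):
    """Back-project pixels labeled as lane lines into 3D camera coordinates.

    lane_mask : (H, W) boolean array from the semantic segmentation.
    depth     : (H, W) per-pixel depth in meters (e.g. from a depth model).
    fx, fy, cx, cy : pinhole camera intrinsics.
    Returns an (N, 3) array of [X, Y, Z] points for the masked pixels.
    """
    rows, cols = np.nonzero(lane_mask)
    z = depth[rows, cols]
    x = (cols - cx) * z / fx
    y = (rows - cy) * z / fy
    return np.stack([x, y, z], axis=1)

# Toy usage: a vertical stripe of "lane line" pixels at constant depth.
H, W = 120, 160
mask = np.zeros((H, W), dtype=bool)
mask[:, 80] = True
depth = np.full((H, W), 10.0)            # 10 m everywhere (toy value)
points_3d = lift_lane_pixels_to_3d(mask, depth, fx=100.0, fy=100.0, cx=80.0, cy=60.0)
# points_3d can then be related to adjacent road regions to infer the driving lane.
```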
  • Publication number: 20230334876
    Abstract: A method for an end-to-end boundary lane detection system is described. The method includes gridding a red-green-blue (RGB) image captured by a camera sensor mounted on an ego vehicle into a plurality of image patches. The method also includes generating different image patch embeddings to provide correlations between the plurality of image patches and the RGB image. The method further includes encoding the different image patch embeddings into predetermined categories, grid offsets, and instance identifications. The method also includes generating lane boundary keypoints of the RGB image based on the encoding of the different image patch embeddings. (A short sketch of the patch gridding and embedding steps follows this entry.)
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jie Li, Steven Parkison, Jeffrey M. Walls, Kuan-Hui Lee
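
The abstract above grids an RGB image into patches and embeds each patch before decoding lane-boundary keypoints. Below is a minimal, transformer-style sketch of the gridding and embedding steps only; the patch size, embedding dimension, and random projection weights are illustrative assumptions, not the patented architecture.

```python
import numpy as np

def grid_into_patches(image, patch=16):
    """Split an (H, W, 3) RGB image into non-overlapping patch x patch tiles.
    Returns an (N, patch*patch*3) array of flattened patches, row-major."""
    h, w, _ = image.shape
    h, w = h - h % patch, w - w % patch               # drop any ragged border
    tiles = (image[:h, :w]
             .reshape(h // patch, patch, w // patch, patch, 3)
             .transpose(0, 2, 1, 3, 4)
             .reshape(-1, patch * patch * 3))
    return tiles.astype(np.float32) / 255.0

def embed_patches(patches, dim=128, rng=None):
    """Linearly project flattened patches to `dim`-dimensional embeddings and
    add a positional term so patch order remains distinguishable."""
    rng = np.random.default_rng(0) if rng is None else rng
    proj = rng.normal(scale=0.02, size=(patches.shape[1], dim))
    pos = rng.normal(scale=0.02, size=(patches.shape[0], dim))
    return patches @ proj + pos

# Toy usage: a 224x224 image becomes a 14x14 grid of 16x16 patches.
img = np.zeros((224, 224, 3), dtype=np.uint8)
patches = grid_into_patches(img)          # shape (196, 768)
embeddings = embed_patches(patches)       # shape (196, 128)
# A downstream head would decode categories, grid offsets, and instance IDs per
# patch, from which the lane-boundary keypoints are assembled.
```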
  • Patent number: 11393127
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from image data, parameterize one or more 3D lines from point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data. (A sketch of the line parameterization and projection residual follows this entry.)
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 19, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
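
The granted claims above parameterize 2D lines from the image, parameterize 3D lines from the point cloud, and align them by jointly solving for a projection transform and a data association in a mixed integer linear program. The sketch below covers only the geometric front end under textbook assumptions: a 2D line in normal form (n, d) with unit normal n, a 3D line as a point plus direction, and a pinhole projection used to score how well a candidate transform maps a 3D line onto a 2D line. The names and the specific residual are illustrative, not the patent's formulation.

```python
import numpy as np

def project_point(p_cam, fx, fy, cx, cy):
    """Pinhole projection of a 3D point in camera coordinates to pixel coordinates."""
    x, y, z = p_cam
    return np.array([fx * x / z + cx, fy * y / z + cy])

def line_alignment_residual(line_2d, line_3d, R, t, intrinsics):
    """Distance-based residual between a projected 3D line and a 2D image line.

    line_2d : (n, d) with unit normal n and offset d, i.e. n . u = d for pixels u.
    line_3d : (p0, v), a point on the line and its direction.
    R, t    : candidate rigid-body rotation (3x3) and translation (3,).
    """
    fx, fy, cx, cy = intrinsics
    p0, v = line_3d
    # Sample two points on the 3D line, move them into the camera frame,
    # project them, and accumulate their distance to the 2D line.
    residual = 0.0
    for s in (0.0, 1.0):
        p_cam = R @ (p0 + s * v) + t
        u = project_point(p_cam, fx, fy, cx, cy)
        n, d = line_2d
        residual += abs(n @ u - d)
    return residual

# Toy usage: a horizontal image line at row 240 and a 3D line ahead of the camera.
intr = (500.0, 500.0, 320.0, 240.0)
line_2d = (np.array([0.0, 1.0]), 240.0)                        # pixels with row 240
line_3d = (np.array([0.0, 0.0, 10.0]), np.array([1.0, 0.0, 0.0]))
r = line_alignment_residual(line_2d, line_3d, np.eye(3), np.zeros(3), intr)
# r == 0 here: a 3D line at the optical-axis height projects exactly onto row 240.
```

Residuals like this, evaluated over candidate transform/association pairs, are the costs a mixed integer formulation would minimize; the association side is sketched under the corresponding published application later in this listing.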
  • Publication number: 20210116553
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto the two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform. (A sketch of the edge-alignment cost being searched follows this entry.)
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
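
The application above calibrates a camera against a 3D sensor by projecting point-cloud edges onto image edges and searching rigid-body transforms with branch and bound. The sketch below shows only the cost being searched: given a candidate rotation and translation, project the point-cloud edge points into the image and score them against a distance transform of the image's edge map. The edge inputs, intrinsics, and scoring are simplified assumptions; SciPy's Euclidean distance transform is used for the scoring.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_distance_field(edge_mask):
    """For each pixel, the distance (in pixels) to the nearest image edge.
    `edge_mask` is a boolean (H, W) array, e.g. from an edge detector."""
    return distance_transform_edt(~edge_mask)

def calibration_cost(edge_points_3d, R, t, intrinsics, dist_field):
    """Mean distance from projected point-cloud edge points to the nearest image edge.
    Lower is better; the branch-and-bound search looks for the minimizing (R, t)."""
    fx, fy, cx, cy = intrinsics
    h, w = dist_field.shape
    total, count = 0.0, 0
    for p in edge_points_3d:
        x, y, z = R @ p + t
        if z <= 0.1:                       # behind or too close to the camera
            continue
        u, v = fx * x / z + cx, fy * y / z + cy
        if 0 <= u < w and 0 <= v < h:
            total += dist_field[int(v), int(u)]
            count += 1
    return total / count if count else np.inf

# Toy usage: a vertical image edge at column 100 and 3D edge points that should land on it.
edges = np.zeros((200, 300), dtype=bool)
edges[:, 100] = True
dist = edge_distance_field(edges)
edge_pts_3d = np.array([[-1.0, y, 10.0] for y in np.linspace(-1.0, 1.0, 20)])
cost = calibration_cost(edge_pts_3d, np.eye(3), np.zeros(3), (500.0, 500.0, 150.0, 100.0), dist)
# With the identity transform these points project to column 150 - 50 = 100, so cost is 0.
```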
  • Patent number: 10962630
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine and project a three-dimensional point cloud edge of the three-dimensional point cloud data onto the two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform. (A sketch of the branch-and-bound search loop itself follows this entry.)
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: March 30, 2021
    Assignees: Toyota Research Institute, Inc., The Regents of the University of Michigan
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
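
This grant shares its abstract with the published application 20210116553 above, where the sketch covered the edge-alignment cost. The remaining piece is the branch-and-bound search over candidate transforms, illustrated generically below. To stay short, the sketch minimizes any Lipschitz-continuous cost over an axis-aligned parameter box using a simple but valid lower bound: the cost at a box's center minus the Lipschitz constant times the box half-diagonal (the distance-transform cost from the previous sketch is 1-Lipschitz in pixel shift, which is what makes such bounds usable there). The bound, the 2D parameterization, and the toy cost are illustrative assumptions, not the patent's algorithm.

```python
import heapq
import numpy as np

def branch_and_bound_min(cost, center, half_width, lip, tol=1e-3, max_iter=10000):
    """Minimize `cost` over the box center +/- half_width (per axis) by branch and bound.

    `lip` is a Lipschitz constant for `cost`, so over any sub-box the value cannot
    drop below the value at its center by more than lip * (half-diagonal); that
    gives the lower bound used to prune sub-boxes."""
    center = np.asarray(center, float)
    half_width = np.asarray(half_width, float)
    best_x, best_f = center.copy(), cost(center)
    heap = [(best_f - lip * np.linalg.norm(half_width), tuple(center), tuple(half_width))]
    for _ in range(max_iter):
        if not heap:
            break
        bound, c, hw = heapq.heappop(heap)
        if bound >= best_f - tol:
            break                          # no remaining box can beat the incumbent
        c, hw = np.array(c), np.array(hw)
        axis = int(np.argmax(hw))          # split the box along its widest axis
        for sign in (-1.0, 1.0):
            child_c = c.copy()
            child_c[axis] += sign * hw[axis] / 2.0
            child_hw = hw.copy()
            child_hw[axis] /= 2.0
            f = cost(child_c)
            if f < best_f:
                best_x, best_f = child_c.copy(), f
            child_bound = f - lip * np.linalg.norm(child_hw)
            if child_bound < best_f - tol:
                heapq.heappush(heap, (child_bound, tuple(child_c), tuple(child_hw)))
    return best_x, best_f

# Toy stand-in for the edge-alignment cost: minimized at the "true" offset (0.3, -0.2).
true_offset = np.array([0.3, -0.2])
cost = lambda t: np.linalg.norm(t - true_offset)       # Lipschitz constant 1
t_star, f_star = branch_and_bound_min(cost, center=[0.0, 0.0], half_width=[1.0, 1.0], lip=1.0)
# t_star converges toward (0.3, -0.2); the real pipeline would plug in the projected-edge
# distance cost and a rigid-body transform parameterization in place of the toy cost.
```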
  • Publication number: 20210082148
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from image data, parameterize one or more 3D lines from point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data. (A sketch of the data-association step, posed as a small mixed integer program, follows this entry.)
    Type: Application
    Filed: March 30, 2020
    Publication date: March 18, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
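
This published application shares its abstract with granted patent 11393127 above, where the sketch covered the line parameterization and projection residual. The other ingredient the abstract names is the mixed integer linear program that selects the data association. Below is a toy illustration of just that part using SciPy's MILP interface: with the projection transform held fixed, binary variables pick one 3D line per 2D line so that the summed misalignment cost is minimized. Jointly optimizing the transform and the association, as the claims describe, is more involved; this is a simplified stand-in, and the cost matrix is assumed to be precomputed (for example with the hypothetical residual from the earlier sketch).

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

def associate_lines(cost):
    """Choose a 2D-to-3D line association minimizing total misalignment cost.

    cost : (n2, n3) array where cost[i, j] is the residual of matching 2D line i
           to 3D line j under a fixed candidate transform. Requires n3 >= n2.
    Returns match[i] = index of the 3D line assigned to 2D line i."""
    n2, n3 = cost.shape
    c = cost.ravel()                                   # x[i, j] flattened row-major
    rows = []
    # Each 2D line is assigned exactly one 3D line.
    for i in range(n2):
        a = np.zeros(n2 * n3)
        a[i * n3:(i + 1) * n3] = 1.0
        rows.append((a, 1.0, 1.0))
    # Each 3D line is used at most once.
    for j in range(n3):
        a = np.zeros(n2 * n3)
        a[j::n3] = 1.0
        rows.append((a, 0.0, 1.0))
    A = np.array([r[0] for r in rows])
    lb = np.array([r[1] for r in rows])
    ub = np.array([r[2] for r in rows])
    res = milp(c,
               constraints=LinearConstraint(A, lb, ub),
               integrality=np.ones(n2 * n3),           # all variables binary
               bounds=Bounds(0, 1))
    x = res.x.reshape(n2, n3)
    return [int(np.argmax(row)) for row in x]

# Toy usage: three 2D lines, four 3D candidates, with an obvious best assignment.
cost = np.array([[0.1, 5.0, 5.0, 5.0],
                 [5.0, 0.2, 5.0, 5.0],
                 [5.0, 5.0, 5.0, 0.3]])
print(associate_lines(cost))                           # -> [0, 1, 3]
```

Practical formulations typically also allow outlier rejection (leaving a 2D line unmatched at a penalty) and couple these binary variables with the transform variables in one program; the exact-assignment constraint above is only for brevity.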