Patents by Inventor Jeffrey M. Walls

Jeffrey M. Walls has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240067207
    Abstract: Systems and methods for detecting roadway lane boundaries are disclosed herein. One embodiment receives image data of a portion of a roadway; receives historical vehicle trajectory data for the portion of the roadway; generates, from the historical vehicle trajectory data, a heatmap indicating, for a given pixel in the heatmap, an extent to which the given pixel coincides spatially with vehicle trajectories in the historical vehicle trajectory data; and projects the heatmap onto the image data to generate a composite image that is used in training a neural network to detect roadway lane boundaries, the projected heatmap acting as supervisory data. The trained neural network is deployed in a vehicle to generate and save map data including detected roadway lane boundaries for use by other vehicles or to control operation of the vehicle itself based, at least in part, on roadway lane boundaries detected by the trained neural network.
    Type: Application
    Filed: August 25, 2022
    Publication date: February 29, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jeffrey M. Walls, Jie Li, Steven A. Parkison
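
    A minimal sketch of the supervisory-heatmap idea in 20240067207: historical trajectories are rasterized into a per-pixel hit-count map, normalized, and stacked onto the camera image as a supervision channel. The image size, the toy trajectories, and the extra-channel composite are illustrative assumptions, not details from the filing.

        import numpy as np

        H, W = 128, 256
        image = np.zeros((H, W, 3), dtype=np.float32)       # stand-in camera image

        # Hypothetical trajectories, already projected into image coordinates.
        trajectories = [
            [(y, 40 + y // 4) for y in range(H)],           # gently curving path
            [(y, 180 - y // 8) for y in range(H)],
        ]

        heatmap = np.zeros((H, W), dtype=np.float32)
        for traj in trajectories:
            for r, c in traj:
                if 0 <= r < H and 0 <= c < W:
                    heatmap[r, c] += 1.0                    # count trajectory hits

        heatmap /= max(heatmap.max(), 1e-6)                 # normalize to [0, 1]

        # Composite image: RGB plus the heatmap as a supervisory channel for
        # training a lane-boundary network.
        composite = np.concatenate([image, heatmap[..., None]], axis=-1)
        print(composite.shape)                              # (128, 256, 4)
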
  • Publication number: 20240037961
    Abstract: Systems, methods, and other embodiments described herein relate to the detection of lanes in a driving scene through segmenting road regions using an ontology enhanced to derive semantic context. In one embodiment, a method includes segmenting an image of a driving scene, independent of maps, by lane lines and road regions defined by an ontology, where a pixel subset from the image has semantics of lane information from the ontology. The method also includes computing pixel depth from the image for the lane lines and the road regions using a model. The method also includes deriving 3D context using relations between the semantics and the pixel depth, where the relations infer a driving lane for a vehicle from the types of the lane lines and the road regions adjacent to the driving lane. The method also includes executing a task to control the vehicle on the driving lane using the 3D context.
    Type: Application
    Filed: July 26, 2022
    Publication date: February 1, 2024
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Shunsho Kaku, Jeffrey M. Walls, Jie Li, Kun-Hsin Chen, Steven A. Parkison
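
    A minimal sketch of the pipeline in 20240037961: per-pixel ontology labels plus per-pixel depth are combined into 3D context, from which the ego driving lane is inferred from the adjacent lane-line types. The class ids, the toy scene, and the right-hand-traffic rule are illustrative assumptions.

        import numpy as np

        LANE_LINE_DASHED, LANE_LINE_SOLID, ROAD = 0, 1, 2

        H, W = 4, 8
        semantics = np.full((H, W), ROAD)
        semantics[:, 0] = LANE_LINE_SOLID       # left road boundary
        semantics[:, 4] = LANE_LINE_DASHED      # internal dashed line
        semantics[:, 7] = LANE_LINE_SOLID       # right road boundary
        depth = np.linspace(5.0, 20.0, H)[:, None] * np.ones((H, W))  # meters

        # Lift each labeled pixel to (column, depth, label): a crude stand-in
        # for deriving 3D context from the semantics and the pixel depth.
        points3d = [(c, depth[r, c], semantics[r, c])
                    for r in range(H) for c in range(W)]

        # Infer the ego lane: road pixels between the dashed line and the
        # right solid boundary (assuming right-hand traffic).
        cols = sorted(c for c in range(W) if semantics[0, c] != ROAD)
        left, right = cols[-2], cols[-1]
        lane = [p for p in points3d if left < p[0] < right and p[2] == ROAD]
        print(f"driving lane: columns {left + 1}..{right - 1}, {len(lane)} pixels")
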
  • Publication number: 20230334876
    Abstract: A method for an end-to-end boundary lane detection system is described. The method includes gridding a red-green-blue (RGB) image captured by a camera sensor mounted on an ego vehicle into a plurality of image patches. The method also includes generating different image patch embeddings to provide correlations between the plurality of image patches and the RGB image. The method further includes encoding the different image patch embeddings into predetermined categories, grid offsets, and instance identifications. The method also includes generating lane boundary keypoints of the RGB image based on the encoding of the different image patch embeddings.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 19, 2023
    Applicants: Toyota Research Institute, Inc., Toyota Jidosha Kabushiki Kaisha
    Inventors: Kun-Hsin Chen, Shunsho Kaku, Jie Li, Steven Parkison, Jeffrey M. Walls, Kuan-Hui Lee
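
    A minimal sketch of the patch pipeline in 20230334876: the RGB image is gridded into patches, each patch is embedded, and each embedding is decoded into a lane/background category, a grid offset, and an instance id, from which keypoints are assembled. The random projections stand in for the learned embedding and decoder heads; all sizes are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        H, W, P, D = 64, 64, 16, 32                   # image, patch, embedding dims
        image = rng.random((H, W, 3)).astype(np.float32)

        # Grid the image into (H/P) * (W/P) flattened patches.
        patches = image.reshape(H // P, P, W // P, P, 3).transpose(0, 2, 1, 3, 4)
        patches = patches.reshape(-1, P * P * 3)

        embed = patches @ rng.standard_normal((P * P * 3, D))  # patch embeddings

        # Toy decoder heads: category, (dy, dx) grid offset, instance id.
        is_lane = embed @ rng.standard_normal(D) > 0
        offsets = np.tanh(embed @ rng.standard_normal((D, 2))) * (P / 2) + P / 2
        instance = (embed @ rng.standard_normal(D) > 0).astype(int)

        keypoints = []
        for idx in np.flatnonzero(is_lane):
            gy, gx = divmod(int(idx), W // P)         # patch grid coordinates
            dy, dx = offsets[idx]                     # offset within the patch
            keypoints.append((gy * P + dy, gx * P + dx, instance[idx]))
        print(f"{len(keypoints)} lane-boundary keypoints")
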
  • Patent number: 11741724
    Abstract: A neural network can be configured to produce an electronic road map. The electronic road map can have information to distinguish lanes of a road. A feature in an image can be detected. The image can have been produced at a current time. The image can be of the road. The feature in the image can be determined to correspond to a feature, of a plurality of features, in a feature map. The feature map can have been produced at a prior time from one or more images. A labeled training map can be produced from the feature in the image and the plurality of features in the feature map. The labeled training map can have the information to distinguish the lanes of the road. The neural network can be trained to produce, in response to a receipt of the image and the feature map, the labeled training map.
    Type: Grant
    Filed: February 25, 2021
    Date of Patent: August 29, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventors: Shunsho Kaku, Jeffrey M. Walls, Ryan W. Wolcott
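
    A minimal sketch of the labeling step in 11741724: features detected in the current image are matched against a feature map built at a prior time, and the matches stamp lane labels into a labeled training map. The descriptors, map layout, and match threshold are toy assumptions.

        import numpy as np

        rng = np.random.default_rng(1)
        # Prior-time feature map: one descriptor per feature, each tied to a
        # map cell and a lane id.
        map_desc = rng.random((10, 8))
        map_cell = [(i, i % 3) for i in range(10)]    # (cell index, lane id)

        # Features detected in the image produced at the current time
        # (here, noisy copies of a few map descriptors).
        img_desc = map_desc[[2, 5, 7]] + 0.01 * rng.standard_normal((3, 8))

        labeled_training_map = {}                     # cell -> lane label
        for d in img_desc:
            dists = np.linalg.norm(map_desc - d, axis=1)
            j = int(np.argmin(dists))
            if dists[j] < 0.1:                        # accept the correspondence
                cell, lane = map_cell[j]
                labeled_training_map[cell] = lane     # supervisory lane label

        print(labeled_training_map)  # cells now carry lane-distinguishing labels

    A network trained against such maps can then emit the labeled map directly in response to an image and the feature map.
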
  • Patent number: 11650059
    Abstract: A system for localizing a vehicle in an environment includes a computing device comprising a processor and a non-transitory computer readable memory, first map data stored in the non-transitory computer readable memory, where the first map data defines a plurality of features within the environment used to localize a vehicle within the environment, and a machine-readable instruction set. The machine-readable instruction set causes the computing device to: determine a portion of the first map data having a first type of road, determine a first accuracy specification for the first type of road, wherein the first accuracy specification identifies one or more features of the plurality of features defined in the first map data used to localize a vehicle traversing the first type of road within a predefined degree of accuracy, and create second map data for the first type of road.
    Type: Grant
    Filed: June 6, 2018
    Date of Patent: May 16, 2023
    Assignee: Toyota Research Institute, Inc.
    Inventor: Jeffrey M. Walls
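
    A minimal sketch of the map reduction in 11650059: the accuracy specification says which feature kinds suffice to localize on a given road type, and the second map keeps only those. The road types, feature kinds, and spec table are illustrative assumptions.

        # First map data: every mapped feature, tagged with its road type.
        first_map = [
            {"road": "highway", "kind": "lane_line", "pos": (0.0, 0.0)},
            {"road": "highway", "kind": "sign", "pos": (5.0, 1.0)},
            {"road": "highway", "kind": "curb", "pos": (9.0, 2.0)},
            {"road": "urban", "kind": "curb", "pos": (3.0, 7.0)},
        ]

        # Accuracy specification: feature kinds sufficient to localize within
        # a predefined degree of accuracy on each road type.
        accuracy_spec = {"highway": {"lane_line", "sign"}, "urban": {"curb"}}

        road_type = "highway"
        second_map = [f for f in first_map
                      if f["road"] == road_type
                      and f["kind"] in accuracy_spec[road_type]]
        print(second_map)   # the slimmer map used on this road type
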
  • Publication number: 20220269891
    Abstract: A neural network can be configured to produce an electronic road map. The electronic road map can have information to distinguish lanes of a road. A feature in an image can be detected. The image can have been produced at a current time. The image can be of the road. The feature in the image can be determined to correspond to a feature, of a plurality of features, in a feature map. The feature map can have been produced at a prior time from one or more images. A labeled training map can be produced from the feature in the image and the plurality of features in the feature map. The labeled training map can have the information to distinguish the lanes of the road. The neural network can be trained to produce, in response to a receipt of the image and the feature map, the labeled training map.
    Type: Application
    Filed: February 25, 2021
    Publication date: August 25, 2022
    Inventors: Shunsho Kaku, Jeffrey M. Walls, Ryan W. Wolcott
  • Patent number: 11393127
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
    Type: Grant
    Filed: March 30, 2020
    Date of Patent: July 19, 2022
    Assignee: Toyota Research Institute, Inc.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
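
    The patent formulates joint data association and projection estimation as a mixed integer linear program; as a much simpler stand-in, this sketch alternates optimal assignment (the Hungarian algorithm) with a least-squares translation update, and reduces lines to their midpoints. Every detail here is an illustrative simplification.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        rng = np.random.default_rng(2)
        mid3d = rng.random((6, 2)) * 10              # projected "3D line" midpoints
        true_t = np.array([2.0, -1.0])
        mid2d = mid3d[rng.permutation(6)] + true_t   # observed "2D line" midpoints

        t = np.zeros(2)
        for _ in range(10):
            # Data association: pairwise distances under the current transform.
            cost = np.linalg.norm(mid3d[:, None] + t - mid2d[None, :], axis=2)
            rows, cols = linear_sum_assignment(cost)
            # Transform update: least-squares translation for the matches.
            t = (mid2d[cols] - mid3d[rows]).mean(axis=0)

        print("estimated translation:", t)           # close to [2, -1]
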
  • Patent number: 11067693
    Abstract: Systems, methods, and other embodiments described herein relate to calibrating a light detection and ranging (LiDAR) sensor with a camera sensor. In one embodiment, a method includes controlling i) the LiDAR sensor to acquire point cloud data, and ii) the camera sensor to acquire an image. The point cloud data and the image at least partially overlap in relation to a field of view of a surrounding environment. The method includes projecting the point cloud data into the image to form a combined image. The method includes adjusting sensor parameters of the LiDAR sensor and the camera sensor according to the combined image to calibrate the LiDAR sensor and the camera sensor together.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: July 20, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jeffrey M. Walls, Ryan W. Wolcott
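
    A minimal sketch of the projection step in 11067693: LiDAR points are moved into the camera frame with the current extrinsic guess and projected through pinhole intrinsics, producing the combined image whose alignment quality drives the calibration. All matrices and points are illustrative.

        import numpy as np

        K = np.array([[500.0, 0.0, 320.0],           # pinhole intrinsics
                      [0.0, 500.0, 240.0],
                      [0.0, 0.0, 1.0]])
        R, t = np.eye(3), np.array([0.1, 0.0, 0.0])  # extrinsics being calibrated

        points_lidar = np.array([[2.0, 0.5, 10.0],   # LiDAR returns (meters)
                                 [-1.0, 0.2, 8.0]])

        cam = points_lidar @ R.T + t                 # LiDAR frame -> camera frame
        uvw = cam @ K.T                              # camera -> homogeneous pixels
        pixels = uvw[:, :2] / uvw[:, 2:3]            # perspective divide
        print(pixels)                                # locations to overlay on image

        # Calibration scores how well these projections line up with image
        # structure and adjusts R and t (the sensor parameters) accordingly.
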
  • Patent number: 11069085
    Abstract: A point cloud management system provides labels for each point within a point cloud map. The point cloud management system also provides a method to localize a vehicle using the labeled point cloud. The point cloud management system identifies objects within a scene using an obtained image. The point cloud management system labels the identified objects to register the identified objects against the point cloud. The registration of the objects is then used to localize the vehicle.
    Type: Grant
    Filed: February 13, 2019
    Date of Patent: July 20, 2021
    Assignee: Toyota Research Institute, Inc.
    Inventors: Jeffrey M. Walls, Ryan W. Wolcott
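
    A minimal sketch of the localization step in 11069085: objects identified in the image are matched by label to the labeled point cloud, and registering the two recovers the vehicle pose (reduced here to a 2D translation). The labels, positions, and one-landmark-per-label assumption are illustrative.

        import numpy as np

        # Labeled point cloud map: label -> landmark position in the map frame.
        cloud = {"stop_sign": np.array([12.0, 3.0]),
                 "pole": np.array([15.0, -2.0]),
                 "hydrant": np.array([9.0, 1.0])}

        # Objects identified in the camera scene, in the vehicle frame.
        detections = {"stop_sign": np.array([2.0, 1.0]),
                      "pole": np.array([5.0, -4.0]),
                      "hydrant": np.array([-1.0, -1.0])}

        # Registration: with label correspondences fixed, the least-squares
        # translation is the mean landmark offset.
        offsets = [cloud[k] - detections[k] for k in detections if k in cloud]
        vehicle_pos = np.mean(offsets, axis=0)
        print("vehicle position in map frame:", vehicle_pos)   # [10.  2.]
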
  • Publication number: 20210116553
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine a three-dimensional point cloud edge of the three-dimensional point cloud data and project it onto a two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
    Type: Application
    Filed: October 18, 2019
    Publication date: April 22, 2021
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
  • Patent number: 10962630
    Abstract: A system and method for calibrating sensors may include one or more processors, a first sensor configured to obtain a two-dimensional image, a second sensor configured to obtain three-dimensional point cloud data, and a memory device. The memory device stores a data collection module and a calibration module. The data collection module has instructions that configure the one or more processors to obtain the two-dimensional image and the three-dimensional point cloud data. The calibration module has instructions that configure the one or more processors to determine a three-dimensional point cloud edge of the three-dimensional point cloud data and project it onto a two-dimensional image edge, apply a branch-and-bound optimization algorithm to a plurality of rigid body transforms, determine a lowest cost transform of the plurality of rigid body transforms using the branch-and-bound optimization algorithm, and calibrate the first sensor with the second sensor using the lowest cost transform.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: March 30, 2021
    Assignees: Toyota Research Institute, Inc., The Regents of the University of Michigan
    Inventors: Jeffrey M. Walls, Steven A. Parkison, Ryan W. Wolcott, Ryan M. Eustice
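
    The two entries above share one disclosure: a branch-and-bound search over rigid body transforms for the lowest-cost edge alignment. This sketch shrinks the search to a 1D translation between two edge-point sets so the branch and bound steps stay visible; the Lipschitz lower bound and all data are illustrative assumptions.

        import heapq
        import numpy as np

        img_edges = np.array([1.0, 4.0, 7.5])        # "image" edge locations
        cloud_edges = np.array([-1.0, 2.0, 5.5])     # projected point cloud edges

        def cost(t):
            # Sum over cloud edges of the distance to the nearest image edge.
            return sum(np.abs(img_edges - (e + t)).min() for e in cloud_edges)

        L = len(cloud_edges)                 # Lipschitz constant of the cost
        best_t, best_cost = 0.0, cost(0.0)
        heap = [(-np.inf, -5.0, 5.0)]        # (lower bound, interval lo, hi)
        while heap:
            lb, lo, hi = heapq.heappop(heap)
            if lb >= best_cost:              # bound: prune this branch
                continue
            mid = 0.5 * (lo + hi)
            c = cost(mid)
            if c < best_cost:
                best_t, best_cost = mid, c
            if hi - lo > 1e-3:               # branch: split the interval
                for a, b in ((lo, mid), (mid, hi)):
                    child_lb = cost(0.5 * (a + b)) - L * (b - a) / 2
                    heapq.heappush(heap, (child_lb, a, b))

        print(f"lowest-cost transform: t = {best_t:.3f}, cost = {best_cost:.3f}")
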
  • Publication number: 20210082148
    Abstract: A system for determining the rigid-body transformation between 2D image data and 3D point cloud data includes a first sensor configured to capture image data of an environment, a second sensor configured to capture point cloud data of the environment, and a computing device communicatively coupled to the first sensor and the second sensor. The computing device is configured to receive image data from the first sensor and point cloud data from the second sensor, parameterize one or more 2D lines from the image data, parameterize one or more 3D lines from the point cloud data, align the one or more 2D lines with the one or more 3D lines by solving a registration problem formulated as a mixed integer linear program to simultaneously solve for a projection transform vector and a data association set, and generate a data mesh comprising the image data aligned with the point cloud data.
    Type: Application
    Filed: March 30, 2020
    Publication date: March 18, 2021
    Applicant: Toyota Research Institute, Inc.
    Inventors: Steven A. Parkison, Jeffrey M. Walls, Ryan W. Wolcott, Mohammad Saad, Ryan M. Eustice
  • Patent number: 10788585
    Abstract: Systems, methods, and other embodiments described herein relate to predicting the presence of occluded objects from a robotic device. In one embodiment, a method includes, in response to acquiring sensor data about a surrounding environment, analyzing the sensor data to identify a perceived object in the surrounding environment by determining at least a class of the perceived object. The method includes determining a presence factor associated with the perceived object according to an observation model. The presence factor indicates a likelihood of an occluded object existing in an occluded region associated with the perceived object. The method includes controlling one or more systems of the robotic device according to the presence factor.
    Type: Grant
    Filed: February 26, 2018
    Date of Patent: September 29, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Arash K. Ushani, Jeffrey M. Walls, Ryan W. Wolcott
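
    A minimal sketch of the presence factor in 10788585: an observation model maps the class of a perceived object to the likelihood that something relevant hides in the region it occludes, and that factor throttles the controller. The class table and the speed rule are illustrative assumptions.

        # Observation model: class -> probability an occluded road user
        # exists behind an object of that class (assumed numbers).
        OBSERVATION_MODEL = {
            "parked_bus": 0.6,
            "parked_car": 0.3,
            "trash_bin": 0.05,
        }

        def presence_factor(perceived_class: str) -> float:
            return OBSERVATION_MODEL.get(perceived_class, 0.1)

        def speed_scale(factor: float) -> float:
            # Control hook: slow down more when occluded objects are likely.
            return 1.0 - 0.5 * factor

        for cls in ("parked_bus", "trash_bin"):
            f = presence_factor(cls)
            print(f"{cls}: presence factor {f:.2f}, speed scale {speed_scale(f):.2f}")
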
  • Patent number: 10753750
    Abstract: Systems, methods, and other embodiments described herein relate to improving mapping of a surrounding environment by a mapping vehicle. In one embodiment, a method includes identifying dynamic objects within the surrounding environment that are proximate to the mapping vehicle from sensor data of at least one sensor of the mapping vehicle. The dynamic objects are trackable objects that are moving within the surrounding environment. The method includes generating paths of the dynamic objects through the surrounding environment relative to the mapping vehicle according to separate observations of the dynamic objects embodied within the sensor data. The method includes producing a map of the surrounding environment from the paths.
    Type: Grant
    Filed: July 12, 2018
    Date of Patent: August 25, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Ryan W. Wolcott, Jeffrey M. Walls
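
    A minimal sketch of the mapping idea in 10753750: tracked dynamic objects leave paths, and gridding those paths yields a map of the space other road users actually traverse. The tracks and grid resolution are illustrative assumptions.

        import numpy as np

        # Tracked dynamic objects: object id -> observed (x, y) positions
        # relative to the mapping vehicle.
        tracks = {
            1: [(0.0, 0.0), (1.0, 0.1), (2.0, 0.2), (3.0, 0.3)],
            2: [(0.0, 3.0), (1.0, 3.0), (2.0, 2.9)],
        }

        res = 1.0                            # grid cell size in meters
        drivable = np.zeros((5, 5), dtype=int)
        for path in tracks.values():
            for x, y in path:                # each traversed cell is evidence
                drivable[int(y / res), int(x / res)] += 1

        print(drivable)    # cells visited by dynamic objects form the map
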
  • Publication number: 20200257901
    Abstract: A point cloud management system provides labels for each point within a point cloud map. The point cloud management system also provides a method to localize a vehicle using the labeled point cloud. The point cloud management system identifies objects within a scene using an obtained image. The point cloud management system labels the identified objects to register the identified objects against the point cloud. The registration of the objects is then used to localize the vehicle.
    Type: Application
    Filed: February 13, 2019
    Publication date: August 13, 2020
    Applicant: Toyota Research Institute, Inc.
    Inventors: Jeffrey M. Walls, Ryan W. Wolcott
  • Patent number: 10710599
    Abstract: Systems, methods, and other embodiments described herein relate to identifying changes within a surrounding environment of a vehicle. In one embodiment, a method includes collecting, using at least one sensor of the vehicle, sensor data about the surrounding environment. The method includes analyzing the sensor data to identify features of the surrounding environment. The features are landmarks of the surrounding environment that are indicated within a map of the surrounding environment. The sensor data indicates at least measurements associated with the features as acquired by the at least one sensor. The method includes computing persistence likelihoods for the features according to a persistence model that characterizes at least prior observations of the features and relationships between the features. The persistence likelihoods indicate estimated persistences of the features, that is, whether the features likely still exist.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: July 14, 2020
    Assignee: Toyota Research Institute, Inc.
    Inventors: Fernando Nobre, Jeffrey M. Walls, Paul J. Ozog
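
    A minimal sketch of a persistence update in the spirit of 10710599: the probability that a mapped feature still exists decays with time and is revised by each detection or miss via Bayes' rule. The survival rate and sensor-model numbers are illustrative assumptions, and the patent's model additionally conditions on relationships between features.

        def update_persistence(prior: float, detected: bool, dt: float,
                               survival: float = 0.99, p_det: float = 0.9,
                               p_false: float = 0.05) -> float:
            # Time decay: the feature may have vanished since the last look.
            p = prior * survival ** dt
            # Measurement update via Bayes' rule on feature existence.
            like_exists = p_det if detected else 1.0 - p_det
            like_gone = p_false if detected else 1.0 - p_false
            return like_exists * p / (like_exists * p + like_gone * (1.0 - p))

        p = 0.9
        for detected in (True, True, False, False, False):
            p = update_persistence(p, detected, dt=1.0)
            print(f"detected={detected}: persistence likelihood {p:.3f}")
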
  • Publication number: 20200018606
    Abstract: Systems, methods, and other embodiments described herein relate to improving mapping of a surrounding environment by a mapping vehicle. In one embodiment, a method includes identifying dynamic objects within the surrounding environment that are proximate to the mapping vehicle from sensor data of at least one sensor of the mapping vehicle. The dynamic objects are trackable objects that are moving within the surrounding environment. The method includes generating paths of the dynamic objects through the surrounding environment relative to the mapping vehicle according to separate observations of the dynamic objects embodied within the sensor data. The method includes producing a map of the surrounding environment from the paths.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 16, 2020
    Inventors: Ryan W. Wolcott, Jeffrey M. Walls
  • Publication number: 20200018852
    Abstract: Systems, methods, and other embodiments described herein relate to calibrating a light detection and ranging (LiDAR) sensor with a camera sensor. In one embodiment, a method includes controlling i) the LiDAR sensor to acquire point cloud data, and ii) the camera sensor to acquire an image. The point cloud data and the image at least partially overlap in relation to a field of view of a surrounding environment. The method includes projecting the point cloud data into the image to form a combined image. The method includes adjusting sensor parameters of the LiDAR sensor and the camera sensor according to the combined image to calibrate the LiDAR sensor and the camera sensor together.
    Type: Application
    Filed: July 12, 2018
    Publication date: January 16, 2020
    Inventors: Jeffrey M. Walls, Ryan W. Wolcott
  • Publication number: 20190376797
    Abstract: A system for localizing a vehicle in an environment includes a computing device comprising a processor and a non-transitory computer readable memory, first map data stored in the non-transitory computer readable memory, where the first map data defines a plurality of features within the environment used to localize a vehicle within the environment, and a machine-readable instruction set. The machine-readable instruction set causes the computing device to: determine a portion of the first map data having a first type of road, determine a first accuracy specification for the first type of road, wherein the first accuracy specification identifies one or more features of the plurality of features defined in the first map data used to localize a vehicle traversing the first type of road within a predefined degree of accuracy, and create second map data for the first type of road.
    Type: Application
    Filed: June 6, 2018
    Publication date: December 12, 2019
    Applicant: Toyota Research Institute, Inc.
    Inventor: Jeffrey M. Walls
  • Publication number: 20190084577
    Abstract: Systems, methods, and other embodiments described herein relate to identifying changes within a surrounding environment of a vehicle. In one embodiment, a method includes collecting, using at least one sensor of the vehicle, sensor data about the surrounding environment. The method includes analyzing the sensor data to identify features of the surrounding environment. The features are landmarks of the surrounding environment that are indicated within a map of the surrounding environment. The sensor data indicates at least measurements associated with the features as acquired by the at least one sensor. The method includes computing persistence likelihoods for the features according to a persistence model that characterizes at least prior observations of the features and relationships between the features. The persistence likelihoods indicate estimated persistences of the features, that is, whether the features likely still exist.
    Type: Application
    Filed: February 27, 2018
    Publication date: March 21, 2019
    Inventors: Fernando Nobre, Jeffrey M. Walls, Paul J. Ozog