Patents by Inventor Benjamin Englard

Benjamin Englard has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11435479
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle. The one or more sensors are configured to sense an environment through which the vehicle is moving by following a scan pattern comprising component scan lines. The method includes obtaining, by one or more processors, a point cloud frame based on the sensor data and representative of the environment and identifying, by the one or more processors, a point cloud object within the point cloud frame. The method further includes determining, by the one or more processors, that the point cloud object is skewed relative to an expected configuration of the point cloud object, and determining, by the one or more processors, a relative velocity of the point cloud object by analyzing the skew of the object.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: September 6, 2022
    Assignee: Luminar, LLC
    Inventors: Eric C. Danziger, Austin K. Russell, Benjamin Englard
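The skew-based velocity estimate described in the abstract above can be sketched as follows. This is an illustrative Python toy, not the patented method: it assumes scan lines are captured top-to-bottom over a known frame time, so a moving object's vertical edge appears slanted, and all names and values are invented.

```python
def velocity_from_skew(top_x, bottom_x, scan_duration_s):
    """A vertical edge of a stationary object should appear straight; on a
    moving object it is skewed because the bottom scan line is captured
    later than the top one. The lateral offset between the top and bottom
    of the edge, divided by the scan duration, approximates the relative
    lateral velocity."""
    skew_m = bottom_x - top_x           # lateral displacement (meters)
    return skew_m / scan_duration_s     # meters per second

# Example: an edge shifted 0.5 m over a 0.1 s scan implies ~5 m/s.
print(velocity_from_skew(top_x=10.0, bottom_x=10.5, scan_duration_s=0.1))
```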
  • Publication number: 20220187463
    Abstract: A method for determining a scan pattern according to which a sensor equipped with a scanner scans a field of regard (FOR) is presented. The method comprises obtaining, by processing hardware, a plurality of objective functions, each of the objective functions specifying a cost for a respective property of the scan pattern, expressed in terms of one or more operational parameters of the scanner. The method further includes applying, by the processing hardware, an optimization scheme to the plurality of objective functions to generate the scan pattern. The method further includes scanning the FOR according to the generated scan pattern.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 16, 2022
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Benjamin Englard, István Peter Burbank, Shubham C. Khilari, Meseret R. Gebre, Austin K. Russell
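The optimization scheme in the abstract above can be sketched as a weighted sum of objective functions minimized over candidate scanner parameters. This is a minimal stand-in (the objectives, weights, and parameter names are assumptions, not from the patent):

```python
def optimize_scan_pattern(objectives, weights, candidates):
    """objectives: callables f(params) -> cost; candidates: parameter dicts.
    Returns the candidate minimizing the weighted total cost."""
    def total_cost(params):
        return sum(w * f(params) for f, w in zip(objectives, weights))
    return min(candidates, key=total_cost)

# Hypothetical objectives: more scan lines cost frame time, fewer scan
# lines cost resolution in the field of regard.
time_cost = lambda p: p["lines"] / 100.0
resolution_cost = lambda p: 1.0 / p["lines"]
best = optimize_scan_pattern(
    [time_cost, resolution_cost], [1.0, 1.0],
    [{"lines": n} for n in (5, 10, 20, 40)],
)
print(best)   # the candidate balancing both costs
```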
  • Patent number: 11360197
Abstract: A system includes multiple sensors having different, and overlapping, fields of regard and a controller communicatively coupled to the sensors. Methods for calibrating the multiple sensors include obtaining data from the sensors, determining optimized transformation parameters for at least one of the sensors, and transforming the data from one sensor projection plane to the other sensor projection plane. The method is an iterative process that uses the metric of mutual information between the data sets, and transformed data sets of the sensors, to determine an optimized set of transformation parameters. The sensors may be a plurality of lidar sensors, a camera and a lidar sensor, or other sets of sensors.
    Type: Grant
    Filed: May 7, 2020
    Date of Patent: June 14, 2022
    Assignee: Luminar, LLC
    Inventors: Amey Sutavani, Lekha Walajapet Mohan, Benjamin Englard
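The mutual-information metric at the heart of the abstract above can be shown in miniature. This toy assumes 1-D discretized "sensor" readings and a single integer shift as the transformation parameter; the patent covers full sensor-to-sensor transformations, so everything here beyond the MI formula is an invented simplification:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """MI between two aligned discrete sequences (natural log)."""
    n = len(xs)
    px, py, pxy = Counter(xs), Counter(ys), Counter(zip(xs, ys))
    return sum((c / n) * math.log((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())

def best_shift(ref, moving, shifts):
    """Pick the candidate shift maximizing MI over overlapping samples."""
    def mi_at(s):
        pairs = [(ref[i], moving[i - s]) for i in range(len(ref))
                 if 0 <= i - s < len(moving)]
        return mutual_information([a for a, _ in pairs],
                                  [b for _, b in pairs])
    return max(shifts, key=mi_at)

ref = [0, 0, 0, 1, 1, 0, 1, 0]
moving = [0, 1, 1, 0, 1, 0]      # same signal, offset by 2 samples
print(best_shift(ref, moving, [-2, -1, 0, 1, 2]))
```

MI peaks when the two data sets are best aligned, which is what makes it a useful registration metric even when the sensors measure different modalities.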
  • Patent number: 11361449
    Abstract: A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images. The method also includes tracking multiple features through the sequence of images, including passing messages in a forward direction and a backward direction through the message passing graph to share information across time.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 14, 2022
    Assignee: Luminar, LLC
    Inventors: Vahid R. Ramezani, Akshay Rangesh, Benjamin Englard, Siddhesh S. Mhatre, Meseret R. Gebre, Pranav Maheshwari
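The forward and backward message passing described in the abstract above can be illustrated with a deliberately simple chain: each layer holds one per-frame estimate, and the two passes blend each layer with its temporal neighbors so information is shared across time. The blending scheme and values are assumptions for illustration only:

```python
def bidirectional_smooth(estimates, blend=0.5):
    """Pass messages forward then backward along a chain of frame layers,
    mixing each layer's estimate with its neighbor's message."""
    out = list(estimates)
    for t in range(1, len(out)):                  # forward messages
        out[t] = blend * out[t] + (1 - blend) * out[t - 1]
    for t in range(len(out) - 2, -1, -1):         # backward messages
        out[t] = blend * out[t] + (1 - blend) * out[t + 1]
    return out

track = [0.0, 10.0, 0.0, 10.0]   # noisy per-frame position estimates
print(bidirectional_smooth(track))
```

After both passes, every frame's estimate reflects evidence from both earlier and later frames, which is the payoff of bidirectional (rather than purely causal) message passing.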
  • Publication number: 20220076432
Abstract: A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images. The method also includes tracking multiple features through the sequence of images, including passing messages in a forward direction and a backward direction through the message passing graph to share information across time.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Vahid R. Ramezani, Akshay Rangesh, Benjamin Englard, Siddhesh S. Mhatre, Meseret R. Gebre, Pranav Maheshwari
  • Publication number: 20210208263
Abstract: A system includes multiple sensors having different, and overlapping, fields of regard and a controller communicatively coupled to the sensors. Methods for calibrating the multiple sensors include obtaining data from the sensors, determining optimized transformation parameters for at least one of the sensors, and transforming the data from one sensor projection plane to the other sensor projection plane. The method is an iterative process that uses the metric of mutual information between the data sets, and transformed data sets of the sensors, to determine an optimized set of transformation parameters. The sensors may be a plurality of lidar sensors, a camera and a lidar sensor, or other sets of sensors.
    Type: Application
    Filed: May 7, 2020
    Publication date: July 8, 2021
    Inventors: Amey Sutavani, Lekha Walajapet Mohan, Benjamin Englard
  • Publication number: 20210209941
    Abstract: A method for detecting boundaries of lanes on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes partitioning, by the one or more processors, the set of pixels into a plurality of groups. Each of the plurality of groups is associated with one or more control points. The method further includes generating, by the one or more processors, a spline that traverses the control points of the plurality of groups. The spline traversing the control points describes a boundary of a lane.
    Type: Application
    Filed: July 8, 2020
    Publication date: July 8, 2021
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Benjamin Englard
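The partition-then-fit pipeline in the abstract above can be sketched in a few lines. This toy assumes pixels are grouped by x-range and each group's centroid serves as a control point; a real implementation would fit a smooth spline through the control points, which this sketch omits:

```python
def lane_boundary_control_points(pixels, group_width):
    """Partition (x, y) lane-marking pixels into x-range groups and
    return each group's centroid as a spline control point."""
    groups = {}
    for x, y in pixels:
        groups.setdefault(int(x // group_width), []).append((x, y))
    control = []
    for k in sorted(groups):
        pts = groups[k]
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        control.append((cx, cy))
    return control

pixels = [(0, 1), (1, 1), (5, 2), (6, 2), (10, 4), (11, 4)]
print(lane_boundary_control_points(pixels, group_width=5))
```

A spline traversing these centroids then describes the lane boundary, as the abstract states.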
  • Patent number: 10984257
    Abstract: A method for controlling a vehicle based on sensor data having variable sensor parameter settings includes receiving sensor data generated by a vehicle sensor while the sensor is configured with a first sensor parameter setting. The method also includes receiving an indicator specifying the first sensor parameter setting, and selecting, based on the received indicator, one of a plurality of neural networks of a perception component, each neural network having been trained using training data corresponding to a different sensor parameter setting. The method also includes generating signals descriptive of a current state of the environment using the selected neural network and based on the received sensor data. The method further includes generating driving decisions based on the signals descriptive of the current state of the environment, and causing one or more operational subsystems of the vehicle to maneuver the vehicle in accordance with the generated driving decisions.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 20, 2021
    Assignee: Luminar Holdco, LLC
    Inventors: Benjamin Englard, Eric C. Danziger
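The network-selection step in the abstract above amounts to a lookup keyed by the sensor parameter setting indicator. A minimal sketch, with the setting names and stand-in "networks" entirely invented:

```python
class PerceptionComponent:
    """Holds one model per sensor parameter setting and dispatches to the
    one matching the received indicator."""
    def __init__(self, networks):
        self.networks = networks          # setting indicator -> model

    def perceive(self, sensor_data, setting):
        model = self.networks[setting]    # select the matching network
        return model(sensor_data)

# Hypothetical networks, each (notionally) trained on data captured with
# one sensor parameter setting.
perception = PerceptionComponent({
    "high_res": lambda d: f"high-res parse of {len(d)} points",
    "long_range": lambda d: f"long-range parse of {len(d)} points",
})
print(perception.perceive([1, 2, 3], "long_range"))
```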
  • Patent number: 10809364
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment by following a scan pattern comprising component scan lines. The method includes obtaining, based on the sensor data, a point cloud frame. Additionally, the method includes identifying a first pixel and a second pixel that are co-located within a field of regard and overlap a point cloud object within the point cloud frame and calculating a difference between a depth associated with the first pixel and a depth associated with the second pixel. The method includes determining a relative velocity of the point cloud object by dividing the difference in depth data by a time difference between when the depth associated with the first pixel was sensed and the depth associated with the second pixel was sensed.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: October 20, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Tomi P. Maila, Pranav Maheshwari, Benjamin Englard
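The core arithmetic of the abstract above is a depth difference divided by a capture-time difference. A direct sketch (parameter names are illustrative):

```python
def relative_velocity(depth1_m, t1_s, depth2_m, t2_s):
    """Relative velocity from two co-located pixels on the same object,
    sensed at different times: depth difference over time difference.
    Negative means the object is closing."""
    return (depth2_m - depth1_m) / (t2_s - t1_s)

# An object 0.4 m closer after 0.05 s is closing at about 8 m/s.
print(relative_velocity(20.0, 0.00, 19.6, 0.05))
```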
  • Patent number: 10768304
    Abstract: A method for processing point clouds having variable spatial distributions of scan lines includes receiving a point cloud frame generated by a sensor configured to sense an environment through which a vehicle is moving. The point cloud frame includes scan lines arranged according to a particular spatial distribution. The method also includes either generating an enhanced point cloud frame with a larger number of points than the received point cloud frame, or constructing, by one or more processors and based on points of the received point cloud frame, a three-dimensional mesh. The method also includes generating, by performing an interpolation function on the enhanced point cloud frame or a virtual surface provided by the three-dimensional mesh, a normalized point cloud frame, and generating, using the normalized point cloud frame, signals descriptive of a current state of the environment through which the vehicle is moving.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: September 8, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Benjamin Englard, Eric C. Danziger, Austin K. Russell
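The normalization idea in the abstract above, resampling an irregular scan-line distribution onto a uniform one, can be sketched with 1-D linear interpolation. The elevation/depth framing and all values are assumptions; the patent's interpolation operates on enhanced point clouds or mesh surfaces:

```python
def normalize_rows(elevations, depths, uniform_elevations):
    """Resample depths measured at irregular scan-line elevations onto a
    uniform elevation grid by linear interpolation."""
    out = []
    for e in uniform_elevations:
        # find the bracketing scan lines and interpolate between them
        for i in range(len(elevations) - 1):
            lo, hi = elevations[i], elevations[i + 1]
            if lo <= e <= hi:
                frac = (e - lo) / (hi - lo)
                out.append(depths[i] + frac * (depths[i + 1] - depths[i]))
                break
    return out

# Scan lines at elevations 0, 1, 4 resampled to a uniform 0, 2, 4 grid.
print(normalize_rows([0.0, 1.0, 4.0], [10.0, 12.0, 18.0], [0.0, 2.0, 4.0]))
```

Downstream components can then assume a fixed spatial distribution regardless of how the sensor's scan lines were actually arranged.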
  • Patent number: 10754037
    Abstract: A method for processing point clouds having variable spatial distributions of scan lines includes receiving a point cloud portion corresponding to an object in a vehicle environment, the point cloud portion including scan lines arranged according to a particular spatial distribution. The method also includes constructing a voxel grid corresponding to the received point cloud portion. The voxel grid includes a plurality of volumes in a stacked, three-dimensional arrangement, and constructing the voxel grid includes (i) determining an initial classification of the object, (ii) setting one or more parameters of the voxel grid based on the initial classification, and (iii) associating each volume of the plurality of volumes with an attribute specifying how many points, from the point cloud portion, fall within that volume. The method also includes generating, using the constructed voxel grid, signals descriptive of a current state of the environment through which the vehicle is moving.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: August 25, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Benjamin Englard, Eric C. Danziger
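The voxel-grid construction in the abstract above can be sketched as follows: a grid parameter (here, voxel size) is chosen from the initial classification, and each voxel stores a count of the points falling inside it. The class-to-size table and all values are invented for illustration:

```python
from collections import Counter

VOXEL_SIZE_BY_CLASS = {"car": 0.5, "pedestrian": 0.2}   # hypothetical

def build_voxel_grid(points, initial_class):
    """Bin (x, y, z) points into voxels whose size depends on the
    object's initial classification; each voxel holds a point count."""
    size = VOXEL_SIZE_BY_CLASS[initial_class]
    grid = Counter()
    for x, y, z in points:
        grid[(int(x // size), int(y // size), int(z // size))] += 1
    return grid

pts = [(0.1, 0.1, 0.1), (0.3, 0.1, 0.4), (0.9, 0.1, 0.1)]
g = build_voxel_grid(pts, "car")
print(g[(0, 0, 0)], g[(1, 0, 0)])
```

Sizing the grid per class keeps a fixed number of voxels meaningful whether the object is a small pedestrian or a large vehicle.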
  • Patent number: 10627521
    Abstract: A method for controlling at least a first vehicle sensor includes receiving sensor data generated by one or more vehicle sensors that are configured to sense an environment through which the vehicle is moving, and identifying, based on the received sensor data, one or more current and/or predicted positions of one or more dynamic objects that are currently moving, or are capable of movement, within the environment. The method also includes causing, based on the current and/or predicted positions of the dynamic objects, an area of focus of the first sensor to be adjusted, at least by causing (i) a field of regard of the first sensor, and/or (ii) a spatial distribution of scan lines produced by the first sensor, to be adjusted.
    Type: Grant
    Filed: October 31, 2018
    Date of Patent: April 21, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Benjamin Englard, Eric C. Danziger, Austin K. Russell
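The field-of-regard adjustment in the abstract above can be sketched as computing an azimuth window that covers the predicted positions of the dynamic objects. The window representation and margin are assumptions, not from the patent:

```python
def focus_field_of_regard(predicted_azimuths_deg, margin_deg=5.0):
    """Return a (min, max) azimuth window, in degrees, that covers all
    predicted dynamic-object positions plus a safety margin; the sensor's
    field of regard would then be narrowed to this window."""
    lo = min(predicted_azimuths_deg) - margin_deg
    hi = max(predicted_azimuths_deg) + margin_deg
    return lo, hi

# Dynamic objects predicted at -10, 3, and 12 degrees of azimuth.
print(focus_field_of_regard([-10.0, 3.0, 12.0]))
```

Concentrating the scan lines inside such a window trades unused coverage for higher resolution where moving objects actually are.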
  • Patent number: 10606270
    Abstract: A computer-readable medium stores instructions executable by one or more processors to implement a self-driving control architecture for controlling an autonomous vehicle. A perception and prediction component receives sensor data, and generates (1) an observed occupancy grid indicating which cells are currently occupied in a two-dimensional representation of the environment, and (2) predicted occupancy grids indicating which cells are expected to be occupied later. A mapping component provides navigation data for guiding the vehicle toward a destination, and a cost map generation component is configured to generate, based on the observed occupancy grid, the predicted occupancy grid(s), and the navigation data, cost maps that each specify numerical values representing a cost, at a respective instance of time, of occupying certain cells in a two-dimensional representation of the environment.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: March 31, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Benjamin Englard, Gauri Gandhi, Pranav Maheshwari
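The cost-map generation in the abstract above combines observed occupancy, predicted occupancy, and navigation data per cell. A minimal sketch, with the weights and grid encodings invented for illustration:

```python
def cost_map(observed, predicted, nav_penalty, w_obs=100.0, w_pred=50.0):
    """Per-cell cost: currently occupied cells cost the most, cells
    predicted to be occupied later cost less, and navigation data adds a
    preference penalty (e.g., for drifting off-route)."""
    rows, cols = len(observed), len(observed[0])
    return [[w_obs * observed[r][c] + w_pred * predicted[r][c]
             + nav_penalty[r][c] for c in range(cols)] for r in range(rows)]

observed  = [[0, 1], [0, 0]]   # cell (0,1) occupied now
predicted = [[0, 0], [1, 0]]   # cell (1,0) expected occupied later
nav       = [[0, 0], [0, 2]]   # mild penalty for leaving the route
print(cost_map(observed, predicted, nav))
```

A motion planner can then choose trajectories that minimize accumulated cost across the sequence of per-time-step maps.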
  • Publication number: 20200097010
    Abstract: Various software techniques for managing operation of autonomous vehicles based on sensor data are disclosed herein. A computing system may generate, based on a set of signals descriptive of a current state of an environment in which the autonomous vehicle is operating, a normal path plan separate from a safe path plan, or a hybrid path plan including a normal path plan and a safe path plan. In generating the safe path plan, the computing system may generate and concatenate a set of motion primitives. When a fault condition occurs, the computing device may transition from executing the normal path plan to executing the safe path plan to safely stop the autonomous vehicle.
    Type: Application
    Filed: September 21, 2018
    Publication date: March 26, 2020
    Inventors: Tomi P. Maila, Vahid R. Ramezani, Benjamin Englard
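The concatenation of motion primitives described in the abstract above can be sketched as follows. The primitives and their contents are entirely invented; the point is only the structure: a normal plan, a safe plan built by chaining stopping primitives, and a switch on a fault condition:

```python
# Hypothetical motion primitives: each is a short list of control commands.
PRIMITIVES = {
    "hold_lane": [("steer", 0.0), ("throttle", 0.2)],
    "slow": [("steer", 0.0), ("brake", 0.3)],
    "stop": [("steer", 0.0), ("brake", 1.0)],
}

def plan(fault_detected):
    """On a fault, transition from the normal path plan to a safe path
    plan formed by concatenating stopping primitives."""
    if fault_detected:
        return PRIMITIVES["slow"] + PRIMITIVES["stop"]
    return PRIMITIVES["hold_lane"]

print(plan(fault_detected=True))
```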
  • Publication number: 20200074233
    Abstract: Automated training dataset generators that generate feature training datasets for use in real-world autonomous driving applications based on virtual environments are disclosed herein. The feature training datasets may be associated with training a machine learning model to control real-world autonomous vehicles. In some embodiments, an occupancy grid generator is used to generate an occupancy grid indicative of an environment of an autonomous vehicle from an imaging scene that depicts the environment. The occupancy grid is used to control the vehicle as the vehicle moves through the environment. In further embodiments, a sensor parameter optimizer may determine parameter settings for use by real-world sensors in autonomous driving applications.
    Type: Application
    Filed: September 4, 2019
    Publication date: March 5, 2020
    Inventors: Benjamin Englard, Miguel Alexander Peake
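The occupancy-grid generation step in the abstract above can be sketched as deriving a grid label from a virtual scene, one sample per simulated frame. The scene encoding here is a placeholder, not the patent's imaging-scene representation:

```python
def occupancy_grid_from_scene(object_cells, rows, cols):
    """Build a binary occupancy grid from the cells covered by simulated
    objects in a virtual scene; paired with the rendered scene, this
    forms one training sample."""
    grid = [[0] * cols for _ in range(rows)]
    for r, c in object_cells:
        grid[r][c] = 1
    return grid

scene = [(0, 1), (2, 2)]             # simulated obstacle footprints
print(occupancy_grid_from_scene(scene, rows=3, cols=3))
```

Generating such labeled grids in simulation sidesteps the cost of hand-annotating real-world sensor data.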
  • Publication number: 20200074266
    Abstract: Automated training dataset generators that generate feature training datasets for use in real-world autonomous driving applications based on virtual environments are disclosed herein. The feature training datasets may be associated with training a machine learning model to control real-world autonomous vehicles. In some embodiments, an occupancy grid generator is used to generate an occupancy grid indicative of an environment of an autonomous vehicle from an imaging scene that depicts the environment. The occupancy grid is used to control the vehicle as the vehicle moves through the environment. In further embodiments, a sensor parameter optimizer may determine parameter settings for use by real-world sensors in autonomous driving applications.
    Type: Application
    Filed: September 4, 2019
    Publication date: March 5, 2020
    Inventors: Miguel Alexander Peake, Benjamin Englard
  • Publication number: 20200074230
    Abstract: Automated training dataset generators that generate feature training datasets for use in real-world autonomous driving applications based on virtual environments are disclosed herein. The feature training datasets may be associated with training a machine learning model to control real-world autonomous vehicles. In some embodiments, an occupancy grid generator is used to generate an occupancy grid indicative of an environment of an autonomous vehicle from an imaging scene that depicts the environment. The occupancy grid is used to control the vehicle as the vehicle moves through the environment. In further embodiments, a sensor parameter optimizer may determine parameter settings for use by real-world sensors in autonomous driving applications.
    Type: Application
    Filed: September 4, 2019
    Publication date: March 5, 2020
    Inventors: Benjamin Englard, Miguel Alexander Peake
  • Publication number: 20200043176
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment by following a scan pattern comprising component scan lines. The method includes obtaining, based on the sensor data, a point cloud frame. Additionally, the method includes identifying a first pixel and a second pixel that are co-located within a field of regard and overlap a point cloud object within the point cloud frame and calculating a difference between a depth associated with the first pixel and a depth associated with the second pixel. The method includes determining a relative velocity of the point cloud object by dividing the difference in depth data by a time difference between when the depth associated with the first pixel was sensed and the depth associated with the second pixel was sensed.
    Type: Application
    Filed: November 20, 2018
    Publication date: February 6, 2020
    Inventors: Tomi P. Maila, Pranav Maheshwari, Benjamin Englard
  • Publication number: 20200041648
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle. The one or more sensors are configured to sense an environment through which the vehicle is moving by following a scan pattern comprising component scan lines. The method includes obtaining, by one or more processors, a point cloud frame based on the sensor data and representative of the environment and identifying, by the one or more processors, a point cloud object within the point cloud frame. The method further includes determining, by the one or more processors, that the point cloud object is skewed relative to an expected configuration of the point cloud object, and determining, by the one or more processors, a relative velocity of the point cloud object by analyzing the skew of the object.
    Type: Application
    Filed: November 20, 2018
    Publication date: February 6, 2020
    Inventors: Eric C. Danziger, Austin K. Russell, Benjamin Englard
  • Publication number: 20200041619
    Abstract: A computer-implemented method of determining a relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment following a scan pattern. The method also includes obtaining, based on the sensor data, a point cloud frame. The point cloud frame comprises a plurality of points of depth data and a time at which the depth data was captured. Additionally, the method includes selecting two or more points of the scan pattern that overlap the object. The selected points are located on or near a two-dimensional surface corresponding to the object, and the depth data for two or more of the selected points are captured at different times. The method includes calculating the relative velocity between the vehicle and the object based on the depth data and capture times associated with the selected points.
    Type: Application
    Filed: November 20, 2018
    Publication date: February 6, 2020
    Inventors: Pranav Maheshwari, Tomi P. Maila, Benjamin Englard