Patents by Inventor Pranav Maheshwari

Pranav Maheshwari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334816
    Abstract: A system for detecting boundaries of lanes on a road is presented. The system includes an imaging system configured to produce a set of pixels associated with lane markings on a road. The system also includes one or more processors configured to detect boundaries of lanes on the road, including: receive, from the imaging system, the set of pixels associated with lane markings; partition the set of pixels into a plurality of groups, each of the plurality of groups associated with one or more control points; and generate a first spline that traverses the control points of the plurality of groups, the first spline describing a boundary of a lane on the road.
    Type: Application
    Filed: June 23, 2023
    Publication date: October 19, 2023
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Benjamin Englard
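
The spline-based boundary generation described in the entry above can be pictured with a brief sketch: lane-marking pixels are partitioned into groups, each group contributes a control point, and a spline is generated through the control points. This is a minimal illustration in Python, assuming bird's-eye-view (x, y) points for a single boundary and centroid control points; the function names and grouping heuristic are assumptions, not the patented implementation.

    # Minimal sketch: cluster lane-marking pixels into groups, take each group's
    # centroid as a control point, and fit a spline through the control points to
    # describe one lane boundary. Assumes bird's-eye (x, y) points, x = forward.
    import numpy as np
    from scipy.interpolate import CubicSpline

    def lane_boundary_spline(pixels, n_groups=5):
        """pixels: (N, 2) array of lane-marking points for a single boundary."""
        pixels = pixels[np.argsort(pixels[:, 0])]              # order along the road
        groups = np.array_split(pixels, n_groups)              # partition into groups
        control = np.array([g.mean(axis=0) for g in groups])   # one control point per group
        return CubicSpline(control[:, 0], control[:, 1])       # spline through the control points

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        x = rng.uniform(0.0, 50.0, 200)
        y = 0.02 * x**2 + rng.normal(0.0, 0.1, x.size)         # noisy, gently curving marking
        spline = lane_boundary_spline(np.column_stack([x, y]))
        print(spline(25.0))                                    # lateral offset of the boundary at x = 25
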
  • Publication number: 20230251656
    Abstract: First training sensor data detected by a plurality of real-world sensors are obtained. The first training sensor data is associated with physical environment conditions. Second training sensor data detected by a plurality of virtual sensors are obtained. The second training sensor data is associated with simulated physical conditions of a virtual environment. A machine learning model is trained using both real-world and virtual training datasets including the first training sensor data, the second training sensor data, and respective sensor setting parameters of the plurality of real-world sensors and the plurality of virtual sensors. The real-world and virtual training datasets used to train the machine learning model include indications associated with the respective sensor parameter settings including one or more of the following: different scan line settings or different exposure settings.
    Type: Application
    Filed: April 12, 2023
    Publication date: August 10, 2023
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
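
As a rough illustration of combining real and simulated training data carrying sensor setting indications, the sketch below stacks the two sources into one dataset, appends each sample's scan-line and exposure settings as features, and fits a placeholder linear model. All shapes, values, and the linear model are assumptions; the actual training procedure is not described here.

    # Illustrative only: stack real and simulated samples into one training set,
    # appending each sample's sensor settings (scan lines, exposure) and a
    # real/simulated flag as extra features, then fit a placeholder linear model.
    import numpy as np

    def build_training_set(real_x, real_settings, sim_x, sim_settings):
        """Each row: measurements, sensor setting parameters, source flag (1 = real)."""
        real = np.hstack([real_x, real_settings, np.ones((len(real_x), 1))])
        sim = np.hstack([sim_x, sim_settings, np.zeros((len(sim_x), 1))])
        return np.vstack([real, sim])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        real_x = rng.normal(size=(100, 4))               # real-world sensor measurements
        sim_x = rng.normal(size=(200, 4))                # virtual-sensor measurements
        real_cfg = np.tile([64, 1.0], (100, 1))          # scan-line and exposure settings (real)
        sim_cfg = np.tile([128, 0.5], (200, 1))          # scan-line and exposure settings (simulated)
        X = build_training_set(real_x, real_cfg, sim_x, sim_cfg)
        y = X[:, :4].sum(axis=1)                         # stand-in training target
        weights, *_ = np.linalg.lstsq(X, y, rcond=None)  # placeholder "model" fit
        print(weights.round(2))
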
  • Patent number: 11688155
    Abstract: A method for detecting boundaries of lanes on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes partitioning, by the one or more processors, the set of pixels into a plurality of groups. Each of the plurality of groups is associated with one or more control points. The method further includes generating, by the one or more processors, a spline that traverses the control points of the plurality of groups. The spline traversing the control points describes a boundary of a lane.
    Type: Grant
    Filed: July 8, 2020
    Date of Patent: June 27, 2023
    Assignee: Luminar, LLC
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Benjamin Englard
  • Patent number: 11656620
    Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: May 23, 2023
    Assignee: Luminar, LLC
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
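
The control flow summarized above (sensor data into a trained model, environment parameters into a motion planner, planner decisions into vehicle maneuvers) can be sketched as a simple pipeline. The classes and the toy decision rule below are illustrative placeholders, not the patented components.

    # Toy pipeline: sensor data -> trained model -> environment parameters ->
    # motion planner -> vehicle maneuver. The classes are placeholders.
    class EnvironmentModel:
        def predict(self, sensor_data):
            # Stand-in for the trained ML model: summarize readings into parameters.
            return {"nearest_obstacle_m": min(sensor_data)}

    class MotionPlanner:
        def decide(self, env_params):
            # Stand-in planning rule: brake when an obstacle is close, else cruise.
            return "brake" if env_params["nearest_obstacle_m"] < 10.0 else "cruise"

    class Vehicle:
        def maneuver(self, decision):
            print(f"executing: {decision}")

    def control_step(sensor_data, model, planner, vehicle):
        env_params = model.predict(sensor_data)    # sensor data -> environment parameters
        decision = planner.decide(env_params)      # parameters -> driving decision
        vehicle.maneuver(decision)                 # decision -> maneuver

    if __name__ == "__main__":
        control_step([42.0, 7.5, 90.0], EnvironmentModel(), MotionPlanner(), Vehicle())
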
  • Publication number: 20230152466
    Abstract: An imaging system is described for generating an estimate of the virtual horizon for a moving vehicle. The estimate of the virtual horizon can correspond to lower and higher boundaries of a region within the field of regard, such that the virtual horizon is between the lower and the higher boundaries. In cases where determination of the virtual horizon may be unreliable due to traffic, weather, or other road conditions that obscure visibility in front of the vehicle, the imaging system may switch to a static vertical scan density pattern having a broad central focus, which can mitigate the possibility that the system focuses on an incorrect virtual horizon and fails to capture significant objects or conditions in the roadway.
    Type: Application
    Filed: November 15, 2022
    Publication date: May 18, 2023
    Inventors: Dominik Nuss, Pranav Maheshwari, Shubham C. Khilari, Flavian Pegado, Benjamin Englard, Manuel Birke
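
One way to picture the behavior described above is sketched below: estimate a band of rows likely to contain the virtual horizon, and fall back to a static, broadly centered vertical scan density when the estimate looks unreliable. The row-count heuristic, thresholds, and Gaussian density are assumptions made for illustration only.

    # Assumed heuristic: per-row counts of long-range returns locate the horizon;
    # too few returns triggers the static, broadly centered fallback density.
    import numpy as np

    def horizon_band(row_counts, min_total=50):
        """Return (lower_row, upper_row) bounding the likely horizon, or None if unreliable."""
        if row_counts.sum() < min_total:               # obscured view: estimate unreliable
            return None
        peak = int(np.argmax(row_counts))              # row with the most long-range returns
        return max(peak - 5, 0), min(peak + 5, len(row_counts) - 1)

    def scan_density(n_rows, band):
        if band is None:                               # static fallback with a broad central focus
            center, width = n_rows / 2, n_rows / 4
        else:                                          # focus between the lower and higher boundaries
            center, width = (band[0] + band[1]) / 2, (band[1] - band[0]) / 2
        rows = np.arange(n_rows)
        density = np.exp(-0.5 * ((rows - center) / width) ** 2)
        return density / density.sum()                 # fraction of scan lines assigned to each row

    if __name__ == "__main__":
        clear = np.zeros(64)
        clear[30:34] = 40                              # long-range returns cluster near row 32
        print(scan_density(64, horizon_band(clear)).argmax())          # densest at the detected band
        print(scan_density(64, horizon_band(np.zeros(64))).argmax())   # fallback: densest near the center
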
  • Patent number: 11551547
    Abstract: A method for tracking a lane on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes generating, by the one or more processors, a predicted spline comprising (i) a first spline and (ii) a predicted extension of the first spline in a direction in which the imaging system is moving. The first spline describes a boundary of a lane and is generated based on the set of pixels. The predicted extension of the first spline is generated based at least in part on a curvature of at least a portion of the first spline.
    Type: Grant
    Filed: July 9, 2020
    Date of Patent: January 10, 2023
    Assignee: Luminar, LLC
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Shubham C. Khilari, Rounak Mehta
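
The predicted extension described above can be illustrated by estimating curvature near the far end of the first spline and continuing it along a circular arc in the direction of travel. The three-point curvature estimate, step sizes, and example boundary below are assumptions, not the patented method.

    # Assumed approach: estimate curvature from the last three points of the
    # boundary and extend it along a circular arc in the direction of travel.
    import numpy as np

    def curvature(p1, p2, p3):
        a, b, c = np.linalg.norm(p2 - p1), np.linalg.norm(p3 - p2), np.linalg.norm(p3 - p1)
        signed_area = 0.5 * ((p2[0] - p1[0]) * (p3[1] - p1[1]) - (p2[1] - p1[1]) * (p3[0] - p1[0]))
        return 4.0 * signed_area / (a * b * c)         # signed 1/radius of the circumscribed circle

    def extend_boundary(points, length=20.0, step=1.0):
        k = curvature(*points[-3:])                    # curvature near the far end of the first spline
        d = points[-1] - points[-2]
        heading = np.arctan2(d[1], d[0])               # direction in which the boundary is heading
        pos, extension = points[-1].copy(), []
        for _ in range(int(length / step)):            # march along a circular arc of curvature k
            heading += k * step
            pos = pos + step * np.array([np.cos(heading), np.sin(heading)])
            extension.append(pos)
        return np.vstack([points, np.array(extension)])  # first spline + predicted extension

    if __name__ == "__main__":
        x = np.linspace(0.0, 30.0, 31)
        boundary = np.column_stack([x, 0.01 * x**2])   # gently curving boundary points
        predicted = extend_boundary(boundary)
        print(predicted[-1].round(1))                  # far end of the predicted extension
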
  • Publication number: 20220309685
    Abstract: A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images.
    Type: Application
    Filed: June 13, 2022
    Publication date: September 29, 2022
    Inventors: Vahid R. Ramezani, Akshay Rangesh, Benjamin Englard, Siddhesh S. Mhatre, Meseret R. Gebre, Pranav Maheshwari
  • Publication number: 20220187463
    Abstract: A method for determining a scan pattern according to which a sensor equipped with a scanner scans a field of regard (FOR) is presented. The method comprises obtaining, by processing hardware, a plurality of objective functions, each of the objective functions specifying a cost for a respective property of the scan pattern, expressed in terms of one or more operational parameters of the scanner. The method further includes applying, by the processing hardware, an optimization scheme to the plurality of objective functions to generate the scan pattern. The method further includes scanning the FOR according to the generated scan pattern.
    Type: Application
    Filed: December 14, 2020
    Publication date: June 16, 2022
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Benjamin Englard, István Peter Burbank, Shubham C. Khilari, Meseret R. Gebre, Austin K. Russell
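
A compact way to illustrate the entry above: express each desired property of the scan pattern as a cost function of the scanner's operational parameters, weight and sum the costs, and hand the result to a generic optimizer. The two parameters, cost terms, and weights below are invented for illustration.

    # Invented parameters and costs: scan lines per frame and frame rate, with
    # weighted costs for resolution, latency, and a points-per-second budget.
    import numpy as np
    from scipy.optimize import minimize

    def total_cost(params, weights=(1.0, 1.0, 0.5)):
        lines, frame_rate = params
        resolution_cost = 1.0 / lines                  # prefer more scan lines
        latency_cost = 1.0 / frame_rate                # prefer a higher frame rate
        budget_cost = (lines * frame_rate) / 6400.0    # stay within a points-per-second budget
        return np.dot(weights, [resolution_cost, latency_cost, budget_cost])

    if __name__ == "__main__":
        result = minimize(total_cost, x0=[32.0, 10.0],
                          bounds=[(8, 256), (1, 30)])  # feasible scanner settings
        lines, frame_rate = result.x
        print(f"scan lines per frame: {lines:.0f}, frame rate: {frame_rate:.1f} Hz")
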
  • Patent number: 11361449
    Abstract: A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images. The method also includes tracking multiple features through the sequence of images, including passing messages in a forward direction and a backward direction through the message passing graph to share information across time.
    Type: Grant
    Filed: September 4, 2020
    Date of Patent: June 14, 2022
    Assignee: Luminar, LLC
    Inventors: Vahid R. Ramezani, Akshay Rangesh, Benjamin Englard, Siddhesh S. Mhatre, Meseret R. Gebre, Pranav Maheshwari
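
The layered message passing graph described above can be sketched as one layer of node features per image, with information shared first forward and then backward through the layers. The mean-of-neighbors update below is a toy stand-in for the learned message functions, not the patented tracker.

    # Toy layered graph: one layer of detection features per image, messages
    # passed forward then backward; the mean-of-neighbors update is a placeholder.
    import numpy as np

    def build_layers(detections_per_frame):
        """Each layer holds one feature vector per detection in that frame."""
        return [np.asarray(frame, dtype=float) for frame in detections_per_frame]

    def pass_messages(layers):
        for t in range(1, len(layers)):                # forward pass: earlier layers inform later ones
            layers[t] = layers[t] + layers[t - 1].mean(axis=0)
        for t in range(len(layers) - 2, -1, -1):       # backward pass: later layers inform earlier ones
            layers[t] = layers[t] + layers[t + 1].mean(axis=0)
        return layers

    if __name__ == "__main__":
        frames = [[[0.0, 1.0], [2.0, 0.0]],            # frame 0: two detections
                  [[0.1, 1.1]],                        # frame 1: one detection
                  [[2.1, 0.2], [0.2, 1.2]]]            # frame 2: two detections
        for t, layer in enumerate(pass_messages(build_layers(frames))):
            print(f"frame {t}:\n{layer.round(2)}")
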
  • Publication number: 20220107414
    Abstract: A scanning imaging sensor is configured to sense an environment through which a vehicle is moving. A method for determining one or more velocities associated with objects in the environment includes generating features from a first set of scan lines and a second set of scan lines, the two sets corresponding to two instances in time. The method further includes generating a collection of candidate velocities based on feature locations and time differences, the features selected pairwise with one from the first set and another from the second set. Furthermore, the method includes analyzing the distribution of candidate velocities, for example, by identifying one or more modes from the collection of candidate velocities.
    Type: Application
    Filed: October 7, 2020
    Publication date: April 7, 2022
    Inventors: Pranav Maheshwari, Meseret R. Gebre, Shubham C. Khilari, Vahid R. Ramezani
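
The candidate-velocity idea above can be illustrated in one dimension: pair features from the two sets of scan lines, convert each pair into a displacement divided by the time difference, and pick a mode of the resulting distribution. The histogram-based mode finding and sample values are assumptions.

    # One-dimensional illustration: pairwise candidates = displacement / time
    # difference, with the dominant histogram bin taken as the velocity mode.
    import numpy as np

    def candidate_velocities(first, t_first, second, t_second):
        """All pairwise (second - first) / (t_second - t_first) candidates."""
        return ((second[None, :] - first[:, None]) / (t_second - t_first)).ravel()

    def dominant_mode(candidates, bin_width=0.5):
        bins = np.arange(candidates.min(), candidates.max() + bin_width, bin_width)
        hist, edges = np.histogram(candidates, bins=bins)
        i = int(np.argmax(hist))                       # most populated velocity bin
        return 0.5 * (edges[i] + edges[i + 1])

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        first = rng.uniform(0.0, 10.0, 8)              # feature positions from the first set, t = 0.0 s
        second = first + 2.0 * 0.1                     # same features 0.1 s later, object moving at 2 m/s
        cands = candidate_velocities(first, 0.0, second, 0.1)
        print(f"estimated velocity: {dominant_mode(cands):.1f} m/s")   # close to 2 m/s
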
  • Publication number: 20220076432
    Abstract: A method for multi-object tracking includes receiving a sequence of images generated at respective times by one or more sensors configured to sense an environment through which objects are moving relative to the one or more sensors, and constructing a message passing graph in which each of a multiplicity of layers corresponds to a respective one in the sequence of images.
    Type: Application
    Filed: September 4, 2020
    Publication date: March 10, 2022
    Inventors: Vahid R. Ramezani, Akshay Rangesh, Benjamin Englard, Siddhesh S. Mhatre, Meseret R. Gebre, Pranav Maheshwari
  • Publication number: 20210209941
    Abstract: A method for detecting boundaries of lanes on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes partitioning, by the one or more processors, the set of pixels into a plurality of groups. Each of the plurality of groups is associated with one or more control points. The method further includes generating, by the one or more processors, a spline that traverses the control points of the plurality of groups. The spline traversing the control points describes a boundary of a lane.
    Type: Application
    Filed: July 8, 2020
    Publication date: July 8, 2021
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Benjamin Englard
  • Publication number: 20210206380
    Abstract: A method for tracking a lane on a road is presented. The method comprises receiving, by one or more processors from an imaging system, a set of pixels associated with lane markings. The method further includes generating, by the one or more processors, a predicted spline comprising (i) a first spline and (ii) a predicted extension of the first spline in a direction in which the imaging system is moving. The first spline describes a boundary of a lane and is generated based on the set of pixels. The predicted extension of the first spline is generated based at least in part on a curvature of at least a portion of the first spline.
    Type: Application
    Filed: July 9, 2020
    Publication date: July 8, 2021
    Inventors: Pranav Maheshwari, Vahid R. Ramezani, Ismail El Houcheimi, Shubham C. Khilari, Rounak Mehta
  • Patent number: 10809364
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment by following a scan pattern comprising component scan lines. The method includes obtaining, based on the sensor data, a point cloud frame. Additionally, the method includes identifying a first pixel and a second pixel that are co-located within a field of regard and overlap a point cloud object within the point cloud frame and calculating a difference between a depth associated with the first pixel and a depth associated with the second pixel. The method includes determining a relative velocity of the point cloud object by dividing the difference in depth data by a time difference between when the depth associated with the first pixel was sensed and the depth associated with the second pixel was sensed.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: October 20, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Tomi P. Maila, Pranav Maheshwari, Benjamin Englard
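
The core computation above reduces to dividing a depth difference by a time difference for two co-located pixels captured on different scan lines. A minimal sketch, with invented pixel values:

    # Invented pixel values; the computation is depth difference / time difference.
    def relative_velocity(depth_a_m, time_a_s, depth_b_m, time_b_s):
        """Positive result: object moving away from the sensor; negative: closing."""
        return (depth_b_m - depth_a_m) / (time_b_s - time_a_s)

    if __name__ == "__main__":
        # Co-located pixels from two scan lines: 40.00 m at t = 0.000 s,
        # then 39.85 m at t = 0.050 s for the same point cloud object.
        v = relative_velocity(40.00, 0.000, 39.85, 0.050)
        print(f"relative velocity: {v:.1f} m/s")       # -3.0 m/s (closing)
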
  • Publication number: 20200209858
    Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
    Type: Application
    Filed: March 6, 2019
    Publication date: July 2, 2020
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
  • Patent number: 10606270
    Abstract: A computer-readable medium stores instructions executable by one or more processors to implement a self-driving control architecture for controlling an autonomous vehicle. A perception and prediction component receives sensor data, and generates (1) an observed occupancy grid indicating which cells are currently occupied in a two-dimensional representation of the environment, and (2) predicted occupancy grids indicating which cells are expected to be occupied later. A mapping component provides navigation data for guiding the vehicle toward a destination, and a cost map generation component is configured to generate, based on the observed occupancy grid, the predicted occupancy grid(s), and the navigation data, cost maps that each specify numerical values representing a cost, at a respective instance of time, of occupying certain cells in a two-dimensional representation of the environment.
    Type: Grant
    Filed: October 2, 2018
    Date of Patent: March 31, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Benjamin Englard, Gauri Gandhi, Pranav Maheshwari
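
The cost-map generation above can be pictured as combining, per future time step, the observed occupancy grid, that step's predicted occupancy grid, and a navigation term that penalizes distance from the destination. Grid sizes, weights, and the distance term in the sketch are assumptions.

    # Assumed combination: weighted sum of observed occupancy, predicted occupancy
    # for each future step, and distance to a navigation goal cell.
    import numpy as np

    def cost_maps(observed, predicted, goal_cell, w_occ=10.0, w_nav=0.1):
        rows, cols = observed.shape
        rr, cc = np.meshgrid(np.arange(rows), np.arange(cols), indexing="ij")
        nav = np.hypot(rr - goal_cell[0], cc - goal_cell[1])      # cost of being far from the goal
        return [w_occ * (observed + pred) + w_nav * nav for pred in predicted]

    if __name__ == "__main__":
        observed = np.zeros((5, 5))
        observed[2, 2] = 1.0                                      # cell currently occupied
        predicted = [np.zeros((5, 5)) for _ in range(3)]
        predicted[1][2, 3] = 1.0                                  # cell expected to be occupied later
        maps = cost_maps(observed, predicted, goal_cell=(0, 4))
        print(maps[1].round(1))                                   # cost map for the second future step
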
  • Publication number: 20200041619
    Abstract: A computer-implemented method of determining a relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment following a scan pattern. The method also includes obtaining, based on the sensor data, a point cloud frame. The point cloud frame comprises a plurality of points of depth data and a time at which the depth data was captured. Additionally, the method includes selecting two or more points of the scan pattern that overlap the object. The selected points are located on or near a two-dimensional surface corresponding to the object, and the depth data for two or more of the selected points are captured at different times. The method includes calculating the relative velocity between the vehicle and the object based on the depth data and capture times associated with the selected points.
    Type: Application
    Filed: November 20, 2018
    Publication date: February 6, 2020
    Inventors: Pranav Maheshwari, Tomi P. Maila, Benjamin Englard
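
Since the selected points carry depths captured at different times on the same object surface, one simple illustration is to fit depth against capture time and read the relative velocity from the slope. The least-squares fit and the sample values below are assumptions, not the claimed computation.

    # Invented sample values; relative velocity is the slope of depth versus time.
    import numpy as np

    def relative_velocity(depths_m, times_s):
        slope, _intercept = np.polyfit(times_s, depths_m, deg=1)  # d(depth)/d(time)
        return slope

    if __name__ == "__main__":
        times = np.array([0.00, 0.02, 0.04, 0.06])     # capture times of the selected points
        depths = np.array([50.0, 49.9, 49.8, 49.7])    # depths on the object's surface
        print(f"relative velocity: {relative_velocity(depths, times):.1f} m/s")   # -5.0 m/s
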
  • Publication number: 20200043176
    Abstract: A computer-implemented method of determining relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment by following a scan pattern comprising component scan lines. The method includes obtaining, based on the sensor data, a point cloud frame. Additionally, the method includes identifying a first pixel and a second pixel that are co-located within a field of regard and overlap a point cloud object within the point cloud frame and calculating a difference between a depth associated with the first pixel and a depth associated with the second pixel. The method includes determining a relative velocity of the point cloud object by dividing the difference in depth data by a time difference between when the depth associated with the first pixel was sensed and the depth associated with the second pixel was sensed.
    Type: Application
    Filed: November 20, 2018
    Publication date: February 6, 2020
    Inventors: Tomi P. Maila, Pranav Maheshwari, Benjamin Englard
  • Patent number: 10551485
    Abstract: A computer-implemented method of determining a relative velocity between a vehicle and an object. The method includes receiving sensor data generated by one or more sensors of the vehicle configured to sense an environment following a scan pattern. The method also includes obtaining, based on the sensor data, a point cloud frame. The point cloud frame comprises a plurality of points of depth data and a time at which the depth data was captured. Additionally, the method includes selecting two or more points of the scan pattern that overlap the object. The selected points are located on or near a two-dimensional surface corresponding to the object, and the depth data for two or more of the selected points are captured at different times. The method includes calculating the relative velocity between the vehicle and the object based on the depth data and capture times associated with the selected points.
    Type: Grant
    Filed: November 20, 2018
    Date of Patent: February 4, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Pranav Maheshwari, Tomi P. Maila, Benjamin Englard
  • Publication number: 20190113920
    Abstract: A computer-readable medium stores instructions executable by one or more processors to implement a self-driving control architecture for controlling an autonomous vehicle. A perception component receives sensor data and generates signals descriptive of a current state of the environment. Based on those signals, a prediction component generates signals descriptive of one or more predicted future environment states. A motion planner generates decisions for maneuvering the vehicle toward a destination, at least by using the signals descriptive of the current and future environment states to set values of one or more independent variables in an objective equation. The objective equation includes terms corresponding to different driving objectives over a finite time horizon. Values of one or more dependent variables in the objective equation are determined by solving the equation subject to a set of constraints, and values of the dependent variables are used to generate decisions for maneuvering the vehicle.
    Type: Application
    Filed: October 2, 2018
    Publication date: April 18, 2019
    Inventors: Benjamin Englard, Pranav Maheshwari, Shubham C. Khilari, Vahid R. Ramezani
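
The finite-horizon optimization described above can be sketched as follows: accelerations over the next N steps are the independent variables, the objective sums weighted terms for progress, comfort, and speed tracking, and the solver respects acceleration bounds and a speed-limit constraint. The dynamics, weights, and horizon below are invented for illustration.

    # Invented objective and dynamics: accelerations over N steps, weighted terms
    # for progress, comfort, and speed tracking, solved subject to constraints.
    import numpy as np
    from scipy.optimize import minimize

    DT, N, V0, SPEED_LIMIT = 0.5, 10, 8.0, 15.0        # step [s], horizon, start speed, limit [m/s]

    def speeds(accels):
        return V0 + DT * np.cumsum(accels)             # simple longitudinal dynamics

    def objective(accels):
        v = speeds(accels)
        progress = np.sum(v) * DT                      # distance covered over the horizon
        comfort = np.sum(np.diff(accels) ** 2)         # penalize abrupt acceleration changes
        tracking = np.sum((v - SPEED_LIMIT) ** 2)      # stay near the target speed
        return -1.0 * progress + 5.0 * comfort + 0.1 * tracking

    constraints = [{"type": "ineq", "fun": lambda a: SPEED_LIMIT - speeds(a)}]   # never exceed the limit

    if __name__ == "__main__":
        result = minimize(objective, x0=np.zeros(N), method="SLSQP",
                          bounds=[(-3.0, 2.0)] * N, constraints=constraints)
        print(np.round(result.x, 2))                   # planned accelerations [m/s^2]
        print(np.round(speeds(result.x), 2))           # resulting speed profile [m/s]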