Patents by Inventor Dmytro Trofymov

Dmytro Trofymov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11841440
    Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
    Type: Grant
    Filed: November 24, 2021
    Date of Patent: December 12, 2023
    Assignee: Luminar Technologies, Inc.
    Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
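The interlacing idea in the abstract above can be illustrated with a short sketch. This is not the patented implementation; `interlaced_line_order` is a hypothetical helper showing one way scan lines can be distributed along the second scan axis so that successive passes fill in the gaps between lines of the previous pass.

```python
def interlaced_line_order(num_lines: int, interlace_factor: int = 2) -> list[int]:
    """Return the order in which scan lines are traced so that each pass
    covers every interlace_factor-th line, filling gaps on later passes."""
    order = []
    for phase in range(interlace_factor):
        order.extend(range(phase, num_lines, interlace_factor))
    return order

# A 2:1 interlace over 8 scan lines traces the even lines, then the odd ones.
print(interlaced_line_order(8))  # [0, 2, 4, 6, 1, 3, 5, 7]
```

Every line index still appears exactly once per full cycle; the interlace only changes the order, trading instantaneous vertical density for faster partial-frame coverage.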
  • Publication number: 20230251656
    Abstract: First training sensor data detected by a plurality of real-world sensors are obtained. The first training sensor data is associated with physical environment conditions. Second training sensor data detected by a plurality of virtual sensors are obtained. The second training sensor data is associated with simulated physical conditions of a virtual environment. A machine learning model is trained using both real-world and virtual training datasets including the first training sensor data, the second training sensor data, and respective sensor setting parameters of the plurality of real-world sensors and the plurality of virtual sensors. The real-world and virtual training datasets used to train the machine learning model include indications associated with the respective sensor parameter settings including one or more of the following: different scan line settings or different exposure settings.
    Type: Application
    Filed: April 12, 2023
    Publication date: August 10, 2023
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
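The mixed real/virtual training-set construction described in the abstract can be sketched as follows. The record layout and the `build_training_set` helper are assumptions for illustration only: each sample pairs a sensor reading with the sensor settings (e.g. scan-line or exposure configuration) under which it was captured, tagged by its real or simulated origin.

```python
def build_training_set(real_samples, virtual_samples):
    """Combine real-world and simulated sensor samples into one dataset,
    keeping the sensor-setting parameters alongside each reading."""
    dataset = []
    for reading, settings in real_samples:
        dataset.append({"reading": reading, "settings": settings, "source": "real"})
    for reading, settings in virtual_samples:
        dataset.append({"reading": reading, "settings": settings, "source": "virtual"})
    return dataset

combined = build_training_set(
    real_samples=[([0.1, 0.4], {"scan_lines": 64})],
    virtual_samples=[([0.2, 0.3], {"scan_lines": 128})],
)
print(len(combined))  # 2
```

A model trained on such a set sees both physical and simulated conditions, with the setting indicators letting it account for how each sample was captured.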
  • Patent number: 11656620
    Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
    Type: Grant
    Filed: March 6, 2019
    Date of Patent: May 23, 2023
    Assignee: Luminar, LLC
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
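The control flow in the abstract (sensors, ML model, motion planner, actuation) can be sketched as a single loop iteration. `control_step` and the toy model/planner below are illustrative stand-ins, not the claimed system.

```python
def control_step(sensor_data, perception_model, motion_planner, actuate):
    """One control-loop iteration: infer environment parameters from sensor
    data, plan a maneuver from those parameters, then execute it."""
    env_params = perception_model(sensor_data)  # ML model inference
    decision = motion_planner(env_params)       # motion planning
    actuate(decision)                           # vehicle actuation
    return decision

# Toy stand-ins: a "model" that flags a nearby obstacle and a planner that
# brakes when one is detected.
log = []
decision = control_step(
    sensor_data=[0.8, 0.2],
    perception_model=lambda d: {"obstacle_near": max(d) > 0.5},
    motion_planner=lambda env: "brake" if env["obstacle_near"] else "cruise",
    actuate=log.append,
)
print(decision)  # brake
```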
  • Patent number: 11391842
    Abstract: An imaging sensor is configured to sense an environment through which a vehicle is moving. A method for controlling the imaging sensor includes receiving sensor data generated by the imaging sensor of the vehicle as the vehicle moves through the environment, determining (i) a lower bound for a vertical region of interest (VROI) within a vertical field of regard of the imaging sensor, the VROI comprising a virtual horizon, using a first subset of the sensor data, and (ii) an upper bound for the VROI within the vertical field of regard of the imaging sensor, using at least a second subset of the sensor data, where the first subset is smaller than the second subset. The method also includes causing the imaging sensor to be adjusted in accordance with the determined lower bound of the VROI and the determined upper bound of the VROI.
    Type: Grant
    Filed: February 12, 2020
    Date of Patent: July 19, 2022
    Assignee: Luminar, LLC
    Inventor: Dmytro Trofymov
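The asymmetry in the abstract (a lower VROI bound from a small data subset, an upper bound from a larger one) can be sketched as below. `vertical_roi` and its margin parameter are hypothetical; real bound estimation would involve horizon detection, not a simple min/max.

```python
def vertical_roi(elevations_small, elevations_large, margin=2.0):
    """Estimate a vertical region of interest (in degrees of elevation).

    The lower bound is derived from a small subset of the sensor data
    (a cheap virtual-horizon estimate); the upper bound uses a larger
    subset, mirroring the first-subset/second-subset split described
    in the abstract."""
    lower = min(elevations_small) - margin
    upper = max(elevations_large) + margin
    return lower, upper

lower, upper = vertical_roi([-1.0, 0.5], [-1.0, 0.5, 3.0, 4.2])
print(lower, upper)  # -3.0 6.2
```

The sensor would then be steered or cropped so its vertical field of regard covers [lower, upper], concentrating resolution around the virtual horizon.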
  • Publication number: 20220082702
    Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
    Type: Application
    Filed: November 24, 2021
    Publication date: March 17, 2022
    Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
  • Patent number: 11194048
    Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
    Type: Grant
    Filed: May 12, 2021
    Date of Patent: December 7, 2021
    Assignee: Luminar, LLC
    Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
  • Publication number: 20210356600
    Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
    Type: Application
    Filed: May 12, 2021
    Publication date: November 18, 2021
    Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
  • Publication number: 20210208281
    Abstract: An imaging sensor is configured to sense an environment through which a vehicle is moving. A method for controlling the imaging sensor includes receiving sensor data generated by the imaging sensor of the vehicle as the vehicle moves through the environment, determining (i) a lower bound for a vertical region of interest (VROI) within a vertical field of regard of the imaging sensor, the VROI comprising a virtual horizon, using a first subset of the sensor data, and (ii) an upper bound for the VROI within the vertical field of regard of the imaging sensor, using at least a second subset of the sensor data, where the first subset is smaller than the second subset. The method also includes causing the imaging sensor to be adjusted in accordance with the determined lower bound of the VROI and the determined upper bound of the VROI.
    Type: Application
    Filed: February 12, 2020
    Publication date: July 8, 2021
    Inventor: Dmytro Trofymov
  • Publication number: 20200209858
    Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
    Type: Application
    Filed: March 6, 2019
    Publication date: July 2, 2020
    Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
  • Patent number: 10535191
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 14, 2020
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
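The "groups/clusters of data points" step in the abstract above is essentially Euclidean clustering of a 3-D point cloud. The sketch below is a naive single-linkage version for illustration; `cluster_points` is a hypothetical name, and production labeling tools would use spatial indexing rather than this O(n²) scan.

```python
from collections import deque

def cluster_points(points, threshold=1.0):
    """Greedy Euclidean clustering: a point within `threshold` of any
    member of a cluster joins that cluster (single-linkage flood fill)."""
    def close(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) <= threshold ** 2

    unvisited = list(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            for j in unvisited[:]:          # iterate over a copy while removing
                if close(points[i], points[j]):
                    unvisited.remove(j)
                    queue.append(j)
                    cluster.append(j)
        clusters.append(cluster)
    return clusters

# Two nearby points form one cluster; the distant point is its own object.
clusters = cluster_points([(0, 0, 0), (0.5, 0, 0), (5, 5, 5)])
print(len(clusters))  # 2
```

Each resulting cluster is a candidate distinct object that a user could then label, refine, or track across frames as the abstract describes.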
  • Publication number: 20190197778
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Application
    Filed: February 27, 2018
    Publication date: June 27, 2019
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10275689
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: April 30, 2019
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10175697
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 8, 2019
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10169678
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 1, 2019
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov
  • Patent number: 10169680
    Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
    Type: Grant
    Filed: February 27, 2018
    Date of Patent: January 1, 2019
    Assignee: Luminar Technologies, Inc.
    Inventors: Prateek Sachdeva, Dmytro Trofymov