Patents by Inventor Dmytro Trofymov
Dmytro Trofymov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11841440
Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
Type: Grant
Filed: November 24, 2021
Date of Patent: December 12, 2023
Assignee: Luminar Technologies, Inc.
Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
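As an illustrative sketch only (not the patented implementation), the interlacing described in this abstract can be pictured as: trace scan lines along the first axis, but distribute them along the second axis in multiple passes so each pass fills in lines skipped by the previous one, and optionally offset pixel positions on alternating lines. The function names and the half-pitch pixel offset below are assumptions for illustration.

```python
def interlaced_scan_line_order(n_lines: int, n_fields: int = 2) -> list:
    """Order scan lines so that each successive field (pass) fills in
    the lines skipped by earlier fields, e.g. even lines then odd lines."""
    order = []
    for field in range(n_fields):
        order.extend(range(field, n_lines, n_fields))
    return order


def interlaced_pixel_offsets(n_lines: int, pixel_pitch: float) -> list:
    """Shift the pixel start position on alternating lines by half a
    pixel pitch, so pixels on neighbouring lines interleave."""
    return [(i % 2) * pixel_pitch / 2.0 for i in range(n_lines)]
```

For example, `interlaced_scan_line_order(6)` traces lines 0, 2, 4 on the first pass and 1, 3, 5 on the second, so the full vertical resolution is built up across passes.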
-
Publication number: 20230251656
Abstract: First training sensor data detected by a plurality of real-world sensors are obtained. The first training sensor data is associated with physical environment conditions. Second training sensor data detected by a plurality of virtual sensors are obtained. The second training sensor data is associated with simulated physical conditions of a virtual environment. A machine learning model is trained using both real-world and virtual training datasets including the first training sensor data, the second training sensor data, and respective sensor setting parameters of the plurality of real-world sensors and the plurality of virtual sensors. The real-world and virtual training datasets used to train the machine learning model include indications associated with the respective sensor parameter settings including one or more of the following: different scan line settings or different exposure settings.
Type: Application
Filed: April 12, 2023
Publication date: August 10, 2023
Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
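A minimal sketch of the data preparation this abstract describes, under the assumption (mine, for illustration) that each sample is a dict carrying its sensor setting parameters: real and virtual samples are merged into one training set, each tagged with its origin and keeping its scan-line and exposure settings alongside the data.

```python
def build_training_set(real_samples, virtual_samples):
    """Merge real-sensor and virtual-sensor samples into one training
    set, tagging each sample with its source while preserving its
    sensor setting parameters (scan lines, exposure, ...)."""
    dataset = []
    for sample in real_samples:
        dataset.append({**sample, "source": "real"})
    for sample in virtual_samples:
        dataset.append({**sample, "source": "virtual"})
    return dataset


# Hypothetical samples: sensor settings plus the detected data.
real = [{"scan_lines": 64, "exposure_ms": 8.0, "points": [1, 2, 3]}]
virtual = [{"scan_lines": 32, "exposure_ms": 4.0, "points": [4, 5]}]
training = build_training_set(real, virtual)
```

The "source" tag and the specific setting keys are illustrative; the point is that the model sees both domains together with the settings that produced each sample.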
-
Patent number: 11656620
Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
Type: Grant
Filed: March 6, 2019
Date of Patent: May 23, 2023
Assignee: Luminar, LLC
Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
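The controller pipeline in this abstract (sensor data → ML model → environment parameters → motion planner → maneuver) can be sketched as a single cycle. This is a structural sketch only; the `model`, `planner`, and `actuate` callables are assumed interfaces, not the patented components.

```python
def control_step(sensor_data, model, planner, actuate):
    """One perception-planning-control cycle: the ML model maps raw
    sensor data to environment parameters, the motion planner turns
    those parameters into driving decisions, and the actuator
    executes them."""
    env_params = model(sensor_data)   # environment parameters from sensor data
    decisions = planner(env_params)   # decisions for controlling the vehicle
    actuate(decisions)                # maneuver per the generated decisions
    return decisions
```

With stub components, a step might look like `control_step(ranges, lambda s: {"obstacle_dist": min(s)}, lambda p: ["brake"] if p["obstacle_dist"] < 0.15 else ["cruise"], executed.extend)`.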
-
Patent number: 11391842
Abstract: An imaging sensor is configured to sense an environment through which a vehicle is moving. A method for controlling the imaging sensor includes receiving sensor data generated by the imaging sensor of the vehicle as the vehicle moves through the environment, determining (i) a lower bound for a vertical region of interest (VROI) within a vertical field of regard of the imaging sensor, the VROI comprising a virtual horizon, using a first subset of the sensor data, and (ii) an upper bound for the VROI within the vertical field of regard of the imaging sensor, using at least a second subset of the sensor data, where the first subset is smaller than the second subset. The method also includes causing the imaging sensor to be adjusted in accordance with the determined lower bound of the VROI and the determined upper bound of the VROI.
Type: Grant
Filed: February 12, 2020
Date of Patent: July 19, 2022
Assignee: Luminar, LLC
Inventor: Dmytro Trofymov
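One way to picture the asymmetric-subset idea in this abstract: estimate the lower bound (near the virtual horizon) cheaply from a small subset of returns, and the upper bound from a larger subset. The statistics used below (median for the horizon estimate, max for the upper bound) are my assumptions for illustration, not the claimed method.

```python
from statistics import median

def determine_vroi(elevation_angles, small_n, large_n):
    """Sketch: bound a vertical region of interest from per-return
    elevation angles (degrees). The lower bound approximates the
    virtual horizon from a small, cheap subset; the upper bound uses
    a larger subset so tall returns are not missed."""
    assert small_n < large_n, "first subset must be smaller than the second"
    lower = median(elevation_angles[:small_n])  # virtual-horizon estimate
    upper = max(elevation_angles[:large_n])     # highest relevant return
    return lower, upper
```

The sensor would then be steered or cropped so its vertical field of regard covers `[lower, upper]`.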
-
Publication number: 20220082702
Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
Type: Application
Filed: November 24, 2021
Publication date: March 17, 2022
Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
-
Patent number: 11194048
Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
Type: Grant
Filed: May 12, 2021
Date of Patent: December 7, 2021
Assignee: Luminar, LLC
Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
-
Publication number: 20210356600
Abstract: In one embodiment, a lidar system includes a light source configured to emit pulses of light and a scanner configured to scan the emitted pulses of light along a high-resolution scan pattern located within a field of regard of the lidar system. The scanner includes one or more scan mirrors configured to (i) scan the emitted pulses of light along a first scan axis to produce multiple scan lines of the high-resolution scan pattern, where each scan line is associated with multiple pixels, each pixel corresponding to one of the emitted pulses of light and (ii) distribute the scan lines of the high-resolution scan pattern along a second scan axis. The high-resolution scan pattern includes one or more of: interlaced scan lines and interlaced pixels.
Type: Application
Filed: May 12, 2021
Publication date: November 18, 2021
Inventors: Istvan Peter Burbank, Matthew D. Weed, Jason Paul Wojack, Jason M. Eichenholz, Dmytro Trofymov
-
Publication number: 20210208281
Abstract: An imaging sensor is configured to sense an environment through which a vehicle is moving. A method for controlling the imaging sensor includes receiving sensor data generated by the imaging sensor of the vehicle as the vehicle moves through the environment, determining (i) a lower bound for a vertical region of interest (VROI) within a vertical field of regard of the imaging sensor, the VROI comprising a virtual horizon, using a first subset of the sensor data, and (ii) an upper bound for the VROI within the vertical field of regard of the imaging sensor, using at least a second subset of the sensor data, where the first subset is smaller than the second subset. The method also includes causing the imaging sensor to be adjusted in accordance with the determined lower bound of the VROI and the determined upper bound of the VROI.
Type: Application
Filed: February 12, 2020
Publication date: July 8, 2021
Inventor: Dmytro Trofymov
-
Publication number: 20200209858
Abstract: To generate a machine learning model for controlling autonomous vehicles, training sensor data is obtained from sensors associated with one or more vehicles, the sensor data indicative of physical conditions of an environment in which the one or more vehicles operate, and a machine learning (ML) model is trained using the training sensor data. The ML model generates parameters of the environment in response to input sensor data. A controller in an autonomous vehicle receives sensor data from one or more sensors operating in the autonomous vehicle, applies the received sensor data to the ML model to obtain parameters of an environment in which the autonomous vehicle operates, provides the generated parameters to a motion planner component to generate decisions for controlling the autonomous vehicle, and causes the autonomous vehicle to maneuver in accordance with the generated decisions.
Type: Application
Filed: March 6, 2019
Publication date: July 2, 2020
Inventors: Dmytro Trofymov, Pranav Maheshwari, Vahid R. Ramezani
-
Patent number: 10535191
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 14, 2020
Assignee: Luminar Technologies, Inc.
Inventors: Prateek Sachdeva, Dmytro Trofymov
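The "groups/clusters of data points ... that represent distinct objects" step in this abstract can be pictured with a simple proximity clustering over 3-D points. This naive O(n²) single-link sketch is illustrative only and is not the patented identification/delineation technique.

```python
from collections import deque
from math import dist

def cluster_points(points, radius):
    """Group 3-D points into candidate objects: points within `radius`
    of each other (directly or transitively) share a cluster.
    Returns sorted index lists, one per cluster."""
    unvisited = set(range(len(points)))
    clusters = []
    while unvisited:
        seed = unvisited.pop()
        queue, cluster = deque([seed]), [seed]
        while queue:
            i = queue.popleft()
            # Gather unvisited neighbours within the radius.
            near = [j for j in unvisited if dist(points[i], points[j]) <= radius]
            for j in near:
                unvisited.remove(j)
                queue.append(j)
                cluster.append(j)
        clusters.append(sorted(cluster))
    return clusters
```

Each cluster would then be a candidate object to label (car, pedestrian, ...), with the user free to modify or refine the automatic result, as the abstract describes.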
-
Publication number: 20190197778
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Application
Filed: February 27, 2018
Publication date: June 27, 2019
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10275689
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: April 30, 2019
Assignee: Luminar Technologies, Inc.
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10175697
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 8, 2019
Assignee: Luminar Technologies, Inc.
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10169678
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 1, 2019
Assignee: Luminar Technologies, Inc.
Inventors: Prateek Sachdeva, Dmytro Trofymov
-
Patent number: 10169680
Abstract: Techniques for identifying and labeling distinct objects within 3-D images of environments in which vehicles operate, to thereby generate training data used to train models that autonomously control and/or operate vehicles, are disclosed. A 3-D image may be presented from various perspective views (in some cases, dynamically), and/or may be presented with a corresponding 2-D environment image in a side-by-side and/or a layered manner, thereby allowing a user to more accurately identify groups/clusters of data points within the 3-D image that represent distinct objects. Automatic identification/delineation of various types of objects depicted within 3-D images, automatic labeling of identified/delineated objects, and automatic tracking of objects across various frames of a 3-D video are disclosed. A user may modify and/or refine any automatically generated information. Further, at least some of the techniques described herein are equally applicable to 2-D images.
Type: Grant
Filed: February 27, 2018
Date of Patent: January 1, 2019
Assignee: Luminar Technologies, Inc.
Inventors: Prateek Sachdeva, Dmytro Trofymov