Patents by Inventor Ibrahim Eden
Ibrahim Eden has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230033470
Abstract: In various examples, systems and methods are described that generate scene flow in 3D space by simplifying 3D LiDAR data to a "2.5D" optical flow space (e.g., x, y, and depth flow). For example, LiDAR range images may be used to generate 2.5D representations of depth flow information between frames of LiDAR data: two or more range images may be compared to generate depth flow information, and messages may be passed (e.g., using a belief propagation algorithm) to update pixel values in the 2.5D representation. The resulting images may then be used to generate 2.5D motion vectors, and the 2.5D motion vectors may be converted back to 3D space to generate a 3D scene flow representation of an environment around an autonomous machine.
Type: Application
Filed: August 2, 2021
Publication date: February 2, 2023
Inventors: David Wehr, Ibrahim Eden, Joachim Pehserl
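For a concrete picture of the 2.5D idea, here is a minimal numpy sketch. It assumes spherical (equirectangular) LiDAR range images and an assumed vertical field of view; the belief-propagation message passing and the 2.5D matching step are omitted, and the helper names (`depth_flow`, `lift_to_3d`, `scene_flow_vector`) are illustrative, not from the filing.

```python
import numpy as np

def depth_flow(range_t0: np.ndarray, range_t1: np.ndarray) -> np.ndarray:
    """Per-pixel depth change between two aligned LiDAR range images."""
    return range_t1 - range_t0

def lift_to_3d(px, py, depth, h, w, v_fov=(-0.436, 0.087)):
    """Convert a range-image pixel (px, py, depth) to a 3D point,
    assuming an equirectangular (spherical) LiDAR projection.
    v_fov is an assumed vertical field of view in radians."""
    azimuth = (px / w) * 2.0 * np.pi - np.pi
    elevation = v_fov[0] + (py / h) * (v_fov[1] - v_fov[0])
    x = depth * np.cos(elevation) * np.cos(azimuth)
    y = depth * np.cos(elevation) * np.sin(azimuth)
    z = depth * np.sin(elevation)
    return np.array([x, y, z])

def scene_flow_vector(px0, py0, px1, py1, r0, r1, h, w):
    """3D scene-flow vector from one 2.5D match (x, y, and depth flow)."""
    p0 = lift_to_3d(px0, py0, r0[py0, px0], h, w)
    p1 = lift_to_3d(px1, py1, r1[py1, px1], h, w)
    return p1 - p0

# Example: lift one matched pixel pair between consecutive range scans.
h, w = 64, 1024
r0 = np.full((h, w), 10.0)
r1 = np.full((h, w), 10.5)
flow = scene_flow_vector(512, 32, 513, 32, r0, r1, h, w)
```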
-
Publication number: 20220415059
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 25, 2022
Publication date: December 29, 2022
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
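To make the chained-stage design easier to picture, here is a minimal PyTorch sketch, not the patented architecture: two small convolutional stages stand in for the constituent DNNs, and `to_top_down` is a placeholder for the perspective-to-bird's-eye-view reprojection, which in a real system would use range information.

```python
import torch
import torch.nn as nn

class ViewStage(nn.Module):
    """One constituent DNN: a small conv net that segments its view."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, out_ch, 1),
        )

    def forward(self, x):
        return self.net(x)

class MultiViewPerception(nn.Module):
    """Two stages chained: perspective-view class segmentation feeds a
    top-down stage that segments classes and regresses instance geometry."""
    def __init__(self, n_classes=4, geo_ch=3):
        super().__init__()
        self.perspective = ViewStage(in_ch=1, out_ch=n_classes)
        self.topdown_cls = ViewStage(in_ch=n_classes, out_ch=n_classes)
        self.topdown_geo = ViewStage(in_ch=n_classes, out_ch=geo_ch)

    def to_top_down(self, feats):
        # Placeholder reprojection: a real system would splat perspective
        # features into a bird's-eye-view grid using per-pixel range.
        return feats

    def forward(self, range_image):
        persp = self.perspective(range_image)
        bev = self.to_top_down(persp)
        return self.topdown_cls(bev), self.topdown_geo(bev)

model = MultiViewPerception()
cls_map, geo_map = model(torch.randn(1, 1, 64, 512))
```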
-
Patent number: 11532168
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Grant
Filed: June 29, 2020
Date of Patent: December 20, 2022
Assignee: NVIDIA Corporation
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20220237925
Abstract: LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping one or more point cloud data points into a depth map, the one or more point cloud data points being generated using one or more sensors; determining one or more mapped point cloud data points within a bounded area of the depth map; and detecting, using one or more processing units and for an environment surrounding a machine corresponding to the one or more sensors, a location of one or more entities based on the one or more mapped point cloud data points.
Type: Application
Filed: April 12, 2022
Publication date: July 28, 2022
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
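As a rough illustration of the mapping and bounded-area steps, the following numpy sketch scatters LiDAR points into a spherical depth map and fetches a bounded window around a query cell. The projection model and helper names are assumptions for illustration, not details from the filing.

```python
import numpy as np

def point_cloud_to_depth_map(points: np.ndarray, h=64, w=1024):
    """Scatter (x, y, z) LiDAR points into a spherical 2D depth map."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                       # [-pi, pi)
    elevation = np.arcsin(z / np.maximum(depth, 1e-6))
    cols = ((azimuth + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    rows = ((elevation - elevation.min()) /
            max(np.ptp(elevation), 1e-6) * (h - 1)).astype(int)
    depth_map = np.full((h, w), np.inf)              # inf = empty cell
    np.minimum.at(depth_map, (rows, cols), depth)    # keep nearest return
    return depth_map

def window(depth_map, row, col, half=2):
    """Fetch mapped points inside a bounded area of the depth map."""
    return depth_map[max(row - half, 0):row + half + 1,
                     max(col - half, 0):col + half + 1]

pts = np.random.randn(10000, 3) * 10.0
dm = point_cloud_to_depth_map(pts)
patch = window(dm, row=32, col=512)
```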
-
Patent number: 11301697
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map; fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map; and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
Type: Grant
Filed: July 24, 2020
Date of Patent: April 12, 2022
Assignee: NVIDIA Corporation
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
-
Publication number: 20210342609
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210342608
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: July 15, 2021
Publication date: November 4, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Publication number: 20210233307
Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
Type: Application
Filed: April 12, 2021
Publication date: July 29, 2021
Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
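A minimal numpy sketch of the 1D-lookup reduction, assuming the observations have already been rectified so that vertical landmarks align with image columns; the function name and array shapes are illustrative.

```python
import numpy as np

def to_1d_lookup(detection_mask: np.ndarray) -> np.ndarray:
    """Collapse a rectified detection mask to a 1D lookup.

    After rectification, vertical landmarks (poles, signs) align with
    image columns, so their presence can be summarized per column."""
    return detection_mask.any(axis=0).astype(np.float32)

# Example: a pole detected in columns 10-12 of a rectified image.
mask = np.zeros((480, 640), dtype=bool)
mask[100:400, 10:13] = True
lookup = to_1d_lookup(mask)  # 640-element vector, 1.0 where a landmark is
```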
-
Publication number: 20210150230
Abstract: One or more deep neural networks (DNNs) may be used to detect objects from sensor data of a three-dimensional (3D) environment. For example, a multi-view perception DNN may include multiple constituent DNNs or stages chained together that sequentially process different views of the 3D environment. An example DNN may include a first stage that performs class segmentation in a first view (e.g., perspective view) and a second stage that performs class segmentation and/or regresses instance geometry in a second view (e.g., top-down). The DNN outputs may be processed to generate 2D and/or 3D bounding boxes and class labels for detected objects in the 3D environment. As such, the techniques described herein may be used to detect and classify animate objects and/or parts of an environment, and these detections and classifications may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: June 29, 2020
Publication date: May 20, 2021
Inventors: Nikolai Smolyanskiy, Ryan Oldja, Ke Chen, Alexander Popov, Joachim Pehserl, Ibrahim Eden, Tilman Wekel, David Wehr, Ruchi Bhargava, David Nister
-
Patent number: 10991155
Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
Type: Grant
Filed: April 16, 2019
Date of Patent: April 27, 2021
Assignee: NVIDIA Corporation
Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
-
Publication number: 20210063198
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams (streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data) corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data, and ultimately a fused high definition (HD) map, that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
Type: Application
Filed: August 31, 2020
Publication date: March 4, 2021
Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
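One plausible reading of the final per-modality fusion step is an inverse-covariance weighted average of pose estimates. The numpy sketch below illustrates that idea only; the filing does not specify this particular fusion rule, and all names and values here are assumptions.

```python
import numpy as np

def fuse_localization(results):
    """Fuse per-sensor-modality localization results into one pose.

    Each result is (pose, covariance), where pose is a 3-vector
    (x, y, yaw); fusion is an inverse-covariance weighted average."""
    info = [np.linalg.inv(cov) for _, cov in results]
    total_info = sum(info)
    weighted = sum(I @ pose for (pose, _), I in zip(results, info))
    return np.linalg.inv(total_info) @ weighted

# Example: camera-based and LiDAR-based localization results.
camera = (np.array([10.0, 5.0, 0.10]), np.diag([0.5, 0.5, 0.02]))
lidar  = (np.array([10.2, 4.9, 0.12]), np.diag([0.1, 0.1, 0.01]))
fused  = fuse_localization([camera, lidar])  # closer to the LiDAR estimate
```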
-
Publication number: 20210063200
Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams (streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data) corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data, and ultimately a fused high definition (HD) map, that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
Type: Application
Filed: August 31, 2020
Publication date: March 4, 2021
Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
-
Publication number: 20210063578
Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain (e.g., without requiring manual annotation in the LiDAR domain). Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
Type: Application
Filed: August 28, 2020
Publication date: March 4, 2021
Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
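The cross-domain label propagation can be pictured as projecting LiDAR points into the camera image and sampling the image-domain annotations. The numpy sketch below assumes a pinhole camera with intrinsics `K` and a 4x4 LiDAR-to-camera transform; all names are illustrative, not from the filing.

```python
import numpy as np

def propagate_labels(points, cam_labels, K, T_cam_lidar):
    """Propagate image-domain class labels to LiDAR points.

    points: (N, 3) LiDAR points; cam_labels: (H, W) class-id image;
    K: 3x3 camera intrinsics; T_cam_lidar: 4x4 extrinsics."""
    h, w = cam_labels.shape
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (T_cam_lidar @ pts_h.T)[:3]            # points in camera frame
    uvw = K @ cam                                 # pinhole projection
    uv = uvw[:2] / np.maximum(uvw[2], 1e-6)
    u = uv[0].round().astype(int)
    v = uv[1].round().astype(int)
    labels = np.full(len(points), -1)             # -1 = unlabeled
    ok = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels[ok] = cam_labels[v[ok], u[ok]]         # sample image annotations
    return labels
```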
-
Publication number: 20210026355
Abstract: A deep neural network (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
Type: Application
Filed: July 24, 2020
Publication date: January 28, 2021
Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
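A minimal PyTorch sketch of the common-trunk, multi-head layout described above; layer sizes and head channel counts are invented for illustration and are not taken from the filing.

```python
import torch
import torch.nn as nn

class PanopticNet(nn.Module):
    """Common trunk with several heads, evaluated in one forward pass."""
    def __init__(self, in_ch=3, n_classes=5):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.class_conf = nn.Conv2d(64, n_classes, 1)  # per-class confidence
        self.inst_regr  = nn.Conv2d(64, 2, 1)          # offsets to instance center
        self.inst_clust = nn.Conv2d(64, 1, 1)          # per-instance confidence
        self.depth      = nn.Conv2d(64, 1, 1)          # range values

    def forward(self, x):
        f = self.trunk(x)
        return {
            "class_confidence": self.class_conf(f).softmax(dim=1),
            "instance_regression": self.inst_regr(f),
            "instance_clustering": self.inst_clust(f).sigmoid(),
            "depth": self.depth(f),
        }

outputs = PanopticNet()(torch.randn(1, 3, 128, 256))
```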
-
Publication number: 20200357160
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map; fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map; and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
Type: Application
Filed: July 24, 2020
Publication date: November 12, 2020
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
-
Publication number: 20200334900
Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
Type: Application
Filed: April 16, 2019
Publication date: October 22, 2020
Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
-
Patent number: 10776983
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. They also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
Type: Grant
Filed: July 31, 2018
Date of Patent: September 15, 2020
Assignee: NVIDIA Corporation
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
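As an illustration of the depth-map-plus-texture-map representation, the numpy sketch below rasterizes a point cloud into a depth channel and an intensity channel standing in for "texture"; the spherical projection model and per-return reflectance input are assumptions, not details from the filing.

```python
import numpy as np

def build_depth_and_texture(points, intensity, h=64, w=1024):
    """Represent a point cloud as a 2D depth map plus a texture map.

    points: (N, 3) positions; intensity: (N,) per-return reflectance,
    standing in for the texture channel."""
    depth = np.linalg.norm(points, axis=1)
    az = np.arctan2(points[:, 1], points[:, 0])
    el = np.arcsin(points[:, 2] / np.maximum(depth, 1e-6))
    c = ((az + np.pi) / (2 * np.pi) * (w - 1)).astype(int)
    r = ((el - el.min()) / max(np.ptp(el), 1e-6) * (h - 1)).astype(int)
    depth_map = np.full((h, w), np.inf)
    texture_map = np.zeros((h, w))
    order = np.argsort(-depth)       # write nearest returns last, so they win
    depth_map[r[order], c[order]] = depth[order]
    texture_map[r[order], c[order]] = intensity[order]
    return depth_map, texture_map
```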
-
Patent number: 10769840
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, landmark detection, and providing other geometric space parameters.
Type: Grant
Filed: July 31, 2018
Date of Patent: September 8, 2020
Assignee: NVIDIA Corporation
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
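The polar-depth-map shortcut for nearest-neighbor search can be sketched as restricting the search to the angular neighborhood of the query cell rather than scanning the whole cloud. The numpy code below is an approximate illustration under that assumption, not the patented method; empty cells are assumed to hold `np.inf`, and the query cell is assumed occupied.

```python
import numpy as np

def nearest_neighbor_polar(depth_map, row, col, half=3):
    """Approximate nearest-neighbor lookup in a polar depth map.

    Only the angular neighborhood of the query cell is searched,
    relying on the polar ordering of the map; within this small
    window, depth difference is used as a distance proxy."""
    r0, r1 = max(row - half, 0), row + half + 1
    c0, c1 = max(col - half, 0), col + half + 1
    patch = depth_map[r0:r1, c0:c1].copy()
    q = depth_map[row, col]
    patch[row - r0, col - c0] = np.inf        # exclude the query itself
    diff = np.abs(patch - q)                  # inf where cells are empty
    if not np.isfinite(diff).any():
        return None                           # no occupied neighbor found
    idx = np.unravel_index(np.argmin(diff), patch.shape)
    return (r0 + idx[0], c0 + idx[1])
```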
-
Patent number: 10424103
Abstract: Examples relating to attracting the gaze of a viewer of a display are disclosed. One example method comprises controlling the display to display a target object and using gaze tracking data to monitor a viewer gaze location. A guide element is displayed moving along a computed dynamic path that traverses adjacent to a viewer gaze location and leads to the target object. If the viewer's gaze location is within a predetermined divergence threshold of the guide element, then the display continues displaying the guide element moving along the computed dynamic guide path to the target object. If the viewer's gaze location diverts from the guide element by at least the predetermined divergence threshold, then the display discontinues displaying the guide element moving along the computed dynamic guide path to the target object.
Type: Grant
Filed: April 29, 2014
Date of Patent: September 24, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventor: Ibrahim Eden
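A small Python sketch of the divergence-threshold logic described in the abstract; the threshold value, waypoint representation, and function name are all assumptions for illustration.

```python
import math

DIVERGENCE_THRESHOLD = 80.0  # pixels; an assumed tuning value

def update_guide(gaze, guide_path, step, threshold=DIVERGENCE_THRESHOLD):
    """Advance a guide element along its computed path while the viewer's
    gaze stays within the divergence threshold; otherwise stop guiding.

    gaze: (x, y) gaze location; guide_path: list of (x, y) waypoints;
    step: index of the guide element's current waypoint."""
    gx, gy = guide_path[step]
    divergence = math.hypot(gaze[0] - gx, gaze[1] - gy)
    if divergence >= threshold:
        return None                               # discontinue the guide
    return min(step + 1, len(guide_path) - 1)     # keep moving to target

# Example: gaze tracks the guide closely, so the element advances.
path = [(0, 0), (40, 30), (80, 60)]
next_step = update_guide(gaze=(45, 25), guide_path=path, step=1)
```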
-
Publication number: 20190266779
Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for the 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, landmark detection, and providing other geometric space parameters.
Type: Application
Filed: July 31, 2018
Publication date: August 29, 2019
Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister