Patents by Inventor Ibrahim Eden

Ibrahim Eden has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10991155
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Grant
    Filed: April 16, 2019
    Date of Patent: April 27, 2021
    Assignee: NVIDIA Corporation
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
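
The core reduction described in this abstract, collapsing rectified 2D detections into a 1D lookup along the landmark direction, can be illustrated with a short sketch. The function name and the column-wise max reduction are assumptions for illustration, not the patented method:

```python
import numpy as np

def detections_to_1d_lookup(detection_mask: np.ndarray) -> np.ndarray:
    """Collapse a rectified 2D detection mask (H x W) into a 1D lookup.

    Rectification aligns vertical landmarks (poles, signs) with image
    columns, so a column-wise reduction is enough to record where a
    landmark appears. (Illustrative reduction, not the patented method.)
    """
    return detection_mask.max(axis=0)

# Toy example: a "pole" detected in column 2 of a 4x6 mask
mask = np.zeros((4, 6))
mask[1:4, 2] = 1.0
print(detections_to_1d_lookup(mask))  # [0. 0. 1. 0. 0. 0.]
```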
  • Publication number: 20210063198
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
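
A minimal sketch of the final fusion step the abstract describes: each sensor modality produces its own localization result, and the results are combined. The weighted averaging below (with circular averaging for heading) is an illustrative assumption; the disclosure does not commit to this exact scheme.

```python
import numpy as np

def fuse_localization(estimates, weights):
    """Fuse per-modality pose estimates (x, y, heading_rad) into one result.

    estimates: list of (x, y, heading_rad) tuples, one per sensor modality
    weights:   per-estimate confidence weights (illustrative assumption)
    """
    w = np.asarray(weights, dtype=float)
    w /= w.sum()
    xy = np.asarray([e[:2] for e in estimates])
    fused_xy = (w[:, None] * xy).sum(axis=0)
    # Average headings on the unit circle to handle angle wraparound
    s = (w * np.sin([e[2] for e in estimates])).sum()
    c = (w * np.cos([e[2] for e in estimates])).sum()
    return (fused_xy[0], fused_xy[1], np.arctan2(s, c))

# e.g., camera-, LiDAR-, and RADAR-based localization results
print(fuse_localization(
    [(10.2, 5.1, 0.02), (10.0, 5.0, 0.00), (10.4, 5.3, 0.05)],
    [0.5, 0.3, 0.2]))
```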
  • Publication number: 20210063578
    Abstract: In various examples, a deep neural network (DNN) may be used to detect and classify animate objects and/or parts of an environment. The DNN may be trained using camera-to-LiDAR cross injection to generate reliable ground truth data for LiDAR range images. For example, annotations generated in the image domain may be propagated to the LiDAR domain to increase the accuracy of the ground truth data in the LiDAR domain—e.g., without requiring manual annotation in the LiDAR domain. Once trained, the DNN may output instance segmentation masks, class segmentation masks, and/or bounding shape proposals corresponding to two-dimensional (2D) LiDAR range images, and the outputs may be fused together to project the outputs into three-dimensional (3D) LiDAR point clouds. This 2D and/or 3D information output by the DNN may be provided to an autonomous vehicle drive stack to enable safe planning and control of the autonomous vehicle.
    Type: Application
    Filed: August 28, 2020
    Publication date: March 4, 2021
    Inventors: Tilman Wekel, Sangmin Oh, David Nister, Joachim Pehserl, Neda Cvijetic, Ibrahim Eden
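
The camera-to-LiDAR propagation step can be pictured as projecting LiDAR points into the annotated image and copying the per-pixel labels back onto the points. This is a hedged sketch assuming known intrinsics and extrinsics; all names below are illustrative.

```python
import numpy as np

def propagate_labels_to_lidar(points_xyz, labels_img, K, T_cam_from_lidar):
    """Copy image-domain class labels onto LiDAR points.

    points_xyz:       (N, 3) LiDAR points in the LiDAR frame
    labels_img:       (H, W) integer class labels annotated in the image
    K:                (3, 3) camera intrinsics
    T_cam_from_lidar: (4, 4) extrinsics, LiDAR frame -> camera frame
    Returns (N,) labels; -1 where a point does not project into the image.
    """
    H, W = labels_img.shape
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_from_lidar @ pts_h.T).T[:, :3]        # points in camera frame
    in_front = cam[:, 2] > 0.1                         # keep points ahead of the camera
    uvw = (K @ cam.T).T
    uv = np.round(uvw[:, :2] / uvw[:, 2:3]).astype(int)
    labels = np.full(len(points_xyz), -1, dtype=int)
    valid = (in_front & (uv[:, 0] >= 0) & (uv[:, 0] < W)
             & (uv[:, 1] >= 0) & (uv[:, 1] < H))
    labels[valid] = labels_img[uv[valid, 1], uv[valid, 0]]
    return labels

# Demo: one point ahead of the camera, one behind it
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4)  # frames coincide for the demo
pts = np.array([[0.0, 0.0, 5.0], [0.0, 0.0, -5.0]])
print(propagate_labels_to_lidar(pts, np.full((480, 640), 7), K, T))  # [ 7 -1]
```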
  • Publication number: 20210063200
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
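
The mapstream concept, a bundle of sensor data, DNN perception outputs, and relative trajectory per drive, can be pictured as a simple record type. The fields below are assumptions for illustration only, not the actual format.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, List

@dataclass
class MapstreamFrame:
    """One frame of a mapstream (illustrative fields, not the actual format)."""
    timestamp_us: int
    relative_pose: List[float]       # e.g., [x, y, z, roll, pitch, yaw] vs. previous frame
    perception: Dict[str, Any] = field(default_factory=dict)   # DNN outputs (lanes, signs, ...)
    sensor_data: Dict[str, Any] = field(default_factory=dict)  # raw/compressed sensor payloads

# A drive yields an ordered stream of frames that is uploaded to the cloud
drive_mapstream: List[MapstreamFrame] = [
    MapstreamFrame(timestamp_us=0, relative_pose=[0, 0, 0, 0, 0, 0],
                   perception={"lane_dividers": []}),
]
print(len(drive_mapstream), "frame(s) in the mapstream")
```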
  • Publication number: 20210026355
    Abstract: A deep neural network (DNN) may be used to perform panoptic segmentation by performing pixel-level class and instance segmentation of a scene using a single pass of the DNN. Generally, one or more images and/or other sensor data may be stitched together, stacked, and/or combined, and fed into a DNN that includes a common trunk and several heads that predict different outputs. The DNN may include a class confidence head that predicts a confidence map representing pixels that belong to particular classes, an instance regression head that predicts object instance data for detected objects, an instance clustering head that predicts a confidence map of pixels that belong to particular instances, and/or a depth head that predicts range values. These outputs may be decoded to identify bounding shapes, class labels, instance labels, and/or range values for detected objects, and used to enable safe path planning and control of an autonomous vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: January 28, 2021
    Inventors: Ke Chen, Nikolai Smolyanskiy, Alexey Kamenev, Ryan Oldja, Tilman Wekel, David Nister, Joachim Pehserl, Ibrahim Eden, Sangmin Oh, Ruchi Bhargava
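
A skeletal version of the network shape the abstract describes: a common trunk feeding class-confidence, instance-regression, instance-clustering, and depth heads. Layer sizes and the PyTorch structure are assumptions for illustration.

```python
import torch
import torch.nn as nn

class PanopticNet(nn.Module):
    """Common trunk plus the four heads named in the abstract (a sketch)."""
    def __init__(self, in_ch=3, num_classes=10, feat=32):
        super().__init__()
        # Shared trunk over stitched/stacked sensor inputs
        self.trunk = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )
        self.class_head = nn.Conv2d(feat, num_classes, 1)     # per-pixel class confidences
        self.instance_reg_head = nn.Conv2d(feat, 2, 1)        # e.g., offsets toward instance centers
        self.instance_cluster_head = nn.Conv2d(feat, 1, 1)    # per-pixel instance confidence
        self.depth_head = nn.Conv2d(feat, 1, 1)               # per-pixel range values

    def forward(self, x):
        f = self.trunk(x)
        return {"class": self.class_head(f),
                "instance_reg": self.instance_reg_head(f),
                "instance_conf": self.instance_cluster_head(f),
                "depth": self.depth_head(f)}

outs = PanopticNet()(torch.randn(1, 3, 64, 64))  # single pass produces all outputs
print({k: tuple(v.shape) for k, v in outs.items()})
```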
  • Publication number: 20200357160
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improved techniques for processing the point cloud data that has been collected. The improved techniques include mapping 3D point cloud data points into a 2D depth map, fetching a group of the mapped 3D point cloud data points that are within a bounded window of the 2D depth map, and generating geometric space parameters based on the group of the mapped 3D point cloud data points. The generated geometric space parameters may be used for object motion, obstacle detection, freespace detection, and/or landmark detection for an area surrounding a vehicle.
    Type: Application
    Filed: July 24, 2020
    Publication date: November 12, 2020
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
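
The first improved technique, mapping 3D points into a 2D depth map, is commonly realized as a spherical projection; the sketch below assumes that realization, and the field-of-view bounds are illustrative.

```python
import numpy as np

def point_cloud_to_depth_map(points, h=32, w=512,
                             fov_up=np.deg2rad(15.0), fov_down=np.deg2rad(-25.0)):
    """Project 3D LiDAR points into a 2D depth map via spherical projection.

    Each point maps to a (row, column) cell from its elevation and azimuth;
    the cell stores the nearest range. The field-of-view bounds are
    illustrative assumptions.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    yaw = np.arctan2(y, x)                                        # azimuth
    pitch = np.arcsin(np.clip(z / np.maximum(r, 1e-9), -1, 1))    # elevation
    u = ((yaw + np.pi) / (2 * np.pi) * w).astype(int) % w
    v = ((fov_up - pitch) / (fov_up - fov_down) * h).clip(0, h - 1).astype(int)
    depth = np.full((h, w), np.inf)
    np.minimum.at(depth, (v, u), r)        # keep the nearest return per cell
    return depth

rng = np.random.default_rng(0)
dm = point_cloud_to_depth_map(rng.normal(size=(5000, 3)) * [10, 10, 1])
print(dm.shape, int(np.isfinite(dm).sum()), "cells filled")
```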
  • Publication number: 20200334900
    Abstract: In various examples, locations of directional landmarks, such as vertical landmarks, may be identified using 3D reconstruction. A set of observations of directional landmarks (e.g., images captured from a moving vehicle) may be reduced to 1D lookups by rectifying the observations to align directional landmarks along a particular direction of the observations. Object detection may be applied, and corresponding 1D lookups may be generated to represent the presence of a detected vertical landmark in an image.
    Type: Application
    Filed: April 16, 2019
    Publication date: October 22, 2020
    Inventors: Philippe Bouttefroy, David Nister, Ibrahim Eden
  • Patent number: 10776983
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 15, 2020
    Assignee: NVIDIA Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Patent number: 10769840
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, and landmark detection, and for providing other geometric space parameters.
    Type: Grant
    Filed: July 31, 2018
    Date of Patent: September 8, 2020
    Assignee: NVIDIA Corporation
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
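
The role of a polar depth map in nearest neighbor analysis can be illustrated by bucketing points into azimuthal bins, so a query only searches nearby bins rather than the whole cloud. The binning scheme below is an assumption, not the patented structure.

```python
import numpy as np

def polar_bin_nearest(points, query, num_bins=36):
    """Nearest-neighbor candidate search restricted to azimuthal bins.

    Bucketing points by polar angle means a query only searches its own
    bin and the two adjacent ones instead of the entire cloud. The binning
    scheme is an illustrative assumption.
    """
    angles = np.arctan2(points[:, 1], points[:, 0])
    bins = ((angles + np.pi) / (2 * np.pi) * num_bins).astype(int) % num_bins
    q_bin = int((np.arctan2(query[1], query[0]) + np.pi) / (2 * np.pi) * num_bins) % num_bins
    nearby = {(q_bin - 1) % num_bins, q_bin, (q_bin + 1) % num_bins}
    mask = np.isin(bins, list(nearby))
    candidates = points[mask] if mask.any() else points   # fall back to all points
    d = np.linalg.norm(candidates[:, :2] - np.asarray(query[:2]), axis=1)
    return candidates[np.argmin(d)]

rng = np.random.default_rng(1)
pts = rng.uniform(-10, 10, size=(2000, 3))
print(polar_bin_nearest(pts, (1.0, 2.0, 0.0)))
```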
  • Patent number: 10424103
    Abstract: Examples relating to attracting the gaze of a viewer of a display are disclosed. One example method comprises controlling the display to display a target object and using gaze tracking data to monitor a viewer gaze location. A guide element is displayed moving along a computed dynamic path that traverses adjacent to a viewer gaze location and leads to the target object. If the viewer's gaze location is within a predetermined divergence threshold of the guide element, then the display continues displaying the guide element moving along the computed dynamic guide path to the target object. If the viewer's gaze location diverts from the guide element by at least the predetermined divergence threshold, then the display discontinues displaying the guide element moving along the computed dynamic guide path to the target object.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: September 24, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventor: Ibrahim Eden
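
The divergence-threshold logic in this abstract reduces to a simple loop over guide positions and gaze samples. The pixel threshold and function names below are illustrative assumptions.

```python
def guide_gaze(guide_path, gaze_samples, divergence_threshold=50.0):
    """Move a guide element along a computed path while the gaze follows.

    Returns True if the guide reaches the target (gaze stayed within the
    threshold), False if display of the guide is discontinued because the
    gaze diverged. Units are pixels; the threshold is an arbitrary value.
    """
    for (gx, gy), (ex, ey) in zip(guide_path, gaze_samples):
        if ((gx - ex) ** 2 + (gy - ey) ** 2) ** 0.5 >= divergence_threshold:
            return False    # gaze diverted: stop displaying the guide element
    return True             # guide element continued all the way to the target

path = [(100, 100), (150, 120), (200, 140)]   # dynamic path toward the target
gaze = [(105, 102), (148, 125), (198, 138)]   # sampled viewer gaze locations
print(guide_gaze(path, gaze))  # True: gaze tracked the guide to the target
```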
  • Publication number: 20190266736
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two-dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Publication number: 20190266779
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three-dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, and landmark detection, and for providing other geometric space parameters.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
  • Patent number: 9946339
    Abstract: A method to furnish input representing gaze direction in a computer system operatively coupled to a vision system. In this method, a first image of an eye at a first level of illumination is acquired by a camera of the vision system. The first image is obtained from the camera, and a second image of the eye corresponding to a second, different level of illumination is also obtained. Brightness of corresponding pixels of the first and second images is compared in order to distinguish a reflection of the illumination by the eye from a reflection of the illumination by eyewear. The input is then furnished based on the reflection of the illumination by the eye.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 17, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Huimin Guo, Ibrahim Eden, Vaibhav Thukral, David Zachris Nister
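
The two-image brightness comparison can be sketched as a per-pixel ratio test: a corneal glint's brightness tracks the controlled illumination level, while eyewear glare may not. The specific thresholds below are illustrative assumptions.

```python
import numpy as np

def eye_glint_mask(img_bright, img_dim, ratio_threshold=1.5):
    """Flag pixels that look like corneal glints rather than eyewear glare.

    Premise (simplified from the abstract): a reflection from the eye tracks
    the controlled illumination level, so its brightness changes between the
    two images, while eyewear glare may not. The thresholds are illustrative.
    """
    ratio = (img_bright.astype(float) + 1.0) / (img_dim.astype(float) + 1.0)
    specular = img_bright > 200           # very bright pixels: candidate glints
    responds = ratio > ratio_threshold    # brightness scaled with illumination
    return specular & responds

bright = np.array([[250, 250], [40, 30]], dtype=np.uint8)  # high illumination
dim = np.array([[120, 245], [20, 15]], dtype=np.uint8)     # low illumination
print(eye_glint_mask(bright, dim))  # only the top-left pixel behaves like an eye glint
```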
  • Patent number: 9916502
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking, based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 13, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
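
The iterative selection the abstract describes can be sketched as scoring one captured image per light-source combination and keeping the least-occluded one. Both callables below (capture and occlusion scoring) are stand-in assumptions.

```python
from itertools import combinations

def select_light_combination(capture, light_ids, occlusion_score, k=2):
    """Try each k-light combination and keep the least-occluded image's combo.

    capture(combo):        image captured while only `combo` is lit
    occlusion_score(img):  scalar, higher = more occlusion (e.g., from glasses)
    Both callables are stand-in assumptions for the system's real components.
    """
    best_combo, best_score = None, float("inf")
    for combo in combinations(light_ids, k):
        score = occlusion_score(capture(combo))
        if score < best_score:
            best_combo, best_score = combo, score
    return best_combo

# Dummy stand-ins: pretend lighting (1, 3) yields the least occluded image
fake_capture = lambda combo: combo
fake_score = lambda img: 0.0 if img == (1, 3) else 1.0
print(select_light_combination(fake_capture, [0, 1, 2, 3], fake_score))  # (1, 3)
```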
  • Publication number: 20160358009
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking, based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Application
    Filed: August 22, 2016
    Publication date: December 8, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9454699
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a combination of light sources for eye tracking, based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera, and to project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: September 27, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9342147
    Abstract: Examples relating to using non-visual feedback to alert a viewer of a display that a visual change has been triggered are disclosed. One disclosed example provides a method comprising using gaze tracking data from a gaze tracking system to determine that a viewer has changed a gaze location. Based on determining that the viewer has changed the gaze location, a visual change is triggered, and non-visual feedback indicating the triggering of the visual change is provided to the viewer. If a cancel change input is received within a predetermined timeframe, the visual change is not displayed. If a cancel change input is not received within the timeframe, the visual change is displayed via the display.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: May 17, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Ibrahim Eden, Vaibhav Thukral, David Nister, Vivek Pradeep
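
The trigger-then-cancel-window protocol can be sketched as a short polling loop. The callables and the one-second window are illustrative assumptions.

```python
import time

def apply_visual_change(play_earcon, poll_cancel, show_change, window_s=1.0):
    """Trigger a visual change, give non-visual feedback, honor a cancel window.

    play_earcon(): emit non-visual feedback (e.g., a sound) on trigger
    poll_cancel(): True once the viewer issues a cancel-change input
    show_change(): actually display the visual change
    The callables and the one-second window are illustrative assumptions.
    """
    play_earcon()                        # alert the viewer before anything changes
    deadline = time.monotonic() + window_s
    while time.monotonic() < deadline:
        if poll_cancel():
            return False                 # cancelled in time: never displayed
        time.sleep(0.01)
    show_change()                        # no cancel within the window: display it
    return True

# Stub run: no cancel input arrives, so the change is displayed
print(apply_visual_change(lambda: None, lambda: False, lambda: None, window_s=0.05))
```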
  • Publication number: 20160103484
    Abstract: A method to furnish input representing gaze direction in a computer system operatively coupled to a vision system. In this method, a first image of an eye at a first level of illumination is acquired by a camera of the vision system. The first image is obtained from the camera, and a second image of the eye corresponding to a second, different level of illumination is also obtained. Brightness of corresponding pixels of the first and second images is compared in order to distinguish a reflection of the illumination by the eye from a reflection of the illumination by eyewear. The input is then furnished based on the reflection of the illumination by the eye.
    Type: Application
    Filed: October 8, 2014
    Publication date: April 14, 2016
    Inventors: Huimin Guo, Ibrahim Eden, Vaibhav Thukral, David Zachris Nister
  • Publication number: 20150346814
    Abstract: One or more techniques and/or systems are provided for gaze tracking of one or more users. A user tracking component (e.g., a depth camera or a relatively lower resolution camera) may be utilized to obtain user tracking data for a user. The user tracking data is evaluated to identify a spatial location of the user. An eye capture camera (e.g., a relatively higher resolution camera) may be selected from an eye capture camera configuration based upon the eye capture camera having a view frustum corresponding to the spatial location of the user. The eye capture camera may be invoked to obtain eye region imagery of the user. Other eye capture cameras within the eye capture camera configuration are maintained in a powered-down state to reduce power and/or bandwidth consumption. Gaze tracking information may be generated based upon the eye region imagery, and may be used to perform a task.
    Type: Application
    Filed: May 30, 2014
    Publication date: December 3, 2015
    Inventors: Vaibhav Thukral, Ibrahim Eden, Shivkumar Swaminathan, David Nister, Morgan Venable
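
Selecting the one eye capture camera whose view frustum contains the user, while keeping the others powered down, can be sketched as below. The Camera class and its crude x-interval "frustum" are illustrative assumptions.

```python
class Camera:
    """Stand-in eye capture camera with a crude x-interval 'view frustum'."""
    def __init__(self, name, x_range):
        self.name, self.x_range, self.on = name, x_range, False
    def contains(self, point):
        return self.x_range[0] <= point[0] <= self.x_range[1]
    def power_up(self):
        self.on = True
    def power_down(self):
        self.on = False

def select_eye_camera(cameras, user_xyz):
    """Wake only the camera whose frustum contains the user's location."""
    chosen = None
    for cam in cameras:
        if chosen is None and cam.contains(user_xyz):
            cam.power_up()       # invoke this camera for eye region imagery
            chosen = cam
        else:
            cam.power_down()     # others stay off to save power and bandwidth
    return chosen

cams = [Camera("left", (-2, 0)), Camera("right", (0, 2))]
print(select_eye_camera(cams, (1.0, 0.0, 1.5)).name)  # right
```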
  • Patent number: 9189095
    Abstract: Embodiments are disclosed that relate to calibrating an eye tracking system via touch inputs. For example, one disclosed embodiment provides, on a computing system comprising a touch sensitive display and an eye tracking system, a method comprising displaying a user interface on the touch sensitive display, determining a gaze location via the eye tracking system, receiving a touch input at a touch location on the touch sensitive display, and calibrating the eye tracking system based upon an offset between the gaze location and the touch location.
    Type: Grant
    Filed: June 6, 2013
    Date of Patent: November 17, 2015
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Ibrahim Eden, Ruchita Bhargava
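
The calibration signal the abstract describes, the offset between the gaze location and the touch location, can be folded into a running correction. The exponential moving average below is an illustrative assumption, not the patented update rule.

```python
import numpy as np

def update_offset(gaze_xy, touch_xy, offset, alpha=0.2):
    """Fold one gaze-vs-touch offset observation into the calibration.

    Premise from the abstract: the touch point is treated as the true gaze
    target, so the offset between them is a calibration signal. The
    exponential moving average is an illustrative assumption.
    """
    error = np.asarray(touch_xy, float) - np.asarray(gaze_xy, float)
    return (1 - alpha) * np.asarray(offset, float) + alpha * error

def calibrated_gaze(raw_gaze_xy, offset):
    """Apply the learned correction to a raw gaze estimate."""
    return np.asarray(raw_gaze_xy, float) + offset

offset = np.zeros(2)
offset = update_offset(gaze_xy=(480, 300), touch_xy=(500, 310), offset=offset)
print(calibrated_gaze((480, 300), offset))  # nudged toward the touch location
```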