Patents by Inventor Vaibhav Thukral

Vaibhav Thukral has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230406315
    Abstract: Embodiments of the present disclosure relate to encoding of junction area information in map data. In particular, the encoding may include organizing vehicle paths that traverse through a junction area according to path groups and organizing contentions that influence behavior of vehicles traveling along the vehicle paths according to contention groups. In addition, the encoding may include generating direction data structures that associate respective path groups with one or more of the contention groups. In these or other embodiments, the map data that corresponds to the junction area may be updated with direction data structures.
    Type: Application
    Filed: June 17, 2022
    Publication date: December 21, 2023
    Inventors: Russell CHREPTYK, Matthew ASHMAN, Andy CAMPBELL, Tharun BATTULA, Vaibhav THUKRAL
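    Illustrative sketch (not part of the filing): a minimal Python rendering of the encoding this abstract describes, with path groups, contention groups, and direction data structures tying them together; all class and field names are assumptions chosen for illustration.
      from dataclasses import dataclass

      @dataclass
      class PathGroup:
          """Vehicle paths that traverse the junction area together."""
          group_id: int
          path_ids: list[int]

      @dataclass
      class ContentionGroup:
          """Contentions that influence vehicles traveling along the paths."""
          group_id: int
          contention_ids: list[int]

      @dataclass
      class DirectionDataStructure:
          """Associates one path group with the contention groups that apply to it."""
          path_group: PathGroup
          contention_groups: list[ContentionGroup]

      def encode_junction(path_groups, contention_groups, associations):
          """Build direction data structures for updating the junction's map data.

          `associations` maps a path-group id to the ids of the contention
          groups that influence it."""
          by_id = {cg.group_id: cg for cg in contention_groups}
          return [
              DirectionDataStructure(pg, [by_id[c] for c in associations[pg.group_id]])
              for pg in path_groups
          ]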
  • Publication number: 20230366698
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: July 14, 2023
    Publication date: November 16, 2023
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
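    Illustrative sketch (not part of the filing): one simple way the final fusion step could combine per-modality localization results; the pose format (x, y, heading), the modality names, and the confidence-weighted mean are assumptions made for illustration.
      def fuse_localization(per_modality_results):
          """Fuse per-modality poses, each produced by matching real-time data
          against the map layer built from the same sensor modality."""
          total = sum(conf for _, conf in per_modality_results.values())
          fused = [0.0, 0.0, 0.0]
          for pose, conf in per_modality_results.values():
              for i in range(3):
                  fused[i] += pose[i] * conf / total
          return tuple(fused)

      results = {
          "camera": ((10.1, 4.2, 0.31), 0.7),  # localized against camera layer
          "lidar":  ((10.3, 4.0, 0.29), 0.9),  # localized against lidar layer
          "radar":  ((10.0, 4.1, 0.30), 0.4),  # localized against radar layer
      }
      print(fuse_localization(results))        # final fused localization result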
  • Patent number: 11788861
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: October 17, 2023
    Assignee: NVIDIA Corporation
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
  • Publication number: 20230288223
    Abstract: In various examples, a method includes computing a current keyframe, the current keyframe being representative of an area around an autonomous vehicle at a current time based on map data. The method includes transforming a preceding keyframe to a coordinate frame of the autonomous vehicle at a first time prior to completing computation of the current keyframe to generate a first world model frame. The method includes transforming the preceding keyframe to the coordinate frame of the autonomous vehicle at a second time after the first time and prior to completing computation of the current keyframe to generate a second world model frame.
    Type: Application
    Filed: March 10, 2022
    Publication date: September 14, 2023
    Inventors: Akash Chandra Shekar, Matthew Ashman, Vaibhav Thukral
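    Illustrative sketch (not part of the filing): how the preceding keyframe could be re-expressed in the vehicle's coordinate frame at successive times while the current keyframe is still being computed; the 2D rigid transform and the numeric poses are illustrative simplifications.
      import math

      def to_vehicle_frame(points, vehicle_x, vehicle_y, vehicle_yaw):
          """Re-express map-frame points in the vehicle's coordinate frame."""
          cos_y, sin_y = math.cos(-vehicle_yaw), math.sin(-vehicle_yaw)
          return [((px - vehicle_x) * cos_y - (py - vehicle_y) * sin_y,
                   (px - vehicle_x) * sin_y + (py - vehicle_y) * cos_y)
                  for px, py in points]

      preceding_keyframe = [(100.0, 50.0), (101.5, 50.2)]   # map-frame geometry

      # First world model frame: vehicle pose at a time before the current
      # keyframe computation completes.
      frame_1 = to_vehicle_frame(preceding_keyframe, 98.0, 49.0, 0.05)
      # Second world model frame: the same preceding keyframe, transformed
      # again at a later time, still before the current keyframe is ready.
      frame_2 = to_vehicle_frame(preceding_keyframe, 99.0, 49.3, 0.06)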
  • Patent number: 11698272
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Grant
    Filed: August 31, 2020
    Date of Patent: July 11, 2023
    Assignee: NVIDIA Corporation
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
  • Publication number: 20230130814
    Abstract: In examples, autonomous vehicles are enabled to negotiate yield scenarios in a safe and predictable manner. In response to detecting a yield scenario, a wait element data structure is generated that encodes geometries of an ego path, a contender path that includes at least one contention point with the ego path, as well as a state of contention associated with the at least one contention point. Geometry of yield scenario context may also be encoded, such as inside ground of an intersection, entry or exit lines, etc. The wait element data structure is passed to a yield planner of the autonomous vehicle. The yield planner determines a yielding behavior for the autonomous vehicle based at least on the wait element data structure. A control system of the autonomous vehicle may operate the autonomous vehicle in accordance with the yield behavior, such that the autonomous vehicle safely negotiates the yield scenario.
    Type: Application
    Filed: October 27, 2021
    Publication date: April 27, 2023
    Inventors: David Nister, Minwoo Park, Miguel Sainz Serra, Vaibhav Thukral, Berta Rodriguez Hervas
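    Illustrative sketch (not part of the filing): a hypothetical shape for the wait element data structure and a toy yield planner consuming it; the field names and contention-state values are assumptions, as the filing defines its own encoding.
      from dataclasses import dataclass, field
      from enum import Enum

      class ContentionState(Enum):
          EGO_MUST_YIELD = "ego_must_yield"
          CONTENDER_YIELDS = "contender_yields"
          UNKNOWN = "unknown"

      @dataclass
      class WaitElement:
          ego_path: list[tuple[float, float]]          # ego path geometry
          contender_path: list[tuple[float, float]]    # shares >= 1 contention point
          contention_points: list[tuple[float, float]]
          state: ContentionState
          # Yield-scenario context geometry (optional):
          entry_line: list[tuple[float, float]] = field(default_factory=list)
          exit_line: list[tuple[float, float]] = field(default_factory=list)
          inside_ground: list[tuple[float, float]] = field(default_factory=list)

      def plan_yield(wait_element):
          """Toy yield planner: choose a behavior from the encoded state; the
          vehicle's control system would then operate accordingly."""
          if wait_element.state is ContentionState.EGO_MUST_YIELD:
              return "slow_and_yield"
          if wait_element.state is ContentionState.CONTENDER_YIELDS:
              return "proceed_with_caution"
          return "stop_and_wait"   # conservative fallback when state is unknown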
  • Publication number: 20220349725
    Abstract: In various examples, a high definition (HD) map is provided that includes a segmented data structure that allows for selective access to desired road segments and corresponding layers of map data. For example, the HD map may be segmented into a series of tiles that may correspond to a geographic region, and each of the tiles may include any number of road segments corresponding to portions of the geographic region. Each road segment may include a corresponding set of layers—which may include driving layers for use by the ego-machine and/or training layers for generating ground truth data—from the HD map that are associated with the road segment alone. As such, when traversing the environment, an ego-machine may determine one or more road segments within a tile corresponding to a current location, and may selectively download one or more layers for each of the one or more road segments.
    Type: Application
    Filed: April 21, 2022
    Publication date: November 3, 2022
    Inventors: Russell Chreptyk, Vaibhav Thukral, David Nister
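    Illustrative sketch (not part of the filing): the selective access pattern this abstract describes, with tiles containing road segments and each segment carrying its own layers; the tile, segment, and layer names are assumptions for illustration.
      HD_MAP = {
          "tile_42": {                       # tile covering one geographic region
              "segment_7": {"lanes": "...", "signs": "...", "ground_truth": "..."},
              "segment_8": {"lanes": "...", "signs": "...", "ground_truth": "..."},
          },
      }

      def segments_for_location(tile_id, location):
          """Stand-in for looking up the road segments covering a location."""
          return ["segment_7"]

      def download_layers(tile_id, location, wanted_layers):
          """Fetch only the requested layers of the segments at `location`."""
          tile = HD_MAP[tile_id]
          return {
              seg_id: {name: tile[seg_id][name] for name in wanted_layers}
              for seg_id in segments_for_location(tile_id, location)
          }

      # The driving stack pulls driving layers only, skipping training-only
      # layers used for generating ground truth data.
      layers = download_layers("tile_42", (47.6, -122.3), ["lanes", "signs"])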
  • Publication number: 20220341750
    Abstract: In various examples, health of a high definition (HD) map may be monitored to determine whether inaccuracies exist in one or more layers of the HD map. For example, as one or more vehicles rely on the HD map to traverse portions of an environment, disagreements between perception of the one or more vehicles, map layers of the HD map, and/or other disagreement types may be identified and aggregated. Where errors are identified that indicate a drop in health of the HD map, updated data may be crowdsourced from one or more vehicles corresponding to a location of disagreement within the HD map, and the updated data may be used to update, verify, and validate the HD map.
    Type: Application
    Filed: April 21, 2022
    Publication date: October 27, 2022
    Inventors: Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral
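    Illustrative sketch (not part of the filing): a minimal aggregation loop for the health monitoring this abstract describes, counting disagreements per map location and triggering a crowdsourced update past a threshold; the threshold value and keying scheme are assumptions.
      from collections import Counter

      DISAGREEMENT_THRESHOLD = 5   # illustrative value

      disagreements = Counter()    # (tile_id, segment_id, layer) -> count

      def report_disagreement(tile_id, segment_id, layer):
          """Called when a vehicle's perception conflicts with a map layer."""
          key = (tile_id, segment_id, layer)
          disagreements[key] += 1
          if disagreements[key] >= DISAGREEMENT_THRESHOLD:
              request_updated_data(key)

      def request_updated_data(key):
          """Stand-in for crowdsourcing fresh drive data at the unhealthy
          location, then updating, verifying, and validating the HD map."""
          print(f"requesting updated data for {key}")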
  • Publication number: 20220333950
    Abstract: Systems and methods for vehicle-based determination of HD map update information. Sensor-equipped vehicles may determine locations of various detected objects relative to the vehicles. Vehicles may also determine the location of reference objects relative to the vehicles, where the location of the reference objects in an absolute coordinate system is also known. The absolute coordinates of various detected objects may then be determined from the absolute position of the reference objects and the locations of other objects relative to the reference objects. Newly-determined absolute locations of detected objects may then be transmitted to HD map services for updating.
    Type: Application
    Filed: April 19, 2021
    Publication date: October 20, 2022
    Inventors: Amir Akbarzadeh, Ruchi Bhargava, Bhaven Dedhia, Rambod Jacoby, Jeffrey Liu, Vaibhav Thukral
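    Illustrative sketch (not part of the filing): the offset composition this abstract describes, worked in a shared axis-aligned 2D frame; ignoring vehicle heading is a simplifying assumption made only to keep the arithmetic visible.
      def absolute_position(ref_absolute, ref_relative, detected_relative):
          """Absolute coords of a detected object via a known reference object."""
          # Vehicle's absolute position implied by the reference measurement.
          vehicle_x = ref_absolute[0] - ref_relative[0]
          vehicle_y = ref_absolute[1] - ref_relative[1]
          # Detected object's absolute position from its vehicle-relative offset.
          return (vehicle_x + detected_relative[0], vehicle_y + detected_relative[1])

      # Known survey point at (500.0, 120.0), seen 3 m ahead and 1 m left of the
      # vehicle; a new sign is seen 10 m ahead and 2 m right.
      print(absolute_position((500.0, 120.0), (3.0, 1.0), (10.0, -2.0)))
      # -> (507.0, 117.0); this location would be sent to the HD map service.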
  • Publication number: 20210063200
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: Michael Kroepfl, Amir Akbarzadeh, Ruchi Bhargava, Vaibhav Thukral, Neda Cvijetic, Vadim Cugunovs, David Nister, Birgit Henke, Ibrahim Eden, Youding Zhu, Michael Grabner, Ivana Stojanovic, Yu Sheng, Jeffrey Liu, Enliang Zheng, Jordan Marr, Andrew Carley
  • Publication number: 20210063198
    Abstract: An end-to-end system for data generation, map creation using the generated data, and localization to the created map is disclosed. Mapstreams—or streams of sensor data, perception outputs from deep neural networks (DNNs), and/or relative trajectory data—corresponding to any number of drives by any number of vehicles may be generated and uploaded to the cloud. The mapstreams may be used to generate map data—and ultimately a fused high definition (HD) map—that represents data generated over a plurality of drives. When localizing to the fused HD map, individual localization results may be generated based on comparisons of real-time data from a sensor modality to map data corresponding to the same sensor modality. This process may be repeated for any number of sensor modalities and the results may be fused together to determine a final fused localization result.
    Type: Application
    Filed: August 31, 2020
    Publication date: March 4, 2021
    Inventors: David Nister, Ruchi Bhargava, Vaibhav Thukral, Michael Grabner, Ibrahim Eden, Jeffrey Liu
  • Patent number: 10757328
    Abstract: Disclosed are an apparatus and a method of low-latency, low-power eye tracking. In some embodiments, the eye tracking method operates a first sensor having a first level of power consumption that tracks positions of an eye of a user. In response to detection that the eye does not change position for a time period, the method stops operation of the first sensor and instead operates a second sensor that detects a change of position of the eye. The second sensor has a level of power consumption lower than the level of power consumption of the first sensor. Once the second sensor detects that the eye position changes, the first sensor resumes operation.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: August 25, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Vaibhav Thukral, Chris Aholt, Christopher Maurice Mei, Bill Chau, Nguyen Bach, Lev Cherkashin, Jaeyoun Kim
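    Illustrative sketch (not part of the filing): the sensor-switching policy this abstract describes as a small state machine; the timeout value and the sensor interface (`eye_moved`, `motion_detected`, `stop`) are assumptions for illustration.
      import time

      class DualSensorEyeTracker:
          STATIONARY_TIMEOUT = 2.0           # seconds; illustrative value

          def __init__(self, high_power_sensor, low_power_sensor):
              self.high = high_power_sensor   # tracks eye position, higher power
              self.low = low_power_sensor     # detects motion only, lower power
              self.active = self.high
              self.last_moved = time.monotonic()

          def step(self):
              if self.active is self.high:
                  if self.high.eye_moved():
                      self.last_moved = time.monotonic()
                  elif time.monotonic() - self.last_moved > self.STATIONARY_TIMEOUT:
                      self.high.stop()        # eye idle: drop to low-power sensor
                      self.active = self.low
              else:
                  if self.low.motion_detected():
                      self.low.stop()         # eye moved: resume full tracking
                      self.active = self.high
                      self.last_moved = time.monotonic()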
  • Patent number: 10447960
    Abstract: Systems and methods for controlling closed captioning using an eye tracking device are provided. The system for controlling closed captioning may comprise a display device, a closed captioning controller configured to display closed captioning text for a media item during playback on the display device, and an eye tracking device configured to detect a location of a user's gaze relative to the display device and send the location to the closed captioning controller. The closed captioning controller may be configured to recognize a predetermined gaze pattern of the user's gaze and, upon detecting the predetermined gaze pattern, partially or completely deemphasize the display of the closed captioning text.
    Type: Grant
    Filed: February 13, 2017
    Date of Patent: October 15, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral
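    Illustrative sketch (not part of the filing): one possible caption-deemphasis rule in the spirit of this abstract; the chosen gaze pattern (gaze away from the caption region for N samples) and the opacity fade are assumptions, since the filing covers predetermined gaze patterns generally.
      CAPTION_REGION = (0.0, 0.8, 1.0, 1.0)   # normalized x0, y0, x1, y1
      AWAY_SAMPLES_TO_FADE = 30                # illustrative value

      class CaptionController:
          def __init__(self):
              self.away_count = 0
              self.opacity = 1.0

          def on_gaze_sample(self, x, y):
              """Receive one gaze location from the eye tracking device."""
              x0, y0, x1, y1 = CAPTION_REGION
              if x0 <= x <= x1 and y0 <= y <= y1:
                  self.away_count = 0
                  self.opacity = 1.0           # re-emphasize when read again
              else:
                  self.away_count += 1
                  if self.away_count >= AWAY_SAMPLES_TO_FADE:
                      # Partially deemphasize; repeated calls fade it out fully.
                      self.opacity = max(0.0, self.opacity - 0.05)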
  • Patent number: 10248199
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received, the image data comprising at least one target visual and target visual metadata that identifies the at least one target visual. The target visual metadata is used to identify a target location of the at least one target visual. The estimated gaze location of the viewer is monitored and a probability that the viewer is gazing at the target location is estimated. The gaze tracking system is calibrated using the probability, the estimated gaze location and the target location to generate an updated estimated gaze location of the viewer.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: April 2, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
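    Illustrative sketch (not part of the filing): the metadata-driven calibration this abstract describes, with the gaze probability estimated from proximity to the target location; the Gaussian proximity model and the probability-weighted correction are assumptions for illustration.
      import math

      def gaze_probability(estimated, target, sigma=0.05):
          """Estimate the probability the viewer is gazing at the target
          location, from the distance between estimate and target."""
          d2 = (estimated[0] - target[0]) ** 2 + (estimated[1] - target[1]) ** 2
          return math.exp(-d2 / (2 * sigma ** 2))

      def calibrate(estimated, target):
          """Pull the estimate toward the target, weighted by that probability,
          to produce the updated estimated gaze location."""
          p = gaze_probability(estimated, target)
          return (estimated[0] + p * (target[0] - estimated[0]),
                  estimated[1] + p * (target[1] - estimated[1]))

      # Target visual at normalized (0.50, 0.50) per its metadata; the raw
      # estimate (0.53, 0.52) is corrected toward it.
      print(calibrate((0.53, 0.52), (0.50, 0.50)))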
  • Patent number: 10025378
    Abstract: Embodiments are disclosed herein that relate to selecting user interface elements via a periodically updated position signal. For example, one disclosed embodiment provides a method comprising displaying on a graphical user interface a representation of a user interface element and a representation of an interactive target. The method further comprises receiving an input of coordinates of the periodically updated position signal, and determining a selection of the user interface element if a motion interaction of the periodically updated position signal with the interactive target meets a predetermined motion condition.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: July 17, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Morgan Kolya Venable, Bernard James Kerr, Vaibhav Thukral, David Nister
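    Illustrative sketch (not part of the filing): one concrete motion condition of the kind this abstract covers, selecting the element when the periodically updated position signal dwells inside the interactive target; the dwell count is an assumption, as the filing covers predetermined motion conditions generally.
      DWELL_UPDATES = 20     # illustrative predetermined motion condition

      def select_element(position_updates, target_rect):
          """Return True when the position signal dwells inside the target."""
          x0, y0, x1, y1 = target_rect
          consecutive = 0
          for x, y in position_updates:     # periodically updated coordinates
              if x0 <= x <= x1 and y0 <= y <= y1:
                  consecutive += 1
                  if consecutive >= DWELL_UPDATES:
                      return True           # motion condition met: select
              else:
                  consecutive = 0           # left the target; start over
          return False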
  • Publication number: 20180184002
    Abstract: Disclosed are an apparatus and a method of low-latency, low-power eye tracking. In some embodiments, the eye tracking method operates a first sensor having a first level of power consumption that tracks positions of an eye of a user. In response to detection that the eye does not change position for a time period, the method stops operation of the first sensor and instead operates a second sensor that detects a change of position of the eye. The second sensor has a level of power consumption lower than the level of power consumption of the first sensor. Once the second sensor detects that the eye position changes, the first sensor resumes operation.
    Type: Application
    Filed: December 23, 2016
    Publication date: June 28, 2018
    Inventors: Vaibhav Thukral, Chris Aholt, Christopher Maurice Mei, Bill Chau, Nguyen Bach, Lev Cherkashin, Jaeyoun Kim
  • Patent number: 9946339
    Abstract: A method to furnish input representing gaze direction in a computer system operatively coupled to a vision system. In this method, a first image of an eye at a first level of illumination is acquired by a camera of the vision system. The first image is obtained from the camera, and a second image of the eye corresponding to a second, different level of illumination is also obtained. Brightness of corresponding pixels of the first and second images is compared in order to distinguish a reflection of the illumination by the eye from a reflection of the illumination by eyewear. The input is then furnished based on the reflection of the illumination by the eye.
    Type: Grant
    Filed: October 8, 2014
    Date of Patent: April 17, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Huimin Guo, Ibrahim Eden, Vaibhav Thukral, David Zachris Nister
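    Illustrative sketch (not part of the filing): a per-pixel brightness comparison in the spirit of this abstract; the underlying heuristic here, that eyewear glints saturate at both illumination levels while eye reflections scale with illumination, plus the specific thresholds, are assumptions rather than details from the filing.
      SATURATED = 250     # near the 8-bit ceiling at both illumination levels

      def classify_pixels(low_img, high_img):
          """Label pixels as eye reflection, eyewear glint, or background.

          `low_img` and `high_img` are equal-sized 2D lists of 0-255 brightness
          values captured at the two illumination levels."""
          labels = []
          for row_low, row_high in zip(low_img, high_img):
              labels.append([])
              for lo, hi in zip(row_low, row_high):
                  if lo >= SATURATED and hi >= SATURATED:
                      labels[-1].append("eyewear")     # bright at both levels
                  elif hi > 1.5 * lo and hi > 30:
                      labels[-1].append("eye")         # scales with illumination
                  else:
                      labels[-1].append("background")
          return labels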
  • Patent number: 9916502
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a selected combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera and project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 13, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
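    Illustrative sketch (not part of the filing): the iterative acquire-and-select loop this abstract describes; the capture and occlusion-scoring routines are left as caller-supplied stand-ins, since the filing's own occlusion determination is not specified here.
      from itertools import combinations

      def select_light_sources(light_sources, capture_frame, occlusion_score):
          """Try combinations of light sources and return the least occluded.

          `capture_frame(combo)` images the eye while only `combo` is lit;
          `occlusion_score(frame)` rates occlusion arising from the optical
          structure positioned between the eye and the camera."""
          best_combo, best_score = None, float("inf")
          for k in range(1, len(light_sources) + 1):
              for combo in combinations(light_sources, k):
                  frame = capture_frame(combo)   # one frame of eye tracking data
                  score = occlusion_score(frame)
                  if score < best_score:
                      best_combo, best_score = combo, score
          return best_combo    # projected for eye tracking from here on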
  • Publication number: 20170336867
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received, the image data comprising at least one target visual and target visual metadata that identifies the at least one target visual. The target visual metadata is used to identify a target location of the at least one target visual. The estimated gaze location of the viewer is monitored and a probability that the viewer is gazing at the target location is estimated. The gaze tracking system is calibrated using the probability, the estimated gaze location and the target location to generate an updated estimated gaze location of the viewer.
    Type: Application
    Filed: August 7, 2017
    Publication date: November 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
  • Patent number: 9727136
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received and, without using input from the viewer, at least one target visual that may attract a gaze of the viewer and a target location of the target visual are identified within the image data. The estimated gaze location of the viewer is compared with the target location of the target visual. An offset vector is calculated based on the estimated gaze location and the target location. The gaze tracking system is calibrated using the offset vector.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: August 8, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
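    Illustrative sketch (not part of the filing): the offset-vector calibration this abstract describes, accumulating the difference between target location and estimated gaze and applying it to later estimates; the exponential smoothing factor is an assumption for illustration.
      ALPHA = 0.2              # smoothing factor for the accumulated offset
      offset = [0.0, 0.0]      # running calibration offset vector

      def update_offset(estimated_gaze, target_location):
          """Fold the newest (target - estimate) difference into the offset."""
          offset[0] = (1 - ALPHA) * offset[0] + ALPHA * (target_location[0] - estimated_gaze[0])
          offset[1] = (1 - ALPHA) * offset[1] + ALPHA * (target_location[1] - estimated_gaze[1])

      def corrected_gaze(estimated_gaze):
          """Apply the offset vector to a raw estimated gaze location."""
          return (estimated_gaze[0] + offset[0], estimated_gaze[1] + offset[1])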