Patents by Inventor David Nistér

David Nistér has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190384304
    Abstract: In various examples, a deep learning solution for path detection is implemented to generate a more abstract definition of a drivable path without reliance on explicit lane-markings—by using a detection-based approach. Using approaches of the present disclosure, the identification of drivable paths may be possible in environments where conventional approaches are unreliable, or fail—such as where lane markings do not exist or are occluded. The deep learning solution may generate outputs that represent geometries for one or more drivable paths in an environment and confidence values corresponding to the path types or classes to which the geometries correspond. These outputs may be directly usable by an autonomous vehicle—such as by an autonomous driving software stack—with minimal post-processing.
    Type: Application
    Filed: June 6, 2019
    Publication date: December 19, 2019
    Inventors: Regan Blythe Towal, Maroof Mohammed Farooq, Vijay Chintalapudi, Carolina Parada, David Nister
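The abstract above describes a network that emits path geometries plus per-class confidence values needing minimal post-processing. A minimal sketch of that post-processing step, with hypothetical class names and a hypothetical `select_paths` helper (none of these identifiers come from the patent):

```python
import numpy as np

# Hypothetical post-processing for a path-detection network's outputs:
# each candidate path has a polyline geometry and per-class confidences.
PATH_CLASSES = ["ego_lane", "left_lane", "right_lane"]

def select_paths(geometries, class_scores, min_confidence=0.5):
    """Keep paths whose best class score clears a threshold.

    geometries:   list of (N, 2) arrays of path points (x, y)
    class_scores: per-path, per-class confidence values
    Returns a list of (geometry, class_name, confidence) tuples.
    """
    results = []
    for geom, scores in zip(geometries, np.asarray(class_scores)):
        best = int(np.argmax(scores))
        if scores[best] >= min_confidence:
            results.append((geom, PATH_CLASSES[best], float(scores[best])))
    return results

geoms = [np.array([[0.0, 0.0], [0.0, 10.0]]),
         np.array([[-3.5, 0.0], [-3.5, 10.0]])]
scores = [[0.9, 0.05, 0.05],   # confident ego-lane path
          [0.2, 0.4, 0.4]]     # ambiguous, filtered out
paths = select_paths(geoms, scores)
```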
  • Publication number: 20190265703
    Abstract: A system and method for an on-demand shuttle, bus, or taxi service able to operate on private and public roads provides situational awareness and confidence displays. The shuttle may include ISO 26262 Level 4 or Level 5 functionality and can vary the route dynamically on-demand, and/or follow a predefined route or virtual rail. The shuttle is able to stop at any predetermined station along the route. The system allows passengers to request rides and interact with the system via a variety of interfaces, including without limitation a mobile device, desktop computer, or kiosk. Each shuttle preferably includes an in-vehicle controller, which preferably is an AI Supercomputer designed and optimized for autonomous vehicle functionality, with computer vision, deep learning, and real time ray tracing accelerators. An AI Dispatcher performs AI simulations to optimize system performance according to operator-specified system parameters.
    Type: Application
    Filed: February 26, 2019
    Publication date: August 29, 2019
    Inventors: Gary HICOK, Michael COX, Miguel SAINZ, Martin HEMPEL, Ratin KUMAR, Timo ROMAN, Gordon GRIGOR, David NISTER, Justin EBERT, Chin SHIH, Tony TAM, Ruchi BHARGAVA
  • Publication number: 20190266736
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include analyzing point cloud data using trajectory equations, depth maps, and texture maps. The processing improvements also include representing the point cloud data by a two dimensional depth map or a texture map and using the depth map or texture map to provide object motion, obstacle detection, freespace detection, and landmark detection for an area surrounding a vehicle.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
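This abstract describes representing point cloud data by a two-dimensional depth map. A minimal sketch of one common way to do that, projecting LiDAR returns into an elevation-by-azimuth range image (the function name, grid size, and field of view here are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def point_cloud_to_depth_map(points, h=32, w=360, v_fov=(-15.0, 15.0)):
    """Project 3D points into a 2D (elevation x azimuth) depth map.

    points: (N, 3) array of x, y, z in the sensor frame.
    Returns an (h, w) array of ranges; empty cells hold np.inf.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.degrees(np.arctan2(y, x)) % 360.0   # 0..360 degrees
    elevation = np.degrees(np.arcsin(z / r))         # -90..90 degrees
    col = np.clip((azimuth / 360.0 * w).astype(int), 0, w - 1)
    row = np.clip(((elevation - v_fov[0]) / (v_fov[1] - v_fov[0]) * h).astype(int),
                  0, h - 1)
    depth = np.full((h, w), np.inf)
    # Keep the nearest return per cell (unbuffered in-place minimum).
    np.minimum.at(depth, (row, col), r)
    return depth

pts = np.array([[10.0, 0.0, 0.0],   # straight ahead, range 10 m
                [0.0, 5.0, 0.0]])   # 90 degrees left, range 5 m
dm = point_cloud_to_depth_map(pts)
```

Downstream tasks such as obstacle or freespace detection can then operate on the dense 2D grid instead of the unordered point set.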
  • Publication number: 20190266779
    Abstract: Various types of systems or technologies can be used to collect data in a 3D space. For example, LiDAR (light detection and ranging) and RADAR (radio detection and ranging) systems are commonly used to generate point cloud data for 3D space around vehicles, for such functions as localization, mapping, and tracking. This disclosure provides improvements for processing the point cloud data that has been collected. The processing improvements include using a three dimensional polar depth map to assist in performing nearest neighbor analysis on point cloud data for object detection, trajectory detection, freespace detection, obstacle detection, landmark detection, and providing other geometric space parameters.
    Type: Application
    Filed: July 31, 2018
    Publication date: August 29, 2019
    Inventors: Ishwar Kulkarni, Ibrahim Eden, Michael Kroepfl, David Nister
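This abstract describes using a polar depth map to assist nearest neighbor analysis. A minimal 2D sketch of the idea, where points are binned by azimuth so a query only searches nearby angular bins instead of the whole cloud (the helper names and bin counts are illustrative assumptions):

```python
import numpy as np

def build_polar_map(points, n_bins=360):
    """Bin 2D points by azimuth angle (a simple 'polar map')."""
    bins = [[] for _ in range(n_bins)]
    for p in points:
        az = np.degrees(np.arctan2(p[1], p[0])) % 360.0
        bins[int(az / 360.0 * n_bins) % n_bins].append(np.asarray(p, float))
    return bins

def nearest_neighbor(query, bins, window=5):
    """Search only the angular bins near the query's azimuth."""
    n = len(bins)
    az = np.degrees(np.arctan2(query[1], query[0])) % 360.0
    center = int(az / 360.0 * n)
    best, best_d = None, np.inf
    for off in range(-window, window + 1):
        for p in bins[(center + off) % n]:
            d = np.hypot(*(p - query))
            if d < best_d:
                best, best_d = p, d
    return best, best_d

pts = [(10.0, 0.0), (0.0, 10.0), (9.0, 1.0)]
bins = build_polar_map(pts)
nn, dist = nearest_neighbor(np.array([9.8, 0.3]), bins)
```

The angular binning prunes most candidates up front, which is the gain the abstract points at for object, obstacle, and landmark detection over large clouds.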
  • Publication number: 20190258251
    Abstract: Autonomous driving is one of the world's most challenging computational problems. Very large amounts of data from cameras, RADARs, LIDARs, and HD-Maps must be processed to generate commands to control the car safely and comfortably in real-time. This challenging task requires a dedicated supercomputer that is energy-efficient and low-power, complex high-performance software, and breakthroughs in deep learning AI algorithms. To meet this task, the present technology provides advanced systems and methods that facilitate autonomous driving functionality, including a platform for autonomous driving Levels 3, 4, and/or 5. In preferred embodiments, the technology provides an end-to-end platform with a flexible architecture, including an architecture for autonomous vehicles that leverages computer vision and known ADAS techniques, providing diversity and redundancy, and meeting functional safety standards.
    Type: Application
    Filed: November 9, 2018
    Publication date: August 22, 2019
    Inventors: Michael Alan DITTY, Gary HICOK, Jonathan SWEEDLER, Clement FARABET, Mohammed Abdulla YOUSUF, Tai-Yuen CHAN, Ram GANAPATHI, Ashok SRINIVASAN, Michael Rod TRUOG, Karl GREB, John George MATHIESON, David Nister, Kevin Flory, Daniel Perrin, Dan Hettena
  • Publication number: 20190250622
    Abstract: In various examples, sensor data representative of a field of view of at least one sensor of a vehicle in an environment is received from the at least one sensor. Based at least in part on the sensor data, parameters of an object located in the environment are determined. Trajectories of the object are modeled toward target positions based at least in part on the parameters of the object. From the trajectories, safe time intervals (and/or safe arrival times) over which the vehicle occupying the plurality of target positions would not result in a collision with the object are computed. Based at least in part on the safe time intervals (and/or safe arrival times) and a position of the vehicle in the environment, a trajectory for the vehicle may be generated and/or analyzed.
    Type: Application
    Filed: February 7, 2019
    Publication date: August 15, 2019
    Inventors: David Nister, Anton Vorontsov
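The abstract describes computing safe time intervals during which occupying a target position would not collide with a modeled object. A minimal sketch under a constant-velocity object model and a circular collision region (all names, the radius, and the horizon are illustrative assumptions): the times the object is within `radius` of the target solve a quadratic, and the safe intervals are the complement within the horizon.

```python
import numpy as np

def safe_intervals(target, obj_pos, obj_vel, radius=2.0, horizon=10.0):
    """Times in [0, horizon] when a vehicle at `target` would NOT be within
    `radius` of a constant-velocity object.

    Solves |obj_pos + t*obj_vel - target| = radius for t (a quadratic in t).
    Returns a list of (start, end) safe intervals.
    """
    d = np.asarray(obj_pos, float) - np.asarray(target, float)
    v = np.asarray(obj_vel, float)
    a = v @ v
    b = 2.0 * d @ v
    c = d @ d - radius**2
    disc = b * b - 4.0 * a * c
    if a == 0.0 or disc <= 0.0:          # object never crosses the disc boundary
        return [(0.0, horizon)] if c > 0.0 else []
    t1 = (-b - np.sqrt(disc)) / (2.0 * a)  # object is inside the disc on (t1, t2)
    t2 = (-b + np.sqrt(disc)) / (2.0 * a)
    unsafe = (max(t1, 0.0), min(t2, horizon))
    if unsafe[0] >= unsafe[1]:
        return [(0.0, horizon)]
    out = []
    if unsafe[0] > 0.0:
        out.append((0.0, unsafe[0]))
    if unsafe[1] < horizon:
        out.append((unsafe[1], horizon))
    return out

# Object starts 20 m from the target position, closing at 5 m/s.
iv = safe_intervals(target=(0.0, 0.0), obj_pos=(20.0, 0.0), obj_vel=(-5.0, 0.0))
```

A planner can then generate or check a vehicle trajectory by requiring its arrival time at each target position to fall inside such an interval.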
  • Publication number: 20190243371
    Abstract: In various examples, a current claimed set of points representative of a volume in an environment occupied by a vehicle at a time may be determined. A vehicle-occupied trajectory and at least one object-occupied trajectory may be generated at the time. An intersection between the vehicle-occupied trajectory and an object-occupied trajectory may be determined based at least in part on comparing the vehicle-occupied trajectory to the object-occupied trajectory. Based on the intersection, the vehicle may then execute a first safety procedure or an alternative procedure that, when implemented by the vehicle while the object implements a second safety procedure, is determined to have a lesser likelihood of incurring a collision between the vehicle and the object than the first safety procedure.
    Type: Application
    Filed: February 1, 2019
    Publication date: August 8, 2019
    Inventors: David Nister, Hon-Leung Lee, Julia Ng, Yizhou Wang
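The intersection test the abstract describes can be illustrated with a toy model in which each occupied trajectory is a sequence of timed discs sampled on a shared time grid (the representation, radii, and function name are illustrative assumptions, not the patent's claimed-set formulation):

```python
import numpy as np

def trajectories_intersect(traj_a, traj_b, radius_a=1.0, radius_b=1.0):
    """Check whether two occupied trajectories ever overlap at the same time.

    Each trajectory is a list of (t, x, y) samples on a shared time grid;
    each sample claims a disc of the given radius around (x, y).
    """
    for (ta, xa, ya), (tb, xb, yb) in zip(traj_a, traj_b):
        assert ta == tb, "trajectories must share time samples"
        if np.hypot(xa - xb, ya - yb) < radius_a + radius_b:
            return True
    return False

ts = np.arange(0.0, 5.0, 0.5)
vehicle = [(t, 2.0 * t, 0.0) for t in ts]            # heading +x from x=0
obj     = [(t, 10.0 - 2.0 * t, 0.0) for t in ts]     # heading -x from x=10
obj_far = [(t, 10.0 - 2.0 * t, 5.0) for t in ts]     # same motion, offset 5 m
hit  = trajectories_intersect(vehicle, obj)      # meet near x = 5
miss = trajectories_intersect(vehicle, obj_far)  # never closer than 5 m
```

An intersection result like `hit` is what would trigger choosing between the first safety procedure and a lower-risk alternative.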
  • Patent number: 10345903
    Abstract: A method of detecting eye location for a head-mounted display system includes directing positioning light to an eye of a user and detecting the positioning light reflected from the eye of the user. The method further includes determining a distance between the eye and a near-eye optic of the head-mounted display system based on attributes of the detected positioning light, and providing feedback for adjusting the distance between the eye and the near-eye optic.
    Type: Grant
    Filed: August 22, 2013
    Date of Patent: July 9, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Steve Robbins, Scott C. McEldowney, Xinye Lou, David D. Bohn, Quentin Simon Charles Miller, David Nister, Gerhard Schneider, Christopher Maurice Mei, Nathan Ackerman
  • Patent number: 10248199
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received, the image data comprising at least one target visual and target visual metadata that identifies the at least one target visual. The target visual metadata is used to identify a target location of the at least one target visual. The estimated gaze location of the viewer is monitored and a probability that the viewer is gazing at the target location is estimated. The gaze tracking system is calibrated using the probability, the estimated gaze location and the target location to generate an updated estimated gaze location of the viewer.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: April 2, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
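The calibration step this abstract describes weights the update by the probability that the viewer is actually gazing at the target visual. A minimal sketch with a Gaussian distance-based probability model and a simple offset update (the probability model, rate, and all names are illustrative assumptions; the patent does not specify this formula):

```python
import numpy as np

def gaze_probability(gaze, target, sigma=50.0):
    """Probability-like score that the viewer is gazing at `target` (pixels),
    using a Gaussian falloff with distance from the estimated gaze."""
    d2 = float(np.sum((np.asarray(gaze, float) - np.asarray(target, float)) ** 2))
    return np.exp(-d2 / (2.0 * sigma**2))

def calibrate(offset, gaze, target, rate=0.5):
    """Update the calibration offset, weighted by the gaze probability.
    Returns (new_offset, corrected_gaze)."""
    p = gaze_probability(gaze, target)
    error = np.asarray(target, float) - np.asarray(gaze, float)
    new_offset = np.asarray(offset, float) + rate * p * error
    return new_offset, np.asarray(gaze, float) + new_offset

# Estimated gaze lands 30 px left of a target visual at (500, 300).
offset, corrected = calibrate(offset=(0.0, 0.0),
                              gaze=(470.0, 300.0),
                              target=(500.0, 300.0))
```

Weighting by probability means a far-off target (which the viewer likely is not looking at) barely perturbs the calibration, while a near one pulls the estimate toward it.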
  • Patent number: 10228561
    Abstract: An example see-through head-mounted display system includes a freeform prism and a display device configured to emit display light through the freeform prism to an eye of a user. The see-through head-mounted display system may also include an imaging device configured to receive gaze-detection light reflected from the eye and directed through the freeform prism.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: March 12, 2019
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Steve Robbins, Scott McEldowney, Xinye Lou, David Nister, Drew Steedly, Quentin Simon Charles Miller, David D. Bohn, James Peele Terrell, Jr., Andrew C. Goris, Nathan Ackerman
  • Patent number: 10025378
    Abstract: Embodiments are disclosed herein that relate to selecting user interface elements via a periodically updated position signal. For example, one disclosed embodiment provides a method comprising displaying on a graphical user interface a representation of a user interface element and a representation of an interactive target. The method further comprises receiving an input of coordinates of the periodically updated position signal, and determining a selection of the user interface element if a motion interaction of the periodically updated position signal with the interactive target meets a predetermined motion condition.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: July 17, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Morgan Kolya Venable, Bernard James Kerr, Vaibhav Thukral, David Nister
  • Publication number: 20180197047
    Abstract: A method of object detection includes receiving a first image taken from a first perspective by a first camera and receiving a second image taken from a second perspective, different from the first perspective, by a second camera. Each pixel in the first image is offset relative to a corresponding pixel in the second image by a predetermined offset distance resulting in offset first and second images. A particular pixel of the offset first image depicts a same object locus as a corresponding pixel in the offset second image only if the object locus is at an expected object-detection distance from the first and second cameras. The method includes recognizing that a target object is imaged by the particular pixel of the offset first image and the corresponding pixel of the offset second image.
    Type: Application
    Filed: March 6, 2018
    Publication date: July 12, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
  • Patent number: 9934451
    Abstract: A method of object detection includes receiving a first image taken by a first stereo camera, receiving a second image taken by a second stereo camera, and offsetting the first image relative to the second image by an offset distance selected such that each corresponding pixel of offset first and second images depict a same object locus if the object locus is at an assumed distance from the first and second stereo cameras. The method further includes locating a target object in the offset first and second images.
    Type: Grant
    Filed: June 25, 2013
    Date of Patent: April 3, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: David Nister, Piotr Dollar, Wolf Kienzle, Mladen Radojevic, Matthew S. Ashman, Ivan Stojiljkovic, Magdalena Vukosavljevic
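The offsetting step in this abstract follows from standard stereo geometry: a point at distance Z appears shifted between the two images by the disparity d = f·B/Z, so shifting one image by the disparity for an assumed distance aligns exactly the pixels that lie at that distance. A toy sketch (function names and the toy images are illustrative, not from the patent):

```python
import numpy as np

def disparity_for_distance(distance, focal_px, baseline_m):
    """Pixel offset at which both images depict the same point,
    valid only for points at `distance`: d = f * B / Z."""
    return focal_px * baseline_m / distance

def offset_and_compare(left, right, disparity):
    """Shift the right image by the disparity and difference it with the left.
    Pixels near zero in the result lie at (roughly) the assumed distance."""
    d = int(round(disparity))
    shifted = np.roll(right, d, axis=1)
    return np.abs(left.astype(float) - shifted.astype(float))

# Toy 1-row 'images': a bright blob at column 10 in the left image appears
# at column 7 in the right image, i.e. disparity 3 for the assumed distance.
left = np.zeros((1, 20)); left[0, 10] = 255.0
right = np.zeros((1, 20)); right[0, 7] = 255.0
diff = offset_and_compare(left, right, disparity_for_distance(2.0, 600.0, 0.01))
```

After the shift, a target-object detector only needs to look for agreement between the two images, since anything that still matches is at the assumed distance.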
  • Patent number: 9916502
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a selected combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera and project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: August 22, 2016
    Date of Patent: March 13, 2018
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
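The selection logic this abstract describes can be sketched as scoring each candidate combination of light sources by the occlusion detected in its captured frame and keeping the least-occluded one (the scoring callback and the toy occlusion model below are illustrative assumptions):

```python
from itertools import combinations

def select_light_combination(light_ids, occlusion_score, k=2):
    """Try each k-subset of light sources; keep the one whose captured
    frame shows the least occlusion from optics between eye and camera."""
    best, best_score = None, float("inf")
    for combo in combinations(light_ids, k):
        score = occlusion_score(combo)
        if score < best_score:
            best, best_score = combo, score
    return best, best_score

# Toy occlusion model: eyeglasses block the glints from lights 0 and 1.
occluded = {0, 1}
score = lambda combo: sum(1 for led in combo if led in occluded)
combo, s = select_light_combination(range(4), score)
```

In a real system the score would come from analyzing each captured image, with frames acquired while iterating through the combinations as the abstract describes.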
  • Publication number: 20170336867
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received, the image data comprising at least one target visual and target visual metadata that identifies the at least one target visual. The target visual metadata is used to identify a target location of the at least one target visual. The estimated gaze location of the viewer is monitored and a probability that the viewer is gazing at the target location is estimated. The gaze tracking system is calibrated using the probability, the estimated gaze location and the target location to generate an updated estimated gaze location of the viewer.
    Type: Application
    Filed: August 7, 2017
    Publication date: November 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
  • Patent number: 9727136
    Abstract: Examples relating to calibrating an estimated gaze location are disclosed. One example method comprises monitoring the estimated gaze location of a viewer using gaze tracking data from a gaze tracking system. Image data for display via a display device is received and, without using input from the viewer, at least one target visual that may attract a gaze of the viewer and a target location of the target visual are identified within the image data. The estimated gaze location of the viewer is compared with the target location of the target visual. An offset vector is calculated based on the estimated gaze location and the target location. The gaze tracking system is calibrated using the offset vector.
    Type: Grant
    Filed: May 19, 2014
    Date of Patent: August 8, 2017
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Vaibhav Thukral, David Nister, Morgan Kolya Venable, Bernard James Kerr, Chris Aholt
  • Publication number: 20160358009
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a selected combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera and project light via the selected combination of light sources for eye tracking.
    Type: Application
    Filed: August 22, 2016
    Publication date: December 8, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9488837
    Abstract: A system and related methods for near-eye display of an image are provided. In one example, a near-eye display system includes a light source comprising a surface and a plurality of pixels having a pixel pitch of 5 microns or less. An aperture array is located between 2 mm and 5 mm from the surface of the light source. The aperture array comprises non-overlapping apertures that are each centered on a vertex of an equilateral triangle within a grid of equilateral triangles. The center of each aperture is spaced from the center of each adjacent aperture by an aperture spacing of between 1 mm and 9 mm. The aperture array selectively passes the light emitted from the pixels to display the image without a double image condition.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: November 8, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: David Nister, Georg Klein
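The aperture layout in this abstract, centers on the vertices of a grid of equilateral triangles with uniform spacing, can be generated by shifting every other row by half a pitch (the function and parameter names are illustrative; the 4 mm spacing is just one value inside the claimed 1 mm to 9 mm range):

```python
import math

def triangular_grid(spacing, rows, cols):
    """Aperture centers on an equilateral-triangle grid: every center is
    `spacing` from each of its neighbors (odd rows shift by half a pitch)."""
    row_height = spacing * math.sqrt(3) / 2.0
    centers = []
    for r in range(rows):
        x0 = (spacing / 2.0) if r % 2 else 0.0
        for c in range(cols):
            centers.append((x0 + c * spacing, r * row_height))
    return centers

pts = triangular_grid(spacing=4.0, rows=3, cols=3)  # 4 mm aperture spacing
```

Each center sits exactly `spacing` from its neighbors in adjacent rows, which is the uniform aperture spacing the claim relies on to avoid a double-image condition.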
  • Patent number: 9454699
    Abstract: Embodiments are disclosed for eye tracking systems and methods. An example eye tracking system comprises a plurality of light sources and a camera configured to capture an image of light from the light sources as reflected from an eye. The eye tracking system further comprises a logic device and a storage device storing instructions executable by the logic device to acquire frames of eye tracking data by iteratively projecting light from different combinations of light sources of the plurality of light sources and capturing an image of the eye during projection of each combination. The instructions may be further executable to select a selected combination of light sources for eye tracking based on a determination of occlusion detected in the image arising from a transparent or semi-transparent optical structure positioned between the eye and the camera and project light via the selected combination of light sources for eye tracking.
    Type: Grant
    Filed: April 29, 2014
    Date of Patent: September 27, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Mudit Agrawal, Vaibhav Thukral, Ibrahim Eden, David Nister, Shivkumar Swaminathan
  • Patent number: 9342147
    Abstract: Examples relating to using non-visual feedback to alert a viewer of a display that a visual change has been triggered are disclosed. One disclosed example provides a method comprising using gaze tracking data from a gaze tracking system to determine that a viewer changes a gaze location. Based on determining that the viewer changes the gaze location, a visual change is triggered and non-visual feedback indicating the triggering of the visual change is provided to the viewer. If a cancel change input is received within a predetermined timeframe, the visual change is not displayed. If a cancel change input is not received within the timeframe, the visual change is displayed via the display.
    Type: Grant
    Filed: April 10, 2014
    Date of Patent: May 17, 2016
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Weerapan Wilairat, Ibrahim Eden, Vaibhav Thukral, David Nister, Vivek Pradeep