Patents by Inventor David Nister

David Nister has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20140375541
    Abstract: Embodiments are disclosed that relate to tracking a user's eye based on time-of-flight depth image data of the user's eye. For example, one disclosed embodiment provides an eye tracking system comprising a light source, a sensing subsystem configured to obtain a two-dimensional image of a user's eye and depth data of the user's eye using a depth sensor having an unconstrained baseline distance, and a logic subsystem configured to control the light source to emit light, control the sensing subsystem to acquire a two-dimensional image of the user's eye while illuminating the light source, control the sensing subsystem to acquire depth data of the user's eye, determine a gaze direction of the user's eye from the two-dimensional image, determine a location on a display at which the gaze direction intersects the display based on the gaze direction and the depth data, and output the location.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: David Nister, Ibrahim Eden
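    The last steps recited above, estimating the gaze from the 2D image and using the depth data to find where the gaze ray meets the display, reduce geometrically to a ray-plane intersection. A minimal sketch of that step follows; the function name, the planar-display model, and all values are illustrative assumptions, not the patent's implementation:

    ```python
    import numpy as np

    def gaze_display_intersection(eye_pos, gaze_dir, plane_point, plane_normal):
        """Intersect a gaze ray with a planar display.

        eye_pos:      3D eye position recovered from depth data
        gaze_dir:     gaze direction estimated from the 2D image
        plane_point:  any point on the display plane
        plane_normal: unit normal of the display plane
        Returns the 3D intersection point, or None if the gaze is
        (nearly) parallel to the display or the display is behind the eye.
        """
        gaze_dir = gaze_dir / np.linalg.norm(gaze_dir)
        denom = np.dot(plane_normal, gaze_dir)
        if abs(denom) < 1e-9:      # gaze parallel to the display plane
            return None
        t = np.dot(plane_normal, plane_point - eye_pos) / denom
        if t < 0:                  # display is behind the eye
            return None
        return eye_pos + t * gaze_dir

    # Example: eye 60 cm in front of a display lying in the z = 0 plane.
    hit = gaze_display_intersection(
        eye_pos=np.array([0.0, 0.0, 0.6]),
        gaze_dir=np.array([0.1, -0.05, -1.0]),
        plane_point=np.zeros(3),
        plane_normal=np.array([0.0, 0.0, 1.0]),
    )
    print(hit)  # on-screen point at which the gaze intersects the display
    ```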
  • Publication number: 20140375790
    Abstract: Embodiments are disclosed for a see-through head-mounted display system. In one embodiment, the see-through head-mounted display system comprises a freeform prism, and a display device configured to emit display light through the freeform prism to an eye of a user. The see-through head-mounted display system may also comprise an imaging device having an entrance pupil positioned at a back focal plane of the freeform prism, the imaging device configured to receive gaze-detection light reflected from the eye and directed through the freeform prism.
    Type: Application
    Filed: June 25, 2013
    Publication date: December 25, 2014
    Inventors: Steve Robbins, Scott McEldowney, Xinye Lou, David Nister, Drew Steedly, Quentin Simon Charles Miller, David D. Bohn, James Peele Terrell, JR., Andrew C. Goris, Nathan Ackerman
  • Patent number: 8917238
    Abstract: Various embodiments related to entering text into a computing device via eye-typing are disclosed. For example, one embodiment provides a method that includes receiving a data set including a plurality of gaze samples, each gaze sample including a gaze location and a corresponding point in time. The method further comprises processing the plurality of gaze samples to determine one or more likely terms represented by the data set.
    Type: Grant
    Filed: June 28, 2012
    Date of Patent: December 23, 2014
    Assignee: Microsoft Corporation
    Inventors: David Nister, Vaibhav Thukral, Djordje Nijemcevic, Ruchi Bhargava
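    The core of the claimed method, turning timestamped gaze samples into likely terms, can be illustrated with a toy decoder: snap each fixation to the nearest on-screen key, collapse consecutive repeats, and rank a small lexicon by string similarity. The key layout, lexicon, and scoring below are invented for illustration; the patent does not specify them:

    ```python
    from difflib import SequenceMatcher

    # Hypothetical key centers (x, y) for a few on-screen keys.
    KEYS = {"c": (0, 0), "a": (1, 0), "t": (2, 0), "o": (3, 0), "r": (4, 0)}
    LEXICON = ["cat", "car", "cart", "coat"]

    def nearest_key(x, y):
        return min(KEYS, key=lambda k: (KEYS[k][0] - x) ** 2 + (KEYS[k][1] - y) ** 2)

    def likely_terms(gaze_samples, top_n=3):
        """gaze_samples: (x, y, time) tuples in key-grid coordinates.
        Snaps each sample to its nearest key, collapses consecutive
        repeats, and ranks lexicon words by similarity to that sequence."""
        keys = [nearest_key(x, y) for x, y, _ in gaze_samples]
        collapsed = [k for i, k in enumerate(keys) if i == 0 or k != keys[i - 1]]
        seq = "".join(collapsed)
        return sorted(LEXICON,
                      key=lambda w: SequenceMatcher(None, seq, w).ratio(),
                      reverse=True)[:top_n]

    samples = [(0.1, 0.0, 0.00), (0.9, 0.1, 0.08), (1.1, 0.0, 0.16), (2.2, 0.1, 0.24)]
    print(likely_terms(samples))  # 'cat' ranks first
    ```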
  • Publication number: 20140357290
    Abstract: A source wireless fingerprint is associated with a source image. One or more eligible cataloged wireless fingerprints having a threshold similarity to the source wireless fingerprint are found. Similarly, one or more eligible cataloged images having a threshold similarity to the source image are found. The current location of the device that acquired the source wireless fingerprint and source image is then inferred to be the cataloged location associated with a chosen eligible cataloged wireless fingerprint and a chosen eligible cataloged image.
    Type: Application
    Filed: May 31, 2013
    Publication date: December 4, 2014
    Inventors: Michael Grabner, Ethan Eade, David Nister
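    The "threshold similarity" test for wireless fingerprints can be made concrete with a cosine similarity over per-access-point signal strengths. This is a plausible stand-in only; the patent does not commit to a particular measure, and the catalog below is invented:

    ```python
    import math

    def fingerprint_similarity(fp_a, fp_b):
        """Cosine similarity between Wi-Fi fingerprints, each a dict
        mapping access-point ID -> received signal strength (RSSI)."""
        common = set(fp_a) & set(fp_b)
        if not common:
            return 0.0
        dot = sum(fp_a[ap] * fp_b[ap] for ap in common)
        norm_a = math.sqrt(sum(v * v for v in fp_a.values()))
        norm_b = math.sqrt(sum(v * v for v in fp_b.values()))
        return dot / (norm_a * norm_b)

    def eligible_locations(source_fp, catalog, threshold=0.9):
        """catalog: (location, fingerprint) pairs. Returns locations whose
        cataloged fingerprint clears the similarity threshold."""
        return [loc for loc, fp in catalog
                if fingerprint_similarity(source_fp, fp) >= threshold]

    catalog = [("lobby", {"ap1": -40, "ap2": -70}),
               ("cafe",  {"ap1": -80, "ap3": -50})]
    print(eligible_locations({"ap1": -42, "ap2": -68}, catalog))  # ['lobby']
    ```

    Image eligibility would be computed analogously (for example, with visual-feature similarity), and the inferred location is a cataloged entry that clears both thresholds.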
  • Patent number: 8880545
    Abstract: Various embodiments enable audio data, such as music data, to be captured, by a device, from a background environment and processed to formulate a query that can then be transmitted to a content recognition service. In one or more embodiments, multiple queries are transmitted to the content recognition service. In at least some embodiments, subsequent queries can progressively incorporate previous queries plus additional data that is captured. In one or more embodiments, responsive to receiving the query, the content recognition service can employ a multi-stage matching technique to identify content items responding to the query. This matching technique can be employed as queries are progressively received.
    Type: Grant
    Filed: May 18, 2011
    Date of Patent: November 4, 2014
    Assignee: Microsoft Corporation
    Inventors: Kazuhito Koishida, David Nister, Ian Simon, Tom Butcher
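    Two ideas here lend themselves to a sketch: queries that progressively incorporate newly captured data, and a multi-stage matcher that prunes the catalog with a cheap test before running a costlier comparison on the survivors. The toy below uses strings in place of audio fingerprints; the catalog, scores, and parameters are all invented:

    ```python
    from difflib import SequenceMatcher

    # Toy catalog: strings stand in for audio fingerprints.
    CATALOG = {"song_a": "abcabxxxxx", "song_b": "zzzzzzzzzz", "song_c": "abcabdabcz"}

    def coarse_score(query, item):
        """Stage 1: cheap 3-gram overlap used to prune the catalog."""
        grams = lambda s: {s[i:i + 3] for i in range(len(s) - 2)}
        q, c = grams(query), grams(item)
        return len(q & c) / max(len(q), 1)

    def fine_score(query, item):
        """Stage 2: a costlier alignment, run only on stage-1 survivors."""
        return SequenceMatcher(None, query, item).ratio()

    def match(query, keep=2):
        survivors = sorted(CATALOG, key=lambda k: coarse_score(query, CATALOG[k]),
                           reverse=True)[:keep]
        return max(survivors, key=lambda k: fine_score(query, CATALOG[k]))

    # Progressive querying: each query incorporates the previous one plus
    # newly captured data, so later matches become more reliable.
    captured = ""
    for chunk in ["abcab", "dabc"]:
        captured += chunk
        print(match(captured))  # song_a (a tie), then song_c once more audio arrives
    ```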
  • Publication number: 20140168261
    Abstract: A system and method are disclosed for interacting with virtual objects in a virtual environment using an accessory such as a hand held object. The virtual object may be viewed using a display device. The display device and hand held object may cooperate to determine a scene map of the virtual environment, the display device and hand held object being registered in the scene map.
    Type: Application
    Filed: December 13, 2012
    Publication date: June 19, 2014
    Inventors: Jeffrey N. Margolis, Benjamin I. Vaught, Alex Aben-Athar Kipman, Georg Klein, Frederik Schaffalitzky, David Nister, Russ McMackin, Doug Barnes
  • Publication number: 20140112527
    Abstract: Architecture that enables optical character recognition (OCR) of text in video frames at the rate at which the frames are received. Additionally, conflation is performed on multiple text recognition results in the frame sequence. The architecture comprises an OCR text recognition engine and a tracker system; the tracker system establishes a common coordinate system in which OCR results from different frames may be compared and/or combined. From a set of sequential video frames, a keyframe is chosen from which the reference coordinate system is established. An estimated transformation from keyframe coordinates to subsequent video frames is computed using the tracker system. When text recognition is completed for any subsequent frame, the result coordinates can be related to the keyframe using the inverse transformation from the processed frame to the reference keyframe. The results can be rendered for viewing as the results are obtained.
    Type: Application
    Filed: October 18, 2012
    Publication date: April 24, 2014
    Applicant: Microsoft Corporation
    Inventors: David Nister, Frederik Schaffalitzky, Michael Grabner, Matthew S. Ashman, Milan Vugdelija, Ivan Stojiljkovic
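    The coordinate bookkeeping described above, relating OCR results from any processed frame back to the keyframe via the inverse of the estimated keyframe-to-frame transformation, is easy to sketch if the tracker's transform is modeled as a 3x3 homography. The homography values here are invented for illustration:

    ```python
    import numpy as np

    def to_keyframe(points_xy, H_key_to_frame):
        """Map OCR result coordinates from a processed frame back into the
        keyframe's reference coordinate system via the inverse transform."""
        H_inv = np.linalg.inv(H_key_to_frame)
        pts = np.column_stack([points_xy, np.ones(len(points_xy))])  # homogeneous
        mapped = pts @ H_inv.T
        return mapped[:, :2] / mapped[:, 2:3]  # dehomogenize

    # Hypothetical tracker estimate: the frame is the keyframe scaled by
    # 1.1 and shifted by (12, -7) pixels.
    H = np.array([[1.1, 0.0, 12.0],
                  [0.0, 1.1, -7.0],
                  [0.0, 0.0,  1.0]])
    word_box = np.array([[100.0, 50.0], [180.0, 50.0]])  # detected in frame coords
    print(to_keyframe(word_box, H))  # same box in keyframe coordinates
    ```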
  • Publication number: 20140002341
    Abstract: Various embodiments related to entering text into a computing device via eye-typing are disclosed. For example, one embodiment provides a method that includes receiving a data set including a plurality of gaze samples, each gaze sample including a gaze location and a corresponding point in time. The method further comprises processing the plurality of gaze samples to determine one or more likely terms represented by the data set.
    Type: Application
    Filed: June 28, 2012
    Publication date: January 2, 2014
    Inventors: David Nister, Vaibhav Thukral, Djordje Nijemcevic, Ruchi Bhargava
  • Publication number: 20130194304
    Abstract: A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.
    Type: Application
    Filed: February 1, 2012
    Publication date: August 1, 2013
    Inventors: Stephen Latta, Darren Bennett, Peter Tobias Kinnebrew, Kevin Geisner, Brian Mount, Arthur Tomlin, Mike Scavezze, Daniel McCulloch, David Nister, Drew Steedly, Jeffrey Alan Kohler, Ben Sugden, Sebastian Sylvan
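    Positioning the two virtual images "coincidently within a coordinate system" amounts to anchoring one shared world-space point and re-expressing it in each display's independently oriented frame. A minimal sketch with hypothetical device poses; the patent does not prescribe this particular representation:

    ```python
    import numpy as np

    def world_to_device(p_world, R_dev, t_dev):
        """Express a shared world-space point in one device's frame, where
        (R_dev, t_dev) is that device's pose in the world. Rendering the
        virtual image at this point in each device keeps both views
        anchored to the same world-space location."""
        return R_dev.T @ (p_world - t_dev)

    hologram = np.array([0.0, 1.5, 2.0])   # shared anchor in world space
    # Hypothetical poses of two independently oriented displays.
    R_a, t_a = np.eye(3), np.array([0.0, 1.6, 0.0])
    R_b = np.array([[0.0, -1.0, 0.0],      # display B is rotated 90 degrees
                    [1.0,  0.0, 0.0],
                    [0.0,  0.0, 1.0]])
    t_b = np.array([2.0, 1.6, 2.0])
    print(world_to_device(hologram, R_a, t_a))  # point in display A's frame
    print(world_to_device(hologram, R_b, t_b))  # same point in display B's frame
    ```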
  • Publication number: 20120296458
    Abstract: Various embodiments enable audio data, such as music data, to be captured, by a device, from a background environment and processed to formulate a query that can then be transmitted to a content recognition service. In one or more embodiments, the audio data is captured prior to receiving user input associated with audio data capture, e.g., launch of an application associated with the content recognition service, provision of user input proactively indicating that audio data capture is desired, and the like. Responsive to transmitting the query, displayable information associated with the audio data is returned by the content recognition service and can be consumed by the device.
    Type: Application
    Filed: May 18, 2011
    Publication date: November 22, 2012
    Applicant: Microsoft Corporation
    Inventors: Kazuhito Koishida, David Nister, Ian Simon, Tom Butcher
  • Publication number: 20120296938
    Abstract: Various embodiments enable audio data, such as music data, to be captured, by a device, from a background environment and processed to formulate a query that can then be transmitted to a content recognition service. In one or more embodiments, multiple queries are transmitted to the content recognition service. In at least some embodiments, subsequent queries can progressively incorporate previous queries plus additional data that is captured. In one or more embodiments, responsive to receiving the query, the content recognition service can employ a multi-stage matching technique to identify content items responding to the query. This matching technique can be employed as queries are progressively received.
    Type: Application
    Filed: May 18, 2011
    Publication date: November 22, 2012
    Applicant: Microsoft Corporation
    Inventors: Kazuhito Koishida, David Nister, Ian Simon, Tom Butcher
  • Patent number: 7725484
    Abstract: An image retrieval technique employing a novel hierarchical feature/descriptor vector quantizer tool (a 'vocabulary tree' comprising hierarchically organized sets of feature vectors) that effectively partitions feature space in a hierarchical manner, creating a quantized space that is mapped to an integer encoding. The computerized implementation employs subroutine components such as a trainer component that generates a hierarchical quantizer, Q, for use in novel image-insertion and image-query stages. The hierarchical quantizer, Q, is generated by running k-means recursively on the feature (a/k/a descriptor) space, splitting each node of each resulting quantization level into a plurality of child nodes. Preferably, training of the hierarchical quantizer, Q, is performed in an 'offline' fashion.
    Type: Grant
    Filed: November 20, 2006
    Date of Patent: May 25, 2010
    Assignee: University of Kentucky Research Foundation (UKRF)
    Inventors: David Nistér, Henrik Stewénius
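    The recursive construction in the abstract is compact enough to sketch end to end: training splits each node's descriptors with k-means, and quantization descends the tree recording the winning child index at each level, that path being the integer encoding. A simplified sketch with a plain Lloyd's k-means and invented parameters (k = 3, depth = 3):

    ```python
    import numpy as np

    def kmeans(X, k, iters=20, seed=0):
        """Plain Lloyd's k-means; returns (centers, labels)."""
        rng = np.random.default_rng(seed)
        centers = X[rng.choice(len(X), k, replace=False)]
        for _ in range(iters):
            dists = np.linalg.norm(X[:, None] - centers[None], axis=2)
            labels = dists.argmin(axis=1)
            for j in range(k):
                if np.any(labels == j):        # skip empty clusters
                    centers[j] = X[labels == j].mean(axis=0)
        return centers, labels

    def build_vocab_tree(X, k=3, depth=3):
        """Recursively split the descriptor set, yielding a hierarchical
        quantizer: each node holds k child centers."""
        if depth == 0 or len(X) < k:
            return None
        centers, labels = kmeans(X, k)
        children = [build_vocab_tree(X[labels == j], k, depth - 1)
                    for j in range(k)]
        return {"centers": centers, "children": children}

    def quantize(tree, desc):
        """Descend the tree, recording the nearest child at each level;
        the resulting path identifies one quantization cell."""
        path = []
        while tree is not None:
            j = int(np.linalg.norm(tree["centers"] - desc, axis=1).argmin())
            path.append(j)
            tree = tree["children"][j]
        return path

    descs = np.random.default_rng(1).normal(size=(500, 8))  # stand-in descriptors
    tree = build_vocab_tree(descs)
    print(quantize(tree, descs[0]))  # e.g. [1, 0, 2]: this descriptor's cell
    ```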
  • Patent number: 7613323
    Abstract: A method and apparatus for determining camera pose characterized by six degrees of freedom (e.g., for use in computer vision systems) is disclosed. In one embodiment an image captured by the camera is received, and at least two constraints on the potential pose are enforced in accordance with known relations of the image to the camera, such that the potential pose is constrained to two remaining degrees of freedom. At least one potential pose is then determined in accordance with the remaining two degrees of freedom.
    Type: Grant
    Filed: June 22, 2005
    Date of Patent: November 3, 2009
    Assignee: Sarnoff Corporation
    Inventors: David Nister, James Bergen
  • Publication number: 20090237508
    Abstract: A method and apparatus for providing immersive surveillance wherein a remote security guard may monitor a scene using a variety of imagery sources that are rendered upon a model to provide a three-dimensional conceptual view of the scene. Using a view selector, the security guard may dynamically select a camera view to be displayed on his conceptual model, perform a walk-through of the scene, identify moving objects and select the best view of those moving objects, and so on.
    Type: Application
    Filed: March 13, 2009
    Publication date: September 24, 2009
    Applicant: L-3 COMMUNICATIONS CORPORATION
    Inventors: Aydin Arpa, Keith Hanna, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney, Manoj Aggarwal, David Nister, Stephen Hsu
  • Patent number: 7522186
    Abstract: A method and apparatus for providing immersive surveillance wherein a remote security guard may monitor a scene using a variety of imagery sources that are rendered upon a model to provide a three-dimensional conceptual view of the scene. Using a view selector, the security guard may dynamically select a camera view to be displayed on his conceptual model, perform a walk-through of the scene, identify moving objects and select the best view of those moving objects, and so on.
    Type: Grant
    Filed: July 24, 2002
    Date of Patent: April 21, 2009
    Assignee: L-3 Communications Corporation
    Inventors: Aydin Arpa, Keith J. Hanna, Rakesh Kumar, Supun Samarasekera, Harpreet Singh Sawhney, Manoj Aggarwal, David Nister, Stephen Hsu
  • Patent number: 7359526
    Abstract: A method and apparatus for determining camera pose from point correspondences. Specifically, an efficient solution to the classical five-point relative pose problem is presented. The problem is to find the possible solutions for relative camera motion between two calibrated views given five corresponding points. The method consists of computing the coefficients of a tenth degree polynomial and subsequently finding its roots. The method is well suited for numerical implementation that also corresponds to the inherent complexity of the problem. The method is used in a robust hypothesize-and-test framework to estimate structure and motion in real-time.
    Type: Grant
    Filed: March 11, 2004
    Date of Patent: April 15, 2008
    Assignee: Sarnoff Corporation
    Inventor: David Nister
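    The computational heart of the method is forming a degree-ten polynomial from the five correspondences and extracting its real roots, each real root yielding one candidate relative pose that the hypothesize-and-test framework then scores against further data. The root-extraction step is sketched below; the coefficients are invented stand-ins, since deriving them from actual correspondences is the bulk of the real solver:

    ```python
    import numpy as np

    def real_roots(coeffs, tol=1e-6):
        """Real roots of the tenth-degree polynomial the five-point
        method reduces to; each gives one candidate essential matrix.
        coeffs: the 11 polynomial coefficients, highest degree first."""
        roots = np.roots(coeffs)
        return roots.real[np.abs(roots.imag) < tol]

    # Illustrative only: a polynomial built to have roots 1..10. A real
    # solver derives the coefficients from the five correspondences.
    coeffs = np.poly(np.arange(1, 11))
    print(real_roots(coeffs))  # up to ten real candidates per sample
    ```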
  • Patent number: 7324686
    Abstract: The technology described relates to reconstruction of 3-dimensional scenes from uncalibrated images, and provides a robust and systematic strategy for using cheirality in scene reconstruction and camera calibration. A general projective reconstruction is upgraded to a quasi-affine reconstruction. Cheirality constraints are deduced with regard to the cameras by statistical use of scene points in a voting procedure. The deduced cheirality constraints constrain the position of the plane at infinity. Linear programming is used to determine a tentative plane at infinity. Based on this tentative plane at infinity, the initial projective reconstruction can be transformed into a reconstruction that is quasi-affine with respect to the cameras.
    Type: Grant
    Filed: September 26, 2001
    Date of Patent: January 29, 2008
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventor: David Nister
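    The linear-programming step can be illustrated as a feasibility problem: the deduced cheirality constraints are linear inequalities on the plane at infinity, and any feasible plane is a tentative answer. The toy below uses SciPy, with the inequalities invented for illustration rather than derived from the patent's voting procedure:

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def tentative_plane(points_h):
        """Find a plane p (a 4-vector) with p . X_i >= 1 for every
        homogeneous scene point X_i, a stand-in for the cheirality
        inequalities that constrain the plane at infinity."""
        n = len(points_h)
        res = linprog(c=np.zeros(4),            # pure feasibility: no objective
                      A_ub=-points_h, b_ub=-np.ones(n),
                      bounds=[(None, None)] * 4, method="highs")
        return res.x if res.success else None

    # Invented homogeneous scene points, all on one side of some plane.
    pts = np.array([[ 0.0, 0.0, 1.0, 1.0],
                    [ 1.0, 0.0, 2.0, 1.0],
                    [-1.0, 1.0, 3.0, 1.0]])
    print(tentative_plane(pts))  # one plane satisfying all the inequalities
    ```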
  • Publication number: 20070288141
    Abstract: A method and apparatus for visual odometry (e.g., for navigating a surrounding environment) is disclosed. In one embodiment a sequence of scene imagery is received (e.g., from a video camera or a stereo head) that represents at least a portion of the surrounding environment. The sequence of scene imagery is processed (e.g., in accordance with video processing techniques) to derive an estimate of a pose relative to the surrounding environment. This estimate may be further supplemented with data from other sensors, such as a global positioning system or inertial or mechanical sensors.
    Type: Application
    Filed: June 22, 2005
    Publication date: December 13, 2007
    Inventors: James Bergen, Oleg Naroditsky, David Nister
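    The pose estimate in visual odometry is accumulated by composing per-frame relative motions into a global trajectory; GPS or inertial data can then correct the drift that accumulates. A minimal sketch of that composition step, with the motions invented:

    ```python
    import numpy as np

    def chain_poses(relative_motions):
        """Compose per-frame relative motions (R, t) into a trajectory of
        global positions: each new pose is the previous pose advanced by
        the motion estimated from the scene imagery."""
        R_g, t_g = np.eye(3), np.zeros(3)
        trajectory = [t_g.copy()]
        for R, t in relative_motions:
            t_g = t_g + R_g @ t      # translate along the current heading
            R_g = R_g @ R            # then update the heading
            trajectory.append(t_g.copy())
        return trajectory

    # Hypothetical estimates: 1 m forward while turning 90 degrees left,
    # then 1 m forward along the new heading.
    Rz90 = np.array([[0.0, -1.0, 0.0],
                     [1.0,  0.0, 0.0],
                     [0.0,  0.0, 1.0]])
    steps = [(Rz90, np.array([1.0, 0.0, 0.0])),
             (np.eye(3), np.array([1.0, 0.0, 0.0]))]
    for p in chain_poses(steps):
        print(p)  # (0,0,0) -> (1,0,0) -> (1,1,0)
    ```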
  • Patent number: 7271827
    Abstract: The present invention relates to the recording of moving images by means of a portable communication device, such as a videophone. The communication device includes a main device (101), comprising a video camera (105). Furthermore, an accessory device (102), such as a headset, is also associated with the main device (101) and co-located with a relevant object (103). The video camera (105) records an original image of the relevant object (103). At least one tracking point (107a) is located on the accessory device (102), and at least one automatic tracking sensor (108a-108c) responsive to the at least one tracking point is located on the main device (101). The main device (101) further comprises a tracking data generator, which receives signals from the automatic tracking sensor(s) (108a-108c) and generates in response thereto tracking data representing a target direction (104) between the main device (101) and the accessory device (102).
    Type: Grant
    Filed: October 11, 2001
    Date of Patent: September 18, 2007
    Assignee: Telefonaktiebolaget LM Ericsson (publ)
    Inventor: David Nister
  • Publication number: 20070214172
    Abstract: An image retrieval technique employing a novel hierarchical feature/descriptor vector quantizer tool (a 'vocabulary tree' comprising hierarchically organized sets of feature vectors) that effectively partitions feature space in a hierarchical manner, creating a quantized space that is mapped to an integer encoding. The computerized implementation employs subroutine components such as a trainer component that generates a hierarchical quantizer, Q, for use in novel image-insertion and image-query stages. The hierarchical quantizer, Q, is generated by running k-means recursively on the feature (a/k/a descriptor) space, splitting each node of each resulting quantization level into a plurality of child nodes. Preferably, training of the hierarchical quantizer, Q, is performed in an 'offline' fashion.
    Type: Application
    Filed: November 20, 2006
    Publication date: September 13, 2007
    Inventors: David Nister, Henrik Stewenius