Patents by Inventor Stephen Latta

Stephen Latta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130342568
    Abstract: Embodiments related to providing low light scene augmentation are disclosed. One embodiment provides, on a computing device comprising a see-through display device, a method including recognizing, from image data received from an image sensor, a background scene of an environment viewable through the see-through display device, the environment comprising a physical object. The method further includes identifying one or more geometrical features of the physical object and displaying, on the see-through display device, an image augmenting the one or more geometrical features.
    Type: Application
    Filed: June 20, 2012
    Publication date: December 26, 2013
    Inventors: Tony Ambrus, Mike Scavezze, Stephen Latta, Daniel McCulloch, Brian Mount
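    One possible reading of the low-light augmentation idea, sketched as code (the function, threshold, and one-pixel gradient test are illustrative assumptions, not the patent's method): detect strong intensity gradients in a camera frame, which mark geometrical edges that could then be redrawn as bright overlay lines on the see-through display.

    ```python
    def edge_mask(frame, threshold=30):
        """Return a same-sized 0/1 mask marking pixels whose horizontal or
        vertical intensity gradient exceeds `threshold`."""
        h, w = len(frame), len(frame[0])
        mask = [[0] * w for _ in range(h)]
        for y in range(h - 1):
            for x in range(w - 1):
                gx = abs(frame[y][x + 1] - frame[y][x])  # horizontal gradient
                gy = abs(frame[y + 1][x] - frame[y][x])  # vertical gradient
                if gx + gy > threshold:
                    mask[y][x] = 1
        return mask

    # A dark scene with one bright vertical edge between columns 1 and 2.
    frame = [
        [10, 10, 200, 200],
        [10, 10, 200, 200],
        [10, 10, 200, 200],
    ]
    mask = edge_mask(frame)
    ```

    The resulting mask marks only the edge pixels, which is the kind of geometric feature an augmentation layer could trace even when the scene itself is too dark to see clearly.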
  • Publication number: 20130335435
    Abstract: Embodiments related to improving a color-resolving ability of a user of a see-through display device are disclosed. For example, one disclosed embodiment includes, on a see-through display device, constructing and displaying virtual imagery to superpose onto real imagery sighted by the user through the see-through display device. The virtual imagery is configured to accentuate a locus of the real imagery of a color poorly distinguishable by the user. Such virtual imagery is then displayed by superposing it onto the real imagery, in registry with the real imagery, in a field of view of the user.
    Type: Application
    Filed: June 18, 2012
    Publication date: December 19, 2013
    Inventors: Tony Ambrus, Adam Smith-Kipnis, Stephen Latta, Daniel McCulloch, Brian Mount, Kevin Geisner, Ian McIntyre
  • Publication number: 20130335404
    Abstract: One embodiment provides a method for controlling a virtual depth of field perceived by a wearer of a see-through display device. The method includes estimating the ocular depth of field of the wearer and projecting virtual imagery with a specified amount of blur. The amount of blur is determined as a function of the ocular depth of field. Another embodiment provides a method for controlling an ocular depth of field of a wearer of a see-through display device. This method includes computing a target value for the depth of field and increasing the pixel brightness of the virtual imagery presented to the wearer. The increase in pixel brightness contracts the wearer's pupils and thereby deepens the depth of field to the target value.
    Type: Application
    Filed: June 15, 2012
    Publication date: December 19, 2013
    Inventors: Jeff Westerinen, Rod G. Fleck, Jack Clevenger, Stephen Latta
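    The first embodiment's "blur as a function of the ocular depth of field" could be modeled along these lines (the linear model, gain, and parameter names are assumptions for illustration, not the patented formula): a virtual object inside the in-focus band gets no blur, and blur grows with distance outside it, so virtual imagery defocuses the way real objects at that distance would.

    ```python
    def blur_radius(object_distance_m, focus_distance_m, depth_of_field_m,
                    gain=2.0):
        """Blur radius in pixels: zero inside the in-focus band around the
        focus distance, growing linearly with distance outside it."""
        half_band = depth_of_field_m / 2.0
        miss = abs(object_distance_m - focus_distance_m) - half_band
        return gain * max(0.0, miss)
    ```

    With a 1 m depth of field focused at 2 m, an object at 2 m gets zero blur while an object at 4 m gets a nonzero radius, matching the intuition the abstract describes.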
  • Publication number: 20130335442
    Abstract: Various embodiments are disclosed that relate to enhancing the display of images comprising text on various computing device displays. For example, one disclosed embodiment provides, on a computing device, a method of displaying an image, the method including receiving from a remote computing device image data representing a non-text portion of the image, receiving from the remote computing device unrendered text data representing a text portion of the image, rendering the unrendered text data based upon local contextual rendering information to form locally rendered text data, compositing the locally rendered text data and the image data to form a composited image, and providing the composited image to a display.
    Type: Application
    Filed: June 18, 2012
    Publication date: December 19, 2013
    Inventors: Rod G. Fleck, Stephen Latta
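    The split-rendering flow in this abstract can be sketched as follows. All structures here, including the trivial one-character-per-"pixel" rendering, are invented for illustration: the remote device sends the picture part of a page plus unrendered text (strings with positions), and the local device rasterizes the text with its own context and composites it over the image.

    ```python
    def render_text(text_items, width, height):
        """Rasterize (row, col, string) items into a sparse {(y, x): char}
        layer, clipping anything outside the display bounds."""
        layer = {}
        for row, col, text in text_items:
            for i, ch in enumerate(text):
                if 0 <= row < height and 0 <= col + i < width:
                    layer[(row, col + i)] = ch
        return layer

    def composite(image, text_layer):
        """Overlay locally rendered text onto a 2-D grid of background
        'pixels', leaving uncovered pixels untouched."""
        out = [list(r) for r in image]
        for (y, x), ch in text_layer.items():
            out[y][x] = ch
        return out

    # Remote side sends the image grid and the unrendered text items;
    # the local side renders and composites them.
    image = [["."] * 8 for _ in range(3)]
    page = composite(image, render_text([(1, 2, "hi")], 8, 3))
    ```

    The point of the split is that text stays crisp: the local device can pick fonts and sizes suited to its own display rather than receiving text pre-rasterized at the remote device's resolution.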
  • Publication number: 20130208014
    Abstract: A blocking image generating system including a head-mounted display device having an opacity layer and related methods are disclosed. A method may include receiving a virtual image to be presented by display optics in the head-mounted display device. Lighting information and an eye-position parameter may be received from an optical sensor system in the head-mounted display device. A blocking image may be generated in the opacity layer of the head-mounted display device based on the lighting information and the virtual image. The location of the blocking image in the opacity layer may be adjusted based on the eye-position parameter.
    Type: Application
    Filed: February 10, 2012
    Publication date: August 15, 2013
    Inventors: Rod G. Fleck, David D. Bohn, Stephen Latta, Julia Meinershagen, Sebastian Sylvan, Brian McDowell, Jeff Cole, Jeffrey Alan Kohler
  • Publication number: 20130194389
    Abstract: A method for assessing a wearer's attentiveness to visual stimuli received through a head-mounted display device. The method employs first and second detectors arranged in the head-mounted display device. An ocular state of the wearer of the head-mounted display device is detected with the first detector while the wearer is receiving a visual stimulus. With the second detector, the visual stimulus received by the wearer is detected. The ocular state is then correlated to the wearer's attentiveness to the visual stimulus.
    Type: Application
    Filed: January 31, 2012
    Publication date: August 1, 2013
    Inventors: Ben Vaught, Ben Sugden, Stephen Latta, John Clavin
  • Publication number: 20130196757
    Abstract: A system and related methods for inviting a potential player to participate in a multiplayer game via a user head-mounted display device are provided. In one example, a potential player invitation program receives user voice data and determines that the user voice data is an invitation to participate in a multiplayer game. The program receives eye-tracking information, depth information, facial recognition information, potential player head-mounted display device information, and/or potential player voice data. The program associates the invitation with the potential player using the eye-tracking information, the depth information, the facial recognition information, the potential player head-mounted display device information, and/or the potential player voice data. The program matches a potential player account with the potential player. The program then receives an acceptance response from the potential player and joins the potential player account with a user account for participation in the multiplayer game.
    Type: Application
    Filed: January 30, 2012
    Publication date: August 1, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Kevin Geisner, Brian Mount, Jonathan Steed, Tony Ambrus, Arnulfo Zepeda, Aaron Krauss
  • Publication number: 20130196772
    Abstract: Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
    Type: Application
    Filed: January 31, 2012
    Publication date: August 1, 2013
    Inventors: Stephen Latta, Kevin Geisner, Brian Mount, Daniel McCulloch, Cameron Brown, Jeffrey Alan Kohler, Wei Zhang, Ryan Hastings, Darren Bennett, Ian McIntyre
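    The matching criterion in this abstract could, for example, compare the floor area of each user's physical space (the greedy pairing and the area threshold below are invented for illustration; the patent's actual matching criteria are broader):

    ```python
    def match_users(requests, max_area_diff_m2=4.0):
        """Greedily pair (user, play_area_m2) requests whose floor areas
        differ by at most `max_area_diff_m2`; users with no compatible
        partner are left unmatched."""
        pending = list(requests)
        matches = []
        while len(pending) >= 2:
            user, area = pending.pop(0)
            for i, (other, other_area) in enumerate(pending):
                if abs(area - other_area) <= max_area_diff_m2:
                    matches.append((user, other))
                    pending.pop(i)
                    break
        return matches
    ```

    Matching on physical-space characteristics means two paired users can share a virtual arena that fits inside both of their real rooms.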
  • Publication number: 20130194304
    Abstract: A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.
    Type: Application
    Filed: February 1, 2012
    Publication date: August 1, 2013
    Inventors: Stephen Latta, Darren Bennett, Peter Tobias Kinnebrew, Kevin Geisner, Brian Mount, Arthur Tomlin, Mike Scavezze, Daniel McCulloch, David Nister, Drew Steedly, Jeffrey Alan Kohler, Ben Sugden, Sebastian Sylvan
  • Publication number: 20130194259
    Abstract: A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 1, 2013
    Inventors: Darren Bennett, Brian Mount, Stephen Latta, Alex Kipman, Ryan Hastings, Arthur Tomlin, Sebastian Sylvan, Daniel McCulloch, Jonathan Steed, Jason Scott, Mathew Lamb
  • Publication number: 20130194164
    Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and, if the intent to interact is determined, then interacting with the executable virtual object.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 1, 2013
    Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
  • Publication number: 20130187835
    Abstract: Embodiments are disclosed that relate to the recognition via a see-through display system of an object displayed on an external display device at which a user of the see-through display system is gazing. For example, one embodiment provides a method of operating a see-through display system comprising acquiring an image of an external display screen located in the background scene via an outward facing image sensor, determining via a gaze detection subsystem a location on the external display screen at which the user is gazing, obtaining an identity of an object displayed on the external display screen at the location determined, and performing an action based upon the identity of the object.
    Type: Application
    Filed: January 25, 2012
    Publication date: July 25, 2013
    Inventors: Ben Vaught, Ben Sugden, Stephen Latta
  • Publication number: 20130174213
    Abstract: A system for automatically sharing virtual objects between different mixed reality environments is described. In some embodiments, a see-through head-mounted display device (HMD) automatically determines a privacy setting associated with another HMD by inferring a particular social relationship with a person associated with the other HMD (e.g., inferring that the person is a friend or acquaintance). The particular social relationship may be inferred by considering the distance to the person associated with the other HMD, the type of environment (e.g., at home or work), and particular physical interactions involving the person (e.g., handshakes or hugs). The HMD may subsequently transmit one or more virtual objects associated with the privacy setting to the other HMD. The HMD may also receive and display one or more other virtual objects from the other HMD based on the privacy setting.
    Type: Application
    Filed: November 29, 2012
    Publication date: July 4, 2013
    Inventors: James Liu, Stephen Latta, Anton O.A. Andrews, Benjamin Isaac Vaught, Sheridan Martin Small
  • Publication number: 20130169682
    Abstract: A system for automatically displaying virtual objects within a mixed reality environment is described. In some embodiments, a see-through head-mounted display device (HMD) identifies a real object (e.g., a person or book) within a field of view of the HMD, detects one or more interactions associated with the real object, and automatically displays virtual objects associated with the real object if the one or more interactions involve touching or satisfy one or more social rules stored in a social rules database. The one or more social rules may be used to infer a particular social relationship by considering the distance to another person, the type of environment (e.g., at home or work), and particular physical interactions (e.g., handshakes or hugs). The virtual objects displayed on the HMD may depend on the particular social relationship inferred (e.g., a friend or acquaintance).
    Type: Application
    Filed: November 29, 2012
    Publication date: July 4, 2013
    Inventors: Christopher Michael Novak, James Liu, Stephen Latta, Anton O.A. Andrews, Craig R. Maitlen, Sheridan Martin
  • Publication number: 20130141421
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Application
    Filed: December 6, 2011
    Publication date: June 6, 2013
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
  • Publication number: 20130141419
    Abstract: A head-mounted display device is configured to visually augment a physical space as observed by a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real-world object from a perspective of the see-through display.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Inventors: Brian Mount, Stephen Latta, Daniel McCulloch, Kevin Geisner, Jason Scott, Jonathan Steed, Arthur Tomlin, Mark Mihelich
  • Publication number: 20130135180
    Abstract: Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.
    Type: Application
    Filed: November 30, 2011
    Publication date: May 30, 2013
    Inventors: Daniel McCulloch, Stephen Latta, Darren Bennett, Ryan Hastings, Jason Scott, Relja Markovic, Kevin Geisner, Jonathan Steed
  • Publication number: 20130127994
    Abstract: Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.
    Type: Application
    Filed: November 17, 2011
    Publication date: May 23, 2013
    Inventors: Mark Mihelich, Kevin Geisner, Mike Scavezze, Stephen Latta, Daniel McCulloch, Brian Mount
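    The bandwidth idea in this abstract, sketched in simplified form (the packet shapes, the 1-in-4 ratio, and the "reuse the last surface" rule are assumptions; the patent describes estimating surface data from the skeleton frames): the capture side sends a compact skeleton every frame but full surface data only every Nth frame, and the receiver fills the gaps.

    ```python
    SURFACE_INTERVAL = 4  # send surface data 1 frame in 4 (assumed ratio)

    def sender_frames(frames):
        """Yield (skeleton, surface_or_None) packets for each captured frame,
        attaching surface data only on every SURFACE_INTERVAL-th frame."""
        for i, (skeleton, surface) in enumerate(frames):
            yield skeleton, surface if i % SURFACE_INTERVAL == 0 else None

    def receiver(packets):
        """Rebuild a full (skeleton, surface) stream, reusing the most
        recently received surface when none was transmitted."""
        last_surface = None
        out = []
        for skeleton, surface in packets:
            if surface is not None:
                last_surface = surface
            out.append((skeleton, last_surface))
        return out
    ```

    Because the skeleton is tiny compared to surface geometry, sending it at the full rate preserves responsive motion while the heavy surface data travels far less often.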
  • Patent number: 8448094
    Abstract: Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: May 21, 2013
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, R. Stephen Polzin, Kudo Tsunoda, Darren Bennett, Stephen Latta, Mark Finocchio, Gregory G. Snook, Relja Markovic
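    The gesture-library-to-legacy-input mapping this abstract describes can be sketched as a pair of lookups (the gesture names and button codes below are invented for the example, not drawn from the patent):

    ```python
    # Gestures the (hypothetical) gesture module can recognize.
    GESTURE_LIBRARY = {"swipe_right", "swipe_left", "push"}

    # Mapping module: gesture -> legacy controller input.
    GESTURE_TO_LEGACY = {
        "swipe_right": "DPAD_RIGHT",
        "swipe_left": "DPAD_LEFT",
        "push": "BUTTON_A",
    }

    def map_natural_input(gesture):
        """Return the legacy controller input for a recognized gesture,
        or None if the gesture is unrecognized or unmapped."""
        if gesture not in GESTURE_LIBRARY:
            return None
        return GESTURE_TO_LEGACY.get(gesture)
    ```

    The legacy system never sees the natural input at all; it simply receives controller inputs it already understands, which is what lets unmodified older software work with a camera-based input device.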
  • Patent number: 8418085
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
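    The "determine those cases where assistance is appropriate" step above could be implemented along these lines (the confidence thresholds and attempt count are illustrative assumptions): watch the gesture filter's confidence outputs and offer help when the user repeatedly comes close to a gesture without completing it.

    ```python
    def needs_assistance(confidences, near=0.5, complete=0.8, min_attempts=3):
        """True if at least `min_attempts` filter readings landed in the
        near-miss band [near, complete) and none completed the gesture."""
        if any(c >= complete for c in confidences):
            return False  # the user performed the gesture; no help needed
        near_misses = sum(1 for c in confidences if near <= c < complete)
        return near_misses >= min_attempts
    ```

    Triggering on repeated near-misses, rather than on any low score, avoids nagging users who simply are not attempting the gesture at all.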