Patents by Inventor Kevin Geisner

Kevin Geisner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130194304
    Abstract: A method for presenting real and virtual images correctly positioned with respect to each other. The method includes, in a first field of view, receiving a first real image of an object and displaying a first virtual image. The method also includes, in a second field of view oriented independently relative to the first field of view, receiving a second real image of the object and displaying a second virtual image, the first and second virtual images positioned coincidently within a coordinate system.
    Type: Application
    Filed: February 1, 2012
    Publication date: August 1, 2013
    Inventors: Stephen Latta, Darren Bennett, Peter Tobias Kinnebrew, Kevin Geisner, Brian Mount, Arthur Tomlin, Mike Scavezze, Daniel McCulloch, David Nister, Drew Steedly, Jeffrey Alan Kohler, Ben Sugden, Sebastian Sylvan
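The coincident positioning this abstract describes can be illustrated with a small sketch: each display transforms a shared world-space anchor into its own, independently oriented view frame, so both virtual images occupy the same point in the common coordinate system. This is an illustrative 2D model with assumed names, not the patented method.

```python
# Two independently oriented fields of view rendering one virtual image
# anchored at a shared world-space position. 2D rotation + translation
# used for brevity; all names here are illustrative.
import math

def world_to_view(point, view_pos, view_yaw):
    """Express a world-space point in a viewer's local frame:
    translate by the viewer position, then rotate by -yaw."""
    dx, dy = point[0] - view_pos[0], point[1] - view_pos[1]
    c, s = math.cos(-view_yaw), math.sin(-view_yaw)
    return (c * dx - s * dy, s * dx + c * dy)

anchor = (2.0, 3.0)                       # shared world-space position
view_a = world_to_view(anchor, (0.0, 0.0), 0.0)
view_b = world_to_view(anchor, (5.0, 0.0), math.pi / 2)
# Each display renders the virtual image at its own view-space position,
# but both positions map back to the same shared world-space anchor.
```

Because both displays derive their view-space positions from the same world-space anchor, the first and second virtual images remain coincident in the common coordinate system however the two fields of view are oriented.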
  • Publication number: 20130194164
    Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable virtual object.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 1, 2013
    Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
  • Patent number: 8487938
    Abstract: Systems, methods and computer readable media are disclosed for grouping complementary sets of standard gestures into gesture libraries. The gestures may be complementary in that they are frequently used together in a context or in that their parameters are interrelated. Where a parameter of a gesture is set with a first value, all other parameters of the gesture and of other gestures in the gesture package that depend on the first value may be set with their own value which is determined using the first value.
    Type: Grant
    Filed: February 23, 2009
    Date of Patent: July 16, 2013
    Assignee: Microsoft Corporation
    Inventors: Stephen G. Latta, Kudo Tsunoda, Kevin Geisner, Relja Markovic, Darren Alexander Bennett
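The parameter interdependence this abstract describes, where setting one gesture parameter determines the values of dependent parameters elsewhere in the package, can be sketched as follows. This is an illustrative model under hypothetical names, not Microsoft's implementation.

```python
# Sketch of a gesture package whose parameters are interrelated: setting
# one parameter propagates derived values to every parameter that depends
# on it, as the abstract describes. All names here are hypothetical.

class GesturePackage:
    def __init__(self):
        self._params = {}          # parameter name -> value
        self._derivations = {}     # dependent name -> (source name, fn)

    def derive(self, dependent, source, fn):
        """Register that `dependent` is computed from `source` via `fn`."""
        self._derivations[dependent] = (source, fn)

    def set_param(self, name, value):
        """Set a parameter and recompute every parameter derived from it."""
        self._params[name] = value
        for dep, (src, fn) in self._derivations.items():
            if src == name:
                self.set_param(dep, fn(value))   # cascade transitively

    def get_param(self, name):
        return self._params[name]


# Example: a "throw" gesture's release velocity is tied to a "swing"
# gesture's arm-speed threshold, keeping complementary gestures consistent.
package = GesturePackage()
package.derive("throw.release_velocity", "swing.arm_speed_threshold",
               lambda v: v * 1.5)
package.set_param("swing.arm_speed_threshold", 2.0)
print(package.get_param("throw.release_velocity"))  # 3.0
```

Tuning a single first value thus configures the whole complementary set at once, which is the convenience the gesture-library grouping is meant to provide.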
  • Publication number: 20130177296
    Abstract: A system and method for efficiently managing life experiences captured by one or more sensors (e.g., video or still camera, image sensors including RGB sensors and depth sensors). A “life recorder” is a recording device that continuously captures life experiences, including unanticipated life experiences, in image, video, and/or audio recordings. In some embodiments, video and/or audio recordings captured by a life recorder are automatically analyzed, tagged with a set of one or more metadata, indexed, and stored for future use. By tagging and indexing life recordings, a life recorder may search for and acquire life recordings generated by itself or another life recorder, thereby allowing life experiences to be shared minutes or even years later.
    Type: Application
    Filed: November 29, 2012
    Publication date: July 11, 2013
    Inventors: Kevin A. Geisner, Relja Markovic, Stephen G. Latta, Daniel McCulloch
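The tag-and-index pipeline this abstract describes can be sketched with a minimal in-memory index. A real life recorder would analyze the video and audio to generate the metadata tags automatically; here they are supplied by hand, and all names are illustrative.

```python
# Minimal sketch of tagging recordings with metadata, indexing them, and
# searching the index later, per the abstract. Hypothetical data model.
from collections import defaultdict

class RecordingIndex:
    def __init__(self):
        self._by_tag = defaultdict(list)

    def add(self, recording_id, tags):
        """Index a recording under each of its metadata tags."""
        for tag in tags:
            self._by_tag[tag].append(recording_id)

    def search(self, tag):
        """Return ids of all recordings carrying the given tag."""
        return list(self._by_tag[tag])

index = RecordingIndex()
index.add("clip-001", ["beach", "sunset", "2012-07-04"])
index.add("clip-002", ["beach", "birthday"])
print(index.search("beach"))  # ['clip-001', 'clip-002']
```

Because the index is keyed by tag rather than by capture time, a recording of an unanticipated moment can be retrieved minutes or years later, by this recorder or another one, which is the sharing scenario the abstract highlights.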
  • Publication number: 20130154958
    Abstract: A controller for a content presentation and interaction system which includes a primary content presentation device. The controller includes a tactile control input and a touch screen control input. The tactile control input is responsive to the inputs of a first user and communicatively coupled to the content presentation device; it comprises a plurality of tactile input mechanisms and provides a first set of the plurality of control inputs for manipulating content. The touch screen control input is likewise responsive to the inputs of the first user and communicatively coupled to the content presentation device. It is proximate the tactile control input and provides a second set of the plurality of control inputs, including alternative inputs for at least some of the controls and additional inputs not available using the tactile input mechanisms.
    Type: Application
    Filed: December 20, 2011
    Publication date: June 20, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: John Clavin, Kenneth A. Lobb, Christopher M. Novak, Kevin Geisner, Christian Klein
  • Patent number: 8465108
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: June 18, 2013
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
  • Publication number: 20130141434
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco, Jr.
  • Publication number: 20130141419
    Abstract: A head-mounted display device is configured to visually augment an observed physical space to a user. The head-mounted display device includes a see-through display and is configured to receive augmented display information, such as a virtual object with occlusion relative to a real world object from a perspective of the see-through display.
    Type: Application
    Filed: December 1, 2011
    Publication date: June 6, 2013
    Inventors: Brian Mount, Stephen Latta, Daniel McCulloch, Kevin Geisner, Jason Scott, Jonathan Steed, Arthur Tomlin, Mark Mihelich
  • Publication number: 20130137076
    Abstract: Technology disclosed herein provides for the use of HMDs in a classroom setting, including for holographic instruction. In one embodiment, the HMD is used for social coaching. User profile information may be used to tailor instruction to a specific user based on known skills, learning styles, and/or characteristics. One or more individuals may be monitored based on sensor data, which may come from an HMD. The monitoring may be analyzed to determine how to enhance an experience, such as by presenting an image in at least one head-mounted display worn by the one or more individuals.
    Type: Application
    Filed: November 30, 2011
    Publication date: May 30, 2013
    Inventors: Kathryn Stone Perez, Kevin A. Geisner, Ben J. Sugden, Daniel J. McCulloch, John Clavin
  • Publication number: 20130135180
    Abstract: Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.
    Type: Application
    Filed: November 30, 2011
    Publication date: May 30, 2013
    Inventors: Daniel McCulloch, Stephen Latta, Darren Bennett, Ryan Hastings, Jason Scott, Relja Markovic, Kevin Geisner, Jonathan Steed
  • Patent number: 8451278
    Abstract: It may be desirable to apply corrective data to aspects of a captured image or a user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by a capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated from the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like.
    Type: Grant
    Filed: August 3, 2012
    Date of Patent: May 28, 2013
    Assignee: Microsoft Corporation
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
  • Publication number: 20130127994
    Abstract: Optical sensor information captured via one or more optical sensors imaging a scene that includes a human subject is received by a computing device. The optical sensor information is processed by the computing device to model the human subject with a virtual skeleton, and to obtain surface information representing the human subject. The virtual skeleton is transmitted by the computing device to a remote computing device at a higher frame rate than the surface information. Virtual skeleton frames are used by the remote computing device to estimate surface information for frames that have not been transmitted by the computing device.
    Type: Application
    Filed: November 17, 2011
    Publication date: May 23, 2013
    Inventors: Mark Mihelich, Kevin Geisner, Mike Scavezze, Stephen Latta, Daniel McCulloch, Brian Mount
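The bandwidth-saving scheme this abstract describes can be sketched as follows: skeleton frames arrive every frame, surface (mesh) frames arrive less often, and the receiver estimates the surface for untransmitted frames by moving the last received mesh along with the skeleton. The data model here, 2D points with each vertex bound to a single joint, is a simplifying assumption for illustration only.

```python
# Estimating surface data for frames where only the virtual skeleton was
# transmitted: shift each surface vertex by the displacement of the
# skeleton joint it is bound to (a crude one-bone linear blend).

def estimate_surface(last_surface, last_skeleton, current_skeleton):
    """Return an estimated surface for the current frame, given the last
    received surface and the skeletons for both frames."""
    estimated = []
    for vertex, joint in last_surface:
        dx = current_skeleton[joint][0] - last_skeleton[joint][0]
        dy = current_skeleton[joint][1] - last_skeleton[joint][1]
        estimated.append(((vertex[0] + dx, vertex[1] + dy), joint))
    return estimated

# Skeleton: joint name -> 2D position. Surface: list of (vertex, bound joint).
skel_t0 = {"hand": (0.0, 1.0)}
skel_t1 = {"hand": (0.5, 1.2)}          # skeleton arrives every frame
surface_t0 = [((0.1, 1.0), "hand")]     # surface arrives only occasionally
print(estimate_surface(surface_t0, skel_t0, skel_t1))
```

Since the skeleton is far smaller than the surface data, transmitting it at the higher frame rate keeps the remote rendering smooth while the full mesh is refreshed only periodically.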
  • Publication number: 20130114043
    Abstract: The technology provides various embodiments for controlling the brightness of a see-through, near-eye mixed reality display device based on the light intensity of what the user is gazing at. The opacity of the display can be altered so that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust both the brightness used to display images and the opacity of the display. A suitable balance between opacity and display brightness may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.
    Type: Application
    Filed: November 4, 2011
    Publication date: May 9, 2013
    Inventors: Alexandru O. Balan, Ryan L. Hastings, Stephen G. Latta, Michael J. Scavezze, Daniel J. McCulloch, Derek L. Knee, Brian J. Mount, Kevin A. Geisner, Robert L. Crocco
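The opacity/brightness balancing this abstract describes can be sketched with a simple function: brighter surroundings raise the display's opacity to block external light, while pupil dilation dims the displayed images to avoid discomfort. The thresholds and linear mappings below are illustrative assumptions, not values from the patent.

```python
# Hedged sketch of balancing display opacity against image brightness
# from ambient light intensity and the wearer's measured pupil size.

def balance_display(ambient_nits, pupil_mm,
                    max_ambient=10000.0, min_pupil=2.0, max_pupil=8.0):
    """Return (opacity, brightness), each in [0, 1]."""
    # Brighter surroundings -> higher opacity (more external light blocked).
    opacity = min(1.0, ambient_nits / max_ambient)
    # A constricted pupil (bright adaptation) tolerates a brighter display;
    # a dilated pupil calls for dimming to avoid discomfort.
    dilation = min(1.0, max(0.0, (pupil_mm - min_pupil) / (max_pupil - min_pupil)))
    brightness = max(0.1, 1.0 - 0.7 * dilation)
    return opacity, brightness

# Gazing at a sunlit window with constricted pupils: high opacity,
# near-full image brightness so virtual objects stay legible.
print(balance_display(ambient_nits=8000.0, pupil_mm=2.5))
```

A production system would update both values continuously from the gaze-tracked region's measured luminance rather than a single ambient figure, but the trade-off between blocking real light and rendering legible virtual light is the same.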
  • Publication number: 20130095924
    Abstract: Technology is described for providing a personalized sport performance experience with three dimensional (3D) virtual data displayed by a near-eye, augmented reality display of a personal audiovisual (A/V) apparatus. A physical movement recommendation is determined for the user performing a sport based on skills data for the user for the sport, physical characteristics of the user, and 3D space positions for at least one or more sport objects. 3D virtual data depicting one or more visual guides for assisting the user in performing the physical movement recommendation may be displayed from a user perspective associated with a display field of view of the near-eye AR display. An avatar performing the sport may also be displayed by the near-eye AR display. The avatar may perform the sport interactively with the user, or may be displayed replaying a prior performance of the individual it represents.
    Type: Application
    Filed: September 28, 2012
    Publication date: April 18, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, John Clavin
  • Patent number: 8418085
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, and may not know which gestures are applicable for an executing application or how to perform them. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user in performing a gesture is appropriate.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
  • Publication number: 20130083062
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Darren Bennett, Daniel J. McCulloch, John Clavin, Jason Scott
  • Publication number: 20130083009
    Abstract: The technology described herein includes a see-through, near-eye, mixed reality display device for providing customized experiences for a user. The personal A/V apparatus serves as an exercise program that is always with the user, provides motivation, visually shows the user how to exercise, and lets the user exercise with other people who are not present.
    Type: Application
    Filed: March 30, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman
  • Publication number: 20130085345
    Abstract: A system provides a recommendation of food items to a user based on nutritional preferences of the user, using a head-mounted display device (HMDD) worn by the user. In a store, a forward-facing camera of the HMDD captures an image of a food item. The food item can be identified by the image, such as based on packaging of the food item. Nutritional parameters of the food item are compared to nutritional preferences of the user to determine whether the food item is recommended. The HMDD displays an augmented reality image to the user indicating whether the food item is recommended. If the food item is not recommended, a substitute food item can be identified. The nutritional preferences can indicate food allergies, preferences for low calorie foods and so forth. In a restaurant, the HMDD can recommend menu selections for a user.
    Type: Application
    Filed: March 30, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Cameron G. Brown, Holly A. Hirzel, Brian J. Mount, Daniel McCulloch
  • Publication number: 20130083007
    Abstract: A system for generating an augmented reality environment in association with one or more attractions or exhibits is described. In some cases, a see-through head-mounted display device (HMD) may acquire one or more virtual objects from a supplemental information provider associated with a particular attraction. The one or more virtual objects may be based on whether an end user of the HMD is waiting in line for the particular attraction or is on (or in) the particular attraction. The supplemental information provider may vary the one or more virtual objects based on the end user's previous experiences with the particular attraction. The HMD may adapt the one or more virtual objects based on physiological feedback from the end user (e.g., if a child is scared). The supplemental information provider may also provide and automatically update a task list associated with the particular attraction.
    Type: Application
    Filed: March 27, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Daniel J. McCulloch, Arthur C. Tomlin, Jennifer A. Karr
  • Publication number: 20130083173
    Abstract: Technology is described for providing a virtual spectator experience for a user of a personal A/V apparatus including a near-eye, augmented reality (AR) display. A position volume of an event object participating in an event in a first 3D coordinate system for a first location is received and mapped to a second position volume in a second 3D coordinate system at a second location remote from where the event is occurring. A display field of view of the near-eye AR display at the second location is determined, and real-time 3D virtual data representing the one or more event objects which are positioned within the display field of view are displayed in the near-eye AR display. A user may select a viewing position from which to view the event. Additionally, virtual data of a second user may be displayed at a position relative to a first user.
    Type: Application
    Filed: June 29, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Kathryn Stone Perez, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Michael J. Scavezze, Daniel J. McCulloch, Darren Bennett, Jason Scott, Ryan L. Hastings, Brian E. Keane, Christopher E. Miles, Robert L. Crocco, Jr., Mathew J. Lamb