Patents by Inventor Kevin Geisner
Kevin Geisner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20110175809
Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
Type: Application
Filed: January 15, 2010
Publication date: July 21, 2011
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Jr., Brian S. Murphy, Shawn C. Wright
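The adaptive wait time described above could take many forms; the patent does not disclose one. As a purely illustrative sketch (the function name, constants, and center-versus-edge heuristic are all assumptions, not taken from the filing), the wait before activating a newly detected person might shrink when they first appear near the center of the field of view and grow near the edges, where people often just pass through:

```python
# Hypothetical sketch of an adaptive activation wait time based on a
# person's first detected position. All names and constants are
# illustrative assumptions, not taken from the patent.

BASE_WAIT_S = 2.0   # default wait before activating a newly detected person
FIELD_WIDTH = 640   # assumed sensor field-of-view width in pixels

def activation_wait(first_x: float) -> float:
    """Return the wait time in seconds before activating a new person,
    based on the horizontal position where they were first detected."""
    center = FIELD_WIDTH / 2
    # 0.0 at the center of the view, 1.0 at either edge
    edge_factor = abs(first_x - center) / center
    # wait between 0.5x and 1.5x the base, longer near the edges
    return BASE_WAIT_S * (0.5 + edge_factor)
```

A symmetric rule (keyed to the last detected position) could govern deactivation, per the abstract.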
-
Publication number: 20110175801
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Application
Filed: January 15, 2010
Publication date: July 21, 2011
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
-
Patent number: 7961174
Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
Type: Grant
Filed: July 30, 2010
Date of Patent: June 14, 2011
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Brian S. Murphy, Shawn C. Wright
-
Publication number: 20110109617
Abstract: An image such as a depth image of a scene may be received, observed, or captured by a device. The image may then be analyzed to identify one or more targets within the scene. When a target is identified, vertices may be generated. A mesh model may then be created by drawing lines that connect the vertices. Additionally, a depth value may also be calculated for each vertex. The depth values of the vertices may then be used to extrude the mesh model such that the mesh model may represent the target in a three-dimensional virtual world. A colorization scheme, a texture, lighting effects, or the like may also be applied to the mesh model to convey the depth the virtual object may have in the virtual world.
Type: Application
Filed: November 12, 2009
Publication date: May 12, 2011
Applicant: Microsoft Corporation
Inventors: Gregory Nelson Snook, Relja Markovic, Stephen Gilchrist Latta, Kevin Geisner
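The vertex-and-extrusion pipeline in this abstract maps onto a standard height-field-to-mesh construction. The following is only a minimal sketch of that general technique, not the patented implementation: one vertex per depth sample, two triangles per grid cell, and each vertex pushed along z by its depth value.

```python
# Illustrative sketch (not the patented method) of turning a depth image
# into an extruded triangle mesh: one vertex per sample, two triangles per
# grid cell, with each vertex extruded along z by its depth value.

def depth_to_mesh(depth, scale=1.0):
    """depth: 2D list of depth samples. Returns (vertices, triangles),
    where each vertex is (x, y, z) and each triangle is a vertex-index triple."""
    rows, cols = len(depth), len(depth[0])
    # One vertex per depth sample, extruded along z by its depth value.
    vertices = [(x, y, depth[y][x] * scale)
                for y in range(rows) for x in range(cols)]
    triangles = []
    for y in range(rows - 1):
        for x in range(cols - 1):
            i = y * cols + x          # top-left corner of this grid cell
            # Two triangles per cell, connecting neighbouring vertices.
            triangles.append((i, i + 1, i + cols))
            triangles.append((i + 1, i + cols + 1, i + cols))
    return vertices, triangles
```

Texturing and lighting, mentioned at the end of the abstract, would then be applied per-vertex or per-face by the renderer.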
-
Publication number: 20110099476
Abstract: Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as, for example, a color, a texture, an object, or a visual effect for decorating in a display environment. The user can also gesture to select a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user.
Type: Application
Filed: October 23, 2009
Publication date: April 28, 2011
Applicant: Microsoft Corporation
Inventors: Gregory N. Snook, Relja Markovic, Stephen G. Latta, Kevin Geisner, Christopher Vuchetich, Darren Alexander Bennett, Arthur Charles Tomlin, Joel Deaguero, Matt Puls, Matt Coohill, Ryan Hastings, Kate Kolesar, Brian Scott Murphy
-
Publication number: 20110055846
Abstract: A capture device can detect gestures made by a user. The gestures can be used to control a gesture-unaware program.
Type: Application
Filed: August 31, 2009
Publication date: March 3, 2011
Applicant: Microsoft Corporation
Inventors: Kathryn S. Perez, Kevin A. Geisner, Alex A. Kipman, Kudo Tsunoda
-
Publication number: 20110035666
Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
Type: Application
Filed: October 27, 2010
Publication date: February 10, 2011
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett
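The "deltas" the abstract mentions amount to per-joint distances between the user's tracked pose and an ideal gesture pose. As a minimal illustrative sketch (the joint names, coordinate format, and threshold are assumptions, not from the filing), a feedback system might flag only the joints that are far enough off to warrant highlighting:

```python
# Minimal sketch of computing per-joint "deltas" between a user's actual
# pose and an ideal gesture pose. Joint names, the (x, y, z) format, and
# the threshold are illustrative assumptions.
import math

def pose_deltas(actual, ideal, threshold=0.1):
    """actual, ideal: dicts mapping joint name -> (x, y, z) in metres.
    Returns {joint: distance} for joints further than threshold from ideal."""
    deltas = {}
    for joint, ideal_pos in ideal.items():
        dist = math.dist(actual[joint], ideal_pos)
        if dist > threshold:
            deltas[joint] = dist
    return deltas
```

A ghosted image or skeletal overlay, as the abstract suggests, could then emphasize the flagged joints.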
-
Publication number: 20100306715
Abstract: Systems, methods and computer readable media are disclosed for gesture input beyond skeletal. A user's movement or body position is captured by a capture device of a system. Further, non-user-position data is received by the system, such as controller input by the user, an item that the user is wearing, a prop under the control of the user, or a second user's movement or body position. The system incorporates both the user-position data and the non-user-position data to determine one or more inputs the user made to the system.
Type: Application
Filed: May 29, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Stephen Latta, Relja Markovic, Gregory N. Snook
-
Publication number: 20100306261
Abstract: Systems, methods and computer readable media are disclosed for localized gesture aggregation. In a system where user movement is captured by a capture device to provide gesture input to the system, demographic information regarding users, as well as data corresponding to how those users respectively make various gestures, is gathered. When a new user begins to use the system, his demographic information is analyzed to determine a most likely way that he will attempt to make, or find it easy to make, a given gesture. That most likely way is then used to process the new user's gesture input.
Type: Application
Filed: May 29, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Stephen Latta, Gregory N. Snook, Relja Markovic
-
Publication number: 20100306714
Abstract: Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred, and indicates to the application as such.
Type: Application
Filed: May 29, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Stephen Latta, Kevin Geisner, John Clavin, Kudo Tsunoda, Kathryn Stone Perez, Alex Kipman, Relja Markovic, Gregory N. Snook
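The single-report behaviour in the last sentence is essentially event deduplication: if the shortcut form fires and the full form then completes within the same performance, only one gesture event reaches the application. A hypothetical sketch of that rule (the event representation and the one-second window are invented for this example):

```python
# Hypothetical sketch of reporting one gesture when both the shortcut and
# the full version fire during a single performance. The (timestamp, form)
# event shape and the window length are illustrative assumptions.

def dedupe_gesture_events(events, window=1.0):
    """events: list of (timestamp, form) with form in {"shortcut", "full"}.
    Returns timestamps at which a gesture should be reported, counting a
    shortcut followed by its full version within `window` as one gesture."""
    reported = []
    last_shortcut = None
    for t, form in events:
        if form == "shortcut":
            reported.append(t)
            last_shortcut = t
        elif form == "full":
            # Suppress the full version if its shortcut was just reported.
            if last_shortcut is not None and t - last_shortcut <= window:
                last_shortcut = None
            else:
                reported.append(t)
    return reported
```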
-
Publication number: 20100306713
Abstract: Systems, methods and computer readable media are disclosed for a gesture tool. A capture device captures user movement and provides corresponding data to a gesture recognizer engine and an application. The data is then parsed to determine whether it satisfies one or more gesture filters, each filter corresponding to a user-performed gesture. The data and the information about the filters are also sent to a gesture tool, which displays aspects of the data and filters. In response to user input corresponding to a change in a filter, the gesture tool sends an indication of the change to the gesture recognizer engine and application, where that change takes effect.
Type: Application
Filed: May 29, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Stephen Latta, Gregory N. Snook, Relja Markovic, Arthur Charles Tomlin, Mark Mihelich, Kyungsuk David Lee, David Jason Christopher Horbach, Matthew Jon Puls
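The core idea of a tunable gesture filter can be sketched very simply. This is an illustration only; the "throw" gesture, its speed parameter, and the class shape are invented for the example, not drawn from the patent:

```python
# Illustrative sketch of a gesture filter with a tunable parameter that a
# gesture tool could adjust at runtime. The "throw" filter and its
# min_speed threshold are invented for this example.

class GestureFilter:
    def __init__(self, name, min_speed):
        self.name = name
        self.min_speed = min_speed  # tunable threshold, adjustable by a tool

    def matches(self, hand_speed):
        """Return True if the captured hand speed satisfies this filter."""
        return hand_speed >= self.min_speed
```

A gesture tool, as described, would visualize recent capture data against each filter and push threshold changes like `filter.min_speed = new_value` back to the recognizer engine.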
-
Publication number: 20100306712
Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.
Type: Application
Filed: May 29, 2009
Publication date: December 2, 2010
Applicant: Microsoft Corporation
Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
-
Publication number: 20100281432
Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions or not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
Type: Application
Filed: May 1, 2009
Publication date: November 4, 2010
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett
-
Publication number: 20100278393
Abstract: A system may receive image data and capture motion with respect to a target in a physical space and recognize a gesture from the captured motion. It may be desirable to isolate aspects of captured motion to differentiate random and extraneous motions. For example, a gesture may comprise motion of a user's right arm, and it may be desirable to isolate the motion of the user's right arm and exclude an interpretation of any other motion. Thus, the isolated aspect may be the focus of the received data for gesture recognition. Alternatively, the isolated aspect may be an aspect of the captured motion that is removed from consideration when identifying a gesture from the captured motion. For example, gesture filters may be modified to correspond to the user's natural lean to eliminate the effect the lean has on the registry of a motion with a gesture filter.
Type: Application
Filed: May 29, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Gregory Nelson Snook, Relja Markovic, Stephen Gilchrist Latta, Kevin Geisner
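The lean-compensation example at the end of the abstract is, in essence, a calibration step: measure the user's resting lean and subtract it from captured torso angles before gesture matching. A hypothetical sketch under that reading (degrees, the averaging scheme, and the function names are all assumptions):

```python
# Hypothetical sketch of removing a user's natural lean from captured
# torso angles before gesture matching. Units (degrees) and the averaging
# calibration are illustrative assumptions, not from the patent.

def calibrate_lean(resting_angles):
    """Average torso angle (degrees) observed while the user stands still."""
    return sum(resting_angles) / len(resting_angles)

def compensate(angle, lean):
    """Remove the natural lean from a captured torso angle."""
    return angle - lean
```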
-
Publication number: 20100281438
Abstract: Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. The input may be input into a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed for detecting a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data for determining an intended gesture input for the user. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are input into the game or application and cause a view perspective within the display environment to be altered.
Type: Application
Filed: May 29, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Stephen G. Latta, Gregory N. Snook, Justin McBride, Arthur Charles Tomlin, Peter Sarrett, Kevin Geisner, Relja Markovic, Christopher Vuchetich
-
Publication number: 20100277489
Abstract: It may be desirable to apply corrective data to aspects of a captured image or the user-performed gesture for display of a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confining virtual movement to defined spaces, or the like.
Type: Application
Filed: May 1, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
-
Publication number: 20100281439
Abstract: Systems, methods and computer readable media are disclosed for controlling perspective of a camera-controlled computer. A capture device captures user gestures and sends corresponding data to a recognizer engine. The recognizer engine analyzes the data with a plurality of filters, each filter corresponding to a gesture. Based on the output of those filters, a perspective control is determined, and a display device displays a new perspective corresponding to the perspective control.
Type: Application
Filed: May 29, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Gregory N. Snook, Stephen Latta, Kevin Geisner, Johnny Lee, Adam Jethro Langridge
-
Publication number: 20100238182
Abstract: In applications that display a representation of a user, it may be reasonable to insert a pre-canned animation rather than animating a user's captured motion. For example, in a tennis swing, the ball toss and take-back in a serve could be a pre-canned animation, whereas the actual forward swing may be mapped from the user's gestures. An animation of a user's gestures can be chained together into sequences with pre-canned animations, where animation blending techniques can provide for a smoother transition between the animation types. Techniques for blending animations, which may comprise determining boundaries and transition points between pre-canned animations and animations based on captured motion, may improve animation efficiency. Gesture history, including joint position, velocity, and acceleration, can be used to determine user intent, seed parameters for subsequent animations and game control, and determine the subsequent gestures to initiate.
Type: Application
Filed: March 20, 2009
Publication date: September 23, 2010
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook
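The simplest form of the blending the abstract describes is linear interpolation of joint positions over a short transition window at the boundary between the pre-canned clip and the captured motion. This sketch shows only that general technique; the 2D joint format and step count are assumptions, not details from the patent:

```python
# Minimal sketch of blending at the boundary between a pre-canned
# animation pose and a pose mapped from captured motion, via linear
# interpolation. The (x, y) joint format and step count are assumptions.

def blend_frames(canned, captured, alpha):
    """Linearly interpolate two poses (lists of (x, y) joint positions).
    alpha=0 gives the pre-canned pose, alpha=1 the captured pose."""
    return [((1 - alpha) * cx + alpha * ux, (1 - alpha) * cy + alpha * uy)
            for (cx, cy), (ux, uy) in zip(canned, captured)]

def transition(canned_pose, captured_pose, steps):
    """Yield `steps` intermediate poses from canned to captured motion."""
    for i in range(1, steps + 1):
        yield blend_frames(canned_pose, captured_pose, i / steps)
```

In the tennis example, such a transition would run over the few frames between the end of the pre-canned take-back and the start of the user-driven forward swing.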
-
Publication number: 20100241998
Abstract: Systems, methods and computer readable media are disclosed for manipulating virtual objects. A user may utilize a controller, such as his hand, in physical space to associate with a cursor in a virtual environment. As the user manipulates the controller in physical space, this is captured by a depth camera. The image data from the depth camera is parsed to determine how the controller is manipulated, and a corresponding manipulation of the cursor is performed in virtual space. Where the cursor interacts with a virtual object in the virtual space, that virtual object is manipulated by the cursor.
Type: Application
Filed: March 20, 2009
Publication date: September 23, 2010
Applicant: Microsoft Corporation
Inventors: Stephen G. Latta, Kevin Geisner, Relja Markovic, Darren Alexander Bennett, Arthur Charles Tomlin
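Associating a tracked hand with a cursor typically means mapping positions inside a physical interaction volume to screen coordinates. The following is a hypothetical sketch of such a mapping only; the interaction-volume bounds, screen size, and clamping behaviour are illustrative assumptions:

```python
# Hypothetical sketch of mapping a tracked hand position (metres, within
# an assumed interaction volume in front of a depth camera) to a screen
# cursor. Volume bounds, screen size, and clamping are assumptions.

def hand_to_cursor(hand_x, hand_y, screen_w=1920, screen_h=1080,
                   vol_min=(-0.5, 0.0), vol_max=(0.5, 1.0)):
    """Map a hand position to pixel coordinates, clamped to the screen."""
    nx = (hand_x - vol_min[0]) / (vol_max[0] - vol_min[0])
    ny = (hand_y - vol_min[1]) / (vol_max[1] - vol_min[1])
    px = min(max(nx, 0.0), 1.0) * (screen_w - 1)
    py = (1.0 - min(max(ny, 0.0), 1.0)) * (screen_h - 1)  # screen y grows downward
    return round(px), round(py)
```

Hit-testing the resulting cursor position against virtual objects would then drive the object manipulation the abstract describes.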
-
Publication number: 20100199228
Abstract: Systems, methods and computer readable media are disclosed for gesture keyboarding. A user makes a gesture by either making a pose or moving in a pre-defined way that is captured by a depth camera. The depth information provided by the depth camera is parsed to determine at least that part of the user that is making the gesture. When parsed, the character or action signified by this gesture is identified.
Type: Application
Filed: February 23, 2009
Publication date: August 5, 2010
Applicant: Microsoft Corporation
Inventors: Stephen G. Latta, Kudo Tsunoda, Kevin Geisner, Relja Markovic, Darren Alexander Bennett, Kathryn Stone Perez