Patents by Inventor Kevin Geisner
Kevin Geisner has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 8317623
Abstract: One or more physical characteristics of each of multiple users are detected. These physical characteristics of a user can include physical attributes of the user (e.g., the user's height, the length of the user's legs) and/or physical skills of the user (e.g., how high the user can jump). Based on these detected physical characteristics, two or more of the multiple users are identified to share an online experience (e.g., play a multi-player game).
Type: Grant
Filed: June 6, 2011
Date of Patent: November 27, 2012
Assignee: Microsoft Corporation
Inventors: Brian Scott Murphy, Stephen G. Latta, Darren Alexander Bennett, Pedro Perez, Shawn C. Wright, Relja Markovic, Joel B. Deaguero, Christopher H. Willoughby, Ryan Lucas Hastings, Kevin Geisner
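The matchmaking idea in this abstract can be sketched as a greedy pairing over numeric physical attributes. A minimal illustration, assuming hypothetical `height` and `jump` attributes and a simple inverse-distance similarity; none of these names or weights come from the patent itself.

```python
# Illustrative sketch only: pair users whose physical attributes are most
# similar, so matched users get a comparable shared online experience.
# The attribute names ("height", "jump") are assumptions, not from the patent.

def similarity(a, b):
    """Inverse of the summed attribute distance between two users."""
    dist = sum(abs(a[k] - b[k]) for k in ("height", "jump"))
    return 1.0 / (1.0 + dist)

def match_pairs(users):
    """Greedily pair each remaining user with the most similar other user."""
    unmatched = list(users)
    pairs = []
    while len(unmatched) >= 2:
        first = unmatched.pop(0)
        best = max(unmatched, key=lambda u: similarity(first, u))
        unmatched.remove(best)
        pairs.append((first["name"], best["name"]))
    return pairs

players = [
    {"name": "A", "height": 180, "jump": 40},
    {"name": "B", "height": 150, "jump": 25},
    {"name": "C", "height": 178, "jump": 42},
    {"name": "D", "height": 152, "jump": 24},
]
print(match_pairs(players))  # pairs A with C, then B with D
```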
-
Publication number: 20120293518
Abstract: It may be desirable to apply corrective data to aspects of a captured image or of a user-performed gesture so that the displayed visual representation corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like.
Type: Application
Filed: August 3, 2012
Publication date: November 22, 2012
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
-
Patent number: 8284157
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Grant
Filed: January 15, 2010
Date of Patent: October 9, 2012
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
-
Patent number: 8253746
Abstract: It may be desirable to apply corrective data to aspects of a captured image or of a user-performed gesture so that the displayed visual representation corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated based on the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like.
Type: Grant
Filed: May 1, 2009
Date of Patent: August 28, 2012
Assignee: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
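One of the corrections named in this abstract, confining virtual movement to defined spaces, can be pictured as clamping each joint of a skeletal model into a bounding box before animation. A minimal sketch under that assumption; the joint names and box bounds are illustrative, not taken from the patent.

```python
# Illustrative sketch: confine a skeletal model's movement to a defined space
# by clamping every (x, y, z) joint position into an axis-aligned box.
# Joint names and bounds are hypothetical examples.

def clamp(v, lo, hi):
    """Restrict a scalar to the inclusive range [lo, hi]."""
    return max(lo, min(hi, v))

def confine_model(joints, box):
    """Clamp each joint position into the defined space before animation."""
    (x0, x1), (y0, y1), (z0, z1) = box
    return {
        name: (clamp(x, x0, x1), clamp(y, y0, y1), clamp(z, z0, z1))
        for name, (x, y, z) in joints.items()
    }

skeleton = {"head": (0.1, 2.4, 0.0), "hand_left": (-1.6, 1.0, 0.3)}
space = ((-1.0, 1.0), (0.0, 2.0), (-1.0, 1.0))
print(confine_model(skeleton, space))
# head is pulled down to y = 2.0; the left hand is pulled in to x = -1.0
```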
-
Publication number: 20120206452
Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three dimensional occlusion determined based on a three dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size or position with respect to a point of gaze. Technology is also described for providing three dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
Type: Application
Filed: April 10, 2012
Publication date: August 16, 2012
Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
-
Publication number: 20120165096
Abstract: A computing system runs an application (e.g., a video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
Type: Application
Filed: March 2, 2012
Publication date: June 28, 2012
Applicant: Microsoft Corporation
Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
-
Publication number: 20120157198
Abstract: Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
Type: Application
Filed: December 21, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Rhett Mathis, Matthew Monson, David Gierok, William Paul Giese, Darrin Brown, Cam McRae, David Seymour, William Axel Olsen, Matthew Searcy
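One plausible way a virtual skeleton could drive a simulation is to read a steering input from the two hand joints, as if the player held an imaginary wheel. This is purely an illustrative sketch; the joint layout, sign convention, and the use of hand height difference are my assumptions, not details from the publication.

```python
# Illustrative sketch: derive a steering angle from two hand-joint positions
# of a tracked skeleton. Positive angle = steering right (right hand lower).
# Joint coordinates are hypothetical (x, y) pairs in meters.
import math

def steering_angle(left_hand, right_hand):
    """Angle in degrees of the line between the hands; 0 means a level wheel."""
    dx = right_hand[0] - left_hand[0]          # lateral hand separation
    dy = left_hand[1] - right_hand[1]          # positive when right hand lower
    return math.degrees(math.atan2(dy, dx))

# Hands level: no steering input.
print(round(steering_angle((-0.3, 1.2), (0.3, 1.2)), 1))  # 0.0
# Right hand dropped below the left: steer right.
print(round(steering_angle((-0.3, 1.3), (0.3, 1.1)), 1))  # 18.4
```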
-
Publication number: 20120157200
Abstract: Implementations for identifying, capturing, and presenting high-quality photo-representations of acts occurring during play of a game that employs motion-tracking input technology are disclosed. As one example, a method is disclosed that includes capturing, via an optical interface, a plurality of photographs of a player in a capture volume during play of the electronic game. The method further includes, for each captured photograph of the plurality of captured photographs, comparing an event-based scoring parameter to an event depicted by or corresponding to the captured photograph. The method further includes assigning respective scores to the plurality of captured photographs based, at least in part, on the comparison to the event-based scoring parameter. The method further includes associating the captured photographs on an electronic storage medium with the respective scores assigned to the captured photographs.
Type: Application
Filed: December 21, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Mike Scavezze, Arthur Tomlin, Relja Markovic, Stephen Latta, Kevin Geisner
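The event-based scoring step can be pictured as rating each photograph by its proximity to a scored game event. A minimal sketch under assumed data shapes; the field names (`t`, `value`) and the decay function are hypothetical, not taken from the publication.

```python
# Illustrative sketch: assign each captured photo a score that decays with
# the time gap to the nearest game event, scaled by that event's value.
# All field names and the scoring formula are assumptions for illustration.

def score_photos(photos, events):
    """Attach to each photo the best score over all events."""
    scored = []
    for p in photos:
        best = max(
            events,
            key=lambda e: e["value"] / (1.0 + abs(p["t"] - e["t"])),
        )
        score = best["value"] / (1.0 + abs(p["t"] - best["t"]))
        scored.append({**p, "score": round(score, 2)})
    return scored

photos = [{"id": 1, "t": 10.0}, {"id": 2, "t": 30.0}]
events = [{"t": 9.5, "value": 100}, {"t": 31.0, "value": 10}]
print(score_photos(photos, events))
# photo 1, shot near the high-value event, scores far higher than photo 2
```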
-
Publication number: 20120155705
Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.
Type: Application
Filed: December 21, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Greg Snook, Christopher H. Willoughby, Peter Sarrett, Daniel Lee Osborn
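The core translation described here, a hand joint's relative position becoming an aiming vector, can be sketched as normalizing the offset from a reference joint to the hand. The choice of the shoulder as the reference joint is my assumption for illustration.

```python
# Illustrative sketch: turn the position of a hand joint relative to a
# reference joint (assumed here to be the shoulder) into a unit aiming
# vector that a virtual weapon could be aimed along.
import math

def aiming_vector(shoulder, hand):
    """Unit vector pointing from the shoulder joint to the hand joint."""
    dx, dy, dz = (h - s for h, s in zip(hand, shoulder))
    mag = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / mag, dy / mag, dz / mag)

shoulder = (0.0, 1.4, 0.0)
hand = (0.3, 1.4, 0.4)  # hand held forward and to the right
vec = aiming_vector(shoulder, hand)
print(tuple(round(c, 2) for c in vec))  # (0.6, 0.0, 0.8)
```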
-
Publication number: 20120157203
Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
Type: Application
Filed: December 21, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
-
Publication number: 20120154618
Abstract: A method for modeling an object from image data comprises identifying, in an image from the video, a set of reference points on the object, and, for each reference point identified, observing a displacement of that reference point in response to a motion of the object. The method further comprises grouping together those reference points for which a common translational or rotational motion of the object results in the observed displacement, and fitting the grouped-together reference points to a shape.
Type: Application
Filed: December 15, 2010
Publication date: June 21, 2012
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen Latta, Kevin Geisner
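The grouping step for the translational case can be sketched directly: reference points whose observed displacement vectors agree within a tolerance are assumed to move together. The tolerance value and 2D displacements are illustrative assumptions, and rotational grouping is omitted.

```python
# Illustrative sketch of the grouping step for pure translation: cluster
# reference-point indices whose 2D displacement vectors are nearly equal,
# i.e. points consistent with one common translational motion.
# The tolerance (0.05) is an arbitrary example value.

def group_by_translation(displacements, tol=0.05):
    """Group indices of displacement vectors that match within tol per axis."""
    groups = []
    for i, d in enumerate(displacements):
        for group in groups:
            ref = displacements[group[0]]
            if abs(d[0] - ref[0]) <= tol and abs(d[1] - ref[1]) <= tol:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups

# Points 0 and 2 moved together; point 1 belongs to a different motion.
disp = [(0.10, 0.00), (0.50, 0.20), (0.11, 0.01)]
print(group_by_translation(disp))  # [[0, 2], [1]]
```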
-
Patent number: 8145594
Abstract: Systems, methods, and computer-readable media are disclosed for localized gesture aggregation. In a system where user movement is captured by a capture device to provide gesture input to the system, demographic information regarding users, as well as data corresponding to how those users respectively make various gestures, is gathered. When a new user begins to use the system, his demographic information is analyzed to determine the most likely way that he will attempt to make, or will find it easy to make, a given gesture. That most likely way is then used to process the new user's gesture input.
Type: Grant
Filed: May 29, 2009
Date of Patent: March 27, 2012
Assignee: Microsoft Corporation
Inventors: Kevin Geisner, Stephen Latta, Gregory N. Snook, Relja Markovic
-
Publication number: 20110314482
Abstract: A system and method are disclosed for aggregating and organizing a user's cloud data in an encompassing system, and then exposing the sum total of that cloud data to application programs via a common API. Such a system provides rich presence information, allowing users to map and unify the totality of their experiences across all of their computing devices, as well as to discover other users and their experiences. In this way, users can enhance their knowledge of, and interaction with, their own environment, as well as open up new social experiences with others.
Type: Application
Filed: June 18, 2010
Publication date: December 22, 2011
Applicant: Microsoft Corporation
Inventors: Shiraz Cupala, Kevin Geisner, John Clavin, Kenneth A. Lobb, Brian Ostergren
-
Publication number: 20110304774
Abstract: Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data.
Type: Application
Filed: June 11, 2010
Publication date: December 15, 2011
Applicant: Microsoft Corporation
Inventors: Stephen Latta, Christopher Vuchetich, Matthew Eric Haigh, Jr., Andrew Robert Campbell, Darren Bennett, Relja Markovic, Oscar Omar Garza Santos, Kevin Geisner, Kudo Tsunoda
-
Publication number: 20110304632
Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
Type: Application
Filed: June 11, 2010
Publication date: December 15, 2011
Applicant: Microsoft Corporation
Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
-
Publication number: 20110299728
Abstract: Automatic depth camera aiming is provided by a method which includes receiving from the depth camera one or more observed depth images of a scene. The method further includes, if a point of interest of a target is found within the scene, determining if the point of interest is within a far range relative to the depth camera. The method further includes, if the point of interest of the target is within the far range, operating the depth camera with a far logic, or if the point of interest of the target is not within the far range, operating the depth camera with a near logic.
Type: Application
Filed: June 4, 2010
Publication date: December 8, 2011
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen Latta, Kyungsuk David Lee, Oscar Omar Garza Santos, Kevin Geisner
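The far/near decision in this abstract reduces to a threshold test on the tracked point of interest's depth. A minimal sketch under assumed values; the 2.5 m threshold and the string return values are illustrative, not from the publication.

```python
# Illustrative sketch: select the depth camera's operating logic from the
# depth of the tracked point of interest. The far-range threshold (2.5 m)
# is an example value, not one specified by the publication.

FAR_RANGE_M = 2.5

def select_logic(point_of_interest_depth):
    """Return 'far' when the point of interest is beyond the far-range
    threshold, 'near' otherwise, or None when no target was found."""
    if point_of_interest_depth is None:
        return None
    return "far" if point_of_interest_depth >= FAR_RANGE_M else "near"

print(select_logic(3.1))   # far
print(select_logic(1.2))   # near
print(select_logic(None))  # None
```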
-
Publication number: 20110246329
Abstract: An on-screen shopping application that reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements that are recognized as article-manipulation or transaction-control gestures are translated into commands for the shopping application.
Type: Application
Filed: April 1, 2010
Publication date: October 6, 2011
Applicant: Microsoft Corporation
Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
-
Publication number: 20110223995
Abstract: A computing system runs an application (e.g., a video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
Type: Application
Filed: March 12, 2010
Publication date: September 15, 2011
Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
-
Publication number: 20110221755
Abstract: A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.
Type: Application
Filed: March 12, 2010
Publication date: September 15, 2011
Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Brian James Mount, Zachary T. Middleton, Joel Deaguero, Christopher Willoughby, Dan Osborn, Darren Bennett, Gregory N. Snook
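The amplification described here, exaggerating motion by a factor proportional to its magnitude, can be sketched with a single scaling function. The linear form `1 + k * magnitude` and the constant `k` are illustrative assumptions, not details from the publication.

```python
# Illustrative sketch: amplify an avatar's displacement relative to the
# sensed user motion by a factor proportional to the motion's magnitude,
# so larger real-world movements are exaggerated more.
# The formula (1 + k * magnitude) and k = 0.5 are example choices.

def amplified_motion(sensed, k=0.5):
    """Return the amplified 2D displacement and the factor applied."""
    magnitude = (sensed[0] ** 2 + sensed[1] ** 2) ** 0.5
    factor = 1.0 + k * magnitude
    return tuple(round(c * factor, 3) for c in sensed), round(factor, 3)

motion, factor = amplified_motion((0.6, 0.8))  # sensed magnitude is 1.0
print(factor)  # 1.5
print(motion)  # (0.9, 1.2)
```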
-
Publication number: 20110175810
Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
Type: Application
Filed: January 15, 2010
Publication date: July 21, 2011
Applicant: Microsoft Corporation
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Jonathan T. Steed, Darren A. Bennett, Amos D. Vance