Patents by Inventor Kathryn Stone Perez

Kathryn Stone Perez has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20110025689
    Abstract: Techniques for auto-generating a target's visual representation may reduce or eliminate the manual input otherwise required to generate it. For example, a system having a capture device may detect various features of a user in the physical space and make feature selections from a library of visual representation feature options based on the detected features. The system can automatically apply the selections to the user's visual representation, or, alternatively, it may make selections that narrow the set of feature options from which the user chooses. The system may apply the selections in real time, and may likewise update the features selected and applied to the target's visual representation in real time.
    Type: Application
    Filed: July 29, 2009
    Publication date: February 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
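
To make the feature-selection idea in the abstract above concrete, here is a minimal Python sketch of matching detected user traits against a library of visual-representation options. The feature names, library values, and nearest-match rule are invented for illustration; the patent does not publish an API.

```python
# Hypothetical sketch of feature auto-selection for an avatar.
# All names and values are illustrative, not taken from the patent.

FEATURE_LIBRARY = {
    "hair_color": {"black": (20, 15, 10), "brown": (90, 60, 40), "blond": (200, 170, 110)},
    "build": {"slim": 0.8, "average": 1.0, "broad": 1.2},
}

def closest_option(options, detected):
    """Pick the library option whose value is nearest the detected value."""
    def distance(value):
        if isinstance(value, tuple):  # e.g. an RGB color sample
            return sum((a - b) ** 2 for a, b in zip(value, detected))
        return (value - detected) ** 2
    return min(options, key=lambda name: distance(options[name]))

def auto_select_features(detected_features):
    """Map detected user traits to avatar feature selections."""
    return {
        feature: closest_option(FEATURE_LIBRARY[feature], detected)
        for feature, detected in detected_features.items()
        if feature in FEATURE_LIBRARY
    }

print(auto_select_features({"hair_color": (85, 55, 35), "build": 1.15}))
# {'hair_color': 'brown', 'build': 'broad'}
```
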
  • Publication number: 20110007142
    Abstract: Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.
    Type: Application
    Filed: July 9, 2009
    Publication date: January 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
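
A toy sketch of the temperament pipeline the abstract above describes: detector outputs are thresholded into a coarse temperament, whose attributes are then applied to the avatar. The cue names, thresholds, and attribute table are assumptions, not taken from the patent.

```python
# Illustrative only: deduce a coarse temperament from detected cues
# and map it to avatar attributes. Names and thresholds are invented.

TEMPERAMENT_ATTRIBUTES = {
    "happy": {"posture": "upright", "animation": "bounce", "mouth": "smile"},
    "angry": {"posture": "tense", "animation": "stomp", "mouth": "frown"},
    "neutral": {"posture": "relaxed", "animation": "idle", "mouth": "flat"},
}

def deduce_temperament(cues):
    """cues: dict of detector outputs in [0, 1], e.g. smile, brow_furrow."""
    if cues.get("smile", 0.0) > 0.6:
        return "happy"
    if cues.get("brow_furrow", 0.0) > 0.6 and cues.get("arm_speed", 0.0) > 0.5:
        return "angry"
    return "neutral"

def update_avatar(avatar, cues):
    """Apply the attributes of the deduced temperament to the avatar."""
    avatar.update(TEMPERAMENT_ATTRIBUTES[deduce_temperament(cues)])

avatar = {}
update_avatar(avatar, {"smile": 0.8})
print(avatar)  # {'posture': 'upright', 'animation': 'bounce', 'mouth': 'smile'}
```
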
  • Publication number: 20110007079
    Abstract: Data captured with respect to a human may be analyzed and applied to a visual representation of a user such that the visual representation begins to reflect the behavioral characteristics of the user. For example, a system may have a capture device that captures data about the user in the physical space. The system may identify the user's characteristics, tendencies, voice patterns, behaviors, gestures, etc. Over time, the system may learn a user's tendencies and intelligently apply animations to the user's avatar such that the avatar behaves and responds in accordance with the identified behaviors of the user. The animations applied to the avatar may be animations selected from a library of pre-packaged animations, or the animations may be entered and recorded by the user into the avatar's avatar library.
    Type: Application
    Filed: July 13, 2009
    Publication date: January 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
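
The learning step in the abstract above could look something like the following sketch, which tallies observed behaviors per context over time and replays the user's most frequent response from an avatar library. The class, its methods, and the context names are hypothetical.

```python
# Sketch: accumulate observed behaviors per context, then animate the
# avatar with the user's most frequent response. Invented API.
from collections import Counter, defaultdict

class AvatarLibrary:
    def __init__(self, prepackaged):
        self.animations = dict(prepackaged)   # name -> animation clip
        self.history = defaultdict(Counter)   # context -> behavior counts

    def observe(self, context, behavior):
        """Record that the user responded to `context` with `behavior`."""
        self.history[context][behavior] += 1

    def record(self, name, animation):
        """Let the user record a custom animation into the library."""
        self.animations[name] = animation

    def animate(self, context, default="idle"):
        """Play the learned behavior for this context, else a default."""
        if self.history[context]:
            behavior, _ = self.history[context].most_common(1)[0]
            if behavior in self.animations:
                return self.animations[behavior]
        return self.animations[default]

lib = AvatarLibrary({"idle": "idle.anim", "wave": "wave.anim", "cheer": "cheer.anim"})
for _ in range(3):
    lib.observe("team_scores", "cheer")
print(lib.animate("team_scores"))   # cheer.anim
```
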
  • Publication number: 20100306712
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know what gestures are applicable for an executing application, or may not understand or know how to perform those gestures. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
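
One plausible reading of the filter-output analysis above: if a gesture filter's confidence repeatedly lands just below its trigger threshold, the user is likely attempting and failing the gesture, so assistance is warranted. All thresholds and window sizes below are invented.

```python
# Sketch of deciding when to offer gesture help. Purely illustrative.
from collections import deque

class GestureAssist:
    def __init__(self, trigger=0.8, near_miss=0.5, window=30, max_misses=10):
        self.trigger, self.near_miss = trigger, near_miss
        self.recent = deque(maxlen=window)  # recent filter confidences
        self.max_misses = max_misses

    def update(self, confidence):
        """Feed one frame's filter output; return True if help is warranted."""
        self.recent.append(confidence)
        if confidence >= self.trigger:
            self.recent.clear()           # success: reset the miss window
            return False
        misses = sum(1 for c in self.recent if self.near_miss <= c < self.trigger)
        return misses >= self.max_misses

assist = GestureAssist()
for frame_conf in [0.6, 0.7, 0.65] * 4:   # repeated near-misses
    if assist.update(frame_conf):
        print("Show a hint for this gesture")
        break
```
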
  • Publication number: 20100306714
    Abstract: Systems, methods, and computer-readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut has been performed, it sends an indication to the corresponding application that the system-recognized gesture was observed. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version are recognized as the user performs the full version, the system recognizes that only a single performance of the gesture has occurred and indicates as much to the application.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Stephen Latta, Kevin Geisner, John Clavin, Kudo Tsunoda, Kathryn Stone Perez, Alex Kipman, Relja Markovic, Gregory N. Snook
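
A compact sketch of the deduplication behavior described above, treating the shortcut as a prefix (subset) of the full gesture and reporting a single event even when both variants match. The pose names and the cooldown mechanism are illustrative assumptions.

```python
# Sketch: one gesture, a full version and a shortcut, one reported event.
FULL = ["arm_raised", "arm_forward", "arm_down"]   # full throw gesture
SHORTCUT = FULL[:2]                                 # shortcut: first two poses

def recognize(pose_stream, cooldown=2):
    """Yield one event per performance, even if both variants match."""
    window, suppress = [], 0
    for pose in pose_stream:
        window.append(pose)
        window = window[-len(FULL):]
        if suppress:                 # a match was already reported
            suppress -= 1
            continue
        if window == FULL or window[-len(SHORTCUT):] == SHORTCUT:
            suppress = cooldown      # swallow the overlapping match
            yield "throw"

# The full performance triggers the shortcut first; the remainder of the
# full gesture is suppressed, so only one event comes out.
print(list(recognize(["arm_raised", "arm_forward", "arm_down"])))  # ['throw']
```
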
  • Publication number: 20100302247
    Abstract: Techniques may comprise identifying surfaces, textures, and object dimensions from unorganized point clouds derived from a capture device, such as a depth-sensing device. Employing target digitization may comprise surface extraction, identifying points in a point cloud, labeling surfaces, computing object properties, tracking changes in object properties over time, and increasing confidence in the object boundaries and identity as additional frames are captured. If the point cloud data includes an object, a model of the object may be generated. Feedback on the model associated with a particular object may be generated and provided to the user in real time. Further, the model of the object may be tracked in response to any movement of the object in the physical space, such that the model may be adjusted to mimic changes or movement of the object, or to increase the fidelity of the target's characteristics.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas Burton, Andrew Wilson, Diego Fernandes Nehab
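
The confidence-over-frames idea above might be sketched as below: each new point-cloud frame updates an object's bounding box, and agreement between frames raises confidence. The agreement tolerance and the confidence increments are invented, and real digitization is far richer than a bounding box.

```python
# Sketch: grow confidence in an object's boundaries as frames agree.

class TrackedObject:
    def __init__(self):
        self.bounds = None      # ((min_x, min_y, min_z), (max_x, max_y, max_z))
        self.confidence = 0.0

    def update(self, points):
        """points: iterable of (x, y, z) samples believed to be one object."""
        xs, ys, zs = zip(*points)
        new = ((min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs)))
        if self.bounds is None:
            self.bounds, self.confidence = new, 0.1
            return
        # Raise confidence when the new frame roughly agrees with the old.
        agree = all(abs(a - b) < 0.05 for old, cur in zip(self.bounds, new)
                    for a, b in zip(old, cur))
        self.confidence = min(1.0, self.confidence + (0.1 if agree else -0.1))
        self.bounds = new

obj = TrackedObject()
for _ in range(5):
    obj.update([(0.0, 0.0, 2.0), (0.5, 1.0, 2.2)])
print(obj.bounds, round(obj.confidence, 1))   # stable frames -> 0.5
```
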
  • Publication number: 20100303302
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may include a human target that may have, for example, a portion thereof non-visible or occluded. For example, a user may be turned such that a body part is not visible to the device, may have one or more body parts partially outside the device's field of view, may have a body part or a portion of a body part behind another body part or object, or the like, such that the human target associated with the user likewise has a body part, or a portion of a body part, non-visible or occluded in the depth image. A position or location of the non-visible or occluded portion or body part may then be estimated.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex A. Kipman, Kathryn Stone Perez, Mark J. Finocchio, Ryan Michael Geiss, Kudo Tsunoda
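
A simple illustration of estimating an occluded joint, here by mirroring the visible twin joint across the spine. A real system would use much richer skeletal constraints; the joint names and the mirroring rule are assumptions.

```python
# Sketch: fill in an occluded joint from the skeleton's visible joints.

def estimate_occluded(skeleton):
    """skeleton: joint name -> (x, y), or None when occluded."""
    estimated = dict(skeleton)
    spine = skeleton["spine"]
    mirror = {"left_hand": "right_hand", "right_hand": "left_hand"}
    for joint, twin in mirror.items():
        if estimated[joint] is None and skeleton[twin] is not None:
            tx, ty = skeleton[twin]
            # Reflect the visible twin joint across the spine's x position.
            estimated[joint] = (2 * spine[0] - tx, ty)
    return estimated

skeleton = {"spine": (0.0, 1.0), "right_hand": (0.4, 1.2), "left_hand": None}
print(estimate_occluded(skeleton)["left_hand"])   # (-0.4, 1.2)
```
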
  • Publication number: 20100302257
    Abstract: A virtual character such as an on-screen object, an avatar, an on-screen character, or the like may be animated using a live motion of a user and a pre-recorded motion. For example, a live motion of a user may be captured, and a pre-recorded motion such as a pre-recorded artist-generated motion, a pre-recorded motion of the user, and/or a programmatically controlled transformation may be received. The live motion may then be applied to a first portion of the virtual character and the pre-recorded motion to a second portion, such that the virtual character is animated with a combination of the live and pre-recorded motions.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex A. Kipman, Jeffrey Margolis
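
The live/pre-recorded blend above reduces, per animation frame, to choosing a motion source for each body portion. The upper/lower partition below is an invented example of that split.

```python
# Sketch: drive part of a character from captured live motion and the
# rest from a pre-recorded clip. The joint partitioning is invented.
UPPER = {"head", "left_arm", "right_arm"}
LOWER = {"left_leg", "right_leg"}

def blend_frame(live, prerecorded):
    """Each input: joint name -> rotation. Live wins on the upper body."""
    frame = {}
    for joint in UPPER | LOWER:
        source = live if joint in UPPER else prerecorded
        frame[joint] = source[joint]
    return frame

live = {j: f"live:{j}" for j in UPPER | LOWER}
clip = {j: f"clip:{j}" for j in UPPER | LOWER}
print(blend_frame(live, clip))
# upper joints come from `live`, leg joints from `clip`
```
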
  • Publication number: 20100302015
    Abstract: A system is disclosed that presents the user a 3-D virtual environment as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that contact is detected and corresponding haptic feedback is provided to the user.
    Type: Application
    Filed: July 12, 2010
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
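
A sketch of the per-frame loop described above: map the captured hand position into virtual coordinates, test contact with a virtual object, and pulse a haptic device. The haptic_pulse callback is a stand-in for whatever device API a real system would use; the scale, offset, and contact test are assumptions.

```python
# Sketch: depth-camera position -> virtual position -> haptic pulse.

def to_virtual(camera_pos, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Correlate a camera-space position with a virtual-space position."""
    return tuple(scale * c + o for c, o in zip(camera_pos, offset))

def touching(hand, obj_center, obj_radius):
    """Sphere contact test between the hand and a virtual object."""
    dist_sq = sum((h - c) ** 2 for h, c in zip(hand, obj_center))
    return dist_sq <= obj_radius ** 2

def frame_update(camera_hand_pos, obj, haptic_pulse):
    hand = to_virtual(camera_hand_pos)
    if touching(hand, obj["center"], obj["radius"]):
        haptic_pulse(strength=0.8)    # hypothetical device call

ball = {"center": (0.0, 1.0, 2.0), "radius": 0.15}
frame_update((0.05, 1.05, 2.0), ball, lambda strength: print("buzz", strength))
```
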
  • Publication number: 20100306716
    Abstract: In a system that utilizes gestures for controlling aspects of an application, strict requirements for success may limit approachability or accessibility for different types of people. The system may receive data reflecting movement of a user and remap a standard gesture to correspond to the received data. Following the remapping, the system may receive data reflecting skeletal movement of a user, and determine from that data whether the user has performed one or more standard and/or remapped gestures. In an exemplary embodiment, a gesture library comprises a plurality of gestures. Where these gestures are complementary to each other, they may be grouped into gesture packages. A gesture package may include gestures that are packaged as remapped gestures, or it may include options for remapping standard gestures to new data.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventor: Kathryn Stone Perez
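
Gesture remapping as described above might be sketched like this: a gesture package stores standard pose sequences, any of which can be rebound to movement the user can actually perform. Matching is simplified here to comparing short pose sequences, and every name is invented.

```python
# Sketch of a gesture package with remappable standard gestures.

class GesturePackage:
    def __init__(self, standard):
        self.standard = dict(standard)    # gesture name -> pose sequence
        self.remapped = {}

    def remap(self, name, recorded_poses):
        """Bind a standard gesture to movement the user can perform."""
        self.remapped[name] = list(recorded_poses)

    def match(self, recent_poses):
        """Return the first gesture whose sequence ends the pose stream."""
        for name, seq in {**self.standard, **self.remapped}.items():
            if recent_poses[-len(seq):] == seq:
                return name
        return None

pkg = GesturePackage({"jump": ["crouch", "extend"]})
pkg.remap("jump", ["nod"])               # e.g. for a seated user
print(pkg.match(["idle", "nod"]))        # 'jump'
```
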
  • Publication number: 20100303289
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them, and tracks them in three-dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human musculoskeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
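
One way to read the "uniquely recognizes individuals" claim above is matching coarse body proportions from the skeletal model against known users, as in this sketch. The 5% tolerance and the two ratios used are illustrative assumptions, not the patent's method.

```python
# Sketch: identify a returning user from skeletal-model proportions.

def proportions(skeleton):
    """skeleton: joint -> (x, y); return simple identifying measures."""
    height = skeleton["head"][1] - skeleton["foot"][1]
    arm = abs(skeleton["hand"][0] - skeleton["shoulder"][0])
    return (height, arm / height)

def identify(skeleton, known_users, tolerance=0.05):
    h, ratio = proportions(skeleton)
    for name, (kh, kratio) in known_users.items():
        if abs(h - kh) / kh < tolerance and abs(ratio - kratio) < tolerance:
            return name
    return None   # treat as a new, unrecognized person

users = {"alice": (1.70, 0.40), "bob": (1.85, 0.45)}
frame = {"head": (0, 1.72), "foot": (0, 0.0),
         "shoulder": (0.2, 1.4), "hand": (0.88, 1.4)}
print(identify(frame, users))   # 'alice'
```
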
  • Publication number: 20100277411
    Abstract: Technology is presented for providing feedback to a user on the ability of an executing application to track user action for control of that application on a computer system. A capture system detects a user in a capture area. Factors in the capture area and the user's actions can adversely affect the application's ability to determine whether a user movement is a gesture, that is, a control or instruction to the application. One example of such a factor is a user being out of the field of view of the capture system; other examples include lighting conditions and obstructions in the capture area. Responsive to a user-tracking criterion not being satisfied, feedback is output to the user. In some embodiments, the feedback is provided within the context of an executing application.
    Type: Application
    Filed: June 22, 2010
    Publication date: November 4, 2010
    Applicant: Microsoft Corporation
    Inventors: Dawson Yee, Kathryn Stone Perez
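
A minimal sketch of evaluating the tracking criteria named above each frame and emitting contextual feedback when one fails. The field-of-view bounds, brightness threshold, and message strings are all invented.

```python
# Sketch: per-frame user-tracking criteria with user-facing feedback.

def tracking_feedback(user_pos, brightness, fov=((-1.5, 1.5), (0.8, 4.0))):
    """user_pos: (x, z) metres from the sensor; brightness in [0, 1]."""
    (x_min, x_max), (z_min, z_max) = fov
    x, z = user_pos
    if x < x_min:
        return "Move to your right to stay in view"
    if x > x_max:
        return "Move to your left to stay in view"
    if z < z_min:
        return "Step back from the sensor"
    if z > z_max:
        return "Step closer to the sensor"
    if brightness < 0.2:
        return "The room is too dark for tracking"
    return None   # all criteria satisfied

print(tracking_feedback((2.0, 2.5), brightness=0.7))  # suggests moving left
```
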
  • Publication number: 20100281436
    Abstract: Techniques for managing a set of states associated with a capture device are disclosed herein. The capture device may detect and bind to users, and may provide feedback about whether the capture device is bound to, or detecting, a user. Techniques are also disclosed wherein virtual ports may be associated with users bound to a capture device, and feedback about the state of the virtual ports may be provided.
    Type: Application
    Filed: May 1, 2009
    Publication date: November 4, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex Kipman, Kathryn Stone Perez, R. Stephen Polzin, William Guo
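
The detect/bind states above suggest a small state machine that emits feedback on each transition, roughly as follows. The state labels, the bind gesture, and the feedback strings are assumptions.

```python
# Sketch: capture-device binding states with feedback on transitions.

class CaptureState:
    UNBOUND, DETECTED, BOUND = "unbound", "detected", "bound"

    def __init__(self, feedback=print):
        self.state, self.feedback = self.UNBOUND, feedback

    def on_user_detected(self):
        if self.state == self.UNBOUND:
            self.state = self.DETECTED
            self.feedback("User detected - raise your hand to bind")

    def on_bind_gesture(self):
        if self.state == self.DETECTED:
            self.state = self.BOUND
            self.feedback("User bound to the capture device")

    def on_user_lost(self):
        self.state = self.UNBOUND
        self.feedback("Tracking lost")

device = CaptureState()
device.on_user_detected()
device.on_bind_gesture()
```
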
  • Publication number: 20100281437
    Abstract: Techniques for managing virtual ports are disclosed herein. Each such virtual port may have different associated features such as, for example, privileges, rights, or options. When one or more users are in a capture scene of a gesture-based system, the system may associate virtual ports with the users and maintain the virtual ports. Also provided are techniques for disassociating virtual ports from users or swapping virtual ports between two or more users.
    Type: Application
    Filed: May 1, 2009
    Publication date: November 4, 2010
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone-Perez, Jeffrey Margolis, Mark J. Finocchio, Brian E. Keane, Rudy Jacobus Poot, Stephen G. Latta
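
Virtual-port bookkeeping as described above might reduce to a mapping from ports (each carrying privileges) to users, with associate, disassociate, and swap operations, as in this hypothetical sketch.

```python
# Sketch: virtual ports with privileges, assignable and swappable.

class VirtualPorts:
    def __init__(self, privileges):
        self.privileges = privileges          # port number -> rights
        self.assignment = {}                  # port number -> user

    def associate(self, user):
        """Give the user the lowest free port; return it, or None if full."""
        for port in sorted(self.privileges):
            if port not in self.assignment:
                self.assignment[port] = user
                return port
        return None

    def disassociate(self, user):
        self.assignment = {p: u for p, u in self.assignment.items() if u != user}

    def swap(self, port_a, port_b):
        a, b = self.assignment.get(port_a), self.assignment.get(port_b)
        self.assignment[port_a], self.assignment[port_b] = b, a

ports = VirtualPorts({1: {"menu_control"}, 2: set()})
ports.associate("alice")
ports.associate("bob")
ports.swap(1, 2)
print(ports.assignment)   # {1: 'bob', 2: 'alice'}
```
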
  • Publication number: 20100231512
    Abstract: Disclosed herein are systems and methods for controlling a computing environment with one or more gestures by sizing a virtual screen centered on a user and by adapting the response of the computing environment to the gestures a user makes and the modes of use the user exhibits. The virtual screen may be sized using depth, aspects of the user such as height, and/or user profile information such as age and ability. Modes of use may also be considered in determining the size of the virtual screen and the control of the system, the modes being based on profile information and/or information from a capture device.
    Type: Application
    Filed: March 16, 2009
    Publication date: September 16, 2010
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Rudy Poot
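
A sketch of how the inputs named above could combine to size the virtual screen: depth, the user's height, and a profile-based scale. The reach heuristic and the depth-noise margin are invented constants, purely illustrative.

```python
# Sketch: size a gesture "virtual screen" from depth, height, and profile.

def virtual_screen(depth_m, height_m, profile_scale=1.0):
    """Return (width, height) of the gesture zone centered on the user."""
    reach = 0.45 * height_m                 # rough arm-reach heuristic
    width = 2 * reach * profile_scale
    screen_height = reach * profile_scale
    # Farther users get a slightly larger zone to absorb depth noise.
    noise_margin = 1.0 + 0.05 * max(0.0, depth_m - 2.0)
    return (width * noise_margin, screen_height * noise_margin)

# e.g. a 1.7 m user at 3 m, with an enlarged zone for limited mobility
print(virtual_screen(depth_m=3.0, height_m=1.7, profile_scale=1.2))
```
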
  • Publication number: 20100199228
    Abstract: Systems, methods, and computer-readable media are disclosed for gesture keyboarding. A user makes a gesture by either making a pose or moving in a pre-defined way that is captured by a depth camera. The depth information provided by the depth camera is parsed to determine at least that part of the user that is making the gesture, and from the parsed data the character or action signified by the gesture is identified.
    Type: Application
    Filed: February 23, 2009
    Publication date: August 5, 2010
    Applicant: Microsoft Corporation
    Inventors: Stephen G. Latta, Kudo Tsunoda, Kevin Geisner, Relja Markovic, Darren Alexander Bennett, Kathryn Stone Perez
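
Gesture keyboarding as described above, reduced to its skeleton: parse the gesturing part of the user from a frame, then look up the character or action that pose signifies. The pose names and key bindings are invented, and real parsing would segment the user from the depth image rather than read a dict.

```python
# Sketch: recognized hand poses mapped to keyboard characters/actions.

GESTURE_TO_KEY = {
    "swipe_left": "backspace",
    "swipe_right": " ",
    "fist": "enter",
    "point_up": "a",
}

def parse_depth_frame(frame):
    """Stand-in for the parsing step: pull out the gesturing hand's pose.

    A real system would segment the user from the depth image first;
    here the frame is just a dict that already names the pose."""
    return frame.get("hand_pose")

def keyboard_events(frames):
    for frame in frames:
        pose = parse_depth_frame(frame)
        if pose in GESTURE_TO_KEY:
            yield GESTURE_TO_KEY[pose]

frames = [{"hand_pose": "point_up"}, {"hand_pose": "fist"}]
print(list(keyboard_events(frames)))   # ['a', 'enter']
```
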