Patents by Inventor Kudo Tsunoda

Kudo Tsunoda has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8448094
    Abstract: Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input with a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input. A minimal, hypothetical sketch of this three-stage pipeline follows this entry.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: May 21, 2013
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, R. Stephen Polzin, Kudo Tsunoda, Darren Bennett, Stephen Latta, Mark Finocchio, Gregory G. Snook, Relja Markovic
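    The sketch below is an illustration only, not the patented implementation: it wires together the three stages named in the abstract (algorithmic preprocessing, gesture lookup, legacy mapping). Every name, gesture, and controller input is invented.

    ```python
    # Hypothetical three-stage pipeline: natural input -> gesture -> legacy input.
    from dataclasses import dataclass

    @dataclass
    class NaturalInput:
        kind: str  # e.g. "hand_push", as identified by preprocessing

    # Invented gesture library: natural input kind -> gesture name.
    GESTURE_LIBRARY = {"hand_swipe_right": "SWIPE_RIGHT", "hand_push": "PUSH"}

    # Invented mapping: gesture name -> legacy controller input.
    LEGACY_MAP = {"SWIPE_RIGHT": "DPAD_RIGHT", "PUSH": "BUTTON_A"}

    def preprocess(raw_frame: dict) -> NaturalInput:
        # Stand-in for the algorithmic preprocessing module: identify the
        # natural user input in the raw input data.
        return NaturalInput(kind=raw_frame["detected_motion"])

    def to_legacy_input(raw_frame: dict):
        natural = preprocess(raw_frame)
        gesture = GESTURE_LIBRARY.get(natural.kind)          # gesture module
        return LEGACY_MAP.get(gesture) if gesture else None  # mapping module

    print(to_legacy_input({"detected_motion": "hand_push"}))  # BUTTON_A
    ```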
  • Patent number: 8418085
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know which gestures are applicable for an executing application, or may not know how to perform them. User motion data and/or the outputs of filters corresponding to gestures may be analyzed to determine the cases where assistance to the user in performing a gesture is appropriate. A small, hypothetical sketch of one such assistance trigger follows this entry.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
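    A minimal sketch of one plausible assistance trigger, assuming (beyond what the abstract states) that each gesture filter emits a per-frame confidence in [0, 1]; the thresholds and window size are invented.

    ```python
    # Flag assistance when the user repeatedly gets close to a gesture
    # without completing it.
    from collections import deque

    NEAR_MISS_LOW, NEAR_MISS_HIGH = 0.4, 0.7  # "almost performed it" band
    WINDOW, TRIGGER_COUNT = 30, 10            # frames examined / misses needed

    class AssistanceDetector:
        def __init__(self):
            self.recent = deque(maxlen=WINDOW)

        def update(self, confidence: float) -> bool:
            """Return True when help with the gesture seems appropriate."""
            self.recent.append(confidence)
            near = sum(1 for c in self.recent if NEAR_MISS_LOW <= c < NEAR_MISS_HIGH)
            return near >= TRIGGER_COUNT

    detector = AssistanceDetector()
    for confidence in [0.5, 0.6, 0.55] * 5:   # simulated near-miss frames
        if detector.update(confidence):
            print("Offer gesture help")
            break
    ```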
  • Patent number: 8385596
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control. A short sketch of one way to derive such an aiming vector follows this entry.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: February 26, 2013
    Assignee: Microsoft Corporation
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Greg Snook, Christopher H. Willoughby, Peter Sarrett, Daniel Lee Osborn
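    One plausible reading (an assumption; the abstract does not say which joints define the relative position) is to aim along the vector from a shoulder joint to the hand joint:

    ```python
    import math

    def aim_vector(shoulder, hand):
        """Unit vector from the shoulder joint to the hand joint."""
        dx, dy, dz = (h - s for h, s in zip(hand, shoulder))
        norm = math.sqrt(dx*dx + dy*dy + dz*dz) or 1.0
        return (dx / norm, dy / norm, dz / norm)

    def aim_weapon(v, sensitivity=1.0):
        """Yaw and pitch (radians) proportional to the gestured aiming vector."""
        yaw = math.atan2(v[0], v[2]) * sensitivity
        pitch = math.asin(max(-1.0, min(1.0, v[1]))) * sensitivity
        return yaw, pitch

    v = aim_vector(shoulder=(0.2, 1.4, 2.0), hand=(0.5, 1.5, 1.4))
    print(aim_weapon(v))
    ```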
  • Publication number: 20120299912
    Abstract: A method to help a user visualize how a wearable article will look on the user's body. Enacted on a computing system, the method includes receiving an image of the user's body from an image-capture component. Based on the image, a posable, three-dimensional, virtual avatar is constructed to substantially resemble the user. In this example method, data is obtained that identifies the wearable article as being selected for the user. This data includes a plurality of metrics that at least partly define the wearable article. Then, a virtualized form of the wearable article is attached to the avatar, which is provided to a display component for the user to review. A brief, hypothetical sketch of fitting a metric-defined article to an avatar follows this entry.
    Type: Application
    Filed: August 2, 2012
    Publication date: November 29, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Jay Kapur, Sheridan Jones, Kudo Tsunoda
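    The metric names below are invented; the abstract says only that the article is at least partly defined by a plurality of metrics. One simple fitting step is a per-region scale between avatar and garment measurements:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Avatar:
        chest_cm: float
        waist_cm: float

    @dataclass
    class Garment:
        chest_cm: float
        waist_cm: float

    def fit_scale(avatar: Avatar, garment: Garment) -> dict:
        """Per-region scale factors used to drape the virtual garment."""
        return {"chest": avatar.chest_cm / garment.chest_cm,
                "waist": avatar.waist_cm / garment.waist_cm}

    print(fit_scale(Avatar(chest_cm=98, waist_cm=84), Garment(chest_cm=100, waist_cm=88)))
    ```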
  • Publication number: 20120293518
    Abstract: It may be desirable to apply corrective data to aspects of a captured image or of a user-performed gesture, and to display a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated from the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like. A brief sketch of two of these corrections follows this entry.
    Type: Application
    Filed: August 3, 2012
    Publication date: November 22, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
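    A sketch of two of the named corrections, with invented parameters: "magnetism" pulls a joint toward a nearby anchor point, and confinement clamps joint motion to a defined space.

    ```python
    def magnetize(joint, anchor, radius, strength):
        """Pull the joint toward the anchor when it is within `radius`."""
        delta = [a - j for j, a in zip(joint, anchor)]
        dist = sum(c * c for c in delta) ** 0.5
        if dist > radius:
            return joint
        return tuple(j + strength * c for j, c in zip(joint, delta))

    def confine(joint, lo, hi):
        """Confine virtual movement to an axis-aligned box."""
        return tuple(max(l, min(h, j)) for j, l, h in zip(joint, lo, hi))

    hand = (0.52, 1.01, 1.9)
    hand = magnetize(hand, anchor=(0.5, 1.0, 1.9), radius=0.1, strength=0.5)
    hand = confine(hand, lo=(0.0, 0.0, 0.0), hi=(1.0, 2.0, 3.0))
    print(hand)
    ```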
  • Patent number: 8253746
    Abstract: It may be desirable to apply corrective data to aspects of a captured image or of a user-performed gesture, and to display a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated from the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like.
    Type: Grant
    Filed: May 1, 2009
    Date of Patent: August 28, 2012
    Assignee: Microsoft Corporation
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett
  • Publication number: 20120155705
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Greg Snook, Christopher H. Willoughby, Peter Sarrett, Daniel Lee Osborn
  • Publication number: 20120157198
    Abstract: Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation. A short sketch of one skeleton-to-simulation mapping follows this entry.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Rhett Mathis, Matthew Monson, David Gierok, William Paul Giese, Darrin Brown, Cam McRae, David Seymour, William Axel Olsen, Matthew Searcy
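    An assumption beyond the abstract: one natural mapping derives a steering input from the angle of the line between the two hand joints, as if the player were gripping an imaginary wheel.

    ```python
    import math

    def steering_angle(left_hand, right_hand):
        """Signed virtual-wheel angle in degrees from the inter-hand line."""
        dx = right_hand[0] - left_hand[0]
        dy = right_hand[1] - left_hand[1]
        return math.degrees(math.atan2(dy, dx))

    # Right hand higher than the left tilts the imaginary wheel.
    print(steering_angle(left_hand=(-0.3, 1.0), right_hand=(0.3, 1.2)))
    ```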
  • Publication number: 20110304774
    Abstract: Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, the instructions are executable to tag the input data with a contextual tag associated with the recognized input and to record the contextual tag with the input data. A minimal sketch of this filter-and-tag flow follows this entry.
    Type: Application
    Filed: June 11, 2010
    Publication date: December 15, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Christopher Vuchetich, Matthew Eric Haigh, Jr., Andrew Robert Campbell, Darren Bennett, Relja Markovic, Oscar Omar Garza Santos, Kevin Geisner, Kudo Tsunoda
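    The filter and tag below are invented examples of the described flow: run recognizers over an input signal and, on a match, record a contextual tag with the data.

    ```python
    def cheer_filter(signal: dict) -> bool:
        """Invented recognizer: loud audio plus raised arms."""
        return signal.get("audio_level", 0) > 0.8 and bool(signal.get("arms_raised"))

    FILTERS = {"crowd_cheer": cheer_filter}   # contextual tag -> recognizer

    def tag_and_record(signal: dict, log: list) -> None:
        for tag, recognizer in FILTERS.items():
            if recognizer(signal):            # recognized input?
                log.append({"tag": tag, "data": signal})
                return
        log.append({"data": signal})          # record untagged

    recording = []
    tag_and_record({"audio_level": 0.9, "arms_raised": True}, recording)
    print(recording[0]["tag"])                # crowd_cheer
    ```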
  • Publication number: 20110279249
    Abstract: A system that presents the user with a 3-D virtual environment, as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment, is disclosed. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user with a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that contact is detected, and corresponding haptic feedback is provided to the user. A small sketch of this touch test follows this entry.
    Type: Application
    Filed: July 27, 2011
    Publication date: November 17, 2011
    Applicant: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
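    A sketch of the touch test described above; the scale factor and the sphere-intersection test are assumptions for illustration.

    ```python
    def to_virtual(camera_pos, scale=1.0):
        """Correlate a physical-space position with a virtual-space position."""
        return tuple(c * scale for c in camera_pos)

    def touching(pos, obj_center, obj_radius):
        """True when the mapped position intersects a spherical virtual object."""
        dist_sq = sum((p - o) ** 2 for p, o in zip(pos, obj_center))
        return dist_sq <= obj_radius ** 2

    hand = to_virtual((0.4, 1.2, 1.5))
    if touching(hand, obj_center=(0.4, 1.2, 1.45), obj_radius=0.1):
        print("pulse haptic device")          # corresponding haptic feedback
    ```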
  • Publication number: 20110246329
    Abstract: An on-screen shopping application that reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application that allows a user to manipulate an on-screen representation of the user. The on-screen representation has the likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements that are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application. A short, hypothetical sketch of such a gesture-to-command table follows this entry.
    Type: Application
    Filed: April 1, 2010
    Publication date: October 6, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
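    The gestures and commands below are invented examples of the "article manipulation or transaction control" mapping the abstract describes:

    ```python
    # Hypothetical gesture -> shopping-command table.
    GESTURE_COMMANDS = {
        "grab": "pick_up_article",
        "rotate_hand": "rotate_article",
        "push_forward": "add_to_cart",
        "swipe_up": "checkout",
    }

    def handle_gesture(gesture: str) -> str:
        """Translate a recognized gesture into a shopping-application command."""
        return GESTURE_COMMANDS.get(gesture, "ignore")

    print(handle_gesture("push_forward"))     # add_to_cart
    ```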
  • Patent number: 8009022
    Abstract: A system that presents the user with a 3-D virtual environment, as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment, is disclosed. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user with a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that contact is detected, and corresponding haptic feedback is provided to the user.
    Type: Grant
    Filed: July 12, 2010
    Date of Patent: August 30, 2011
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
  • Publication number: 20110055846
    Abstract: A capture device can detect gestures made by a user. The gestures can be used to control a gesture-unaware program.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn S. Perez, Kevin A. Geisner, Alex A. Kipman, Kudo Tsunoda
  • Publication number: 20100303289
    Abstract: A system recognizes human beings in their natural environment, without special sensing devices attached to the subjects, uniquely identifies them and tracks them in three dimensional space. The resulting representation is presented directly to applications as a multi-point skeletal model delivered in real-time. The device efficiently tracks humans and their natural movements by understanding the natural mechanics and capabilities of the human musculoskeletal system. The device also uniquely recognizes individuals in order to allow multiple people to interact with the system via natural movements of their limbs and body as well as voice commands/responses. A small sketch of a multi-point skeletal frame follows this entry.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: MICROSOFT CORPORATION
    Inventors: R. Stephen Polzin, Alex A. Kipman, Mark J. Finocchio, Ryan Michael Geiss, Kathryn Stone Perez, Kudo Tsunoda, Darren Alexander Bennett
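    A sketch of the kind of multi-point, per-person representation described, with an invented joint set and a callback standing in for real-time delivery:

    ```python
    from dataclasses import dataclass

    JOINTS = ("head", "shoulder_l", "shoulder_r", "hand_l", "hand_r")

    @dataclass
    class SkeletonFrame:
        player_id: int   # unique identity, stable across frames
        joints: dict     # joint name -> (x, y, z) in meters

    def deliver(frames, on_frame):
        """Push skeletal frames to an application callback as they arrive."""
        for frame in frames:
            on_frame(frame)

    frame = SkeletonFrame(player_id=1, joints={j: (0.0, 1.0, 2.0) for j in JOINTS})
    deliver([frame], lambda f: print(f.player_id, f.joints["head"]))
    ```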
  • Publication number: 20100302015
    Abstract: A system that presents the user with a 3-D virtual environment, as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment, is disclosed. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user with a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that contact is detected, and corresponding haptic feedback is provided to the user.
    Type: Application
    Filed: July 12, 2010
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
  • Publication number: 20100303302
    Abstract: A depth image of a scene may be received, observed, or captured by a device. The depth image may include a human target that has, for example, a portion thereof non-visible or occluded. For example, a user may be turned such that a body part is not visible to the device, may have one or more body parts partially outside the device's field of view, or may have a body part or a portion of a body part behind another body part or object, such that the human target associated with the user has a body part, or a portion of one, non-visible or occluded in the depth image. A position or location of the non-visible or occluded portion or body part of the human target may then be estimated. A sketch of one plausible estimator follows this entry.
    Type: Application
    Filed: June 26, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex A. Kipman, Kathryn Stone Perez, Mark J. Finocchio, Ryan Michael Geiss, Kudo Tsunoda
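    The abstract does not specify an estimation method; one plausible, invented estimator extrapolates an occluded joint from its last tracked positions at constant velocity:

    ```python
    def estimate_occluded(history):
        """history: the joint's last tracked (x, y, z) positions, oldest first."""
        if len(history) < 2:
            return history[-1]                    # no velocity information yet
        (x0, y0, z0), (x1, y1, z1) = history[-2], history[-1]
        return (2*x1 - x0, 2*y1 - y0, 2*z1 - z0)  # last position + velocity

    print(estimate_occluded([(0.0, 1.0, 2.0), (0.1, 1.0, 2.0)]))
    ```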
  • Publication number: 20100302253
    Abstract: Techniques for generating an avatar model during the runtime of an application are disclosed herein. The avatar model can be generated from an image captured by a capture device. End-effectors can be positioned, and inverse kinematics can be used to determine positions of other nodes in the avatar model. A short sketch of a standard two-link IK solution follows this entry.
    Type: Application
    Filed: August 26, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Alex A. Kipman, Kudo Tsunoda, Jeffrey N. Margolis, Scott W. Sims, Nicholas D. Burton, Andrew Wilson
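    The abstract does not give the IK formulation; the standard planar two-link (law-of-cosines) solution is one common choice for a limb whose end-effector has been placed:

    ```python
    import math

    def two_link_ik(target_x, target_y, l1, l2):
        """Shoulder and elbow angles (radians) for link lengths l1, l2."""
        d_sq = target_x**2 + target_y**2
        # Clamp so unreachable targets stretch the limb instead of raising.
        cos_elbow = max(-1.0, min(1.0, (d_sq - l1*l1 - l2*l2) / (2*l1*l2)))
        elbow = math.acos(cos_elbow)
        shoulder = math.atan2(target_y, target_x) - math.atan2(
            l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
        return shoulder, elbow

    print(two_link_ik(0.4, 0.2, l1=0.3, l2=0.25))
    ```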
  • Publication number: 20100306714
    Abstract: Systems, methods and computer readable media are disclosed for gesture shortcuts. A user's movement or body position is captured by a capture device of a system, and is used as input to control the system. For a system-recognized gesture, there may be a full version of the gesture and a shortcut of the gesture. Where the system recognizes that either the full version of the gesture or the shortcut of the gesture has been performed, it sends an indication that the system-recognized gesture was observed to a corresponding application. Where the shortcut comprises a subset of the full version of the gesture, and both the shortcut and the full version of the gesture are recognized as the user performs the full version of the gesture, the system recognizes that only a single performance of the gesture has occurred and indicates as much to the application. A small sketch of this single-report rule follows this entry.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Stephen Latta, Kevin Geisner, John Clavin, Kudo Tsunoda, Kathryn Stone Perez, Alex Kipman, Relja Markovic, Gregory N. Snook
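    A sketch of the single-report rule; the refractory window is an invented mechanism, since the abstract does not say how duplicate detections are suppressed.

    ```python
    class ShortcutArbiter:
        """Report a gesture once even if both its shortcut and full form fire."""

        def __init__(self, window_frames=30):
            self.window = window_frames
            self.last_fired = -10**9

        def observe(self, frame: int) -> bool:
            """True if this detection should be reported to the application."""
            if frame - self.last_fired < self.window:
                return False              # same performance; already reported
            self.last_fired = frame
            return True

    arbiter = ShortcutArbiter()
    print(arbiter.observe(100))           # True  (shortcut recognized)
    print(arbiter.observe(110))           # False (full gesture, same motion)
    ```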
  • Publication number: 20100306712
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know which gestures are applicable for an executing application, or may not know how to perform them. User motion data and/or the outputs of filters corresponding to gestures may be analyzed to determine the cases where assistance to the user in performing a gesture is appropriate.
    Type: Application
    Filed: May 29, 2009
    Publication date: December 2, 2010
    Applicant: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
  • Publication number: 20100277489
    Abstract: It may be desirable to apply corrective data to aspects of a captured image or of a user-performed gesture, and to display a visual representation that corresponds to the corrective data. The captured motion may be any motion in the physical space that is captured by the capture device, such as a camera. Aspects of a skeletal or mesh model of a person, generated from the image data captured by the capture device, may be modified prior to animation. The modification may be made to the model generated from image data that represents a target or a target's motion, including user gestures, in the physical space. For example, certain joints of a skeletal model may be readjusted or realigned. A model of a target may be modified by applying differential correction, magnetism principles, binary snapping, confinement of virtual movement to defined spaces, or the like.
    Type: Application
    Filed: May 1, 2009
    Publication date: November 4, 2010
    Applicant: Microsoft Corporation
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Kudo Tsunoda, Darren Alexander Bennett