Patents by Inventor Alex A. Kipman

Alex A. Kipman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20130311944
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Application
    Filed: July 29, 2013
    Publication date: November 21, 2013
    Applicant: MICROSOFT CORPORATION
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
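The handle concept in the abstract above reduces naturally to a small data structure: an object-scoped whitelist of actions plus the affordances that guide the user. Below is a minimal illustrative sketch, not the patent's implementation; the names (Handle, perform) and the affordance strings are invented:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Handle:
    """A graphical handle attached to an on-screen object. The handle
    whitelists the actions a user may perform on the object and carries
    affordances (visual hints) that guide the interaction."""
    target: str                                   # name of the on-screen object
    actions: dict[str, Callable[[], None]] = field(default_factory=dict)
    affordances: list[str] = field(default_factory=list)

    def perform(self, action: str) -> None:
        # Only actions the handle defines are permitted on the object.
        if action not in self.actions:
            raise PermissionError(f"{action!r} is not allowed on {self.target}")
        self.actions[action]()

menu = Handle(
    target="navigation_menu",
    actions={"scroll_up": lambda: print("scrolling up"),
             "scroll_down": lambda: print("scrolling down")},
    affordances=["rail highlights when hand hovers", "arrow pulses on grab"],
)
menu.perform("scroll_down")   # allowed; anything else raises
```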
  • Patent number: 8542252
    Abstract: Techniques may comprise identifying surfaces, textures, and object dimensions from unorganized point clouds derived from a capture device, such as a depth sensing device. Employing target digitization may comprise surface extraction, identifying points in a point cloud, labeling surfaces, computing object properties, tracking changes in object properties over time, and increasing confidence in the object boundaries and identity as additional frames are captured. If the point cloud data includes an object, a model of the object may be generated. Feedback of the model associated with a particular object may be generated and provided in real time to the user. Further, the model of the object may be tracked in response to any movement of the object in the physical space such that the model may be adjusted to mimic changes or movement of the object, or to increase the fidelity of the target's characteristics.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: September 24, 2013
    Assignee: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas Burton, Andrew Wilson, Diego Fernandes Nehab
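As a rough illustration of the frame-over-frame digitization loop described above (tracking an object in a point cloud and raising or lowering confidence as successive frames agree), here is a hedged sketch; the radius and gain parameters are invented, not from the patent:

```python
import numpy as np

def update_object_confidence(points: np.ndarray,
                             centroid: np.ndarray,
                             confidence: float,
                             radius: float = 0.25,
                             gain: float = 0.1) -> tuple[np.ndarray, float]:
    """One illustrative digitization step: points (N, 3) from the current
    depth frame that fall near the tracked centroid are treated as the
    object; frame-to-frame agreement raises confidence in its identity."""
    near = points[np.linalg.norm(points - centroid, axis=1) < radius]
    if len(near) == 0:
        return centroid, max(0.0, confidence - gain)   # lost sight of it
    new_centroid = near.mean(axis=0)                   # track movement
    moved = np.linalg.norm(new_centroid - centroid)
    # Small, consistent motion increases confidence; large jumps decrease it.
    confidence += gain if moved < radius / 2 else -gain
    return new_centroid, min(1.0, max(0.0, confidence))

frame = np.random.rand(200, 3)                         # stand-in point cloud
c, conf = update_object_confidence(frame, np.array([0.5, 0.5, 0.5]), 0.5)
print(c, conf)
```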
  • Publication number: 20130194259
    Abstract: A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.
    Type: Application
    Filed: January 27, 2012
    Publication date: August 1, 2013
    Inventors: Darren Bennett, Brian Mount, Stephen Latta, Alex Kipman, Ryan Hastings, Arthur Tomlin, Sebastian Sylvan, Daniel McCulloch, Jonathan Steed, Jason Scott, Mathew Lamb
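The augmentation pipeline above takes eye-tracking, lighting, and depth inputs from the head-mounted display and adjusts how a virtual object is rendered. A toy sketch under assumed inputs; the FrameInput fields, constants, and render parameters are hypothetical, not the patent's:

```python
from dataclasses import dataclass

@dataclass
class FrameInput:                 # hypothetical sensor bundle from the HMD
    gaze_target: str | None       # object id the eye tracker says is looked at
    ambient_lux: float            # lighting estimate of the physical room
    object_distance_m: float      # depth of the real-world anchor surface

def augment(obj_id: str, frame: FrameInput) -> dict:
    """Produce render parameters for one virtual object so it matches the
    physical environment and responds to gaze, per the abstract above."""
    brightness = min(1.0, frame.ambient_lux / 500.0)   # match room lighting
    scale = 1.0 / max(frame.object_distance_m, 0.1)    # nearer anchor -> larger
    highlighted = frame.gaze_target == obj_id          # eye-tracking response
    return {"object": obj_id, "brightness": brightness,
            "scale": scale, "highlight": highlighted}

print(augment("wizard", FrameInput("wizard", 320.0, 2.0)))
```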
  • Patent number: 8499257
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Grant
    Filed: February 9, 2010
    Date of Patent: July 30, 2013
    Assignee: Microsoft Corporation
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
  • Patent number: 8448094
    Abstract: Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
    Type: Grant
    Filed: March 25, 2009
    Date of Patent: May 21, 2013
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, R. Stephen Polzin, Kudo Tsunoda, Darren Bennett, Stephen Latta, Mark Finocchio, Gregory G. Snook, Relja Markovic
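The abstract names three modules: algorithmic preprocessing that identifies the natural user input, a gesture module with a gesture library, and a mapping module that emits a legacy controller input. A minimal sketch of that pipeline, with invented gesture labels and controller names:

```python
GESTURE_LIBRARY = {                # gesture module's library (illustrative)
    "arm_swipe_left": "swipe_left",
    "arm_swipe_right": "swipe_right",
}

LEGACY_MAPPING = {                 # mapping module: gesture -> controller input
    "swipe_left": "DPAD_LEFT",
    "swipe_right": "DPAD_RIGHT",
}

def preprocess(raw_frames: list[str]) -> str:
    """Algorithmic preprocessing: identify the natural user input in the
    input data (stubbed here as picking the dominant frame label)."""
    return max(set(raw_frames), key=raw_frames.count)

def to_legacy_input(raw_frames: list[str]) -> str | None:
    natural_input = preprocess(raw_frames)
    gesture = GESTURE_LIBRARY.get(natural_input)       # associate with gesture
    return LEGACY_MAPPING.get(gesture) if gesture else None

# e.g. a burst of skeletal frames classified as a left swipe
print(to_legacy_input(["arm_swipe_left"] * 4 + ["noise"]))  # -> DPAD_LEFT
```

The legacy system never sees the natural input; it only receives the controller input it already understands, which is the point of the mapping layer.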
  • Patent number: 8418085
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know which gestures are applicable for an executing application, or may not understand how to perform them. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.
    Type: Grant
    Filed: May 29, 2009
    Date of Patent: April 9, 2013
    Assignee: Microsoft Corporation
    Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
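One plausible reading of analyzing gesture-filter outputs to decide when assistance is appropriate: repeated near-miss scores suggest the user is attempting a gesture but failing to complete it. The thresholds below are invented for illustration, not taken from the patent:

```python
def needs_gesture_help(filter_scores: list[float],
                       threshold: float = 0.8,
                       near_miss: float = 0.5,
                       attempts: int = 3) -> bool:
    """Illustrative heuristic: if a gesture filter repeatedly scores in the
    'near miss' band (trying but never completing), offer the user help."""
    misses = [s for s in filter_scores if near_miss <= s < threshold]
    return len(misses) >= attempts and max(filter_scores) < threshold

# Three near-complete attempts at a gesture, none crossing the threshold.
print(needs_gesture_help([0.55, 0.6, 0.72, 0.3]))  # True -> show assistance
```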
  • Patent number: 8390680
    Abstract: Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.
    Type: Grant
    Filed: July 9, 2009
    Date of Patent: March 5, 2013
    Assignee: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
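A toy version of the deduce-then-apply loop described above; the feature names, thresholds, and attribute table are invented stand-ins, not the patent's classifier:

```python
def deduce_temperament(features: dict[str, float]) -> str:
    """Toy classifier over detectable characteristics (e.g. smile intensity,
    posture openness in [0, 1]); thresholds are invented for illustration."""
    if features.get("smile", 0) > 0.6 and features.get("posture_open", 0) > 0.5:
        return "happy"
    if features.get("brow_furrow", 0) > 0.6:
        return "frustrated"
    return "neutral"

TEMPERAMENT_ATTRIBUTES = {         # attributes applied to the avatar
    "happy": {"expression": "smile", "stance": "bouncy"},
    "frustrated": {"expression": "frown", "stance": "slumped"},
    "neutral": {"expression": "relaxed", "stance": "idle"},
}

frame = {"smile": 0.8, "posture_open": 0.7}    # one frame of tracked features
print(TEMPERAMENT_ATTRIBUTES[deduce_temperament(frame)])
```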
  • Publication number: 20120320013
    Abstract: Embodiments are disclosed that relate to sharing media streams capturing different perspectives of an event. For example, one embodiment provides, on a computing device, a method including storing an event definition for an event, receiving from each capture device of a plurality of capture devices a request to share a media stream provided by the capture device, receiving a media stream from each capture device of the plurality of capture devices, and associating a subset of media streams from the plurality of capture devices with the event based upon the event definition. The method further includes receiving a request for transmission of a selected media stream associated with the event, and sending the selected media stream associated with the event to the requesting capture device.
    Type: Application
    Filed: June 16, 2011
    Publication date: December 20, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Kathryn Stone Perez, Alex Kipman, Andrew Fuller
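An event definition plausibly bounds a time window and a location; shared streams falling inside both are associated with the event. A hedged sketch with invented fields and a crude bounding-box location test:

```python
from dataclasses import dataclass

@dataclass
class EventDefinition:             # hypothetical: an event is a time+place box
    name: str
    start: float; end: float                     # epoch seconds
    lat: float; lon: float; radius_deg: float

@dataclass
class MediaStream:
    device_id: str
    timestamp: float
    lat: float; lon: float

def associate(streams: list[MediaStream],
              event: EventDefinition) -> list[MediaStream]:
    """Keep the subset of shared streams whose capture time and location
    fall inside the stored event definition."""
    return [s for s in streams
            if event.start <= s.timestamp <= event.end
            and abs(s.lat - event.lat) <= event.radius_deg
            and abs(s.lon - event.lon) <= event.radius_deg]

game = EventDefinition("ballgame", 0, 7200, 47.59, -122.33, 0.01)
subset = associate([MediaStream("cam1", 3600, 47.591, -122.331),
                    MediaStream("cam2", 9999, 47.59, -122.33)], game)
print([s.device_id for s in subset])   # ['cam1'] -- cam2 is outside the window
```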
  • Publication number: 20110279249
    Abstract: A system is disclosed that presents the user with a 3-D virtual environment as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user with a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user's position or movement causes the user to touch the virtual object, that contact is detected, and corresponding haptic feedback is provided to the user.
    Type: Application
    Filed: July 27, 2011
    Publication date: November 17, 2011
    Applicant: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
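The core loop above is: map a depth-camera position into the virtual environment, test for contact with a virtual object, and fire haptic feedback. A minimal sketch; the affine mapping, touch radius, and haptic command are illustrative assumptions:

```python
import numpy as np

def world_to_virtual(user_pos: np.ndarray,
                     scale: float = 1.0,
                     offset: np.ndarray = np.zeros(3)) -> np.ndarray:
    """Correlate a depth-camera position with a virtual-environment position
    (a simple affine map stands in for the parsing step in the abstract)."""
    return user_pos * scale + offset

def touch_feedback(hand_pos: np.ndarray,
                   object_pos: np.ndarray,
                   touch_radius: float = 0.05) -> str | None:
    """If the mapped hand position intersects the virtual object, emit a
    haptic command (stubbed as a string for a hypothetical feedback device)."""
    if np.linalg.norm(world_to_virtual(hand_pos) - object_pos) < touch_radius:
        return "haptic_pulse(strength=0.7)"
    return None

print(touch_feedback(np.array([0.5, 1.2, 2.0]), np.array([0.52, 1.2, 2.0])))
```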
  • Publication number: 20110246329
    Abstract: An on-screen shopping application is provided that reacts to a human target user's motions to provide a shopping experience to the user. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application.
    Type: Application
    Filed: April 1, 2010
    Publication date: October 6, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
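A gesture-to-command table is one simple way to realize the translation step described above; the gesture names and shopping commands are invented for illustration:

```python
GESTURE_COMMANDS = {               # article manipulation + transaction control
    "grab": "pick_up_article",
    "rotate_hand": "rotate_article",
    "hold_to_chest": "try_on_article",
    "push_forward": "add_to_cart",
    "raise_both_hands": "checkout",
}

def handle_gesture(gesture: str, article: str) -> str:
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return f"ignored {gesture!r}"            # not a shopping gesture
    return f"{command}({article!r})"             # forwarded to the application

print(handle_gesture("push_forward", "red_jacket"))
```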
  • Patent number: 8009022
    Abstract: A system is disclosed that presents the user with a 3-D virtual environment as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment. In an exemplary embodiment, the system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user with a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user's position or movement causes the user to touch the virtual object, that contact is detected, and corresponding haptic feedback is provided to the user.
    Type: Grant
    Filed: July 12, 2010
    Date of Patent: August 30, 2011
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
  • Publication number: 20110197161
    Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
    Type: Application
    Filed: February 9, 2010
    Publication date: August 11, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
  • Publication number: 20110173204
    Abstract: Techniques for assigning a gesture dictionary in a gesture-based system to a user comprise capturing data representative of a user in a physical space. In a gesture-based system, gestures may control aspects of a computing environment or application, where the gestures may be derived from a user's position or movement in a physical space. In an example embodiment, the system may monitor a user's gestures and select a particular gesture dictionary in response to the manner in which the user performs the gestures. The gesture dictionary may be assigned in real time with respect to the capture of the data representative of a user's gesture. The system may generate calibration tests for assigning a gesture dictionary. The system may track the user during a set of short gesture calibration tests and assign the gesture dictionary based on a compilation of the data captured that represents the user's gestures.
    Type: Application
    Filed: January 8, 2010
    Publication date: July 14, 2011
    Applicant: Microsoft Corporation
    Inventors: Oscar E. Murillo, Andy Wilson, Alex A. Kipman, Janet Galore
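A sketch of dictionary assignment from the short calibration tests described above, using mean filter score as an invented matching metric (the dictionary names are also invented):

```python
def assign_dictionary(calibration_scores: dict[str, list[float]]) -> str:
    """Pick the gesture dictionary whose reference style the user's
    calibration gestures matched best (highest mean filter score)."""
    return max(calibration_scores,
               key=lambda name: sum(calibration_scores[name]) /
                                len(calibration_scores[name]))

# Scores from calibration tests run against three candidate dictionaries.
scores = {"broad_gestures": [0.4, 0.5],
          "subtle_gestures": [0.8, 0.9],
          "seated_gestures": [0.6, 0.7]}
print(assign_dictionary(scores))   # subtle_gestures
```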
  • Patent number: 7974443
    Abstract: A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.
    Type: Grant
    Filed: November 23, 2010
    Date of Patent: July 5, 2011
    Assignee: Microsoft Corporation
    Inventors: Alex Kipman, Mark Finocchio, Ryan M. Geiss, Johnny Chung Lee, Charles Claudius Marais, Zsolt Mathe
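The selection step above compares two candidate poses against the observed depth image. A toy sketch using mean absolute depth difference as a stand-in error measure; the real method's comparison is not specified at this level of detail:

```python
import numpy as np

def select_pose(observed: np.ndarray,
                exemplar: np.ndarray,
                model_fit: np.ndarray) -> str:
    """Choose between the exemplar pose (from the prior-trained collection)
    and the model-fitting pose (rasterized model adjusted toward the observed
    depth image) by whichever synthesized depth image differs less."""
    err_exemplar = np.abs(observed - exemplar).mean()
    err_model = np.abs(observed - model_fit).mean()
    return "exemplar" if err_exemplar <= err_model else "model_fitting"

obs = np.random.rand(64, 64)                       # stand-in observed depth map
print(select_pose(obs, obs + 0.10, obs + 0.02))    # model_fitting wins here
```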
  • Patent number: 7950000
    Abstract: Architecture that facilitates management of a build process according to a level of trust of a build entity. The build process processes one or more build entities, each of which is associated with a level of trust. These associations are stored in a policy file that is run against the one or more entities at the start of the build process. The build process runs at a permission level that is representative of the lowest level of trust of the build entities. The levels of trust include at least trusted, semi-trusted, and untrusted levels. If the lowest level is untrusted, the build process fails, and the user is notified.
    Type: Grant
    Filed: March 17, 2004
    Date of Patent: May 24, 2011
    Assignee: Microsoft Corporation
    Inventors: Alex A. Kipman, Rajeev Goel, Jomo A. Fisher, Christopher A. Flaat, Chad W. Royal
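A compact model of the policy described above: each build entity maps to a trust level, the build runs at the minimum level present, and any untrusted entity fails the build with a notification. The file names and Trust enum are illustrative, not the actual policy-file format:

```python
from enum import IntEnum

class Trust(IntEnum):              # ordering matters: the lowest level governs
    UNTRUSTED = 0
    SEMI_TRUSTED = 1
    TRUSTED = 2

# A stand-in for the policy file mapping build entities to trust levels.
POLICY = {"core.csproj": Trust.TRUSTED,
          "thirdparty.targets": Trust.SEMI_TRUSTED}

def run_build(entities: list[str]) -> Trust:
    """Run the build at the permission level of the least-trusted entity;
    fail (notifying the user) if anything is untrusted."""
    level = min(POLICY.get(e, Trust.UNTRUSTED) for e in entities)
    if level == Trust.UNTRUSTED:
        raise PermissionError("build failed: untrusted build entity present")
    return level

print(run_build(["core.csproj", "thirdparty.targets"]).name)  # SEMI_TRUSTED
```

Unknown entities defaulting to UNTRUSTED is a deliberately conservative choice in this sketch: anything the policy file does not vouch for stops the build.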
  • Publication number: 20110058709
    Abstract: A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.
    Type: Application
    Filed: November 23, 2010
    Publication date: March 10, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Alex Kipman, Mark Finocchio, Ryan M. Geiss, Johnny Chung Lee, Charles Claudius Marais, Zsolt Mathe
  • Publication number: 20110055846
    Abstract: A capture device can detect gestures made by a user. The gestures can be used to control a gesture-unaware program.
    Type: Application
    Filed: August 31, 2009
    Publication date: March 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn S. Perez, Kevin A. Geisner, Alex A. Kipman, Kudo Tsunoda
  • Publication number: 20110025689
    Abstract: Techniques for auto-generating the target's visual representation may reduce or eliminate the manual input required for the generation of the target's visual representation. For example, a system having a capture device may detect various features of a user in the physical space and make feature selections from a library of visual representation feature options based on the detected features. The system can automatically apply the selections to the visual representation of the user based on the detected features. Alternatively, the system may make selections that narrow the number of options for features from which the user chooses. The system may apply the selections to the user in real time as well as make updates to the features selected and applied to the target's visual representation in real time.
    Type: Application
    Filed: July 29, 2009
    Publication date: February 3, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
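One way to read the auto-selection step above: detected measurements are matched against a library of feature options, replacing manual avatar setup. The library contents and thresholds below are invented:

```python
FEATURE_LIBRARY = {                # illustrative visual-representation options
    "hair": {"short": {"max_len": 0.3}, "long": {"max_len": 1.0}},
    "height": {"short": {"max_m": 1.6}, "tall": {"max_m": 2.2}},
}

def auto_select(detected: dict[str, float]) -> dict[str, str]:
    """Map detected measurements to the first library option that fits,
    removing the need for manual avatar setup (thresholds invented)."""
    choices = {}
    for feature, options in FEATURE_LIBRARY.items():
        value = detected.get(feature, 0.0)
        for name, spec in options.items():
            if value <= next(iter(spec.values())):
                choices[feature] = name
                break
    return choices

print(auto_select({"hair": 0.6, "height": 1.8}))
# {'hair': 'long', 'height': 'tall'}
```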
  • Publication number: 20110007079
    Abstract: Data captured with respect to a human may be analyzed and applied to a visual representation of a user such that the visual representation begins to reflect the behavioral characteristics of the user. For example, a system may have a capture device that captures data about the user in the physical space. The system may identify the user's characteristics, tendencies, voice patterns, behaviors, gestures, etc. Over time, the system may learn a user's tendencies and intelligently apply animations to the user's avatar such that the avatar behaves and responds in accordance with the identified behaviors of the user. The animations applied to the avatar may be animations selected from a library of pre-packaged animations, or the animations may be entered and recorded by the user into the avatar's library.
    Type: Application
    Filed: July 13, 2009
    Publication date: January 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
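A toy model of the learning loop described above: observed behaviors are counted over time, and the avatar's idle animation is chosen to match the user's dominant habit. The behavior labels and animation names are invented:

```python
from collections import Counter

class AvatarBehaviorModel:
    """Toy sketch of the learning loop: count observed user behaviors over
    time, then pick animations from the library that match the habit."""

    def __init__(self, animation_library: dict[str, str]):
        self.library = animation_library           # behavior -> animation
        self.observations: Counter[str] = Counter()

    def observe(self, behavior: str) -> None:
        self.observations[behavior] += 1           # learn tendencies over time

    def idle_animation(self) -> str:
        if not self.observations:
            return self.library["default"]
        habit, _ = self.observations.most_common(1)[0]
        return self.library.get(habit, self.library["default"])

model = AvatarBehaviorModel({"default": "stand", "paces": "pace_loop",
                             "crosses_arms": "arms_crossed_idle"})
for seen in ["paces", "crosses_arms", "paces"]:
    model.observe(seen)
print(model.idle_animation())   # pace_loop -- mimics the user's habit
```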
  • Publication number: 20110007142
    Abstract: Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.
    Type: Application
    Filed: July 9, 2009
    Publication date: January 13, 2011
    Applicant: Microsoft Corporation
    Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson