Patents by Inventor Alex Kipman
Alex Kipman has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 9026596
Abstract: Embodiments are disclosed that relate to sharing media streams capturing different perspectives of an event. For example, one embodiment provides, on a computing device, a method including storing an event definition for an event, receiving from each capture device of a plurality of capture devices a request to share a media stream provided by the capture device, receiving a media stream from each capture device of the plurality of capture devices, and associating a subset of media streams from the plurality of capture devices with the event based upon the event definition. The method further includes receiving a request for transmission of a selected media stream associated with the event, and sending the selected media stream associated with the event to the requesting capture device.
Type: Grant
Filed: June 16, 2011
Date of Patent: May 5, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kathryn Stone Perez, Alex Kipman, Andrew Fuller
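As an illustration of the association step the abstract describes, here is a minimal Python sketch that matches incoming streams to an event by place and time. The `EventDefinition` and `MediaStream` types, the haversine radius test, and every field name are assumptions made for the example, not terms from the patent.

```python
from dataclasses import dataclass
import math

@dataclass
class EventDefinition:          # hypothetical: an event is a place plus a time window
    latitude: float
    longitude: float
    start: float                # epoch seconds
    end: float
    radius_km: float = 1.0

@dataclass
class MediaStream:              # hypothetical: metadata sent by each capture device
    device_id: str
    latitude: float
    longitude: float
    timestamp: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * 6371.0 * math.asin(math.sqrt(a))

def associate_streams(event, streams):
    """Return the subset of streams inside the event's place and time window."""
    return [s for s in streams
            if event.start <= s.timestamp <= event.end
            and haversine_km(event.latitude, event.longitude,
                             s.latitude, s.longitude) <= event.radius_km]
```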
-
Patent number: 9015638
Abstract: Techniques for managing a set of states associated with a capture device are disclosed herein. The capture device may detect and bind to users, and may provide feedback about whether the capture device is bound to, or detecting, a user. Techniques are also disclosed wherein virtual ports may be associated with users bound to a capture device, and feedback about the state of virtual ports may be provided.
Type: Grant
Filed: May 1, 2009
Date of Patent: April 21, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alex Kipman, Kathryn Stone Perez, R. Stephen Polzin, William Guo
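A sketch of the binding states and virtual-port bookkeeping the abstract alludes to; the three state names, the `VirtualPortManager` class, and the print-based feedback are illustrative assumptions only.

```python
from enum import Enum, auto

class BindingState(Enum):
    UNBOUND = auto()    # no user detected
    DETECTED = auto()   # user seen by the capture device, not yet bound
    BOUND = auto()      # user bound to a virtual port

class VirtualPortManager:
    def __init__(self, num_ports=4):
        self.ports = {p: None for p in range(num_ports)}   # port -> user_id
        self.states = {}                                   # user_id -> BindingState

    def detect(self, user_id):
        self.states[user_id] = BindingState.DETECTED
        print(f"feedback: {user_id} detected")             # user-visible feedback

    def bind(self, user_id):
        """Bind a detected user to the lowest free virtual port, if any."""
        for port, owner in self.ports.items():
            if owner is None:
                self.ports[port] = user_id
                self.states[user_id] = BindingState.BOUND
                print(f"feedback: {user_id} bound to port {port}")
                return port
        return None                                        # no free port; stays DETECTED
```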
-
Publication number: 20150035861
Abstract: Embodiments that relate to presenting a plurality of visual information density levels for a plurality of geo-located data items in a mixed reality environment are disclosed. For example, in one disclosed embodiment a graduated information delivery program receives information for a selected geo-located data item and provides a minimum visual information density level for the item to a head-mounted display device. The program receives via the head-mounted display device a user input corresponding to the selected geo-located data item. Based on the input, the program provides an increasing visual information density level for the selected item to the head-mounted display device for display within the mixed reality environment.
Type: Application
Filed: July 31, 2013
Publication date: February 5, 2015
Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda
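One way to read "graduated information delivery" is a per-item density level that user input raises; this sketch assumes three invented levels (label, summary, full detail) and a hypothetical `GeoItem` type.

```python
from dataclasses import dataclass

@dataclass
class GeoItem:
    name: str
    summary: str
    details: str
    level: int = 0      # 0 = minimum visual information density

    def render(self):
        """Text to display at the current density level."""
        views = [self.name,                                       # minimum: a label
                 f"{self.name}: {self.summary}",                  # more: a summary
                 f"{self.name}: {self.summary}\n{self.details}"]  # full detail
        return views[min(self.level, len(views) - 1)]

    def on_user_input(self):
        """Each qualifying input (gaze dwell, tap, voice) raises the density level."""
        self.level += 1

cafe = GeoItem("Cafe", "Open until 9 pm", "Espresso bar, free Wi-Fi, seats 30")
cafe.on_user_input()
print(cafe.render())    # Cafe: Open until 9 pm
```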
-
Publication number: 20140375683
Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes identifying one or more objects located outside a field of view of a user, and for each object of the one or more objects, providing to the user an indication of positional information associated with the object.
Type: Application
Filed: June 25, 2013
Publication date: December 25, 2014
Inventors: Thomas George Salter, Ben Sugden, Daniel Deptford, Robert Crocco, Jr., Brian Keane, Laura Massey, Alex Kipman, Peter Tobias Kinnebrew, Nicholas Kamuda, Zachary Quarles, Michael Scavezze, Ryan Hastings, Cameron Brown, Tony Ambrus, Jason Scott, John Bevis, Jamie B. Kirschenbaum, Nicholas Gervase Fajt, Michael Klucher, Relja Markovic, Stephen Latta, Daniel McCulloch
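The positional indication might reduce to an angle test against the display's field of view, as in this sketch; the 60-degree FOV, the 2-D top-down geometry, and the edge labels are assumptions.

```python
import math

def bearing_deg(user_xy, forward_xy, object_xy):
    """Signed angle (degrees) from the user's forward direction to the object,
    positive meaning counterclockwise (to the user's left) in a top-down view."""
    ox, oy = object_xy[0] - user_xy[0], object_xy[1] - user_xy[1]
    angle = math.degrees(math.atan2(oy, ox) - math.atan2(forward_xy[1], forward_xy[0]))
    return (angle + 180) % 360 - 180        # normalize to [-180, 180)

def edge_indicator(user_xy, forward_xy, object_xy, fov_deg=60.0):
    """None if the object is on-screen, else which display edge should hint at it."""
    angle = bearing_deg(user_xy, forward_xy, object_xy)
    if abs(angle) <= fov_deg / 2:
        return None                         # inside the field of view
    return "left edge" if angle > 0 else "right edge"

print(edge_indicator((0, 0), (0, 1), (-5, 1)))   # left edge
```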
-
Publication number: 20140160001
Abstract: Embodiments that relate to presenting a mixed reality environment via a mixed reality display device are disclosed. For example, one disclosed embodiment provides a method for presenting a mixed reality environment via a head-mounted display device. The method includes using head pose data to generally identify one or more gross selectable targets within a sub-region of a spatial region occupied by the mixed reality environment. The method further includes specifically identifying a fine selectable target from among the gross selectable targets based on eye-tracking data. Gesture data is then used to identify a gesture, and an operation associated with the identified gesture is performed on the fine selectable target.
Type: Application
Filed: December 6, 2012
Publication date: June 12, 2014
Inventors: Peter Tobias Kinnebrew, Alex Kipman
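The coarse-to-fine pipeline reads naturally as three stages, sketched below; the radius, the gesture table, and the `Target` type are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    x: float
    y: float

def gross_targets(targets, head_x, head_y, radius=2.0):
    """Head pose data: all targets inside a coarse sub-region around the head ray."""
    return [t for t in targets
            if (t.x - head_x) ** 2 + (t.y - head_y) ** 2 <= radius ** 2]

def fine_target(candidates, gaze_x, gaze_y):
    """Eye-tracking data: the single candidate nearest the gaze point."""
    return min(candidates,
               key=lambda t: (t.x - gaze_x) ** 2 + (t.y - gaze_y) ** 2)

OPERATIONS = {"air_tap": "activate", "pinch": "grab"}   # gesture -> operation

def select_and_act(targets, head, gaze, gesture):
    candidates = gross_targets(targets, *head)          # 1. gross selection
    if not candidates or gesture not in OPERATIONS:
        return None
    return fine_target(candidates, *gaze), OPERATIONS[gesture]   # 2. fine + 3. act
```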
-
Publication number: 20130311944
Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
Type: Application
Filed: July 29, 2013
Publication date: November 21, 2013
Applicant: Microsoft Corporation
Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
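A handle, as described, is essentially an object that whitelists actions and advertises an affordance; this small sketch invents the `Handle` class and action names to make that concrete.

```python
class Handle:
    """An on-screen control that defines which actions its object permits."""

    def __init__(self, target, actions):
        self.target = target
        self.actions = set(actions)             # e.g. {"scroll_vertical"}

    def affordance(self):
        """A cue shown once the user engages the handle, guiding the interaction."""
        return f"drag to {'/'.join(sorted(self.actions))} the {self.target}"

    def perform(self, action, amount):
        if action not in self.actions:
            raise ValueError(f"{action!r} is not permitted on the {self.target}")
        print(f"{action} the {self.target} by {amount}")

menu = Handle("navigation menu", ["scroll_vertical"])
print(menu.affordance())            # drag to scroll_vertical the navigation menu
menu.perform("scroll_vertical", 120.0)
```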
-
Patent number: 8542252
Abstract: Techniques may comprise identifying surfaces, textures, and object dimensions from unorganized point clouds derived from a capture device, such as a depth sensing device. Employing target digitization may comprise surface extraction, identifying points in a point cloud, labeling surfaces, computing object properties, tracking changes in object properties over time, and increasing confidence in the object boundaries and identity as additional frames are captured. If the point cloud data includes an object, a model of the object may be generated. Feedback of the model associated with a particular object may be generated and provided in real time to the user. Further, the model of the object may be tracked in response to any movement of the object in the physical space such that the model may be adjusted to mimic changes or movement of the object, or increase the fidelity of the target's characteristics.
Type: Grant
Filed: May 29, 2009
Date of Patent: September 24, 2013
Assignee: Microsoft Corporation
Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas Burton, Andrew Wilson
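The "increasing confidence as additional frames are captured" idea can be shown with a deliberately naive sketch: points are grid-clustered into candidate objects whose confidence rises when they reappear and decays when they do not. The grid clustering and the 0.2/0.1 rates are stand-ins, not the patented surface extraction.

```python
from collections import defaultdict

def cluster(points, cell=0.25):
    """Group 3-D points into coarse grid cells as stand-in candidate objects."""
    groups = defaultdict(list)
    for p in points:
        groups[tuple(int(c // cell) for c in p)].append(p)
    return groups

confidence = defaultdict(float)    # cluster key -> confidence in [0, 1]

def update(frame_points):
    """Raise confidence for clusters seen this frame, decay the rest."""
    seen = set(cluster(frame_points))
    for key in seen | set(confidence):
        if key in seen:
            confidence[key] = min(1.0, confidence[key] + 0.2)
        else:
            confidence[key] = max(0.0, confidence[key] - 0.1)
```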
-
Publication number: 20130194259
Abstract: A system and related methods for visually augmenting an appearance of a physical environment as seen by a user through a head-mounted display device are provided. In one embodiment, a virtual environment generating program receives eye-tracking information, lighting information, and depth information from the head-mounted display. The program generates a virtual environment that models the physical environment and is based on the lighting information and the distance of a real-world object from the head-mounted display. The program visually augments a virtual object representation in the virtual environment based on the eye-tracking information, and renders the virtual object representation on a transparent display of the head-mounted display device.
Type: Application
Filed: January 27, 2012
Publication date: August 1, 2013
Inventors: Darren Bennett, Brian Mount, Stephen Latta, Alex Kipman, Ryan Hastings, Arthur Tomlin, Sebastian Sylvan, Daniel McCulloch, Jonathan Steed, Jason Scott, Mathew Lamb
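A toy version of the augmentation decision: light the virtual object to match the sensed ambient level, attenuate with sensed distance, and highlight it when eye tracking reports gaze. The attenuation formula and the data model are assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    name: str
    distance_m: float        # from the head-mounted display's depth information
    brightness: float = 1.0
    highlighted: bool = False

def augment(obj, ambient_light, gazed_at):
    """Match scene lighting, fall off with distance, highlight on gaze."""
    obj.brightness = ambient_light / (1.0 + 0.1 * obj.distance_m)
    obj.highlighted = gazed_at
    return obj

print(augment(VirtualObject("hologram", distance_m=3.0),
              ambient_light=0.8, gazed_at=True))
```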
-
Patent number: 8499257
Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
Type: Grant
Filed: February 9, 2010
Date of Patent: July 30, 2013
Assignee: Microsoft Corporation
Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
-
Patent number: 8448094
Abstract: Systems and methods for mapping natural input devices to legacy system inputs are disclosed. One example system may include a computing device having an algorithmic preprocessing module configured to receive input data containing a natural user input and to identify the natural user input in the input data. The computing device may further include a gesture module coupled to the algorithmic preprocessing module, the gesture module being configured to associate the natural user input to a gesture in a gesture library. The computing device may also include a mapping module to map the gesture to a legacy controller input, and to send the legacy controller input to a legacy system in response to the natural user input.
Type: Grant
Filed: March 25, 2009
Date of Patent: May 21, 2013
Assignee: Microsoft Corporation
Inventors: Alex Kipman, R. Stephen Polzin, Kudo Tsunoda, Darren Bennett, Stephen Latta, Mark Finocchio, Gregory G. Snook, Relja Markovic
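The abstract's three modules map cleanly onto three functions, sketched here with invented gesture names and an invented button mapping; the real modules would run over skeletal sensor data rather than a toy dict.

```python
GESTURE_LIBRARY = {"hand_raised": "jump_gesture",       # gesture module's lookup
                   "lean_left": "steer_left_gesture"}
LEGACY_MAPPING = {"jump_gesture": "BUTTON_A",           # mapping module's table
                  "steer_left_gesture": "DPAD_LEFT"}

def preprocess(raw_frames):
    """Algorithmic preprocessing: reduce raw sensor frames to a feature label."""
    return raw_frames.get("dominant_feature")           # toy stand-in for real analysis

def to_legacy_input(raw_frames):
    """Natural user input in, legacy controller input out."""
    gesture = GESTURE_LIBRARY.get(preprocess(raw_frames))
    return LEGACY_MAPPING.get(gesture) if gesture else None

# A raised hand reaches the legacy system as an ordinary A-button press.
assert to_legacy_input({"dominant_feature": "hand_raised"}) == "BUTTON_A"
```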
-
Patent number: 8418085
Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, may not know what gestures are applicable for an executing application, or may not understand how to perform the applicable gestures. User motion data and/or outputs of filters corresponding to gestures may be analyzed to determine those cases where assistance to the user on performing the gesture is appropriate.
Type: Grant
Filed: May 29, 2009
Date of Patent: April 9, 2013
Assignee: Microsoft Corporation
Inventors: Gregory N. Snook, Stephen Latta, Kevin Geisner, Darren Alexander Bennett, Kudo Tsunoda, Alex Kipman, Kathryn Stone Perez
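One plausible reading of the assistance trigger: a gesture filter that repeatedly scores close to, but under, its recognition threshold suggests a user who is trying and failing. The thresholds below are invented.

```python
def needs_assistance(filter_scores, threshold=0.8, near=0.5, attempts=3):
    """True when the last `attempts` filter outputs were all near misses."""
    recent = filter_scores[-attempts:]
    return len(recent) == attempts and all(near <= s < threshold for s in recent)

assert needs_assistance([0.60, 0.70, 0.75])        # trying but failing: offer help
assert not needs_assistance([0.10, 0.20, 0.90])    # either idle or already succeeding
```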
-
Patent number: 8390680
Abstract: Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.
Type: Grant
Filed: July 9, 2009
Date of Patent: March 5, 2013
Assignee: Microsoft Corporation
Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
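A crude sketch of deducing a temperament from detectable characteristics and applying it to the avatar; the feature names, temperament labels, and vote weights are all invented for the example.

```python
TEMPERAMENT_VOTES = {
    "smile":           {"happy": 1.0},
    "frown":           {"sad": 1.0, "angry": 0.5},
    "slumped_posture": {"sad": 1.0},
    "fast_movement":   {"excited": 1.0, "angry": 0.5},
}

def deduce_temperament(detected_features):
    """Accumulate weighted votes from detected features; highest score wins."""
    scores = {}
    for feature in detected_features:
        for temperament, weight in TEMPERAMENT_VOTES.get(feature, {}).items():
            scores[temperament] = scores.get(temperament, 0.0) + weight
    return max(scores, key=scores.get) if scores else "neutral"

def apply_to_avatar(avatar, temperament):
    avatar["expression"] = temperament      # update the visual representation
    return avatar

print(apply_to_avatar({}, deduce_temperament(["frown", "slumped_posture"])))  # sad
```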
-
Publication number: 20120320013
Abstract: Embodiments are disclosed that relate to sharing media streams capturing different perspectives of an event. For example, one embodiment provides, on a computing device, a method including storing an event definition for an event, receiving from each capture device of a plurality of capture devices a request to share a media stream provided by the capture device, receiving a media stream from each capture device of the plurality of capture devices, and associating a subset of media streams from the plurality of capture devices with the event based upon the event definition. The method further includes receiving a request for transmission of a selected media stream associated with the event, and sending the selected media stream associated with the event to the requesting capture device.
Type: Application
Filed: June 16, 2011
Publication date: December 20, 2012
Applicant: Microsoft Corporation
Inventors: Kathryn Stone Perez, Alex Kipman, Andrew Fuller
-
Publication number: 20110279249
Abstract: A system to present the user a 3-D virtual environment, as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment, is disclosed. In an exemplary embodiment, a system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that is determined, and corresponding haptic feedback is provided to the user.
Type: Application
Filed: July 27, 2011
Publication date: November 17, 2011
Applicant: Microsoft Corporation
Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
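The per-frame touch test driving the haptics might look like this sketch; the linear camera-to-virtual mapping, the spherical object bounds, and the intensity value are assumptions.

```python
def to_virtual(camera_pos, scale=1.0, offset=(0.0, 0.0, 0.0)):
    """Correlate a depth-camera user position with a virtual-space position."""
    return tuple(scale * c + o for c, o in zip(camera_pos, offset))

def touching(user_pos, obj_center, obj_radius=0.1):
    """True when the user position lies within the object's spherical bounds."""
    return sum((u - c) ** 2 for u, c in zip(user_pos, obj_center)) <= obj_radius ** 2

def frame(camera_pos, obj_center, haptics):
    """Per-frame loop body: fire haptic feedback whenever the object is touched."""
    if touching(to_virtual(camera_pos), obj_center):
        haptics(0.8)    # intensity in [0, 1]

frame((0.0, 0.0, 0.0), (0.0, 0.0, 0.05), lambda intensity: print("buzz", intensity))
```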
-
Publication number: 20110246329
Abstract: An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application.
Type: Application
Filed: April 1, 2010
Publication date: October 6, 2011
Applicant: Microsoft Corporation
Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
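The abstract's split between article-manipulation and transaction-control gestures suggests two lookup tables, as in this sketch; every gesture and command name is invented.

```python
MANIPULATION = {"grab": "pick_up_article", "rotate": "rotate_article"}
TRANSACTION = {"push_forward": "add_to_cart", "swipe_down": "checkout"}

def to_command(gesture):
    """Translate a recognized user gesture into a shopping-application command."""
    return MANIPULATION.get(gesture) or TRANSACTION.get(gesture)

for g in ("grab", "push_forward", "wave"):
    print(g, "->", to_command(g))      # wave -> None: not a recognized gesture
```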
-
Patent number: 8009022
Abstract: A system to present the user a 3-D virtual environment, as well as non-visual sensory feedback for interactions the user makes with virtual objects in that environment, is disclosed. In an exemplary embodiment, a system comprises a depth camera that captures user position and movement, a three-dimensional (3-D) display device that presents the user a virtual environment in 3-D, and a haptic feedback device that provides haptic feedback to the user as he interacts with a virtual object in the virtual environment. As the user moves through his physical space, he is captured by the depth camera. Data from that depth camera is parsed to correlate a user position with a position in the virtual environment. Where the user position or movement causes the user to touch the virtual object, that is determined, and corresponding haptic feedback is provided to the user.
Type: Grant
Filed: July 12, 2010
Date of Patent: August 30, 2011
Assignee: Microsoft Corporation
Inventors: Alex Kipman, Kudo Tsunoda, Todd Eric Holmdahl, John Clavin, Kathryn Stone Perez
-
Publication number: 20110197161
Abstract: A system is disclosed for providing on-screen graphical handles to control interaction between a user and on-screen objects. A handle defines what actions a user may perform on the object, such as for example scrolling through a textual or graphical navigation menu. Affordances are provided to guide the user through the process of interacting with a handle.
Type: Application
Filed: February 9, 2010
Publication date: August 11, 2011
Applicant: Microsoft Corporation
Inventors: Andrew Mattingly, Jeremy Hill, Arjun Dayal, Brian Kramp, Ali Vassigh, Christian Klein, Adam Poulos, Alex Kipman, Jeffrey Margolis
-
Patent number: 7974443
Abstract: A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.
Type: Grant
Filed: November 23, 2010
Date of Patent: July 5, 2011
Assignee: Microsoft Corporation
Inventors: Alex Kipman, Mark Finocchio, Ryan M. Geiss, Johnny Chung Lee, Charles Claudius Marais, Zsolt Mathe
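The final selection between the exemplar pose and the model-fitting pose comes down to which rasterizes into a depth image closer to the observation. This sketch treats depth images as flat float lists and takes the rasterizer as a caller-supplied function, both simplifications.

```python
def depth_error(observed, synthesized):
    """Mean absolute per-pixel depth difference between two depth images."""
    return sum(abs(a - b) for a, b in zip(observed, synthesized)) / len(observed)

def select_pose(observed, exemplar_pose, model_fit_pose, rasterize):
    """Keep whichever candidate pose better explains the observed depth image."""
    exemplar_err = depth_error(observed, rasterize(exemplar_pose))
    model_err = depth_error(observed, rasterize(model_fit_pose))
    return exemplar_pose if exemplar_err <= model_err else model_fit_pose
```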
-
Publication number: 20110058709
Abstract: A method of tracking a target includes receiving an observed depth image of the target from a source and analyzing the observed depth image with a prior-trained collection of known poses to find an exemplar pose that represents an observed pose of the target. The method further includes rasterizing a model of the target into a synthesized depth image having a rasterized pose and adjusting the rasterized pose of the model into a model-fitting pose based, at least in part, on differences between the observed depth image and the synthesized depth image. Either the exemplar pose or the model-fitting pose is then selected to represent the target.
Type: Application
Filed: November 23, 2010
Publication date: March 10, 2011
Applicant: Microsoft Corporation
Inventors: Alex Kipman, Mark Finocchio, Ryan M. Geiss, Johnny Chung Lee, Charles Claudius Marais, Zsolt Mathe
-
Publication number: 20110025689
Abstract: Techniques for auto-generating the target's visual representation may reduce or eliminate the manual input required for the generation of the target's visual representation. For example, a system having a capture device may detect various features of a user in the physical space and make feature selections from a library of visual representation feature options based on the detected features. The system can automatically apply the selections to the visual representation of the user based on the detected features. Alternatively, the system may make selections that narrow the number of options for features from which the user chooses. The system may apply the selections to the user in real time, as well as make updates to the features selected and applied to the target's visual representation in real time.
Type: Application
Filed: July 29, 2009
Publication date: February 3, 2011
Applicant: Microsoft Corporation
Inventors: Kathryn Stone Perez, Alex Kipman, Nicholas D. Burton, Andrew Wilson
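Auto-selection with the "narrowed choices" fallback might be sketched like this: detected characteristics filter the feature library, a unique survivor is applied automatically, and multiple survivors become a shortened menu for the user. The library contents and detection keys are invented.

```python
FEATURE_LIBRARY = {
    "hair": {"short_brown": {"length": "short", "color": "brown"},
             "long_brown":  {"length": "long",  "color": "brown"},
             "short_black": {"length": "short", "color": "black"}},
}

def auto_select(detected, slot):
    """Return the library options consistent with what the capture device saw."""
    want = detected.get(slot, {})
    return [name for name, props in FEATURE_LIBRARY[slot].items()
            if all(props.get(k) == v for k, v in want.items())]

options = auto_select({"hair": {"length": "short"}}, "hair")
# One survivor: apply it automatically. Several: present a narrowed choice.
avatar_hair = options[0] if len(options) == 1 else options
print(avatar_hair)      # ['short_brown', 'short_black'] -> narrowed user choice
```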