Patents by Inventor Christopher Vuchetich
Christopher Vuchetich has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Patent number: 9498718
Abstract: Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. These inputs may be provided to a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed to detect a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data to determine the user's intended gesture input. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are provided to the game or application and cause a view perspective within the display environment to be altered.
Type: Grant
Filed: May 29, 2009
Date of Patent: November 22, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephen G. Latta, Gregory N. Snook, Justin McBride, Arthur Charles Tomlin, Peter Sarrett, Kevin Geisner, Relja Markovic, Christopher Vuchetich
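As a rough illustration of the kind of matching this abstract describes (comparing aspects of an observed gesture against stored gesture data to determine an intended input), the sketch below scores a captured motion trace against stored templates. The template names, trace format, distance measure, and threshold are all assumptions for illustration, not the patented method.

```python
import math

# Hypothetical stored gesture templates: name -> sequence of (x, y) samples.
TEMPLATES = {
    "swipe_right": [(0.0, 0.5), (0.25, 0.5), (0.5, 0.5), (0.75, 0.5), (1.0, 0.5)],
    "raise_arm":   [(0.5, 0.0), (0.5, 0.25), (0.5, 0.5), (0.5, 0.75), (0.5, 1.0)],
}

def gesture_distance(trace, template):
    """Mean point-to-point distance between two equal-length traces."""
    return sum(math.dist(p, q) for p, q in zip(trace, template)) / len(template)

def classify_gesture(trace, threshold=0.2):
    """Return the best-matching template name, or None if nothing is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in TEMPLATES.items():
        d = gesture_distance(trace, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# A slightly noisy horizontal sweep should resolve to "swipe_right".
intended = classify_gesture([(0.0, 0.52), (0.26, 0.5), (0.5, 0.49), (0.74, 0.5), (1.0, 0.51)])
```

A real system would also resample traces to a common length and handle timing, which this sketch omits.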
Patent number: 9075434
Abstract: A system for translating user motion into multiple object responses of an on-screen object, based on user interaction with an application executing on a computing device, is provided. User motion data is received from a capture device for one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
Type: Grant
Filed: August 20, 2010
Date of Patent: July 7, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
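To make the idea of "multiple object responses" from one motion input concrete, here is a hypothetical sketch in which a single motion sample simultaneously drives an object's position, tilt, and scale. The response channels and formulas are invented for illustration and are not taken from the patent.

```python
# Hypothetical translation of one user-motion sample into several simultaneous
# on-screen object responses. Inputs are normalized hand coordinates and speed.

def translate_motion(hand_x, hand_y, hand_speed):
    """Map a single motion sample to multiple object responses at once."""
    return {
        "move":  {"x": hand_x, "y": hand_y},          # object follows the hand
        "tilt":  {"degrees": (hand_x - 0.5) * 60.0},  # lean with horizontal offset
        "scale": {"factor": 1.0 + min(hand_speed, 1.0) * 0.5},  # grow with speed
    }

# One sample fans out into three responses rendered in the same frame.
responses = translate_motion(0.75, 0.4, 0.6)
```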
Patent number: 8957858
Abstract: Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control, based on the movement of the tracked object and using the 3D spatial model, motion within the displayed scene. The system is further configured to receive a secondary input from a secondary computing system and to control the displayed scene in response to the secondary input so as to visually represent interaction between the motion input and the secondary input.
Type: Grant
Filed: May 27, 2011
Date of Patent: February 17, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Dan Osborn, Christopher Willoughby, Brian Mount, Vaibhav Goel, Tim Psiaki, Shawn C. Wright, Christopher Vuchetich
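A minimal sketch of the multi-platform idea, under assumed data shapes: a per-frame 3D spatial model of the tracked object drives motion in the scene, while a secondary computing system injects events into the same scene. The class, joint names, and message format are hypothetical.

```python
# Hypothetical scene controlled by two input paths: the tracked object's
# 3D spatial model (primary) and messages from a secondary computing system.

class Scene:
    def __init__(self):
        self.avatar_pos = [0.0, 0.0, 0.0]   # driven by the 3D spatial model
        self.obstacles = []                  # driven by the secondary input

    def apply_motion(self, joint_positions):
        # Use one joint of the per-frame spatial model (here, the hand)
        # to move the avatar within the displayed scene.
        self.avatar_pos = list(joint_positions["hand"])

    def apply_secondary(self, message):
        # A second device (e.g. a phone or another console) can alter the
        # same scene, so the two inputs visibly interact.
        if message.get("action") == "spawn_obstacle":
            self.obstacles.append(tuple(message["position"]))

scene = Scene()
scene.apply_motion({"hand": (0.2, 1.1, 0.4)})
scene.apply_secondary({"action": "spawn_obstacle", "position": (1.0, 0.0, 2.0)})
```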
Patent number: 8465108
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Grant
Filed: September 5, 2012
Date of Patent: June 18, 2013
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
Publication number: 20120326976
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Application
Filed: September 5, 2012
Publication date: December 27, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
Publication number: 20120299827
Abstract: Systems and methods for multi-platform motion interactivity are provided. The system includes a motion-sensing subsystem, a display subsystem including a display, a logic subsystem, and a data-holding subsystem containing instructions executable by the logic subsystem. The system is configured to display a scene on the display; receive a dynamically-changing motion input from the motion-sensing subsystem that is generated in response to movement of a tracked object; generate, in real time, a dynamically-changing 3D spatial model of the tracked object based on the motion input; and control, based on the movement of the tracked object and using the 3D spatial model, motion within the displayed scene. The system is further configured to receive a secondary input from a secondary computing system and to control the displayed scene in response to the secondary input so as to visually represent interaction between the motion input and the secondary input.
Type: Application
Filed: May 27, 2011
Publication date: November 29, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Dan Osborn, Christopher Willoughby, Brian Mount, Vaibhav Goel, Tim Psiaki, Shawn C. Wright, Christopher Vuchetich
Patent number: 8284157
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Grant
Filed: January 15, 2010
Date of Patent: October 9, 2012
Assignee: Microsoft Corporation
Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
Publication number: 20120047468
Abstract: A system for translating user motion into multiple object responses of an on-screen object, based on user interaction with an application executing on a computing device, is provided. User motion data is received from a capture device for one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
Type: Application
Filed: August 20, 2010
Publication date: February 23, 2012
Applicant: MICROSOFT CORPORATION
Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
Publication number: 20110304774
Abstract: Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data; identify a content-based input signal in the input data; and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data.
Type: Application
Filed: June 11, 2010
Publication date: December 15, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Stephen Latta, Christopher Vuchetich, Matthew Eric Haigh, Jr., Andrew Robert Campbell, Darren Bennett, Relja Markovic, Oscar Omar Garza Santos, Kevin Geisner, Kudo Tsunoda
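The filter-then-tag flow this abstract outlines can be sketched as a chain of recognizers run over an input signal, attaching a contextual tag whenever a filter recognizes its input. The filter logic, field names, and tag values below are hypothetical.

```python
# Hypothetical filter chain: each filter inspects an input signal and returns
# a contextual tag if it recognizes the input, or None otherwise.

def loud_audio_filter(signal):
    return "cheering" if signal.get("audio_level", 0.0) > 0.8 else None

def fast_motion_filter(signal):
    return "action" if signal.get("motion_speed", 0.0) > 2.0 else None

FILTERS = [loud_audio_filter, fast_motion_filter]

def tag_input(signal):
    """Apply each filter; record the input data together with its tags."""
    tags = [tag for f in FILTERS if (tag := f(signal)) is not None]
    return {"data": signal, "tags": tags}

# A loud, fast moment of a recording picks up both contextual tags.
record = tag_input({"audio_level": 0.9, "motion_speed": 2.5})
```

Tags stored alongside the recording like this would later let highlights be found by searching for "cheering" or "action" rather than scrubbing through raw footage.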
Publication number: 20110175801
Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
Type: Application
Filed: January 15, 2010
Publication date: July 21, 2011
Applicant: MICROSOFT CORPORATION
Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
Publication number: 20110099476
Abstract: Disclosed herein are systems and methods for decorating a display environment. In one embodiment, a user may decorate a display environment by making one or more gestures, using voice commands, using a suitable interface device, and/or combinations thereof. A voice command can be detected for user selection of an artistic feature, such as a color, a texture, an object, or a visual effect for decorating a display environment. The user can also gesture to select a portion of the display environment for decoration. Next, the selected portion of the display environment can be altered based on the selected artistic feature. The user's motions can be reflected in the display environment by an avatar. In addition, a virtual canvas or three-dimensional object can be displayed in the display environment for decoration by the user.
Type: Application
Filed: October 23, 2009
Publication date: April 28, 2011
Applicant: Microsoft Corporation
Inventors: Gregory N. Snook, Relja Markovic, Stephen G. Latta, Kevin Geisner, Christopher Vuchetich, Darren Alexander Bennett, Arthur Charles Tomlin, Joel Deaguero, Matt Puls, Matt Coohill, Ryan Hastings, Kate Kolesar, Brian Scott Murphy
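The decorate-by-voice-and-gesture flow described here can be sketched in a few lines: a voice command selects an artistic feature, a gesture selects a region, and that region of the display environment is altered. The command vocabulary, canvas representation, and feature names are assumptions for illustration only.

```python
# Hypothetical voice-command vocabulary mapping phrases to artistic features.
FEATURES = {"paint red": ("color", "red"), "add sparkle": ("effect", "sparkle")}

def decorate(canvas, voice_command, gesture_region):
    """Apply the feature named by the voice command to the gestured region.

    canvas: dict mapping (row, col) cells to their current decoration.
    gesture_region: cells the user's gesture selected for decoration.
    """
    feature = FEATURES.get(voice_command)
    if feature is None:
        return canvas  # unrecognized command: leave the canvas unchanged
    for cell in gesture_region:
        canvas[cell] = feature
    return canvas

# Saying "paint red" while sweeping across two cells colors both of them.
canvas = decorate({}, "paint red", [(0, 0), (0, 1)])
```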
Publication number: 20100281438
Abstract: Disclosed herein are systems and methods for altering a view perspective within a display environment. For example, gesture data corresponding to a plurality of inputs may be stored. These inputs may be provided to a game or application implemented by a computing device. Images of a user of the game or application may be captured. For example, a suitable capture device may capture several images of the user over a period of time. The images may be analyzed and processed to detect a user's gesture. Aspects of the user's gesture may be compared to the stored gesture data to determine the user's intended gesture input. The comparison may be part of an analysis for determining inputs corresponding to the gesture data, where one or more of the inputs are provided to the game or application and cause a view perspective within the display environment to be altered.
Type: Application
Filed: May 29, 2009
Publication date: November 4, 2010
Applicant: Microsoft Corporation
Inventors: Stephen G. Latta, Gregory N. Snook, Justin McBride, Arthur Charles Tomlin, Peter Sarrett, Kevin Geisner, Relja Markovic, Christopher Vuchetich