Patents by Inventor Darren A Bennett

Darren A Bennett has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20120326976
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Application
    Filed: September 5, 2012
    Publication date: December 27, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
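    Illustrative sketch (not from the filing): one plausible way tracked movement and voice volume could be mapped onto avatar traits as the abstract describes. The measurements, trait names (avatar_scale, jump_boost, voice_pitch_shift), and constants are assumptions made for illustration.

        # Hypothetical sketch: derive avatar traits from tracked body movement
        # and microphone volume; every name and constant here is illustrative.

        def derive_avatar_traits(arm_span_m, jump_height_m, voice_volume_db):
            """Map simple motion/audio measurements onto avatar traits."""
            traits = {
                # Wider arm span -> larger avatar, clamped to a sensible range.
                "avatar_scale": min(2.0, max(0.5, arm_span_m / 1.6)),
                # Higher jumps -> springier animation on the avatar.
                "jump_boost": 1.0 + min(1.0, jump_height_m / 0.5),
                # Louder voice -> deeper pitch when the performance is played back.
                "voice_pitch_shift": -0.2 if voice_volume_db > -10 else 0.1,
            }
            return traits

        if __name__ == "__main__":
            print(derive_avatar_traits(arm_span_m=1.8, jump_height_m=0.3, voice_volume_db=-6))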
  • Patent number: 8334842
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: December 18, 2012
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Jonathan T Steed, Darren A Bennett, Amos D Vance
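    Illustrative sketch (not from the filing): combining the factors named in the abstract (location, stance, movement, voice) into a single engagement decision. The weights, normalizations, and threshold are assumptions.

        # Hypothetical sketch: score a person's intent to engage with the
        # application from location, stance, movement, and voice volume.

        def engagement_score(dist_from_center_m, facing_camera, speed_m_s, voice_volume):
            score = 0.0
            score += max(0.0, 1.0 - dist_from_center_m / 2.0)   # closer to the central area -> higher
            score += 1.0 if facing_camera else 0.0               # stance: facing the depth camera
            score += min(1.0, speed_m_s / 0.5)                   # movement in place or toward the center
            score += min(1.0, voice_volume)                      # voice loudness, normalized 0..1
            return score

        def wants_to_engage(score, threshold=2.5):
            return score >= threshold

        if __name__ == "__main__":
            s = engagement_score(dist_from_center_m=0.4, facing_camera=True,
                                 speed_m_s=0.2, voice_volume=0.6)
            print(s, wants_to_engage(s))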
  • Patent number: 8284157
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: October 9, 2012
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
  • Publication number: 20120165096
    Abstract: A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
    Type: Application
    Filed: March 2, 2012
    Publication date: June 28, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
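    Illustrative sketch (not from the filing): sensing one physical property of everyone in view (including people not actively playing), detecting a predetermined group action (here, a collective lean), and changing a runtime condition in response. The property, threshold, and state keys are assumptions.

        # Hypothetical sketch: a group-wide property drives a runtime change.

        def group_lean(x_positions_m):
            """Average lateral offset of all sensed people from the sensor's center line."""
            return sum(x_positions_m) / len(x_positions_m) if x_positions_m else 0.0

        def update_runtime(game_state, x_positions_m, lean_threshold_m=0.3):
            lean = group_lean(x_positions_m)
            if abs(lean) > lean_threshold_m:          # predetermined action detected
                game_state["platform_tilt"] = lean    # e.g. move/tilt an in-game object
                game_state["score"] += 10             # or change the score
            return game_state

        if __name__ == "__main__":
            state = {"platform_tilt": 0.0, "score": 0}
            print(update_runtime(state, [0.5, 0.4, 0.6]))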
  • Publication number: 20120155705
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured aiming vector control, and a virtual weapon is aimed in proportion to the gestured aiming vector control.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Greg Snook, Christopher H. Willoughby, Peter Sarrett, Daniel Lee Osborn
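    Illustrative sketch (not from the filing): translating the hand joint's position relative to another joint into an aiming vector and aiming a virtual weapon in proportion. The shoulder as the reference joint, the coordinate convention, and the sensitivity factor are assumptions.

        # Hypothetical sketch: shoulder-to-hand vector -> unit aiming vector -> weapon yaw/pitch.
        import math

        def aiming_vector(shoulder_xyz, hand_xyz):
            dx, dy, dz = (h - s for h, s in zip(hand_xyz, shoulder_xyz))
            length = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0
            return (dx / length, dy / length, dz / length)

        def aim_weapon(vector, sensitivity=1.0):
            """Convert the unit aiming vector into yaw/pitch angles (degrees)."""
            x, y, z = vector
            # In this sketch, depth (z) grows away from the camera, so "forward" is -z.
            yaw = math.degrees(math.atan2(x, -z)) * sensitivity
            pitch = math.degrees(math.asin(max(-1.0, min(1.0, y)))) * sensitivity
            return yaw, pitch

        if __name__ == "__main__":
            v = aiming_vector(shoulder_xyz=(0.0, 1.4, 2.0), hand_xyz=(0.3, 1.5, 1.4))
            print(aim_weapon(v))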
  • Publication number: 20120157203
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: Microsoft Corporation
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
  • Publication number: 20120157198
    Abstract: Depth-image analysis is performed with a device that analyzes a human target within an observed scene by capturing depth-images that include depth information from the observed scene. The human target is modeled with a virtual skeleton including a plurality of joints. The virtual skeleton is used as an input for controlling a driving simulation.
    Type: Application
    Filed: December 21, 2010
    Publication date: June 21, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic, Kudo Tsunoda, Rhett Mathis, Matthew Monson, David Gierok, William Paul Giese, Darrin Brown, Cam McRae, David Seymour, William Axel Olsen, Matthew Searcy
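    Illustrative sketch (not from the filing): one plausible way skeleton joints could be used as input for the driving simulation, treating the line between the two hand joints as an imaginary steering wheel. The joint choice and angle limits are assumptions.

        # Hypothetical sketch: hand-joint positions -> steering angle -> normalized steering input.
        import math

        def steering_angle(left_hand_xy, right_hand_xy):
            """Angle (degrees) of the line between the hands; 0 = hands level."""
            dx = right_hand_xy[0] - left_hand_xy[0]
            dy = right_hand_xy[1] - left_hand_xy[1]
            return math.degrees(math.atan2(dy, dx))

        def steering_input(angle_deg, max_angle=45.0):
            """Clamp and normalize to the -1..1 range a driving game expects."""
            return max(-1.0, min(1.0, angle_deg / max_angle))

        if __name__ == "__main__":
            angle = steering_angle(left_hand_xy=(-0.3, 1.1), right_hand_xy=(0.3, 1.3))
            print(angle, steering_input(angle))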
  • Publication number: 20120047468
    Abstract: A system for translating user motion into multiple object responses of an on-screen object based on user interaction with an application executing on a computing device is provided. User motion data is received from a capture device from one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
    Type: Application
    Filed: August 20, 2010
    Publication date: February 23, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
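    Illustrative sketch (not from the filing): translating one piece of user motion data into several simultaneous responses of the same on-screen object. The response names and scaling factors are assumptions.

        # Hypothetical sketch: a single "push" motion produces several object responses at once.

        def object_responses(push_speed_m_s, push_direction_x):
            """Return several responses to apply to the same object in one frame."""
            return [
                ("translate", push_direction_x * push_speed_m_s * 0.5),  # move it
                ("spin", push_speed_m_s * 90.0),                         # add spin (deg/s)
                ("squash", min(0.3, push_speed_m_s * 0.1)),              # deform on impact
            ]

        def apply(responses, obj):
            for name, amount in responses:
                obj[name] = obj.get(name, 0.0) + amount
            return obj

        if __name__ == "__main__":
            ball = {}
            print(apply(object_responses(push_speed_m_s=2.0, push_direction_x=1.0), ball))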
  • Publication number: 20110304632
    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
    Type: Application
    Filed: June 11, 2010
    Publication date: December 15, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
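    Illustrative sketch (not from the filing): mapping a box of physical space in front of the person onto the display's pixel space so the avatar's hand can reach on-screen user interface controls. The box dimensions, shoulder reference point, and screen size are assumptions.

        # Hypothetical sketch: normalize the hand position inside a reach box and scale to pixels.

        def map_hand_to_screen(hand_xy_m, shoulder_xy_m, screen_w=1920, screen_h=1080,
                               reach_w_m=0.8, reach_h_m=0.6):
            # Hand position relative to the shoulder, normalized to 0..1 inside the box.
            nx = (hand_xy_m[0] - shoulder_xy_m[0]) / reach_w_m + 0.5
            ny = 0.5 - (hand_xy_m[1] - shoulder_xy_m[1]) / reach_h_m
            nx = max(0.0, min(1.0, nx))
            ny = max(0.0, min(1.0, ny))
            return int(nx * (screen_w - 1)), int(ny * (screen_h - 1))

        if __name__ == "__main__":
            print(map_hand_to_screen(hand_xy_m=(0.25, 1.55), shoulder_xy_m=(0.0, 1.40)))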
  • Publication number: 20110304774
    Abstract: Embodiments are disclosed that relate to the automatic tagging of recorded content. For example, one disclosed embodiment provides a computing device comprising a processor and memory having instructions executable by the processor to receive input data comprising one or more of depth data, video data, and directional audio data, identify a content-based input signal in the input data, and apply one or more filters to the input signal to determine whether the input signal comprises a recognized input. Further, if the input signal comprises a recognized input, then the instructions are executable to tag the input data with a contextual tag associated with the recognized input and record the contextual tag with the input data.
    Type: Application
    Filed: June 11, 2010
    Publication date: December 15, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Stephen Latta, Christopher Vuchetich, Matthew Eric Haigh, Jr., Andrew Robert Campbell, Darren Bennett, Relja Markovic, Oscar Omar Garza Santos, Kevin Geisner, Kudo Tsunoda
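    Illustrative sketch (not from the filing): a filter chain that inspects each incoming input signal and records a contextual tag when a filter recognizes the input. The filter predicates, tag names, and record layout are assumptions.

        # Hypothetical sketch: apply (filter, tag) pairs to input signals and record matching tags.

        FILTERS = [
            (lambda s: s.get("kind") == "audio" and s.get("volume", 0) > 0.8, "cheering"),
            (lambda s: s.get("kind") == "depth" and s.get("people", 0) >= 2, "multiplayer"),
        ]

        def tag_input(signal, filters=FILTERS):
            tags = [tag for recognize, tag in filters if recognize(signal)]
            return dict(signal, tags=tags)   # record the contextual tags with the input data

        if __name__ == "__main__":
            print(tag_input({"kind": "audio", "volume": 0.9, "t": 12.5}))
            print(tag_input({"kind": "depth", "people": 3, "t": 12.5}))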
  • Patent number: 8076565
    Abstract: An electronic entertainment system provides an entertainment environment wherein music in the form of a storable music sequence is processed to determine, in advance or in real time, a set of musical events of interest, and to associate elements of the entertainment system with portions of the storable music sequence, wherein an association is represented by a data structure and indicates a mapping between at least one musical event and at least one element of the entertainment environment, and wherein the at least one element can be independent of the at least one musical event.
    Type: Grant
    Filed: August 13, 2007
    Date of Patent: December 13, 2011
    Assignee: Electronic Arts, Inc.
    Inventor: Darren Bennett
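    Illustrative sketch (not from the filing): determining a set of musical events of interest from a storable music sequence and associating each event with an element of the entertainment environment via a simple data structure. The event criterion (note velocity) and element names are assumptions.

        # Hypothetical sketch: find musical events of interest and map them to environment elements.

        def find_events(note_sequence, velocity_threshold=100):
            """Return times of notes considered 'events of interest'."""
            return [t for t, velocity in note_sequence if velocity >= velocity_threshold]

        def associate(events, elements):
            """Round-robin mapping from musical events to entertainment elements."""
            return [{"time": t, "element": elements[i % len(elements)]}
                    for i, t in enumerate(events)]

        if __name__ == "__main__":
            sequence = [(0.0, 80), (0.5, 110), (1.0, 90), (1.5, 120)]   # (time, velocity)
            print(associate(find_events(sequence), ["light_flash", "crowd_cheer"]))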
  • Publication number: 20110246329
    Abstract: An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application.
    Type: Application
    Filed: April 1, 2010
    Publication date: October 6, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
  • Publication number: 20110221755
    Abstract: A camera that can sense motion of a user is connected to a computing system (e.g., video game apparatus or other type of computer). The computing system determines an action corresponding to the sensed motion of the user and determines a magnitude of the sensed motion of the user. The computing system creates and displays an animation of an object (e.g., an avatar in a video game) performing the action in a manner that is amplified in comparison to the sensed motion by a factor that is proportional to the determined magnitude. The computing system also creates and outputs audio/visual feedback in proportion to a magnitude of the sensed motion of the user.
    Type: Application
    Filed: March 12, 2010
    Publication date: September 15, 2011
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Brian James Mount, Zachary T. Middleton, Joel Deaguero, Christopher Willoughby, Dan Osborn, Darren Bennett, Gregory N. Snook
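    Illustrative sketch (not from the filing): amplifying the avatar's action by a factor proportional to the magnitude of the sensed motion, with audio/visual feedback that also scales with that magnitude. The gain constant and units are assumptions.

        # Hypothetical sketch: small real motion -> proportionally amplified avatar motion.

        def amplified_motion(sensed_displacement_m, gain=2.0):
            magnitude = abs(sensed_displacement_m)
            factor = 1.0 + gain * magnitude          # amplification proportional to magnitude
            return sensed_displacement_m * factor

        def feedback_volume(sensed_displacement_m, max_volume=1.0):
            """Audio/visual feedback also scales with the sensed magnitude."""
            return min(max_volume, abs(sensed_displacement_m))

        if __name__ == "__main__":
            print(amplified_motion(0.15), feedback_volume(0.15))   # a small jump becomes a big avatar jump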
  • Publication number: 20110223995
    Abstract: A computing system runs an application (e.g., video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system will determine that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
    Type: Application
    Filed: March 12, 2010
    Publication date: September 15, 2011
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
  • Publication number: 20110175809
    Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Jr., Brian S. Murphy, Shawn C. Wright
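    Illustrative sketch (not from the filing): two of the ideas in the abstract, namely filling in missing data for an occluded person from their last detected position and combining everyone into a single unitary input (here, the group centroid). The data layout and identifiers are assumptions.

        # Hypothetical sketch: occlusion fill-in plus a unitary group input for the application.

        last_known = {}

        def unitary_input(tracked):
            """tracked: {person_id: (x, z) or None when occluded / partially out of view}."""
            points = []
            for pid, pos in tracked.items():
                if pos is None:
                    pos = last_known.get(pid)     # reuse the last detected position
                if pos is not None:
                    last_known[pid] = pos
                    points.append(pos)
            if not points:
                return None
            return (sum(p[0] for p in points) / len(points),
                    sum(p[1] for p in points) / len(points))

        if __name__ == "__main__":
            print(unitary_input({"a": (0.2, 2.0), "b": (-0.4, 2.5)}))
            print(unitary_input({"a": None, "b": (-0.3, 2.4)}))   # person "a" is occluded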
  • Publication number: 20110175810
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Jonathan T. Steed, Darren A. Bennett, Amos D. Vance
  • Publication number: 20110175801
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
  • Patent number: 7961174
    Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
    Type: Grant
    Filed: July 30, 2010
    Date of Patent: June 14, 2011
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, David Hill, Darren A Bennett, David C Haley, Brian S Murphy, Shawn C Wright
  • Publication number: 20110035666
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, or may not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
    Type: Application
    Filed: October 27, 2010
    Publication date: February 10, 2011
    Applicant: Microsoft Corporation
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett
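    Illustrative sketch (not from the filing): computing per-joint deltas between the user's actual pose and the ideal (instructional) gesture pose so that visual feedback, such as ghosted images or highlights, can portray what to correct. The joint names and tolerance are assumptions.

        # Hypothetical sketch: per-joint deltas between the actual pose and the ideal gesture pose.
        import math

        def gesture_deltas(user_joints, ideal_joints, tolerance_m=0.15):
            """Both inputs: {joint_name: (x, y, z)}. Returns the joints needing correction."""
            corrections = {}
            for name, ideal in ideal_joints.items():
                user = user_joints.get(name)
                if user is None:
                    continue
                delta = tuple(u - i for u, i in zip(user, ideal))
                if math.sqrt(sum(d * d for d in delta)) > tolerance_m:
                    corrections[name] = delta      # drives ghosted-image / highlight feedback
            return corrections

        if __name__ == "__main__":
            user = {"right_hand": (0.50, 1.20, 1.8), "right_elbow": (0.30, 1.10, 1.9)}
            ideal = {"right_hand": (0.50, 1.45, 1.8), "right_elbow": (0.30, 1.12, 1.9)}
            print(gesture_deltas(user, ideal))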
  • Publication number: 20100281432
    Abstract: A capture device may capture a user's motion and a display device may display a model that maps to the user's motion, including gestures that are applicable for control. A user may be unfamiliar with a system that maps the user's motions, or may not know what gestures are applicable for an executing application. A user may not understand or know how to perform gestures that are applicable for the executing application. Providing visual feedback representing instructional gesture data to the user can teach the user how to properly gesture. The visual feedback may be provided in any number of suitable ways. For example, visual feedback may be provided via ghosted images, player avatars, or skeletal representations. The system can process prerecorded or live content for displaying visual feedback representing instructional gesture data. The feedback can portray the deltas between the user's actual position and the ideal gesture position.
    Type: Application
    Filed: May 1, 2009
    Publication date: November 4, 2010
    Inventors: Kevin Geisner, Relja Markovic, Stephen Gilchrist Latta, Gregory Nelson Snook, Darren Bennett