Patents by Inventor Darren A Bennett

Darren A Bennett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240402795
    Abstract: Examples of augmented reality (AR) environment control advantageously employ multi-factor intention determination and include: performing a multi-factor intention determination for summoning a control object (e.g., a menu, a keyboard, or an input panel) using a set of indications in an AR environment, the set of indications comprising a plurality of indications (e.g., two or more of a palm-facing gesture, an eye gaze, a head gaze, and a finger position simultaneously); and based on at least the set of indications indicating a summoning request by a user, displaying the control object in a position proximate to the user in the AR environment (e.g., docked to a hand of the user). Some examples continue displaying the control object while at least one indication remains, and continue displaying the control object during a timer period if one of the indications is lost.
    Type: Application
    Filed: July 12, 2024
    Publication date: December 5, 2024
    Inventors: Andrew Jackson KLEIN, Cory Ryan BRAMALL, Kyle MOURITSEN, Ethan Harris ARNOWITZ, Jeremy Bruce KERSEY, Victor JIA, Justin Thomas SAVINO, Stephen Michael LUCAS, Darren A. BENNETT
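The abstract above describes checking several intention signals before a control object is summoned and displayed, plus a grace timer when a signal drops. The following minimal Python sketch illustrates that flow; the indication names, the two-signal threshold, and the timer length are assumptions for illustration, not the claimed implementation.

```python
import time
from dataclasses import dataclass
from typing import Optional

@dataclass
class Indications:
    """Illustrative intention signals; names are assumptions, not claim language."""
    palm_facing: bool = False
    eye_gaze_on_hand: bool = False
    head_gaze_on_hand: bool = False
    finger_raised: bool = False

    def active_count(self) -> int:
        return sum([self.palm_facing, self.eye_gaze_on_hand,
                    self.head_gaze_on_hand, self.finger_raised])

@dataclass
class ControlObjectSummoner:
    """Shows a control object (e.g., a menu) when several indications co-occur,
    and keeps it visible for a grace period once the indications are lost."""
    required_indications: int = 2      # assumed threshold
    grace_period_s: float = 1.5        # assumed timer length
    visible: bool = False
    _grace_deadline: Optional[float] = None

    def update(self, ind: Indications) -> bool:
        now = time.monotonic()
        count = ind.active_count()
        if count >= self.required_indications:
            self.visible = True                      # summoning request detected
            self._grace_deadline = None
        elif self.visible and count >= 1:
            self._grace_deadline = None              # at least one indication remains
        elif self.visible:
            if self._grace_deadline is None:
                self._grace_deadline = now + self.grace_period_s
            elif now > self._grace_deadline:
                self.visible = False                 # timer expired, dismiss
                self._grace_deadline = None
        return self.visible
```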
  • Patent number: 12067159
    Abstract: Examples of augmented reality (AR) environment control advantageously employ multi-factor intention determination and include: performing a multi-factor intention determination for summoning a control object (e.g., a menu, a keyboard, or an input panel) using a set of indications in an AR environment, the set of indications comprising a plurality of indications (e.g., two or more of a palm-facing gesture, an eye gaze, a head gaze, and a finger position simultaneously); and based on at least the set of indications indicating a summoning request by a user, displaying the control object in a position proximate to the user in the AR environment (e.g., docked to a hand of the user). Some examples continue displaying the control object while at least one indication remains, and continue displaying the control object during a timer period if one of the indications is lost.
    Type: Grant
    Filed: December 27, 2021
    Date of Patent: August 20, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Jackson Klein, Cory Ryan Bramall, Kyle Mouritsen, Ethan Harris Arnowitz, Jeremy Bruce Kersey, Victor Jia, Justin Thomas Savino, Stephen Michael Lucas, Darren A. Bennett
  • Publication number: 20240168542
    Abstract: Examples of augmented reality (AR) environment control advantageously employ multi-factor intention determination and include: performing a multi-factor intention determination for summoning a control object (e.g., a menu, a keyboard, or an input panel) using a set of indications in an AR environment, the set of indications comprising a plurality of indications (e.g., two or more of a palm-facing gesture, an eye gaze, a head gaze, and a finger position simultaneously); and based on at least the set of indications indicating a summoning request by a user, displaying the control object in a position proximate to the user in the AR environment (e.g., docked to a hand of the user). Some examples continue displaying the control object while at least one indication remains, and continue displaying the control object during a timer period if one of the indications is lost.
    Type: Application
    Filed: January 22, 2024
    Publication date: May 23, 2024
    Inventors: Andrew Jackson KLEIN, Cory Ryan BRAMALL, Kyle MOURITSEN, Ethan Harris ARNOWITZ, Jeremy Bruce KERSEY, Victor JIA, Justin Thomas SAVINO, Stephen Michael LUCAS, Darren A. BENNETT
  • Patent number: 11914759
    Abstract: Examples of augmented reality (AR) environment control advantageously employ multi-factor intention determination and include: performing a multi-factor intention determination for summoning a control object (e.g., a menu, a keyboard, or an input panel) using a set of indications in an AR environment, the set of indications comprising a plurality of indications (e.g., two or more of a palm-facing gesture, an eye gaze, a head gaze, and a finger position simultaneously); and based on at least the set of indications indicating a summoning request by a user, displaying the control object in a position proximate to the user in the AR environment (e.g., docked to a hand of the user). Some examples continue displaying the control object while at least one indication remains, and continue displaying the control object during a timer period if one of the indications is lost.
    Type: Grant
    Filed: January 19, 2022
    Date of Patent: February 27, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andrew Jackson Klein, Cory Ryan Bramall, Kyle Mouritsen, Ethan Harris Arnowitz, Jeremy Bruce Kersey, Victor Jia, Justin Thomas Savino, Stephen Michael Lucas, Darren A. Bennett
  • Publication number: 20230137920
    Abstract: Examples of augmented reality (AR) environment control advantageously employ multi-factor intention determination and include: performing a multi-factor intention determination for summoning a control object (e.g., a menu, a keyboard, or an input panel) using a set of indications in an AR environment, the set of indications comprising a plurality of indications (e.g., two or more of a palm-facing gesture, an eye gaze, a head gaze, and a finger position simultaneously); and based on at least the set of indications indicating a summoning request by a user, displaying the control object in a position proximate to the user in the AR environment (e.g., docked to a hand of the user). Some examples continue displaying the control object while at least one indication remains, and continue displaying the control object during a timer period if one of the indications is lost.
    Type: Application
    Filed: January 19, 2022
    Publication date: May 4, 2023
    Inventors: Andrew Jackson KLEIN, Cory Ryan BRAMALL, Kyle MOURITSEN, Ethan Harris ARNOWITZ, Jeremy Bruce KERSEY, Victor JIA, Justin Thomas SAVINO, Stephen Michael LUCAS, Darren A. BENNETT
  • Patent number: 9498720
    Abstract: A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space.
    Type: Grant
    Filed: April 12, 2012
    Date of Patent: November 22, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A Geisner, Stephen G Latta, Ben J Sugden, Benjamin I Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L Hastings, Jason Scott, Darren A Bennett, John Clavin, Daniel McCulloch
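The abstract above describes a server that stores the rules, boundaries, and game state, and notifies nearby players using push technology. A minimal sketch of such a server-side record and notification step follows; the field names, distance radius, and push callback are illustrative assumptions rather than the patented design.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

Point3D = Tuple[float, float, float]

@dataclass
class GameState:
    """Server-side record sketched from the abstract: rules, boundaries,
    players, and scores. Field names are illustrative assumptions."""
    rules: str = "standard"
    boundary_points: List[Point3D] = field(default_factory=list)
    scores: Dict[str, int] = field(default_factory=dict)

    def add_boundary_point(self, gaze_hit: Point3D) -> None:
        # e.g., a point where the HMDD wearer's gaze ray intersects the floor
        self.boundary_points.append(gaze_hit)

    def register_player(self, player_id: str) -> None:
        self.scores.setdefault(player_id, 0)

def notify_nearby_players(game_location: Point3D,
                          player_locations: Dict[str, Point3D],
                          push: Callable[[str, str], None],
                          radius_m: float = 50.0) -> None:
    """Push-style notification to users near the game location (radius assumed)."""
    gx, gy, gz = game_location
    for player_id, (px, py, pz) in player_locations.items():
        if ((px - gx) ** 2 + (py - gy) ** 2 + (pz - gz) ** 2) ** 0.5 <= radius_m:
            push(player_id, "A game is available at your location.")
```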
  • Patent number: 9342610
    Abstract: A see-through head-mounted display (HMD) device provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall or billboard. Initially, the object is identified by a user, e.g., based on the user gazing at the object for a period of time, making a gesture such as pointing at the object and/or providing a verbal command. The location and visual characteristics of the object are determined by a front-facing camera of the HMD device, and stored in a record. The user selects from among candidate data streams, such as a web page, game feed, video or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics to the record to identify the data stream, and displays corresponding augmented reality images registered to the object.
    Type: Grant
    Filed: August 25, 2011
    Date of Patent: May 17, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: James Chia-Ming Liu, Anton Oguzhan Alford Andrews, Craig R. Maitlen, Christopher M. Novak, Darren A. Bennett, Sheridan Martin Small
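The abstract above describes storing an object's location and visual characteristics in a record and later matching what the camera sees against that record to identify the linked data stream. The sketch below illustrates that lookup; the feature descriptor, distance metric, and threshold are placeholders, not the patented method.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ObjectRecord:
    """Stored association sketched from the abstract: where the object is,
    what it looks like, and which data stream the user linked to it."""
    location: tuple            # (x, y, z) of the real-world object, assumed
    features: List[float]      # visual descriptor from the front-facing camera
    stream_url: str            # user-selected data stream (web page, video, ...)

def match_record(records: List[ObjectRecord],
                 observed_features: List[float],
                 max_distance: float = 0.25) -> Optional[ObjectRecord]:
    """Return the stored record whose descriptor best matches what the camera
    currently sees; the descriptor, metric, and threshold are placeholders."""
    best, best_dist = None, float("inf")
    for record in records:
        dist = sum((a - b) ** 2
                   for a, b in zip(record.features, observed_features)) ** 0.5
        if dist < best_dist:
            best, best_dist = record, dist
    return best if best is not None and best_dist <= max_distance else None

# Usage: if a record matches, the HMD would render images from
# match_record(records, features).stream_url registered to the object.
```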
  • Patent number: 9195305
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and determines a probabilistic measure of the person's intent to engage or disengage with the application based on location, stance and movement. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another.
    Type: Grant
    Filed: November 8, 2012
    Date of Patent: November 24, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Jonathan T Steed, Darren A Bennett, Amos D Vance
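The abstract above describes deriving a probabilistic measure of a person's intent to engage from location, stance, and movement. The following sketch shows one way such signals could be combined into a probability; the features and weights are hypothetical, not the patented evaluation.

```python
import math
from dataclasses import dataclass

@dataclass
class PersonObservation:
    """Per-frame signals a depth camera system might derive; names are assumptions."""
    distance_from_center_m: float   # how far from the central play area
    facing_camera: bool             # stance: body/face oriented toward the camera
    speed_m_s: float                # overall translational movement
    gesturing: bool                 # in-place movement such as waving arms

def engagement_probability(obs: PersonObservation) -> float:
    """Illustrative probabilistic measure of intent to engage, in [0, 1]."""
    score = 0.0
    score += 1.5 if obs.facing_camera else -1.0
    score += 1.0 if obs.gesturing else 0.0
    score -= 0.8 * obs.distance_from_center_m   # farther from center: less engaged
    score -= 0.5 * obs.speed_m_s                # walking through the view: less engaged
    return 1.0 / (1.0 + math.exp(-score))       # squash to a probability
```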
  • Patent number: 9075434
    Abstract: A system for translating user motion into multiple object responses of an on-screen object based on user interaction with an application executing on a computing device is provided. User motion data is received from a capture device from one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
    Type: Grant
    Filed: August 20, 2010
    Date of Patent: July 7, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
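The abstract above describes translating one stream of user motion data into multiple simultaneous responses of a non-avatar on-screen object. The sketch below illustrates the idea; the specific responses and gains are assumptions made for illustration.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class MotionSample:
    """Simplified user motion from a capture device; field names are assumptions."""
    hand_dx: float
    hand_dy: float
    lean_angle: float

@dataclass
class OnScreenObject:
    """An application object that is not the user's on-screen representation."""
    x: float = 0.0
    y: float = 0.0
    rotation_deg: float = 0.0
    scale: float = 1.0

def translate_motion(sample: MotionSample, obj: OnScreenObject) -> List[str]:
    """Map one motion sample to several simultaneous object responses;
    the particular responses and gains are illustrative."""
    responses = []
    obj.x += 2.0 * sample.hand_dx
    obj.y += 2.0 * sample.hand_dy
    responses.append("moved")
    obj.rotation_deg += 5.0 * sample.lean_angle
    responses.append("rotated")
    obj.scale *= 1.0 + 0.1 * sample.hand_dy
    responses.append("scaled")
    return responses   # all responses are rendered in the same display frame
```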
  • Patent number: 8933884
    Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: January 13, 2015
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Jr., Brian S. Murphy, Shawn C. Wright
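The abstract above describes combining a group's movement into a unitary input, filling in data for occluded people, and applying wait times before activating or deactivating a person. A compact sketch of that combination follows; the averaging and wait-time policy are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple

@dataclass
class TrackedPerson:
    """Per-person tracking state; field names and the wait-time policy are assumptions."""
    position: Tuple[float, float]   # last detected position
    last_seen_s: float
    first_seen_s: float
    active: bool = False

def unitary_group_input(people: Dict[str, TrackedPerson],
                        now_s: float,
                        activate_wait_s: float = 1.0,
                        deactivate_wait_s: float = 2.0) -> Optional[Tuple[float, float]]:
    """Combine everyone's position into one steering input (e.g., for a balancing game).
    Occluded people keep their last detected position so the output stays continuous."""
    xs, ys, count = 0.0, 0.0, 0
    for p in people.values():
        if not p.active and now_s - p.first_seen_s >= activate_wait_s:
            p.active = True                       # new person activated after a wait
        if p.active and now_s - p.last_seen_s >= deactivate_wait_s:
            p.active = False                      # deactivated after leaving the view
        if p.active:
            xs += p.position[0]                   # last detected position fills in
            ys += p.position[1]                   # for anyone currently occluded
            count += 1
    return (xs / count, ys / count) if count else None
```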
  • Patent number: 8465108
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Grant
    Filed: September 5, 2012
    Date of Patent: June 18, 2013
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
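The abstract above describes defining avatar traits from a person's movements and voice, triggering pre-scripted audio-visual events, and playing a performance back with automatic modifications. The sketch below illustrates those steps; the trait names, thresholds, and pitch-shift modification are assumptions.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Avatar:
    """Illustrative avatar whose traits are defined from the user's inputs."""
    traits: Dict[str, float] = field(default_factory=dict)
    events: List[str] = field(default_factory=list)

def apply_performance(avatar: Avatar,
                      arm_span_m: float,
                      jump_height_m: float,
                      voice_volume_db: float) -> None:
    """Map detected movements and voice volume to avatar traits, and fire a
    pre-scripted audio-visual event past a threshold; values are assumptions."""
    avatar.traits["reach"] = arm_span_m
    avatar.traits["energy"] = jump_height_m * 10.0
    avatar.traits["voice_pitch"] = 1.0
    if voice_volume_db > 70.0:
        avatar.events.append("crowd_cheer")       # pre-scripted event in the virtual space

def modified_playback(avatar: Avatar, pitch_shift: float = 1.2) -> Dict[str, float]:
    """Playback with an automatic modification, here an assumed voice pitch shift."""
    traits = dict(avatar.traits)
    traits["voice_pitch"] = traits.get("voice_pitch", 1.0) * pitch_shift
    return traits
```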
  • Publication number: 20130084970
    Abstract: A game can be created, shared and played using a personal audio/visual apparatus such as a head-mounted display device (HMDD). Rules of the game, and a configuration of the game space, can be standard or custom. Boundary points of the game can be defined by a gaze direction of the HMDD, by the user's location, by a model of a physical game space such as an instrumented court or by a template. Players can be identified and notified of the availability of a game using a server push technology. For example, a user in a particular location may be notified of the availability of a game at that location. A server manages the game, including storing the rules, boundaries and a game state. The game state can identify players and their scores. Real world objects can be imaged and provided as virtual objects in the game space.
    Type: Application
    Filed: April 12, 2012
    Publication date: April 4, 2013
    Inventors: Kevin A. Geisner, Stephen G. Latta, Ben J. Sugden, Benjamin I. Vaught, Alex Aben-Athar Kipman, Kathryn Stone Perez, Ryan L. Hastings, Jason Scott, Darren A. Bennett, John Clavin, Daniel McCulloch
  • Publication number: 20130050258
    Abstract: A see-through head-mounted display (HMD) device provides an augmented reality image which is associated with a real-world object, such as a picture frame, wall or billboard. Initially, the object is identified by a user, e.g., based on the user gazing at the object for a period of time, making a gesture such as pointing at the object and/or providing a verbal command. The location and visual characteristics of the object are determined by a front-facing camera of the HMD device, and stored in a record. The user selects from among candidate data streams, such as a web page, game feed, video or stock ticker. Subsequently, when the user is in the location of the object and looks at the object, the HMD device matches the visual characteristics to the record to identify the data stream, and displays corresponding augmented reality images registered to the object.
    Type: Application
    Filed: August 25, 2011
    Publication date: February 28, 2013
    Inventors: James Chia-Ming Liu, Anton Oguzhan Alford Andrews, Craig R. Maitlen, Christopher M. Novak, Darren A. Bennett, Sheridan Martin Small
  • Publication number: 20120326976
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Application
    Filed: September 5, 2012
    Publication date: December 27, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
  • Patent number: 8334842
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: December 18, 2012
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Jonathan T Steed, Darren A Bennett, Amos D Vance
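This grant adds voice data (volume and speech-recognized words) to the engagement evaluation sketched under patent 9195305 above. A short illustrative extension follows; the keyword set and score adjustments are assumptions, not the claimed method.

```python
from typing import List

def engagement_with_voice(base_probability: float,
                          voice_volume_db: float,
                          recognized_words: List[str]) -> float:
    """Fold voice data into an engagement estimate (e.g., the output of the
    engagement_probability sketch above); weights and keywords are assumptions."""
    score = base_probability
    if voice_volume_db > 55.0:
        score = min(1.0, score + 0.1)     # louder speech nudges intent upward
    if any(w.lower() in {"play", "start"} for w in recognized_words):
        score = min(1.0, score + 0.2)     # explicit engagement keywords
    return score
```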
  • Patent number: 8284157
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: October 9, 2012
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G Latta, Kevin A Geisner, Christopher Vuchetich, Darren A Bennett, Brian S Murphy, Shawn C Wright
  • Publication number: 20120047468
    Abstract: A system for translating user motion into multiple object responses of an on-screen object based on user interaction with an application executing on a computing device is provided. User motion data is received from a capture device from one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
    Type: Application
    Filed: August 20, 2010
    Publication date: February 23, 2012
    Applicant: MICROSOFT CORPORATION
    Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
  • Publication number: 20110175809
    Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently-active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Jr., Brian S. Murphy, Shawn C. Wright
  • Publication number: 20110175801
    Abstract: Techniques for enhancing the use of a motion capture system are provided. A motion capture system tracks movement and audio inputs from a person in a physical space, and provides the inputs to an application, which displays a virtual space on a display. Bodily movements can be used to define traits of an avatar in the virtual space. The person can be directed to perform the movements by a coaching avatar, or visual or audio cues in the virtual space. The application can respond to the detected movements and voice commands or voice volume of the person to define avatar traits and initiate pre-scripted audio-visual events in the virtual space to provide an entertaining experience. A performance in the virtual space can be captured and played back with automatic modifications, such as alterations to the avatar's voice or appearance, or modifications made by another person.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Christopher Vuchetich, Darren A. Bennett, Brian S. Murphy, Shawn C. Wright
  • Publication number: 20110175810
    Abstract: Techniques for facilitating interaction with an application in a motion capture system allow a person to easily begin interacting without manual setup. A depth camera system tracks a person in physical space and evaluates the person's intent to engage with the application. Factors such as location, stance, movement and voice data can be evaluated. Absolute location in a field of view of the depth camera, and location relative to another person, can be evaluated. Stance can include facing a depth camera, indicating a willingness to interact. Movements can include moving toward or away from a central area in the physical space, walking through the field of view, and movements which occur while standing generally in one location, such as moving one's arms around, gesturing, or shifting weight from one foot to another. Voice data can include volume as well as words which are detected by speech recognition.
    Type: Application
    Filed: January 15, 2010
    Publication date: July 21, 2011
    Applicant: MICROSOFT CORPORATION
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, Jonathan T. Steed, Darren A. Bennett, Amos D. Vance