Patents by Inventor Darren A Bennett

Darren A Bennett has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9146398
    Abstract: Techniques are provided for displaying electronic communications using a head mounted display (HMD). Each electronic communication may be displayed as a physical object that identifies its specific type or nature, allowing the user to process the electronic communications more efficiently. In some aspects, computer vision allows a user to interact with the representations of the physical objects. One embodiment includes accessing electronic communications, and determining physical objects that are representative of at least a subset of the electronic communications. In this embodiment, the HMD is instructed how to display a representation of the physical objects.
    Type: Grant
    Filed: July 12, 2011
    Date of Patent: September 29, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Sheridan Martin Small, James C. Liu, Benjamin I. Vaught, Darren Bennett
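
    A minimal sketch of the object metaphor in the abstract above: a dispatch table picks a familiar 3D model for each communication type before the HMD renders it. The type names, models, and scaling rule are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass

# Hypothetical mapping from communication type to a familiar physical object.
OBJECT_FOR_TYPE = {
    "email": "envelope",
    "voicemail": "cassette_tape",
    "text_message": "note_card",
    "meeting_invite": "calendar_page",
}

@dataclass
class Communication:
    sender: str
    comm_type: str
    urgent: bool

def representations_for(communications):
    """Choose a 3D model for each communication so the HMD can render it
    as a physical object that identifies the communication's type."""
    for comm in communications:
        model = OBJECT_FOR_TYPE.get(comm.comm_type, "generic_parcel")
        scale = 1.5 if comm.urgent else 1.0   # assumed: urgent items render larger
        yield {"model": model, "scale": scale, "label": comm.sender}

# An urgent email appears as an oversized envelope labeled with its sender.
for instruction in representations_for([Communication("alice@example.com", "email", True)]):
    print(instruction)
```
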
  • Publication number: 20150254793
    Abstract: Technology is provided for transferring a right to a digital content item based on one or more physical actions detected in data captured by a see-through, augmented reality display device system. A digital content item may be represented by a three-dimensional (3D) virtual object displayed by the device system. A user can hold the virtual object in some examples, and transfer a right to the content item the object represents by handing the object to another user within a defined distance, who indicates acceptance of the right based upon one or more physical actions including taking hold of the transferred object. Other examples of physical actions performed by a body part of a user may also indicate offer and acceptance in the right transfer. Content may be transferred from display device to display device while rights data is communicated via a network with a service application executing remotely.
    Type: Application
    Filed: May 18, 2015
    Publication date: September 10, 2015
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ryan L. Hastings, Stephen G. Latta, Benjamin I. Vaught, Darren Bennett
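
    A minimal sketch of the offer-and-acceptance handshake the abstract describes, assuming a gesture recognizer that labels "hand_over" and "take_hold" actions; the distance threshold and the rights-service call are hypothetical stand-ins.

```python
HANDOFF_MAX_METERS = 1.0   # assumed value for the patent's "defined distance"

def notify_rights_service(item, giver, receiver):
    """Stand-in for the network call to the remote rights service."""
    print(f"rights for {item['id']}: {giver} -> {receiver}")

def try_transfer_right(giver, receiver, item, distance_m,
                       giver_gesture, receiver_gesture):
    """Transfer the right only when both physical actions are observed:
    the giver hands the virtual object over within range, and the
    receiver takes hold of it."""
    offered = giver_gesture == "hand_over" and distance_m <= HANDOFF_MAX_METERS
    accepted = receiver_gesture == "take_hold"
    if offered and accepted:
        item["rights_holder"] = receiver          # state shared between devices
        notify_rights_service(item, giver, receiver)
        return True
    return False

movie = {"id": "movie-42", "rights_holder": "ann"}
try_transfer_right("ann", "bob", movie, distance_m=0.6,
                   giver_gesture="hand_over", receiver_gesture="take_hold")
```
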
  • Patent number: 9098873
    Abstract: An on-screen shopping application which reacts to a human target user's motions to provide a shopping experience to the user is provided. A tracking system captures user motions and executes a shopping application allowing a user to manipulate an on-screen representation of the user. The on-screen representation has a likeness of the user or another individual, and movements of the user in the on-screen interface allow the user to interact with virtual articles that represent real-world articles. User movements which are recognized as article manipulation or transaction control gestures are translated into commands for the shopping application.
    Type: Grant
    Filed: April 1, 2010
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Kudo Tsunoda, Darren Bennett, Brian S. Murphy, Stephen G. Latta, Relja Markovic, Alex Kipman
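
    A minimal sketch of the gesture-to-command translation in the abstract above; the gesture names and shopping commands are assumptions for illustration.

```python
# Hypothetical table of recognized gestures and the commands they become.
GESTURE_COMMANDS = {
    "grab":        "pick_up_article",    # article manipulation
    "hold_up":     "try_on_article",
    "swipe_left":  "next_article",
    "push_button": "add_to_cart",        # transaction control
    "thumbs_up":   "confirm_purchase",
}

def translate(gesture, article):
    """Map a recognized user movement to a shopping-application command."""
    command = GESTURE_COMMANDS.get(gesture)
    if command is None:
        return None   # unrecognized movement just drives the on-screen likeness
    return {"command": command, "article": article}

print(translate("grab", "blue_jacket"))
```
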
  • Publication number: 20150212585
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Application
    Filed: February 26, 2015
    Publication date: July 30, 2015
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
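
    A minimal sketch of translating the hand joint's relative position into a gestured control, assuming hypothetical joint names and thresholds (the patent does not specify these).

```python
def hand_gesture(skeleton):
    """Compare the hand joint to reference joints and emit a control when
    its relative position crosses a threshold."""
    hand = skeleton["hand_right"]         # (x, y, z) in depth-camera space
    shoulder = skeleton["shoulder_right"]
    head = skeleton["head"]

    if hand[1] > head[1]:                 # hand raised above the head
        return "select"
    if hand[2] < shoulder[2] - 0.4:       # hand ~0.4 m closer to the camera
        return "push_forward"
    return None

skeleton = {"hand_right": (0.3, 0.2, 1.2),
            "shoulder_right": (0.25, 0.5, 1.8),
            "head": (0.0, 0.7, 1.85)}
print(hand_gesture(skeleton))             # "push_forward"
```
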
  • Patent number: 9075434
    Abstract: A system is provided for translating user motion into multiple object responses of an on-screen object, based on user interaction with an application executing on a computing device. User motion data is received from a capture device from one or more users. The user motion data corresponds to user interaction with an on-screen object presented in the application. The on-screen object corresponds to an object other than an on-screen representation of a user that is displayed by the computing device. The user motion data is automatically translated into multiple object responses of the on-screen object. The multiple object responses of the on-screen object are simultaneously displayed to the users.
    Type: Grant
    Filed: August 20, 2010
    Date of Patent: July 7, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oscar Omar Garza Santos, Matthew Haigh, Christopher Vuchetich, Ben Hindle, Darren A. Bennett
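
    A minimal sketch of fanning one motion sample out into several simultaneous responses of a non-avatar object; the particular responses are assumptions.

```python
def translate_motion(motion, obj):
    """Produce multiple object responses from a single sample of user motion."""
    dx, dy = motion["dx"], motion["dy"]
    return [
        {"response": "move",  "target": obj, "by": (dx, dy)},
        {"response": "tilt",  "target": obj, "deg": dx * 30},           # lean into turns
        {"response": "sound", "target": obj, "volume": abs(dx) + abs(dy)},
    ]

# All responses are rendered in the same frame, i.e. displayed simultaneously.
for response in translate_motion({"dx": 0.2, "dy": -0.1}, "sailboat"):
    print(response)
```
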
  • Patent number: 9069381
    Abstract: A computing system runs an application (e.g., a video game) that interacts with one or more actively engaged users. One or more physical properties of a group are sensed. The group may include the one or more actively engaged users and/or one or more entities not actively engaged with the application. The computing system determines that the group (or the one or more entities not actively engaged with the application) has performed a predetermined action. A runtime condition of the application is changed in response to determining that the group (or the one or more entities not actively engaged with the computer-based application) has performed the predetermined action. Examples of changing a runtime condition include moving an object, changing a score, or changing an environmental condition of a video game.
    Type: Grant
    Filed: March 2, 2012
    Date of Patent: June 30, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin Geisner, Relja Markovic, Stephen G. Latta, Mark T. Mihelich, Christopher Willoughby, Jonathan T. Steed, Darren Bennett, Shawn C. Wright, Matt Coohill
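
    A minimal sketch of a sensed group property changing a runtime condition; the thresholds and the two predetermined actions (cheering, standing up) are assumptions.

```python
def update_runtime(game_state, group_audio_level, people_standing, group_size):
    """Change runtime conditions when the group, including bystanders not
    actively engaged with the game, performs a predetermined action."""
    if group_audio_level > 0.8:                       # the room cheers...
        game_state["score_multiplier"] = 2.0          # ...double the score
    if group_size and people_standing / group_size > 0.5:
        game_state["weather"] = "storm"               # most of the room stood up

game = {"score_multiplier": 1.0, "weather": "clear"}
update_runtime(game, group_audio_level=0.9, people_standing=4, group_size=6)
print(game)   # {'score_multiplier': 2.0, 'weather': 'storm'}
```
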
  • Patent number: 9063566
    Abstract: Various embodiments are provided for a shared collaboration system and related methods for enabling an active user to interact with one or more additional users and with collaboration items. In one embodiment a head-mounted display device is operatively connected to a computing device that includes a collaboration engine program. The program receives observation information of a physical space from the head-mounted display device along with a collaboration item. The program visually augments an appearance of the physical space as seen through the head-mounted display device to include an active user collaboration item representation of the collaboration item. The program populates the active user collaboration item representation with additional user collaboration item input from an additional user.
    Type: Grant
    Filed: November 30, 2011
    Date of Patent: June 23, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel McCulloch, Stephen Latta, Darren Bennett, Ryan Hastings, Jason Scott, Relja Markovic, Kevin Geisner, Jonathan Steed
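
    A minimal sketch of a collaboration item populated with additional users' input before being overlaid on the active user's view; all names are hypothetical.

```python
class CollaborationItem:
    """Shared item the collaboration engine renders into the physical space."""

    def __init__(self, title):
        self.title = title
        self.entries = []        # contributions from all participants

    def populate(self, user, text):
        """Merge an additional user's input into the shared representation."""
        self.entries.append((user, text))

    def render_for_hmd(self):
        """Text the head-mounted display overlays for the active user."""
        return "\n".join([self.title] + [f"{u}: {t}" for u, t in self.entries])

whiteboard = CollaborationItem("Sprint plan")
whiteboard.populate("remote_user", "Add depth-camera calibration task")
print(whiteboard.render_for_hmd())
```
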
  • Patent number: 9041739
    Abstract: Embodiments for matching participants in a virtual multiplayer entertainment experience are provided. For example, one embodiment provides a method including receiving from each user of a plurality of users a request to join the virtual multiplayer entertainment experience, receiving from each user of the plurality of users information regarding characteristics of a physical space in which each user is located, and matching two or more users of the plurality of users for participation in the virtual multiplayer entertainment experience based on the characteristics of the physical space of each of the two or more users.
    Type: Grant
    Filed: January 31, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Kevin Geisner, Brian Mount, Daniel McCulloch, Cameron Brown, Jeffrey Alan Kohler, Wei Zhang, Ryan Hastings, Darren Bennett, Ian McIntyre
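
    A minimal sketch of the space-based matchmaking the abstract describes, assuming a simple weighted distance over the room characteristics each device reports; the metric and threshold are not from the patent.

```python
import itertools

def space_similarity(a, b):
    """Distance between two users' reported room characteristics (lower is better)."""
    area_diff = abs(a["floor_area_m2"] - b["floor_area_m2"])
    ceiling_diff = abs(a["ceiling_m"] - b["ceiling_m"])
    return area_diff + 2.0 * ceiling_diff     # assumed weighting

def match(requests, max_distance=5.0):
    """Greedily pair users whose physical spaces can host the same arena."""
    matched, used = [], set()
    for (u1, s1), (u2, s2) in itertools.combinations(requests.items(), 2):
        if u1 in used or u2 in used:
            continue
        if space_similarity(s1, s2) <= max_distance:
            matched.append((u1, u2))
            used.update({u1, u2})
    return matched

requests = {"ann": {"floor_area_m2": 12, "ceiling_m": 2.4},
            "bob": {"floor_area_m2": 14, "ceiling_m": 2.5},
            "cam": {"floor_area_m2": 30, "ceiling_m": 4.0}}
print(match(requests))   # [('ann', 'bob')] -- cam's large space finds no partner
```
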
  • Patent number: 9041622
    Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.
    Type: Grant
    Filed: June 12, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Arnulfo Zepeda Navratil, Jonathan T. Steed, Ryan L. Hastings, Jason Scott, Brian J. Mount, Holly A. Hirzel, Darren Bennett, Michael J. Scavezze
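
    A minimal sketch of applying controller input from the user's perspective, with a 2D rotation standing in for the full camera transform a real system would use.

```python
import math

def apply_stick(virtual_obj, stick_dx, stick_dy, user_heading_rad):
    """Rotate stick input into the wearer's view frame so 'forward' is
    always away from the user, then move the controlled virtual object."""
    dx = stick_dx * math.cos(user_heading_rad) - stick_dy * math.sin(user_heading_rad)
    dy = stick_dx * math.sin(user_heading_rad) + stick_dy * math.cos(user_heading_rad)
    virtual_obj["x"] += dx
    virtual_obj["y"] += dy

monster = {"x": 0.0, "y": 0.0}
# The wearer has turned 90 degrees, so a forward push moves the object
# along the wearer's current view axis, not the world axis.
apply_stick(monster, stick_dx=0.0, stick_dy=1.0, user_heading_rad=math.pi / 2)
print(monster)
```
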
  • Patent number: 9038127
    Abstract: Technology is provided for transferring a right to a digital content item based on one or more physical actions detected in data captured by a see-through, augmented reality display device system. A digital content item may be represented by a three-dimensional (3D) virtual object displayed by the device system. A user can hold the virtual object in some examples, and transfer a right to the content item the object represents by handing the object to another user within a defined distance, who indicates acceptance of the right based upon one or more physical actions including taking hold of the transferred object. Other examples of physical actions performed by a body part of a user may also indicate offer and acceptance in the right transfer. Content may be transferred from display device to display device while rights data is communicated via a network with a service application executing remotely.
    Type: Grant
    Filed: August 18, 2011
    Date of Patent: May 19, 2015
    Inventors: Ryan L. Hastings, Stephen G. Latta, Benjamin I. Vaught, Darren Bennett
  • Publication number: 20150130689
    Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, interacting with the executable virtual object.
    Type: Application
    Filed: January 22, 2015
    Publication date: May 14, 2015
    Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
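
    A minimal sketch of gating an executable virtual object on inferred intent; gaze dwell as the intent signal and the threshold value are assumptions.

```python
GAZE_DWELL_SECONDS = 2.0   # hypothetical intent threshold

def maybe_interact(location, gaze_target, gaze_seconds):
    """If the user's location holds a real object with an associated
    executable virtual object, require evidence of intent (here, gaze
    dwell) before executing it."""
    virtual = location.get("virtual_objects", {}).get(gaze_target)
    if virtual is not None and gaze_seconds >= GAZE_DWELL_SECONDS:
        return f"executing {virtual}"
    return None

cafe = {"virtual_objects": {"menu_board": "play_daily_specials_video"}}
print(maybe_interact(cafe, "menu_board", gaze_seconds=2.5))
```
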
  • Patent number: 8994718
    Abstract: A virtual skeleton includes a plurality of joints and provides a machine-readable representation of a human target observed with a three-dimensional depth camera. A relative position of a hand joint of the virtual skeleton is translated as a gestured control, and a three-dimensional virtual world is controlled responsive to the gestured control.
    Type: Grant
    Filed: December 21, 2010
    Date of Patent: March 31, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Darren Bennett, Kevin Geisner, Relja Markovic
  • Patent number: 8963805
    Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, interacting with the executable virtual object.
    Type: Grant
    Filed: January 27, 2012
    Date of Patent: February 24, 2015
    Assignee: Microsoft Corporation
    Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
  • Publication number: 20150035832
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Application
    Filed: October 22, 2014
    Publication date: February 5, 2015
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco
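
    A minimal sketch of modeling ambient lighting and applying it to virtual content; averaging camera pixels is an assumed estimator, not necessarily the patent's method.

```python
def estimate_ambient(camera_pixels):
    """Model ambient light as the mean RGB of the observed physical scene."""
    n = len(camera_pixels)
    return tuple(sum(p[c] for p in camera_pixels) / n for c in range(3))

def lit_color(albedo, ambient):
    """Modulate a virtual object's base color by the ambient estimate so it
    blends with the room as seen through the display."""
    return tuple(min(1.0, a * l) for a, l in zip(albedo, ambient))

pixels = [(0.8, 0.6, 0.4), (0.7, 0.5, 0.3)]     # a warm, dim room
print(lit_color((1.0, 1.0, 1.0), estimate_ambient(pixels)))
# (0.75, 0.55, 0.35): a white virtual surface takes on the room's light
```
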
  • Patent number: 8933884
    Abstract: In a motion capture system, a unitary input is provided to an application based on detected movement and/or location of a group of people. Audio information from the group can also be used as an input. The application can provide real-time feedback to the person or group via a display and audio output. The group can control the movement of an avatar in a virtual space based on the movement of each person in the group, such as in a steering or balancing game. To avoid a discontinuous or confusing output by the application, missing data can be generated for a person who is occluded or partially out of the field of view. A wait time can be set for activating a new person and deactivating a currently active person. The wait time can be adaptive based on a first detected position or a last detected position of the person.
    Type: Grant
    Filed: January 15, 2010
    Date of Patent: January 13, 2015
    Assignee: Microsoft Corporation
    Inventors: Relja Markovic, Stephen G. Latta, Kevin A. Geisner, David Hill, Darren A. Bennett, David C. Haley, Jr., Brian S. Murphy, Shawn C. Wright
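
    A minimal sketch of the adaptive wait time for activating a newly detected person: someone first seen near the edge of the field of view probably just walked in, so they wait longer before counting toward the group's unitary input. The constants are assumptions.

```python
def activation_wait(first_x, fov_min=0.0, fov_max=1.0, base_wait=1.0):
    """Return seconds to wait before activating a newly detected person,
    based on where in the field of view they first appeared."""
    edge_distance = min(first_x - fov_min, fov_max - first_x)
    near_edge = edge_distance < 0.1          # assumed edge margin
    return base_wait * (2.0 if near_edge else 1.0)

print(activation_wait(first_x=0.05))   # 2.0 s: entered from the left edge
print(activation_wait(first_x=0.50))   # 1.0 s: reappeared mid-scene after occlusion
```
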
  • Patent number: 8872853
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Grant
    Filed: December 1, 2011
    Date of Patent: October 28, 2014
    Assignee: Microsoft Corporation
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco, Jr.
  • Publication number: 20140267311
    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
    Type: Application
    Filed: May 29, 2014
    Publication date: September 18, 2014
    Applicant: Microsoft Corporation
    Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
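
    A minimal sketch of mapping the physical space in front of a person onto screen space so the avatar's hand tracks the person's hand; the interaction-box bounds are assumptions.

```python
def to_screen(hand_x_m, hand_y_m, screen_w=1920, screen_h=1080,
              box_w_m=1.0, box_h_m=0.6):
    """Map a hand position inside a box in front of the person (origin at
    the box center) onto pixel coordinates, clamped to the screen."""
    u = (hand_x_m / box_w_m + 0.5) * screen_w
    v = (0.5 - hand_y_m / box_h_m) * screen_h   # screen y grows downward
    return (max(0.0, min(screen_w, u)), max(0.0, min(screen_h, v)))

# A hand 20 cm right and 15 cm up from center maps to the upper right of
# the screen, where the avatar is drawn reaching toward a UI control.
print(to_screen(0.2, 0.15))   # (1344.0, 270.0)
```
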
  • Patent number: 8749557
    Abstract: Embodiments are disclosed that relate to interacting with a user interface via feedback provided by an avatar. One embodiment provides a method comprising receiving depth data, locating a person in the depth data, and mapping a physical space in front of the person to a screen space of a display device. The method further comprises forming an image of an avatar representing the person, outputting to a display an image of a user interface comprising an interactive user interface control, and outputting to the display device the image of the avatar such that the avatar faces the user interface control. The method further comprises detecting a motion of the person via the depth data, forming an animated representation of the avatar interacting with the user interface control based upon the motion of the person, and outputting the animated representation of the avatar interacting with the control.
    Type: Grant
    Filed: June 11, 2010
    Date of Patent: June 10, 2014
    Assignee: Microsoft Corporation
    Inventors: Jeffrey Evertt, Joel Deaguero, Darren Bennett, Dylan Vance, David Galloway, Relja Markovic, Stephen Latta, Oscar Omar Garza Santos, Kevin Geisner
  • Publication number: 20140002444
    Abstract: Technology is described for automatically determining placement of one or more interaction zones in an augmented reality environment in which one or more virtual features are added to a real environment. An interaction zone includes at least one virtual feature and is associated with a space within the augmented reality environment whose boundaries are determined based on the one or more real environment features. A plurality of activation criteria may be available for an interaction zone, and at least one may be selected based on at least one real environment feature. The technology also describes controlling activation of an interaction zone within the augmented reality environment. In some examples, at least some behavior of a virtual object is controlled by emergent behavior criteria, which define an action independently of the type of object in the real-world environment.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Darren Bennett, Brian J. Mount, Michael J. Scavezze, Daniel J. McCulloch, Anthony J. Ambrus, Jonathan T. Steed, Arthur C. Tomlin, Kevin A. Geisner
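
    A minimal sketch of placing an interaction zone on a detected real-environment feature and selecting an activation criterion that suits it; the feature types and criteria are assumptions.

```python
def place_zone(feature):
    """Bound a zone to the feature and pick a fitting activation rule."""
    zone = {"bounds": feature["bounds"], "virtual_feature": None, "activate_on": None}
    if feature["type"] == "tabletop":
        zone["virtual_feature"] = "chess_board"
        zone["activate_on"] = "hand_enters_bounds"   # a reachable surface
    elif feature["type"] == "wall":
        zone["virtual_feature"] = "scoreboard"
        zone["activate_on"] = "gaze_at_bounds"       # beyond arm's reach
    return zone

table = {"type": "tabletop", "bounds": ((0.0, 0.0, 1.0), (1.0, 0.1, 2.0))}
print(place_zone(table))
```
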
  • Publication number: 20130328927
    Abstract: A system is described for generating a virtual gaming environment based on features identified within a real-world environment, and for adapting the virtual gaming environment over time as those features change. Utilizing the technology described, a person wearing a head-mounted display device (HMD) may walk around a real-world environment and play a virtual game that is adapted to that real-world environment. For example, the HMD may identify environmental features within a real-world environment such as five grassy areas and two cars, and then spawn virtual monsters based on the location and type of the environmental features identified. The location and type of the environmental features identified may vary with the particular real-world environment in which the HMD is used, and therefore each virtual game may look different.
    Type: Application
    Filed: November 29, 2012
    Publication date: December 12, 2013
    Inventors: Brian J. Mount, Jason Scott, Ryan L. Hastings, Darren Bennett, Stephen G. Latta, Daniel J. McCulloch, Kevin A. Geisner, Jonathan T. Steed, Michael J. Scavezze
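
    A minimal sketch of spawning virtual content from the environmental features an HMD identifies; the spawn table is an illustrative assumption.

```python
# Hypothetical mapping from identified feature types to spawned monsters.
SPAWN_TABLE = {
    "grassy_area": "swamp_monster",
    "car":         "robot_guard",
    "tree":        "archer_elf",
}

def spawn_monsters(identified_features):
    """Spawn one monster per recognized feature, at the feature's location,
    so the same game adapts to whatever environment the player is in."""
    return [{"monster": SPAWN_TABLE[f["type"]], "at": f["location"]}
            for f in identified_features if f["type"] in SPAWN_TABLE]

features = [{"type": "grassy_area", "location": (2, 0, 5)},
            {"type": "car", "location": (8, 0, 3)}]
print(spawn_monsters(features))
```
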