Patents by Inventor Daniel J. McCulloch

Daniel J. McCulloch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9201578
    Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
    Type: Grant
    Filed: January 23, 2014
    Date of Patent: December 1, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
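    The selection test this abstract describes is essentially geometric: did the head-driven pointer's path cross two or more edges of the object's bounding shape while the user was gazing at it? A minimal 2D sketch of that test (hypothetical names and a standard segment-intersection check; not the patent's implementation):
    ```python
    # Illustrative sketch: count how many edges of a virtual object's 2D
    # bounding box a head-driven pointer path crosses. This is not the
    # patent's code; names and thresholds are assumptions.

    def _ccw(a, b, c):
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])

    def _segments_intersect(p1, p2, q1, q2):
        """True if segment p1-p2 crosses segment q1-q2."""
        return (_ccw(p1, q1, q2) != _ccw(p2, q1, q2) and
                _ccw(p1, p2, q1) != _ccw(p1, p2, q2))

    def edges_crossed(path, rect):
        """path: (x, y) pointer samples; rect: (xmin, ymin, xmax, ymax)."""
        xmin, ymin, xmax, ymax = rect
        edges = [((xmin, ymin), (xmax, ymin)), ((xmax, ymin), (xmax, ymax)),
                 ((xmax, ymax), (xmin, ymax)), ((xmin, ymax), (xmin, ymin))]
        crossed = set()
        for a, b in zip(path, path[1:]):
            for i, (q1, q2) in enumerate(edges):
                if _segments_intersect(a, b, q1, q2):
                    crossed.add(i)
        return len(crossed)

    def is_gaze_swipe(path, rect, user_is_gazing):
        # The abstract requires gaze on the object while the pointer
        # passes through two or more of its edges.
        return user_is_gazing and edges_crossed(path, rect) >= 2
    ```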
  • Publication number: 20150331240
    Abstract: Assisted viewing of web-based resources by an end user of a head-mounted display device (HMD) is described. An HMD may display content from web-based resources using a see-through display while tracking eye and head movement of the end user viewing the content within an augmented reality environment. Active view regions within the see-through display are identified based on tracking information including eye gaze data and head direction data. The web-based resources are analyzed to identify content and display elements. The analysis is correlated with the active view regions to identify the underlying content that is a desired point of focus of a corresponding active view region, as well as to identify the display elements corresponding to that content. A web-based resource is modified based on the correlation. The content from the web-based resource is displayed based on the modifications to assist the end user in viewing the web-based resource.
    Type: Application
    Filed: May 15, 2014
    Publication date: November 19, 2015
    Inventors: Adam G. Poulos, Cameron G. Brown, Stephen G. Latta, Brian J. Mount, Daniel J. McCulloch
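    The core correlation step pairs gaze samples with the bounding boxes of a page's display elements to find the content the user is focusing on. A hedged sketch, with an invented element record and dwell threshold:
    ```python
    # Hedged sketch: attribute gaze samples to page display elements and pick
    # the focused one. The element record and dwell threshold are assumptions.

    from collections import Counter

    def focused_element(gaze_samples, elements, min_hits=30):
        """gaze_samples: (x, y) gaze points in page coordinates.
        elements: dicts like {"id": ..., "box": (xmin, ymin, xmax, ymax)}."""
        hits = Counter()
        for x, y in gaze_samples:
            for el in elements:
                xmin, ymin, xmax, ymax = el["box"]
                if xmin <= x <= xmax and ymin <= y <= ymax:
                    hits[el["id"]] += 1
        if not hits:
            return None
        best, count = hits.most_common(1)[0]
        return best if count >= min_hits else None  # dwell threshold
    ```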
  • Patent number: 9183676
    Abstract: Technology is described for displaying a collision between objects by an augmented reality display device system. A collision between a real object and a virtual object is identified based on three-dimensional space position data of the objects. At least one effect on at least one physical property of the real object, such as a change in surface shape, is determined based on the physical properties of the real object and the physical interaction characteristics of the collision. Simulation image data simulating the effect on the real object is generated and displayed by the augmented reality display. Virtual objects under the control of different executing applications can also interact with one another in collisions.
    Type: Grant
    Filed: April 27, 2012
    Date of Patent: November 10, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Stephen G. Latta, Brian J. Mount, Kevin A. Geisner, Roger Sebastian Kevin Sylvan, Arnulfo Zepeda Navratil, Jason Scott, Jonathan T. Steed, Ben J. Sugden, Britta Silke Hummel, Kyungsuk David Lee, Mark J. Finocchio, Alex Aben-Athar Kipman, Jeffrey N. Margolis
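    One simple way to realize the collision test is to treat the tracked real and virtual objects as bounding spheres and scale a surface-deformation effect by an assumed material stiffness; the table and the linear model below are illustrative assumptions, not the patent's method:
    ```python
    # Illustrative sketch: sphere-sphere collision from tracked 3D positions,
    # with a deformation depth scaled by an assumed material stiffness.

    import math

    STIFFNESS = {"pillow": 0.2, "wood": 0.8, "steel": 0.98}  # assumed values

    def colliding(center_a, radius_a, center_b, radius_b):
        return math.dist(center_a, center_b) <= radius_a + radius_b

    def deformation_depth(impact_speed, material, k=0.01):
        # Softer materials (lower stiffness) deform more for the same impact;
        # the linear model is purely illustrative.
        return k * impact_speed * (1.0 - STIFFNESS.get(material, 0.5))
    ```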
  • Patent number: 9165381
    Abstract: A system and method are disclosed for augmenting a reading experience in a mixed reality environment. In response to predefined verbal or physical gestures, the mixed reality system is able to answer a user's questions or provide additional information relating to what the user is reading. Responses may be displayed to the user on virtual display slates in a border or around the reading material without obscuring text or interfering with the user's reading experience.
    Type: Grant
    Filed: May 31, 2012
    Date of Patent: October 20, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Ryan L. Hastings, Cameron G. Brown, Aaron Krauss, Daniel J. McCulloch, Ben J. Sugden
  • Patent number: 9153195
    Abstract: The technology provides contextual personal information by a mixed reality display device system being worn by a user. A user inputs person selection criteria, and the display system sends a request for data identifying at least one person in a location of the user who satisfies the person selection criteria to a cloud-based application with access to user profile data for multiple users. Upon receiving data identifying the at least one person, the display system outputs data identifying the person if he or she is within the field of view; if not, an identifier and a position indicator of the person in the location are output. Directional sensors on the display device may also be used for determining a position of the person. Cloud-based executing software can identify and track the positions of people based on image and non-image data from display devices in the location.
    Type: Grant
    Filed: January 30, 2012
    Date of Patent: October 6, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Darren Bennett, Relja Markovic, Stephen G. Latta, Daniel J. McCulloch, Jason Scott, Ryan L. Hastings, Alex Aben-Athar Kipman, Andrew John Fuller, Jeffrey Neil Margolis, Kathryn Stone Perez, Sheridan Martin Small
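    The flow the abstract describes (criteria in, labelled or pointed-to people out) can be sketched as below; the cloud service callable, record fields, and overlay labels are all hypothetical:
    ```python
    # Hypothetical sketch of the lookup-and-indicate flow; the service
    # callable, record fields, and overlay labels are inventions.

    def locate_people(criteria, cloud_lookup, in_field_of_view):
        """cloud_lookup(criteria) -> [{"name": ..., "position": (x, y, z)}]
        in_field_of_view(position) -> bool."""
        overlays = []
        for person in cloud_lookup(criteria):
            if in_field_of_view(person["position"]):
                overlays.append({"name": person["name"],
                                 "overlay": "in-view label"})
            else:
                overlays.append({"name": person["name"],
                                 "overlay": "direction indicator",
                                 "position": person["position"]})
        return overlays
    ```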
  • Patent number: 9122053
    Abstract: Technology is described for providing realistic occlusion between a virtual object displayed by a head-mounted, augmented reality display system and a real object visible to the user's eyes through the display. A spatial occlusion in a user field of view of the display is typically a three-dimensional occlusion determined based on a three-dimensional space mapping of real and virtual objects. An occlusion interface between a real object and a virtual object can be modeled at a level of detail determined based on criteria such as distance within the field of view, display size, or position with respect to a point of gaze. Technology is also described for providing three-dimensional audio occlusion based on an occlusion between a real object and a virtual object in the user environment.
    Type: Grant
    Filed: April 10, 2012
    Date of Patent: September 1, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kevin A. Geisner, Brian J. Mount, Stephen G. Latta, Daniel J. McCulloch, Kyungsuk David Lee, Ben J. Sugden, Jeffrey N. Margolis, Kathryn Stone Perez, Sheridan Martin Small, Mark J. Finocchio, Robert L. Crocco, Jr.
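    The level-of-detail decision can be sketched as a small scoring function over the criteria the abstract names (distance, on-screen size, offset from the point of gaze); the thresholds and tier names are invented for illustration:
    ```python
    # Sketch only: score the abstract's criteria and map the score to a
    # modeling tier. Thresholds and tier names are invented.

    def occlusion_lod(distance_m, screen_area_frac, gaze_offset_deg):
        score = 1.0 if distance_m < 2.0 else 0.4 if distance_m < 10.0 else 0.1
        score += min(screen_area_frac * 5.0, 1.0)        # larger -> more detail
        score += 1.0 if gaze_offset_deg < 10.0 else 0.2  # near gaze -> more detail
        if score > 2.2:
            return "per-pixel depth test"       # exact silhouette
        if score > 1.2:
            return "simplified boundary mesh"   # approximate contour
        return "bounding-volume cutout"         # cheapest model
    ```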
  • Publication number: 20150205494
    Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
    Type: Application
    Filed: January 23, 2014
    Publication date: July 23, 2015
    Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
  • Publication number: 20150206321
    Abstract: Methods for controlling the display of content as the content is being viewed by an end user of a head-mounted display device (HMD) are described. In some embodiments, an HMD may display the content using a virtual content reader for reading the content. The content may comprise text and/or images, such as text or images associated with an electronic book, an electronic magazine, a word processing document, a webpage, or an email. The virtual content reader may provide automated content scrolling based on a rate at which the end user reads a portion of the displayed content on the virtual content reader. In one embodiment, an HMD may combine automatic scrolling of content displayed on the virtual content reader with user controlled scrolling (e.g., via head tracking of the end user of the HMD).
    Type: Application
    Filed: January 23, 2014
    Publication date: July 23, 2015
    Inventors: Michael J. Scavezze, Adam G. Poulos, Johnathan Robert Bevis, Nicholas Gervase Fajt, Cameron G. Brown, Daniel J. McCulloch, Jeremy Lee
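    A hedged sketch of combining the two scrolling modes, with an automatic rate derived from an estimated reading speed and a head-pitch override; all constants and the sign convention are assumptions:
    ```python
    # Hedged sketch: auto-scroll speed from an estimated reading rate plus a
    # head-pitch override. Constants and sign convention are assumptions.

    def scroll_velocity(words_per_minute, words_per_line, line_height_px,
                        head_pitch_deg, manual_gain=8.0, deadzone_deg=3.0):
        # Automatic component: estimated lines read per second, in pixels.
        lines_per_sec = (words_per_minute / 60.0) / max(words_per_line, 1)
        auto = lines_per_sec * line_height_px
        # Manual component: pitch beyond a deadzone speeds up (looking down)
        # or reverses (looking up) the scroll.
        excess = abs(head_pitch_deg) - deadzone_deg
        manual = 0.0 if excess <= 0 else (
            manual_gain * excess * (1 if head_pitch_deg > 0 else -1))
        return auto + manual  # pixels per second
    ```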
  • Patent number: 9041622
    Abstract: Technology is described for controlling a virtual object displayed by a near-eye, augmented reality display with a real controller device. User input data is received from a real controller device requesting an action to be performed by the virtual object. A user perspective of the virtual object being displayed by the near-eye, augmented reality display is determined. The user input data requesting the action to be performed by the virtual object is applied based on the user perspective, and the action is displayed from the user perspective. The virtual object to be controlled by the real controller device may be identified based on user input data which may be from a natural user interface (NUI). A user selected force feedback object may also be identified, and the identification may also be based on NUI input data.
    Type: Grant
    Filed: June 12, 2012
    Date of Patent: May 26, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel J. McCulloch, Arnulfo Zepeda Navratil, Jonathan T. Steed, Ryan L. Hastings, Jason Scott, Brian J. Mount, Holly A. Hirzel, Darren Bennett, Michael J. Scavezze
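    Applying input "based on the user perspective" amounts to rotating the controller vector into the user's view frame before applying it in world space. A minimal sketch under assumed axis and sign conventions:
    ```python
    # Minimal sketch: rotate the stick vector by the user's viewing yaw so
    # "left" on the controller is left from the user's perspective. The
    # axis and sign conventions are assumptions.

    import math

    def apply_controller_input(stick_x, stick_y, user_yaw_rad, speed=1.0):
        """Map a 2D stick input to a world-space (dx, dz) displacement."""
        dx = speed * (stick_x * math.cos(user_yaw_rad)
                      - stick_y * math.sin(user_yaw_rad))
        dz = speed * (stick_x * math.sin(user_yaw_rad)
                      + stick_y * math.cos(user_yaw_rad))
        return dx, dz
    ```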
  • Patent number: 8965741
    Abstract: A system for generating and updating a 3D model of a structure as the structure is being constructed or modified is described. The structure may comprise a building or non-building structure such as a bridge, parking garage, or roller coaster. The 3D model may include virtual objects depicting physical components or other construction elements of the structure. Each construction element may be associated with physical location information that may be analyzed over time in order to detect movement of the construction element and to predict when movement of the construction element may cause a code or regulation to be violated. In some cases, a see-through HMD may be utilized by a construction worker while constructing or modifying a structure in order to verify that the placement of a construction element complies with various building codes or regulations in real-time.
    Type: Grant
    Filed: April 24, 2012
    Date of Patent: February 24, 2015
    Assignee: Microsoft Corporation
    Inventors: Daniel J. McCulloch, Ryan L. Hastings, Jason Scott, Holly A. Hirzel, Brian J. Mount
  • Publication number: 20150007114
    Abstract: Technology is described for a web-like hierarchical menu interface, which displays a menu in a web-like hierarchical menu display configuration in a near-eye display (NED). The web-like hierarchical menu display configuration links menu levels and menu items within a menu level with flexible spatial dimensions for menu elements. One or more processors executing the interface select a web-like hierarchical menu display configuration based on the available menu space and the user's head view direction, determined from a 3D mapping of NED field-of-view data and stored user head comfort rules. Activation parameters in menu item selection criteria are adjusted to be user specific based on user head motion data tracked by one or more sensors while the user wears the NED. Menu display layout may be triggered by changes in the head view direction of the user and the available menu space about the user's head.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Adam G. Poulos, Anthony J. Ambrus, Cameron G. Brown, Jason Scott, Brian J. Mount, Daniel J. McCulloch, John Bevis, Wei Zhang
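    The configuration choice can be sketched as filtering candidate layouts by the free space found near the view direction and a comfort limit on required head rotation; the layout table and limits below are illustrative, not the patent's data:
    ```python
    # Sketch under stated assumptions: filter candidate layouts by free space
    # and a head-comfort limit. All entries are illustrative.

    LAYOUTS = [  # (name, width_m, height_m, head_turn_needed_deg)
        ("wide fan", 1.2, 0.4, 30.0),
        ("vertical strand", 0.3, 1.0, 15.0),
        ("compact cluster", 0.4, 0.4, 10.0),
    ]

    def choose_layout(free_width_m, free_height_m, comfort_turn_deg):
        for name, w, h, turn in LAYOUTS:
            if w <= free_width_m and h <= free_height_m and turn <= comfort_turn_deg:
                return name
        return "compact cluster"  # fallback: smallest footprint
    ```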
  • Publication number: 20150002507
    Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors on the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Application
    Filed: June 28, 2013
    Publication date: January 1, 2015
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
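    A toy version of the carving step: sweep a box of the user's height and width along the traversed path and mark the covered voxels as navigable free space. The voxel grid and the sweep model are simplifying assumptions:
    ```python
    # Toy sketch: sweep a user-sized box along the traversed path and mark
    # covered voxels as navigable. Grid and sweep model are simplifications.

    def frange(start, stop, step):
        while start <= stop:
            yield start
            start += step

    def carve_path(carved, path, user_height_m, user_width_m, voxel_m=0.1):
        """carved: set of (i, j, k) voxel indices, mutated in place.
        path: (x, y, z) floor positions the user has walked through."""
        half_w = user_width_m / 2.0
        for x, y, z in path:
            for dx in frange(-half_w, half_w, voxel_m):
                for dz in frange(-half_w, half_w, voxel_m):
                    for dy in frange(0.0, user_height_m, voxel_m):
                        carved.add((int((x + dx) / voxel_m),
                                    int((y + dy) / voxel_m),
                                    int((z + dz) / voxel_m)))
    ```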
  • Publication number: 20140306993
    Abstract: Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Poulos, Jason Scott, Matthew Kaplan, Christopher Obeso, Cameron G. Brown, Daniel J. McCulloch, Abby Lee, Brian J. Mount, Ben J. Sugden
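    The positioning step reduces to quantizing the dropped object's coordinates to the nearest cell of the target grid space; a minimal sketch with an assumed grid representation:
    ```python
    # Minimal sketch: quantize the dropped object's position to the nearest
    # cell centre of the target grid space (grid representation assumed).

    def snap_to_grid(position, grid_origin, cell_size):
        """position, grid_origin: (x, y, z) in metres; cell_size in metres."""
        return tuple(o + round((p - o) / cell_size) * cell_size
                     for p, o in zip(position, grid_origin))

    # e.g. snap_to_grid((1.07, 0.02, 2.46), (0, 0, 0), 0.25) -> (1.0, 0.0, 2.5)
    ```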
  • Publication number: 20140306994
    Abstract: Methods for generating and displaying personalized virtual billboards within an augmented reality environment are described. The personalized virtual billboards may facilitate the sharing of personalized information between persons within an environment who have varying degrees of acquaintance (e.g., ranging from close familial relationships to strangers). In some embodiments, a head-mounted display device (HMD) may detect a mobile device associated with a particular person within an environment, acquire a personalized information set corresponding with the particular person, generate a virtual billboard based on the personalized information set, and display the virtual billboard on the HMD. The personalized information set may include information associated with the particular person such as shopping lists and classified advertisements.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Cameron G. Brown, Abby Lee, Brian J. Mount, Daniel J. McCulloch, Michael J. Scavezze, Ryan L. Hastings, John Bevis, Mike Thomas, Ron Amador-Leon
  • Publication number: 20140306891
    Abstract: Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object.
    Type: Application
    Filed: April 12, 2013
    Publication date: October 16, 2014
    Inventors: Stephen G. Latta, Adam G. Poulos, Cameron G. Brown, Daniel J. McCulloch, Matthew Kaplan, Arnulfo Zepeda Navratil, Jon Paulovich, Kudo Tsunoda
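    A sketch of the feedback decision as the abstract outlines it, keyed on force magnitude and object classification; the classes, thresholds, and feedback channels are assumed:
    ```python
    # Sketch only: choose feedback cues from the virtual-force magnitude and
    # the object classification; classes, thresholds, and cues are assumed.

    def feedback_for(force_newtons, classification):
        cues = []
        if classification == "rigid":
            cues.append("sharp click (audio)")
            if force_newtons > 5.0:
                cues.append("strong haptic pulse")
        elif classification == "deformable":
            cues.append("squash animation (visual)")
            cues.append("soft haptic ramp" if force_newtons > 2.0 else "no haptic")
        else:  # unclassified objects get conservative feedback
            cues.append("highlight outline (visual)")
        return cues
    ```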
  • Patent number: 8752963
    Abstract: The technology provides various embodiments for controlling the brightness of a see-through, near-eye, mixed reality display device based on the light intensity of what the user is gazing at. The opacity of the display can be altered so that external light is reduced if the wearer is looking at a bright object. The wearer's pupil size may be determined and used to adjust the brightness used to display images, as well as the opacity of the display. A suitable balance between opacity and the brightness used to display images may be determined that allows real and virtual objects to be seen clearly, while not causing damage or discomfort to the wearer's eyes.
    Type: Grant
    Filed: November 4, 2011
    Date of Patent: June 17, 2014
    Assignee: Microsoft Corporation
    Inventors: Daniel J. McCulloch, Ryan L. Hastings, Kevin A. Geisner, Robert L. Crocco, Alexandru O. Balan, Derek L. Knee, Michael J. Scavezze, Stephen G. Latta, Brian J. Mount
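    One way the balance could work, sketched with invented constants: opacity rises with external luminance, while pupil diameter acts as a proxy for light adaptation when choosing a rendering brightness:
    ```python
    # Hedged sketch with invented constants: opacity tracks external
    # luminance; pupil diameter stands in for light adaptation when
    # picking a rendering brightness.

    def adjust_display(scene_luminance_nits, pupil_diameter_mm,
                       max_opacity=0.9, comfort_nits=250.0):
        # More external light -> more opacity, capped so the real world
        # stays visible through the display.
        opacity = min(max_opacity, scene_luminance_nits / 2000.0)
        # Small pupil -> light-adapted eye tolerates a brighter image;
        # large pupil -> render dimmer to avoid discomfort.
        adaptation = max(0.2, min(1.0, 4.0 / pupil_diameter_mm))
        brightness_nits = comfort_nits * adaptation
        return opacity, brightness_nits
    ```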
  • Publication number: 20140160157
    Abstract: Methods for generating and displaying people-triggered holographic reminders are described. In some embodiments, a head-mounted display device (HMD) generates and displays an augmented reality environment to an end user of the HMD in which reminders associated with a particular person may be displayed if the particular person is within a field of view of the HMD or if the particular person is within a particular distance of the HMD. The particular person may be identified individually or identified as belonging to a particular group (e.g., a member of a group with a particular job title such as programmer or administrator). In some cases, a completion of a reminder may be automatically detected by applying speech recognition techniques (e.g., to identify key words, phrases, or names) to captured audio of a conversation occurring between the end user and the particular person.
    Type: Application
    Filed: December 11, 2012
    Publication date: June 12, 2014
    Inventors: Adam G. Poulos, Holly A. Hirzel, Anthony J. Ambrus, Daniel J. McCulloch, Brian J. Mount, Jonathan T. Steed
  • Publication number: 20140002444
    Abstract: Technology is described for automatically determining placement of one or more interaction zones in an augmented reality environment in which one or more virtual features are added to a real environment. An interaction zone includes at least one virtual feature and is associated with a space within the augmented reality environment with boundaries of the space determined based on the one or more real environment features. A plurality of activation criteria may be available for an interaction zone and at least one may be selected based on at least one real environment feature. The technology also describes controlling activation of an interaction zone within the augmented reality environment. In some examples, at least some behavior of a virtual object is controlled by emergent behavior criteria which defines an action independently from a type of object in the real world environment.
    Type: Application
    Filed: June 29, 2012
    Publication date: January 2, 2014
    Inventors: Darren Bennett, Brian J. Mount, Michael J. Scavezze, Daniel J. McCulloch, Anthony J. Ambrus, Jonathan T. Steed, Arthur C. Tomlin, Kevin A. Geisner
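    The placement logic can be sketched as rules mapping detected real-environment features to zones and activation criteria; the feature names and rules are assumptions, not the patent's tables:
    ```python
    # Illustrative rules mapping detected real-environment features to zone
    # placements and activation criteria; all entries are invented examples.

    RULES = [  # (required feature, zone placed, activation criterion)
        ("table_surface", "virtual board-game zone", "user sits within 1 m"),
        ("open_floor",    "full-body game zone",     "user steps inside bounds"),
        ("wall",          "virtual display zone",    "user gazes for 2 s"),
    ]

    def place_zones(detected_features):
        return [(zone, criterion)
                for feature, zone, criterion in RULES
                if feature in detected_features]
    ```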
  • Publication number: 20130342572
    Abstract: A system and method are disclosed for controlling content displayed to a user in a virtual environment. The virtual environment may include virtual controls with which a user may interact using predefined gestures. Interacting with a virtual control may adjust an aspect of the displayed content, including, for example, fast forwarding, rewinding, pausing, stopping, or recording the content; changing its volume, brightness, or contrast; or changing the content from a first still image to a second still image.
    Type: Application
    Filed: June 26, 2012
    Publication date: December 26, 2013
    Inventors: Adam G. Poulos, Stephen G. Latta, Daniel J. McCulloch, Jeffrey Cole
  • Publication number: 20130335405
    Abstract: A system and method are disclosed for building and experiencing three-dimensional virtual objects from within a virtual environment in which they will be viewed upon completion. A virtual object may be created, edited and animated using a natural user interface while the object is displayed to the user in a three-dimensional virtual environment.
    Type: Application
    Filed: June 18, 2012
    Publication date: December 19, 2013
    Inventors: Michael J. Scavezze, Jonathan T. Steed, Ryan L. Hastings, Stephen G. Latta, Daniel J. McCulloch