Patents by Inventor Daniel McCulloch

Daniel McCulloch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11550399
    Abstract: Techniques for sharing across environments are described. Generally, different types of input may be employed to share content, such as using a pen, a stylus, a finger, touchless gesture input, and so forth. According to various embodiments, content may be shared between devices in local proximity, and/or between devices that are remote from one another. In at least some embodiments, content is shared based on an identity of a sharing user and/or sharing device.
    Type: Grant
    Filed: October 9, 2020
    Date of Patent: January 10, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
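The abstract above stays high-level, so purely as an illustration the sketch below shows one way identity-aware sharing could branch between a nearby device and a remote relay. Every name here (ShareTarget, share_content, the transport strings) is invented for this example and is not taken from the patent.

```python
# Hypothetical sketch: routing shared content either to a nearby device or a
# remote one, keyed on the identity of the sharing user/device. Names are
# illustrative; the patent does not specify an API.
from dataclasses import dataclass

@dataclass
class ShareTarget:
    device_id: str
    is_local: bool          # discovered over local proximity (e.g. Bluetooth / Wi-Fi Direct)
    owner_identity: str     # identity the receiving device is signed in with

def share_content(content: bytes, sender_identity: str, target: ShareTarget) -> str:
    """Pick a transport based on proximity and stamp the share with the sender identity."""
    if target.is_local:
        # Nearby device: hand the payload off directly over the local transport.
        return f"sent {len(content)} bytes from {sender_identity} to {target.device_id} via local proximity"
    # Remote device: relay through a cloud endpoint tied to the recipient identity.
    return f"queued {len(content)} bytes from {sender_identity} for {target.owner_identity} via remote relay"

print(share_content(b"hello", "alice@example.com",
                    ShareTarget("tablet-01", True, "bob@example.com")))
```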
  • Publication number: 20210026457
    Abstract: Techniques for sharing across environments are described. Generally, different types of input may be employed to share content, such as using a pen, a stylus, a finger, touchless gesture input, and so forth. According to various embodiments, content may be shared between devices in local proximity, and/or between devices that are remote from one another. In at least some embodiments, content is shared based on an identity of a sharing user and/or sharing device.
    Type: Application
    Filed: October 9, 2020
    Publication date: January 28, 2021
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
  • Patent number: 10838502
    Abstract: Techniques for sharing across environments are described. Generally, different types of input may be employed to share content, such as using a pen, a stylus, a finger, touchless gesture input, and so forth. According to various embodiments, content may be shared between devices in local proximity, and/or between devices that are remote from one another. In at least some embodiments, content is shared based on an identity of a sharing user and/or sharing device.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: November 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
  • Patent number: 10705602
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Grant
    Filed: September 25, 2017
    Date of Patent: July 7, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
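The state-dependent command grouping described in the abstract above can be illustrated with a small table-driven sketch. This is not the patented implementation; the object kinds, states, and command names below are all assumptions chosen for the example.

```python
# Minimal sketch: map an object's identification and current state to the group
# of commands presented to the user; applying a command changes the state and
# therefore the next group of commands.
COMMANDS_BY_STATE = {
    ("media_player", "stopped"): ["play", "open", "delete"],
    ("media_player", "playing"): ["pause", "stop", "volume"],
}

class SelectedObject:
    def __init__(self, kind: str, state: str):
        self.kind, self.state = kind, state

def current_commands(obj: SelectedObject) -> list[str]:
    # First group: commands currently operable for this object and state.
    return COMMANDS_BY_STATE.get((obj.kind, obj.state), [])

def apply_command(obj: SelectedObject, command: str) -> list[str]:
    # Changing the state in response to a command yields a different, second group.
    if command == "play":
        obj.state = "playing"
    elif command in ("pause", "stop"):
        obj.state = "stopped"
    return current_commands(obj)

obj = SelectedObject("media_player", "stopped")
print(current_commands(obj))        # first group of commands
print(apply_command(obj, "play"))   # second group after the state change
```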
  • Patent number: 10691880
    Abstract: Techniques for ink in an electronic document are described. According to various implementations, techniques described herein provide a rich set of tools that allow a user to mark up an electronic document, such as a web page, not only in static 2D, where the user writes on top of the document, but also in dynamic 3D. In addition, when 3D elements are added to an electronic document, they are added based on an awareness of the document's content and can adapt their own content in relation to the document.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: June 23, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
  • Patent number: 10510190
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment, a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Grant
    Filed: September 1, 2017
    Date of Patent: December 17, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
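The profile-and-context flow in the abstract above maps naturally to a small lookup sketch. All identifiers below (OBJECT_PROFILES, select_interaction_mode, the context and gesture strings) are assumptions made for illustration, not details disclosed by the patent.

```python
# Illustrative sketch: query a per-object profile for its interaction modes,
# select one from the mixed reality context, then map a user input to a virtual
# action under that mode.
OBJECT_PROFILES = {
    "desk_lamp": {"modes": {"physical_room": "toggle_light", "game_level": "cast_spell"}},
}

def select_interaction_mode(object_id: str, context: str) -> str:
    profile = OBJECT_PROFILES[object_id]              # profile queried for the object
    return profile["modes"].get(context, "inspect")   # mode chosen from the interaction context

def interpret_input(mode: str, user_input: str) -> str:
    # The same gesture maps to different virtual actions depending on the selected mode.
    return mode if user_input == "air_tap" else "none"

mode = select_interaction_mode("desk_lamp", "game_level")
print(interpret_input(mode, "air_tap"))   # -> "cast_spell"
```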
  • Patent number: 10497175
    Abstract: A head-mounted display includes a see-through display and a virtual reality engine. The see-through display is configured to visually augment an appearance of a physical space to a user viewing the physical space through the see-through display. The virtual reality engine is configured to cause the see-through display to visually present a virtual monitor that appears to be integrated with the physical space to a user viewing the physical space through the see-through display.
    Type: Grant
    Filed: September 7, 2016
    Date of Patent: December 3, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Brian Mount, Stephen Latta, Adam Poulos, Daniel McCulloch, Darren Bennett, Ryan Hastings, Jason Scott
  • Patent number: 10083540
    Abstract: A head-mounted display system includes a see-through display that is configured to visually augment an appearance of a physical environment to a user viewing the physical environment through the see-through display. Graphical content presented via the see-through display is created by modeling the ambient lighting conditions of the physical environment.
    Type: Grant
    Filed: January 23, 2017
    Date of Patent: September 25, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ben Sugden, Darren Bennett, Brian Mount, Sebastian Sylvan, Arthur Tomlin, Ryan Hastings, Daniel McCulloch, Kevin Geisner, Robert Crocco, Jr.
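As a rough sketch of the lighting idea above, the snippet below approximates ambient light from a camera frame's mean colour and applies it as a tint to virtual content. The patent does not disclose this specific model; the functions and the toy frame are assumptions.

```python
# Crude ambient-light model: average the captured frame, then modulate a virtual
# object's colour by that estimate so it matches the physical environment.
def estimate_ambient_light(frame):
    """Average the RGB values of a captured frame as a simple ambient-light estimate."""
    pixels = [p for row in frame for p in row]
    n = len(pixels)
    return tuple(sum(px[i] for px in pixels) / n for i in range(3))

def shade_virtual_pixel(albedo, ambient):
    """Tint a virtual object's base colour by the modelled ambient lighting."""
    return tuple(a * l for a, l in zip(albedo, ambient))

frame = [[(0.2, 0.2, 0.3), (0.4, 0.4, 0.5)]]   # toy 1x2 camera frame
ambient = estimate_ambient_light(frame)
print(shade_virtual_pixel((1.0, 0.5, 0.5), ambient))
```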
  • Patent number: 10008044
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a display system. For example, one disclosed embodiment includes displaying a virtual object via the display system as free-floating, detecting a trigger to display the object as attached to a surface, and, in response to the trigger, displaying the virtual object as attached to the surface via the display system. The method may further include detecting a trigger to detach the virtual object from the surface and, in response to the trigger to detach the virtual object from the surface, detaching the virtual object from the surface and displaying the virtual object as free-floating.
    Type: Grant
    Filed: December 23, 2016
    Date of Patent: June 26, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Poulos, Evan Michael Keibler, Arthur Tomlin, Cameron Brown, Daniel McCulloch, Brian Mount, Dan Kroymann, Gregory Lowell Alt
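A hedged sketch of the attach/detach behaviour described in the entry above: a virtual object starts free-floating, snaps to a surface when a trigger fires, and detaches on a second trigger. The trigger names and class are placeholders, not the patented design.

```python
# Trigger-driven attachment state for a displayed virtual object.
class VirtualObject:
    def __init__(self):
        self.attached_surface = None   # None means free-floating

    def on_trigger(self, trigger: str, surface: str | None = None) -> str:
        if trigger == "attach" and surface is not None:
            self.attached_surface = surface    # display as attached to the surface
        elif trigger == "detach":
            self.attached_surface = None       # display as free-floating again
        return ("attached to " + self.attached_surface
                if self.attached_surface else "free-floating")

obj = VirtualObject()
print(obj.on_trigger("attach", surface="wall"))
print(obj.on_trigger("detach"))
```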
  • Patent number: 9979809
    Abstract: Embodiments are disclosed herein that relate to the automatic tracking of objects. For example, one disclosed embodiment provides a method of operating a mobile computing device having an image sensor. The method includes acquiring image data, identifying an inanimate moveable object in the image data, determining whether the inanimate moveable object is a tracked object, if the inanimate moveable object is a tracked object, then storing information regarding a state of the inanimate moveable object, detecting a trigger to provide a notification of the state of the inanimate moveable object, and providing an output of the notification of the state of the inanimate moveable object.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta
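The tracking flow in the abstract above reduces to storing the last observed state of designated objects and replaying it when a trigger arrives. The sketch below is illustrative only; the object names and trigger are invented.

```python
# Keep the last observed state of tracked inanimate objects and emit a
# notification of that state when a trigger (here, a query) occurs.
tracked_objects = {"car_keys"}          # objects the device is configured to track
object_states: dict[str, str] = {}      # last stored state per tracked object

def observe(object_id: str, state: str) -> None:
    if object_id in tracked_objects:    # only tracked objects get their state stored
        object_states[object_id] = state

def notify(object_id: str) -> str:
    # Trigger to provide a notification of the object's last known state.
    return object_states.get(object_id, "no stored state")

observe("car_keys", "on the kitchen counter at 08:15")
observe("coffee_mug", "on the desk")    # not a tracked object, so not stored
print(notify("car_keys"))
```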
  • Patent number: 9977882
    Abstract: Embodiments are disclosed that relate to authenticating a user of a display device. For example, one disclosed embodiment includes displaying one or more virtual images on the display device, wherein the one or more virtual images include a set of augmented reality features. The method further includes identifying one or more movements of the user via data received from a sensor of the display device, and comparing the identified movements of the user to a predefined set of authentication information for the user that links user authentication to a predefined order of the augmented reality features. If the identified movements indicate that the user selected the augmented reality features in the predefined order, then the user is authenticated, and if the identified movements indicate that the user did not select the augmented reality features in the predefined order, then the user is not authenticated.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: May 22, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta, Kevin Geisner, Brian Mount
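A minimal sketch of the order-based authentication described above: the user is authenticated only if the sequence of augmented reality features they selected matches the predefined order stored for them. The feature names and the stored table are assumptions, and the actual gaze/gesture pipeline is omitted.

```python
# Compare the observed selection order against the user's predefined order.
import hmac

USER_AUTH_ORDER = {"alice": ["red_cube", "blue_sphere", "green_cone"]}

def authenticate(user: str, selected_features: list[str]) -> bool:
    expected = USER_AUTH_ORDER.get(user, [])
    # compare_digest gives a timing-safe equality check over the joined sequences.
    return hmac.compare_digest("|".join(expected), "|".join(selected_features))

print(authenticate("alice", ["red_cube", "blue_sphere", "green_cone"]))  # True
print(authenticate("alice", ["blue_sphere", "red_cube", "green_cone"]))  # False
```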
  • Patent number: 9934614
    Abstract: An example wearable display system includes a controller, a left display to display a left-eye augmented reality image with a left-eye display size at left-eye display coordinates, and a right display to display a right-eye augmented reality image with a right-eye display size at right-eye display coordinates, the left-eye and right-eye augmented reality images collectively forming an augmented reality object perceivable at an apparent real world depth by a wearer of the display system. The controller sets the left-eye display coordinates relative to the right-eye display coordinates as a function of the apparent real world depth of the augmented reality object. The function maintains an aspect of the left-eye and right-eye display sizes throughout a non-scaling range of apparent real world depths of the augmented reality object, and the function scales the left-eye and right-eye display sizes with changing apparent real world depth outside the non-scaling range.
    Type: Grant
    Filed: May 20, 2015
    Date of Patent: April 3, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Scott Ramsby, Dan Osborn, Shawn Wright, Anatolie Gavriliuc, Forest Woodcroft Gouin, Megan Saunders, Jesse Rapczak, Stephen Latta, Adam G. Poulos, Daniel McCulloch, Wei Zhang
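The depth-dependent behaviour described in the abstract above lends itself to a small numerical illustration. In the toy model below, the left/right disparity always tracks apparent depth, while the rendered size is held fixed inside a non-scaling depth range and scaled with depth outside it; all constants and names are assumptions, not values from the patent.

```python
# Toy model: disparity follows depth everywhere; size is maintained inside the
# non-scaling range and scaled (continuously at the boundary) outside it.
def stereo_params(depth_m: float,
                  base_size_px: float = 100.0,
                  ipd_m: float = 0.064,
                  focal_px: float = 1000.0,
                  non_scaling=(1.5, 3.0)):
    # Disparity between the left-eye and right-eye display coordinates.
    disparity_px = focal_px * ipd_m / depth_m
    lo, hi = non_scaling
    if lo <= depth_m <= hi:
        size_px = base_size_px                      # size maintained within the range
    else:
        nearest = lo if depth_m < lo else hi
        size_px = base_size_px * nearest / depth_m  # scaled with depth outside the range
    return disparity_px, size_px

for d in (1.0, 2.0, 5.0):
    print(d, stereo_params(d))
```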
  • Publication number: 20180011534
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Application
    Filed: September 25, 2017
    Publication date: January 11, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
  • Publication number: 20180012412
    Abstract: Embodiments that relate to interacting with a physical object in a mixed reality environment via a head-mounted display are disclosed. In one embodiment, a mixed reality interaction program identifies an object based on an image captured by the display. An interaction context for the object is determined based on an aspect of the mixed reality environment. A profile for the physical object is queried to determine interaction modes for the object. A selected interaction mode is programmatically selected based on the interaction context. A user input directed at the object is received via the display and interpreted to correspond to a virtual action based on the selected interaction mode. The virtual action is executed with respect to a virtual object associated with the physical object to modify an appearance of the virtual object. The modified virtual object is then displayed via the display.
    Type: Application
    Filed: September 1, 2017
    Publication date: January 11, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Michael Scavezze, Jonathan Steed, Stephen Latta, Kevin Geisner, Daniel McCulloch, Brian Mount, Ryan Hastings, Phillip Charles Heckinger
  • Patent number: 9851787
    Abstract: A system and related methods for resource management in a head-mounted display device are provided. In one example, the head-mounted display device includes a plurality of sensors and a display system for presenting holographic objects. A resource management program is configured to operate a selected sensor in a default power mode to achieve a selected fidelity. The program receives user-related information from one or more of the sensors, and determines whether target information is detected. Where target information is detected, the program adjusts the selected sensor to operate in a reduced power mode that uses less power than the default power mode.
    Type: Grant
    Filed: November 29, 2012
    Date of Patent: December 26, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen Latta, Jedd Anthony Perry, Rod G. Fleck, Jack Clevenger, Frederik Schaffalitzky, Drew Steedly, Daniel McCulloch, Ian McIntyre, Alexandru Balan, Ben Sugden, Ryan Hastings, Brian Mount
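As a purely illustrative sketch of the power-management idea above, the snippet below drops a sensor from its default mode to a reduced power mode once the target information has been detected. The sensor and mode names are invented for the example.

```python
# Run a sensor at full fidelity until the target information is detected, then
# switch it to a mode that draws less power.
class Sensor:
    def __init__(self, name: str):
        self.name = name
        self.mode = "default"        # default power mode, selected fidelity

def manage_sensor(sensor: Sensor, user_related_info: dict) -> str:
    if user_related_info.get("target_detected"):
        sensor.mode = "reduced"      # reduced power mode once the target is found
    return f"{sensor.name}: {sensor.mode} power mode"

eye_tracker = Sensor("eye_tracker")
print(manage_sensor(eye_tracker, {"target_detected": False}))
print(manage_sensor(eye_tracker, {"target_detected": True}))
```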
  • Patent number: 9836889
    Abstract: Embodiments for interacting with an executable virtual object associated with a real object are disclosed. In one example, a method for interacting with an executable virtual object associated with a real object includes receiving sensor input from one or more sensors attached to a portable see-through display device, and obtaining information regarding a location of the user based on the sensor input. The method also includes, if the location includes a real object comprising an associated executable virtual object, then determining an intent of the user to interact with the executable virtual object, and if the intent to interact is determined, then interacting with the executable object.
    Type: Grant
    Filed: March 1, 2017
    Date of Patent: December 5, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Ben Sugden, John Clavin, Ben Vaught, Stephen Latta, Kathryn Stone Perez, Daniel McCulloch, Jason Scott, Wei Zhang, Darren Bennett, Ryan Hastings, Arthur Tomlin, Kevin Geisner
  • Patent number: 9799145
    Abstract: Embodiments are disclosed that relate to augmenting an appearance of a surface via a see-through display device. For example, one disclosed embodiment provides, on a computing device comprising a see-through display device, a method of augmenting an appearance of a surface. The method includes acquiring, via an outward-facing image sensor, image data of a first scene viewable through the display. The method further includes recognizing a surface viewable through the display based on the image data and, in response to recognizing the surface, acquiring a representation of a second scene comprising one or more of a scene located physically behind the surface viewable through the display and a scene located behind a surface contextually related to the surface viewable through the display. The method further includes displaying the representation via the see-through display.
    Type: Grant
    Filed: July 22, 2015
    Date of Patent: October 24, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Mike Scavezze, Jason Scott, Jonathan Steed, Ian McIntyre, Aaron Krauss, Daniel McCulloch, Stephen Latta
  • Patent number: 9791921
    Abstract: Embodiments are disclosed that relate to operating a user interface on an augmented reality computing device comprising a see-through display system. For example, one disclosed embodiment includes receiving a user input selecting an object in a field of view of the see-through display system, determining a first group of commands currently operable based on one or more of an identification of the selected object and a state of the object, and presenting the first group of commands to a user. The method may further include receiving a command from the first group of commands, changing the state of the selected object from a first state to a second state in response to the command, determining a second group of commands based on the second state, where the second group of commands is different than the first group of commands, and presenting the second group of commands to the user.
    Type: Grant
    Filed: February 19, 2013
    Date of Patent: October 17, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam Poulos, Cameron Brown, Daniel McCulloch, Jeff Cole
  • Publication number: 20170286385
    Abstract: Techniques for ink in an electronic document are described. According to various implementations, techniques described herein provide a rich set of tools that allow a user to mark up an electronic document, such as a web page, not only in static 2D, where the user writes on top of the document, but also in dynamic 3D. In addition, when 3D elements are added to an electronic document, they are added based on an awareness of the document's content and can adapt their own content in relation to the document.
    Type: Application
    Filed: June 30, 2016
    Publication date: October 5, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
  • Publication number: 20170285932
    Abstract: Techniques for ink input for browser navigation are described. Generally, ink refers to freehand input to a touch-sensing functionality and/or a functionality for sensing touchless gestures, which is interpreted as digital ink. According to various embodiments, ink input for browser navigation provides a seamless integration of an ink input canvas with a web browser graphical user interface (“GUI”) to enable intuitive input of network addresses (e.g., web addresses) via ink input.
    Type: Application
    Filed: June 29, 2016
    Publication date: October 5, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Ryan Lucas Hastings, Daniel McCulloch, Michael John Patten
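As a closing illustration of the ink-for-navigation idea in the last entry, the hedged sketch below treats recognized handwriting as a web address and normalizes it before navigation. The recognition step is stubbed out, and every function name and URL is an assumption rather than anything disclosed in the publication.

```python
# Interpret ink strokes on a canvas as a network address for the browser.
from urllib.parse import urlparse

def recognize_ink(strokes) -> str:
    # Stand-in for handwriting recognition over the ink strokes.
    return "example.com/docs"

def navigate_from_ink(strokes) -> str:
    text = recognize_ink(strokes).strip()
    # Add a scheme if the recognized text lacks one, then hand it to the browser.
    url = text if urlparse(text).scheme else "https://" + text
    return f"navigating to {url}"

print(navigate_from_ink([]))
```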