Patents by Inventor Daniel Joseph McCulloch
Daniel Joseph McCulloch has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10620717
Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visual light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and receive the images of the environment from the at least two visual light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
Type: Grant
Filed: June 30, 2016
Date of Patent: April 14, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
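The abstract above describes fusing fast-but-drifting IMU motion data with slower, drift-free camera-derived positions. A minimal sketch of one common way to blend two such position estimates (a complementary filter; the function names and the fixed weighting are illustrative assumptions, not taken from the patent):

```python
def fuse_position(imu_estimate, camera_estimate, alpha=0.98):
    """Blend an IMU-integrated position with a camera-based position.

    alpha close to 1.0 trusts the responsive IMU estimate per frame,
    while the camera estimate continually corrects long-term drift.
    """
    return tuple(alpha * i + (1.0 - alpha) * c
                 for i, c in zip(imu_estimate, camera_estimate))


def track(imu_positions, camera_positions):
    """Produce a fused trajectory from paired per-frame estimates."""
    return [fuse_position(i, c)
            for i, c in zip(imu_positions, camera_positions)]
```

A production tracker would more likely use a Kalman filter and correlate camera images against an environment map, as the abstract suggests; the blend above only shows the core idea of combining the two sensor streams.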
-
Patent number: 10613642
Abstract: Embodiments are disclosed herein that relate to tuning gesture recognition characteristics for a device configured to receive gesture-based user inputs. For example, one disclosed embodiment provides a head-mounted display device including a plurality of sensors, a display configured to present a user interface, a logic machine, and a storage machine that holds instructions executable by the logic machine to detect a gesture based upon information received from a first sensor of the plurality of sensors, perform an action in response to detecting the gesture, and determine whether the gesture matches an intended gesture input. The instructions are further executable to update a gesture parameter that defines the intended gesture input if it is determined that the gesture detected does not match the intended gesture input.
Type: Grant
Filed: March 12, 2014
Date of Patent: April 7, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Scavezze, Adam G. Poulos, John Bevis, Jeremy Lee, Daniel Joseph McCulloch, Nicholas Gervase Fajt
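The tuning loop described above (detect, compare against intent, update a gesture parameter on mismatch) can be sketched with a single scalar threshold; the class name, the "swipe" label, and the update rate are illustrative assumptions, not the patent's actual parameters:

```python
class GestureTuner:
    """Toy recognizer that adjusts its detection threshold when the
    detected gesture does not match the gesture the user intended."""

    def __init__(self, threshold):
        self.threshold = threshold

    def detect(self, magnitude):
        """Classify a motion magnitude as a swipe or no gesture."""
        return "swipe" if magnitude >= self.threshold else "none"

    def feedback(self, magnitude, intended, rate=0.5):
        """On a mismatch, move the threshold partway toward the
        magnitude that was actually observed for the intended input."""
        if self.detect(magnitude) != intended:
            self.threshold += rate * (magnitude - self.threshold)
```

For example, a swipe performed at magnitude 0.8 that a threshold of 1.0 missed would pull the threshold down to 0.9, so similar swipes are recognized afterward.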
-
Patent number: 10373392
Abstract: Embodiments are disclosed for transitioning views presented via a head-mounted display device. One example method for operating a head-mounted display device includes displaying a virtual model at a first position in a coordinate frame of the head-mounted display device, receiving sensor data from one or more sensors of the head-mounted display device, and determining a line of sight of the user that intersects the virtual model to identify a location the user is viewing. The example method further includes, responsive to a trigger, moving the virtual model to a second position in the coordinate frame of the head-mounted display device corresponding to the location the user is viewing and simultaneously scaling the virtual model.
Type: Grant
Filed: August 26, 2015
Date of Patent: August 6, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jonathan R. Christen, Robert Courtney Memmott, Benjamin John Sugden, James L. Nance, Marcus Tanner, Daniel Joseph McCulloch
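The method above has two geometric pieces: casting the user's line of sight to find the viewed location, and moving the model there while scaling it. A minimal sketch, assuming a horizontal plane stands in for the intersected model and using linear interpolation for the simultaneous move-and-scale (both simplifications are mine, not the patent's):

```python
def gaze_hit(eye, direction, plane_y=0.0):
    """Find where the line of sight crosses a horizontal plane,
    a stand-in for intersecting the virtual model's geometry."""
    if direction[1] == 0:
        return None  # gaze parallel to the plane: no intersection
    s = (plane_y - eye[1]) / direction[1]
    if s < 0:
        return None  # intersection is behind the viewer
    return tuple(e + s * d for e, d in zip(eye, direction))


def transition(model_pos, model_scale, target_pos, target_scale, t):
    """Interpolate position and scale together (t in [0, 1]) so the
    model relocates to the gazed-at point while rescaling."""
    pos = tuple(p + t * (q - p) for p, q in zip(model_pos, target_pos))
    scale = model_scale + t * (target_scale - model_scale)
    return pos, scale
```

Stepping `t` from 0 to 1 over a few frames gives the combined move-and-scale transition the abstract describes.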
-
Patent number: 10007352
Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual "timeline" in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
Type: Grant
Filed: August 21, 2015
Date of Patent: June 26, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
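Per-object undo with a selectable timeline, as described above, amounts to keeping an independent history per object. A minimal sketch (class and method names are illustrative, not from the patent):

```python
class ObjectHistory:
    """Keeps a timeline of past states per holographic object so one
    selected object can be reverted without touching the others."""

    def __init__(self):
        self.timelines = {}  # object id -> list of past states

    def record(self, obj_id, state):
        self.timelines.setdefault(obj_id, []).append(state)

    def undo(self, obj_id):
        """Revert only the selected object to its previous state."""
        states = self.timelines.get(obj_id, [])
        if len(states) > 1:
            states.pop()
        return states[-1] if states else None

    def revert_to(self, obj_id, index):
        """Jump to a state chosen from the scrollable timeline,
        discarding everything after it."""
        states = self.timelines.get(obj_id, [])
        del states[index + 1:]
        return states[-1] if states else None
```

Because each object owns its timeline, undoing the "cube" leaves the "sphere" exactly where it was, which is the key property the abstract claims.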
-
Patent number: 9952656
Abstract: Disclosed are a method and corresponding apparatus to enable a user of a display system to manipulate holographic objects. Multiple holographic user interface objects capable of being independently manipulated by a user are displayed to the user, overlaid on a real-world view of a 3D physical space in which the user is located. In response to a first user action, the holographic user interface objects are made to appear to be combined into a holographic container object that appears at a first location in the 3D physical space. In response to the first user action or a second user action, the holographic container object is made to appear to relocate to a second location in the 3D physical space. The holographic user interface objects are then made to appear to deploy from the holographic container object when the holographic container object appears to be located at the second location.
Type: Grant
Filed: August 21, 2015
Date of Patent: April 24, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Aaron Daniel Krauss, Marcus Ghaly, Michael Thomas, Jonathan Paulovich, Daniel Joseph McCulloch
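The combine → relocate → deploy sequence above can be modeled as a small container object. The class name, the chained-call style, and the row layout used on deploy are illustrative assumptions:

```python
class HologramContainer:
    """Gathers UI objects into one movable container, then spreads
    them back out at the container's final location."""

    def __init__(self, objects):
        self.objects = list(objects)
        self.position = None

    def combine(self, position):
        """First user action: collapse the objects into a container."""
        self.position = position
        return self

    def relocate(self, position):
        """Move the container to a new spot in the 3D space."""
        self.position = position
        return self

    def deploy(self, spacing=1.0):
        """Lay the contained objects out around the container
        (here: a simple row along x)."""
        x, y, z = self.position
        return {obj: (x + i * spacing, y, z)
                for i, obj in enumerate(self.objects)}
```

A real system would animate each step and preserve the objects' original relative arrangement; the sketch only captures the state transitions.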
-
Publication number: 20180005445
Abstract: In embodiments of augmenting a moveable entity with a hologram, an alternate reality device includes a tracking system that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears associated with the entity as the entity moves in the environment.
Type: Application
Filed: June 30, 2016
Publication date: January 4, 2018
Applicant: Microsoft Technology Licensing, LLC
Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
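Keeping a hologram "associated with" a moving entity, as described above, reduces to re-deriving the hologram's pose from the entity's tracked position and orientation each frame. A minimal 2D sketch (the yaw-only rotation and fixed offset are simplifying assumptions of mine):

```python
import math


def hologram_pose(entity_position, entity_yaw, offset):
    """Place a hologram at a fixed offset from a tracked entity,
    rotating the offset by the entity's yaw so the hologram stays
    attached as the entity turns (2D x/z plane for brevity)."""
    c, s = math.cos(entity_yaw), math.sin(entity_yaw)
    ox, oz = offset
    x = entity_position[0] + c * ox - s * oz
    z = entity_position[1] + s * ox + c * oz
    return (x, z)
```

Re-evaluating this each time the tracking system reports new motion data keeps the hologram glued to the entity; a full implementation would use a 3D rotation from the entity's estimated orientation.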
-
Publication number: 20180004308
Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visual light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit, and receive the images of the environment from the at least two visual light cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track a motion of the input device in the environment based on the determined positions of the input device.
Type: Application
Filed: June 30, 2016
Publication date: January 4, 2018
Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
-
Publication number: 20170287219
Abstract: A mixed reality system may comprise a head-mounted display (HMD) device with a location sensor from which the HMD device determines a location of the location sensor in space and a base station mounted a predetermined offset from the location sensor and configured to emit an electromagnetic field (EMF). An EMF sensor affixed to an object may be configured to sense a strength of the EMF. The HMD device may determine a location of the EMF sensor relative to the base station based on the sensed strength and determine a location of the EMF sensor in space based on the relative location, the predetermined offset, and the location of the location sensor in space. In some aspects, the HMD device may comprise a see-through display configured to display augmented reality images and overlay a hologram that corresponds to the location of the EMF sensor in space over time.
Type: Application
Filed: March 31, 2016
Publication date: October 5, 2017
Inventors: Adam G. Poulos, Daniel Joseph McCulloch, Nicholas Gervase Fajt, Arthur Tomlin, Brian Mount, Lev Cherkashin, Lorenz Henric Jentz
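The world-space location described above is a chain of three terms: the HMD's tracked location, the base station's fixed offset from the HMD's location sensor, and the EMF sensor's position relative to the base station. A minimal sketch, including a deliberately simplified dipole falloff model for the strength-to-distance step (the 1/d³ model and the function names are illustrative assumptions):

```python
def sensor_world_position(hmd_location, base_offset, emf_relative):
    """Compose HMD location + base-station offset + EMF-relative
    position into the sensor's position in world space."""
    return tuple(h + o + r for h, o, r in
                 zip(hmd_location, base_offset, emf_relative))


def distance_from_strength(strength, k=1.0):
    """Invert a simplified strength ~ k / d**3 magnetic falloff to
    estimate the sensor's distance from the base station."""
    return (k / strength) ** (1.0 / 3.0)
```

A real EMF tracker solves for full 3D position and orientation from multiple field coils rather than a single scalar strength, but the composition of reference frames is the same.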
-
Publication number: 20170061702
Abstract: Embodiments are disclosed for transitioning views presented via a head-mounted display device. One example method for operating a head-mounted display device includes displaying a virtual model at a first position in a coordinate frame of the head-mounted display device, receiving sensor data from one or more sensors of the head-mounted display device, and determining a line of sight of the user that intersects the virtual model to identify a location the user is viewing. The example method further includes, responsive to a trigger, moving the virtual model to a second position in the coordinate frame of the head-mounted display device corresponding to the location the user is viewing and simultaneously scaling the virtual model.
Type: Application
Filed: August 26, 2015
Publication date: March 2, 2017
Inventors: Jonathan R. Christen, Robert Courtney Memmott, Benjamin John Sugden, James L. Nance, Marcus Tanner, Daniel Joseph McCulloch
-
Publication number: 20170052507
Abstract: Disclosed are a method and corresponding apparatus to enable a user of a display system to manipulate holographic objects. Multiple holographic user interface objects capable of being independently manipulated by a user are displayed to the user, overlaid on a real-world view of a 3D physical space in which the user is located. In response to a first user action, the holographic user interface objects are made to appear to be combined into a holographic container object that appears at a first location in the 3D physical space. In response to the first user action or a second user action, the holographic container object is made to appear to relocate to a second location in the 3D physical space. The holographic user interface objects are then made to appear to deploy from the holographic container object when the holographic container object appears to be located at the second location.
Type: Application
Filed: August 21, 2015
Publication date: February 23, 2017
Inventors: Adam Gabriel Poulos, Cameron Graeme Brown, Aaron Daniel Krauss, Marcus Ghaly, Michael Thomas, Jonathan Paulovich, Daniel Joseph McCulloch
-
Publication number: 20170052595
Abstract: Disclosed are techniques for performing undo operations on holographic objects in an immersive 3D visual environment. A display system allows the user to undo a given user operation performed on a particular selected holographic object without affecting any other holographic objects, based on a user's gaze and/or other user input. The technique can be implemented in conjunction with a scrollable visual "timeline" in which multiple past states of the display environment are displayed to the user and are selectable by the user as the target state of the revert operation. Also disclosed is a technique for partially undoing a single continuous user action in a holographic display system.
Type: Application
Filed: August 21, 2015
Publication date: February 23, 2017
Inventors: Adam Gabriel Poulos, Johanna Dy Lynn, Michael Scavezze, Daniel Joseph McCulloch
-
Patent number: 9313481
Abstract: A method for displaying virtual imagery on a stereoscopic display system having a display matrix. The virtual imagery presents a surface of individually renderable loci viewable to an eye of the user. The method includes, for each locus of the viewable surface, illuminating a pixel of the display matrix. The illuminated pixel is chosen based on a pupil position of the eye as determined by the stereoscopic display system. For each locus of the viewable surface, a virtual image of the illuminated pixel is formed in a plane in front of the eye. The virtual image is positioned on a straight line passing through the locus, the plane, and the pupil position. In this manner, the virtual image tracks change in the user's pupil position.
Type: Grant
Filed: February 19, 2014
Date of Patent: April 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Roger Sebastian Sylvan, Arthur Tomlin, Daniel Joseph McCulloch, Brian Mount, Tony Ambrus
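The geometric core of the abstract above is a straight line through a surface locus, the pupil position, and the virtual-image plane: for each locus, the pixel to illuminate is where that line crosses the plane. A minimal sketch, assuming the plane is at a fixed depth along z (coordinate conventions are mine, not the patent's):

```python
def image_point(locus, pupil, plane_z):
    """Find where the line from the pupil through a surface locus
    crosses the virtual-image plane at depth plane_z. This is the
    point whose pixel should be illuminated for that locus."""
    t = (plane_z - pupil[2]) / (locus[2] - pupil[2])
    return tuple(p + t * (l - p) for p, l in zip(pupil, locus))
```

Because the pupil position is an input, recomputing `image_point` as the eye moves shifts the chosen pixel, which is exactly how the virtual image "tracks change in the user's pupil position."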
-
Publication number: 20150261318
Abstract: Embodiments are disclosed herein that relate to tuning gesture recognition characteristics for a device configured to receive gesture-based user inputs. For example, one disclosed embodiment provides a head-mounted display device including a plurality of sensors, a display configured to present a user interface, a logic machine, and a storage machine that holds instructions executable by the logic machine to detect a gesture based upon information received from a first sensor of the plurality of sensors, perform an action in response to detecting the gesture, and determine whether the gesture matches an intended gesture input. The instructions are further executable to update a gesture parameter that defines the intended gesture input if it is determined that the gesture detected does not match the intended gesture input.
Type: Application
Filed: March 12, 2014
Publication date: September 17, 2015
Inventors: Michael Scavezze, Adam G. Poulos, John Bevis, Jeremy Lee, Daniel Joseph McCulloch, Nicholas Gervase Fajt
-
Publication number: 20150262425
Abstract: An augmented reality system includes a see-through display, a sensor array including one or more sensors, a logic machine, and a storage machine. The storage machine holds instructions executable by the logic machine to display via the see-through display an activity report. The activity report includes an assessment and a classification of a plurality of tasks performed by a wearer of the see-through display over a period of time. The assessment and the classification of the plurality of tasks is derived from sensor data collected from the one or more sensors over the period of time.
Type: Application
Filed: March 13, 2014
Publication date: September 17, 2015
Inventors: Ryan Hastings, Cameron Brown, Nicholas Gervase Fajt, Daniel Joseph McCulloch
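Deriving a classified activity report from sensor data over a period, as described above, can be sketched as bucketing per-sample motion magnitudes into coarse task classes and totaling time in each. The class labels and thresholds below are illustrative assumptions, not the application's actual classifier:

```python
def classify_activity(samples):
    """Bucket per-minute motion magnitudes into coarse task classes
    and count the minutes spent in each, yielding a simple report."""
    report = {}
    for magnitude in samples:
        label = ("active" if magnitude > 2.0
                 else "light" if magnitude > 0.5
                 else "idle")
        report[label] = report.get(label, 0) + 1
    return report
```

A real system would likely use a trained classifier over multiple sensor channels rather than fixed thresholds on one magnitude.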
-
Publication number: 20150237336
Abstract: A method for displaying virtual imagery on a stereoscopic display system having a display matrix. The virtual imagery presents a surface of individually renderable loci viewable to an eye of the user. The method includes, for each locus of the viewable surface, illuminating a pixel of the display matrix. The illuminated pixel is chosen based on a pupil position of the eye as determined by the stereoscopic display system. For each locus of the viewable surface, a virtual image of the illuminated pixel is formed in a plane in front of the eye. The virtual image is positioned on a straight line passing through the locus, the plane, and the pupil position. In this manner, the virtual image tracks change in the user's pupil position.
Type: Application
Filed: February 19, 2014
Publication date: August 20, 2015
Inventors: Roger Sebastian Sylvan, Arthur Tomlin, Daniel Joseph McCulloch, Brian Mount, Tony Ambrus