Patents by Inventor Cameron G. Brown
Cameron G. Brown has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 10679648
Abstract: Various embodiments relate to detecting at least one of conversation, the presence, and the identity of others during presentation of digital content on a computing device. When another person is detected, one or more actions may be taken with respect to the digital content. For example, the digital content may be minimized, moved, resized, or otherwise modified.
Type: Grant
Filed: January 12, 2018
Date of Patent: June 9, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Dave Hill, Jonathan Paulovich, Evan Michael Keibler, Jason Scott, Cameron G. Brown, Thomas Forsythe, Jeffrey A. Kohler, Brian Murphy
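The abstract above describes a detection-to-action flow: when another person or a conversation is detected while content is on screen, the content is minimized, moved, or resized. Purely as an illustrative sketch of that flow (not the patented implementation), the Python below maps a detection event to one such action; the Detection enum, ContentWindow class, and on_detection handler are hypothetical names invented for this example.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Detection(Enum):
    PERSON_PRESENT = auto()
    CONVERSATION = auto()
    KNOWN_IDENTITY = auto()


@dataclass
class ContentWindow:
    title: str
    minimized: bool = False
    scale: float = 1.0

    def minimize(self) -> None:
        self.minimized = True

    def resize(self, scale: float) -> None:
        self.scale = scale


def on_detection(window: ContentWindow, detection: Detection) -> None:
    """Apply one of the actions the abstract mentions (minimize, resize, ...)."""
    if detection is Detection.CONVERSATION:
        window.minimize()      # hide content while people are talking
    elif detection is Detection.PERSON_PRESENT:
        window.resize(0.5)     # shrink content when someone else appears
    # a recognized (trusted) identity might instead leave the content untouched


if __name__ == "__main__":
    w = ContentWindow("private document")
    on_detection(w, Detection.PERSON_PRESENT)
    print(w)
```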
-
Patent number: 10235807
Abstract: A system and method are disclosed for building virtual content from within a virtual environment using virtual tools to build and modify the virtual content.
Type: Grant
Filed: January 20, 2015
Date of Patent: March 19, 2019
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael Thomas, Jonathan Paulovich, Adam G. Poulos, Omer Bilal Orhan, Marcus Ghaly, Cameron G. Brown, Nicholas Gervase Fajt, Matthew Kaplan
-
Publication number: 20180137879
Abstract: Various embodiments relate to detecting at least one of conversation, the presence, and the identity of others during presentation of digital content on a computing device. When another person is detected, one or more actions may be taken with respect to the digital content. For example, the digital content may be minimized, moved, resized, or otherwise modified.
Type: Application
Filed: January 12, 2018
Publication date: May 17, 2018
Inventors: Arthur Charles Tomlin, Dave Hill, Jonathan Paulovich, Evan Michael Keibler, Jason Scott, Cameron G. Brown, Thomas Forsythe, Jeffrey A. Kohler, Brian Murphy
-
Patent number: 9922667
Abstract: Various embodiments relate to detecting at least one of conversation, the presence, and the identity of others during presentation of digital content on a computing device. When another person is detected, one or more actions may be taken with respect to the digital content. For example, the digital content may be minimized, moved, resized, or otherwise modified.
Type: Grant
Filed: January 16, 2015
Date of Patent: March 20, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur Charles Tomlin, Dave Hill, Jonathan Paulovich, Evan Michael Keibler, Jason Scott, Cameron G. Brown, Thomas Forsythe, Jeffrey A. Kohler, Brian Murphy
-
Patent number: 9779512
Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
Type: Grant
Filed: January 29, 2015
Date of Patent: October 3, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
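The abstract outlines a pipeline: identify an object, acquire its 3-D model, capture images of a region, estimate orientation, and synthesize a texture exemplar. The sketch below is a rough, hypothetical rendering of that flow; the naive "rotate each capture upright and average the patches" synthesis step, and the function names rectify_patch and generate_texture_exemplar, are assumptions for illustration only, not the HMD's actual reprojection.

```python
import numpy as np


def rectify_patch(image: np.ndarray, orientation_deg: float) -> np.ndarray:
    """Crudely undo in-plane rotation of a captured patch (a stand-in for a
    full reprojection using the object's 3-D model and pose)."""
    k = int(round(orientation_deg / 90.0)) % 4
    return np.rot90(image, k)


def generate_texture_exemplar(patches: list[np.ndarray],
                              orientations_deg: list[float]) -> np.ndarray:
    """Combine several rectified captures of the same surface region into a
    single exemplar (here: a simple per-pixel average)."""
    rectified = [rectify_patch(p, o) for p, o in zip(patches, orientations_deg)]
    return np.mean(np.stack(rectified), axis=0)


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    captures = [rng.random((64, 64, 3)) for _ in range(3)]
    exemplar = generate_texture_exemplar(captures, [0.0, 90.0, 180.0])
    print(exemplar.shape)  # (64, 64, 3)
```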
-
Patent number: 9728010
Abstract: Methods for generating virtual proxy objects and controlling the location of the virtual proxy objects within an augmented reality environment are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object for which to generate a virtual proxy object, generate the virtual proxy object corresponding with the real-world object, and display the virtual proxy object using the HMD such that the virtual proxy object is perceived to exist within an augmented reality environment displayed to an end user of the HMD. In some cases, image processing techniques may be applied to depth images derived from a depth camera embedded within the HMD in order to identify boundary points for the real-world object and to determine the dimensions of the virtual proxy object corresponding with the real-world object.
Type: Grant
Filed: December 30, 2014
Date of Patent: August 8, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Mike Thomas, Cameron G. Brown, Nicholas Gervase Fajt, Jamie Bryant Kirschenbaum
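One simplified reading of the last sentence is: threshold a depth image to find the object's pixels, take their extremes as boundary points, and use those to size a proxy box. The sketch below assumes that reading; the proxy_dimensions helper and its near/far thresholds are invented here and are not the patented image-processing method.

```python
import numpy as np


def proxy_dimensions(depth: np.ndarray, near_m: float, far_m: float):
    """Estimate a bounding box for the closest object in a depth image.

    Pixels whose depth falls inside [near_m, far_m] are treated as the
    real-world object; their extent gives the proxy object's width/height
    in pixels and its depth span in metres.
    """
    mask = (depth >= near_m) & (depth <= far_m)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    width_px = cols.max() - cols.min() + 1
    height_px = rows.max() - rows.min() + 1
    depth_span_m = float(depth[mask].max() - depth[mask].min())
    return width_px, height_px, depth_span_m


if __name__ == "__main__":
    d = np.full((120, 160), 3.0)          # background at 3 m
    d[40:80, 60:100] = 1.2                # a box-shaped object at ~1.2 m
    print(proxy_dimensions(d, 1.0, 1.5))  # (40, 40, 0.0)
```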
-
Patent number: 9563331
Abstract: Technology is described for a web-like hierarchical menu interface, which displays a menu in a web-like hierarchical menu display configuration in a near-eye display (NED). The web-like hierarchical menu display configuration links menu levels and menu items within a menu level with flexible spatial dimensions for menu elements. One or more processors executing the interface select a web-like hierarchical menu display configuration based on the available menu space and the user's head view direction, determined from a 3D mapping of the NED field of view data and stored user head comfort rules. Activation parameters in menu item selection criteria are adjusted to be user-specific based on user head motion data tracked by one or more sensors while the user wears the NED. Menu display layout may be triggered by changes in the user's head view direction and in the available menu space about the user's head.
Type: Grant
Filed: June 28, 2013
Date of Patent: February 7, 2017
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam G. Poulos, Anthony J. Ambrus, Cameron G. Brown, Jason Scott, Brian J. Mount, Daniel J. McCulloch, John Bevis, Wei Zhang
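The abstract's central idea is choosing a menu display configuration from the angular space available around the user's head view direction. As a minimal, hypothetical sketch of that selection step only (the MenuLayout catalogue and select_layout function are made-up names, and the real interface also uses 3D mapping and head-comfort rules):

```python
from dataclasses import dataclass


@dataclass
class MenuLayout:
    name: str
    min_width_deg: float    # angular space the layout needs horizontally
    min_height_deg: float   # ... and vertically


# hypothetical catalogue of "web-like" arrangements, largest first
LAYOUTS = [
    MenuLayout("radial_web", 40.0, 30.0),
    MenuLayout("vertical_strand", 15.0, 35.0),
    MenuLayout("compact_cluster", 12.0, 12.0),
]


def select_layout(available_w_deg: float, available_h_deg: float) -> MenuLayout:
    """Pick the first layout that fits in the menu space around the user's
    current head view direction (falling back to the most compact one)."""
    for layout in LAYOUTS:
        if (layout.min_width_deg <= available_w_deg and
                layout.min_height_deg <= available_h_deg):
            return layout
    return LAYOUTS[-1]


if __name__ == "__main__":
    print(select_layout(20.0, 40.0).name)  # vertical_strand
```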
-
Patent number: 9442567
Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
Type: Grant
Filed: October 28, 2015
Date of Patent: September 13, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
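The core check described here is whether the virtual pointer's path has crossed two or more edges of a virtual object's bounds. The toy 2-D sketch below illustrates just that edge-crossing test; it is not the patented method (which tracks head movement in 3-D and also uses swipe speed and object size to choose a follow-up object), and the Rect, edges_crossed, and is_gaze_swipe names are assumptions.

```python
from dataclasses import dataclass


@dataclass
class Rect:
    left: float
    top: float
    right: float
    bottom: float


def edges_crossed(path: list[tuple[float, float]], rect: Rect) -> int:
    """Count how many distinct edges of the object's 2-D bounds the virtual
    pointer path has swept across."""
    xs = [p[0] for p in path]
    ys = [p[1] for p in path]
    crossed = 0
    crossed += min(xs) < rect.left < max(xs)     # left edge
    crossed += min(xs) < rect.right < max(xs)    # right edge
    crossed += min(ys) < rect.top < max(ys)      # top edge
    crossed += min(ys) < rect.bottom < max(ys)   # bottom edge
    return crossed


def is_gaze_swipe(path, rect, duration_s: float, min_edges: int = 2) -> bool:
    return duration_s > 0 and edges_crossed(path, rect) >= min_edges


if __name__ == "__main__":
    obj = Rect(left=2.0, top=2.0, right=6.0, bottom=5.0)
    pointer_path = [(1.0, 3.0), (4.0, 3.5), (7.0, 4.0)]  # enters left, exits right
    print(is_gaze_swipe(pointer_path, obj, duration_s=0.4))  # True
```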
-
Publication number: 20160225164
Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
Type: Application
Filed: January 29, 2015
Publication date: August 4, 2016
Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
-
Publication number: 20160210781
Abstract: A system and method are disclosed for building virtual content from within a virtual environment using virtual tools to build and modify the virtual content.
Type: Application
Filed: January 20, 2015
Publication date: July 21, 2016
Inventors: Michael Thomas, Jonathan Paulovich, Adam G. Poulos, Omer Bilal Orhan, Marcus Ghaly, Cameron G. Brown, Nicholas Gervase Fajt, Matthew Kaplan
-
Publication number: 20160210780
Abstract: A system and method are disclosed for scaled viewing, experiencing, and interacting with a virtual workpiece in a mixed reality environment. The system includes an immersion mode, in which the user is able to select a virtual avatar and place it somewhere in or adjacent to a virtual workpiece. The view then displayed to the user may be that from the perspective of the avatar. The user is, in effect, immersed into the virtual content, and can view, experience, explore, and interact with the workpiece in the virtual content on a life-size scale.
Type: Application
Filed: January 20, 2015
Publication date: July 21, 2016
Inventors: Jonathan Paulovich, Johnathan Robert Bevis, Cameron G. Brown, Jonathan Plumb, Daniel J. McCulloch, Nicholas Gervase Fajt
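A simple way to picture the immersion mode is as re-homing the viewpoint to wherever the avatar was placed inside the (possibly scaled) workpiece. The snippet below is only that arithmetic, under assumptions of this example (the Vec3 type, the immersion_view_position helper, and the uniform workpiece_scale factor are all invented here):

```python
from dataclasses import dataclass


@dataclass
class Vec3:
    x: float
    y: float
    z: float


def immersion_view_position(workpiece_origin: Vec3, avatar_offset: Vec3,
                            workpiece_scale: float) -> Vec3:
    """Where the user's viewpoint ends up when the view switches to the
    avatar's perspective: the avatar's offset within the scaled workpiece,
    expressed in world coordinates."""
    return Vec3(
        workpiece_origin.x + avatar_offset.x * workpiece_scale,
        workpiece_origin.y + avatar_offset.y * workpiece_scale,
        workpiece_origin.z + avatar_offset.z * workpiece_scale,
    )


if __name__ == "__main__":
    origin = Vec3(1.0, 0.0, 2.0)   # where the tabletop model sits in the room
    avatar = Vec3(0.2, 0.1, 0.3)   # avatar placed inside the model
    print(immersion_view_position(origin, avatar, workpiece_scale=10.0))
```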
-
Patent number: 9390561
Abstract: Methods for generating and displaying personalized virtual billboards within an augmented reality environment are described. The personalized virtual billboards may facilitate the sharing of personalized information between persons within an environment who have varying degrees of acquaintance (e.g., ranging from close familial relationships to strangers). In some embodiments, a head-mounted display device (HMD) may detect a mobile device associated with a particular person within an environment, acquire a personalized information set corresponding with the particular person, generate a virtual billboard based on the personalized information set, and display the virtual billboard on the HMD. The personalized information set may include information associated with the particular person such as shopping lists and classified advertisements.
Type: Grant
Filed: April 12, 2013
Date of Patent: July 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Cameron G. Brown, Abby Lee, Brian J. Mount, Daniel J. McCulloch, Michael J. Scavezze, Ryan L. Hastings, John Bevis, Mike Thomas, Ron Amador-Leon
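The abstract's flow is: detect a person's mobile device, look up their personalized information set, and build a billboard whose contents reflect the degree of acquaintance. The sketch below assumes that reading; the INFO_BY_DEVICE lookup, the acquaintance_level scale, and the billboard_text function are placeholders invented for illustration.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class PersonalizedInfo:
    person_id: str
    shopping_list: list[str]
    classifieds: list[str]


# hypothetical stand-in for a lookup keyed by a detected mobile device
INFO_BY_DEVICE = {
    "device-42": PersonalizedInfo("alice", ["milk", "bread"], ["bike for sale"]),
}


def billboard_text(device_id: str, acquaintance_level: int) -> Optional[str]:
    """Build the billboard shown next to a detected person; how much is shared
    depends on how well the viewer knows them (0 = stranger, 2 = close)."""
    info = INFO_BY_DEVICE.get(device_id)
    if info is None:
        return None
    lines = list(info.classifieds)        # classified ads are shared broadly
    if acquaintance_level >= 2:
        lines += info.shopping_list       # shopping lists only with close contacts
    return "\n".join(lines) if lines else None


if __name__ == "__main__":
    print(billboard_text("device-42", acquaintance_level=0))
    print(billboard_text("device-42", acquaintance_level=2))
```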
-
Publication number: 20160189426
Abstract: Methods for generating virtual proxy objects and controlling the location of the virtual proxy objects within an augmented reality environment are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object for which to generate a virtual proxy object, generate the virtual proxy object corresponding with the real-world object, and display the virtual proxy object using the HMD such that the virtual proxy object is perceived to exist within an augmented reality environment displayed to an end user of the HMD. In some cases, image processing techniques may be applied to depth images derived from a depth camera embedded within the HMD in order to identify boundary points for the real-world object and to determine the dimensions of the virtual proxy object corresponding with the real-world object.
Type: Application
Filed: December 30, 2014
Publication date: June 30, 2016
Inventors: Mike Thomas, Cameron G. Brown, Nicholas Gervase Fajt, Jamie Bryant Kirschenbaum
-
Patent number: 9367136
Abstract: Methods for providing real-time feedback to an end user of a mobile device as they are interacting with or manipulating one or more virtual objects within an augmented reality environment are described. The real-time feedback may comprise visual feedback, audio feedback, and/or haptic feedback. In some embodiments, a mobile device, such as a head-mounted display device (HMD), may determine an object classification associated with a virtual object within an augmented reality environment, detect an object manipulation gesture performed by an end user of the mobile device, detect an interaction with the virtual object based on the object manipulation gesture, determine a magnitude of a virtual force associated with the interaction, and provide real-time feedback to the end user of the mobile device based on the interaction, the magnitude of the virtual force applied to the virtual object, and the object classification associated with the virtual object.
Type: Grant
Filed: April 12, 2013
Date of Patent: June 14, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Stephen G. Latta, Adam G. Poulos, Cameron G. Brown, Daniel J. McCulloch, Matthew Kaplan, Arnulfo Zepeda Navratil, Jon Paulovich, Kudo Tsunoda
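The feedback here is a function of two of the factors the abstract lists: the virtual force applied and the object's classification. As a minimal, hypothetical sketch of that mapping (the Feedback fields, the 0-10 N normalisation, and the material names are assumptions, not the patent's feedback model):

```python
from dataclasses import dataclass


@dataclass
class Feedback:
    visual: str
    audio: str
    haptic_strength: float   # 0..1


def feedback_for_interaction(object_class: str, force: float) -> Feedback:
    """Scale the feedback with the virtual force and vary it by object class
    (e.g. 'metal' vs 'rubber'), reflecting the factors the abstract lists."""
    strength = max(0.0, min(1.0, force / 10.0))    # normalise a 0-10 N virtual force
    if object_class == "metal":
        return Feedback("spark_highlight", "clank.wav", strength)
    if object_class == "rubber":
        return Feedback("squash_deform", "thud.wav", strength * 0.5)
    return Feedback("generic_glow", "tap.wav", strength)


if __name__ == "__main__":
    print(feedback_for_interaction("metal", force=7.5))
```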
-
Patent number: 9311718
Abstract: Methods for controlling the display of content as the content is being viewed by an end user of a head-mounted display device (HMD) are described. In some embodiments, an HMD may display the content using a virtual content reader for reading the content. The content may comprise text and/or images, such as text or images associated with an electronic book, an electronic magazine, a word processing document, a webpage, or an email. The virtual content reader may provide automated content scrolling based on a rate at which the end user reads a portion of the displayed content on the virtual content reader. In one embodiment, an HMD may combine automatic scrolling of content displayed on the virtual content reader with user controlled scrolling (e.g., via head tracking of the end user of the HMD).
Type: Grant
Filed: January 23, 2014
Date of Patent: April 12, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Michael J. Scavezze, Adam G. Poulos, Johnathan Robert Bevis, Nicholas Gervase Fajt, Cameron G. Brown, Daniel J. McCulloch, Jeremy Lee
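The last sentence describes blending two scroll sources: an automatic component driven by the measured reading rate and a manual component driven by head tracking. The toy function below illustrates one way such a blend could look; the line-height constant, the 10-degree dead zone, and the scroll_offset signature are invented for this sketch.

```python
def scroll_offset(lines_read_per_s: float, elapsed_s: float,
                  head_pitch_deg: float, line_height_px: int = 24) -> float:
    """Combine automatic scrolling (driven by the measured reading rate) with
    user-controlled scrolling (here: a head-pitch nudge past a dead zone)."""
    auto = lines_read_per_s * elapsed_s * line_height_px
    manual = 0.0
    if abs(head_pitch_deg) > 10.0:       # dead zone so normal reading doesn't scroll
        sign = 1 if head_pitch_deg > 0 else -1
        manual = (head_pitch_deg - 10.0 * sign) * 2.0
    return auto + manual


if __name__ == "__main__":
    # reading ~2 lines/s for 5 s, plus looking slightly down (15 degrees)
    print(scroll_offset(2.0, 5.0, head_pitch_deg=15.0))  # 240 + 10 = 250.0
```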
-
Publication number: 20160048204
Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
Type: Application
Filed: October 28, 2015
Publication date: February 18, 2016
Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
-
Patent number: 9245387
Abstract: Methods for positioning virtual objects within an augmented reality environment using snap grid spaces associated with real-world environments, real-world objects, and/or virtual objects within the augmented reality environment are described. A snap grid space may comprise a two-dimensional or three-dimensional virtual space within an augmented reality environment in which one or more virtual objects may be positioned. In some embodiments, a head-mounted display device (HMD) may identify one or more grid spaces within an augmented reality environment, detect a positioning of a virtual object within the augmented reality environment, determine a target grid space of the one or more grid spaces in which to position the virtual object, determine a position of the virtual object within the target grid space, and display the virtual object within the augmented reality environment based on the position of the virtual object within the target grid space.
Type: Grant
Filed: April 12, 2013
Date of Patent: January 26, 2016
Assignee: Microsoft Technology Licensing, LLC
Inventors: Adam G. Poulos, Jason Scott, Matthew Kaplan, Christopher Obeso, Cameron G. Brown, Daniel J. McCulloch, Abby Lee, Brian J. Mount, Ben J. Sugden
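Two steps in the abstract lend themselves to a small worked example: choosing a target grid space and snapping the object's position within it. The sketch below uses the simplest possible rules (nearest grid origin, round to the nearest cell centre); the GridSpace class, nearest_grid, and snap_to_grid are illustrative names, not the patented positioning logic.

```python
from dataclasses import dataclass
import math


@dataclass
class GridSpace:
    name: str
    origin: tuple[float, float, float]
    cell_size: float


def nearest_grid(position, grids):
    """Pick the target grid space: here, simply the one whose origin is
    closest to where the object is being placed."""
    return min(grids, key=lambda g: math.dist(position, g.origin))


def snap_to_grid(position, grid: GridSpace):
    """Snap the object's position to the nearest cell centre of that grid."""
    def snap(p, o):
        return o + round((p - o) / grid.cell_size) * grid.cell_size
    return tuple(snap(p, o) for p, o in zip(position, grid.origin))


if __name__ == "__main__":
    grids = [GridSpace("tabletop", (0.0, 0.9, 0.0), 0.25),
             GridSpace("wall", (2.0, 1.5, 0.0), 0.5)]
    drop_point = (0.37, 1.02, -0.11)
    target = nearest_grid(drop_point, grids)
    print(target.name, snap_to_grid(drop_point, target))  # tabletop (0.25, 0.9, 0.0)
```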
-
Patent number: 9201578
Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
Type: Grant
Filed: January 23, 2014
Date of Patent: December 1, 2015
Assignee: Microsoft Technology Licensing, LLC
Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
-
Publication number: 20150331240
Abstract: Assisted viewing of web-based resources by an end user of a head-mounted display device (HMD) is described. An HMD may display content from web-based resources using a see-through display while tracking eye and head movement of the end user viewing the content within an augmented reality environment. Active view regions within the see-through display are identified based on tracking information including eye gaze data and head direction data. The web-based resources are analyzed to identify content and display elements. The analysis is correlated with the active view regions to identify the underlying content that is a desired point of focus of a corresponding active view region, as well as to identify the display elements corresponding to that content. A web-based resource is modified based on the correlation. The content from the web-based resource is displayed based on the modifications to assist the end user in viewing the web-based resource.
Type: Application
Filed: May 15, 2014
Publication date: November 19, 2015
Inventors: Adam G. Poulos, Cameron G. Brown, Stephen G. Latta, Brian J. Mount, Daniel J. McCulloch
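A condensed way to picture the correlation step: map the active view region (here reduced to a gaze point) to the page element under it, then modify that element to assist viewing. The sketch below assumes that reduction; the PageElement class, element_under_gaze, the emphasise helper, and the "enlarge the font" modification are all illustrative stand-ins rather than the described system.

```python
from dataclasses import dataclass


@dataclass
class PageElement:
    selector: str
    bounds: tuple[float, float, float, float]   # x, y, width, height in pixels
    font_px: int


def element_under_gaze(elements, gaze_xy):
    """Find the display element whose bounds contain the active view region
    implied by the gaze point."""
    gx, gy = gaze_xy
    for el in elements:
        x, y, w, h = el.bounds
        if x <= gx <= x + w and y <= gy <= y + h:
            return el
    return None


def emphasise(element: PageElement) -> PageElement:
    """Modify the web-based resource to assist viewing: here, just enlarge the
    text of the element the user is focused on."""
    return PageElement(element.selector, element.bounds, int(element.font_px * 1.5))


if __name__ == "__main__":
    page = [PageElement("#nav", (0, 0, 800, 50), 14),
            PageElement("#article", (0, 60, 800, 600), 16)]
    focused = element_under_gaze(page, gaze_xy=(400, 300))
    if focused:
        print(emphasise(focused))  # "#article" with font_px=24
```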
-
Publication number: 20150302869
Abstract: Various embodiments relate to detecting at least one of conversation, the presence, and the identity of others during presentation of digital content on a computing device. When another person is detected, one or more actions may be taken with respect to the digital content. For example, the digital content may be minimized, moved, resized, or otherwise modified.
Type: Application
Filed: January 16, 2015
Publication date: October 22, 2015
Inventors: Arthur Charles Tomlin, Dave Hill, Jonathan Paulovich, Evan Michael Keibler, Jason Scott, Cameron G. Brown, Thomas Forsythe, Jeffrey A. Kohler, Brian Murphy