Patents by Inventor Arthur C. Tomlin

Arthur C. Tomlin has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10921446
    Abstract: Generally, a scanning device performs a sonic scan of a space by generating an ultrasonic impulse and measuring reflected signals as raw audio data. Sonic scan data, including raw audio data and an associated scan location, is forwarded to a sonic mapping service, which generates and distributes a 3D map of the space called a sonic map. When multiple devices contribute, the map is a collaborative sonic map. The sonic mapping service is advantageously available as a distributed computing service, and can detect acoustic characteristics of the space and/or attribute visual/audio features to elements of a 3D model based on a corresponding detected acoustic characteristic. Various implementations that utilize a sonic map, detected acoustic characteristics, an impacted visual map, and/or an impacted 3D object include mixed reality communications, automatic calibration, relocalization, visualizing materials, rendering 3D geometry, and the like.
    Type: Grant
    Filed: April 6, 2018
    Date of Patent: February 16, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jeffrey Ryan Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Priya Ganadas, Arthur C. Tomlin
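    The echo-ranging step described in the abstract above can be illustrated with a short, hypothetical sketch (the function name, the cross-correlation approach, and the constants are assumptions, not the patent's claimed method): emit a known ultrasonic impulse, locate its echo in the recorded audio by cross-correlation, and convert the round-trip delay into a distance.

      import numpy as np

      SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

      def echo_distance(emitted: np.ndarray, recorded: np.ndarray,
                        sample_rate: float) -> float:
          """Estimate the distance to the strongest reflector."""
          # The cross-correlation peaks where the recorded signal best
          # matches a delayed copy of the emitted impulse.
          corr = np.correlate(recorded, emitted, mode="full")
          delay_samples = np.argmax(corr) - (len(emitted) - 1)
          delay_seconds = max(delay_samples, 0) / sample_rate
          # The impulse travels out and back, so halve the round trip.
          return SPEED_OF_SOUND * delay_seconds / 2.0

    Sonic scan data would pair such measurements with the scan location before forwarding them to the mapping service.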
  • Patent number: 10802278
    Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Grant
    Filed: January 9, 2019
    Date of Patent: October 13, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
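    As a rough sketch of the carving step (the grid layout, units, and method names are illustrative assumptions, not the disclosed design): mark a user-height, user-width column of voxels as traversable at each position along the path.

      import numpy as np

      class SpaceCarvingModel:
          """Voxel grid in which True marks carved-out (navigable) space."""
          def __init__(self, size=(100, 30, 100), voxel_m=0.1):
              self.free = np.zeros(size, dtype=bool)
              self.voxel_m = voxel_m

          def carve_at(self, position_m, user_height_m, user_width_m):
              # Convert the path position (x, y-up, z) to voxel indices.
              cx, _, cz = (np.asarray(position_m) / self.voxel_m).astype(int)
              half_w = int(user_width_m / (2 * self.voxel_m))
              h = int(user_height_m / self.voxel_m)
              # Carve a user-sized column of free space around the path point.
              self.free[max(cx - half_w, 0):cx + half_w + 1,
                        0:h,
                        max(cz - half_w, 0):cz + half_w + 1] = True

    Calling carve_at along positions interpolated from the NED system's distance-and-direction estimates yields a carved model that could then be related to a surface reconstruction mesh.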
  • Patent number: 10674305
    Abstract: The disclosed technology provides multi-dimensional audio output by establishing the physical location of an audio transmitting device relative to an audio outputting device within a map of physical space shared between the two devices. An orientation of the audio outputting device relative to the audio transmitting device is determined, and an audio signal received from the audio transmitting device via a communication network is processed using that orientation and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner that indicates the relative physical direction of the audio transmitting device from the audio outputting device in the shared map of the physical space.
    Type: Grant
    Filed: March 15, 2018
    Date of Patent: June 2, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kendall Clark York, Jeffrey Sipko, Aaron Krauss, Andrew F. Muehlhausen, Adolfo Hernandez Santisteban, Arthur C. Tomlin
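    A minimal sketch of the directional rendering step (constant-power panning stands in here for whatever spatialization the patent actually claims; all names are assumptions): compute the transmitter's azimuth in the listener's frame, then weight the left and right channels accordingly.

      import numpy as np

      def spatialize(mono: np.ndarray, listener_xz, listener_yaw_rad,
                     transmitter_xz) -> np.ndarray:
          """Return a stereo signal that appears to come from the
          transmitter's direction in the shared map."""
          dx = transmitter_xz[0] - listener_xz[0]
          dz = transmitter_xz[1] - listener_xz[1]
          # Azimuth of the transmitter relative to the listener's facing.
          azimuth = np.arctan2(dx, dz) - listener_yaw_rad
          azimuth = (azimuth + np.pi) % (2 * np.pi) - np.pi
          # Constant-power pan: -90 degrees = full left, +90 = full right.
          pan = np.clip(azimuth / (np.pi / 2), -1.0, 1.0)
          theta = (pan + 1.0) * np.pi / 4.0
          return np.stack([np.cos(theta) * mono, np.sin(theta) * mono], axis=1)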
  • Patent number: 10620717
    Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visible-light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit and the images of the environment from the two cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track the motion of the input device in the environment based on the determined positions.
    Type: Grant
    Filed: June 30, 2016
    Date of Patent: April 14, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
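    One plausible reading of the fusion step is sketched below with a complementary filter (the filter choice, gain, and names are assumptions, not the patent's disclosed algorithm): integrate high-rate IMU data between camera frames, then pull the estimate toward each map-correlated camera fix to cancel drift.

      import numpy as np

      class PositionTracker:
          def __init__(self, blend=0.2):
              self.position = np.zeros(3)   # meters, map frame
              self.velocity = np.zeros(3)   # m/s
              self.blend = blend            # trust placed in camera fixes

          def on_imu(self, accel_mps2, dt):
              """Dead-reckon between camera frames by integrating acceleration."""
              self.velocity += np.asarray(accel_mps2) * dt
              self.position += self.velocity * dt

          def on_camera_fix(self, map_position):
              """Blend toward the position recovered by correlating the two
              cameras' images against the environment map, correcting IMU
              drift without discarding the high-rate motion estimate."""
              self.position += self.blend * (np.asarray(map_position) - self.position)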
  • Publication number: 20190310366
    Abstract: Generally, a scanning device performs a sonic scan of a space by generating an ultrasonic impulse and measuring reflected signals as raw audio data. Sonic scan data, including raw audio data and an associated scan location, is forwarded to a sonic mapping service, which generates and distributes a 3D map of the space called a sonic map. When multiple devices contribute, the map is a collaborative sonic map. The sonic mapping service is advantageously available as a distributed computing service, and can detect acoustic characteristics of the space and/or attribute visual/audio features to elements of a 3D model based on a corresponding detected acoustic characteristic. Various implementations that utilize a sonic map, detected acoustic characteristics, an impacted visual map, and/or an impacted 3D object include mixed reality communications, automatic calibration, relocalization, visualizing materials, rendering 3D geometry, and the like.
    Type: Application
    Filed: April 6, 2018
    Publication date: October 10, 2019
    Inventors: Jeffrey Ryan Sipko, Adolfo Hernandez Santisteban, Aaron Daniel Krauss, Priya Ganadas, Arthur C. Tomlin
  • Publication number: 20190289416
    Abstract: The disclosed technology provides multi-dimensional audio output by establishing the physical location of an audio transmitting device relative to an audio outputting device within a map of physical space shared between the two devices. An orientation of the audio outputting device relative to the audio transmitting device is determined, and an audio signal received from the audio transmitting device via a communication network is processed using that orientation and the relative physical location of the audio transmitting device to create an augmented audio signal. The augmented audio signal is output through at least one audio output on the audio outputting device in a manner that indicates the relative physical direction of the audio transmitting device from the audio outputting device in the shared map of the physical space.
    Type: Application
    Filed: March 15, 2018
    Publication date: September 19, 2019
    Inventors: Kendall Clark York, Jeffrey Sipko, Aaron Krauss, Andrew F. Muehlhausen, Adolfo Hernandez Santisteban, Arthur C. Tomlin
  • Patent number: 10330931
    Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Grant
    Filed: June 28, 2013
    Date of Patent: June 25, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
  • Publication number: 20190162964
    Abstract: Technology is described for three-dimensional (3D) space carving of a user environment based on movement through the user environment of one or more users wearing a near-eye display (NED) system. One or more sensors of the NED system provide sensor data from which a distance and direction of movement can be determined. Spatial dimensions for a navigable path can be represented based on user height data and user width data of the one or more users who have traversed the path. Space carving data identifying carved-out space can be stored in a 3D space carving model of the user environment. The navigable paths can also be related to position data in another kind of 3D mapping, such as a 3D surface reconstruction mesh model of the user environment generated from depth images.
    Type: Application
    Filed: January 9, 2019
    Publication date: May 30, 2019
    Inventors: Anthony J. Ambrus, Jea Gon Park, Adam G. Poulos, Justin Avram Clark, Michael Jason Gourlay, Brian J. Mount, Daniel J. McCulloch, Arthur C. Tomlin
  • Patent number: 10007349
    Abstract: Methods for recognizing gestures using adaptive multi-sensor gesture recognition are described. In some embodiments, a gesture recognition system receives a plurality of sensor inputs from a plurality of sensor devices and a plurality of confidence thresholds associated with the plurality of sensor inputs. A confidence threshold specifies a minimum confidence value above which a particular gesture is deemed to have occurred. Upon detection of a compensating event, such as excessive motion involving one of the plurality of sensor devices, the gesture recognition system may modify the plurality of confidence thresholds based on the compensating event. Subsequently, the gesture recognition system generates a multi-sensor confidence value based on whether at least a subset of the plurality of confidence thresholds has been satisfied. The gesture recognition system may also modify the plurality of confidence thresholds when sensor inputs are plugged into or unplugged from the gesture recognition system.
    Type: Grant
    Filed: May 4, 2015
    Date of Patent: June 26, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Stephen G. Latta, Brian J. Mount, Adam G. Poulos, Jeffrey A. Kohler, Arthur C. Tomlin, Jonathan T. Steed
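    The threshold logic lends itself to a small sketch (the sensor names, penalty scheme, and subset rule are invented for illustration): each sensor reports a confidence that a gesture occurred, a compensating event raises the affected sensor's threshold, and the multi-sensor decision requires enough thresholds to be met.

      def gesture_detected(confidences, thresholds,
                           compensating=frozenset(),
                           penalty=0.2, min_satisfied=2):
          """Multi-sensor vote: a gesture is recognized when enough
          per-sensor confidences clear their (possibly raised) thresholds."""
          satisfied = 0
          for sensor, conf in confidences.items():
              threshold = thresholds[sensor]
              if sensor in compensating:   # e.g., excessive motion detected
                  threshold = min(threshold + penalty, 1.0)
              if conf >= threshold:
                  satisfied += 1
          return satisfied >= min_satisfied

      # The shaken depth camera is held to a stricter standard, but the
      # IMU and microphone still carry the gesture:
      print(gesture_detected({"depth": 0.7, "imu": 0.9, "mic": 0.85},
                             {"depth": 0.6, "imu": 0.8, "mic": 0.8},
                             compensating={"depth"}))   # True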
  • Publication number: 20180005445
    Abstract: In embodiments of augmenting a moveable entity with a hologram, an alternate reality device includes a tracking system that can recognize an entity in an environment and track movement of the entity in the environment. The alternate reality device can also include a detection algorithm implemented to identify the entity recognized by the tracking system based on identifiable characteristics of the entity. A hologram positioning application is implemented to receive motion data from the tracking system, receive entity characteristic data from the detection algorithm, and determine a position and an orientation of the entity in the environment based on the motion data and the entity characteristic data. The hologram positioning application can then generate a hologram that appears to be associated with the entity as the entity moves in the environment.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
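    The positioning step reduces to a pose composition, sketched here in 2D for brevity (real systems use full 3D transforms; the names are illustrative): compose the entity's tracked world pose with a fixed local offset for the hologram.

      import math

      def hologram_world_pose(entity_xy, entity_yaw_rad, local_offset_xy):
          """Rotate the hologram's fixed offset by the entity's orientation,
          then translate by the entity's position, so the hologram appears
          attached to the entity as it moves."""
          c, s = math.cos(entity_yaw_rad), math.sin(entity_yaw_rad)
          ox, oy = local_offset_xy
          return (entity_xy[0] + c * ox - s * oy,
                  entity_xy[1] + s * ox + c * oy), entity_yaw_rad

      # A label offset 0.3 m along the entity's own x-axis stays on the
      # same side of the entity after it turns 90 degrees:
      print(hologram_world_pose((2.0, 1.0), math.pi / 2, (0.3, 0.0)))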
  • Publication number: 20180004308
    Abstract: In embodiments of a camera-based input device, the input device includes an inertial measurement unit that collects motion data associated with velocity and acceleration of the input device in an environment, such as in three-dimensional (3D) space. The input device also includes at least two visible-light cameras that capture images of the environment. A positioning application is implemented to receive the motion data from the inertial measurement unit and the images of the environment from the two cameras. The positioning application can then determine positions of the input device based on the motion data and the images correlated with a map of the environment, and track the motion of the input device in the environment based on the determined positions.
    Type: Application
    Filed: June 30, 2016
    Publication date: January 4, 2018
    Inventors: Daniel Joseph McCulloch, Nicholas Gervase Fajt, Adam G. Poulos, Christopher Douglas Edmonds, Lev Cherkashin, Brent Charles Allen, Constantin Dulu, Muhammad Jabir Kapasi, Michael Grabner, Michael Edward Samples, Cecilia Bong, Miguel Angel Susffalich, Varun Ramesh Mani, Anthony James Ambrus, Arthur C. Tomlin, James Gerard Dack, Jeffrey Alan Kohler, Eric S. Rehmeyer, Edward D. Parker
  • Patent number: 9779512
    Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
    Type: Grant
    Filed: January 29, 2015
    Date of Patent: October 3, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
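    One step of this pipeline, rectifying a roughly planar patch of the object into an exemplar, can be sketched with OpenCV (the corner-finding and the use of a homography are assumptions; the patent's actual method may differ):

      import cv2
      import numpy as np

      def extract_exemplar(image, patch_corners_px, size=256):
          """patch_corners_px: 4x2 corners of the surface patch in the
          captured image, ordered TL, TR, BR, BL (derived from the 3D
          model and the object's estimated orientation)."""
          dst = np.float32([[0, 0], [size, 0], [size, size], [0, size]])
          # Undo the perspective induced by the object's orientation,
          # mapping the skewed patch onto an axis-aligned square texture.
          H = cv2.getPerspectiveTransform(np.float32(patch_corners_px), dst)
          return cv2.warpPerspective(image, H, (size, size))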
  • Patent number: 9747726
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Grant
    Filed: August 3, 2016
    Date of Patent: August 29, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
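    The homographic correction has a compact form worth sketching (the intrinsics K, the source of the rotation delta, and the sign convention are assumptions): for a rotation-only pose change between render time and display time, the pre-rendered frame can be re-warped with H = K · R_delta · K^-1.

      import cv2
      import numpy as np

      def reproject(pre_rendered, K, R_delta):
          """Warp a pre-rendered frame for the rotation between the
          predicted render pose and the updated display pose. Ignoring
          translation keeps this cheap enough to run just before scan-out,
          which is how the display rate can exceed the render rate."""
          H = K @ R_delta @ np.linalg.inv(K)
          h, w = pre_rendered.shape[:2]
          return cv2.warpPerspective(pre_rendered, H, (w, h))

    A pixel offset adjustment is the even cheaper special case where the warp degenerates to a 2D shift.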
  • Patent number: 9552060
    Abstract: Methods for enabling hands-free selection of objects within an augmented reality environment are described. In some embodiments, an object may be selected by an end user of a head-mounted display device (HMD) based on detecting a vestibulo-ocular reflex (VOR) in the end user's eyes while the end user is gazing at the object and performing a particular head movement for selecting the object. The object selected may comprise a real object or a virtual object. The end user may select the object by gazing at the object for a first time period and then performing a particular head movement in which the VOR is detected for one or both of the end user's eyes. In one embodiment, the particular head movement may involve the end user moving their head away from the direction of the object at a particular head speed while gazing at the object.
    Type: Grant
    Filed: January 28, 2014
    Date of Patent: January 24, 2017
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Anthony J. Ambrus, Adam G. Poulos, Lewey Alec Geselowitz, Dan Kroymann, Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Mathew J. Lamb, Brian J. Mount
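    The VOR test can be sketched as a gain check (the speed floor and tolerance are invented parameters): during the reflex the eyes counter-rotate against the head, so the eye-to-head velocity ratio sits near -1 while gaze stays fixed on the object.

      def vor_detected(head_vel_dps, eye_vel_dps, gaze_on_object,
                       min_head_speed=20.0, tolerance=0.25):
          """True when eye motion cancels head motion (VOR gain near -1)
          while the user keeps gazing at the candidate object."""
          if not gaze_on_object or abs(head_vel_dps) < min_head_speed:
              return False
          gain = eye_vel_dps / head_vel_dps
          return abs(gain + 1.0) <= tolerance

      # Head sweeping right at 40 deg/s, eyes rolling left at 38 deg/s,
      # gaze held on the target -> treated as a selection:
      print(vor_detected(40.0, -38.0, True))   # True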
  • Patent number: 9519640
    Abstract: A see-through, near-eye, mixed reality display apparatus for providing translations of real-world data for a user. The wearer's location and orientation with the apparatus are determined, and input data for translation is selected using sensors of the apparatus. Input data can be audio or visual in nature and is selected by reference to the wearer's gaze. The input data is translated for the user based on user profile information bearing on the accuracy of a translation, and it is determined from the input data whether a linguistic translation, a knowledge-addition translation, or a context translation is useful.
    Type: Grant
    Filed: May 4, 2012
    Date of Patent: December 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos
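    The three translation types named in the abstract suggest a simple routing decision, sketched below (the profile fields and heuristics are invented purely for illustration):

      def choose_translation(source_language, input_terms, profile):
          """Pick the translation type likely to help this wearer."""
          if source_language not in profile.get("fluent_languages", {"en"}):
              return "linguistic"          # wearer cannot read the source
          if input_terms - profile.get("known_terms", set()):
              return "knowledge_addition"  # define unfamiliar terms
          return "context"                 # explain the text in situ

      # A fluent reader who already knows every term gets context only:
      print(choose_translation("en", {"gate", "arrivals"},
                               {"fluent_languages": {"en"},
                                "known_terms": {"gate", "arrivals"}}))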
  • Patent number: 9514571
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Grant
    Filed: July 25, 2013
    Date of Patent: December 6, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
  • Publication number: 20160343172
    Abstract: Methods for generating and displaying images associated with one or more virtual objects within an augmented reality environment at a frame rate that is greater than a rendering frame rate are described. The rendering frame rate may correspond with the minimum time to render images associated with a pose of a head-mounted display device (HMD). In some embodiments, the HMD may determine a predicted pose associated with a future position and orientation of the HMD, generate a pre-rendered image based on the predicted pose, determine an updated pose associated with the HMD subsequent to generating the pre-rendered image, generate an updated image based on the updated pose and the pre-rendered image, and display the updated image on the HMD. The updated image may be generated via a homographic transformation and/or a pixel offset adjustment of the pre-rendered image.
    Type: Application
    Filed: August 3, 2016
    Publication date: November 24, 2016
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Oliver Michael Christian Williams, Paul Barham, Michael Isard, Tuan Wong, Kevin Woo, Georg Klein, Douglas Kevin Service, Ashraf Ayman Michail, Andrew Pearson, Martin Shetter, Jeffrey Neil Margolis, Nathan Ackerman, Calvin Chan, Arthur C. Tomlin
  • Patent number: 9442567
    Abstract: Methods for enabling hands-free selection of virtual objects are described. In some embodiments, a gaze swipe gesture may be used to select a virtual object. The gaze swipe gesture may involve an end user of a head-mounted display device (HMD) performing head movements that are tracked by the HMD to detect whether a virtual pointer controlled by the end user has swiped across two or more edges of the virtual object. In some cases, the gaze swipe gesture may comprise the end user using their head movements to move the virtual pointer through two edges of the virtual object while the end user gazes at the virtual object. In response to detecting the gaze swipe gesture, the HMD may determine a second virtual object to be displayed on the HMD based on a speed of the gaze swipe gesture and a size of the virtual object.
    Type: Grant
    Filed: October 28, 2015
    Date of Patent: September 13, 2016
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Jason Scott, Arthur C. Tomlin, Mike Thomas, Matthew Kaplan, Cameron G. Brown, Jonathan Plumb, Nicholas Gervase Fajt, Daniel J. McCulloch, Jeremy Lee
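    The edge-crossing test at the heart of the gesture can be sketched in 2D (the rectangle model and thresholds are assumptions): a swipe is a pointer path that crosses the object's boundary at least twice, entering through one edge and leaving through another, while the user gazes at the object.

      def edges_crossed(path, rect):
          """path: (x, y) pointer samples; rect: (x0, y0, x1, y1)."""
          x0, y0, x1, y1 = rect
          def inside(p):
              return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
          # Each inside/outside transition is one boundary crossing.
          return sum(inside(a) != inside(b) for a, b in zip(path, path[1:]))

      def is_gaze_swipe(path, rect, gazing_at_object):
          return gazing_at_object and edges_crossed(path, rect) >= 2

    The swipe's speed and the object's size would then feed the choice of the second virtual object to display.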
  • Publication number: 20160225164
    Abstract: Methods for automatically generating a texture exemplar that may be used for rendering virtual objects that appear to be made from the texture exemplar are described. In some embodiments, a head-mounted display device (HMD) may identify a real-world object within an environment, acquire a three-dimensional model of the real-world object, determine a portion of the real-world object from which a texture exemplar is to be generated, capture one or more images of the portion of the real-world object, determine an orientation of the real-world object, and generate the texture exemplar using the one or more images, the three-dimensional model, and the orientation of the real-world object. The HMD may then render and display images of a virtual object such that the virtual object appears to be made from a virtual material associated with the texture exemplar.
    Type: Application
    Filed: January 29, 2015
    Publication date: August 4, 2016
    Inventors: Arthur C. Tomlin, Roger Sebastian-Kevin Sylvan, Dan Kroymann, Cameron G. Brown, Nicholas Gervase Fajt
  • Publication number: 20160196603
    Abstract: An augmented reality system that provides augmented product and environment information to a wearer of a see-through, head-mounted display. The augmentation information may include advertising, inventory, pricing, and other information about products a wearer may be interested in. Interest is determined from wearer actions and a wearer profile. The information may be used to incentivize purchases of real-world products by a wearer, or to allow the wearer to make better purchasing decisions. The augmentation information may enhance a wearer's shopping experience by allowing the wearer easy access to important product information while shopping in a retail establishment. Through virtual rendering, a wearer may be provided with feedback on how an item would appear in a wearer environment, such as the wearer's home.
    Type: Application
    Filed: January 11, 2016
    Publication date: July 7, 2016
    Inventors: Kathryn Stone Perez, John Clavin, Kevin A. Geisner, Stephen G. Latta, Brian J. Mount, Arthur C. Tomlin, Adam G. Poulos