Patents by Inventor Austin S. Lee

Austin S. Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960790
    Abstract: A computer-implemented method includes detecting user interaction with content displayed in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine which objects in the displayed content the user is looking at, and determining a context of a user dialog during the voice engagement.
    Type: Grant
    Filed: May 27, 2021
    Date of Patent: April 16, 2024
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Jonathan Kyle Palmer, Anthony James Ambrus, Mathew J. Lamb, Sheng Kai Tang, Sophie Stellmach
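The mechanism in the abstract above, keeping the microphone open longer while the user's gaze stays on dialog-relevant objects, can be sketched as follows. This is a minimal illustration, not the patented implementation: the timeout constants, the `GazeSample` type, and the proportional focus score are all assumptions standing in for the spatial intent model.

```python
from dataclasses import dataclass

BASE_TIMEOUT_S = 5.0    # default voice-engagement window (assumed value)
MAX_EXTENSION_S = 10.0  # cap on any gaze-driven extension (assumed value)

@dataclass
class GazeSample:
    target_id: str      # object the eye tracker says the user is looking at
    duration_s: float   # dwell time on that object

def engagement_timeout(samples: list[GazeSample], dialog_targets: set[str]) -> float:
    """Extend the listening window in proportion to how much recent gaze
    time fell on objects relevant to the current voice dialog."""
    relevant = sum(s.duration_s for s in samples if s.target_id in dialog_targets)
    total = sum(s.duration_s for s in samples) or 1.0
    focus = relevant / total  # crude stand-in for the patent's spatial intent model
    return BASE_TIMEOUT_S + focus * MAX_EXTENSION_S

# The user spent most of the last second looking at dialog-related objects,
# so the voice-engagement window is extended well past its default.
samples = [GazeSample("lamp", 0.2), GazeSample("dialog_panel", 0.8)]
print(engagement_timeout(samples, {"dialog_panel"}))  # -> 13.0
```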
  • Patent number: 11630509
    Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to: receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from the current view; and, in response to determining the intent, display the user interface via the see-through display.
    Type: Grant
    Filed: December 11, 2020
    Date of Patent: April 18, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Anthony James Ambrus, Sheng Kai Tang, Keiichi Matsuda, Aleksandar Josic
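One way to read the "time-dependent attention value" in the abstract above is as a score that accumulates while the user's gaze rests on a location and decays exponentially otherwise, triggering the hidden user interface once it crosses a threshold. The half-life, threshold, and update rule below are assumptions; the patent does not fix a formula.

```python
import math

class LocationAttention:
    """Attention score for one display location: grows with gaze samples on
    the location and decays exponentially between samples."""

    def __init__(self, half_life_s: float = 0.5):
        self._decay = math.log(2) / half_life_s
        self._value = 0.0
        self._last_t = 0.0

    def update(self, gazed_here: bool, now: float) -> float:
        self._value *= math.exp(-self._decay * (now - self._last_t))
        if gazed_here:
            self._value += 1.0
        self._last_t = now
        return self._value

REVEAL_THRESHOLD = 3.0  # assumed tuning constant

attention = LocationAttention()
for t in (0.0, 0.1, 0.2, 0.3, 0.4):         # five consecutive samples on the location
    value = attention.update(True, now=t)
if value > REVEAL_THRESHOLD:
    print("show the hidden user interface")  # stands in for the see-through display call
```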
  • Patent number: 11620780
    Abstract: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of the first user's face texture mapped onto a three-dimensional body simulation that follows the actual physical movement of the first user. The three-dimensional body simulation is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device, both associated with the first user. The three-dimensional representation of the face is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: April 4, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Kenneth Mitchell Jakubzak, Mathew J. Lamb, Alton Kwok
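The abstract above combines two sensor streams: the external camera supplies the face texture, while both the camera and the head-mounted display drive the body simulation. A toy sketch of that composition, with assumed types, a toy grayscale "image", and an arbitrary fusion weight:

```python
from dataclasses import dataclass

@dataclass
class AvatarFrame:
    head_pose: tuple[float, float, float]  # fused head position (metres)
    face_texture: list[list[int]]          # face crop, texture-mapped onto the body mesh downstream

def fuse_head_pose(camera_est, hmd_est, hmd_weight=0.7):
    """Weighted fusion of two head-position estimates; the HMD's inertial
    tracking is typically lower-latency, hence the higher (assumed) weight."""
    return tuple(hmd_weight * h + (1 - hmd_weight) * c
                 for c, h in zip(camera_est, hmd_est))

def build_avatar_frame(camera_image, face_box, camera_pose, hmd_pose):
    x0, y0, x1, y1 = face_box
    face_crop = [row[x0:x1] for row in camera_image[y0:y1]]
    return AvatarFrame(fuse_head_pose(camera_pose, hmd_pose), face_crop)

# Toy 4x4 grayscale "image" and two nearby head-pose estimates.
image = [[i * 4 + j for j in range(4)] for i in range(4)]
frame = build_avatar_frame(image, (1, 1, 3, 3), (0.0, 1.6, 2.0), (0.02, 1.62, 2.0))
print(frame.head_pose, frame.face_texture)
```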
  • Publication number: 20220382510
    Abstract: A computer-implemented method includes detecting user interaction with content displayed in a mixed reality system. User focus is determined as a function of the user interaction using a spatial intent model. A length of time for extending voice engagement with the mixed reality system is modified based on the determined user focus. Detecting user interaction with the displayed content may include tracking eye movements to determine which objects in the displayed content the user is looking at, and determining a context of a user dialog during the voice engagement.
    Type: Application
    Filed: May 27, 2021
    Publication date: December 1, 2022
    Inventors: Austin S. LEE, Jonathan Kyle PALMER, Anthony James AMBRUS, Mathew J. LAMB, Sheng Kai TANG, Sophie STELLMACH
  • Patent number: 11429186
    Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in a scene, receive a plurality of eye tracking samples, each corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, determine a time-dependent attention value for each of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
    Type: Grant
    Filed: November 18, 2020
    Date of Patent: August 30, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Mathew J. Lamb, Anthony James Ambrus, Amy Mun Hong, Jonathan Palmer, Sophie Stellmach
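The leaky integrator named in the abstract above is a standard construct: each entity's attention value decays exponentially and is bumped whenever a gaze sample lands on it, and a later command ("put it there") is bound to the entity with the highest value. A minimal sketch, with the time constant and sample rate as assumed tuning parameters:

```python
import math

class LeakyIntegratorAttention:
    """Per-entity attention values that rise while an entity is gazed at
    and leak away exponentially otherwise."""

    def __init__(self, tau_s: float = 1.0):
        self.tau = tau_s
        self.values: dict[str, float] = {}

    def update(self, gazed_entity: str | None, dt_s: float) -> None:
        decay = math.exp(-dt_s / self.tau)
        for entity in self.values:
            self.values[entity] *= decay
        if gazed_entity is not None:
            self.values[gazed_entity] = self.values.get(gazed_entity, 0.0) + dt_s

    def most_attended(self) -> str:
        return max(self.values, key=self.values.get)

# 30 Hz gaze samples; a location-dependent command issued now would be
# associated with the most-attended entity.
attn = LeakyIntegratorAttention()
for entity in ("door", "table", "table", "table"):
    attn.update(entity, dt_s=1 / 30)
print(attn.most_attended())  # -> "table"
```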
  • Publication number: 20220187907
    Abstract: This disclosure relates to displaying a user interface for a computing device based upon a user intent determined via a spatial intent model. One example provides a computing device comprising a see-through display, a logic subsystem, and a storage subsystem. The storage subsystem comprises instructions executable by the logic subsystem to: receive, via an eye-tracking sensor, eye tracking samples each corresponding to a gaze direction of a user; based at least on the eye tracking samples, determine a time-dependent attention value for a location in a field of view of the see-through display; based at least on the time-dependent attention value for the location, determine an intent of the user to interact with a user interface associated with the location that is at least partially hidden from the current view; and, in response to determining the intent, display the user interface via the see-through display.
    Type: Application
    Filed: December 11, 2020
    Publication date: June 16, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Austin S. LEE, Anthony James AMBRUS, Sheng Kai TANG, Keiichi MATSUDA, Aleksandar JOSIC
  • Publication number: 20220155857
    Abstract: One example provides a computing device comprising instructions executable to receive information regarding one or more entities in a scene, receive a plurality of eye tracking samples, each corresponding to a gaze direction of a user, and, based at least on the eye tracking samples, determine a time-dependent attention value for each of the one or more entities at different locations in a use environment, the time-dependent attention value determined using a leaky integrator. The instructions are further executable to receive a user input indicating an intent to perform a location-dependent action, associate the user input with a selected entity based at least upon the time-dependent attention value for each entity, and perform the location-dependent action based at least upon a location of the selected entity.
    Type: Application
    Filed: November 18, 2020
    Publication date: May 19, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Austin S. LEE, Mathew J. LAMB, Anthony James AMBRUS, Amy Mun HONG, Jonathan PALMER, Sophie STELLMACH
  • Publication number: 20220156998
    Abstract: Examples are disclosed that relate to utilizing image sensor inputs from different devices having different perspectives in physical space to construct an avatar of a first user in a video stream. The avatar comprises a three-dimensional representation of at least a portion of the first user's face texture mapped onto a three-dimensional body simulation that follows the actual physical movement of the first user. The three-dimensional body simulation is generated based on image data received from an imaging device and image sensor data received from a head-mounted display device, both associated with the first user. The three-dimensional representation of the face is generated based on the image data received from the imaging device. The resulting video stream is sent, via a communication network, to a display device associated with a second user.
    Type: Application
    Filed: November 18, 2020
    Publication date: May 19, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Austin S. LEE, Kenneth Mitchell JAKUBZAK, Mathew J. LAMB, Alton KWOK
  • Patent number: 11270672
    Abstract: Examples are disclosed herein relating to displaying a virtual assistant. One example provides an augmented reality display device comprising a see-through display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to display via the see-through display a virtual assistant associated with a location in a real-world environment, detect a change in a field of view of the see-through display, and when the virtual assistant is out of the field of view of the see-through display after the change in the field of view, display the virtual assistant in a virtual window on the see-through display.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: March 8, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Anthony James Ambrus, Mathew Julian Lamb, Sophie Stellmach, Keiichi Matsuda
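The display logic in the abstract above reduces to a visibility test: render the assistant at its world anchor while the anchor is inside the display's field of view, and fall back to a screen-fixed virtual window once it leaves. A sketch in yaw only, with an assumed HoloLens-like horizontal field of view:

```python
def in_field_of_view(head_yaw_deg: float, anchor_yaw_deg: float, fov_deg: float = 52.0) -> bool:
    """True if the assistant's world-anchored direction falls inside the
    display's horizontal field of view (52 degrees is an assumed value)."""
    delta = (anchor_yaw_deg - head_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2

def render_assistant(head_yaw_deg: float, anchor_yaw_deg: float) -> str:
    if in_field_of_view(head_yaw_deg, anchor_yaw_deg):
        return "render at world anchor"
    return "render in screen-fixed virtual window"  # the fallback the abstract describes

print(render_assistant(0.0, 10.0))   # anchor visible -> world-locked rendering
print(render_assistant(0.0, 120.0))  # user turned away -> virtual window
```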
  • Patent number: 10891014
    Abstract: Disclosed techniques enable participants of a communication session rendered within a mixed reality environment to change their view or perspective of that environment. A participant interacts with a data processing device displaying the communication session to cause the change, for example through touch and/or gestures; the participant may use a plurality of fingers on the device's display to zoom, pan, or rotate their view of the mixed reality environment.
    Type: Grant
    Filed: March 21, 2018
    Date of Patent: January 12, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Angela Chin, Hae Jin Lee, Malek Mohamad Nafez Chalabi, Sean Michael Lynch, Siddhant Mehta
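The zoom/pan/rotate interaction in the abstract above can be sketched as classifying how two touch points move between frames: a change in separation is a zoom, a change in the angle between the points is a rotation, and a common translation is a pan. The priority order and tolerances below are assumptions; the patent only names the gestures.

```python
import math

def classify_two_finger_gesture(p0, p1, q0, q1, zoom_tol=0.05, rot_tol_deg=5.0):
    """Classify two touch points moving from (p0, p1) to (q0, q1)."""
    scale = math.dist(q0, q1) / math.dist(p0, p1)
    angle = math.degrees(
        math.atan2(q1[1] - q0[1], q1[0] - q0[0])
        - math.atan2(p1[1] - p0[1], p1[0] - p0[0]))
    if abs(scale - 1.0) > zoom_tol:
        return ("zoom", scale)          # scale the mixed-reality viewpoint
    if abs(angle) > rot_tol_deg:
        return ("rotate", angle)        # orbit the viewpoint
    dx = ((q0[0] - p0[0]) + (q1[0] - p1[0])) / 2
    dy = ((q0[1] - p0[1]) + (q1[1] - p1[1])) / 2
    return ("pan", (dx, dy))            # translate the viewpoint

# Fingers spreading apart on the device's screen -> zoom in by 1.5x.
print(classify_two_finger_gesture((0, 0), (1, 0), (-0.25, 0), (1.25, 0)))
```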
  • Patent number: 10643394
    Abstract: A device including a processor and a memory in communication with the processor is described. The memory includes executable instructions that, when executed by the processor, cause the processor to control the device to perform functions of: generating, based on a plurality of local 3D models, a global 3D model representing a portion of a real-world environment; determining a location of a 3D virtual object in the global 3D model; and generating augmentation data for rendering the 3D virtual object so that it is seen at the location of the real-world environment corresponding to the location of the 3D virtual object in the global 3D model.
    Type: Grant
    Filed: December 16, 2018
    Date of Patent: May 5, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
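A toy version of the pipeline in the abstract above, treating each device's local 3D model as a set of occupied voxels in a shared world grid. Real systems merge meshes or volumetric reconstructions and must first align coordinate frames; both steps are elided, and the returned dict shape is an assumption.

```python
def merge_local_models(local_models: list[set]) -> set:
    """Union the per-device reconstructions into one global 3D model."""
    global_model: set = set()
    for model in local_models:
        global_model |= model
    return global_model

def augmentation_data(global_model: set, object_voxel: tuple) -> dict:
    """Rendering instructions for placing a 3D virtual object at a free voxel."""
    if object_voxel in global_model:
        raise ValueError("location is occupied by real-world geometry")
    return {"anchor": object_voxel, "mesh": "cube"}

device_a = {(0, 0, 0), (1, 0, 0)}
device_b = {(1, 0, 0), (2, 0, 0)}   # overlaps device_a's view of the room
world = merge_local_models([device_a, device_b])
print(augmentation_data(world, (1, 1, 0)))
```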
  • Publication number: 20190294313
    Abstract: Disclosed techniques enable participants of a communication session rendered within a mixed reality environment to change their view or perspective of that environment. A participant interacts with a data processing device displaying the communication session to cause the change, for example through touch and/or gestures; the participant may use a plurality of fingers on the device's display to zoom, pan, or rotate their view of the mixed reality environment.
    Type: Application
    Filed: March 21, 2018
    Publication date: September 26, 2019
    Inventors: Austin S. LEE, Angela CHIN, Hae Jin LEE, Malek Mohamad Nafez CHALABI, Sean Michael LYNCH, Siddhant MEHTA
  • Publication number: 20190122442
    Abstract: A device including a processor and a memory in communication with the processor is described. The memory includes executable instructions that, when executed by the processor, cause the processor to control the device to perform functions of: generating, based on a plurality of local 3D models, a global 3D model representing a portion of a real-world environment; determining a location of a 3D virtual object in the global 3D model; and generating augmentation data for rendering the 3D virtual object so that it is seen at the location of the real-world environment corresponding to the location of the 3D virtual object in the global 3D model.
    Type: Application
    Filed: December 16, 2018
    Publication date: April 25, 2019
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
  • Patent number: 10235808
    Abstract: A user device comprises a network interface, a rendering module, and a scene modification module. The network interface is configured to receive a video signal from another device via a network. The rendering module is configured to control display apparatus of the user device to display a virtual element to a user of the user device, the virtual element comprising a video image derived from the video signal. The scene modification module is configured to generate rendering data for displaying a modified version of the virtual element at the other device; the modified version does not include said video image. The network interface is configured to transmit the rendering data to the other device via the network. Alternatively or in addition, the rendering data can be modified at the other device to the same end.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: March 19, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes, Jason Thomas Faulkner
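The key move in the abstract above is that the remote device gets rendering data for the same virtual element with the video image stripped out. A minimal sketch, with the element represented as a dict whose field names are assumptions:

```python
def element_for_remote(element: dict) -> dict:
    """Rendering data for the other device: the same virtual element,
    minus the video image it contains locally."""
    remote = dict(element)
    remote.pop("video_frame", None)        # the modified version omits the video
    remote["placeholder"] = "video-omitted"
    return remote

local_element = {"id": "call-window", "pose": (0.0, 1.5, 2.0), "video_frame": b"..."}
print(element_for_remote(local_element))
# {'id': 'call-window', 'pose': (0.0, 1.5, 2.0), 'placeholder': 'video-omitted'}
```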
  • Patent number: 10169917
    Abstract: An augmented reality (AR) system receives a plurality of local 3D models of a part of a real-world environment, each having been generated by a different AR device when located in that environment. The local 3D models are combined to generate a global 3D model, at least part of which is transmitted to a device remote from the real-world environment. The global 3D model represents a greater portion of the real-world environment than any of the local 3D models individually. The AR system receives rendering data from the remote device and transmits it to an AR device when that AR device is located in the real-world environment. Alternatively, the rendering data may be transmitted from the remote device to the AR device directly via a network. The rendering data is for use in rendering a virtual object at the AR device in the real-world environment.
    Type: Grant
    Filed: April 26, 2016
    Date of Patent: January 1, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
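Seen from the server side, the abstract above describes a relay: collect local models, serve the combined model to the remote device, and pass its rendering data back to whichever AR device is in the room. An in-memory sketch, with all class, method, and field names assumed:

```python
class ARRelay:
    """In-memory stand-in for the AR system's server-side message flow."""

    def __init__(self):
        self.global_model: set = set()
        self.pending_render_data: list = []

    def submit_local_model(self, voxels: set) -> None:
        self.global_model |= voxels          # combine into the global 3D model

    def model_for_remote(self, region: set | None = None) -> set:
        # "at least part of which is transmitted": optionally clip to a region
        if region is None:
            return set(self.global_model)
        return self.global_model & region

    def submit_render_data(self, data: dict) -> None:
        self.pending_render_data.append(data)   # from the remote device

    def drain_render_data_for_device(self) -> list:
        data, self.pending_render_data = self.pending_render_data, []
        return data                          # forwarded to the in-room AR device

relay = ARRelay()
relay.submit_local_model({(0, 0, 0)})
relay.submit_local_model({(1, 0, 0)})
relay.submit_render_data({"anchor": (0, 1, 0), "mesh": "arrow"})
print(relay.model_for_remote(), relay.drain_render_data_for_device())
```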
  • Publication number: 20170054815
    Abstract: A user device within a communication architecture, the user device comprising an asynchronous session generator configured to: capture at least one image; determine camera pose data associated with the at least one image; capture surface reconstruction data associated with the camera pose data; generate an asynchronous session comprising asynchronous session data, the asynchronous session data comprising the at least one image, the camera pose data, the surface reconstruction data, and at least one annotation object, wherein the asynchronous session data is configured to be stored and retrieved at a later time.
    Type: Application
    Filed: April 28, 2016
    Publication date: February 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
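The asynchronous session record enumerated in the abstract above maps naturally onto a plain data structure. The field types and encodings below are assumptions; the patent lists the components but not their formats.

```python
from dataclasses import dataclass, field

@dataclass
class AnnotationObject:
    anchor: tuple[float, float, float]  # position in the reconstructed space
    text: str

@dataclass
class AsyncSession:
    images: list[bytes]                  # captured images
    camera_poses: list[tuple]            # one pose per image
    surface_reconstruction: list[tuple]  # e.g. voxel or mesh data
    annotations: list[AnnotationObject] = field(default_factory=list)

session = AsyncSession(
    images=[b"jpeg-bytes"],
    camera_poses=[(0.0, 1.6, 0.0, 0.0, 0.0, 0.0)],  # x, y, z, roll, pitch, yaw
    surface_reconstruction=[(0, 0, 0)],
)
session.annotations.append(AnnotationObject((0.5, 1.0, 2.0), "check this valve"))
print(len(session.annotations))  # the record can now be stored for later retrieval
```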
  • Publication number: 20170053447
    Abstract: An augmented reality (AR) system receives a plurality of local 3D models of a part of a real-world environment, each having been generated by a different AR device when located in that environment. The local 3D models are combined to generate a global 3D model, at least part of which is transmitted to a device remote from the real-world environment. The global 3D model represents a greater portion of the real-world environment than any of the local 3D models individually. The AR system receives rendering data from the remote device and transmits it to an AR device when that AR device is located in the real-world environment. Alternatively, the rendering data may be transmitted from the remote device to the AR device directly via a network. The rendering data is for use in rendering a virtual object at the AR device in the real-world environment.
    Type: Application
    Filed: April 26, 2016
    Publication date: February 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
  • Publication number: 20170053445
    Abstract: Augmented reality apparatus comprises stereoscopic display apparatus, a computer interface, and a rendering module. The stereoscopic display apparatus is arranged to provide to a user of the augmented reality apparatus a view of a real-world environment in which the user is located. The display apparatus is configured to generate a stereoscopic image that is visible to the user simultaneously with the real-world view. The computer interface is configured to receive from a network externally generated 3D model data of the real-world environment in which the user is located. The rendering module is configured to use the externally generated 3D model data to control the display apparatus to render a virtual element in a manner such that it is perceived by the user as a 3D element located at a desired location in the real-world environment.
    Type: Application
    Filed: April 26, 2016
    Publication date: February 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
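A small sketch of the rendering module's job in the abstract above: use an externally supplied reconstruction of the room to pin a virtual element to a desired real-world location. The model is reduced to a height map here, which is an assumption; real reconstructions are meshes or volumes.

```python
def place_on_surface(desired_xy, surface_heights):
    """Snap a virtual element to the real surface nearest the desired spot,
    using a 3D model received over the network rather than sensed locally."""
    x, y = desired_xy
    z = surface_heights.get((round(x), round(y)), 0.0)
    return (x, y, z)  # anchor at which the stereoscopic image is rendered

external_model = {(0, 0): 0.72, (1, 0): 0.74}        # table-top heights, metres
print(place_on_surface((0.9, 0.1), external_model))  # -> (0.9, 0.1, 0.74)
```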
  • Publication number: 20170053446
    Abstract: A user device comprises a network interface, a rendering module, and a scene modification module. The network interface is configured to receive a video signal from another device via a network. The rendering module is configured to control display apparatus of the user device to display a virtual element to a user of the user device, the virtual element comprising a video image derived from the video signal. The scene modification module is configured to generate rendering data for displaying a modified version of the virtual element at the other device; the modified version does not include said video image. The network interface is configured to transmit the rendering data to the other device via the network. Alternatively or in addition, the rendering data can be modified at the other device to the same end.
    Type: Application
    Filed: April 26, 2016
    Publication date: February 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes, Jason Thomas Faulkner
  • Publication number: 20170053455
    Abstract: A user device within a communication architecture, the user device comprising an asynchronous session viewer configured to: receive asynchronous session data, the asynchronous session data comprising at least one image, camera pose data associated with the at least one image, and surface reconstruction data associated with the camera pose data; select a field of view position; and edit the asynchronous session data by adding, amending, or deleting at least one annotation object based on the selected field of view.
    Type: Application
    Filed: April 28, 2016
    Publication date: February 23, 2017
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Henry Yao-Tsu Chen, Brandon V. Taylor, Mark Robert Swift, Austin S. Lee, Ryan S. Menezes
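The viewer in the abstract above selects a field of view and then edits annotation objects relative to it. A sketch with the field-of-view test simplified to a 2D box (a real viewer would frustum-cull in 3D); the types are assumptions:

```python
from dataclasses import dataclass

@dataclass
class Annotation:
    anchor: tuple[float, float, float]
    text: str

def annotations_in_view(annotations, fov_box):
    """Annotation objects whose anchors fall inside the selected field of view."""
    (x0, y0), (x1, y1) = fov_box
    return [a for a in annotations
            if x0 <= a.anchor[0] <= x1 and y0 <= a.anchor[1] <= y1]

notes = [Annotation((0.5, 1.0, 2.0), "check this valve"),
         Annotation((5.0, 1.0, 2.0), "out of view")]
visible = annotations_in_view(notes, ((0, 0), (2, 2)))
visible.append(Annotation((1.0, 1.0, 2.0), "replaced gasket"))  # edit: add an annotation
print([a.text for a in visible])  # -> ['check this valve', 'replaced gasket']
```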