Patents by Inventor Mathew Julian LAMB

Mathew Julian LAMB has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11824821
    Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions, and other status information, and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s), and/or corresponding messaging device(s).
    Type: Grant
    Filed: November 1, 2022
    Date of Patent: November 21, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin Seungmin Lee, Amy Mun Hong, Keiichi Matsuda, Anthony James Ambrus, Mathew Julian Lamb, Kenneth Mitchell Jakubzak
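    A minimal sketch of the kind of flow this abstract describes, assuming a hypothetical ExpressiveEffect record standing in for the "expressive effect data" and a simple rule that reacts to the recipient's detected state; the names, fields, and mapping below are illustrative and not taken from the patent.
    ```python
    from dataclasses import dataclass

    # Hypothetical data model for "expressive effect data" associated with a
    # message sender (names and fields are illustrative, not from the patent).
    @dataclass
    class ExpressiveEffect:
        emotion: str          # e.g. "happy", "frustrated"
        gesture: str          # e.g. "wave", "thumbs_up"
        intensity: float      # 0.0 .. 1.0

    @dataclass
    class AvatarPresentation:
        gesture: str
        emotion: str
        message_text: str

    def present_message(text: str,
                        sender_effect: ExpressiveEffect,
                        recipient_state: str) -> AvatarPresentation:
        """Combine the sender's expressive effect with the recipient's detected
        state to pick an avatar gesture and optionally modify the message text."""
        gesture = sender_effect.gesture
        # React to detected state of the recipient or the recipient's device.
        if recipient_state == "busy":
            gesture = "wait_quietly"
        elif recipient_state == "away":
            gesture = "idle"
        # Modify the message content so it also carries the expressive intent.
        if sender_effect.intensity > 0.7:
            text = text.upper() + "!"
        return AvatarPresentation(gesture=gesture,
                                  emotion=sender_effect.emotion,
                                  message_text=text)

    # Example: an enthusiastic sender messaging a busy recipient.
    effect = ExpressiveEffect(emotion="excited", gesture="wave", intensity=0.9)
    print(present_message("see you soon", effect, recipient_state="busy"))
    ```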
  • Publication number: 20230111597
    Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions, and other status information, and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s), and/or corresponding messaging device(s).
    Type: Application
    Filed: November 1, 2022
    Publication date: April 13, 2023
    Inventors: Austin Seungmin LEE, Amy Mun HONG, Keiichi MATSUDA, Anthony James AMBRUS, Mathew Julian LAMB, Kenneth Mitchell JAKUBZAK
  • Patent number: 11567633
    Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
    Type: Grant
    Filed: February 8, 2021
    Date of Patent: January 31, 2023
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Julia Schwarz, Andrew D. Wilson, Sophie Stellmach, Erian Vazquez, Kristian Jose Davila, Adam Edwin Behringer, Jonathan Palmer, Jason Michael Ray, Mathew Julian Lamb
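    The focus-determination steps in this abstract map naturally onto a short sketch: given a per-pixel intention image and a per-pixel object-id image, sum the intention values of each interactive object's pixels and pick the highest-scoring object as the focused one. The NumPy arrays, summation as the aggregation rule, and the toy scene below are assumptions for illustration, not the claimed implementation.
    ```python
    import numpy as np

    def focused_object(intention_image: np.ndarray,
                       object_id_image: np.ndarray) -> int:
        """Return the id of the interactive object the user most likely focuses on.

        intention_image : HxW floats, each pixel's likelihood that the user
                          intends to focus there (derived from user input).
        object_id_image : HxW ints, the interactive object each pixel belongs
                          to (0 = background / no interactive object).
        """
        scores = {}
        for obj_id in np.unique(object_id_image):
            if obj_id == 0:
                continue  # skip background pixels
            mask = object_id_image == obj_id
            # Intention score: aggregate the intention values of the object's pixels.
            scores[obj_id] = float(intention_image[mask].sum())
        return max(scores, key=scores.get)

    # Tiny 4x4 scene with two objects; object 2 sits under the higher intention values.
    intent = np.array([[0.0, 0.1, 0.1, 0.0],
                       [0.0, 0.2, 0.8, 0.9],
                       [0.0, 0.1, 0.9, 0.7],
                       [0.0, 0.0, 0.1, 0.0]])
    ids = np.array([[0, 1, 1, 0],
                    [0, 1, 2, 2],
                    [0, 1, 2, 2],
                    [0, 0, 2, 0]])
    print(focused_object(intent, ids))  # -> 2
    ```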
  • Patent number: 11509612
    Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions, and other status information, and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s), and/or corresponding messaging device(s).
    Type: Grant
    Filed: December 15, 2020
    Date of Patent: November 22, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin Seungmin Lee, Amy Mun Hong, Keiichi Matsuda, Anthony James Ambrus, Mathew Julian Lamb, Kenneth Mitchell Jakubzak
  • Publication number: 20220253182
    Abstract: A computer-implemented method for determining focus of a user is provided. User input is received. An intention image of a scene including a plurality of interactive objects is generated. The intention image includes pixels encoded with intention values determined based on the user input. An intention value indicates a likelihood that the user intends to focus on the pixel. An intention score is determined for each interactive object based on the intention values of pixels that correspond to the interactive object. An interactive object of the plurality of interactive objects is determined to be a focused object that has the user's focus based on the intention scores of the plurality of interactive objects.
    Type: Application
    Filed: February 8, 2021
    Publication date: August 11, 2022
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Julia SCHWARZ, Andrew D. WILSON, Sophie STELLMACH, Erian VAZQUEZ, Kristian Jose DAVILA, Adam Edwin BEHRINGER, Jonathan PALMER, Jason Michael RAY, Mathew Julian LAMB
  • Publication number: 20220191157
    Abstract: Systems and methods are provided for facilitating the presentation of expressive intent and other status information with messaging and other communication applications. The expressive intent is based on expressive effect data associated with the message recipients and/or message senders. The expressive intent can be conveyed through avatars and modified message content. The avatars convey gestures, emotions, and other status information, and the presentation of the avatars can be reactive to detected state information of the message recipient(s), message sender(s), and/or corresponding messaging device(s).
    Type: Application
    Filed: December 15, 2020
    Publication date: June 16, 2022
    Inventors: Austin Seungmin LEE, Amy Mun HONG, Keiichi MATSUDA, Anthony James AMBRUS, Mathew Julian LAMB, Kenneth Mitchell JAKUBZAK
  • Patent number: 11270672
    Abstract: Examples are disclosed herein relating to displaying a virtual assistant. One example provides an augmented reality display device comprising a see-through display, a logic subsystem, and a storage subsystem storing instructions executable by the logic subsystem to display via the see-through display a virtual assistant associated with a location in a real-world environment, detect a change in a field of view of the see-through display, and when the virtual assistant is out of the field of view of the see-through display after the change in the field of view, display the virtual assistant in a virtual window on the see-through display.
    Type: Grant
    Filed: November 2, 2020
    Date of Patent: March 8, 2022
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Austin S. Lee, Anthony James Ambrus, Mathew Julian Lamb, Sophie Stellmach, Keiichi Matsuda
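    A rough sketch of the display logic this abstract outlines, assuming a simple angular test against an illustrative half field of view and placeholder strings standing in for the actual rendering paths; none of these details come from the patent itself.
    ```python
    import math

    def assistant_in_view(assistant_dir, view_dir, half_fov_deg=26.5):
        """Rough test of whether the assistant's world-anchored location falls
        inside the see-through display's field of view (both arguments are 3-D
        direction vectors from the camera; the half-FOV value is illustrative)."""
        dot = sum(a * b for a, b in zip(assistant_dir, view_dir))
        na = math.sqrt(sum(a * a for a in assistant_dir))
        nv = math.sqrt(sum(v * v for v in view_dir))
        angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nv)))))
        return angle <= half_fov_deg

    def render_assistant(assistant_dir, view_dir):
        if assistant_in_view(assistant_dir, view_dir):
            return "render at the world-locked location"
        # Out of view after the field of view changed: fall back to a
        # virtual window on the see-through display.
        return "render in a virtual window on the see-through display"

    print(render_assistant((0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # in view
    print(render_assistant((1.0, 0.0, 0.1), (0.0, 0.0, 1.0)))  # out of view -> window
    ```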
  • Patent number: 10930075
    Abstract: Devices, systems, and methods for interacting with a three-dimensional virtual environment, including receiving an input associated with a change in pose of a user's hand; estimating, based on at least the input, a first pose in the virtual environment for an input source associated with the hand; identifying a surface of a virtual object in the virtual environment; rendering a frame depicting elements of the virtual environment, the frame including a pixel rendered for a position on the surface; determining a distance between the position and a virtual input line extending through a position of the first pose and in a direction of the first pose; changing a pixel color rendered for the pixel based on the distance between the position and the virtual input line; and displaying the frame including the pixel with the changed pixel color to the user via a head-mounted display device.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Carlos Fernando Faria Costa, Mathew Julian Lamb, Brian Thomas Merrell
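    The distance test and color change described in this abstract can be sketched compactly: compute the perpendicular distance from the surface position to the virtual input line through the hand pose, then blend a highlight into the pixel color as that distance shrinks. The linear falloff, highlight color, and radius below are illustrative assumptions, not the patented rendering method.
    ```python
    import numpy as np

    def distance_to_input_line(point, ray_origin, ray_dir):
        """Perpendicular distance from a surface position to the virtual input
        line defined by the estimated pose's position (ray_origin) and direction."""
        d = ray_dir / np.linalg.norm(ray_dir)
        v = point - ray_origin
        # Remove the component of v along the line; what remains is perpendicular.
        perp = v - np.dot(v, d) * d
        return float(np.linalg.norm(perp))

    def shade_pixel(base_rgb, point, ray_origin, ray_dir, radius=0.05):
        """Blend a highlight into the pixel's color when the surface position is
        within `radius` (scene units) of the virtual input line."""
        dist = distance_to_input_line(point, ray_origin, ray_dir)
        t = max(0.0, 1.0 - dist / radius)       # 1 on the line, 0 at the edge
        highlight = np.array([1.0, 1.0, 0.2])   # illustrative highlight color
        return (1.0 - t) * np.asarray(base_rgb) + t * highlight

    origin = np.array([0.0, 0.0, 0.0])
    direction = np.array([0.0, 0.0, 1.0])
    print(shade_pixel([0.2, 0.2, 0.2], np.array([0.01, 0.0, 0.5]), origin, direction))
    ```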
  • Publication number: 20190114835
    Abstract: Devices, systems, and methods for interacting with a three-dimensional virtual environment, including receiving an input associated with a change in pose of a user's hand; estimating, based on at least the input, a first pose in the virtual environment for an input source associated with the hand; identifying a surface of a virtual object in the virtual environment; rendering a frame depicting elements of the virtual environment, the frame including a pixel rendered for a position on the surface; determining a distance between the position and a virtual input line extending through a position of the first pose and in a direction of the first pose; changing a pixel color rendered for the pixel based on the distance between the position and the virtual input line; and displaying the frame including the pixel with the changed pixel color to the user via a head-mounted display device.
    Type: Application
    Filed: October 16, 2017
    Publication date: April 18, 2019
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Carlos Fernando Faria COSTA, Mathew Julian LAMB, Brian Thomas MERRELL