Patents by Inventor Matthaeus KRENN

Matthaeus KRENN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240160337
    Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
    Type: Application
    Filed: January 25, 2024
    Publication date: May 16, 2024
    Inventors: Jeremy EDELBLUT, Matthaeus KRENN, John Nicholas JITKOFF
  • Patent number: 11928314
    Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: March 12, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jeremy Edelblut, Matthaeus Krenn, John Nicholas Jitkoff
  • Publication number: 20240054996
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items associated with the referenced user experience are output together.
    Type: Application
    Filed: October 23, 2023
    Publication date: February 15, 2024
    Inventors: Marcos Regis VESCOVI, Eric M. G. CIRCLAEYS, Richard WARREN, Jeffrey Traer BERNSTEIN, Matthaeus KRENN
  • Patent number: 11900923
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items associated with the referenced user experience are output together.
    Type: Grant
    Filed: September 7, 2021
    Date of Patent: February 13, 2024
    Assignee: Apple Inc.
    Inventors: Marcos Regis Vescovi, Eric M. G. Circlaeys, Richard Warren, Jeffrey Traer Bernstein, Matthaeus Krenn
  • Publication number: 20230418442
    Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
    Type: Application
    Filed: November 17, 2022
    Publication date: December 28, 2023
    Inventors: Jeremy EDELBLUT, Matthaeus KRENN, John Nicholas JITKOFF
  • Publication number: 20230419618
    Abstract: Methods and systems described herein are directed to a virtual personal interface (herein “personal interface”) for controlling an artificial reality (XR) environment, such as by providing user interfaces for interactions with a current XR application, providing detail views for selected items, navigating between multiple virtual worlds without having to transition in and out of a home lobby for those worlds, executing aspects of a second XR application while within a world controlled by a first XR application, and providing 3D content that is separate from the current world. While in at least one of those worlds, the personal interface can itself present content in a runtime separate from the current virtual world, corresponding to an item, action, or application for that world. XR applications can be defined for use with the personal interface to create both a 3D world portion and 2D interface portions that are displayed via the personal interface.
    Type: Application
    Filed: November 17, 2022
    Publication date: December 28, 2023
    Inventors: Matthaeus KRENN, Jeremy EDELBLUT, John Nicholas JITKOFF
  • Publication number: 20230419617
    Abstract: Methods and systems described herein are directed to a virtual personal interface (herein “personal interface”) for controlling an artificial reality (XR) environment, such as by providing user interfaces for interactions with a current XR application, providing detail views for selected items, navigating between multiple virtual worlds without having to transition in and out of a home lobby for those worlds, executing aspects of a second XR application while within a world controlled by a first XR application, and providing 3D content that is separate from the current world. While in at least one of those worlds, the personal interface can itself present content in a runtime separate from the current virtual world, corresponding to an item, action, or application for that world. XR applications can be defined for use with the personal interface to create both a 3D world portion and 2D interface portions that are displayed via the personal interface.
    Type: Application
    Filed: July 19, 2022
    Publication date: December 28, 2023
    Inventors: Matthaeus KRENN, Jeremy EDELBLUT, John Nicholas JITKOFF
  • Patent number: 11854539
    Abstract: Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items associated with the referenced user experience are output together.
    Type: Grant
    Filed: August 11, 2020
    Date of Patent: December 26, 2023
    Assignee: Apple Inc.
    Inventors: Marcos Regis Vescovi, Eric M. G. Circlaeys, Richard Warren, Jeffrey Traer Bernstein, Matthaeus Krenn
  • Publication number: 20230393705
    Abstract: An electronic device displays a messaging interface that allows a participant in a message conversation to capture, send, and/or play media content. The media content includes images, video, and/or audio. The media content is captured, sent, and/or played based on the electronic device detecting one or more conditions.
    Type: Application
    Filed: August 23, 2023
    Publication date: December 7, 2023
    Inventor: Matthaeus KRENN
  • Patent number: 11818455
    Abstract: A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more annotations are displayed via the first display generation component and one or more second virtual annotations corresponding to annotation input directed to the respective location in the physical environment by the second device are displayed via the first display generation component, provided that the respective location is included in the field of view of the first set of cameras.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: November 14, 2023
    Assignee: APPLE INC.
    Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
  • Publication number: 20230305674
    Abstract: A computer system displays, in a first viewing mode, a simulated environment that is oriented relative to a physical environment of the computer system. In response to detecting a first change in attitude, the computer system changes an appearance of a first virtual user interface object so as to maintain a fixed spatial relationship between the first virtual user interface object and the physical environment. The computer system detects a gesture. In response to detecting a second change in attitude, in accordance with a determination that the gesture met mode change criteria, the computer system transitions from displaying the simulated environment in the first viewing mode to displaying the simulated environment in a second viewing mode. Displaying the virtual model in the simulated environment in the second viewing mode includes forgoing changing the appearance of the first virtual user interface object to maintain the fixed spatial relationship.
    Type: Application
    Filed: April 27, 2023
    Publication date: September 28, 2023
    Inventors: Mark K. Hauenstein, Joseph A. Malia, Julian K. Missig, Matthaeus Krenn, Jeffrey T. Bernstein
  • Patent number: 11755180
    Abstract: Methods and systems described herein are directed to a virtual web browser for providing access to multiple virtual worlds interchangeably. Browser tabs for corresponding website and virtual world pairs can be displayed along with associated controls, the selection of such controls effecting the instantiation of 3D content for the virtual worlds. One or more of the tabs can be automatically generated as a result of interactions with objects in the virtual worlds, such that travel to a world, corresponding to an object to which an interaction was directed, is facilitated.
    Type: Grant
    Filed: August 18, 2022
    Date of Patent: September 12, 2023
    Inventors: Jeremy Edelblut, Matthaeus Krenn, John Nicholas Jitkoff
  • Patent number: 11740755
    Abstract: A computer system while displaying an augmented reality environment, concurrently displays: a representation of at least a portion of a field of view of one or more cameras that includes a physical object, and a virtual user interface object at a location in the representation of the field of view, where the location is determined based on the respective physical object in the field of view. While displaying the augmented reality environment, in response to detecting an input that changes a virtual environment setting for the augmented reality environment, the computer system adjusts an appearance of the virtual user interface object in accordance with the change made to the virtual environment setting and applies to at least a portion of the representation of the field of view a filter selected based on the change made to the virtual environment setting.
    Type: Grant
    Filed: September 28, 2021
    Date of Patent: August 29, 2023
    Assignee: APPLE INC.
    Inventors: Mark K. Hauenstein, Joseph A. Malia, Julian K. Missig, Matthaeus Krenn, Jeffrey T. Bernstein
  • Publication number: 20230252659
    Abstract: The present disclosure generally relates to displaying and editing an image with depth information. In response to an input, an object in the image having one or more elements in a first depth range is identified. The identified object is then isolated from other elements in the image and displayed separately from the other elements. The isolated object may then be utilized in different applications.
    Type: Application
    Filed: April 20, 2023
    Publication date: August 10, 2023
    Inventors: Matan STAUBER, Amir HOFFNUNG, Matthaeus KRENN, Jeffrey Traer BERNSTEIN, Joseph A. MALIA, Mark HAUENSTEIN
  • Publication number: 20230199296
    Abstract: A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more annotations are displayed via the first display generation component and one or more second virtual annotations corresponding to annotation input directed to the respective location in the physical environment by the second device are displayed via the first display generation component, provided that the respective location is included in the field of view of the first set of cameras.
    Type: Application
    Filed: February 8, 2023
    Publication date: June 22, 2023
    Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
  • Patent number: 11669985
    Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
    Type: Grant
    Filed: April 28, 2022
    Date of Patent: June 6, 2023
    Assignee: Apple Inc.
    Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
  • Patent number: 11632600
    Abstract: While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: April 18, 2023
    Assignee: APPLE INC.
    Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
  • Publication number: 20220262022
    Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
    Type: Application
    Filed: April 28, 2022
    Publication date: August 18, 2022
    Inventors: Matan STAUBER, Amir HOFFNUNG, Matthaeus KRENN, Jeffrey Traer BERNSTEIN, Joseph A. MALIA, Mark HAUENSTEIN
  • Publication number: 20220239842
    Abstract: While displaying playback of a first portion of a video in a video playback region, a device receives a request to add a first annotation to the video playback. In response to receiving the request, the device pauses playback of the video at a first position in the video and displays a still image that corresponds to the first, paused position of the video. While displaying the still image, the device receives the first annotation on a first portion of a physical environment captured in the still image. After receiving the first annotation, the device displays, in the video playback region, a second portion of the video that corresponds to a second position in the video, where the first portion of the physical environment is captured in the second portion of the video and the first annotation is displayed in the second portion of the video.
    Type: Application
    Filed: April 8, 2022
    Publication date: July 28, 2022
    Inventors: Joseph A. Malia, Mark K. Hauenstein, Praveen Sharma, Matan Stauber, Julian K. Missig, Jeffrey T. Bernstein, Lukas Robert Tom Girling, Matthaeus Krenn
  • Patent number: 11321857
    Abstract: The present disclosure generally relates to displaying and editing an image with depth information. Image data associated with an image includes depth information associated with a subject. In response to a request to display the image, a first modified image is displayed. Displaying the first modified image includes displaying, based on the depth information, a first level of simulated lighting on a first portion of the subject and a second level of simulated lighting on a second portion of the subject. After displaying the first modified image, a second modified image is displayed. Displaying the second modified image includes displaying, based on the depth information, a third level of simulated lighting on the first portion of the subject and a fourth level of simulated lighting on the second portion of the subject.
    Type: Grant
    Filed: July 31, 2019
    Date of Patent: May 3, 2022
    Assignee: Apple Inc.
    Inventors: Matan Stauber, Amir Hoffnung, Matthaeus Krenn, Jeffrey Traer Bernstein, Joseph A. Malia, Mark Hauenstein
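
The sketches below give one simplified, illustrative reading of several of the abstracts above. None of them is the patented implementation; every class, function, and data layout they introduce is a hypothetical name chosen for the example.

The first sketch follows publication 20240160337 and patent 11928314: a browser tab pairs a website URL with a virtual world, selecting a tab's control instantiates that world's 3D content, and interacting with a world object that references another world automatically generates a tab for it. `WorldTab` and `VirtualBrowser` are assumed names.

```python
# Minimal sketch of the website/virtual-world tab model; not the patented code.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class WorldTab:
    """A browser tab that pairs a 2D website with a 3D virtual world."""
    url: str
    world_id: str
    content_loaded: bool = False  # whether the world's 3D content has been instantiated

    def instantiate_3d_content(self) -> None:
        # Stand-in for loading and instantiating the paired world's 3D assets.
        self.content_loaded = True


@dataclass
class VirtualBrowser:
    tabs: list = field(default_factory=list)
    active: Optional[WorldTab] = None

    def open_tab(self, url: str, world_id: str) -> WorldTab:
        tab = WorldTab(url=url, world_id=world_id)
        self.tabs.append(tab)
        return tab

    def select_tab(self, tab: WorldTab) -> None:
        # Selecting the tab's control instantiates the paired world's 3D content.
        tab.instantiate_3d_content()
        self.active = tab

    def on_object_interaction(self, target_world_id: str, target_url: str) -> WorldTab:
        # Interacting with an object that points at another world auto-creates a
        # tab for it, so travel to that world is a single selection away.
        return self.open_tab(target_url, target_world_id)


browser = VirtualBrowser()
home = browser.open_tab("https://example.com/plaza", "plaza")
browser.select_tab(home)
portal_tab = browser.on_object_interaction("museum", "https://example.com/museum")
browser.select_tab(portal_tab)
print([t.world_id for t in browser.tabs], browser.active.world_id)
```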
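The next sketch follows the assistant abstracts (publication 20240054996, patents 11900923 and 11854539): parameters extracted from a speech request select an entry in an "experiential data structure", and the media items tied to that entry's metadata are retrieved and output together. The dictionary layout and field names are assumptions for illustration.

```python
# Minimal sketch of retrieving media for a referenced user experience.
EXPERIENCES = [
    {
        "id": "trip-2019-hawaii",
        "metadata": {"location": "Hawaii", "year": 2019, "people": ["Ann"]},
        "media_ids": ["img_001", "img_002", "vid_003"],
    },
    {
        "id": "birthday-2021",
        "metadata": {"location": "Home", "year": 2021, "people": ["Ben"]},
        "media_ids": ["img_010"],
    },
]


def resolve_experience(parameters: dict) -> dict | None:
    """Find the experience whose metadata matches every referenced parameter."""
    for exp in EXPERIENCES:
        if all(exp["metadata"].get(key) == value for key, value in parameters.items()):
            return exp
    return None


def retrieve_media(parameters: dict) -> list[str]:
    exp = resolve_experience(parameters)
    if exp is None:
        return []
    # All media items associated with the referenced experience are returned
    # together so they can be presented as one response.
    return list(exp["media_ids"])


# "Show me the photos from my Hawaii trip in 2019"
print(retrieve_media({"location": "Hawaii", "year": 2019}))
```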
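The following sketch reads publications 20230419618 and 20230419617 as a split between a 3D world portion and 2D interface portions: an XR application contributes both, and the personal interface renders the 2D panels in its own runtime, so a second application's panel can be used without leaving the current world. `XRApplication`, `PersonalInterface`, and the panel layout are assumed, not the actual API.

```python
# Minimal sketch of a personal interface that hosts 2D panels separately from the world.
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class XRApplication:
    name: str
    world_scene: str                             # 3D world portion (placeholder)
    panels: dict = field(default_factory=dict)   # 2D interface portions by panel id


@dataclass
class PersonalInterface:
    """Floating interface the user carries between worlds."""
    current_world_app: Optional[XRApplication] = None
    open_panels: list = field(default_factory=list)

    def enter_world(self, app: XRApplication) -> None:
        self.current_world_app = app

    def show_panel(self, app: XRApplication, panel_id: str) -> str:
        # The 2D panel is presented in the personal interface's own runtime, so a
        # second app's panel can run while a different app controls the world.
        self.open_panels.append((app.name, panel_id))
        return app.panels[panel_id]


chess = XRApplication("chess", world_scene="chess_hall", panels={"lobby": "Join a match"})
music = XRApplication("music", world_scene="concert", panels={"player": "Now playing"})

ui = PersonalInterface()
ui.enter_world(chess)
print(ui.show_panel(chess, "lobby"))   # UI for the current world's app
print(ui.show_panel(music, "player"))  # a second app's 2D portion, same world
```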
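Publication 20230393705 describes capture, send, and playback being driven by detected conditions. The sketch below only shows the shape of that dispatch; the condition names and the mapping are illustrative assumptions, not the conditions enumerated in the application.

```python
# Sketch of condition-driven media handling in a messaging interface (assumed mapping).
CONDITION_ACTIONS = {
    "raise_to_ear_with_unplayed_audio": "play_audio_message",
    "raise_to_ear_after_playback": "record_audio_reply",
    "release_record_gesture": "send_captured_media",
}


def handle_condition(condition: str) -> str:
    """Map a detected device condition to the media action it triggers."""
    return CONDITION_ACTIONS.get(condition, "no_action")


print(handle_condition("raise_to_ear_with_unplayed_audio"))  # play_audio_message
```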
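For the shared annotation session of patent 11818455 and publication 20230199296, one detail is that annotations placed by the second device are shown on the first device only while their physical-environment locations fall inside the first device's camera field of view. The 2D angular test below is a deliberate simplification of that check.

```python
# Sketch: show only annotations whose anchor point lies inside the camera's field of view.
import math
from dataclasses import dataclass


@dataclass
class Annotation:
    author: str
    position: tuple  # (x, y) location in the shared physical environment


def in_field_of_view(camera_pos, camera_heading_deg, fov_deg, point) -> bool:
    dx, dy = point[0] - camera_pos[0], point[1] - camera_pos[1]
    angle_to_point = math.degrees(math.atan2(dy, dx))
    delta = (angle_to_point - camera_heading_deg + 180) % 360 - 180  # signed angular offset
    return abs(delta) <= fov_deg / 2


annotations = [
    Annotation("device_2", (2.0, 0.5)),   # roughly ahead of device 1
    Annotation("device_2", (-1.0, 3.0)),  # off to the side / behind
]

camera_pos, heading, fov = (0.0, 0.0), 0.0, 70.0
visible = [a for a in annotations
           if in_field_of_view(camera_pos, heading, fov, a.position)]
print([a.position for a in visible])  # only the annotation in front is displayed
```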
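Publication 20230305674 contrasts two viewing modes: in the first, the rendered object is adjusted against device attitude changes so it keeps a fixed spatial relationship with the physical environment; after a qualifying gesture, the second mode forgoes that adjustment. The angles-only model below is an illustrative simplification.

```python
# Sketch of world-locked vs. view-locked rendering of a virtual object.
class SimulatedEnvironmentView:
    def __init__(self):
        self.mode = "first"           # world-locked viewing mode
        self.object_heading = 0.0     # degrees, as rendered on screen

    def on_attitude_change(self, delta_heading: float) -> None:
        if self.mode == "first":
            # Counter-rotate so the object appears fixed relative to the room.
            self.object_heading -= delta_heading
        # In the second mode, forgo the adjustment: the object moves with the view.

    def on_mode_change_gesture(self) -> None:
        self.mode = "second"


view = SimulatedEnvironmentView()
view.on_attitude_change(30.0)            # device turns 30°, object counter-rotates
print(view.mode, view.object_heading)    # first -30.0
view.on_mode_change_gesture()
view.on_attitude_change(30.0)            # no compensation in the second mode
print(view.mode, view.object_heading)    # second -30.0
```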
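Patent 11740755 ties a virtual environment setting change to two effects: the virtual object's appearance is adjusted, and a filter chosen for that setting is applied to the camera portion of the augmented reality view. The setting names and filter choices below are assumptions.

```python
# Sketch: a setting change adjusts the virtual object and filters the camera feed.
SETTING_FILTERS = {
    "daytime": {"object_tint": "none", "camera_filter": "neutral"},
    "night":   {"object_tint": "cool", "camera_filter": "darken"},
}


class ARScene:
    def __init__(self):
        self.object_tint = "none"
        self.camera_filter = "neutral"

    def set_environment(self, setting: str) -> None:
        choice = SETTING_FILTERS[setting]
        self.object_tint = choice["object_tint"]       # adjust the object's appearance
        self.camera_filter = choice["camera_filter"]   # filter the live camera view


scene = ARScene()
scene.set_environment("night")
print(scene.object_tint, scene.camera_filter)  # cool darken
```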
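Publication 20230252659 isolates an object whose elements fall within a first depth range and displays it separately from the rest of the image. The sketch below keeps pixels inside the range and masks everything else; plain nested lists stand in for real image and depth buffers.

```python
# Sketch: keep only the pixels whose depth falls inside the target range.
def isolate_by_depth(image, depth, depth_range):
    lo, hi = depth_range
    return [
        [px if lo <= d <= hi else None for px, d in zip(img_row, depth_row)]
        for img_row, depth_row in zip(image, depth)
    ]


image = [["a", "b", "c"],
         ["d", "e", "f"]]
depth = [[0.8, 2.5, 0.9],   # metres; the subject sits around 1 m
         [0.7, 3.0, 2.8]]

print(isolate_by_depth(image, depth, (0.5, 1.5)))
# [['a', None, 'c'], ['d', None, None]]
```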
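For the relighting abstracts (patents 11321857 and 11669985, publication 20220262022), different simulated lighting levels are applied to different portions of the subject based on depth, and a second rendering uses a different pair of levels. The per-pixel gain model below is an illustrative assumption.

```python
# Sketch: apply two lighting levels to a subject, split by depth.
def apply_simulated_lighting(pixels, depths, near_gain, far_gain, split_depth):
    """Scale near portions of the subject by near_gain and far portions by far_gain."""
    out = []
    for value, d in zip(pixels, depths):
        gain = near_gain if d <= split_depth else far_gain
        out.append(min(255, round(value * gain)))
    return out


subject_pixels = [120, 130, 140, 150]
subject_depths = [0.8, 0.9, 1.4, 1.5]   # metres from the camera

first_modified = apply_simulated_lighting(subject_pixels, subject_depths,
                                          near_gain=1.3, far_gain=0.9, split_depth=1.0)
second_modified = apply_simulated_lighting(subject_pixels, subject_depths,
                                           near_gain=0.9, far_gain=1.2, split_depth=1.0)
print(first_modified)   # [156, 169, 126, 135]
print(second_modified)  # [108, 117, 168, 180]
```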
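Finally, patent 11632600 and publication 20220239842 describe pausing video playback, annotating a portion of the physical environment visible in the paused frame, and showing that annotation again whenever a later portion of the video captures the same part of the environment. The sketch reduces tracking to checking which environment features each frame contains; a real implementation would rely on camera pose or feature tracking.

```python
# Sketch: annotations anchored to environment features reappear in later frames.
class AnnotatedVideo:
    def __init__(self, frames):
        # frames: list of sets of environment-feature ids visible in each frame
        self.frames = frames
        self.annotations = {}   # feature id -> annotation text

    def annotate_at(self, paused_index: int, feature: str, text: str) -> None:
        # The annotation is received while playback is paused on this frame.
        if feature in self.frames[paused_index]:
            self.annotations[feature] = text

    def render(self, index: int) -> list:
        # Return annotations whose anchor feature is captured in this frame.
        return [text for feat, text in self.annotations.items()
                if feat in self.frames[index]]


video = AnnotatedVideo([{"table"}, {"table", "lamp"}, {"lamp"}, {"table"}])
video.annotate_at(0, "table", "check this corner")
print(video.render(1))  # ['check this corner']  (the table is captured again)
print(video.render(2))  # []                     (the table is out of view)
print(video.render(3))  # ['check this corner']
```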