Patents by Inventor Ian M. Richter

Ian M. Richter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12380334
    Abstract: Various implementations disclosed herein include devices, systems, and methods for presenting objective-effectuators in synthesized reality settings. In various implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In some implementations, a method includes instantiating an objective-effectuator into a synthesized reality setting. In some implementations, the objective-effectuator is characterized by a set of predefined actions and a set of visual rendering attributes. In some implementations, the method includes obtaining an objective for the objective-effectuator. In some implementations, the method includes determining contextual information characterizing the synthesized reality setting. In some implementations, the method includes generating a sequence of actions from the set of predefined actions based on the contextual information and the objective.
    Type: Grant
    Filed: October 14, 2020
    Date of Patent: August 5, 2025
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Amritpal Singh Saini, Olivier Soares
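    A minimal Python sketch of the kind of action-selection step this abstract describes, shown only as an illustration: the Action type, plan_actions helper, and tag-overlap scoring are assumptions, not taken from the patent.
        # Illustrative: choose a sequence of predefined actions for an
        # objective-effectuator from an objective and scene context.
        from dataclasses import dataclass

        @dataclass
        class Action:
            name: str
            tags: frozenset  # effects or capabilities this action touches

        def plan_actions(predefined, objective_tags, context_tags, max_len=3):
            # Rank predefined actions by overlap with the objective and the
            # current context; keep the top-scoring ones as the sequence.
            def score(a):
                return 2 * len(a.tags & objective_tags) + len(a.tags & context_tags)
            ranked = sorted(predefined, key=score, reverse=True)
            return [a.name for a in ranked[:max_len] if score(a) > 0]

        actions = [Action("wave", frozenset({"greet"})),
                   Action("walk_to_door", frozenset({"move", "exit"})),
                   Action("sit", frozenset({"idle"}))]
        print(plan_actions(actions, objective_tags={"greet", "exit"},
                           context_tags={"door_visible"}))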
  • Patent number: 12373996
    Abstract: First content may be obtained in response to identifying a first physical element of a first object type. The first content may be associated with the first object type. Second content may be obtained in response to identifying a second physical element of a second object type. The second content may be associated with the second object type. The second physical element may be detected as being within a threshold distance of the first physical element. Third content may be generated based on a combination of the first content and the second content. The third content may be associated with a third object type that is different from the first object type and the second object type. The third content may be displayed on the display.
    Type: Grant
    Filed: August 24, 2023
    Date of Patent: July 29, 2025
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Andrew Scott Robertson
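    The following illustrative Python sketch shows one way the proximity-triggered content combination described above could be modeled; maybe_combine, the content table, and the 0.25 m threshold are assumed names and values, not from the filing.
        # Illustrative: combine per-object-type content when two detected
        # physical elements come within a threshold distance of each other.
        import math

        CONTENT_BY_TYPE = {"mug": "steam overlay", "book": "page-flip overlay"}

        def maybe_combine(elem_a, elem_b, threshold=0.25):
            # elem_* are (object_type, (x, y, z)) tuples; distance in meters.
            (type_a, pos_a), (type_b, pos_b) = elem_a, elem_b
            if type_a != type_b and math.dist(pos_a, pos_b) <= threshold:
                first, second = CONTENT_BY_TYPE[type_a], CONTENT_BY_TYPE[type_b]
                # Third content is derived from both and gets a new combined type.
                return {"type": f"{type_a}+{type_b}", "content": f"{first} with {second}"}
            return None

        print(maybe_combine(("mug", (0.0, 0.0, 0.0)), ("book", (0.1, 0.0, 0.0))))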
  • Publication number: 20250231623
    Abstract: In various implementations, a method comprises: identifying a plurality of data items, each of the plurality of data items having at least a first metadata field or a second metadata field; displaying a volumetric environment including a first plurality of SR objects corresponding to a first plurality of data items among the plurality of data items, wherein the first plurality of data items includes the first metadata field with first metadata field values; detecting a first user input indicative of the second metadata field; and in response to detecting the first user input, replacing the first plurality of SR objects within the volumetric environment with a second plurality of SR objects corresponding to a second plurality of data items among the plurality of data items, wherein each of the second plurality of data items includes the second metadata field with second metadata field values.
    Type: Application
    Filed: April 3, 2025
    Publication date: July 17, 2025
    Inventors: Ian M. Richter, Christopher Eubank, Tomlinson Holman
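    Below is a hedged Python sketch of the metadata-driven object swap described in this abstract; objects_for_field and the sample items are illustrative assumptions.
        # Illustrative: group data items by one metadata field, then rebuild the
        # displayed set of objects when the user switches to another field.
        def objects_for_field(items, field_name):
            # Placeholder "SR objects" for the items that carry field_name.
            return [f"SRObject({item['title']}: {item[field_name]})"
                    for item in items if field_name in item]

        items = [{"title": "clip_a", "date": "2024-01-02", "location": "garden"},
                 {"title": "clip_b", "date": "2024-03-09"},
                 {"title": "clip_c", "location": "kitchen"}]

        print(objects_for_field(items, "date"))      # initial layout
        print(objects_for_field(items, "location"))  # after user selects new field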
  • Publication number: 20250227360
    Abstract: Various implementations disclosed herein include devices, systems, and methods for capturing a new media content item. In various implementations, a device includes a display, one or more processors and a non-transitory memory. In some implementations, a method includes determining a plot template for generating a media content item based on other media content items that are distributed temporally. In some implementations, the plot template is defined by a set of one or more conditional triggers for capturing the other media content items. In some implementations, the method includes determining that a condition associated with a first conditional trigger of the set of one or more conditional triggers is satisfied. In some implementations, the method includes in response to the condition associated with the first conditional trigger being satisfied, displaying, on the display, a notification to capture a new media content item for populating the plot template.
    Type: Application
    Filed: March 27, 2025
    Publication date: July 10, 2025
    Inventor: Ian M. Richter
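    A small illustrative Python sketch of a plot template built from conditional triggers, as described above; the Trigger type and check_triggers helper are assumed names, not from the application.
        # Illustrative: a plot template as a list of conditional triggers; when a
        # trigger's condition is met, surface a prompt to capture new media.
        from dataclasses import dataclass
        from typing import Callable

        @dataclass
        class Trigger:
            slot: str                          # which part of the plot this fills
            condition: Callable[[dict], bool]

        plot_template = [
            Trigger("opening shot", lambda ctx: ctx.get("location") == "home"),
            Trigger("celebration", lambda ctx: ctx.get("event") == "birthday"),
        ]

        def check_triggers(template, context, captured):
            for trig in template:
                if trig.slot not in captured and trig.condition(context):
                    print(f"Notification: capture a new item for '{trig.slot}'")

        check_triggers(plot_template, {"location": "home"}, captured=set())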
  • Patent number: 12346409
    Abstract: In one implementation, a method of presenting content is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment. The method includes classifying, based on the image of the physical environment, the physical environment as a particular environment type of a plurality of environment types. The method includes obtaining content based on the particular environment type. The method includes displaying, on the display, a representation of the content in association with the physical environment.
    Type: Grant
    Filed: October 21, 2021
    Date of Patent: July 1, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
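    One possible shape for the classify-then-fetch flow in this abstract, sketched in Python; classify_environment is a stand-in for a real image classifier and the content table is invented for illustration.
        # Illustrative: map an environment classification to content to display.
        CONTENT_BY_ENVIRONMENT = {
            "kitchen": ["recipe card", "timer widget"],
            "office": ["calendar", "task list"],
            "outdoors": ["trail map"],
        }

        def classify_environment(class_scores):
            # Stand-in for a trained model run on the captured image.
            return max(class_scores, key=class_scores.get)

        def content_for_image(class_scores):
            env_type = classify_environment(class_scores)
            return env_type, CONTENT_BY_ENVIRONMENT.get(env_type, [])

        print(content_for_image({"kitchen": 0.8, "office": 0.15, "outdoors": 0.05}))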
  • Patent number: 12316957
    Abstract: Various implementations disclosed herein include devices, systems, and methods for capturing a new media content item. In various implementations, a device includes a display, one or more processors and a non-transitory memory. In some implementations, a method includes determining a plot template for generating a media content item based on other media content items that are distributed temporally. In some implementations, the plot template is defined by a set of one or more conditional triggers for capturing the other media content items. In some implementations, the method includes determining that a condition associated with a first conditional trigger of the set of one or more conditional triggers is satisfied. In some implementations, the method includes in response to the condition associated with the first conditional trigger being satisfied, displaying, on the display, a notification to capture a new media content item for populating the plot template.
    Type: Grant
    Filed: June 13, 2024
    Date of Patent: May 27, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
  • Patent number: 12293025
    Abstract: In various implementations, a method comprises: identifying a plurality of data items, each of the plurality of data items having at least a first metadata field or a second metadata field; displaying a volumetric environment including a first plurality of SR objects corresponding to a first plurality of data items among the plurality of data items, wherein the first plurality of data items includes the first metadata field with first metadata field values; detecting a first user input indicative of the second metadata field; and in response to detecting the first user input, replacing the first plurality of SR objects within the volumetric environment with a second plurality of SR objects corresponding to a second plurality of data items among the plurality of data items, wherein each of the second plurality of data items includes the second metadata field with second metadata field values.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: May 6, 2025
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Christopher Eubank, Tomlinson Holman
  • Patent number: 12293759
    Abstract: In one implementation, a method of generating CGR content to accompany an audio file including audio data and lyric data based on semantic analysis of the audio data and the lyric data is performed by a device including a processor, non-transitory memory, a speaker, and a display. The method includes obtaining an audio file including audio data and lyric data associated with the audio data. The method includes performing natural language analysis of at least a portion of the lyric data to determine a plurality of candidate meanings of the portion of the lyric data. The method includes performing semantic analysis of the portion of the lyric data to determine a meaning of the portion of the lyric data by selecting, based on a corresponding portion of the audio data, one of the plurality of candidate meanings as the meaning of the portion of the lyric data. The method includes generating CGR content associated with the portion of the lyric data based on the meaning of the portion of the lyric data.
    Type: Grant
    Filed: October 31, 2023
    Date of Patent: May 6, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
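    An illustrative Python sketch of the disambiguation step described above, selecting the lyric meaning whose expected audio mood best matches measured audio features; the feature names and scoring are assumptions.
        # Illustrative: pick one candidate meaning of a lyric line by checking which
        # candidate best matches coarse features of the corresponding audio segment.
        def choose_meaning(candidates, audio_features):
            # candidates: meaning -> expected features; audio_features: measured.
            def mismatch(expected):
                return sum(abs(expected[k] - audio_features.get(k, 0.0)) for k in expected)
            return min(candidates, key=lambda m: mismatch(candidates[m]))

        candidates = {
            "literal storm": {"tempo": 0.9, "minor_key": 0.8},
            "emotional turmoil": {"tempo": 0.4, "minor_key": 0.9},
        }
        audio_features = {"tempo": 0.35, "minor_key": 0.85}
        print(choose_meaning(candidates, audio_features))  # CGR content keys off this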
  • Patent number: 12272004
    Abstract: In some implementations, a method includes obtaining an end state of a first content item spanning a first time duration. In some implementations, the end state of the first content item indicates a first state of a synthesized reality (SR) agent at the end of the first time duration. In some implementations, the method includes obtaining an initial state of a second content item spanning a second time duration subsequent to the first time duration. In some implementations, the initial state of the second content item indicates a second state of the SR agent at the beginning of the second time duration. In some implementations, the method includes synthesizing an intermediary emergent content item spanning an intermediary time duration that is between the end of the first time duration and the beginning of the second time duration.
    Type: Grant
    Filed: October 26, 2023
    Date of Patent: April 8, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
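    A minimal Python sketch of synthesizing intermediary frames between an end state and a later initial state, as the abstract describes; linear interpolation is an assumed simplification.
        # Illustrative: interpolate an SR agent's state from the end of one content
        # item to the start of the next to fill the gap between them.
        def intermediary_states(end_state, initial_state, steps=4):
            frames = []
            for i in range(1, steps + 1):
                t = i / (steps + 1)
                frames.append({k: end_state[k] + t * (initial_state[k] - end_state[k])
                               for k in end_state})
            return frames

        end_of_item_1 = {"x": 0.0, "y": 0.0, "energy": 1.0}
        start_of_item_2 = {"x": 5.0, "y": 2.0, "energy": 0.2}
        for frame in intermediary_states(end_of_item_1, start_of_item_2):
            print(frame)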
  • Patent number: 12272008
    Abstract: Various implementations disclosed herein include devices, systems, and methods for synthesizing an environment based on an image. In various implementations, a device includes a non-transitory memory and one or more processors coupled with the non-transitory memory. In various implementations, a method includes determining an engagement score that characterizes a level of engagement between a user and a representation of a subject included in an image. In some implementations, the method includes, in response to the engagement score satisfying an engagement threshold, obtaining stored information regarding the subject, and synthesizing an environment based on the image and the stored information regarding the subject.
    Type: Grant
    Filed: July 15, 2022
    Date of Patent: April 8, 2025
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Qi Shan
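    The short Python sketch below illustrates gating environment synthesis on an engagement score and threshold, as described above; the gaze-plus-dwell scoring is an assumption, not the patented method.
        # Illustrative: synthesize an environment only when an engagement score
        # between the user's gaze and a subject in the image clears a threshold.
        def engagement_score(gaze_point, subject_box, dwell_seconds):
            (x, y), (x0, y0, x1, y1) = gaze_point, subject_box
            inside = x0 <= x <= x1 and y0 <= y <= y1
            return (1.0 if inside else 0.0) * min(dwell_seconds / 3.0, 1.0)

        def maybe_synthesize(gaze_point, subject_box, dwell_seconds, stored_info,
                             threshold=0.5):
            if engagement_score(gaze_point, subject_box, dwell_seconds) >= threshold:
                return f"environment built from the image + {stored_info}"
            return None

        print(maybe_synthesize((0.4, 0.5), (0.3, 0.3, 0.7, 0.8), dwell_seconds=2.0,
                               stored_info="stored notes about the subject"))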
  • Patent number: 12249253
    Abstract: Various implementations disclosed herein include devices, systems, and methods for generating a response to a user focus indicator value based on a user comprehension value characterizing a user's association with the user focus indicator value. In some implementations, a method includes obtaining a user focus indicator value. A sequence of user voice inputs relating to the user focus indicator value is obtained. A user comprehension value characterizing an assessment of a user relative to the user focus indicator value is determined based on the user voice inputs. Based on a plurality of media content items that provide information about the user focus indicator value, a response to the user focus indicator value is synthesized that satisfies the user comprehension value. The response is outputted.
    Type: Grant
    Filed: July 8, 2021
    Date of Patent: March 11, 2025
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, David Anders Winarsky
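    A hedged Python sketch of matching response material to an estimated comprehension level, in the spirit of this abstract; the domain-term heuristic and difficulty scores are invented for illustration.
        # Illustrative: estimate comprehension from voice inputs, then pick media
        # items whose difficulty matches that level and stitch them into a response.
        def estimate_comprehension(voice_inputs):
            domain_terms = {"aperture", "exposure", "focal"}
            hits = sum(any(t in utterance.lower() for t in domain_terms)
                       for utterance in voice_inputs)
            return min(hits / max(len(voice_inputs), 1), 1.0)

        def synthesize_response(topic, media_items, voice_inputs):
            level = estimate_comprehension(voice_inputs)
            chosen = [m["text"] for m in media_items
                      if abs(m["difficulty"] - level) <= 0.3]
            return f"About {topic}: " + " ".join(chosen)

        media = [{"text": "Aperture controls light.", "difficulty": 0.3},
                 {"text": "f-stops scale by sqrt(2).", "difficulty": 0.9}]
        print(synthesize_response("aperture", media,
                                  ["what does this do?", "is exposure just brightness?"]))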
  • Publication number: 20250069340
    Abstract: In some implementations, a method includes: identifying a plurality of subsets associated with a physical environment; determining a set of spatial characteristics for each of the plurality of subsets, wherein a first set of spatial characteristics characterizes dimensions of a first subset and a second set of spatial characteristics characterizes dimensions of a second subset; generating an adapted first extended reality (XR) content portion for the first subset based at least in part on the first set of spatial characteristics; generating an adapted second XR content portion for the second subset based at least in part on the second set of spatial characteristics; and generating one or more navigation options that allow a user to traverse between the first and second subsets based on the first and second sets of spatial characteristics.
    Type: Application
    Filed: August 22, 2024
    Publication date: February 27, 2025
    Inventors: Gutemberg B. Guerra Filho, Raffi A. Bedikian, Ian M. Richter
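    An illustrative Python sketch of adapting content portions to per-subset dimensions and listing navigation options, loosely following this abstract; the scaling rule is an assumption.
        # Illustrative: fit an XR content portion to each room-sized subset of a
        # space, then offer navigation options between the subsets.
        def adapt_content(portion_name, dims):
            width, depth, height = dims
            scale = min(width, depth) / 3.0    # shrink content to fit the subset
            return {"portion": portion_name, "scale": round(scale, 2)}

        def navigation_options(subsets):
            names = list(subsets)
            return [f"{a} -> {b}" for a in names for b in names if a != b]

        subsets = {"living_room": (4.0, 5.0, 2.5), "hallway": (1.5, 6.0, 2.5)}
        print([adapt_content(f"scene_{i}", dims)
               for i, dims in enumerate(subsets.values(), start=1)])
        print(navigation_options(subsets))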
  • Patent number: 12223943
    Abstract: Various implementations disclosed herein include devices, systems, and methods for synthesizing virtual speech. In various implementations, a device includes a display, an audio sensor, a non-transitory memory and one or more processors coupled with the non-transitory memory. A computer-generated reality (CGR) representation of a fictional character is displayed in a CGR environment on the display. A speech input is received from a first person via the audio sensor. The speech input is modified based on one or more language characteristic values associated with the fictional character in order to generate CGR speech. The CGR speech is outputted in the CGR environment via the CGR representation of the fictional character.
    Type: Grant
    Filed: August 24, 2023
    Date of Patent: February 11, 2025
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
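    A minimal Python sketch of restyling recognized speech with per-character language characteristic values, as described above; the replacement table and pirate example are illustrative only.
        # Illustrative: rewrite recognized speech using a character's language
        # characteristics before it is voiced by the character's representation.
        CHARACTER_STYLE = {
            "pirate": {"replacements": {"hello": "ahoy", "my": "me"}, "suffix": ", arr"},
        }

        def stylize_speech(text, character):
            style = CHARACTER_STYLE[character]
            words = [style["replacements"].get(w.lower(), w) for w in text.split()]
            return " ".join(words) + style["suffix"]

        print(stylize_speech("hello this is my ship", "pirate"))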
  • Publication number: 20250045978
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Application
    Filed: October 17, 2024
    Publication date: February 6, 2025
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
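    Below, a small Python sketch of looking up scene-specific SR content plus an associated task at the moment of a user request; the scene table and timing index are assumed structures.
        # Illustrative: choose SR content and a task based on which scene is
        # playing when the user asks to view SR content.
        SCENE_TABLE = {
            "bridge": {"sr_content": "3D ship bridge", "task": "find the captain's log"},
            "engine_room": {"sr_content": "3D engine room", "task": "restart the reactor"},
        }

        def present_sr(playback_time, scene_index):
            # scene_index: list of (start_seconds, scene_name), sorted by start.
            current = None
            for start, name in scene_index:
                if playback_time >= start:
                    current = name
            entry = SCENE_TABLE[current]
            return entry["sr_content"], f"Task: {entry['task']}"

        print(present_sr(95.0, [(0, "bridge"), (60, "engine_room")]))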
  • Publication number: 20250029379
    Abstract: Various implementations disclosed herein include devices, systems, and methods for obfuscating location data associated with a physical environment. In some implementations, a method includes obtaining, via an environmental sensor, environmental data corresponding to a physical environment. A first portion of the environmental data that corresponds to a first location is identified. In response to the first location being of a first location type, location data indicative of the first location is obfuscated from the environmental data by modifying the first portion of the environmental data.
    Type: Application
    Filed: October 8, 2024
    Publication date: January 23, 2025
    Inventor: Ian M. Richter
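    An illustrative Python sketch of obfuscating portions of environmental data tied to sensitive location types, as this abstract describes; the location-type labels and redaction format are assumptions.
        # Illustrative: blank out portions of captured environmental data that
        # correspond to locations of a sensitive location type.
        SENSITIVE_TYPES = {"home_address", "workplace"}

        def obfuscate(environmental_data):
            # environmental_data: list of dicts with 'location_type' and 'payload'.
            cleaned = []
            for portion in environmental_data:
                if portion["location_type"] in SENSITIVE_TYPES:
                    portion = {**portion, "payload": "<redacted>"}
                cleaned.append(portion)
            return cleaned

        frames = [{"location_type": "home_address", "payload": "street sign text"},
                  {"location_type": "park", "payload": "tree, bench"}]
        print(obfuscate(frames))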
  • Publication number: 20240428444
    Abstract: Various implementations disclosed herein include devices, systems, and methods for low bandwidth transmission of event data. In various implementations, a device includes one or more cameras, a non-transitory memory, and one or more processors coupled with the one or more cameras and the non-transitory memory. In various implementations, the method includes obtaining, by the device, a set of images that correspond to a scene with a person. In various implementations, the method includes generating pose information for the person based on the set of images. In some implementations, the pose information indicates respective positions of body portions of the person. In some implementations, the method includes transmitting the pose information in accordance with a bandwidth utilization criterion.
    Type: Application
    Filed: August 12, 2024
    Publication date: December 26, 2024
    Inventor: Ian M. Richter
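    A minimal Python sketch of transmitting compact pose data under a bandwidth-utilization criterion, in the spirit of the abstract above; extract_pose is a placeholder, not a real estimator.
        # Illustrative: send compact pose data instead of images, and thin it out
        # when the available bandwidth falls below a budget.
        def extract_pose(image_frame):
            # Placeholder pose estimator: joint name -> (x, y) in the frame.
            return {"head": (0.5, 0.2), "left_hand": (0.3, 0.6), "right_hand": (0.7, 0.6)}

        def transmit_pose(pose, available_kbps, budget_kbps=50):
            if available_kbps < budget_kbps:
                # Degrade gracefully: keep only the most salient joint.
                pose = {k: v for k, v in pose.items() if k == "head"}
            print(f"sending {len(pose)} joints over a {available_kbps} kbps link: {pose}")

        transmit_pose(extract_pose("frame_0001"), available_kbps=32)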
  • Patent number: 12175796
    Abstract: A method includes displaying, via a display, an environment that includes a representation of a person associated with the device. The representation includes a virtual face with virtual facial features corresponding to respective physical facial features of the person associated with the device. The method includes detecting, via a sensor, a change in a physical facial feature of the person associated with the device. The physical facial feature indicates a physical facial expression of the person. In response to determining that the physical facial expression breaches a criterion, the method includes modifying one or more virtual facial features of the virtual face so that the virtual face indicates a virtual facial expression that satisfies the criterion.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: December 24, 2024
    Assignee: Apple Inc.
    Inventor: Ian M. Richter
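    The following Python sketch illustrates clamping virtual facial features when a tracked physical expression breaches a criterion, as described above; the per-feature limits are invented values.
        # Illustrative: clamp an avatar's expression when the tracked physical
        # expression breaches a criterion (here, an allowed range per feature).
        LIMITS = {"brow_furrow": (0.0, 0.4), "smile": (0.1, 1.0)}

        def adjust_virtual_face(physical_features):
            virtual = {}
            for feature, value in physical_features.items():
                low, high = LIMITS.get(feature, (0.0, 1.0))
                # Pull a breaching value back inside the range; otherwise mirror it.
                virtual[feature] = min(max(value, low), high)
            return virtual

        print(adjust_virtual_face({"brow_furrow": 0.9, "smile": 0.0}))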
  • Patent number: 12148090
    Abstract: In some implementations, a method of generating a third person view of a computer-generated reality (CGR) environment is performed at a device including non-transitory memory and one or more processors coupled with the non-transitory memory. The method includes: obtaining a first viewing vector associated with a first user within a CGR environment; determining a first viewing frustum for the first user within the CGR environment based on the first viewing vector associated with the first user and one or more depth attributes; generating a representation of the first viewing frustum; and displaying, via the display device, a third person view of the CGR environment including an avatar of the first user and the representation of the first viewing frustum adjacent to the avatar of the first user.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: November 19, 2024
    Inventors: Ian M. Richter, John Joon Park, David Michael Hobbins
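    An illustrative Python sketch of deriving a simple frustum outline from a viewing vector and depth for a third-person view, loosely following this abstract; the flat triangular simplification is an assumption.
        # Illustrative: derive a 2D viewing-frustum outline from a heading and a
        # depth, suitable for drawing next to a user's avatar in a top-down view.
        import math

        def frustum_outline(origin, view_angle_deg, fov_deg=60.0, depth=3.0):
            # Returns the apex plus the two far corners of a triangular frustum.
            ox, oy = origin
            half = math.radians(fov_deg) / 2.0
            heading = math.radians(view_angle_deg)
            corners = [(round(ox + depth * math.cos(heading + a), 2),
                        round(oy + depth * math.sin(heading + a), 2))
                       for a in (-half, half)]
            return [origin] + corners

        print(frustum_outline(origin=(0.0, 0.0), view_angle_deg=90.0))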
  • Patent number: 12148066
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: November 19, 2024
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 12136264
    Abstract: Various implementations disclosed herein include devices, systems, and methods for obfuscating location data associated with a physical environment. In some implementations, a method includes obtaining, via an environmental sensor, environmental data corresponding to a physical environment. A first portion of the environmental data that corresponds to a first location is identified. In response to the first location being of a first location type, location data indicative of the first location is obfuscated from the environmental data by modifying the first portion of the environmental data.
    Type: Grant
    Filed: August 31, 2021
    Date of Patent: November 5, 2024
    Assignee: Apple Inc.
    Inventor: Ian M. Richter