Patents by Inventor Omar ELAFIFI

Omar ELAFIFI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250068781
    Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
    Type: Application
    Filed: November 13, 2024
    Publication date: February 27, 2025
    Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
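The mesh-to-relationship-graph idea in the abstract above can be illustrated with a minimal sketch. Everything below is an assumption for illustration: the `SemanticMesh` structure, the bounding-box "resting on" rule standing in for the machine learning model, and the edge format are invented here, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class SemanticMesh:
    """Illustrative stand-in for a semantic mesh: per-object geometry and labels."""
    vertices: dict  # object id -> list of (x, y, z) vertices
    labels: dict    # object id -> semantic class, e.g. "vase"

def estimate_relationships(mesh):
    """Emit (subject, relation, object) edges between mesh objects.

    A simple geometric rule (bounding boxes horizontally overlapping, with
    one object's bottom meeting the other's top) plays the role of the
    learned model's output described in the abstract.
    """
    def bbox(pts):
        xs, ys, zs = zip(*pts)
        return (min(xs), max(xs)), (min(ys), max(ys)), (min(zs), max(zs))

    edges = []
    ids = list(mesh.vertices)
    for a in ids:
        for b in ids:
            if a == b:
                continue
            (ax, aX), (ay, aY), (az, aZ) = bbox(mesh.vertices[a])
            (bx, bX), (by, bY), (bz, bZ) = bbox(mesh.vertices[b])
            overlaps = ax < bX and bx < aX and ay < bY and by < aY
            resting = abs(az - bZ) < 0.05  # a's bottom meets b's top
            if overlaps and resting:
                edges.append((f"{mesh.labels[a]}:{a}", "on_top_of",
                              f"{mesh.labels[b]}:{b}"))
    return edges

mesh = SemanticMesh(
    vertices={"V1": [(0.4, 0.4, 0.75), (0.6, 0.6, 1.0)],
              "T1": [(0.0, 0.0, 0.0), (1.0, 1.0, 0.75)]},
    labels={"V1": "vase", "T1": "table"},
)
print(estimate_relationships(mesh))  # [('vase:V1', 'on_top_of', 'table:T1')]
```

The edge list mirrors the abstract's example output: a graph of nodes and edges indicating that vase instance V1 is on top of table instance T1.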
  • Publication number: 20250045978
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Application
    Filed: October 17, 2024
    Publication date: February 6, 2025
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
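The branching described in this abstract (first scene vs. second scene at the moment of user input) amounts to a scene lookup followed by retrieval of that scene's SR content and task. The sketch below is an invented illustration, assuming hypothetical scene identifiers, content paths, and a timestamp-based scene lookup; none of these names come from the patent.

```python
# Scene id -> (SR content for its time period, associated task). Illustrative data.
SR_CONTENT = {
    "scene1": ("sr-assets/forest", "Find the hidden key"),
    "scene2": ("sr-assets/castle", "Open the gate"),
}

def scene_at(video_time):
    """Hypothetical lookup of which scene the video shows at a timestamp."""
    return "scene1" if video_time < 120.0 else "scene2"

def present_sr(video_time):
    """On user input, present the SR content and task indication for the current scene."""
    scene = scene_at(video_time)
    content, task = SR_CONTENT[scene]
    return {"scene": scene, "sr_content": content, "task_indication": task}

print(present_sr(30.0))
# {'scene': 'scene1', 'sr_content': 'sr-assets/forest', 'task_indication': 'Find the hidden key'}
```

The same call with a timestamp past the scene boundary returns the second scene's content and task, matching the abstract's "if the video content includes a second scene" branch.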
  • Patent number: 12175162
    Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
    Type: Grant
    Filed: August 4, 2020
    Date of Patent: December 24, 2024
    Assignee: APPLE INC.
    Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
  • Patent number: 12148066
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Grant
    Filed: June 28, 2023
    Date of Patent: November 19, 2024
    Assignee: APPLE INC.
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Publication number: 20240212346
    Abstract: In one implementation, a method of remedying a medical impairment of a user is performed by a device including a processor, non-transitory memory, one or more biometric sensors, an image sensor, and a display. The method includes detecting, based on data from at least one of the image sensor and the one or more biometric sensors, a medical impairment of a user of the head-mounted device from a plurality of potential medical impairments associated with a plurality of remedies. The method includes selecting, from the plurality of remedies, a remedy of the medical impairment of the user. The method includes controlling the display to effect the remedy of the medical impairment of the user.
    Type: Application
    Filed: March 8, 2024
    Publication date: June 27, 2024
    Inventors: Anselm Grundhoefer, Pedro Manuel Da Silva Quelhas, Phillip N. Smith, Omar Elafifi, Eshan Verma, Daniele Casaburo
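The detect-select-remedy loop in this abstract can be sketched as three steps: classify an impairment from sensor data, pick a remedy from the mapping of potential impairments to remedies, and issue a display command. The impairment names, sensor thresholds, and remedies below are invented examples, not the patented method.

```python
# Illustrative mapping: potential medical impairment -> display remedy.
REMEDIES = {
    "low_contrast_sensitivity": "increase_display_contrast",
    "color_vision_deficiency": "apply_daltonization_filter",
}

def detect_impairment(biometric_sample, image_sample):
    """Hypothetical classifier over biometric-sensor and image-sensor data."""
    if biometric_sample.get("contrast_threshold", 0.0) > 0.3:
        return "low_contrast_sensitivity"
    if image_sample.get("ishihara_errors", 0) > 2:
        return "color_vision_deficiency"
    return None

def control_display(biometric_sample, image_sample):
    """Select a remedy for the detected impairment and return the display command."""
    impairment = detect_impairment(biometric_sample, image_sample)
    if impairment is None:
        return "no_adjustment"
    return REMEDIES[impairment]

print(control_display({"contrast_threshold": 0.5}, {}))  # increase_display_contrast
```

A real head-mounted device would replace the dictionary lookups with calibrated sensing and rendering adjustments; the sketch only shows the control flow the abstract names.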
  • Patent number: 11961290
    Abstract: In one implementation, a method of remedying a medical impairment of a user is performed by a device including a processor, non-transitory memory, one or more biometric sensors, an image sensor, and a display. The method includes detecting, based on data from at least one of the image sensor and the one or more biometric sensors, a medical impairment of a user of the head-mounted device from a plurality of potential medical impairments associated with a plurality of remedies. The method includes selecting, from the plurality of remedies, a remedy of the medical impairment of the user. The method includes controlling the display to effect the remedy of the medical impairment of the user.
    Type: Grant
    Filed: June 16, 2020
    Date of Patent: April 16, 2024
    Assignee: APPLE INC.
    Inventors: Anselm Grundhoefer, Pedro Manuel Da Silva Quelhas, Phillip N. Smith, Omar Elafifi, Eshan Verma, Daniele Casaburo
  • Publication number: 20240013487
    Abstract: In one implementation, a method includes: identifying a plurality of plot-effectuators and a plurality of environmental elements within a scene associated with a portion of video content; determining one or more spatial relationships between the plurality of plot-effectuators and the plurality of environmental elements within the scene; synthesizing a representation of the scene based at least in part on the one or more spatial relationships; extracting a plurality of action sequences corresponding to the plurality of plot-effectuators based at least in part on the portion of the video content; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a plurality of digital assets, associated with the plurality of plot-effectuators, within the representation of the scene according to the plurality of action sequences.
    Type: Application
    Filed: July 11, 2022
    Publication date: January 11, 2024
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
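The five steps enumerated in this abstract (identify, relate, synthesize, extract, drive) form a pipeline that can be sketched end to end. The stubbed input clip, the "near" relationship rule, and the playback list standing in for driving digital assets are all assumptions made for illustration.

```python
def reconstruct_scene(video_portion):
    """Run the five steps named in the abstract over a stubbed video portion."""
    # 1. Identify plot-effectuators (characters) and environmental elements.
    effectuators = video_portion["effectuators"]  # e.g. ["hero"]
    elements = video_portion["elements"]          # e.g. ["bridge"]
    # 2. Determine spatial relationships between them (trivial rule here).
    relationships = [(e, "near", el) for e in effectuators for el in elements]
    # 3. Synthesize a scene representation from those relationships.
    scene = {"relationships": relationships}
    # 4. Extract an action sequence per plot-effectuator.
    actions = {e: video_portion["actions"][e] for e in effectuators}
    # 5. Drive each digital asset through the scene to get the SR reconstruction.
    scene["playback"] = [(e, step) for e in effectuators for step in actions[e]]
    return scene

clip = {"effectuators": ["hero"], "elements": ["bridge"],
        "actions": {"hero": ["walk", "jump"]}}
print(reconstruct_scene(clip))
```

Each numbered comment corresponds to one clause of the claimed method; the actual system would use perception models and 3D assets where this sketch uses dictionaries.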
  • Publication number: 20230351644
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Application
    Filed: June 28, 2023
    Publication date: November 2, 2023
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 11804014
    Abstract: In some implementations, representations of applications are identified, positioned, and configured in a computer generated reality (CGR) environment based on context. The location at which the representation of the application is positioned may be based on the context of the CGR environment. The context may be determined based on non-image data that is separate from image data of the physical environment being captured for the CGR environment. As examples, the non-image data may relate to the user, a user preferences, a user attribute, a user gesture, motion, activity, or interaction, semantics related to user input or an external source of information, the current time, date, or time period, information from another device involved in the CGR, etc.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: October 31, 2023
    Assignee: Apple Inc.
    Inventors: Daniele Casaburo, Anselm Grundhoefer, Eshan Verma, Omar Elafifi, Pedro Da Silva Quelhas
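The context-driven placement this abstract describes, positioning an application's representation based on non-image data rather than captured imagery, can be sketched as a rule lookup. The context keys, anchor locations, and rule table below are invented for illustration and do not come from the patent.

```python
# Illustrative rules: (non-image context key, value) -> anchor location in the CGR scene.
PLACEMENT_RULES = {
    ("activity", "cooking"): "kitchen_counter",
    ("time_period", "morning"): "near_window",
}

def position_app(app_name, non_image_context):
    """Choose where to place an app's representation from non-image context.

    The context dict stands in for the abstract's examples: user activity,
    preferences, gestures, time of day, or information from another device.
    """
    for (key, value), location in PLACEMENT_RULES.items():
        if non_image_context.get(key) == value:
            return {"app": app_name, "anchor": location}
    return {"app": app_name, "anchor": "default_panel"}

print(position_app("Recipes", {"activity": "cooking"}))
# {'app': 'Recipes', 'anchor': 'kitchen_counter'}
```

When no rule matches, the sketch falls back to a default anchor; a real system would presumably weigh multiple context signals rather than take the first match.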
  • Patent number: 11727606
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Grant
    Filed: March 29, 2022
    Date of Patent: August 15, 2023
    Assignee: APPLE INC.
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Publication number: 20220222869
    Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
    Type: Application
    Filed: March 29, 2022
    Publication date: July 14, 2022
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 11386653
    Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: July 12, 2022
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Patent number: 11328456
    Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
    Type: Grant
    Filed: January 18, 2019
    Date of Patent: May 10, 2022
    Assignee: APPLE INC.
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Publication number: 20210073429
    Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
    Type: Application
    Filed: August 4, 2020
    Publication date: March 11, 2021
    Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
  • Publication number: 20210074031
    Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
    Type: Application
    Filed: January 18, 2019
    Publication date: March 11, 2021
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
  • Publication number: 20200387712
    Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
    Type: Application
    Filed: January 18, 2019
    Publication date: December 10, 2020
    Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier