Patents by Inventor Omar ELAFIFI
Omar ELAFIFI has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250068781
Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
Type: Application
Filed: November 13, 2024
Publication date: February 27, 2025
Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
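As a concrete illustration of the graph-of-nodes-and-edges output this abstract describes, here is a minimal Python sketch. The class names and the "on_top_of" relation label are illustrative assumptions, not taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectNode:
    """A detected object instance in the physical environment."""
    instance_id: str   # e.g. "V1"
    category: str      # e.g. "vase"

@dataclass
class RelationEdge:
    """A directed relationship between two object instances."""
    subject: str       # instance_id of the subject object
    relation: str      # e.g. "on_top_of" (assumed label)
    target: str        # instance_id of the target object

@dataclass
class SceneGraph:
    nodes: list = field(default_factory=list)
    edges: list = field(default_factory=list)

# The kind of result the abstract describes: vase V1 is on top of table T1.
graph = SceneGraph(
    nodes=[ObjectNode("V1", "vase"), ObjectNode("T1", "table")],
    edges=[RelationEdge("V1", "on_top_of", "T1")],
)
print(graph.edges[0])  # RelationEdge(subject='V1', relation='on_top_of', target='T1')
```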
-
Publication number: 20250045978
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Application
Filed: October 17, 2024
Publication date: February 6, 2025
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
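The scene-conditional behavior in this abstract amounts to looking up SR content and an associated task by whichever scene contains the current playhead. A minimal sketch under assumed names; the SceneEntry record, the .usdz paths, and the tasks are all illustrative:

```python
from dataclasses import dataclass

@dataclass
class SceneEntry:
    """Assumed per-scene record: SR content and an associated task,
    keyed by the scene's time period within the video content."""
    start_s: float
    end_s: float
    sr_content: str   # placeholder for a real SR asset reference
    task: str

# Illustrative scene table; structure and values are assumptions.
SCENES = [
    SceneEntry(0.0, 90.0, "sr/scene1_bridge.usdz", "find the hidden key"),
    SceneEntry(90.0, 200.0, "sr/scene2_market.usdz", "follow the courier"),
]

def present_sr_for(playhead_s: float):
    """On user input, pick the scene containing the current playhead and
    present its SR content alongside an indication of its task."""
    for scene in SCENES:
        if scene.start_s <= playhead_s < scene.end_s:
            print(f"presenting {scene.sr_content}")
            print(f"task: {scene.task}")
            return scene
    return None

present_sr_for(120.0)  # falls within the second scene's time period
```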
-
Patent number: 12175162
Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
Type: Grant
Filed: August 4, 2020
Date of Patent: December 24, 2024
Assignee: Apple Inc.
Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
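Complementing the graph output sketched above, the "original semantic mesh" input can be pictured as geometry plus a semantic class label per vertex. A hedged sketch, where the field names and labeling scheme are assumptions rather than the patent's actual representation:

```python
import numpy as np

class SemanticMesh:
    """Assumed representation: vertices, triangular faces, and one
    semantic class id per vertex."""
    def __init__(self, vertices, faces, vertex_labels):
        self.vertices = np.asarray(vertices, dtype=np.float32)  # (N, 3) xyz
        self.faces = np.asarray(faces, dtype=np.int64)          # (M, 3) vertex indices
        self.vertex_labels = np.asarray(vertex_labels)          # (N,) class ids

    def face_labels(self):
        """Label each face by its first vertex's class (a simplification)."""
        return self.vertex_labels[self.faces[:, 0]]

# Two triangles: vertices labeled "table" (0) and "vase" (1).
mesh = SemanticMesh(
    vertices=[[0, 0, 0], [1, 0, 0], [0, 1, 0], [0.2, 0.2, 0.5]],
    faces=[[0, 1, 2], [0, 1, 3]],
    vertex_labels=[0, 0, 0, 1],
)
print(mesh.face_labels())  # [0 0] -- per-face labels fed to the relationship model
```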
-
Patent number: 12148066
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Grant
Filed: June 28, 2023
Date of Patent: November 19, 2024
Assignee: Apple Inc.
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Publication number: 20240212346
Abstract: In one implementation, a method of remedying a medical impairment of a user is performed by a head-mounted device including a processor, non-transitory memory, one or more biometric sensors, an image sensor, and a display. The method includes detecting, based on data from at least one of the image sensor and the one or more biometric sensors, a medical impairment of a user of the head-mounted device from a plurality of potential medical impairments associated with a plurality of remedies. The method includes selecting, from the plurality of remedies, a remedy of the medical impairment of the user. The method includes controlling the display to effect the remedy of the medical impairment of the user.
Type: Application
Filed: March 8, 2024
Publication date: June 27, 2024
Inventors: Anselm Grundhoefer, Pedro Manuel Da Silva Quelhas, Phillip N. Smith, Omar Elafifi, Eshan Verma, Daniele Casaburo
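The abstract outlines a detect-select-remedy pipeline: classify an impairment from sensor data, look up its remedy, and drive the display accordingly. A toy sketch of that flow; the impairment names, sensor thresholds, and display adjustments are invented for illustration:

```python
# Assumed impairment -> remedy mapping; not values from the patent.
REMEDIES = {
    "low_contrast_sensitivity": "increase display contrast",
    "light_sensitivity": "reduce display brightness",
}

def detect_impairment(biometric: dict, image_stats: dict):
    """Classify a potential impairment from biometric and image-sensor
    data using toy threshold rules."""
    if biometric.get("squint_score", 0.0) > 0.8:
        return "low_contrast_sensitivity"
    if image_stats.get("scene_lux", 0) > 10_000 and biometric.get("blink_rate", 0) > 30:
        return "light_sensitivity"
    return None

def apply_remedy(impairment: str):
    """Select the remedy for the detected impairment and control the display."""
    remedy = REMEDIES[impairment]
    print(f"display control: {remedy}")

impairment = detect_impairment({"squint_score": 0.9}, {"scene_lux": 500})
if impairment is not None:
    apply_remedy(impairment)  # display control: increase display contrast
```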
-
Patent number: 11961290
Abstract: In one implementation, a method of remedying a medical impairment of a user is performed by a head-mounted device including a processor, non-transitory memory, one or more biometric sensors, an image sensor, and a display. The method includes detecting, based on data from at least one of the image sensor and the one or more biometric sensors, a medical impairment of a user of the head-mounted device from a plurality of potential medical impairments associated with a plurality of remedies. The method includes selecting, from the plurality of remedies, a remedy of the medical impairment of the user. The method includes controlling the display to effect the remedy of the medical impairment of the user.
Type: Grant
Filed: June 16, 2020
Date of Patent: April 16, 2024
Assignee: Apple Inc.
Inventors: Anselm Grundhoefer, Pedro Manuel Da Silva Quelhas, Phillip N. Smith, Omar Elafifi, Eshan Verma, Daniele Casaburo
-
Publication number: 20240013487
Abstract: In one implementation, a method includes: identifying a plurality of plot-effectuators and a plurality of environmental elements within a scene associated with a portion of video content; determining one or more spatial relationships between the plurality of plot-effectuators and the plurality of environmental elements within the scene; synthesizing a representation of the scene based at least in part on the one or more spatial relationships; extracting a plurality of action sequences corresponding to the plurality of plot-effectuators based at least in part on the portion of the video content; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a plurality of digital assets, associated with the plurality of plot-effectuators, within the representation of the scene according to the plurality of action sequences.
Type: Application
Filed: July 11, 2022
Publication date: January 11, 2024
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
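One way to picture "driving a plurality of digital assets ... according to the plurality of action sequences" is replaying time-stamped actions per plot-effectuator inside the synthesized scene. A minimal sketch with assumed names; the Action fields and verbs are illustrative:

```python
from dataclasses import dataclass

@dataclass
class Action:
    """One step in an extracted action sequence (fields are assumptions)."""
    t: float      # timestamp within the scene, in seconds
    verb: str     # e.g. "walk_to"
    target: str   # an environmental element or another plot-effectuator

def drive_asset(effectuator: str, sequence: list):
    """Drive the digital asset for one plot-effectuator through its
    extracted action sequence, in time order."""
    for action in sorted(sequence, key=lambda a: a.t):
        print(f"{action.t:5.1f}s  {effectuator} -> {action.verb}({action.target})")

# Illustrative: two effectuators whose actions reference scene elements.
drive_asset("hero", [Action(0.0, "walk_to", "door"), Action(4.0, "open", "door")])
drive_asset("guard", [Action(2.0, "turn_toward", "hero")])
```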
-
Publication number: 20230351644
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Application
Filed: June 28, 2023
Publication date: November 2, 2023
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Patent number: 11804014
Abstract: In some implementations, representations of applications are identified, positioned, and configured in a computer generated reality (CGR) environment based on context. The location at which the representation of the application is positioned may be based on the context of the CGR environment. The context may be determined based on non-image data that is separate from image data of the physical environment being captured for the CGR environment. As examples, the non-image data may relate to the user, a user preference, a user attribute, a user gesture, motion, activity, or interaction, semantics related to user input or an external source of information, the current time, date, or time period, information from another device involved in the CGR environment, etc.
Type: Grant
Filed: April 10, 2020
Date of Patent: October 31, 2023
Assignee: Apple Inc.
Inventors: Daniele Casaburo, Anselm Grundhoefer, Eshan Verma, Omar Elafifi, Pedro Da Silva Quelhas
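Context-based placement can be sketched as rules mapping non-image signals (time of day, user activity) to anchors in the CGR environment. The context keys and anchor names below are assumptions, not the patent's vocabulary:

```python
# Assumed placement rules: (predicate over context, anchor in the CGR environment).
PLACEMENT_RULES = [
    (lambda ctx: ctx["activity"] == "cooking", "above_countertop"),
    (lambda ctx: ctx["hour"] < 9, "near_coffee_machine"),
]

def place_app(app: str, context: dict) -> str:
    """Return an anchor for the app's representation based on non-image
    context, falling back to a default position."""
    for predicate, anchor in PLACEMENT_RULES:
        if predicate(context):
            return anchor
    return "default_panel"

# Evening cooking session: the recipes app lands above the countertop.
print(place_app("recipes", {"activity": "cooking", "hour": 18}))  # above_countertop
```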
-
Patent number: 11727606
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Grant
Filed: March 29, 2022
Date of Patent: August 15, 2023
Assignee: Apple Inc.
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Publication number: 20220222869
Abstract: In one implementation, a method includes: obtaining a user input to view SR content associated with video content; if the video content includes a first scene when the user input was detected: obtaining first SR content for a first time period of the video content associated with the first scene; obtaining a task associated with the first scene; and causing presentation of the first SR content and a first indication of the task associated with the first scene; and if the video content includes a second scene when the user input was detected: obtaining second SR content for a second time period of the video content associated with the second scene; obtaining a task associated with the second scene; and causing presentation of the second SR content and a second indication of the task associated with the second scene.
Type: Application
Filed: March 29, 2022
Publication date: July 14, 2022
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Patent number: 11386653
Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
Type: Grant
Filed: January 18, 2019
Date of Patent: July 12, 2022
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
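A scene description that captures a trajectory can be pictured as keyframed positions that the SR reconstruction samples each frame to pose the digital asset. A hedged sketch; the keyframe values and linear interpolation are illustrative assumptions:

```python
import numpy as np

# Assumed keyframed trajectory for one plot-effectuator within the setting.
keyframe_times = np.array([0.0, 2.0, 5.0])        # seconds
keyframe_positions = np.array([[0.0, 0.0, 0.0],   # xyz at each keyframe
                               [1.0, 0.0, 0.0],
                               [1.0, 0.0, 2.0]])

def pose_at(t: float) -> np.ndarray:
    """Linearly interpolate the effectuator's position at time t."""
    return np.array([np.interp(t, keyframe_times, keyframe_positions[:, i])
                     for i in range(3)])

# Sample the trajectory to drive the first digital asset frame by frame.
for t in (0.0, 1.0, 3.5):
    print(t, pose_at(t))
```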
-
Patent number: 11328456
Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
Type: Grant
Filed: January 18, 2019
Date of Patent: May 10, 2022
Assignee: Apple Inc.
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
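The present-then-adjust flow in this abstract can be sketched as an event loop over user interactions with the SR content. The event names and fields below are assumptions:

```python
def explore_plot_setting(setting: str, interactions):
    """Present SR content for the current plot setting, then adjust the
    presentation as user interactions arrive."""
    sr_content = {"setting": setting, "camera": [0.0, 1.6, 0.0]}  # obtain SR content
    print(f"presenting SR content for: {setting}")
    for event in interactions:  # one or more user interactions
        if event["type"] == "move":
            sr_content["camera"] = event["to"]
            print(f"adjusted presentation: camera at {sr_content['camera']}")
        elif event["type"] == "inspect":
            print(f"adjusted presentation: highlighting {event['object']}")

# A user steps into the setting, then inspects an object within it.
explore_plot_setting("castle_courtyard", [
    {"type": "move", "to": [2.0, 1.6, -1.0]},
    {"type": "inspect", "object": "fountain"},
])
```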
-
Publication number: 20210073429
Abstract: Implementations disclosed herein provide systems and methods that determine relationships between objects based on an original semantic mesh of vertices and faces that represent the 3D geometry of a physical environment. Such an original semantic mesh may be generated and used to provide input to a machine learning model that estimates relationships between the objects in the physical environment. For example, the machine learning model may output a graph of nodes and edges indicating that a vase is on top of a table or that a particular instance of a vase, V1, is on top of a particular instance of a table, T1.
Type: Application
Filed: August 4, 2020
Publication date: March 11, 2021
Inventors: Angela Blechschmidt, Daniel Ulbricht, Omar Elafifi
-
Publication number: 20210074031
Abstract: In one implementation, a method includes: while causing presentation of video content having a current plot setting, receiving a user input indicating a request to explore the current plot setting; obtaining synthesized reality (SR) content associated with the current plot setting in response to receiving the user input; causing presentation of the SR content associated with the current plot setting; receiving one or more user interactions with the SR content; and adjusting the presentation of the SR content in response to receiving the one or more user interactions with the SR content.
Type: Application
Filed: January 18, 2019
Publication date: March 11, 2021
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier
-
Publication number: 20200387712
Abstract: In one implementation, a method includes: identifying a first plot-effectuator within a scene associated with a portion of video content; synthesizing a scene description for the scene that corresponds to a trajectory of the first plot-effectuator within a setting associated with the scene and actions performed by the first plot-effectuator; and generating a corresponding synthesized reality (SR) reconstruction of the scene by driving a first digital asset associated with the first plot-effectuator according to the scene description for the scene.
Type: Application
Filed: January 18, 2019
Publication date: December 10, 2020
Inventors: Ian M. Richter, Daniel Ulbricht, Jean-Daniel E. Nahmias, Omar Elafifi, Peter Meier