Patents by Inventor Olivier Soares

Olivier Soares has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12266106
    Abstract: Rendering an avatar may include determining an expression to be represented by an avatar, obtaining a blood texture map associated with the expression, wherein the blood texture map represents an offset of coloration from an albedo map for the expression, and rendering the avatar utilizing the blood texture map.
    Type: Grant
    Filed: November 27, 2023
    Date of Patent: April 1, 2025
    Assignee: Apple Inc.
    Inventors: Olivier Soares, Andrew P. Mason
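The blood-texture approach this abstract describes is, at its core, an additive per-texel offset applied to the albedo map. A minimal sketch of that idea (NumPy; the function name, array shapes, and flat example values are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def apply_blood_texture(albedo: np.ndarray, blood_offset: np.ndarray) -> np.ndarray:
    """Offset the albedo map by the expression's blood texture map and
    clamp the result back to a valid color range."""
    return np.clip(albedo + blood_offset, 0.0, 1.0)

albedo = np.full((4, 4, 3), 0.6)   # flat base skin tone
offset = np.zeros((4, 4, 3))
offset[..., 0] = 0.1               # added redness for the expression
texture = apply_blood_texture(albedo, offset)
```

Because the blood map stores an offset from the albedo rather than absolute color, the same expression map can modulate different base skin tones.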
  • Patent number: 12243169
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
    Type: Grant
    Filed: March 20, 2024
    Date of Patent: March 4, 2025
    Assignee: Apple Inc.
    Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
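The "combine live data with enrollment data" step in the abstract above can be pictured as a per-texel blend weighted by how much the live capture can be trusted. A minimal sketch under that assumption (the function name and confidence weighting are illustrative, not the patented method):

```python
import numpy as np

def composite_face_view(live: np.ndarray, enrolled: np.ndarray,
                        confidence: np.ndarray) -> np.ndarray:
    """Per-texel blend of live sensor data with previously captured
    enrollment data, weighted by live-data confidence in [0, 1]."""
    c = confidence[..., None]  # broadcast weight over color channels
    return c * live + (1.0 - c) * enrolled

live = np.ones((2, 2, 3))       # live capture (white placeholder)
enrolled = np.zeros((2, 2, 3))  # enrollment capture (black placeholder)
conf = np.array([[1.0, 0.0], [0.5, 0.5]])
view = composite_face_view(live, enrolled, conf)
```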
  • Publication number: 20250054240
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
    Type: Application
    Filed: October 30, 2024
    Publication date: February 13, 2025
    Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
  • Patent number: 12159351
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
    Type: Grant
    Filed: September 29, 2023
    Date of Patent: December 3, 2024
    Assignee: Apple Inc.
    Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
  • Patent number: 12125130
    Abstract: Sensor data indicating a user's response to an avatar experience in which the user experiences a rendered avatar model is obtained. A perceptual quality metric value corresponding to the rendered avatar model is determined based on the sensor data and a determined relationship between the sensor data and the perceptual quality metric value. The avatar model is re-rendered for display based on the perceptual quality metric value.
    Type: Grant
    Filed: May 11, 2020
    Date of Patent: October 22, 2024
    Assignee: Apple Inc.
    Inventors: Grant H. Mulliken, Akiko Ikkai, Izzet B. Yildiz, John S. McCarten, Lilli I. Jonsson, Olivier Soares, Thomas Gebauer, Fletcher R. Rothkopf, Andrew P. Mason
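The re-rendering loop in the abstract above amounts to a feedback controller: the perceptual quality metric drives how the avatar is rendered next. A minimal sketch of one such policy (the threshold values, level scale, and function name are assumptions for illustration):

```python
def adjust_render_level(metric: float, level: int,
                        target: float = 0.8, max_level: int = 4) -> int:
    """Raise the avatar's render level when perceived quality falls below
    the target, lower it when comfortably above, otherwise keep it."""
    if metric < target and level < max_level:
        return level + 1                    # perception poor: spend more
    if metric > target + 0.15 and level > 0:
        return level - 1                    # perception fine: spend less
    return level

next_level = adjust_render_level(0.5, 2)    # quality below target
```

In practice the metric would come from the sensor-data model the abstract describes, not a hand-set constant.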
  • Publication number: 20240331297
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
    Type: Application
    Filed: March 20, 2024
    Publication date: October 3, 2024
    Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
  • Publication number: 20240331294
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
    Type: Application
    Filed: September 29, 2023
    Publication date: October 3, 2024
    Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
  • Publication number: 20240331174
    Abstract: Generating a 3D representation of a subject includes obtaining an image of a physical subject. Front depth data is obtained for a front portion of the physical subject. Back depth data is obtained for the physical subject based on the image and the front depth data. A set of joint locations is determined for the physical subject from the image, the front depth data, and the back depth data.
    Type: Application
    Filed: March 25, 2024
    Publication date: October 3, 2024
    Inventors: Ran Luo, Olivier Soares, Rishabh Battulwar
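Once front and back depth are both available, as in the abstract above, a joint's depth can be recovered from the two surfaces at its 2D image location. A minimal sketch of that fusion step (the midpoint rule and the fixed thickness used in the example are illustrative assumptions):

```python
import numpy as np

def joint_depth(front: np.ndarray, back: np.ndarray, px: tuple) -> float:
    """Place a joint midway between the front- and back-surface depths
    at its 2D pixel location."""
    y, x = px
    return 0.5 * (front[y, x] + back[y, x])

front = np.full((3, 3), 1.0)   # front depth in metres from the camera
back = front + 0.25            # back depth from an assumed body thickness
d = joint_depth(front, back, (1, 1))
```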
  • Publication number: 20240320937
    Abstract: In one implementation, a method includes presenting, via a display device, a first synthesized reality (SR) view of an event that includes SR content associated with the event. The SR content includes a plurality of related layers of SR content that perform actions associated with the event. The method includes detecting, via one or more input devices, selection of a respective layer among the plurality of related layers of SR content associated with the event. The method includes presenting, via the display device, a second SR view of the event that includes the respective layer of SR content in response to the selection of the respective layer. The second SR view corresponds to a point-of-view of the respective layer.
    Type: Application
    Filed: May 31, 2024
    Publication date: September 26, 2024
    Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
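The layer-selection step the abstract describes can be reduced to a lookup: choosing a layer yields the pose from which the second SR view is rendered. A minimal sketch (the `SRLayer` type, pose encoding, and example layers are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class SRLayer:
    name: str
    pose: tuple  # (x, y, z) point of view of this layer in the SR setting

def select_view(layers: list, selected: str) -> tuple:
    """Return the selected layer's point of view, from which the second
    SR view of the event would be rendered."""
    for layer in layers:
        if layer.name == selected:
            return layer.pose
    raise KeyError(selected)

layers = [SRLayer("crowd", (0.0, 2.0, 10.0)), SRLayer("player", (1.0, 1.7, 0.0))]
pov = select_view(layers, "player")
```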
  • Publication number: 20240292175
    Abstract: An audio system and a method of determining an audio filter based on a position of an audio device of the audio system, are described. The audio system receives an image of the audio device being worn by a user and determines, based on the image and a known geometric relationship between a datum on the audio device and an electroacoustic transducer of the audio device, a relative position between the electroacoustic transducer and an anatomical feature of the user. The audio filter is determined based on the relative position. The audio filter can be applied to an audio input signal to render spatialized sound to the user through the electroacoustic transducer, or the audio filter can be applied to a microphone input signal to capture speech of the user by the electroacoustic transducer. Other aspects are also described and claimed.
    Type: Application
    Filed: May 3, 2024
    Publication date: August 29, 2024
    Inventors: Vignesh Ganapathi Subramanian, Antti J. Vanne, Olivier Soares, Andrew R. Harvey, Martin E. Johnson, Theo Auclair
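One concrete ingredient of a position-dependent audio filter like the one described above is the propagation delay implied by the measured transducer-to-ear distance. A minimal sketch (integer-sample delay for brevity; the function names and sample rate are assumptions, and a real spatializer would use fractional delays and level/spectral shaping too):

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at room temperature

def transducer_delay(distance_m: float, sample_rate: int = 48000) -> int:
    """Samples of delay implied by the measured distance between the
    electroacoustic transducer and the user's ear."""
    return int(round(distance_m / SPEED_OF_SOUND * sample_rate))

def apply_delay(signal: np.ndarray, samples: int) -> np.ndarray:
    """Delay the signal by an integer number of samples, zero-padded
    at the front and truncated to the original length."""
    return np.concatenate([np.zeros(samples), signal])[: signal.size]
```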
  • Patent number: 12033290
    Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: July 9, 2024
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
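The encapsulation described in the abstract above (one objective-effectuator nested within another, each planning actions for the same time period) has a natural recursive shape. A minimal sketch of that structure (the class, tuple encoding, and example objectives are illustrative, not the patented planner):

```python
from dataclasses import dataclass, field

@dataclass
class ObjectiveEffectuator:
    objective: str
    children: list = field(default_factory=list)  # encapsulated OEs

    def plan(self, t0: int, t1: int) -> list:
        """Generate this OE's action for the period [t0, t1], followed
        by the actions of every OE encapsulated within it."""
        actions = [(self.objective, t0, t1)]
        for child in self.children:
            actions += child.plan(t0, t1)
        return actions

crowd = ObjectiveEffectuator("cheer", children=[ObjectiveEffectuator("wave")])
actions = crowd.plan(0, 10)
```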
  • Publication number: 20240221296
    Abstract: Rendering an avatar in a selected environment may include determining as inputs into an inferred shading network, an expression geometry to be represented by an avatar, head pose, and camera angle, along with a lighting representation for the selected environment. The inferred shading network may then generate a texture of a face to be utilized in rendering the avatar. The lighting representation may be obtained as lighting latent variables which are obtained from an environment autoencoder trained on environment images with various lighting conditions.
    Type: Application
    Filed: March 19, 2024
    Publication date: July 4, 2024
    Inventors: Andrew P. Mason, Olivier Soares, Haarm-Pieter Duiker, John S. McCarten
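The inferred shading network above is conditioned on expression geometry, head pose, camera angle, and a lighting latent from the environment autoencoder. The data flow can be sketched with a trivial linear stand-in for the learned network (the weights, dimensions, and function name are all placeholders; the real model is learned):

```python
import numpy as np

rng = np.random.default_rng(7)
W = rng.standard_normal((8, 16)) * 0.1  # stand-in for learned weights

def infer_texture_code(expression: np.ndarray, head_pose: np.ndarray,
                       camera: np.ndarray, lighting_latent: np.ndarray) -> np.ndarray:
    """Concatenate the conditioning inputs and map them to a face-texture
    code; a linear stand-in for the inferred shading network."""
    x = np.concatenate([expression, head_pose, camera, lighting_latent])
    return np.tanh(W @ x)

code = infer_texture_code(np.zeros(8), np.zeros(3), np.zeros(3), np.zeros(2))
```

The key point the sketch preserves is that lighting enters only through the compact latent vector, so the same network serves arbitrary environments.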
  • Patent number: 12003954
    Abstract: An audio system and a method of determining an audio filter based on a position of an audio device of the audio system, are described. The audio system receives an image of the audio device being worn by a user and determines, based on the image and a known geometric relationship between a datum on the audio device and an electroacoustic transducer of the audio device, a relative position between the electroacoustic transducer and an anatomical feature of the user. The audio filter is determined based on the relative position. The audio filter can be applied to an audio input signal to render spatialized sound to the user through the electroacoustic transducer, or the audio filter can be applied to a microphone input signal to capture speech of the user by the electroacoustic transducer. Other aspects are also described and claimed.
    Type: Grant
    Filed: March 28, 2022
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: Vignesh Ganapathi Subramanian, Antti J. Vanne, Olivier Soares, Andrew R. Harvey, Martin E. Johnson, Theo Auclair
  • Patent number: 11972526
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
    Type: Grant
    Filed: September 29, 2023
    Date of Patent: April 30, 2024
    Assignee: Apple Inc.
    Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
  • Patent number: 11967018
    Abstract: Rendering an avatar in a selected environment may include determining as inputs into an inferred shading network, an expression geometry to be represented by an avatar, head pose, and camera angle, along with a lighting representation for the selected environment. The inferred shading network may then generate a texture of a face to be utilized in rendering the avatar. The lighting representation may be obtained as lighting latent variables which are obtained from an environment autoencoder trained on environment images with various lighting conditions.
    Type: Grant
    Filed: December 21, 2020
    Date of Patent: April 23, 2024
    Assignee: Apple Inc.
    Inventors: Andrew P. Mason, Olivier Soares, Haarm-Pieter Duiker, John S. McCarten
  • Publication number: 20230386149
    Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
    Type: Application
    Filed: August 14, 2023
    Publication date: November 30, 2023
    Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
  • Patent number: 11830182
    Abstract: Rendering an avatar may include determining an expression to be represented by an avatar, obtaining a blood texture map associated with the expression, wherein the blood texture map represents an offset of coloration from an albedo map for the expression, and rendering the avatar utilizing the blood texture map.
    Type: Grant
    Filed: August 20, 2020
    Date of Patent: November 28, 2023
    Assignee: Apple Inc.
    Inventors: Olivier Soares, Andrew P. Mason
  • Publication number: 20230334907
    Abstract: Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network (“CNN”) to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.
    Type: Application
    Filed: June 20, 2023
    Publication date: October 19, 2023
    Inventor: Olivier Soares
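The final step of the abstract above, comparing a query latent against stored latents with known emotion labels, is a nearest-neighbor lookup in the CNN's latent space. A minimal sketch (Euclidean distance and the 2-D example latents are illustrative assumptions; the patent does not specify the metric):

```python
import numpy as np

def estimate_emotion(latent: np.ndarray, known: dict) -> str:
    """Return the emotion label whose stored latent vector is nearest
    (Euclidean) to the query latent from the expression CNN."""
    return min(known, key=lambda k: float(np.linalg.norm(known[k] - latent)))

known = {"happy": np.array([1.0, 0.0]), "sad": np.array([-1.0, 0.0])}
label = estimate_emotion(np.array([0.8, 0.1]), known)
```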
  • Patent number: 11769305
    Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: September 26, 2023
    Assignee: Apple Inc.
    Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
  • Patent number: 11727724
    Abstract: Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network (“CNN”) to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: August 15, 2023
    Assignee: Apple Inc.
    Inventor: Olivier Soares