Patents by Inventor Olivier Soares
Olivier Soares has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12266106
Abstract: Rendering an avatar may include determining an expression to be represented by an avatar, obtaining a blood texture map associated with the expression, wherein the blood texture map represents an offset of coloration from an albedo map for the expression, and rendering the avatar utilizing the blood texture map.
Type: Grant
Filed: November 27, 2023
Date of Patent: April 1, 2025
Assignee: Apple Inc.
Inventors: Olivier Soares, Andrew P. Mason
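The abstract above describes storing per-expression skin coloration as a signed offset from a base albedo map. The sketch below illustrates only that general idea; the function names, the tuple-of-floats texel format, and the 0..1 color range are assumptions for illustration, not details from the patent.

```python
# Illustrative sketch: apply a per-expression "blood texture" offset map
# to an albedo map. The offset map stores signed coloration deltas
# relative to the albedo, so the rendered skin color is albedo + offset,
# clamped to the displayable range.

def apply_blood_texture(albedo, blood_offset):
    """Combine an albedo map with a signed coloration offset map.

    albedo: list of (r, g, b) texels in [0, 1]
    blood_offset: list of (dr, dg, db) signed offsets, same length
    """
    def clamp(x):
        return max(0.0, min(1.0, x))

    return [
        tuple(clamp(a + d) for a, d in zip(texel, offset))
        for texel, offset in zip(albedo, blood_offset)
    ]

# A neutral gray texel reddened by a "flush" offset for an expression.
albedo = [(0.5, 0.5, 0.5)]
flush = [(0.2, -0.05, -0.05)]
print(apply_blood_texture(albedo, flush))  # roughly (0.7, 0.45, 0.45)
```

Storing the flush as an offset rather than a full texture means one albedo map can be shared across expressions, with only small delta maps per expression.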
-
Patent number: 12243169
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
Type: Grant
Filed: March 20, 2024
Date of Patent: March 4, 2025
Assignee: Apple Inc.
Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
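The abstract mentions combining live sensor data with previously-obtained enrollment data. A minimal way to picture that combination is a confidence-weighted blend that falls back toward the enrollment appearance where live tracking is unreliable. The blend rule, names, and data shapes below are invented for illustration and are not the patented method.

```python
# Illustrative sketch: blend live face data with enrollment data using a
# per-pixel confidence. Confidence 1.0 trusts the live capture fully;
# confidence 0.0 falls back entirely to the stored enrollment appearance.

def blend_face(live, enrolled, confidence):
    """Per-pixel linear blend of live and enrollment intensities."""
    return [c * lv + (1.0 - c) * en
            for lv, en, c in zip(live, enrolled, confidence)]

live = [0.9, 0.8]       # live-captured intensities
enrolled = [0.5, 0.5]   # enrollment intensities
conf = [1.0, 0.5]       # per-pixel tracking confidence
print(blend_face(live, enrolled, conf))
```

A learned system could replace the linear blend, but the fallback structure (live where confident, enrollment elsewhere) is the idea the abstract's last sentence points at.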
-
Publication number: 20250054240
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
Type: Application
Filed: October 30, 2024
Publication date: February 13, 2025
Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
-
Patent number: 12159351
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
Type: Grant
Filed: September 29, 2023
Date of Patent: December 3, 2024
Assignee: Apple Inc.
Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
-
Patent number: 12125130
Abstract: Sensor data indicating a user's response to an avatar experience in which the user experiences a rendered avatar model is obtained. A perceptual quality metric value corresponding to the rendered avatar model is determined based on the sensor data and a determined relationship between the sensor data and the perceptual quality metric value. The avatar model is re-rendered for display based on the perceptual quality metric value.
Type: Grant
Filed: May 11, 2020
Date of Patent: October 22, 2024
Assignee: Apple Inc.
Inventors: Grant H. Mulliken, Akiko Ikkai, Izzet B. Yildiz, John S. McCarten, Lilli I. Jonsson, Olivier Soares, Thomas Gebauer, Fletcher R. Rothkopf, Andrew P. Mason
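The abstract describes a loop: sensor data is mapped through a determined relationship to a perceptual quality metric, which then drives re-rendering. The sketch below shows only that control-flow shape; the linear model, the weights, the threshold values, and the detail-level names are invented stand-ins for whatever relationship the patent actually determines.

```python
# Illustrative sketch: map sensor readings to a perceptual-quality score
# through an assumed linear relationship, then quantize the score into a
# re-render level for the avatar model.

def perceptual_quality(sensor_readings, weights, bias=0.0):
    """Assumed linear model relating sensor data to a quality metric."""
    return sum(s * w for s, w in zip(sensor_readings, weights)) + bias

def rerender_level(quality, thresholds=(0.33, 0.66)):
    """Quantize the metric into an avatar detail level."""
    low, high = thresholds
    if quality < low:
        return "high_detail"    # perceived quality is poor: spend more
    if quality < high:
        return "medium_detail"
    return "low_detail"         # already looks good: save compute

q = perceptual_quality([0.8, 0.2], [0.5, 0.5])
print(q, rerender_level(q))
```

The interesting property is the feedback direction: a low perceived-quality score triggers a more expensive re-render, so rendering effort tracks what the user actually notices.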
-
Publication number: 20240331297
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
Type: Application
Filed: March 20, 2024
Publication date: October 3, 2024
Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
-
Publication number: 20240331294
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position.
Type: Application
Filed: September 29, 2023
Publication date: October 3, 2024
Inventors: Gilles M. Cadet, Olivier Soares, Graham L. Fyffe, Yang Song, Shaobo Guan
-
Publication number: 20240331174
Abstract: Generating a 3D representation of a subject includes obtaining an image of a physical subject. Front depth data is obtained for a front portion of the physical subject. Back depth data is obtained for the physical subject based on the image and the front depth data. A set of joint locations is determined for the physical subject from the image, the front depth data, and the back depth data.
Type: Application
Filed: March 25, 2024
Publication date: October 3, 2024
Inventors: Ran Luo, Olivier Soares, Rishabh Battulwar
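The abstract outlines a pipeline: front depth is captured, back depth is estimated from the image plus the front depth, and joints are located from all three. The sketch below shows only the pipeline's shape; the fixed body-thickness prior and the midpoint joint rule are invented placeholders for the learned estimation the application describes.

```python
# Illustrative sketch of the pipeline shape: estimate back-surface depth
# from front-surface depth using an assumed thickness prior, then place
# each joint midway between the front and back surfaces.

def estimate_back_depth(front_depth, thickness=0.25):
    """Stand-in: assume the back surface lies a fixed distance behind
    the front surface (a real system would infer this per point)."""
    return [d + thickness for d in front_depth]

def joint_depth(front_depth, back_depth):
    """Stand-in joint rule: a joint sits midway through the body."""
    return [(f + b) / 2.0 for f, b in zip(front_depth, back_depth)]

front = [1.0, 1.1]                    # depths from the device, in meters
back = estimate_back_depth(front)     # inferred back-surface depths
print(joint_depth(front, back))       # joint depths between the surfaces
```

The reason back depth matters is visible in the last step: without it, joints would have to be pinned to the visible front surface rather than inside the body.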
-
Publication number: 20240320937
Abstract: In one implementation, a method includes presenting, via a display device, a first synthesized reality (SR) view of an event that includes SR content associated with the event. The SR content includes a plurality of related layers of SR content that perform actions associated with the event. The method includes detecting, via one or more input devices, selection of a respective layer among the plurality of related layers of SR content associated with the event. The method includes presenting, via the display device, a second SR view of the event that includes the respective layer of SR content in response to the selection of the respective layer. The second SR view corresponds to a point-of-view of the respective layer.
Type: Application
Filed: May 31, 2024
Publication date: September 26, 2024
Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
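The claimed flow is: show an overview of an event's content layers, detect selection of one layer, then switch to that layer's point of view. The toy sketch below captures only that state transition; the layer names and dictionary return shape are invented.

```python
# Illustrative sketch: an SR event exposes several content layers; with
# no selection the overview is shown, and selecting a layer switches the
# presented view to that layer's point of view.

def present_view(layers, selected=None):
    """Return a description of which SR view to present."""
    if selected is None:
        return {"view": "event_overview", "layers": sorted(layers)}
    if selected not in layers:
        raise ValueError(f"unknown layer: {selected}")
    return {"view": "point_of_view", "layer": selected}

layers = {"crowd", "commentary", "player_stats"}
print(present_view(layers))
print(present_view(layers, "commentary"))
```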
-
Publication number: 20240292175
Abstract: An audio system and a method of determining an audio filter based on a position of an audio device of the audio system, are described. The audio system receives an image of the audio device being worn by a user and determines, based on the image and a known geometric relationship between a datum on the audio device and an electroacoustic transducer of the audio device, a relative position between the electroacoustic transducer and an anatomical feature of the user. The audio filter is determined based on the relative position. The audio filter can be applied to an audio input signal to render spatialized sound to the user through the electroacoustic transducer, or the audio filter can be applied to a microphone input signal to capture speech of the user by the electroacoustic transducer. Other aspects are also described and claimed.
Type: Application
Filed: May 3, 2024
Publication date: August 29, 2024
Inventors: Vignesh Ganapathi Subramanian, Antti J. Vanne, Olivier Soares, Andrew R. Harvey, Martin E. Johnson, Theo Auclair
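The geometric step in the abstract can be pictured concretely: locate a datum on the device in the image, add the known datum-to-transducer offset to get the transducer position, and take its position relative to an anatomical feature to choose a filter. In the sketch below, all coordinates, the nearest-neighbor filter selection, and the filter bank are invented for illustration; the abstract says only that the filter is "determined based on the relative position".

```python
import math

# Illustrative sketch: derive the transducer position from a datum seen
# in an image plus a known rigid offset, then pick the stored filter
# whose measurement position is nearest to that relative position.

def transducer_position(datum_pos, datum_to_transducer):
    """Apply the known geometric relationship (a rigid offset)."""
    return tuple(d + o for d, o in zip(datum_pos, datum_to_transducer))

def select_filter(rel_pos, filter_bank):
    """filter_bank maps measurement positions to filters; pick the
    position closest to the computed relative position."""
    return min(filter_bank, key=lambda pos: math.dist(pos, rel_pos))

ear = (0.0, 0.0, 0.0)                       # anatomical feature (assumed)
datum = (0.02, 0.01, 0.0)                   # datum located in the image
transducer = transducer_position(datum, (0.005, 0.0, 0.0))
rel = tuple(t - e for t, e in zip(transducer, ear))
bank = {(0.02, 0.0, 0.0): "filter_A", (0.03, 0.01, 0.0): "filter_B"}
print(bank[select_filter(rel, bank)])  # prints "filter_B"
```

The same selected filter could then be applied either to playback (spatialized sound) or to a microphone path (speech capture), which is why the abstract lists both uses.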
-
Patent number: 12033290
Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between a first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
Type: Grant
Filed: August 14, 2023
Date of Patent: July 9, 2024
Assignee: Apple Inc.
Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
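The claim structure is easier to follow with the entities made concrete: two objective-effectuators, one encapsulated in the other, each given an objective for a shared time window and each producing its own action sequence. The class, the rider/horse example, and the one-action-per-timestep rule below are invented illustrations; the patent does not prescribe any of them.

```python
# Toy sketch of the claimed structure: an objective-effectuator (OE)
# holds attributes, may encapsulate another OE, and turns an objective
# plus a time window into a set of actions. Action generation here is a
# trivial placeholder for the method's planning step.

class ObjectiveEffectuator:
    def __init__(self, name, attributes, inner=None):
        self.name = name
        self.attributes = attributes
        self.inner = inner  # the encapsulated OE, if any

    def generate_actions(self, objective, start, end):
        """Placeholder: emit one action per time step in [start, end)."""
        return [f"{self.name}:{objective}@t{t}" for t in range(start, end)]

# First OE (rider) encapsulated within the second OE (horse).
rider = ObjectiveEffectuator("rider", {"skill": "high"})
horse = ObjectiveEffectuator("horse", {"speed": "fast"}, inner=rider)

rider_actions = rider.generate_actions("stay_mounted", 0, 2)
horse_actions = horse.generate_actions("reach_gate", 0, 2)
print(rider_actions, horse_actions)
```

Rendering the SR setting for the time period then amounts to playing back both action sets over the same window, which is the final step of the claim.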
-
Publication number: 20240221296
Abstract: Rendering an avatar in a selected environment may include determining as inputs into an inferred shading network, an expression geometry to be represented by an avatar, head pose, and camera angle, along with a lighting representation for the selected environment. The inferred shading network may then generate a texture of a face to be utilized in rendering the avatar. The lighting representation may be obtained as lighting latent variables which are obtained from an environment autoencoder trained on environment images with various lighting conditions.
Type: Application
Filed: March 19, 2024
Publication date: July 4, 2024
Inventors: Andrew P. Mason, Olivier Soares, Haarm-Pieter Duiker, John S. McCarten
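The data flow in the abstract is: environment image → lighting latent vector (via an autoencoder's encoder) → shading network, alongside expression geometry, head pose, and camera angle → face texture. The sketch below shows only those input/output shapes; both function bodies are invented placeholders (a mean is obviously not a trained encoder), and every name is an assumption.

```python
# Shape-of-the-data sketch: compress an environment image to a small
# lighting latent vector (standing in for the environment autoencoder's
# encoder), then feed it with expression geometry, head pose, and camera
# angle into a stand-in shading function that returns a face texture.

def lighting_latents(env_image, n_latents=4):
    """Stand-in encoder: summarize the image into n_latents numbers.
    A real encoder would be learned, not a brightness average."""
    flat = [px for row in env_image for px in row]
    mean = sum(flat) / len(flat)
    return [mean] * n_latents

def inferred_shading(expression, head_pose, camera_angle, latents):
    """Stand-in network: modulate the expression geometry by the overall
    lighting level to produce a 'texture'."""
    brightness = sum(latents) / len(latents)
    return [[v * brightness for v in row] for row in expression]

env = [[0.2, 0.4], [0.6, 0.8]]           # environment image (assumed)
lat = lighting_latents(env)              # lighting latent variables
tex = inferred_shading([[1.0, 0.5]], head_pose=(0, 0, 0),
                       camera_angle=0.0, latents=lat)
print(tex)
```

The point of the latent representation is compactness: the shading network conditions on a few numbers rather than on the full environment image.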
-
Patent number: 12003954
Abstract: An audio system and a method of determining an audio filter based on a position of an audio device of the audio system, are described. The audio system receives an image of the audio device being worn by a user and determines, based on the image and a known geometric relationship between a datum on the audio device and an electroacoustic transducer of the audio device, a relative position between the electroacoustic transducer and an anatomical feature of the user. The audio filter is determined based on the relative position. The audio filter can be applied to an audio input signal to render spatialized sound to the user through the electroacoustic transducer, or the audio filter can be applied to a microphone input signal to capture speech of the user by the electroacoustic transducer. Other aspects are also described and claimed.
Type: Grant
Filed: March 28, 2022
Date of Patent: June 4, 2024
Assignee: Apple Inc.
Inventors: Vignesh Ganapathi Subramanian, Antti J. Vanne, Olivier Soares, Andrew R. Harvey, Martin E. Johnson, Theo Auclair
-
Patent number: 11972526
Abstract: Various implementations disclosed herein include devices, systems, and methods that present a view of a device user's face portion, that would otherwise be blocked by an electronic device positioned in front of the face, on an outward-facing display of the user's device. The view of the user's face portion may be configured to enable observers to see the user's eyes and facial expressions as if they were seeing through a clear device at the user's actual eyes and facial expressions. Various techniques are used to provide views of the user's face that are realistic, that show the user's current facial appearance, and/or that present the face portion with 3D spatial accuracy, e.g., each eye appearing to be in its actual 3D position. Some implementations combine live data with previously-obtained data, e.g., combining live data with enrollment data.
Type: Grant
Filed: September 29, 2023
Date of Patent: April 30, 2024
Assignee: Apple Inc.
Inventors: Gilles M. Cadet, Shaobo Guan, Olivier Soares, Graham L. Fyffe, Yang Song
-
Patent number: 11967018
Abstract: Rendering an avatar in a selected environment may include determining as inputs into an inferred shading network, an expression geometry to be represented by an avatar, head pose, and camera angle, along with a lighting representation for the selected environment. The inferred shading network may then generate a texture of a face to be utilized in rendering the avatar. The lighting representation may be obtained as lighting latent variables which are obtained from an environment autoencoder trained on environment images with various lighting conditions.
Type: Grant
Filed: December 21, 2020
Date of Patent: April 23, 2024
Assignee: Apple Inc.
Inventors: Andrew P. Mason, Olivier Soares, Haarm-Pieter Duiker, John S. McCarten
-
Publication number: 20230386149
Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between a first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
Type: Application
Filed: August 14, 2023
Publication date: November 30, 2023
Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
-
Patent number: 11830182
Abstract: Rendering an avatar may include determining an expression to be represented by an avatar, obtaining a blood texture map associated with the expression, wherein the blood texture map represents an offset of coloration from an albedo map for the expression, and rendering the avatar utilizing the blood texture map.
Type: Grant
Filed: August 20, 2020
Date of Patent: November 28, 2023
Assignee: Apple Inc.
Inventors: Olivier Soares, Andrew P. Mason
-
Publication number: 20230334907
Abstract: Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network ("CNN") to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.
Type: Application
Filed: June 20, 2023
Publication date: October 19, 2023
Inventor: Olivier Soares
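The second step of the abstract, comparing a new latent vector against previously processed latents with known emotion labels, is essentially a nearest-neighbor lookup in latent space. The sketch below shows only that comparison step; it assumes the expression CNN has already produced the latent vector, and the reference vectors, labels, and distance choice are invented for illustration.

```python
import math

# Illustrative sketch of the comparison step: estimate the emotion for a
# latent vector by finding the closest reference latent among vectors
# with known emotion labels (nearest neighbor under Euclidean distance).

def estimate_emotion(latent, labeled_latents):
    """labeled_latents: dict mapping emotion name -> reference latent."""
    return min(labeled_latents,
               key=lambda emo: math.dist(labeled_latents[emo], latent))

# Invented reference latents for three emotion types.
references = {
    "happy":   [0.9, 0.1, 0.0],
    "sad":     [0.1, 0.9, 0.0],
    "neutral": [0.3, 0.3, 0.3],
}
print(estimate_emotion([0.8, 0.2, 0.1], references))  # prints "happy"
```

Because the CNN was trained on image/3D-mesh pairs, its latent space encodes expression geometry, which is what makes distances in that space meaningful proxies for emotional similarity.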
-
Patent number: 11769305
Abstract: In one implementation, a method includes: instantiating a first objective-effectuator (OE) associated with first attributes and a second OE associated with second attributes into a synthesized reality (SR) setting, wherein the first OE is encapsulated within the second OE; providing a first objective to the first OE based on the first and second attributes; providing a second objective to the second OE based on the second attributes, wherein the first and second objectives are associated with a time period between a first and second temporal points; generating a first set of actions for the first OE based on the first objective and a second set of actions for the second OE based on the second objective; and rendering for display the SR setting for the time period including the first set of actions performed by the first OE and the second set of actions performed by the second OE.
Type: Grant
Filed: December 21, 2021
Date of Patent: September 26, 2023
Assignee: Apple Inc.
Inventors: Ian M. Richter, Michael J. Rockwell, Amritpal Singh Saini, Olivier Soares
-
Patent number: 11727724
Abstract: Estimating emotion may include obtaining an image of at least part of a face, and applying, to the image, an expression convolutional neural network ("CNN") to obtain a latent vector for the image, where the expression CNN is trained from a plurality of pairs each comprising a facial image and a 3D mesh representation corresponding to the facial image. Estimating emotion may further include comparing the latent vector for the image to a plurality of previously processed latent vectors associated with known emotion types to estimate an emotion type for the image.
Type: Grant
Filed: September 24, 2019
Date of Patent: August 15, 2023
Assignee: Apple Inc.
Inventor: Olivier Soares