Patents by Inventor EMMANUEL PIUZE-PHANEUF
EMMANUEL PIUZE-PHANEUF has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12223117
Abstract: In some implementations, a method includes: obtaining uncorrected hand tracking data; obtaining a depth map associated with a physical environment; identifying a position of a portion of the finger within the physical environment based on the depth map and the uncorrected hand tracking data; performing spatial depth smoothing on a region of the depth map adjacent to the position of the portion of the finger; and generating corrected hand tracking data by performing point of view (POV) correction on the uncorrected hand tracking data based on the spatially depth-smoothed region of the depth map adjacent to the portion of the finger.
Type: Grant
Filed: September 21, 2023
Date of Patent: February 11, 2025
Assignee: Apple Inc.
Inventors: Emmanuel Piuze-Phaneuf, Ali Ercan, Julian K. Shutzberg, Paul A. Lacey
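A minimal Python/NumPy sketch of the general idea in this abstract: smooth the depth map only in a small window around the tracked fingertip, then use the smoothed depth to reproject the tracked point into the user's viewpoint. The box filter, window sizes, intrinsics, and the 3 cm camera-to-eye offset are all illustrative assumptions, not the patented implementation.

```python
import numpy as np

def box_smooth_region(depth, center_rc, radius=6, k=2):
    """Box-filter the depth map, but only inside a window around center_rc (row, col)."""
    r, c = center_rc
    h, w = depth.shape
    r0, r1 = max(0, r - radius), min(h, r + radius + 1)
    c0, c1 = max(0, c - radius), min(w, c + radius + 1)
    out = depth.copy()
    for i in range(r0, r1):
        for j in range(c0, c1):
            i0, i1 = max(0, i - k), min(h, i + k + 1)
            j0, j1 = max(0, j - k), min(w, j + k + 1)
            out[i, j] = depth[i0:i1, j0:j1].mean()
    return out

def pov_correct(finger_uv, depth_value, K, T_eye_from_cam):
    """Back-project the finger pixel with its (smoothed) depth, then re-express the
    3D point relative to the user's eye -- the essence of point-of-view correction."""
    u, v = finger_uv
    x = (u - K[0, 2]) / K[0, 0] * depth_value
    y = (v - K[1, 2]) / K[1, 1] * depth_value
    p_cam = np.array([x, y, depth_value, 1.0])
    return (T_eye_from_cam @ p_cam)[:3]

# Toy example: a noisy depth spike near the fingertip, and an assumed 3 cm offset
# between the tracking camera and the user's eye.
depth_map = np.full((480, 640), 1.2, dtype=np.float32)
depth_map[200:210, 300:310] = 0.6
smoothed = box_smooth_region(depth_map, (205, 305))
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4); T[0, 3] = 0.03
print(pov_correct((305, 205), smoothed[205, 305], K, T))
```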
-
Publication number: 20240211035
Abstract: Various implementations disclosed herein include devices, systems, and methods that adjust a focus of a camera based on a distance associated with a determined user attention. For example, an example process may include obtaining sensor data from one or more sensors in a physical environment. The process may include determining at least one gaze direction of at least one eye based on the sensor data. The process may further include determining a distance associated with user attention based on either a convergence point determined from an intersection of the gaze directions, or a distance of an object in a 3D representation of the physical environment along the at least one gaze direction. The process may further include adjusting a focus of a camera of the one or more sensors based on the distance associated with the user attention.
Type: Application
Filed: December 20, 2023
Publication date: June 27, 2024
Inventors: Arthur Y. Zhang, Ray L. Chang, Yanli Zhang, Luke A. Pillans, Ryan J. Dunn, Jeffrey N. Gleason, Christian Moore, Simon Fortin-Deschenes, Emmanuel Piuze-Phaneuf
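The convergence part of this abstract is standard ray geometry: find where the two gaze rays pass closest to each other and use that distance to drive focus. A small sketch under assumed names and units (focus expressed in diopters); the mapping to an actual lens actuator is not specified by the abstract.

```python
import numpy as np

def convergence_distance(origin_l, dir_l, origin_r, dir_r):
    """Distance from the midpoint between the eyes to the point where the two gaze
    rays pass closest to each other (their 'convergence' point)."""
    d_l = dir_l / np.linalg.norm(dir_l)
    d_r = dir_r / np.linalg.norm(dir_r)
    w0 = origin_l - origin_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w0, d_r @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-9:          # nearly parallel gaze rays: looking far away
        return np.inf
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_l = origin_l + s * d_l
    p_r = origin_r + t * d_r
    midpoint_eyes = 0.5 * (origin_l + origin_r)
    return np.linalg.norm(0.5 * (p_l + p_r) - midpoint_eyes)

def focus_target_diopters(distance_m, min_focus_m=0.1):
    """Convert an attention distance to a focus target in diopters (1 / meters)."""
    distance_m = max(distance_m, min_focus_m)
    return 1.0 / distance_m if np.isfinite(distance_m) else 0.0

# Example: eyes ~6.3 cm apart, both verging on a point ~0.5 m ahead.
left = np.array([-0.0315, 0.0, 0.0]); right = np.array([0.0315, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
d = convergence_distance(left, target - left, right, target - right)
print(d, focus_target_diopters(d))   # ~0.5 m -> ~2.0 diopters
```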
-
Publication number: 20240202866
Abstract: In one implementation, a method of performing perspective correction of an image is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of a physical environment. The method includes obtaining a plurality of initial depths respectively associated with a plurality of pixels of the image of the physical environment. The method includes generating a depth map for the image of the physical environment based on the plurality of initial depths and a respective plurality of confidences of the plurality of initial depths. The method includes transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a perspective of the image sensor and a perspective of a user. The method includes displaying, on the display, the transformed image.
Type: Application
Filed: March 29, 2022
Publication date: June 20, 2024
Inventors: Samer Barakat, Bertrand Nepveu, Christian W. Gosch, Emmanuel Piuze-Phaneuf, Vincent Chapdelaine-Couture
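One way to build a depth map from initial depths weighted by their confidences is iterated normalized convolution, sketched below; the patent does not say this is the method used, so treat the scheme, the iteration count, and the wrap-around edge handling as assumptions for illustration only.

```python
import numpy as np

def dense_depth_from_confident_samples(init_depth, confidence, iters=100, k=1):
    """Confidence-weighted fill: each pass sets every pixel to the confidence-weighted
    mean of its neighborhood, then pins the original high-confidence measurements,
    so depth diffuses outward from reliable samples. np.roll wraps at the image
    borders -- acceptable for a sketch, not for production."""
    depth = init_depth.copy()
    conf = confidence.copy()
    for _ in range(iters):
        num = np.zeros_like(depth)
        den = np.zeros_like(depth)
        for dr in range(-k, k + 1):
            for dc in range(-k, k + 1):
                num += np.roll(np.roll(conf * depth, dr, axis=0), dc, axis=1)
                den += np.roll(np.roll(conf, dr, axis=0), dc, axis=1)
        filled = np.where(den > 0, num / np.maximum(den, 1e-9), depth)
        new_conf = np.clip(den / (2 * k + 1) ** 2, 0.0, 1.0)
        # Keep the original confident measurements fixed; everything else is filled in.
        depth = np.where(confidence > 0.5, init_depth, filled)
        conf = np.where(confidence > 0.5, confidence, new_conf)
    return depth

# Example: recover a ramp-shaped depth field from ~10% confident samples.
rng = np.random.default_rng(0)
h, w = 60, 80
true_depth = np.linspace(0.5, 3.0, w)[None, :].repeat(h, axis=0)
conf = (rng.uniform(size=(h, w)) > 0.9).astype(float)
init = np.where(conf > 0.5, true_depth, 0.0)
dense = dense_depth_from_confident_samples(init, conf)
print(np.abs(dense - true_depth).mean())
```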
-
Publication number: 20240112303
Abstract: In some implementations, a method includes: obtaining image data associated with a physical environment; obtaining first contextual information including at least one of first user information associated with a current state of a user of the computing system, first application information associated with a first application being executed by the computing system, and first environment information associated with a current state of the physical environment; selecting a first set of perspective correction operations based at least in part on the first contextual information; generating first corrected image data by performing the first set of perspective correction operations on the image data; and causing presentation of the first corrected image data.
Type: Application
Filed: September 22, 2023
Publication date: April 4, 2024
Inventors: Vincent Chapdelaine-Couture, Emmanuel Piuze-Phaneuf, Julien Monat Rodier, Hermannus J. Damveld, Xiaojin Shi, Sebastian Gaweda
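A toy sketch of context-driven selection of correction operations. The context fields, the rules, and the operation names (planar homography vs. full depth warp, a temporal filter) are hypothetical stand-ins; the abstract only says a set of operations is selected from contextual information.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Context:
    user_is_moving: bool
    app_needs_low_latency: bool
    scene_depth_variance: float

def full_depth_warp(img): return img        # placeholders for real correction passes
def planar_homography(img): return img
def temporal_filter(img): return img

def select_correction_ops(ctx: Context) -> List[Callable]:
    """Pick a cheaper or richer correction pipeline depending on context."""
    ops: List[Callable] = []
    if ctx.app_needs_low_latency or ctx.scene_depth_variance < 0.01:
        ops.append(planar_homography)        # flat scene or tight latency budget
    else:
        ops.append(full_depth_warp)          # complex geometry: per-pixel warp
    if ctx.user_is_moving:
        ops.append(temporal_filter)          # stabilize while the user moves
    return ops

def correct(image, ctx):
    for op in select_correction_ops(ctx):
        image = op(image)
    return image

ctx = Context(user_is_moving=True, app_needs_low_latency=False, scene_depth_variance=0.2)
print([op.__name__ for op in select_correction_ops(ctx)])
```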
-
Publication number: 20240098232
Abstract: In one implementation, a method of performing perspective correction is performed by a device having a three-dimensional device coordinate system and including a first image sensor, a first display, one or more processors, and non-transitory memory. The method includes capturing, using the first image sensor, a first image of a physical environment. The method includes transforming the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective, wherein the second perspective is a first distance away from a location corresponding to a first eye of a user, the first distance being less than a second distance between the first perspective and the location corresponding to the first eye of the user. The method includes displaying, on the first display, the transformed first image of the physical environment.
Type: Application
Filed: September 18, 2023
Publication date: March 21, 2024
Inventors: Emmanuel Piuze-Phaneuf, Hermannus J. Damveld, Jean-Nicola F. Blanchet, Mohamed Selim Ben Himane, Vincent Chapdelaine-Couture, Xiaojin Shi
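The abstract describes warping toward an intermediate viewpoint that is closer to the eye than the camera is, but not all the way. Below is a simplified per-pixel forward warp toward such an intermediate viewpoint; the nearest-neighbor splatting, the pure-translation assumption, and the blend factor alpha are illustrative choices, not the patented method.

```python
import numpy as np

def reproject_image(image, depth, K, T_target_from_cam):
    """Per-pixel forward warp: back-project each pixel with its depth, move it into
    the target viewpoint, and splat it at the reprojected pixel (nearest neighbor)."""
    h, w = depth.shape
    out = np.zeros_like(image)
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    vs, us = np.mgrid[0:h, 0:w]
    z = depth
    x = (us - cx) / fx * z
    y = (vs - cy) / fy * z
    pts = np.stack([x, y, z, np.ones_like(z)], axis=-1) @ T_target_from_cam.T
    u2 = np.round(pts[..., 0] / pts[..., 2] * fx + cx).astype(int)
    v2 = np.round(pts[..., 1] / pts[..., 2] * fy + cy).astype(int)
    ok = (u2 >= 0) & (u2 < w) & (v2 >= 0) & (v2 < h) & (pts[..., 2] > 0)
    out[v2[ok], u2[ok]] = image[vs[ok], us[ok]]
    return out

def partial_pov_correction(image, depth, K, cam_pos, eye_pos, alpha=0.7):
    """Warp toward an intermediate viewpoint `alpha` of the way from camera to eye,
    trading some residual parallax error for fewer disocclusion holes."""
    target_pos = (1 - alpha) * np.asarray(cam_pos) + alpha * np.asarray(eye_pos)
    T = np.eye(4)
    T[:3, 3] = np.asarray(cam_pos) - target_pos   # pure translation, no rotation assumed
    return reproject_image(image, depth, K, T)

rng = np.random.default_rng(3)
img = rng.uniform(0, 1, (120, 160, 3))
depth = np.full((120, 160), 2.0)
K = np.array([[150.0, 0.0, 80.0], [0.0, 150.0, 60.0], [0.0, 0.0, 1.0]])
out = partial_pov_correction(img, depth, K, cam_pos=[0.0, 0.0, 0.0], eye_pos=[0.04, 0.0, 0.0])
print(out.shape)
```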
-
Publication number: 20240078640
Abstract: In one implementation, a method of performing perspective correction is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of a physical environment. The method includes obtaining a depth map including a plurality of depths respectively associated with a plurality of pixels of the image of the physical environment. The method includes smoothing the depth map based on a world-fixed vector. The method includes transforming, using the one or more processors, the image of the physical environment based on the smoothed depth map. The method includes displaying, on the display, the transformed image.
Type: Application
Filed: September 1, 2023
Publication date: March 7, 2024
Inventor: Emmanuel Piuze-Phaneuf
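The abstract does not spell out how the world-fixed vector is used. One plausible reading, sketched here, is directional smoothing of the depth map along the image projection of a world-fixed direction such as gravity, so the smoothing is stable under head roll. The projection convention, the gravity choice, and the 1-D box filter are assumptions.

```python
import numpy as np

def project_world_vector(world_vec, R_cam_from_world):
    """Project a world-fixed 3D direction (e.g., gravity) into a 2D image direction."""
    v_cam = R_cam_from_world @ np.asarray(world_vec, dtype=float)
    d = v_cam[:2]
    n = np.linalg.norm(d)
    return d / n if n > 1e-9 else np.array([0.0, 1.0])

def smooth_depth_along_direction(depth, direction_uv, taps=7):
    """1-D box smoothing of the depth map along `direction_uv`, averaging samples at
    integer offsets along that direction (nearest-neighbor lookups, edges clamped)."""
    h, w = depth.shape
    vs, us = np.mgrid[0:h, 0:w]
    acc = np.zeros_like(depth, dtype=float)
    half = taps // 2
    for k in range(-half, half + 1):
        u = np.clip(np.round(us + k * direction_uv[0]).astype(int), 0, w - 1)
        v = np.clip(np.round(vs + k * direction_uv[1]).astype(int), 0, h - 1)
        acc += depth[v, u]
    return acc / taps

# Example: smooth along gravity regardless of how the camera is rolled.
R = np.eye(3)                       # assumed camera orientation (identity here)
g_dir = project_world_vector([0, -1, 0], R)
depth = np.random.default_rng(0).uniform(0.5, 3.0, (120, 160))
smoothed = smooth_depth_along_direction(depth, g_dir)
print(smoothed.shape)
```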
-
Publication number: 20240078692
Abstract: In one implementation, a method of performing perspective correction is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of a physical environment. The method includes obtaining a depth map including a plurality of depths respectively associated with a plurality of pixels of the image of the physical environment, wherein the depth map includes, for a particular pixel at a particular pixel location representing a dynamic object in the physical environment, a particular depth corresponding to a distance between the image sensor and a static object in the physical environment behind the dynamic object. The method includes transforming, using the one or more processors, the image of the physical environment based on the depth map. The method includes displaying, on the display, the transformed image.
Type: Application
Filed: September 5, 2023
Publication date: March 7, 2024
Inventors: Emmanuel Piuze-Phaneuf, Maxime Meilland
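The key idea here is that pixels covered by a dynamic object (such as a hand) carry the depth of the static scene behind it. A minimal sketch of one way to achieve that: keep a running per-pixel background depth and substitute it under a dynamic-object mask. The running-maximum background model and the mask threshold are illustrative assumptions.

```python
import numpy as np

class StaticDepthModel:
    """Maintain a per-pixel background depth (the farthest depth recently seen), and
    use it in place of the live depth wherever a dynamic object is detected."""
    def __init__(self, shape):
        self.background = np.zeros(shape, dtype=float)

    def update(self, depth):
        # Dynamic foreground objects are closer than the static scene, so the
        # running maximum converges to the static background depth.
        self.background = np.maximum(self.background, depth)

    def composite(self, depth, dynamic_mask):
        return np.where(dynamic_mask, self.background, depth)

# Example: a hand at 0.4 m in front of a wall at 2.0 m.
model = StaticDepthModel((4, 4))
wall = np.full((4, 4), 2.0)
model.update(wall)                               # frame without the hand
live = wall.copy(); live[1:3, 1:3] = 0.4         # hand enters
mask = live < 1.0
print(model.composite(live, mask))               # hand pixels report the wall's depth
```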
-
Publication number: 20240005536
Abstract: In one implementation, a method of determining a display location is performed by a device including one or more processors and non-transitory memory. The method includes obtaining a camera set of two-dimensional coordinates of a user input object in a physical environment. The method includes obtaining depth information of the physical environment excluding the user input object. The method includes transforming the camera set of two-dimensional coordinates into a display set of two-dimensional coordinates based on the depth information of the physical environment excluding the user input object.
Type: Application
Filed: June 21, 2023
Publication date: January 4, 2024
Inventors: Maxime Meilland, Emmanuel Piuze-Phaneuf
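A compact sketch of the coordinate transform this abstract describes: back-project the tracked 2D point using the depth of the scene behind it (not the object's own depth), move it into the display frame, and re-project. The intrinsics and the 2 cm camera-to-display offset are assumed values for illustration.

```python
import numpy as np

def camera_to_display_coords(uv_cam, depth_excl, K_cam, K_disp, T_disp_from_cam):
    """Map a tracked 2D point (e.g., a fingertip seen by the camera) to display
    coordinates, using the depth of the scene *behind* it rather than its own depth."""
    u, v = uv_cam
    z = depth_excl[int(round(v)), int(round(u))]   # background depth, input object excluded
    x = (u - K_cam[0, 2]) / K_cam[0, 0] * z
    y = (v - K_cam[1, 2]) / K_cam[1, 1] * z
    p = T_disp_from_cam @ np.array([x, y, z, 1.0])
    u_d = p[0] / p[2] * K_disp[0, 0] + K_disp[0, 2]
    v_d = p[1] / p[2] * K_disp[1, 1] + K_disp[1, 2]
    return u_d, v_d

# Example with an assumed 2 cm lateral camera-to-display offset.
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
T = np.eye(4); T[0, 3] = 0.02
bg_depth = np.full((480, 640), 1.5)
print(camera_to_display_coords((330, 250), bg_depth, K, K, T))
```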
-
Publication number: 20230377249
Abstract: The method includes: obtaining a first image of an environment from a first image sensor associated with first intrinsic parameters; performing a warping operation on the first image according to perspective offset values to generate a warped first image, in order to account for perspective differences between the first image sensor and a user of an electronic device; determining an occlusion mask, comprising a plurality of holes, based on the warped first image; obtaining a second image of the environment from a second image sensor associated with second intrinsic parameters; normalizing the second image based on a difference between the first and second intrinsic parameters to produce a normalized second image; and filling a first set of one or more holes of the occlusion mask based on the normalized second image to produce a modified first image.
Type: Application
Filed: December 28, 2022
Publication date: November 23, 2023
Inventors: Bertrand Nepveu, Vincent Chapdelaine-Couture, Emmanuel Piuze-Phaneuf
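A toy end-of-pipeline sketch for the hole-filling part of this abstract: detect the pixels the warp left empty, crudely normalize the second camera's image to the first (a stand-in for compensating differing intrinsics/exposure), and patch the holes from it. The zero-pixel hole convention and mean-based gain matching are assumptions for illustration.

```python
import numpy as np

def occlusion_mask(warped):
    """Pixels left empty by the forward warp (disocclusion 'holes')."""
    return np.all(warped == 0, axis=-1)

def normalize_to_reference(secondary, reference, holes):
    """Crude photometric normalization: scale the secondary image so its mean matches
    the warped primary image over the pixels that are not holes."""
    scale = reference[~holes].mean() / max(secondary[~holes].mean(), 1e-6)
    return np.clip(secondary * scale, 0, 1)

def fill_holes(warped, secondary_norm, holes):
    out = warped.copy()
    out[holes] = secondary_norm[holes]
    return out

# Example: a warped primary image with one hole, patched from a darker secondary camera.
warped = np.full((4, 4, 3), 0.8); warped[1, 1] = 0.0
secondary = np.full((4, 4, 3), 0.4)
holes = occlusion_mask(warped)
print(fill_holes(warped, normalize_to_reference(secondary, warped, holes), holes)[1, 1])
```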
-
Patent number: 11656457
Abstract: In one implementation, a method includes obtaining an image. The method includes splitting the image to produce a high-frequency component image and a low-frequency component image. The method includes downsampling the low-frequency component image to generate a downsampled low-frequency component image. The method includes correcting color aberration of the downsampled low-frequency component image to generate a color-corrected downsampled low-frequency component image. The method includes upsampling the color-corrected downsampled low-frequency component image to generate a color-corrected low-frequency component image. The method includes combining the color-corrected low-frequency component image and the high-frequency component image to generate a color-corrected version of the image.
Type: Grant
Filed: August 3, 2022
Date of Patent: May 23, 2023
Assignee: APPLE INC.
Inventors: Zahra Sadeghipoor Kermani, Sheng Lin, Emmanuel Piuze Phaneuf, Ryan Joseph Dunn, Joachim Michael Deguara, Farhan A. Baqai
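This abstract, which also appears in publication 20220382045 and patent 11442266 below, describes a split/downsample/correct/upsample/recombine pipeline. The sketch below follows those steps with deliberately crude stand-ins: a block-average low-pass, nearest-neighbor upsampling, and a per-channel gain as the "color correction". All of those choices are assumptions; the patent does not specify them.

```python
import numpy as np

def split_low_high(img, k=4):
    """Low-pass by k x k block averaging; return the small low-frequency image and
    its full-resolution (nearest-neighbor) reconstruction."""
    h, w = img.shape[:2]
    hk, wk = h // k, w // k
    low_small = img[:hk * k, :wk * k].reshape(hk, k, wk, k, -1).mean(axis=(1, 3))
    low_full = np.repeat(np.repeat(low_small, k, axis=0), k, axis=1)
    return low_small, low_full

def correct_color_lowres(low_small):
    """Placeholder chromatic correction applied only at low resolution (an assumed
    per-channel gain); the actual correction is not specified by the abstract."""
    gains = np.array([1.0, 1.0, 0.92])
    return low_small * gains

def color_correct(image, k=4):
    low_small, low_full = split_low_high(image, k)
    high = image[:low_full.shape[0], :low_full.shape[1]] - low_full   # high-frequency residual
    corrected_small = correct_color_lowres(low_small)                 # correct at low resolution
    corrected_full = np.repeat(np.repeat(corrected_small, k, axis=0), k, axis=1)  # upsample
    return np.clip(corrected_full + high, 0, 1)                       # recombine with detail

img = np.random.default_rng(1).uniform(0, 1, (64, 64, 3))
print(color_correct(img).shape)
```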
-
Publication number: 20220382045
Abstract: In one implementation, a method includes obtaining an image. The method includes splitting the image to produce a high-frequency component image and a low-frequency component image. The method includes downsampling the low-frequency component image to generate a downsampled low-frequency component image. The method includes correcting color aberration of the downsampled low-frequency component image to generate a color-corrected downsampled low-frequency component image. The method includes upsampling the color-corrected downsampled low-frequency component image to generate a color-corrected low-frequency component image. The method includes combining the color-corrected low-frequency component image and the high-frequency component image to generate a color-corrected version of the image.
Type: Application
Filed: August 3, 2022
Publication date: December 1, 2022
Inventors: Zahra Sadeghipoor Kermani, Sheng Lin, Emmanuel Piuze Phaneuf, Ryan Joseph Dunn, Joachim Michael Deguara, Farhan A. Baqai
-
Patent number: 11442266
Abstract: In one implementation, a method includes obtaining an image. The method includes splitting the image to produce a high-frequency component image and a low-frequency component image. The method includes downsampling the low-frequency component image to generate a downsampled low-frequency component image. The method includes correcting color aberration of the downsampled low-frequency component image to generate a color-corrected downsampled low-frequency component image. The method includes upsampling the color-corrected downsampled low-frequency component image to generate a color-corrected low-frequency component image. The method includes combining the color-corrected low-frequency component image and the high-frequency component image to generate a color-corrected version of the image.
Type: Grant
Filed: June 19, 2020
Date of Patent: September 13, 2022
Assignee: APPLE INC.
Inventors: Zahra Sadeghipoor Kermani, Sheng Lin, Emmanuel Piuze Phaneuf, Ryan Joseph Dunn, Joachim Michael Deguara, Farhan A. Baqai
-
Patent number: 11367166
Abstract: In one implementation, a method includes obtaining an image. The method includes correcting color aberration of the image by, for a particular pixel of the image: determining one or more chromatic characteristic values for the particular pixel; determining a likelihood that the particular pixel exhibits color aberration based on the one or more chromatic characteristic values; generating a filtered version of the particular pixel by filtering the particular pixel using pixels in a neighborhood surrounding the particular pixel; and generating a color-corrected version of the particular pixel based on the likelihood that the particular pixel exhibits color aberration and the filtered version of the particular pixel.
Type: Grant
Filed: June 19, 2020
Date of Patent: June 21, 2022
Assignee: APPLE INC.
Inventors: Zahra Sadeghipoor Kermani, Sheng Lin, Emmanuel Piuze Phaneuf, Ryan Joseph Dunn, Joachim Michael Deguara, Farhan A. Baqai
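A minimal sketch of the per-pixel scheme this abstract outlines: compute a chromatic characteristic, turn it into a likelihood of aberration, filter each pixel over its neighborhood, and blend between the original and filtered values by that likelihood. The specific chroma heuristic and the box filter are assumptions, not the patented formulas.

```python
import numpy as np

def fringing_likelihood(image):
    """Heuristic likelihood of color aberration: strong disagreement between the
    green channel and the red/blue channels (purple/green fringing)."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return np.clip(np.abs(r - g) + np.abs(b - g), 0.0, 1.0)

def neighborhood_filter(image, k=2):
    """Replace each pixel by the mean of its (2k+1) x (2k+1) neighborhood (edge-clamped)."""
    h, w, _ = image.shape
    out = np.zeros_like(image)
    for i in range(h):
        for j in range(w):
            i0, i1 = max(0, i - k), min(h, i + k + 1)
            j0, j1 = max(0, j - k), min(w, j + k + 1)
            out[i, j] = image[i0:i1, j0:j1].mean(axis=(0, 1))
    return out

def correct_color_aberration(image):
    likelihood = fringing_likelihood(image)[..., None]
    filtered = neighborhood_filter(image)
    # The more a pixel looks like fringing, the more of the filtered value it receives.
    return (1.0 - likelihood) * image + likelihood * filtered

img = np.random.default_rng(2).uniform(0, 1, (32, 32, 3)).astype(np.float32)
print(correct_color_aberration(img).shape)
```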
-
Publication number: 20190370942
Abstract: Systems and methods are disclosed for correcting red-eye artifacts in a target image of a subject. Images captured by a camera, including a raw image, are used to generate the target image. An eye region of the target image is modulated to correct for the red-eye artifacts, wherein correction is carried out based on information extracted from at least one of the raw image and the target image. Modulation comprises detecting landmarks associated with the eye region; estimating spectral response of the red-eye artifacts; segmenting an image region of the eye based on the estimated spectral response of the red-eye artifacts and the detected landmarks, forming a repair mask; and modifying an image region associated with the repair mask.
Type: Application
Filed: May 29, 2019
Publication date: December 5, 2019
Inventors: Alexis GATT, David HAYWARD, Emmanuel PIUZE-PHANEUF, Mark ZIMMER, Yingjun BAI, Zhigang FAN
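A toy sketch of the landmark/segmentation/repair-mask flow: score per-pixel "redness" as a stand-in for the estimated spectral response, restrict the segmentation to a radius around given eye landmarks, and pull the red channel down inside the resulting mask. The redness score, the radius, the threshold, and the repair rule are all illustrative assumptions.

```python
import numpy as np

def redness(image):
    """Per-pixel red-eye response: red strongly above the other channels."""
    r, g, b = image[..., 0], image[..., 1], image[..., 2]
    return np.clip(r - 0.5 * (g + b), 0.0, 1.0)

def repair_mask(image, eye_centers, radius=6, thresh=0.25):
    """Segment red pixels, but only within a radius of each detected eye landmark."""
    h, w, _ = image.shape
    vs, us = np.mgrid[0:h, 0:w]
    near_eye = np.zeros((h, w), dtype=bool)
    for (cy, cx) in eye_centers:
        near_eye |= (vs - cy) ** 2 + (us - cx) ** 2 <= radius ** 2
    return near_eye & (redness(image) > thresh)

def repair_red_eye(image, mask):
    """Pull the red channel down toward the green/blue average inside the repair mask."""
    out = image.copy()
    gb = 0.5 * (image[..., 1] + image[..., 2])
    out[..., 0] = np.where(mask, gb, image[..., 0])
    return out

# Example: a gray face patch with one bright-red pupil centered at (10, 10).
img = np.full((24, 24, 3), 0.5); img[8:13, 8:13, 0] = 0.9; img[8:13, 8:13, 1:] = 0.2
mask = repair_mask(img, [(10, 10)])
print(repair_red_eye(img, mask)[10, 10])   # red pulled down to ~0.2
```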
-
Patent number: 10109081
Abstract: There is described herein a method for recovering missing information in diffusion magnetic resonance imaging (dMRI) data. The data are modeled according to the theory of moving frames and regions where frame information is missing are reconstructed by performing diffusions into the regions. Local orthogonal frames computed along the boundary of the regions are rotated into the regions. Connection parameters are estimated at each new data point obtained by a preceding rotation, for application to a subsequent rotation.
Type: Grant
Filed: May 13, 2016
Date of Patent: October 23, 2018
Assignee: THE ROYAL INSTITUTION FOR THE ADVANCEMENT OF LEARNING / MCGILL UNIVERSITY
Inventors: Kaleem Siddiqi, Emmanuel Piuze-Phaneuf, Jon Sporring
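This abstract (shared with publication 20160335786 below) describes propagating orthonormal frames from the boundary of a data hole into the hole, applying a small rotation per step whose connection parameters are re-estimated as new points are filled. A highly simplified one-path sketch is shown below: the Rodrigues rotation is standard, but the fixed path, the single angle triple, and the decay used as a stand-in for re-estimating connection parameters are all assumptions, not the patented estimator.

```python
import numpy as np

def rotation_from_axis_angle(axis, angle):
    """Rodrigues' formula: rotation matrix for a unit axis and an angle (radians)."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def propagate_frames_into_gap(boundary_frame, connection_angles, steps):
    """Diffuse frame data into a region with no measurements: starting from an
    orthonormal frame on the boundary, repeatedly rotate it by the small rotation
    encoded by the connection parameters at each new point."""
    frames = [boundary_frame]
    angles = np.array(connection_angles, dtype=float)
    for _ in range(steps):
        angle = np.linalg.norm(angles)
        axis = angles / angle if angle > 1e-12 else np.array([1.0, 0.0, 0.0])
        frames.append(rotation_from_axis_angle(axis, angle) @ frames[-1])
        angles *= 0.95   # stand-in for re-estimating connection parameters at the new point
    return frames

# Example: a frame turning ~2 degrees per step about the z axis as it enters the gap.
frames = propagate_frames_into_gap(np.eye(3), [0.0, 0.0, np.deg2rad(2.0)], steps=10)
print(np.round(frames[-1], 3))
```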
-
Patent number: 9569887
Abstract: The identification and determination of aspects of the construction of a patient's heart is important for cardiologists and cardiac surgeons in the diagnosis, analysis, treatment, and management of cardiac patients. For example, minimally invasive heart surgery demands knowledge of heart geometry, heart fiber orientation, etc. While medical imaging has advanced significantly, accurate three-dimensional (3D) rendering from a series of imaging slices remains a critical step in the planning and execution of patient treatment. Embodiments of the invention use diffusion MRI data to construct 3D renderings by iterating connection forms derived from arbitrary smooth frame fields, not only corroborating biological measurements of heart fiber orientation but also providing novel biological views of heart fiber orientation.
Type: Grant
Filed: May 14, 2015
Date of Patent: February 14, 2017
Assignee: The Royal Institution for the Advancement of Learning / McGill University
Inventors: Kaleem Siddiqi, Emmanuel Piuze-Phaneuf, Jon Sporring
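The renderings described here (and in publication 20150332483 below) are built by iterating along a frame field whose first axis follows the fiber direction. The toy sketch below traces a streamline through a synthetic frame field whose fiber axis rotates with wall depth, loosely mimicking the transmural helix-angle variation of cardiac fibers; the field, the helix rate, and the Euler-style integration are illustrative assumptions, not the patented connection-form fitting.

```python
import numpy as np

def fiber_direction(point, helix_rate=np.deg2rad(60.0)):
    """Synthetic cardiac-like frame field: the fiber axis rotates in the x-y plane
    as a function of wall depth z (a crude stand-in for a smooth frame field)."""
    angle = helix_rate * point[2]
    return np.array([np.cos(angle), np.sin(angle), 0.05])

def trace_fiber(start, n_steps=200, step=0.02):
    """Iteratively follow the frame field's fiber axis to produce a 3D streamline,
    the kind of curve such renderings are built from."""
    pts = [np.asarray(start, dtype=float)]
    for _ in range(n_steps):
        d = fiber_direction(pts[-1])
        pts.append(pts[-1] + step * d / np.linalg.norm(d))
    return np.array(pts)

streamline = trace_fiber([1.0, 0.0, 0.0])
print(streamline.shape, np.round(streamline[-1], 3))   # (201, 3) and the end point
```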
-
Publication number: 20160335786
Abstract: There is described herein a method for recovering missing information in diffusion magnetic resonance imaging (dMRI) data. The data are modeled according to the theory of moving frames and regions where frame information is missing are reconstructed by performing diffusions into the regions. Local orthogonal frames computed along the boundary of the regions are rotated into the regions. Connection parameters are estimated at each new data point obtained by a preceding rotation, for application to a subsequent rotation.
Type: Application
Filed: May 13, 2016
Publication date: November 17, 2016
Inventors: Kaleem SIDDIQI, Emmanuel PIUZE-PHANEUF, Jon SPORRING
-
Publication number: 20150332483
Abstract: The identification and determination of aspects of the construction of a patient's heart is important for cardiologists and cardiac surgeons in the diagnosis, analysis, treatment, and management of cardiac patients. For example, minimally invasive heart surgery demands knowledge of heart geometry, heart fiber orientation, etc. Whilst medical imaging has advanced significantly, accurate three-dimensional (3D) rendering from a series of imaging slices remains a critical step in the planning and execution of patient treatment. Embodiments of the invention use diffusion MRI data to construct 3D renderings by iterating connection forms derived from arbitrary smooth frame fields, not only corroborating biological measurements of heart fiber orientation but also providing novel biological views of heart fiber orientation.
Type: Application
Filed: May 14, 2015
Publication date: November 19, 2015
Inventors: KALEEM SIDDIQI, EMMANUEL PIUZE-PHANEUF, JON SPORRING, PETER SAVADJIEV, GUSTAV STRIJKERS, STEVEN ZUCKER