Patents by Inventor Vincent CHAPDELAINE-COUTURE

Vincent CHAPDELAINE-COUTURE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240112303
    Abstract: In some implementations, a method includes: obtaining image data associated with a physical environment; obtaining first contextual information including at least one of first user information associated with a current state of a user of the computing system, first application information associated with a first application being executed by the computing system, and first environment information associated with a current state of the physical environment; selecting a first set of perspective correction operations based at least in part on the first contextual information; generating first corrected image data by performing the first set of perspective correction operations on the image data; and causing presentation of the first corrected image data.
    Type: Application
    Filed: September 22, 2023
    Publication date: April 4, 2024
    Inventors: Vincent Chapdelaine-Couture, Emmanuel Piuze-Phaneuf, Julien Monat Rodier, Hermannus J. Damveld, Xiaojin Shi, Sebastian Gaweda
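As a rough illustration of the selection step described in the abstract for publication 20240112303 above, the sketch below picks a set of perspective-correction operations from contextual signals and applies them in order. All names, the context fields, and the two placeholder operations are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch: choose perspective-correction operations from context.
from dataclasses import dataclass
from typing import Callable, List
import numpy as np

@dataclass
class Context:
    user_is_moving: bool          # first user information (current user state), assumed
    app_needs_low_latency: bool   # first application information, assumed
    scene_has_near_objects: bool  # first environment information, assumed

def planar_homography(img: np.ndarray) -> np.ndarray:
    return img   # placeholder for a cheap planar correction

def depth_reprojection(img: np.ndarray) -> np.ndarray:
    return img   # placeholder for a more accurate depth-based warp

def select_corrections(ctx: Context) -> List[Callable[[np.ndarray], np.ndarray]]:
    """Pick a set of perspective-correction operations from the context."""
    if ctx.app_needs_low_latency or ctx.user_is_moving:
        return [planar_homography]     # cheap, low-latency path
    if ctx.scene_has_near_objects:
        return [depth_reprojection]    # more accurate path for nearby geometry
    return [planar_homography]

def correct(image: np.ndarray, ctx: Context) -> np.ndarray:
    out = image
    for op in select_corrections(ctx):
        out = op(out)                  # apply the selected operations in order
    return out
```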
  • Publication number: 20240098232
    Abstract: In one implementation, a method of performing perspective correction is performed by a device having a three-dimensional device coordinate system and including a first image sensor, a first display, one or more processors, and non-transitory memory. The method includes capturing, using the first image sensor, a first image of a physical environment. The method includes transforming the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective, wherein the second perspective is a first distance away from a location corresponding to a first eye of a user less than a second distance between the first perspective and the location corresponding to the first eye of the user. The method includes displaying, on the first display, the transformed first image of the physical environment.
    Type: Application
    Filed: September 18, 2023
    Publication date: March 21, 2024
    Inventors: Emmanuel Piuze-Phaneuf, Hermannus J. Damveld, Jean-Nicola F. Blanchet, Mohamed Selim Ben Himane, Vincent Chapdelaine-Couture, Xiaojin Shi
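The abstract for publication 20240098232 above centers on rendering from a second perspective that lies closer to the user's eye than the image sensor does. A minimal sketch of choosing such an intermediate viewpoint follows; the coordinates, the blend factor, and the omitted image warp are assumptions.

```python
# Minimal sketch (assumed coordinates and names) of picking an intermediate
# viewpoint between the camera and the eye; the warp itself is not shown.
import numpy as np

def intermediate_perspective(camera_pos: np.ndarray,
                             eye_pos: np.ndarray,
                             alpha: float = 0.7) -> np.ndarray:
    """Return a viewpoint on the segment from camera toward eye.

    For any alpha in (0, 1], the returned point is closer to the eye than
    the camera is: its eye distance is (1 - alpha) * |camera - eye|.
    """
    return camera_pos + alpha * (eye_pos - camera_pos)

camera = np.array([0.00, 0.03, 0.05])   # illustrative camera offset from the eye, metres
eye = np.zeros(3)
target = intermediate_perspective(camera, eye)
assert np.linalg.norm(target - eye) < np.linalg.norm(camera - eye)
```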
  • Patent number: 11880911
    Abstract: The present disclosure relates to techniques for transitioning between imagery and sounds of two different environments, such as a virtual environment and a real environment. A view of a first environment and audio associated with the first environment are provided. In response to detecting a transition event, a view of the first environment combined with a second environment is provided. The combined view includes imagery of the first environment at a first visibility value and imagery of the second environment at a second visibility value. In addition, in response to detecting a transition event, the first environment audio is mixed with audio associated with the second environment.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: January 23, 2024
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
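Patent 11880911 above describes blending the imagery and audio of two environments with complementary weights after a transition event. The snippet below is a hedged sketch of that crossfade idea; buffer shapes, the weighting scheme, and the example values are illustrative assumptions.

```python
# Illustrative crossfade between a first (e.g. virtual) and second (e.g. real)
# environment, blending imagery and audio by complementary weights.
import numpy as np

def blend_views(first_img, second_img, first_visibility):
    second_visibility = 1.0 - first_visibility          # complementary weight
    return first_visibility * first_img + second_visibility * second_img

def mix_audio(first_audio, second_audio, first_gain):
    return first_gain * first_audio + (1.0 - first_gain) * second_audio

# After a transition event, the first environment's weight would ramp down over time.
frame = blend_views(np.ones((4, 4, 3)), np.zeros((4, 4, 3)), first_visibility=0.25)
chunk = mix_audio(np.zeros(1024), np.ones(1024), first_gain=0.25)
```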
  • Patent number: 11838486
    Abstract: In one implementation, a method of performing perspective correction is performed at a head-mounted device including one or more processors, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, a plurality of images of a scene from a respective plurality of perspectives. The method includes capturing, using the image sensor, a current image of the scene from a current perspective. The method includes obtaining a depth map of the current image of the scene. The method includes transforming, using the one or more processors, the current image of the scene based on the depth map, a difference between the current perspective of the image sensor and a current perspective of a user, and at least one of the plurality of images of the scene from the respective plurality of perspectives. The method includes displaying, on the display, the transformed image.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: December 5, 2023
    Assignee: APPLE INC.
    Inventors: Samer Samir Barakat, Bertrand Nepveu, Vincent Chapdelaine-Couture
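Patent 11838486 above transforms the current image using a depth map and the offset between the sensor's perspective and the user's. Below is a simplified, assumed sketch of one such depth-based reprojection (forward splatting with a pinhole model); the intrinsics, the camera-to-eye transform, and hole handling are placeholders rather than the patented method.

```python
# Rough sketch of reprojecting an image from the sensor viewpoint to the eye
# viewpoint using a per-pixel depth map. All parameters are illustrative.
import numpy as np

def reproject(image, depth, K, T_cam_to_eye):
    """Warp `image` (H,W,3) with per-pixel `depth` (H,W) into the eye view."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).T  # 3 x N pixels
    rays = np.linalg.inv(K) @ pix                       # back-project to camera rays
    pts = rays * depth.reshape(1, -1)                   # 3D points in the camera frame
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])
    pts_eye = (T_cam_to_eye @ pts_h)[:3]                # move points into the eye frame
    proj = K @ pts_eye
    uv = (proj[:2] / np.clip(proj[2], 1e-6, None)).round().astype(int)
    out = np.zeros_like(image)
    valid = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h)
    out[uv[1, valid], uv[0, valid]] = image.reshape(-1, 3)[valid]  # forward splat
    return out                                          # uncovered pixels remain holes
```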
  • Publication number: 20230386095
    Abstract: The present disclosure relates to techniques for presenting a combined view of a virtual environment and a real environment in response to detecting a transition event associated with an object in the real environment. While presenting the combined view, if an input of a first type is detected, the combined view is adjusted by increasing the visibility of imagery of the virtual environment and decreasing the visibility of imagery of the real environment. If an input of a second type is detected, the combined view is adjusted by decreasing the visibility of the imagery of the virtual environment and increasing the visibility of the imagery of the real environment.
    Type: Application
    Filed: August 14, 2023
    Publication date: November 30, 2023
    Inventors: Bertrand NEPVEU, Sandy J. CARTER, Vincent CHAPDELAINE-COUTURE, Marc-Andre CHENIER, Yan COTE, Simon FORTIN-DESCHÊNES, Anthony GHANNOUM, Tomlinson HOLMAN, Marc-Olivier LEPAGE, Yves MILLETTE
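As a tiny illustration of the input-driven adjustment in publication 20230386095 above, the sketch below nudges the virtual environment's visibility up or down depending on the input type; the step size and input labels are assumptions.

```python
# Hypothetical adjustment of a combined virtual/real view by input type.
def adjust_visibility(virtual_visibility: float, input_type: str, step: float = 0.1) -> float:
    if input_type == "increase_virtual":
        virtual_visibility = min(1.0, virtual_visibility + step)   # first input type
    elif input_type == "increase_real":
        virtual_visibility = max(0.0, virtual_visibility - step)   # second input type
    return virtual_visibility   # real-environment visibility is the complement, 1 - virtual
```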
  • Publication number: 20230377249
    Abstract: The method includes: obtaining a first image of an environment from a first image sensor associated with first intrinsic parameters; performing a warping operation on the first image according to perspective offset values to generate a warped first image in order to account for perspective differences between the first image sensor and a user of the electronic device; determining an occlusion mask based on the warped first image that includes a plurality of holes; obtaining a second image of the environment from a second image sensor associated with second intrinsic parameters; normalizing the second image based on a difference between the first and second intrinsic parameters to produce a normalized second image; and filling a first set of one or more holes of the occlusion mask based on the normalized second image to produce a modified first image.
    Type: Application
    Filed: December 28, 2022
    Publication date: November 23, 2023
    Inventors: Bertrand Nepveu, Vincent Chapdelaine-Couture, Emmanuel Piuze-Phaneuf
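Publication 20230377249 above combines a warped first image, an occlusion mask of its holes, and a second image normalized toward the first sensor's intrinsics. The following is a hedged sketch of that hole-filling flow; the zero-pixel hole test and the focal-length-only normalization are simplifications, not the claimed method.

```python
# Sketch of filling warp holes from a second, intrinsics-normalized camera image.
import numpy as np

def occlusion_mask(warped: np.ndarray) -> np.ndarray:
    """True where the warped first image has no data (holes)."""
    return (warped == 0).all(axis=-1)

def normalize_second_image(second: np.ndarray, f_second: float, f_first: float) -> np.ndarray:
    """Rescale the second image so its effective focal length matches the first
    (nearest-neighbour resample about the image origin; a simplification)."""
    scale = f_first / f_second
    h, w = second.shape[:2]
    ys = np.clip((np.arange(h) / scale).astype(int), 0, h - 1)
    xs = np.clip((np.arange(w) / scale).astype(int), 0, w - 1)
    return second[ys][:, xs]

def fill_holes(warped_first: np.ndarray, second_normalized: np.ndarray) -> np.ndarray:
    holes = occlusion_mask(warped_first)
    out = warped_first.copy()
    out[holes] = second_normalized[holes]   # fill a set of holes from the second camera
    return out
```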
  • Patent number: 11790569
    Abstract: The present disclosure relates to techniques for inserting imagery from a real environment into a virtual environment. While presenting (e.g., displaying) the virtual environment at an electronic device, a proximity of the electronic device to a physical object located in a real environment is detected. In response to detecting that the proximity of the electronic device to the physical object is less than a first threshold distance, imagery of the physical object is isolated from other imagery of the real environment. The isolated imagery of the physical object is inserted into the virtual environment at a location corresponding to the location of the physical object in the real environment. The imagery of the physical object has a first visibility value associated with the proximity of the electronic device to the physical object.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: October 17, 2023
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
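For patent 11790569 above, the sketch below illustrates compositing a physical object's imagery into a virtual frame once the device is within a threshold distance, with visibility tied to proximity. The threshold value, the visibility curve, and the stubbed segmentation mask are assumptions.

```python
# Illustrative proximity-gated insertion of a physical object into a virtual frame.
import numpy as np

THRESHOLD_M = 1.0   # first threshold distance (illustrative value)

def visibility_from_distance(d: float) -> float:
    """0 at the threshold distance, approaching 1 as the object gets closer."""
    return float(np.clip(1.0 - d / THRESHOLD_M, 0.0, 1.0))

def insert_object(virtual_frame, camera_frame, object_mask, distance_m):
    if distance_m >= THRESHOLD_M:
        return virtual_frame                       # too far: show the virtual view only
    v = visibility_from_distance(distance_m)
    out = virtual_frame.astype(float)
    out[object_mask] = (1 - v) * out[object_mask] + v * camera_frame[object_mask]
    return out                                     # object blended in at its real location
```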
  • Publication number: 20230324684
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Application
    Filed: June 14, 2023
    Publication date: October 12, 2023
    Inventors: Simon Fortin-Deschenes, Vincent Chapdelaine-Couture, Yan Cote, Anthony Ghannoum
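The tracking abstract for publication 20230324684 above (shared by several related entries in this list) is high-level, so the sketch below only gestures at the described split: fast inertial prediction corrected by slower computer-vision pose estimates. The filter structure and gains are illustrative assumptions, not the disclosed system.

```python
# Very small sketch of fusing high-rate IMU integration with low-rate
# vision-based pose corrections; gains and rates are assumptions.
import numpy as np

class FusedPoseTracker:
    def __init__(self, blend: float = 0.02):
        self.position = np.zeros(3)
        self.velocity = np.zeros(3)
        self.blend = blend                      # weight given to vision corrections

    def on_imu(self, accel_world: np.ndarray, dt: float):
        """High-rate prediction step from inertial data."""
        self.velocity += accel_world * dt
        self.position += self.velocity * dt

    def on_vision_pose(self, position_cv: np.ndarray):
        """Low-rate correction from the computer-vision tracker."""
        self.position = (1 - self.blend) * self.position + self.blend * position_cv
```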
  • Patent number: 11778154
    Abstract: A Head-Mounted Display with camera sensors to perform chroma keying in a mixed reality context is presented. Low latency is achieved by embedding the processing in the HMD itself: specifically, formatting camera images, detecting the selected color range, and compositing with the virtual content.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: October 3, 2023
    Assignee: APPLE INC.
    Inventors: Vincent Chapdelaine-Couture, Anthony Ghannoum, Yan Cote, Irving Lustigman, Marc-Andre Chenier, Simon Fortin-Deschenes, Bertrand Nepveu
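Patent 11778154 above describes detecting a selected colour range and compositing virtual content in its place. A minimal chroma-key sketch follows; the green-screen range and the hard (non-feathered) key are assumptions.

```python
# Minimal chroma-key composite: camera pixels inside a selected colour range
# are replaced by virtual content. The range values are only an example.
import numpy as np

def chroma_key(camera_rgb, virtual_rgb, lo=(0, 120, 0), hi=(100, 255, 100)):
    lo, hi = np.array(lo), np.array(hi)
    key = ((camera_rgb >= lo) & (camera_rgb <= hi)).all(axis=-1)   # keyed pixels
    out = camera_rgb.copy()
    out[key] = virtual_rgb[key]        # show virtual content where the key colour was
    return out
```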
  • Patent number: 11693242
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Grant
    Filed: October 29, 2021
    Date of Patent: July 4, 2023
    Assignee: APPLE INC.
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum
  • Patent number: 11580652
    Abstract: One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans to one another and use the transforms to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: February 14, 2023
    Assignee: Apple Inc.
    Inventors: Vincent Chapdelaine-Couture, Mohamed Selim Ben Himane
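Patent 11580652 above relates scans captured in different lighting conditions through a transform into a single coordinate system. The sketch below shows one standard way to estimate such a rigid transform from corresponding 3D points (Kabsch alignment); using it here, and the correspondence step it presumes, are assumptions rather than the patented procedure.

```python
# Kabsch-style rigid alignment between corresponding 3D points from two scans,
# so both scans (and content anchored to the object) share one coordinate system.
import numpy as np

def rigid_transform(pts_a: np.ndarray, pts_b: np.ndarray):
    """Return R, t such that pts_b ~= R @ pts_a + t (pts are N x 3 correspondences)."""
    ca, cb = pts_a.mean(axis=0), pts_b.mean(axis=0)
    H = (pts_a - ca).T @ (pts_b - cb)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cb - R @ ca
    return R, t
```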
  • Publication number: 20220050290
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Application
    Filed: October 29, 2021
    Publication date: February 17, 2022
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum
  • Patent number: 11199706
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Grant
    Filed: September 25, 2020
    Date of Patent: December 14, 2021
    Assignee: APPLE INC.
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum
  • Publication number: 20210352255
    Abstract: The present disclosure relates to techniques for transitioning between imagery and sounds of two different environments, such as a virtual environment and a real environment. A view of a first environment and audio associated with the first environment are provided. In response to detecting a transition event, a view of the first environment combined with a second environment is provided. The combined view includes imagery of the first environment at a first visibility value and imagery of the second environment at a second visibility value. In addition, in response to detecting a transition event, the first environment audio is mixed with audio associated with the second environment.
    Type: Application
    Filed: September 6, 2019
    Publication date: November 11, 2021
    Inventors: Bertrand NEPVEU, Sandy J. CARTER, Vincent CHAPDELAINE-COUTURE, Marc-Andre CHENIER, Yan COTE, Simon FORTIN-DESCHÊNES, Anthony GHANNOUM, Tomlinson HOLMAN, Marc-Olivier LEPAGE, Yves MILLETTE
  • Publication number: 20210279898
    Abstract: One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans to one another and use the transforms to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
    Type: Application
    Filed: May 21, 2021
    Publication date: September 9, 2021
    Inventors: Vincent Chapdelaine-Couture, Mohamed Selim Ben Himane
  • Patent number: 11100659
    Abstract: One exemplary implementation facilitates object detection using multiple scans of an object in different conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans to one another and use the transforms to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the condition in which the physical object is later detected.
    Type: Grant
    Filed: July 1, 2019
    Date of Patent: August 24, 2021
    Assignee: Apple Inc.
    Inventors: Vincent Chapdelaine-Couture, Mohamed Selim Ben Himane
  • Publication number: 20210192802
    Abstract: The present disclosure relates to techniques for inserting imagery from a real environment into a virtual environment. While presenting (e.g., displaying) the virtual environment at an electronic device, a proximity of the electronic device to a physical object located in a real environment is detected. In response to detecting that the proximity of the electronic device to the physical object is less than a first threshold distance, imagery of the physical object is isolated from other imagery of the real environment. The isolated imagery of the physical object is inserted into the virtual environment at a location corresponding to the location of the physical object in the real environment. The imagery of the physical object has a first visibility value associated with the proximity of the electronic device to the physical object.
    Type: Application
    Filed: September 6, 2019
    Publication date: June 24, 2021
    Inventors: Bertrand NEPVEU, Sandy J. CARTER, Vincent CHAPDELAINE-COUTURE, Marc-Andre CHENIER, Yan COTE, Simon FORTIN-DESCHÊNES, Anthony GHANNOUM, Tomlinson HOLMAN, Marc-Olivier LEPAGE, Yves MILLETTE
  • Patent number: 10964056
    Abstract: One exemplary implementation involves a pixel-based approach (also referred to as a dense-based approach) to object detection and tracking that can provide more accurate results than a feature-based approach. The efficiency of the detection and tracking is improved by using a reference image of the object that has similar characteristics (e.g., scale, lighting, blur, and the like) as the depiction of the object in the frame. In some implementations, a reference image of an appropriate scale is selected or interpolated based on the scale of the object depicted in the real world image. In other implementations, the real world image is adjusted to better match the reference image. The detection and tracking of the object can be performed with sufficient accuracy and efficiency for computer-generated reality (CGR) and other applications in which it is desirable to detect and track objects in real time.
    Type: Grant
    Filed: May 10, 2019
    Date of Patent: March 30, 2021
    Assignee: Apple Inc.
    Inventors: Vincent Chapdelaine-Couture, Mohamed Selim Ben Himane
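Patent 10964056 above selects or interpolates a reference image whose characteristics match the object's depiction in the current frame. The sketch below shows only the scale-matching selection over a precomputed reference pyramid; the pyramid structure and nearest-scale rule are assumptions.

```python
# Pick the reference image whose scale is closest to the object's apparent
# scale in the current frame, so a dense pixel-wise comparison sees similar images.
from typing import Dict
import numpy as np

def select_reference(pyramid: Dict[float, np.ndarray], apparent_scale: float) -> np.ndarray:
    """`pyramid` maps scale factor -> reference image; return the nearest scale."""
    best = min(pyramid, key=lambda s: abs(s - apparent_scale))
    return pyramid[best]
```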
  • Publication number: 20210011289
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Application
    Filed: September 25, 2020
    Publication date: January 14, 2021
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum
  • Patent number: 10838206
    Abstract: A Head-Mounted Display system together with associated techniques for performing accurate and automatic inside-out positional, user body and environment tracking for virtual or mixed reality are disclosed. The system uses computer vision methods and data fusion from multiple sensors to achieve real-time tracking. High frame rate and low latency are achieved by performing part of the processing on the HMD itself.
    Type: Grant
    Filed: February 20, 2017
    Date of Patent: November 17, 2020
    Assignee: Apple Inc.
    Inventors: Simon Fortin-Deschênes, Vincent Chapdelaine-Couture, Yan Côté, Anthony Ghannoum