Patents by Inventor Bertrand Nepveu

Bertrand Nepveu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250069341
    Abstract: The present disclosure relates to techniques for inserting imagery from a real environment into a virtual environment. While presenting (e.g., displaying) the virtual environment at an electronic device, a proximity of the electronic device to a physical object located in a real environment is detected. In response to detecting that the proximity of the electronic device to the physical object is less than a first threshold distance, imagery of the physical object is isolated from other imagery of the real environment. The isolated imagery of the physical object is inserted into the virtual environment at a location corresponding to the location of the physical object in the real environment. The imagery of the physical object has a first visibility value associated with the proximity of the electronic device to the physical object.
    Type: Application
    Filed: September 13, 2024
    Publication date: February 27, 2025
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
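
The abstract above describes detecting proximity to a physical object and blending its isolated imagery into the virtual environment with a proximity-dependent visibility. A minimal Python sketch of that idea, assuming a linear fade and placeholder scene/imagery types; none of these names or values come from the filing:

```python
def object_visibility(proximity_m: float, threshold_m: float, fade_band_m: float = 0.5) -> float:
    """Map device-to-object proximity to a visibility value in [0, 1].

    Beyond the first threshold distance the object is not shown; inside it,
    visibility ramps up as the device gets closer. The linear ramp and the
    0.5 m fade band are illustrative choices, not taken from the filing.
    """
    if proximity_m >= threshold_m:
        return 0.0
    return min(1.0, (threshold_m - proximity_m) / fade_band_m)


def insert_object(virtual_scene: dict, object_imagery: str, location: tuple, visibility: float) -> dict:
    """Place the isolated imagery of the physical object into the virtual scene
    at the location matching its real-world position, tagged with the visibility
    derived from proximity. The dict-based scene is a stand-in for a real renderer."""
    if visibility > 0.0:
        virtual_scene[location] = {"imagery": object_imagery, "alpha": visibility}
    return virtual_scene


# Example: a desk 0.8 m away with a 1.2 m threshold fades in at alpha 0.8.
alpha = object_visibility(0.8, 1.2)
print(alpha, insert_object({}, "desk_pixels", (2, 1), alpha))
```
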
  • Publication number: 20240386819
    Abstract: A head-mountable display device includes a housing defining a front opening and a rear opening, a display screen disposed in the front opening, a display assembly disposed in the rear opening, a first securement strap coupled to the housing, the first securement strap including a first electronic component, a second securement strap coupled to the housing, the second securement strap including a second electronic component, and a securement band extending between and coupled to the first securement strap and the second securement strap.
    Type: Application
    Filed: May 13, 2024
    Publication date: November 21, 2024
    Inventors: Michael J. Rockwell, Oriel Y. Bergig, Geoffrey Stahl, Thibaut Weise, Peter Kaufmann, Branko Petljanski, Jason L. Slupeiks, Tom Sengelaub, Kathrin Berkner Cieslicki, Yanghai Tsin, Hesam Najafi, Arthur Y. Zhang, Julian Hoenig, Julian Jaede, Yoonhoo Jo, Forrest C. Wang, Bertrand Nepveu, Muhammad F. Hossain, William A. Sorrentino, III, Jonathan Ive, Alan C. Dye, Stephen O. Lemay, Jeffrey M. Faulkner
  • Publication number: 20240385454
    Abstract: A head-mountable display device includes a housing defining a front opening and a rear opening, a display screen disposed in the front opening, a display assembly disposed in the rear opening, a first securement strap coupled to the housing, the first securement strap including a first electronic component, a second securement strap coupled to the housing, the second securement strap including a second electronic component, and a securement band extending between and coupled to the first securement strap and the second securement strap.
    Type: Application
    Filed: May 13, 2024
    Publication date: November 21, 2024
    Inventors: Ricardo J. Motta, Brett D. Miller, Chad A. Bronstein, Bennett S. Wilburn, Branko Petljanski, Mahmut C. Orsan, David W. Lum, Edward A. Valko, Graham B. Myhre, Wonjae Choi, Manohar B. Srikanth, Michael Slootsky, Nicolas P. Bonnier, Jiaying Wu, ZhiBing Ge, William W. Sprague, Bertrand Nepveu, Michael J. Rockwell, Edward S. Huo, Marinus Meursing, Timothy Y. Chang, Katharina Buckl
  • Publication number: 20240386679
    Abstract: A head-mountable display device includes a housing defining a front opening and a rear opening, a display screen disposed in the front opening, a display assembly disposed in the rear opening, a first securement strap coupled to the housing, the first securement strap including a first electronic component, a second securement strap coupled to the housing, the second securement strap including a second electronic component, and a securement band extending between and coupled to the first securement strap and the second securement strap.
    Type: Application
    Filed: May 13, 2024
    Publication date: November 21, 2024
    Inventors: Michael J. Rockwell, Geoffrey Stahl, Thibaut Weise, Peter Kaufmann, Branko Petljanski, Jason L. Slupeiks, Tom Sengelaub, Kathrin Berkner Cieslicki, Yanghai Tsin, Hesam Najafi, Arthur Y. Zhang, Julian Hoenig, Julian Jaede, Jason C. Sauers, James W. Vandyke, Yoonhoo Jo, Forrest C. Wang, Cheng Chen, Graham B. Myhre, Fletcher R. Rothkopf, Marinus Meursing, Phil M. Hobson, Jan K. Quijalvo, Jia Tao, Ivan S. Maric, Jeremy C. Franklin, Wey-Jiun Lin, Bertrand Nepveu, Muhammad F. Hossain, William A. Sorrentino, III, Jonathan Ive, Alan C. Dye, Stephen O. Lemay
  • Patent number: 12125123
    Abstract: In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
    Type: Grant
    Filed: June 14, 2023
    Date of Patent: October 22, 2024
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
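
Read loosely, the granted claims above estimate how long compositing setup will take and, when the estimate exceeds a time budget, reuse the previous render of the virtual content instead of re-rendering for the current camera pose. A hedged Python sketch under assumed names and a made-up linear cost model:

```python
def estimate_setup_ms(complexity: float, virtual_content_cost: float) -> float:
    """Assumed linear model: setup time grows with scene complexity and with
    the amount of virtual content to composite."""
    return 2.0 * complexity + virtual_content_cost


def composite(image_data, virtual_content, previous_render, render_fn,
              complexity: float, threshold_ms: float = 8.0):
    """Composite virtual content with camera imagery for one time period.

    If the estimated setup time exceeds the budget, skip rendering from the
    current camera pose and reuse the previous render (the fallback described
    in the abstract). The names and the 8 ms budget are illustrative.
    """
    setup_ms = estimate_setup_ms(complexity, virtual_content_cost=len(virtual_content))
    if setup_ms > threshold_ms and previous_render is not None:
        render = previous_render              # reuse last period's render
    else:
        render = render_fn(virtual_content)   # fresh render for the current pose
    return {"background": image_data, "overlay": render}


# Example with a trivial "renderer": the 12 ms estimate blows the 8 ms budget,
# so the previous overlay is composited instead.
out = composite("camera_frame", ["cube", "label"], previous_render="old_overlay",
                render_fn=lambda vc: f"render({len(vc)} items)", complexity=5.0)
print(out)
```
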
  • Patent number: 12094069
    Abstract: The present disclosure relates to techniques for presenting a combined view of a virtual environment and a real environment in response to detecting a transition event associated with an object in the real environment. While presenting the combined view, if an input of a first type is detected, the combined view is adjusted by increasing the visibility of imagery of the virtual environment and decreasing the visibility of imagery of the real environment. If an input of a second type is detected, the combined view is adjusted by decreasing the visibility of the imagery of the virtual environment and increasing the visibility of the imagery of the real environment.
    Type: Grant
    Filed: August 14, 2023
    Date of Patent: September 17, 2024
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
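
The two input types in the abstract above simply push the visibility balance of the combined view one way or the other. A small Python sketch; the step size and the string labels for the input types are assumptions:

```python
def adjust_combined_view(virtual_vis: float, real_vis: float, input_type: str,
                         step: float = 0.1) -> tuple:
    """Shift the balance of a combined view between virtual and real imagery.

    An input of the "first" type raises virtual visibility and lowers real
    visibility; the "second" type does the opposite. Both values are clamped
    to [0, 1].
    """
    if input_type == "first":
        virtual_vis, real_vis = virtual_vis + step, real_vis - step
    elif input_type == "second":
        virtual_vis, real_vis = virtual_vis - step, real_vis + step
    return (max(0.0, min(1.0, virtual_vis)),
            max(0.0, min(1.0, real_vis)))


print(adjust_combined_view(0.7, 0.3, "second"))  # approximately (0.6, 0.4)
```
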
  • Publication number: 20240202866
    Abstract: In one implementation, a method of performing perspective correction of an image is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of a physical environment. The method includes obtaining a plurality of initial depths respectively associated with a plurality of pixels of the image of the physical environment. The method includes generating a depth map for the image of the physical environment based on the plurality of initial depths and a respective plurality of confidences of the plurality of initial depths. The method includes transforming, using the one or more processors, the image of the physical environment based on the depth map and a difference between a perspective of the image sensor and a perspective of a user. The method includes displaying, on the display, the transformed image.
    Type: Application
    Filed: March 29, 2022
    Publication date: June 20, 2024
    Inventors: Samer Barakat, Bertrand Nepveu, Christian W. Gosch, Emmanuel Piuze-Phaneuf, Vincent Chapdelaine-Couture
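
The application above builds a depth map from per-pixel initial depths weighted by their confidences, then uses it to warp the camera image toward the user's viewpoint. A rough Python/NumPy sketch; the confidence-weighted fallback, the 1-D disparity warp, and every parameter value are illustrative assumptions, not the method as claimed:

```python
import numpy as np


def build_depth_map(initial_depths: np.ndarray, confidences: np.ndarray) -> np.ndarray:
    """Fuse per-pixel initial depths with their confidences (an assumed scheme).

    Low-confidence pixels are pulled toward a confidence-weighted scene average,
    so the subsequent warp stays stable where the depth estimate was unreliable.
    """
    weights = np.clip(confidences, 0.0, 1.0)
    fallback = np.average(initial_depths, weights=weights + 1e-6)
    return weights * initial_depths + (1.0 - weights) * fallback


def warp_to_user_perspective(image: np.ndarray, depth: np.ndarray,
                             baseline_m: float, focal_px: float) -> np.ndarray:
    """Toy horizontal reprojection: shift each pixel by disparity = f * b / z.

    A real implementation would reproject in 3D and handle occlusion order;
    this 1-D shift only illustrates that nearer pixels move farther.
    """
    h, w = image.shape
    out = np.zeros_like(image)
    disparity = np.round(focal_px * baseline_m / np.maximum(depth, 1e-3)).astype(int)
    for y in range(h):
        for x in range(w):
            nx = x + disparity[y, x]
            if 0 <= nx < w:
                out[y, nx] = image[y, x]
    return out


depths = np.full((4, 4), 2.0)
depths[1, 1] = 0.5                     # an implausible near reading...
conf = np.ones((4, 4))
conf[1, 1] = 0.2                       # ...flagged as low confidence
fused = build_depth_map(depths, conf)
print(round(float(fused[1, 1]), 2))    # pulled back toward the ~2 m scene average
img = np.arange(16, dtype=float).reshape(4, 4)
print(warp_to_user_perspective(img, fused, baseline_m=0.05, focal_px=40.0))
```
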
  • Patent number: 11880911
    Abstract: The present disclosure relates to techniques for transitioning between imagery and sounds of two different environments, such as a virtual environment and a real environment. A view of a first environment and audio associated with the first environment are provided. In response to detecting a transition event, a view of the first environment combined with a second environment is provided. The combined view includes imagery of the first environment at a first visibility value and imagery of the second environment at a second visibility value. In addition, in response to detecting a transition event, the first environment audio is mixed with audio associated with the second environment.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: January 23, 2024
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
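
The transition described above amounts to a coordinated fade: imagery of the first environment loses visibility while the second gains it, and the two audio streams are mixed the same way. A minimal sketch, assuming a linear ramp and a fixed number of fade frames:

```python
def crossfade(t: float) -> dict:
    """Blend weights for a transition between a first and second environment.

    `t` runs from 0 (only the first environment) to 1 (only the second); the
    same ramp drives imagery visibility and the audio mix, mirroring the
    combined view plus mixed audio in the abstract. The linear ramp is an
    assumption.
    """
    t = max(0.0, min(1.0, t))
    return {
        "first_env_visibility": 1.0 - t,
        "second_env_visibility": t,
        "first_env_audio_gain": 1.0 - t,
        "second_env_audio_gain": t,
    }


def on_transition_event(frames: int = 5):
    """Drive a short fade when a transition event is detected."""
    for i in range(frames + 1):
        yield crossfade(i / frames)


for state in on_transition_event():
    print(state)
```
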
  • Patent number: 11838486
    Abstract: In one implementation, a method of performing perspective correction is performed at a head-mounted device including one or more processors, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, a plurality of images of a scene from a respective plurality of perspectives. The method includes capturing, using the image sensor, a current image of the scene from a current perspective. The method includes obtaining a depth map of the current image of the scene. The method includes transforming, using the one or more processors, the current image of the scene based on the depth map, a difference between the current perspective of the image sensor and a current perspective of a user, and at least one of the plurality of images of the scene from the respective plurality of perspectives. The method includes displaying, on the display, the transformed image.
    Type: Grant
    Filed: July 13, 2020
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Samer Samir Barakat, Bertrand Nepveu, Vincent Chapdelaine-Couture
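
In the patent above, images captured earlier from other perspectives supply the pixels that become visible (disoccluded) when the current image is warped toward the user's perspective. A simplified Python/NumPy sketch of just the hole-filling step, with grayscale arrays and a zero-means-unknown convention as assumptions:

```python
import numpy as np


def correct_perspective(current_image: np.ndarray, hole_mask: np.ndarray,
                        prior_views: list) -> np.ndarray:
    """Fill disoccluded pixels of a perspective-warped image.

    `current_image` is assumed to already be warped toward the user's
    perspective, with `hole_mask` True where the warp exposed pixels the
    camera never saw. Earlier captures from other perspectives (assumed to
    be warped the same way) supply those pixels, as the abstract suggests.
    """
    filled = current_image.copy()
    remaining = hole_mask.copy()
    for view in prior_views:
        usable = remaining & (view > 0)      # pixels this prior view can supply
        filled[usable] = view[usable]
        remaining &= ~usable
        if not remaining.any():
            break
    return filled


img = np.array([[5.0, 0.0], [7.0, 8.0]])
holes = np.array([[False, True], [False, False]])
prior = np.array([[5.0, 6.0], [7.0, 8.0]])
print(correct_perspective(img, holes, [prior]))
```
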
  • Publication number: 20230386095
    Abstract: The present disclosure relates to techniques for presenting a combined view of a virtual environment and a real environment in response to detecting a transition event associated with an object in the real environment. While presenting the combined view, if an input of a first type is detected, the combined view is adjusted by increasing the visibility of imagery of the virtual environment and decreasing the visibility of imagery of the real environment. If an input of a second type is detected, the combined view is adjusted by decreasing the visibility of the imagery of the virtual environment and increasing the visibility of the imagery of the real environment.
    Type: Application
    Filed: August 14, 2023
    Publication date: November 30, 2023
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
  • Publication number: 20230377249
    Abstract: The method includes: obtaining a first image of an environment from a first image sensor associated with first intrinsic parameters; performing a warping operation on the first image according to perspective offset values to generate a warped first image in order to account for perspective differences between the first image sensor and a user of the electronic device; determining an occlusion mask based on the warped first image that includes a plurality of holes; obtaining a second image of the environment from a second image sensor associated with second intrinsic parameters; normalizing the second image based on a difference between the first and second intrinsic parameters to produce a normalized second image; and filling a first set of one or more holes of the occlusion mask based on the normalized second image to produce a modified first image.
    Type: Application
    Filed: December 28, 2022
    Publication date: November 23, 2023
    Inventors: Bertrand Nepveu, Vincent Chapdelaine-Couture, Emmanuel Piuze-Phaneuf
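
The application above warps the first camera's image toward the user's perspective, builds an occlusion mask of the resulting holes, normalizes a second camera's image to account for its different intrinsics, and fills the holes from it. A rough Python/NumPy sketch in which focal-length rescaling stands in for full intrinsic normalization; all names and values are illustrative:

```python
import numpy as np


def normalize_to_first_camera(second_image: np.ndarray,
                              second_focal: float, first_focal: float) -> np.ndarray:
    """Crude intrinsic normalization: rescale the second camera's image about
    its centre so its effective focal length matches the first camera's, using
    nearest-neighbour sampling. Real code would also correct the principal
    point and lens distortion; this is an assumed simplification."""
    scale = second_focal / first_focal
    h, w = second_image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    src_y = np.clip(np.round((ys - h / 2) * scale + h / 2).astype(int), 0, h - 1)
    src_x = np.clip(np.round((xs - w / 2) * scale + w / 2).astype(int), 0, w - 1)
    return second_image[src_y, src_x]


def fill_occlusion_holes(warped_first: np.ndarray, occlusion_mask: np.ndarray,
                         normalized_second: np.ndarray) -> np.ndarray:
    """Fill the holes of the perspective-warped first image from the normalized
    second image, producing the "modified first image" of the abstract."""
    out = warped_first.copy()
    out[occlusion_mask] = normalized_second[occlusion_mask]
    return out


second = np.arange(16, dtype=float).reshape(4, 4)
mask = np.zeros((4, 4), dtype=bool)
mask[1, 2] = True                        # one hole left by the warp
warped_first = np.full((4, 4), 9.0)
print(fill_occlusion_holes(warped_first, mask,
                           normalize_to_first_camera(second, second_focal=600.0, first_focal=500.0)))
```
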
  • Patent number: 11790569
    Abstract: The present disclosure relates to techniques for inserting imagery from a real environment into a virtual environment. While presenting (e.g., displaying) the virtual environment at an electronic device, a proximity of the electronic device to a physical object located in a real environment is detected. In response to detecting that the proximity of the electronic device to the physical object is less than a first threshold distance, imagery of the physical object is isolated from other imagery of the real environment. The isolated imagery of the physical object is inserted into the virtual environment at a location corresponding to the location of the physical object in the real environment. The imagery of the physical object has a first visibility value associated with the proximity of the electronic device to the physical object.
    Type: Grant
    Filed: September 6, 2019
    Date of Patent: October 17, 2023
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette
  • Publication number: 20230325965
    Abstract: In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
    Type: Application
    Filed: June 14, 2023
    Publication date: October 12, 2023
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
  • Patent number: 11778154
    Abstract: A head-mounted display (HMD) with camera sensors that performs chroma keying in a mixed reality context is presented. Low latency is achieved by embedding the processing in the HMD itself: camera images are formatted, the selected color range is detected, and a composite is made with the virtual content.
    Type: Grant
    Filed: August 8, 2018
    Date of Patent: October 3, 2023
    Assignee: Apple Inc.
    Inventors: Vincent Chapdelaine-Couture, Anthony Ghannoum, Yan Cote, Irving Lustigman, Marc-Andre Chenier, Simon Fortin-Deschenes, Bertrand Nepveu
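
The granted patent above is about running the chroma-key pipeline (format the camera image, detect the selected color range, composite with virtual content) inside the HMD to keep latency low. A minimal Python/NumPy sketch of the keying and compositing step itself, with an assumed per-channel color range and a hard mask:

```python
import numpy as np


def chroma_key_composite(camera_rgb: np.ndarray, virtual_rgb: np.ndarray,
                         key_low: tuple, key_high: tuple) -> np.ndarray:
    """Replace pixels whose color falls in the keyed range with virtual content.

    `key_low`/`key_high` bound the selected color range per channel (e.g. a
    green-screen band). The range check and the hard binary mask are
    illustrative simplifications of the detect-and-composite steps.
    """
    low = np.array(key_low, dtype=camera_rgb.dtype)
    high = np.array(key_high, dtype=camera_rgb.dtype)
    keyed = np.all((camera_rgb >= low) & (camera_rgb <= high), axis=-1)
    out = camera_rgb.copy()
    out[keyed] = virtual_rgb[keyed]
    return out


cam = np.zeros((2, 2, 3), dtype=np.uint8)
cam[0, 0] = (30, 220, 40)                       # one green-screen pixel
virt = np.full((2, 2, 3), 255, dtype=np.uint8)  # virtual content layer
print(chroma_key_composite(cam, virt, key_low=(0, 180, 0), key_high=(80, 255, 90)))
```
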
  • Patent number: 11736822
    Abstract: A method includes obtaining a first plurality of image frames that are provided by an image sensor at a first output rate. The method includes determining a GPU temporal processing value characterizing the GPU processing of the first plurality of image frames. The GPU temporal processing value is a function of an amount of information associated with the first plurality of image frames. The method includes determining an operation value associated with the image sensor based on a function of the GPU temporal processing value. The operation value modifies the operation of the image sensor. The method includes changing, using a controller, the operation of the image sensor as a function of the operation value so that the image sensor provides a second plurality of image frames at a second output rate.
    Type: Grant
    Filed: February 16, 2021
    Date of Patent: August 22, 2023
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
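
The method above closes a loop between GPU load and the image sensor: a GPU temporal processing value is measured over recent frames, an operation value is derived from it, and the sensor's output rate is changed accordingly. A hedged sketch; the mean-time statistic, the proportional rule, and the 11 ms budget are assumptions:

```python
def gpu_temporal_processing_value(frame_times_ms: list) -> float:
    """Characterize GPU processing of the recent frames; here simply the mean
    per-frame GPU time (the abstract only says the value reflects the amount
    of information in the frames, so this statistic is an assumption)."""
    return sum(frame_times_ms) / len(frame_times_ms)


def next_output_rate(current_fps: float, gpu_ms_per_frame: float,
                     gpu_budget_ms: float = 11.0,
                     min_fps: float = 30.0, max_fps: float = 120.0) -> float:
    """Derive an operation value (the new sensor frame rate) from the GPU load.

    If the GPU takes longer than its per-frame budget, ask the sensor for fewer
    frames; if it has headroom, ask for more, within clamped limits.
    """
    proposed = current_fps * (gpu_budget_ms / max(gpu_ms_per_frame, 1e-3))
    return max(min_fps, min(max_fps, proposed))


# Controller step: measured GPU times for the last few frames are too slow,
# so the sensor is stepped down from 90 fps.
print(next_output_rate(90.0, gpu_temporal_processing_value([14.0, 15.5, 13.2])))
```
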
  • Patent number: 11704766
    Abstract: In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
    Type: Grant
    Filed: November 2, 2022
    Date of Patent: July 18, 2023
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
  • Publication number: 20230053205
    Abstract: In some implementations, a method includes: determining a complexity value for first image data associated with a physical environment that corresponds to a first time period; determining an estimated composite setup time based on the complexity value for the first image data and virtual content for compositing with the first image data; in accordance with a determination that the estimated composite setup time exceeds a threshold time: forgoing rendering the virtual content from the perspective that corresponds to the camera pose of the device relative to the physical environment during the first time period; and compositing a previous render of the virtual content for a previous time period with the first image data to generate the graphical environment for the first time period.
    Type: Application
    Filed: November 2, 2022
    Publication date: February 16, 2023
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
  • Patent number: 11521291
    Abstract: In some implementations, a method of reducing latency associated with an image read-out operation is performed at a device including one or more processors, non-transitory memory, an image processing architecture, and an image capture device. The method includes: obtaining first image data corresponding to a physical environment; reading a first slice of the first image data into an input buffer; performing processing operations on the first slice of the first image data to obtain a first portion of second image data; reading a second slice of the first image data into the input buffer; performing the image processing operations on the second slice of the first image data to obtain a second portion of the second image data; and generating an image frame of the physical environment based at least in part on the first and second portions of the second image data.
    Type: Grant
    Filed: March 29, 2021
    Date of Patent: December 6, 2022
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Marc-Andre Chenier, Yan Cote, Yves Millette
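
The latency reduction described above comes from reading and processing the first image data one slice at a time instead of waiting for the full frame read-out. A structural Python sketch under assumed names; a real pipeline would overlap read-out and processing on separate hardware stages rather than run them sequentially:

```python
from typing import Callable, List


def process_in_slices(rows: List[List[int]], slice_height: int,
                      process_slice: Callable[[List[List[int]]], List[List[int]]]) -> List[List[int]]:
    """Read an image a slice at a time and process each slice as soon as it
    lands in the input buffer.

    `rows` stands in for first image data arriving row by row; `process_slice`
    is whatever per-slice image processing the pipeline applies. Both names are
    placeholders for illustration.
    """
    output: List[List[int]] = []
    input_buffer: List[List[int]] = []
    for row in rows:
        input_buffer.append(row)                        # read a slice into the buffer
        if len(input_buffer) == slice_height:
            output.extend(process_slice(input_buffer))  # process that slice
            input_buffer = []                           # free the buffer for the next slice
    if input_buffer:                                    # trailing partial slice
        output.extend(process_slice(input_buffer))
    return output                                       # assembled second image data


# Example: a trivial "gain" stage applied per 2-row slice.
image = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 10]]
print(process_in_slices(image, 2, lambda s: [[2 * v for v in row] for row in s]))
```
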
  • Patent number: 11496723
    Abstract: Generating a representation of a scene includes: detecting an indication to capture sensor data to generate a virtual representation of a scene in a physical environment at a first time; in response to the indication, obtaining first sensor data from a first capture device at the first time; obtaining second sensor data from a second capture device at the first time; and combining the first sensor data and the second sensor data to generate the virtual representation of the scene.
    Type: Grant
    Filed: September 24, 2019
    Date of Patent: November 8, 2022
    Assignee: Apple Inc.
    Inventors: Bertrand Nepveu, Yan Cote
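
The patent above captures data from two devices for the same instant in response to a single indication and merges the results into one virtual representation of the scene. A toy Python sketch with an assumed `sample(t)` device interface and an assumed color-plus-depth merge; neither detail comes from the claims:

```python
class FakeDevice:
    """Stand-in capture device that returns canned sensor data for a timestamp."""

    def __init__(self, payload: dict):
        self.payload = payload

    def sample(self, t: float) -> dict:
        return dict(self.payload, t=t)


def capture_scene(first_device: FakeDevice, second_device: FakeDevice, trigger_time: float) -> dict:
    """On an indication to capture, sample both devices at the same instant and
    combine the results into a single scene representation."""
    first = first_device.sample(trigger_time)
    second = second_device.sample(trigger_time)
    return {"time": trigger_time, "color": first.get("color"), "depth": second.get("depth")}


print(capture_scene(FakeDevice({"color": "rgb_frame"}),
                    FakeDevice({"depth": "depth_frame"}), trigger_time=12.5))
```
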
  • Publication number: 20210352255
    Abstract: The present disclosure relates to techniques for transitioning between imagery and sounds of two different environments, such as a virtual environment and a real environment. A view of a first environment and audio associated with the first environment are provided. In response to detecting a transition event, a view of the first environment combined with a second environment is provided. The combined view includes imagery of the first environment at a first visibility value and imagery of the second environment at a second visibility value. In addition, in response to detecting a transition event, the first environment audio is mixed with audio associated with the second environment.
    Type: Application
    Filed: September 6, 2019
    Publication date: November 11, 2021
    Inventors: Bertrand Nepveu, Sandy J. Carter, Vincent Chapdelaine-Couture, Marc-Andre Chenier, Yan Cote, Simon Fortin-Deschênes, Anthony Ghannoum, Tomlinson Holman, Marc-Olivier Lepage, Yves Millette