Patents by Inventor Selim Ben Himane

Selim Ben Himane has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240098232
    Abstract: In one implementation, a method of performing perspective correction is performed by a device having a three-dimensional device coordinate system and including a first image sensor, a first display, one or more processors, and non-transitory memory. The method includes capturing, using the first image sensor, a first image of a physical environment. The method includes transforming the first image from a first perspective of the first image sensor to a second perspective based on a difference between the first perspective and the second perspective, wherein the second perspective is a first distance away from a location corresponding to a first eye of a user, the first distance being less than a second distance between the first perspective and the location corresponding to the first eye of the user. The method includes displaying, on the first display, the transformed first image of the physical environment.
    Type: Application
    Filed: September 18, 2023
    Publication date: March 21, 2024
    Inventors: Emmanuel Piuze-Phaneuf, Hermannus J. Damveld, Jean-Nicola F. Blanchet, Mohamed Selim Ben Himane, Vincent Chapdelaine-Couture, Xiaojin Shi
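    Illustrative sketch: a minimal Python sketch of the idea in the entry above, not the patented implementation; the function name, the use of a per-pixel depth map, nearest-neighbor splatting, and the lack of occlusion or hole handling are all simplifying assumptions.

      import numpy as np

      def reproject_to_eye_perspective(image, depth, K, T_cam_to_eye):
          """Forward-warp image (H x W x 3) using depth (H x W, meters), camera
          intrinsics K (3 x 3), and the rigid transform T_cam_to_eye (4 x 4)."""
          h, w = depth.shape
          u, v = np.meshgrid(np.arange(w), np.arange(h))
          pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T   # 3 x N pixels
          rays = np.linalg.inv(K) @ pix                                       # back-project
          pts_cam = rays * depth.reshape(1, -1)                               # 3D points, camera frame
          pts_eye = (T_cam_to_eye @ np.vstack([pts_cam, np.ones((1, pts_cam.shape[1]))]))[:3]
          proj = K @ pts_eye                                                  # re-project to pixels
          uv = np.round(proj[:2] / proj[2:]).astype(int)
          out = np.zeros_like(image)
          ok = (uv[0] >= 0) & (uv[0] < w) & (uv[1] >= 0) & (uv[1] < h) & (pts_eye[2] > 0)
          out[uv[1, ok], uv[0, ok]] = image.reshape(-1, 3)[ok]                # nearest-neighbor splat
          return out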
  • Patent number: 11915097
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide color visual markers that include colored markings that encode data, where the colors of the colored markings are determined by scanning (e.g., detecting the visual marker using a sensor of an electronic device) the visual marker itself. In some implementations, a visual marker is detected in an image of a physical environment. In some implementations, the visual marker is detected in the image by detecting a predefined shape of a first portion of the visual marker in the image. Then, a color-interpretation scheme is determined for interpreting colored markings of the visual marker that encode data by identifying a set of colors at a corresponding set of predetermined locations on the visual marker. Then, the data of the visual marker is decoded using the colored markings and the set of colors of the color-interpretation scheme.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Anselm Grundhoefer, Arun Srivatsan Rangaprasad, Jeffrey S. Norris, Paul Ewers, Scott G. Wade, Thomas G. Salter, Tom Sengelaub
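    Illustrative sketch: a toy Python decoder for the scheme described above, assuming the marker has already been detected via its predefined shape and rectified; the calibration locations, data-marking locations, and four-symbol alphabet are invented for the example.

      import numpy as np

      # Hypothetical layout: symbol -> (row, col) of a calibration patch, plus the
      # centers of the colored data markings, all in rectified-marker pixels.
      CALIBRATION_POINTS = {0: (10, 10), 1: (10, 50), 2: (50, 10), 3: (50, 50)}
      DATA_POINTS = [(20, 20), (20, 40), (40, 20), (40, 40)]

      def decode_marker(rectified_rgb):
          # Color-interpretation scheme: the reference color observed for each symbol.
          scheme = {sym: rectified_rgb[r, c].astype(float)
                    for sym, (r, c) in CALIBRATION_POINTS.items()}
          # Decode each data marking as the symbol whose reference color is nearest.
          symbols = []
          for r, c in DATA_POINTS:
              color = rectified_rgb[r, c].astype(float)
              symbols.append(min(scheme, key=lambda s: np.linalg.norm(color - scheme[s])))
          return symbols   # e.g. base-4 digits to assemble into the payload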
  • Publication number: 20240062346
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Inventors: Peter MEIER, Daniel KURZ, Brian Chris CLARK, Mohamed Selim BEN HIMANE
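    Illustrative sketch: one geometric ingredient such an experience could use (an assumption, not the patented pipeline): once a mirror plane is detected, the user's position is reflected across it so that replacement content, such as an avatar or a device-free rendering, can be placed where the real reflection appears.

      import numpy as np

      def reflect_across_plane(p, plane_normal, plane_point):
          """Mirror a 3D point across the plane given by a normal and a point on it."""
          n = plane_normal / np.linalg.norm(plane_normal)
          return p - 2.0 * np.dot(p - plane_point, n) * n

      # Example: headset 0.8 m in front of a vertical mirror in the plane x = 0.
      headset_pos = np.array([0.8, 1.6, 0.0])
      anchor = reflect_across_plane(headset_pos, np.array([1.0, 0.0, 0.0]), np.zeros(3))
      # anchor == [-0.8, 1.6, 0.0]: render the avatar here, over the real reflection.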
  • Patent number: 11900569
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Grant
    Filed: September 30, 2022
    Date of Patent: February 13, 2024
    Assignee: Apple Inc.
    Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
  • Patent number: 11854242
    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
    Type: Grant
    Filed: September 22, 2021
    Date of Patent: December 26, 2023
    Assignee: Apple Inc.
    Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
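    Illustrative sketch: a toy Python version of the update-and-merge flow above, using an assumed linear saliency model; the least-squares loss, learning rate, and server-side blending rule are illustrative stand-ins rather than the claimed method.

      import numpy as np

      def personalize(global_weights, features, reactions, lr=0.01, steps=200):
          """Fit a local copy of the saliency weights to the user's reaction signal
          (e.g. attention per captured-image region) by least-squares gradient steps."""
          w = global_weights.copy()
          for _ in range(steps):
              grad = features.T @ (features @ w - reactions) / len(reactions)
              w -= lr * grad
          return w

      def merge_into_global(global_weights, personal_weights, alpha=0.1):
          """Server side: fold a portion of the personalized model into the global one."""
          return (1.0 - alpha) * global_weights + alpha * personal_weights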
  • Publication number: 20230297801
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide a visual marker including a plurality of markings arranged in a corresponding plurality of shapes. In some implementations, each marking is formed of a set of sub-markings separated by gaps and arranged according to a respective shape, and the gaps of the plurality of markings are configured to encode data and indicate orientation of the visual marker. In some implementations, the plurality of markings are arranged in a plurality of concentric rings of increasing size. In some implementations, the orientation is encoded in a first set of the gaps and the data in a second set of the gaps in the plurality of markings.
    Type: Application
    Filed: June 15, 2021
    Publication date: September 21, 2023
    Inventors: Arun Srivatsan RANGAPRASAD, Anselm GRUNDHOEFER, Mohamed Selim Ben HIMANE, Dhruv A. GOVIL, Joseph M. LUXTON, Jean-Charles Bernard Marcel BAZIN, Shubham AGRAWAL
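    Illustrative sketch: a toy reader for a marker laid out as described above, under an invented convention: each concentric ring is sampled at the same number of angular slots (gap = 1), a single orientation gap in the outer ring fixes the rotation, and the remaining rings' gap patterns, rotated into that frame, carry the data bits.

      import numpy as np

      def decode_rings(ring_slots):
          """ring_slots: list of equal-length binary arrays, outer ring first."""
          outer = np.asarray(ring_slots[0])
          rotation = int(np.argmax(outer))            # index of the orientation gap
          bits = []
          for ring in ring_slots[1:]:                 # data rings
              aligned = np.roll(np.asarray(ring), -rotation)
              bits.extend(int(b) for b in aligned)
          return rotation, bits

      # Example: a 3-ring marker sampled at 8 angular slots per ring.
      rotation, bits = decode_rings([[0, 0, 1, 0, 0, 0, 0, 0],
                                     [1, 0, 1, 1, 0, 0, 1, 0],
                                     [0, 1, 0, 0, 1, 1, 0, 1]])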
  • Publication number: 20230288996
    Abstract: A method includes tracking, via a positional tracker, the paired peripheral input device in a first tracking mode. The method includes obtaining sensor data from the paired peripheral input device via a communication interface. The method includes determining that the paired peripheral input device satisfies a contact criterion based on the sensor data. The contact criterion is based on a contact between the paired peripheral input device and a physical object. The method includes, in response to determining that the paired peripheral input device satisfies the contact criterion, changing the positional tracker from the first tracking mode to a second tracking mode. Tracking in the second tracking mode is based in part on a depth that indicates a distance between the electronic device and the physical object.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 14, 2023
    Inventors: Waleed Abdulla, Sree Harsha Kalli, Mohamed Selim Ben Himane
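    Illustrative sketch: a hedged Python state machine for the mode switch described above; the sensor fields, the force threshold used as the contact criterion, and the mode names are assumptions.

      from dataclasses import dataclass

      @dataclass
      class PeripheralSample:
          tip_force: float       # e.g. stylus tip force in newtons, reported over the communication interface
          surface_depth: float   # meters to the contacted object (would parameterize the second mode)

      class PositionalTracker:
          SIX_DOF = "6dof"                  # first tracking mode: free-space tracking
          SURFACE_CONSTRAINED = "surface"   # second tracking mode: uses the measured depth

          def __init__(self, force_threshold=0.05):
              self.mode = self.SIX_DOF
              self.force_threshold = force_threshold

          def update(self, sample: PeripheralSample) -> str:
              in_contact = sample.tip_force > self.force_threshold   # contact criterion
              self.mode = self.SURFACE_CONSTRAINED if in_contact else self.SIX_DOF
              return self.mode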
  • Patent number: 11693491
    Abstract: A method includes tracking, via a positional tracker, the paired peripheral input device in a first tracking mode. The method includes obtaining sensor data from the paired peripheral input device via a communication interface. The method includes determining that the paired peripheral input device satisfies a contact criterion based on the sensor data. The contact criterion is based on a contact between the paired peripheral input device and a physical object. The method includes, in response to determining that the paired peripheral input device satisfies the contact criterion, changing the positional tracker from the first tracking mode to a second tracking mode. Tracking in the second tracking mode is based in part on a depth that indicates a distance between the electronic device and the physical object.
    Type: Grant
    Filed: June 27, 2022
    Date of Patent: July 4, 2023
    Inventors: Waleed Abdulla, Sree Harsha Kalli, Mohamed Selim Ben Himane
  • Patent number: 11682138
    Abstract: The present disclosure relates generally to localization and mapping. In some examples, an electronic device obtains first image data and motion data using a motion sensor. The electronic device receives information corresponding to a second electronic device. The electronic device generates a representation of a first pose of the first electronic device using the first image data, the motion data, and the information corresponding to the second electronic device. The electronic device displays, on the display, a virtual object, wherein the displaying of the virtual object is based on the representation of the first pose of the first electronic device.
    Type: Grant
    Filed: November 11, 2021
    Date of Patent: June 20, 2023
    Assignee: Apple Inc.
    Inventor: Mohamed Selim Ben Himane
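    Illustrative sketch: a toy blend of a locally estimated pose (from image and motion-sensor data) with the position implied by information received from a second device; the 3D-point simplification and the fixed trust weight are assumptions for illustration.

      import numpy as np

      def refine_pose(local_estimate, second_device_pos, measured_offset, trust=0.3):
          """local_estimate: this device's position from image + motion-sensor tracking.
          second_device_pos: position reported by the second device.
          measured_offset: this device's measurement of the vector to the second device."""
          implied = second_device_pos - measured_offset    # position implied by the second device
          return (1.0 - trust) * local_estimate + trust * implied

      pose = refine_pose(np.array([1.02, 0.0, 0.5]),       # slightly drifted local estimate
                         np.array([3.00, 0.0, 0.5]),
                         np.array([2.00, 0.0, 0.0]))
      # pose ~= [1.014, 0.0, 0.5]; the virtual object is then drawn using this pose.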
  • Patent number: 11652965
    Abstract: A method of projecting digital information on a real object in a real environment includes the steps of projecting digital information on a real object or part of a real object with a visible light projector, capturing at least one image of the real object with the projected digital information using a camera, providing a depth sensor registered with the camera, the depth sensor capturing depth data of the real object or part of the real object, and calculating a spatial transformation between the visible light projector and the real object based on the at least one image and the depth data. The invention is also concerned with a corresponding system.
    Type: Grant
    Filed: September 22, 2020
    Date of Patent: May 16, 2023
    Assignee: Apple Inc.
    Inventors: Peter Meier, Mohamed Selim Ben Himane, Daniel Kurz
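    Illustrative sketch: one ingredient such a calibration typically needs, shown here as an assumption rather than the claimed procedure: a Kabsch rigid alignment between 3D points of the object model and the corresponding points captured by the depth sensor, which gives the object's pose in the sensor rig's frame; composing that pose with a projector-to-camera calibration (not shown) yields a projector-to-object transformation.

      import numpy as np

      def rigid_align(model_pts, measured_pts):
          """Kabsch alignment: return R (3x3), t (3,) minimizing ||R @ model + t - measured||."""
          mu_m = model_pts.mean(axis=0)
          mu_d = measured_pts.mean(axis=0)
          H = (model_pts - mu_m).T @ (measured_pts - mu_d)
          U, _, Vt = np.linalg.svd(H)
          D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])   # guard against reflections
          R = Vt.T @ D @ U.T
          t = mu_d - R @ mu_m
          return R, t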
  • Publication number: 20230131109
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Application
    Filed: September 30, 2022
    Publication date: April 27, 2023
    Inventors: Peter MEIER, Daniel KURZ, Brian Chris CLARK, Mohamed Selim BEN HIMANE
  • Patent number: 11580652
    Abstract: One exemplary implementation facilitates object detection using multiple scans of an object in different lighting conditions. For example, a first scan of the object can be created by capturing images of the object by moving an image sensor on a first path in a first lighting condition, e.g., bright lighting. A second scan of the object can then be created by capturing additional images of the object by moving the image sensor on a second path in a second lighting condition, e.g., dim lighting. Implementations determine a transform that associates the scan data from these multiple scans to one another and use the transforms to generate a 3D model of the object in a single coordinate system. Augmented content can be positioned relative to that object in the single coordinate system and thus will be displayed in the appropriate location regardless of the lighting condition in which the physical object is later detected.
    Type: Grant
    Filed: May 21, 2021
    Date of Patent: February 14, 2023
    Assignee: Apple Inc.
    Inventors: Vincent Chapdelaine-Couture, Mohamed Selim Ben Himane
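    Illustrative sketch: the merging step only, assuming the transform between the two scans has already been estimated; the dim-lighting scan's 3D keypoints are mapped into the bright-lighting scan's coordinate system so that a single model, and a single anchor frame for augmented content, results.

      import numpy as np

      def merge_scans(points_bright, points_dim, T_bright_from_dim):
          """points_*: N x 3 keypoint arrays; T_bright_from_dim: 4 x 4 rigid transform."""
          dim_h = np.hstack([points_dim, np.ones((len(points_dim), 1))])
          dim_in_bright = (T_bright_from_dim @ dim_h.T).T[:, :3]
          return np.vstack([points_bright, dim_in_bright])   # one model, one coordinate system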
  • Patent number: 11462000
    Abstract: Various implementations disclosed herein include devices, systems, and methods that detect surfaces and reflections in such surfaces. Some implementations involve providing a CGR environment that includes virtual content that replaces the appearance of a user or the user's device in a mirror or other surface providing a reflection. For example, a CGR environment may be modified to include a reflection of the user that does not include the device that the user is holding or wearing. In another example, the CGR environment is modified so that virtual content, such as a newer version of the electronic device or a virtual wand, replaces the electronic device in the reflection. In another example, the CGR environment is modified so that virtual content, such as a user avatar, replaces the user in the reflection.
    Type: Grant
    Filed: August 25, 2020
    Date of Patent: October 4, 2022
    Assignee: Apple Inc.
    Inventors: Peter Meier, Daniel Kurz, Brian Chris Clark, Mohamed Selim Ben Himane
  • Publication number: 20220269338
    Abstract: Implementations use a first device (e.g., an HMD) to provide a CGR environment that augments the input and output capabilities of a second device, e.g., a laptop, smart speaker, etc. In some implementations, the first device communicates with a second device in its proximate physical environment to exchange input or output data. For example, an HMD may capture an image of a physical environment that includes a laptop. The HMD may detect the laptop, send a request for the laptop's content, receive content from the laptop (e.g., the content that the laptop is currently displaying and additional content), identify the location of the laptop, and display a virtual object with the received content in the CGR environment on or near the laptop. The size, shape, orientation, or position of the virtual object (e.g., a virtual monitor or monitor extension) may also be configured to provide a better user experience.
    Type: Application
    Filed: May 12, 2022
    Publication date: August 25, 2022
    Inventors: Adam M. O'HERN, Eddie G. MENDOZA, Mohamed Selim BEN HIMANE, Timothy R. ORIOL
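    Illustrative sketch: the placement step only, with invented geometry conventions: given the detected laptop screen's center, horizontal direction, and width in world space, compute where a same-height virtual extension panel could be rendered just to its right.

      import numpy as np

      def extension_panel_center(screen_center, screen_right_dir, screen_width,
                                 panel_width, gap=0.02):
          """All lengths in meters; screen_right_dir is a unit vector along the screen edge."""
          offset = 0.5 * screen_width + gap + 0.5 * panel_width
          return np.asarray(screen_center) + offset * np.asarray(screen_right_dir)

      center = extension_panel_center([0.0, 1.2, -0.6], [1.0, 0.0, 0.0],
                                      screen_width=0.30, panel_width=0.30)
      # center == [0.32, 1.2, -0.6]: render the content received from the laptop here.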
  • Patent number: 11379033
    Abstract: Implementations use a first device (e.g., an HMD) to provide a CGR environment that augments the input and output capabilities of a second device, e.g., a laptop, smart speaker, etc. In some implementations, the first device communicates with a second device in its proximate physical environment to exchange input or output data. For example, an HMD may capture an image of a physical environment that includes a laptop. The HMD may detect the laptop, send a request for the laptop's content, receive content from the laptop (e.g., the content that the laptop is currently displaying and additional content), identify the location of the laptop, and display a virtual object with the received content in the CGR environment on or near the laptop. The size, shape, orientation, or position of the virtual object (e.g., a virtual monitor or monitor extension) may also be configured to provide a better user experience.
    Type: Grant
    Filed: September 14, 2020
    Date of Patent: July 5, 2022
    Assignee: Apple Inc.
    Inventors: Adam M. O'Hern, Eddie G. Mendoza, Mohamed Selim Ben Himane, Timothy R. Oriol
  • Patent number: 11373271
    Abstract: A method includes obtaining an image via an image sensor, and identifying, within the image, a physical object represented by a portion of the image. The method includes determining, based on the image, a visual feature characterizing the physical object. The method includes warping, based on the visual feature satisfying a first feature criterion, the portion of the image according to a first warping function that is based on the first feature criterion and a distance between the electronic device and a reference point. The method includes warping, based on the visual feature satisfying a second feature criterion that is different from the first feature criterion, the portion of the image according to a second warping function that is based on the second feature criterion and the distance between the electronic device and the reference point.
    Type: Grant
    Filed: March 16, 2021
    Date of Patent: June 28, 2022
    Assignee: APPLE INC.
    Inventors: Pedro Manuel Da Silva Quelhas, Moinul Khan, Raffi A. Bedikian, Katharina Buckl, Mohamed Selim Ben Himane
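    Illustrative sketch: the selection logic only; the sharpness feature, the thresholds, and the magnification used as a stand-in warping function are invented for the example.

      import numpy as np

      def magnify_patch(patch, factor):
          """Nearest-neighbor magnification about the patch center, cropped to the
          original size (a stand-in geometric warping function)."""
          h, w = patch.shape[:2]
          rows = np.clip(((np.arange(h) - h / 2) / factor + h / 2).astype(int), 0, h - 1)
          cols = np.clip(((np.arange(w) - w / 2) / factor + w / 2).astype(int), 0, w - 1)
          return patch[np.ix_(rows, cols)]

      def warp_object_region(image, box, feature_sharpness, distance_m):
          r0, r1, c0, c1 = box                    # region of the detected physical object
          if feature_sharpness < 0.2:             # first feature criterion
              factor = 1.0 + 0.5 * distance_m     # first warping function
          else:                                   # second feature criterion
              factor = 1.0 + 0.2 * distance_m     # second warping function
          out = image.copy()
          out[r0:r1, c0:c1] = magnify_patch(image[r0:r1, c0:c1], factor)
          return out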
  • Publication number: 20220092331
    Abstract: Methods, systems, and computer readable media for providing personalized saliency models, e.g., for use in mixed reality environments, are disclosed herein, comprising: obtaining, from a server, a first saliency model for the characterization of captured images, wherein the first saliency model represents a global saliency model; capturing a first plurality of images by a first device; obtaining information indicative of a reaction of a first user of the first device to the capture of one or more images of the first plurality of images; updating the first saliency model based, at least in part, on the obtained information to form a personalized, second saliency model; and transmitting at least a portion of the second saliency model to the server for inclusion into the global saliency model. In some embodiments, a user's personalized (i.e., updated) saliency model may be used to modify one or more characteristics of at least one subsequently captured image.
    Type: Application
    Filed: September 22, 2021
    Publication date: March 24, 2022
    Inventors: Michele Stoppa, Mohamed Selim Ben Himane, Raffi A. Bedikian
  • Patent number: 11281337
    Abstract: Touch detection using a mirror accessory may include obtaining first image data comprising a touching object and a target surface, wherein the first image data captures a scene comprising the touching object and the target surface from the viewpoint of a camera, obtaining second image data comprising the touching object and the target surface, wherein the second image data captures the touching object and the target surface as a reflection in a mirror that is separate from the target surface, determining a pose of the mirror in the scene, determining a pose of the touching object in the scene based on the first image data, the second image data, and the pose of the mirror, and estimating a touch status between the touching object and the target surface based on the determined pose of the touching object and a pose of the target surface.
    Type: Grant
    Filed: September 15, 2020
    Date of Patent: March 22, 2022
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Lejing Wang
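    Illustrative sketch: the final classification step, with an assumed fusion rule and threshold: the fingertip position triangulated from the direct view and the one recovered from its mirror image (reflected back through the estimated mirror plane) are averaged and compared against the target surface.

      import numpy as np

      def reflect(point, plane_normal, plane_point):
          n = plane_normal / np.linalg.norm(plane_normal)
          return point - 2.0 * np.dot(point - plane_point, n) * n

      def touch_status(tip_direct, tip_virtual_image, mirror_n, mirror_p,
                       surface_n, surface_p, threshold=0.005):
          """tip_virtual_image: the fingertip's apparent 3D position behind the mirror."""
          tip_from_mirror = reflect(tip_virtual_image, mirror_n, mirror_p)  # back to the real side
          tip = 0.5 * (tip_direct + tip_from_mirror)                        # fuse both estimates
          height = abs(np.dot(tip - surface_p, surface_n / np.linalg.norm(surface_n)))
          return "touch" if height < threshold else "hover"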
  • Patent number: 11270409
    Abstract: A method includes obtaining an image via an image sensor. The method includes determining a first perceptual quality value that is associated with a first portion of the image. The method includes determining a first image perceptual quality warping function that is based on the first perceptual quality value and an image warping map. The first image perceptual quality warping function is characterized by a first warping granularity level that is a function of the first perceptual quality value. The method includes warping the first portion of the image according to the first image perceptual quality warping function.
    Type: Grant
    Filed: September 23, 2020
    Date of Patent: March 8, 2022
    Assignee: APPLE INC.
    Inventors: Raffi A. Bedikian, Mohamed Selim Ben Himane, Pedro Manuel Da Silva Quelhas, Moinul Khan, Katharina Buckl, Jim C. Chou, Julien Monat Rodier
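    Illustrative sketch: one possible reading of the granularity mapping, hedged as an invented rule: a portion's perceptual quality value selects how fine a warp control grid that portion receives.

      def warping_granularity(perceptual_quality, min_cells=4, max_cells=64):
          """Map a perceptual quality value in [0, 1] to warp-grid cells per image side
          (an assumed, monotone mapping; the patented function is not specified here)."""
          q = min(max(perceptual_quality, 0.0), 1.0)
          return int(round(min_cells + q * (max_cells - min_cells)))

      # Example: a portion scored 0.25 gets a 19 x 19 grid; one scored 0.9 gets 58 x 58.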
  • Publication number: 20220067966
    Abstract: The present disclosure relates generally to localization and mapping. In some examples, an electronic device obtains first image data and motion data using a motion sensor. The electronic device receives information corresponding to a second electronic device. The electronic device generates a representation of a first pose of the first electronic device using the first image data, the motion data, and the information corresponding to the second electronic device. The electronic device displays, on the display, a virtual object, wherein the displaying of the virtual object is based on the representation of the first pose of the first electronic device.
    Type: Application
    Filed: November 11, 2021
    Publication date: March 3, 2022
    Inventor: Mohamed Selim BEN HIMANE