Patents by Inventor Bruno M. Sommer

Bruno M. Sommer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230274504
    Abstract: Some examples of the disclosure are directed to selective display of avatars corresponding to users of electronic devices in a multi-user communication session. In some examples, when immersive content is shared in the communication session, the avatars remain displayed when presenting the content in the three-dimensional environment. In some examples, when perspective-limited immersive content is shared in the communication session, the avatars cease being displayed when presenting the content in the three-dimensional environment. In some examples, when content presented in a full-screen mode is shared in the communication session, the avatars remain displayed when presenting the content in the full-screen mode in the three-dimensional environment. In some examples, when object-bounded content is shared in the communication session, the avatars remain displayed when presenting the object-bounded content in the three-dimensional environment.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 31, 2023
    Inventors: Miao Ren, Connor A. Smith, Hayden J. Lee, Bruno M. Sommer
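The avatar behavior above reduces to a small piece of conditional logic. As a loose illustration only (the content categories and function below are invented names, not Apple's implementation), the rule described in the abstract could be sketched as:
```python
# Hypothetical sketch: choosing whether avatars stay visible based on the kind
# of content shared in a multi-user session, per the abstract above.
from enum import Enum, auto

class SharedContentKind(Enum):
    IMMERSIVE = auto()              # fully immersive, any viewpoint
    PERSPECTIVE_LIMITED = auto()    # immersive but tied to one viewpoint
    FULL_SCREEN = auto()            # content presented in a full-screen mode
    OBJECT_BOUNDED = auto()         # content bounded to a virtual object

def avatars_visible(kind: SharedContentKind) -> bool:
    """Per the abstract, only perspective-limited immersive content hides avatars."""
    return kind is not SharedContentKind.PERSPECTIVE_LIMITED

assert avatars_visible(SharedContentKind.IMMERSIVE)
assert not avatars_visible(SharedContentKind.PERSPECTIVE_LIMITED)
```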
  • Patent number: 11733956
    Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: August 22, 2023
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita
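The claimed method is essentially a round trip of messages between two devices and a shared display. The following is a minimal sketch of that flow under invented class and method names; it is not the patent's implementation:
```python
# Toy version of the flow above: device A shares its display after a gesture,
# device B sends back an input directive, and device A forwards that directive
# to the shared display for execution.
from dataclasses import dataclass

@dataclass
class InputMessage:
    target_display: str
    directive: str          # e.g. "tap(120, 80)" captured from the second user

class FirstDevice:
    def __init__(self, display_id: str):
        self.display_id = display_id
        self.executed = []

    def on_gesture(self, second_device: "SecondDevice") -> None:
        # Transmit a representation of the first display in response to the gesture.
        second_device.receive_display_representation(self.display_id, frame=b"...")

    def receive_input_message(self, message: InputMessage) -> None:
        # Forward the input message to the display device for execution.
        self.executed.append(message.directive)

class SecondDevice:
    def __init__(self):
        self.known_displays = {}

    def receive_display_representation(self, display_id: str, frame: bytes) -> None:
        self.known_displays[display_id] = frame

    def send_input(self, first: FirstDevice, directive: str) -> None:
        first.receive_input_message(InputMessage(first.display_id, directive))

a, b = FirstDevice("display-1"), SecondDevice()
a.on_gesture(b)
b.send_input(a, "tap(120, 80)")
print(a.executed)  # ['tap(120, 80)']
```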
  • Patent number: 11709068
    Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
    Type: Grant
    Filed: September 25, 2018
    Date of Patent: July 25, 2023
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair
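One of the core ideas, playing audio so that it seems to come from the direction the user should walk, can be illustrated with a small bearing-to-pan calculation. Everything below (function names, the stereo-pan convention) is an assumption for illustration, not taken from the patent:
```python
# Turn the next waypoint on a computed path into a stereo pan for a directional cue.
import math

def bearing(from_pt, to_pt):
    """Bearing in radians from from_pt to to_pt, both (x, y) in a local frame."""
    return math.atan2(to_pt[1] - from_pt[1], to_pt[0] - from_pt[0])

def stereo_pan(user_pos, user_heading_rad, waypoint):
    """Pan in [-1, 1] with -1 = hard left, +1 = hard right (audio convention)."""
    relative = bearing(user_pos, waypoint) - user_heading_rad
    relative = math.atan2(math.sin(relative), math.cos(relative))  # wrap to (-pi, pi]
    # A positive relative angle is counterclockwise (to the user's left), so negate.
    return max(-1.0, min(1.0, -relative / (math.pi / 2)))

print(stereo_pan((0, 0), 0.0, (1, 1)))   # waypoint ahead-left -> -0.5 (cue pans left)
```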
  • Publication number: 20230055232
    Abstract: A method includes receiving one or more signals that each indicate a device type for a respective remote device, identifying one or more visible devices in one or more images, matching a first device from the one or more visible devices with a first signal from the one or more signals based on a device type of the first device matching a device type for the first signal and based on a visible output of the first device, pairing the first device with a second device, and controlling a function of the first device using the second device.
    Type: Application
    Filed: November 3, 2022
    Publication date: February 23, 2023
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
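As a rough illustration of the matching step, the sketch below pairs a device seen in the camera images with an advertised signal when the device types agree and the observed visible output (here, a blink pattern) matches what the signal claims to emit. All data structures and names are invented:
```python
from dataclasses import dataclass

@dataclass
class Signal:              # received over radio from a nearby device
    device_id: str
    device_type: str
    blink_pattern: str     # pattern the device claims to be flashing

@dataclass
class VisibleDevice:       # detected in the camera images
    device_type: str
    observed_blinks: str   # pattern actually observed on its indicator light

def match(visible: VisibleDevice, signals: list[Signal]) -> Signal | None:
    for s in signals:
        if s.device_type == visible.device_type and s.blink_pattern == visible.observed_blinks:
            return s       # candidate to pair with and then control
    return None

signals = [Signal("speaker-1", "speaker", "1010"), Signal("lamp-7", "lamp", "1100")]
print(match(VisibleDevice("lamp", "1100"), signals).device_id)  # lamp-7
```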
  • Patent number: 11532227
    Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
    Type: Grant
    Filed: December 21, 2021
    Date of Patent: December 20, 2022
    Assignee: Apple Inc.
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
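A hedged sketch of this location-and-type matching, with invented names and a made-up distance tolerance, might look like the following; the matched device then receives the user's input as a control command:
```python
from dataclasses import dataclass
import math

@dataclass
class RemoteDevice:
    device_id: str
    device_type: str
    location: tuple          # (x, y, z) reported by the device

@dataclass
class DetectedDevice:
    device_type: str
    location: tuple          # (x, y, z) estimated from the images

def match(detected, remotes, tolerance_m=0.5):
    for r in remotes:
        close = math.dist(detected.location, r.location) <= tolerance_m
        if close and r.device_type == detected.device_type:
            return r
    return None

remotes = [RemoteDevice("tv-1", "tv", (2.0, 0.0, 1.2)), RemoteDevice("lamp-2", "lamp", (0.5, 1.0, 0.8))]
target = match(DetectedDevice("tv", (2.1, 0.1, 1.2)), remotes)
if target:
    print(f"send 'volume_up' to {target.device_id}")   # user input -> control command
```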
  • Patent number: 11410328
    Abstract: This disclosure relates to maintaining a feature point map. The maintaining can include selectively updating feature points in the feature point map based on an assigned classification of the feature points. For example, when a feature point is assigned a first classification, the feature point is updated whenever information indicates that the feature point should be updated. When the feature point is assigned a second classification different from the first classification, the feature point is not updated even when information indicates that it should be. A classification can be assigned to a feature point using a classification system on one or more pixels of an image corresponding to the feature point.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: August 9, 2022
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Alexandre Da Veiga
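The classification-gated update can be pictured as a guard around the normal map-update step. The labels "movable" and "static" below are illustrative stand-ins for the abstract's first and second classifications, not terms from the patent:
```python
from dataclasses import dataclass

@dataclass
class FeaturePoint:
    position: tuple
    classification: str      # assigned from one or more pixels of the source image

def maybe_update(point: FeaturePoint, observed_position: tuple) -> FeaturePoint:
    if point.classification == "movable":
        return FeaturePoint(observed_position, point.classification)   # accept the update
    return point                                                       # forgo the update

p = FeaturePoint((1.0, 2.0, 0.5), "static")
print(maybe_update(p, (1.3, 2.2, 0.5)).position)   # unchanged: (1.0, 2.0, 0.5)
```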
  • Publication number: 20220245906
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable a device to provide a view of virtual elements and a physical environment where the presentation of the virtual elements is based on positioning relative to the physical environment. In one example, a device is configured to detect a change in positioning of a virtual element, for example, when a virtual element is added, moved, or the physical environment around the virtual element is changed. The location of the virtual element in the physical environment is used to detect an attribute of the physical environment upon which the presentation of the virtual element depends. Thus, the device is further configured to detect an attribute (e.g., surface, table, mid-air, etc.) of the physical environment based on the placement of the virtual element and present the virtual element based on the detected attribute.
    Type: Application
    Filed: April 22, 2022
    Publication date: August 4, 2022
    Inventors: Aaron M. Burns, Bruno M. Sommer, Timothy R. Oriol
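As a simplified illustration, placement-dependent presentation amounts to looking up the physical attribute at the virtual element's location and selecting a presentation style for it. The lookup table and style strings below are invented:
```python
def attribute_at(environment: dict, location: tuple) -> str:
    # Stand-in for scene understanding; here just a lookup table keyed by location.
    return environment.get(location, "mid-air")

def presentation_for(attribute: str) -> str:
    return {
        "table": "rest on surface, cast contact shadow",
        "floor": "rest on surface, full size",
        "mid-air": "float, add subtle drop shadow on nearest surface",
    }.get(attribute, "default placement")

environment = {(0, 0): "floor", (1, 2): "table"}
print(presentation_for(attribute_at(environment, (1, 2))))   # table presentation
```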
  • Patent number: 11348316
    Abstract: Various implementations disclosed herein include devices, systems, and methods that enable a device to provide a view of virtual elements and a physical environment where the presentation of the virtual elements is based on positioning relative to the physical environment. In one example, a device is configured to detect a change in positioning of a virtual element, for example, when a virtual element is added, moved, or the physical environment around the virtual element is changed. The location of the virtual element in the physical environment is used to detect an attribute of the physical environment upon which the presentation of the virtual element depends. Thus, the device is further configured to detect an attribute (e.g., surface, table, mid-air, etc.) of the physical environment based on the placement of the virtual element and present the virtual element based on the detected attribute.
    Type: Grant
    Filed: August 23, 2019
    Date of Patent: May 31, 2022
    Assignee: Apple Inc.
    Inventors: Aaron M. Burns, Bruno M. Sommer, Timothy R. Oriol
  • Publication number: 20220114882
    Abstract: A method includes obtaining a location and a device type for one or more remote devices, and identifying one or more visible devices in one or more images, the one or more visible devices having a location and a device type. The method also includes matching a first visible device from the one or more visible devices with a first remote device from the one or more remote devices based on a location and a device type of the first visible device matching a location and a device type of the first remote device, obtaining a user input, and controlling a function of the first remote device based on the user input.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Publication number: 20220101058
    Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
    Type: Application
    Filed: October 4, 2021
    Publication date: March 31, 2022
    Inventors: Bruno M. Sommer, Alexandre Da Veiga
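The coarse-to-fine pipeline can be sketched as three stages: estimate a rough position from motion data, fetch feature point maps near that estimate, and refine to a full pose by localizing camera images into one of those maps. The code below is a toy version with placeholder stand-ins for each stage, not the patented algorithms:
```python
from dataclasses import dataclass
import math

@dataclass
class Pose:
    position: tuple          # second, finer estimate of position
    orientation_deg: float   # estimate of orientation (heading only, for brevity)

def coarse_localize(motion_samples) -> tuple:
    # Placeholder for motion-sensor dead reckoning: sum step vectors.
    return (sum(dx for dx, _ in motion_samples), sum(dy for _, dy in motion_samples))

def maps_near(position, map_index, radius=10.0):
    # Placeholder: select feature point maps whose anchor lies near the coarse estimate.
    return [m for anchor, m in map_index.items() if math.dist(anchor, position) <= radius]

def fine_localize(images, feature_map) -> Pose | None:
    # Placeholder for matching image features into the map; pretend it succeeds.
    return Pose(position=feature_map["anchor"], orientation_deg=90.0)

def localize(motion_samples, images, map_index) -> Pose | None:
    rough = coarse_localize(motion_samples)
    for feature_map in maps_near(rough, map_index):
        pose = fine_localize(images, feature_map)
        if pose is not None:
            return pose      # refined pose used to render the virtual object image
    return None

map_index = {(10.0, 0.0): {"anchor": (10.0, 0.0), "points": []}}
print(localize([(5.0, 0.0), (4.0, 1.0)], images=[], map_index=map_index))
```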
  • Patent number: 11210932
    Abstract: A method includes identifying remote devices, at a host device, based on received signals that indicate locations and device types for the remote devices. The method also includes identifying visible devices in images of a location and matching a first visible device to a first remote device. The first visible device is matched with the first remote device based on presence of the first visible device within a search area of the images, the search area of the images is determined based on the location for the first remote device, the first visible device is matched with the first remote device based on the device type for the first remote device, and the first visible device is matched with the first remote device based on a machine recognizable indicator that is output by the first visible device. The method also includes pairing the first remote device with the host device.
    Type: Grant
    Filed: May 19, 2020
    Date of Patent: December 28, 2021
    Assignee: Apple Inc.
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
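A compact way to picture the search-area matching is: project the reported location into the image, restrict candidate detections to a box around that projection, then require the device type and a decoded machine-recognizable indicator to agree. The sketch below uses invented names and a fixed pixel box purely for illustration:
```python
from dataclasses import dataclass

@dataclass
class Detection:
    device_type: str
    pixel: tuple            # (u, v) center of the detection in the image
    indicator_code: str     # decoded from e.g. a blinking LED or on-screen glyph

def in_search_area(pixel, center, half_size=80):
    return abs(pixel[0] - center[0]) <= half_size and abs(pixel[1] - center[1]) <= half_size

def match(detections, projected_center, expected_type, expected_code):
    for d in detections:
        if (in_search_area(d.pixel, projected_center)
                and d.device_type == expected_type
                and d.indicator_code == expected_code):
            return d        # candidate to pair with the host device
    return None

detections = [Detection("speaker", (400, 300), "A7"), Detection("speaker", (120, 310), "C2")]
print(match(detections, projected_center=(410, 290), expected_type="speaker", expected_code="A7"))
```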
  • Publication number: 20210349676
    Abstract: In one implementation, a method of providing display device sharing and interactivity in simulated reality is performed at a first electronic device including one or more processors and a non-transitory memory. The method includes obtaining a gesture input to a first display device in communication with the first electronic device from a first user, where the first display device includes a first display. The method further includes transmitting a representation of the first display to a second electronic device in response to obtaining the gesture input. The method additionally includes receiving an input message directed to the first display device from the second electronic device, where the input message includes an input directive obtained by the second electronic device from a second user. The method also includes transmitting the input message to the first display device for execution by the first display device.
    Type: Application
    Filed: September 3, 2019
    Publication date: November 11, 2021
    Inventors: Bruno M. Sommer, Alexandre Da Veiga, Ioana Negoita
  • Publication number: 20210325974
    Abstract: Techniques for displaying a virtual object in an enhanced reality setting in accordance with a physical muting mode being active are described. In some examples, a system obtains context data for one or more physical elements in a physical setting, wherein the context data includes first context data and second context data that is different from the first context data. In some examples, in response to obtaining the context data for the one or more physical elements in the physical setting, a system causes display of a virtual object that represents the one or more physical elements using the first context data without using the second context data, in accordance with a determination that a physical muting mode is active.
    Type: Application
    Filed: June 29, 2021
    Publication date: October 21, 2021
    Inventors: Clément Pierre Nicolas Boissière, Shaun Budhram, Tucker Bull Morgan, Bruno M. Sommer, Connor A. Smith
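Read literally, the muting behavior is a choice about which context data feeds the rendered representation. The sketch below is a loose paraphrase with invented field names; what counts as first versus second context data is an assumption here:
```python
from dataclasses import dataclass

@dataclass
class PhysicalElement:
    first_context: dict      # e.g. coarse shape and position
    second_context: dict     # e.g. detailed appearance that the mode mutes

def virtual_representation(element: PhysicalElement, muting_active: bool) -> dict:
    if muting_active:
        return dict(element.first_context)                      # second context withheld
    return {**element.first_context, **element.second_context}  # full context otherwise

person = PhysicalElement({"shape": "standing figure"}, {"texture": "camera passthrough"})
print(virtual_representation(person, muting_active=True))   # {'shape': 'standing figure'}
```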
  • Publication number: 20210329044
    Abstract: A group communications platform facilitates the sharing of an application environment with other users. The platform may receive a request to initiate a group session for a local user and a remote user. An out-of-process network connection with a system communication channel between a local computing device associated with the local user and a remote computing device associated with the remote user may be established for the group session. A system call may be received from a local instance of a first application on the local computing device to transfer local data to a remote instance of the first application on the remote computing device via the out-of-process network connection. The local data may be transferred to the remote instance of the first application on the remote computing device via the out-of-process network connection and the system communication channel.
    Type: Application
    Filed: April 6, 2021
    Publication date: October 21, 2021
    Inventors: Bruno M. Sommer, Leanid Vouk, Blerim Cici, Berkat S. Tung
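The data path described, from an app instance through a platform-level out-of-process connection to the remote instance of the same app, can be mimicked with a couple of toy classes. The class names and the transfer call below are invented stand-ins, not the platform's actual API:
```python
class GroupSession:
    def __init__(self):
        self.remote_instances = {}        # app_id -> remote instance reachable over the channel

    def register_remote(self, app_id, instance):
        self.remote_instances[app_id] = instance

    def transfer(self, app_id, payload):  # stand-in for the system call in the abstract
        self.remote_instances[app_id].receive(payload)

class AppInstance:
    def __init__(self, name):
        self.name, self.inbox = name, []

    def receive(self, payload):
        self.inbox.append(payload)

session = GroupSession()
remote = AppInstance("whiteboard@remote")
session.register_remote("whiteboard", remote)
session.transfer("whiteboard", {"stroke": [(0, 0), (5, 3)]})
print(remote.inbox)
```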
  • Publication number: 20210311608
    Abstract: A method includes displaying a home ER environment characterized by home ER world coordinates, including one or more diorama-view representations of one or more respective ER environments. Each diorama-view representation includes ER objects arranged in a spatial relationship according to corresponding ER world coordinates. In some implementations, in response to detecting an input directed to a first diorama-view representation, the method includes transforming the home ER environment. Transforming the home ER environment includes transforming the spatial relationship between a subset of the ER objects as a function of the home ER world coordinates and corresponding ER world coordinates.
    Type: Application
    Filed: June 17, 2021
    Publication date: October 7, 2021
    Inventors: Bruno M. Sommer, Tucker Bull Morgan, Shih Sang Chiu, Connor Alexander Smith
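The coordinate transformation at the heart of the abstract can be illustrated with a simple scale-and-offset mapping from diorama-local coordinates into home world coordinates. The specific math below is an assumed simplification (uniform scale, no rotation), not the patent's formulation:
```python
def to_home_coordinates(points, diorama_origin, scale):
    """Map points from diorama-local coordinates into home ER world coordinates."""
    return [tuple(o + scale * p for p, o in zip(point, diorama_origin)) for point in points]

diorama_objects = [(0.1, 0.0, 0.2), (0.4, 0.0, 0.1)]          # meters, miniature scale
expanded = to_home_coordinates(diorama_objects, diorama_origin=(2.0, 0.0, -1.0), scale=10.0)
print(expanded)   # [(3.0, 0.0, 1.0), (6.0, 0.0, 0.0)]
```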
  • Patent number: 11138472
    Abstract: Systems and methods for localization for mobile devices are described. Some implementations may include accessing motion data captured using one or more motion sensors; determining, based on the motion data, a coarse localization, wherein the coarse localization includes a first estimate of position; obtaining one or more feature point maps, wherein the feature point maps are associated with a position of the coarse localization; accessing images captured using one or more image sensors; determining, based on the images, a fine localization pose by localizing into a feature point map of the one or more feature point maps, wherein the fine localization pose includes a second estimate of position and an estimate of orientation; generating, based on the fine localization pose, a virtual object image including a view of a virtual object; and displaying the virtual object image.
    Type: Grant
    Filed: September 19, 2019
    Date of Patent: October 5, 2021
    Assignee: Apple Inc.
    Inventors: Bruno M. Sommer, Alexandre Da Veiga
  • Patent number: 11120600
    Abstract: Systems and methods for generating a video of an emoji that has been puppeted using image, depth, and audio inputs. The inputs can capture a user's facial expressions as well as eye, eyebrow, mouth, and head movements. A pose held by the user can be detected and used to generate supplemental animation. The emoji can further be animated using physical properties associated with the emoji and captured movements. An emoji of a dog can have its ears move in response to an up-and-down movement, or a shaking of the head. The video can be sent in a message to one or more recipients. A sending device can render the puppeted video in accordance with hardware and software capabilities of a recipient's computer device.
    Type: Grant
    Filed: February 14, 2019
    Date of Patent: September 14, 2021
    Assignee: Apple Inc.
    Inventors: Justin D. Stoyles, Alexandre R. Moha, Nicolas V. Scapel, Guillaume P. Barlier, Aurelio Guzman, Bruno M. Sommer, Nina Damasky, Thibaut Weise, Thomas Goossens, Hoan Pham, Brian Amberg
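The physics-driven secondary motion (the dog's ears) can be approximated by a damped spring that trails the captured head motion. The sketch below is a toy model with arbitrary constants, not the patent's animation system:
```python
def animate_ears(head_heights, stiffness=0.3, damping=0.6):
    """Return per-frame ear offsets that trail the head's vertical motion."""
    offset, velocity, frames = 0.0, 0.0, []
    for target in head_heights:
        velocity = damping * velocity + stiffness * (target - offset)
        offset += velocity
        frames.append(round(offset, 3))
    return frames

# Head bobs up then settles; the ear offset overshoots slightly, then follows.
print(animate_ears([0.0, 1.0, 1.0, 1.0, 0.0, 0.0]))
```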
  • Publication number: 20210097729
    Abstract: In one implementation, a method of resolving focal conflict in a computer-generated reality (CGR) environment is performed by a device including a processor, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, an image of a scene including a real object in a particular direction at a first distance from the device. The method includes displaying, on the display, a CGR environment including a virtual object in the particular direction at a second distance from the device. In accordance with a determination that the second distance is less than the first distance, the CGR environment includes the virtual object overlaid on the scene. In accordance with a determination that the second distance is greater than the first distance, the CGR environment includes the virtual object with an obfuscation area that obfuscates at least a portion of the real object within the obfuscation area.
    Type: Application
    Filed: June 23, 2020
    Publication date: April 1, 2021
    Inventors: Alexis Henri Palangie, Shih Sang Chiu, Bruno M. Sommer, Connor Alexander Smith, Aaron Mackay Burns
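The decision in the abstract boils down to comparing the virtual object's distance with the real object's distance along the same direction. Below is a minimal sketch of that comparison with invented names; the equal-distance case is not specified in the abstract and is lumped with the obfuscation branch here:
```python
def render_mode(virtual_distance_m: float, real_distance_m: float) -> str:
    if virtual_distance_m < real_distance_m:
        return "overlay virtual object on the scene"
    return "render virtual object with an obfuscation area over the real object"

print(render_mode(1.0, 2.5))   # nearer virtual object -> overlay
print(render_mode(3.0, 2.5))   # farther virtual object -> obfuscation area
```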
  • Publication number: 20200372789
    Abstract: A method includes identifying remote devices, at a host device, based on received signals that indicate locations and device types for the remote devices. The method also includes identifying visible devices in images of a location and matching a first visible device to a first remote device. The first visible device is matched with the first remote device based on presence of the first visible device within a search area of the images, the search area of the images is determined based on the location for the first remote device, the first visible device is matched with the first remote device based on the device type for the first remote device, and the first visible device is matched with the first remote device based on a machine recognizable indicator that is output by the first visible device. The method also includes pairing the first remote device with the host device.
    Type: Application
    Filed: May 19, 2020
    Publication date: November 26, 2020
    Inventors: Jeffrey S. Norris, Bruno M. Sommer, Alexandre Da Veiga
  • Publication number: 20200264006
    Abstract: Methods and apparatus for spatial audio navigation that may, for example, be implemented by mobile multipurpose devices. A spatial audio navigation system provides navigational information in audio form to direct users to target locations. The system uses directionality of audio played through a binaural audio device to provide navigational cues to the user. A current location, target location, and map information may be input to pathfinding algorithms to determine a real world path between the user's current location and the target location. The system may then use directional audio played through a headset to guide the user on the path from the current location to the target location. The system may implement one or more of several different spatial audio navigation methods to direct a user when following a path using spatial audio-based cues.
    Type: Application
    Filed: September 25, 2018
    Publication date: August 20, 2020
    Applicant: Apple Inc.
    Inventors: Bruno M. Sommer, Avi Bar-Zeev, Frank Angermann, Stephen E. Pinto, Lilli Ing-Marie Jonsson, Rahul Nair