Patents by Inventor Jonathan PERRON

Jonathan PERRON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12287913
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
    Type: Grant
    Filed: September 1, 2023
    Date of Patent: April 29, 2025
    Assignee: Apple Inc.
    Inventors: Jason D. Rickwald, Andrew R. Bacon, Kristi E. Bauerly, Rupert Burton, Jordan A. Cazamias, Tong Chen, Shih-Sang Chiu, Jonathan Perron, Giancarlo Yerkes
  • Publication number: 20250110607
    Abstract: Some examples of the disclosure are directed to systems and methods for displaying virtual presentations associated with a theater application in an augmented or fully-immersive three-dimensional environment. In one or more examples of the disclosure, the systems and methods include receiving a request to join a virtual presentation, and in response to receiving the request to join the virtual presentation, displaying a virtual presentation in a three-dimensional environment. The virtual presentation is displayed in a manner that facilitates efficient communication between one or more presenters and one or more audience members who are part of the virtual presentation.
    Type: Application
    Filed: September 23, 2024
    Publication date: April 3, 2025
    Inventors: Jordan A. CAZAMIAS, Jonathan PERRON
  • Publication number: 20250111596
    Abstract: Generating a 3D representation of a subject includes obtaining image data of a subject, obtaining tracking data for the subject based on the image data, and determining, for a particular frame of the image data, a velocity of the subject in the image data. A transparency treatment is applied to a portion of the 3D representation in accordance with the determined velocity. The portion of the 3D representation to which the transparency treatment is applied includes a shoulder region of the subject.
    Type: Application
    Filed: September 26, 2024
    Publication date: April 3, 2025
    Inventors: Tong CHEN, Jonathan PERRON, Shih-Sang CHIU
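    The velocity-to-transparency mapping this abstract describes can be sketched as follows. This is a minimal illustration, not the patented method: the linear ramp, the velocity bounds, and the function name `shoulder_alpha` are all assumptions for the sketch.

    ```python
    def shoulder_alpha(prev_pos, curr_pos, dt, v_min=0.1, v_max=1.0):
        """Map per-frame subject velocity to an alpha (opacity) value.

        Faster motion yields a more transparent shoulder region, fading
        detail that tracking cannot keep up with. The linear fade and the
        velocity bounds are illustrative assumptions, not from the patent.
        """
        velocity = abs(curr_pos - prev_pos) / dt  # 1-D speed for simplicity
        if velocity <= v_min:
            return 1.0                            # slow: fully opaque
        if velocity >= v_max:
            return 0.0                            # fast: fully transparent
        # linear fade between the two bounds
        return 1.0 - (velocity - v_min) / (v_max - v_min)
    ```

    A renderer would evaluate this per frame for the shoulder region's material and blend accordingly.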
  • Publication number: 20250103132
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: December 6, 2024
    Publication date: March 27, 2025
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Stephen O. LEMAY, Jonathan PERRON, William A. SORRENTINO, III, Giancarlo YERKES, Alan C. DYE
  • Patent number: 12254579
    Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video, including determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, including determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content from moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
    Type: Grant
    Filed: November 10, 2022
    Date of Patent: March 18, 2025
    Assignee: Apple Inc.
    Inventors: Timothy R. Pease, Alexandre Da Veiga, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu, Spencer H. Ray
  • Publication number: 20250008057
    Abstract: In some embodiments, a computer system changes the visual appearance of visual representations of participants moving within a simulated threshold distance of a user of the computer system. In some embodiments, a computer system arranges representations of users according to templates. In some embodiments, a computer system arranges representations of users based on shared content. In some embodiments, a computer system changes a spatial arrangement of participants in accordance with a quantity of participants that are a first type of participant. In some embodiments, a computer system changes a spatial arrangement of elements of a real-time communication session to join a group of participants. In some embodiments, a computer system facilitates interaction with groups of spatial representations of participants of a communication session. In some embodiments, a computer system facilitates updates of a spatial arrangement of participants based on a spatial distribution of the participants.
    Type: Application
    Filed: June 3, 2024
    Publication date: January 2, 2025
    Inventors: Shih-Sang CHIU, Jason D. RICKWALD, Rupert BURTON, Giancarlo YERKES, Stephen O. LEMAY, Jonathan PERRON, Wei WANG
  • Publication number: 20240310907
    Abstract: In one implementation, a method of activating a user interface element is performed at a device including an input device, an eye tracker, a display, one or more processors, and non-transitory memory. The method includes displaying, on the display, a plurality of user interface elements and receiving, via the input device, a user input corresponding to an input location. The method includes determining, using the eye tracker, a gaze location. The method includes, in response to determining that the input location is at least a threshold distance from the gaze location, activating a first user interface element at the gaze location and, in response to determining that the input location is not at least the threshold distance from the gaze location, activating a second user interface element at the input location.
    Type: Application
    Filed: June 14, 2022
    Publication date: September 19, 2024
    Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
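    The selection heuristic this abstract describes can be sketched as follows. This is an illustrative reading, not the claimed implementation; the function names `target_element` and `element_at` are hypothetical.

    ```python
    import math

    def target_element(input_loc, gaze_loc, element_at, threshold):
        """Pick which UI element to activate, per the described heuristic.

        If the input location is at least `threshold` away from the gaze
        location, the input is presumed indirect and the element under the
        gaze is activated; otherwise the element under the input location
        is activated. `element_at` is a hypothetical hit-testing callback.
        """
        dx = input_loc[0] - gaze_loc[0]
        dy = input_loc[1] - gaze_loc[1]
        distance = math.hypot(dx, dy)
        if distance >= threshold:
            return element_at(gaze_loc)   # far apart: trust the eyes
        return element_at(input_loc)      # close together: trust the input
    ```

    In practice `element_at` would map a screen-space point to the frontmost interface element at that point.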
  • Publication number: 20240241616
    Abstract: In one implementation, a method for navigating windows in 3D. The method includes: displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field; detecting a user input directed to the input field; and, in response to detecting the user input directed to the input field: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing the first content pane from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment.
    Type: Application
    Filed: May 11, 2022
    Publication date: July 18, 2024
    Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
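    The pane-navigation flow this abstract describes can be sketched as a stack operation. This is a minimal sketch under assumed names (`ContentPane`, `navigate`) and assumed depth/appearance values; none of these are from the patent.

    ```python
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class ContentPane:
        content: str
        z_depth: float
        appearance: str = "active"

    def navigate(stack: List[ContentPane], new_content: str,
                 push_depth: float = 0.5) -> None:
        """Push the current front pane back and place a new pane in front.

        Mirrors the described flow: the first pane moves to a deeper
        z-depth and changes appearance (here, "dimmed"), while the new
        pane appears at the original depth with the original appearance.
        """
        front = stack[-1]
        new_pane = ContentPane(new_content, z_depth=front.z_depth)
        front.z_depth += push_depth       # move further from the viewer
        front.appearance = "dimmed"
        stack.append(new_pane)
    ```

    Navigating "back" would reverse the operation: pop the front pane and restore the previous pane's depth and appearance.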
  • Publication number: 20240231569
    Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes displaying, in a first area, a first content pane including first content including a link to second content. The method includes, while displaying the first content pane in the first area, receiving a user input selecting the link to the second content and indicating a second area separate from the first area and not displaying a content pane. The method includes, in response to receiving the user input selecting the link to the second content and indicating the second area, displaying, in the second area, a second content pane including the second content.
    Type: Application
    Filed: May 31, 2022
    Publication date: July 11, 2024
    Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
  • Patent number: 12008216
    Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes obtaining a first volumetric object associated with a first content region. The first content region is associated with a first tab. The method includes generating a first volumetric representation of the first volumetric object based on a function of the first tab. The first volumetric representation is displayable within the first tab. The method includes concurrently displaying, on the display, the first content region and the first volumetric representation within the first tab. In some implementations, the method includes changing a view of the first volumetric representation, such as rotating the first volumetric representation or changing the view according to a positional change of the electronic device. In some implementations, the method includes generating a plurality of volumetric representations and classifying the plurality of volumetric representations.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: June 11, 2024
    Assignee: APPLE INC.
    Inventors: Benjamin Hunter Boesel, Jonathan Perron, Shih Sang Chiu, David H. Y. Huang, Jonathan Ravasz, Jordan Alexander Cazamias
  • Publication number: 20240112419
    Abstract: In one implementation, a method for dynamically determining presentation and transitional regions for content delivery. The method includes obtaining a first set of characteristics associated with a physical environment; and detecting a request to cause presentation of virtual content. In response to detecting the request, the method also includes obtaining a second set of characteristics associated with the virtual content; generating a presentation region for the virtual content based at least in part on the first and second sets of characteristics; and generating a transitional region provided to at least partially surround the presentation region based at least in part on the first and second sets of characteristics. The method further includes concurrently presenting the virtual content within the presentation region and the transitional region at least partially surrounding the presentation region.
    Type: Application
    Filed: March 20, 2023
    Publication date: April 4, 2024
    Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
  • Publication number: 20240077937
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 7, 2024
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Jonathan PERRON, Giancarlo YERKES
  • Publication number: 20240037886
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more details, photo-quality images, live content, etc.) of a portion not represented by the 3D representation.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230419625
    Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
    Type: Application
    Filed: September 13, 2023
    Publication date: December 28, 2023
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230343027
    Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 26, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
  • Publication number: 20230333644
    Abstract: Various implementations disclosed herein include devices, systems, and methods for organizing virtual objects within an environment. In some implementations, a method includes obtaining a user input corresponding to a command to associate a virtual object with a region of an environment. A gaze input corresponding to a user focus location in the region is detected. A movement of the virtual object to an object placement location proximate the user focus location is displayed.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
  • Publication number: 20230333641
    Abstract: In accordance with various implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes determining an engagement score associated with an object that is visible at the display. The engagement score characterizes a level of user engagement with respect to the object. The method includes, in response to determining that the engagement score satisfies an engagement criterion, determining an ambience vector associated with the object and presenting content based on the ambience vector. The ambience vector represents a target ambient environment.
    Type: Application
    Filed: December 23, 2022
    Publication date: October 19, 2023
    Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
  • Publication number: 20230334724
    Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects are arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
  • Publication number: 20230316634
    Abstract: In some embodiments, a computer system selectively recenters virtual content to a viewpoint of a user, in the presence of physical or virtual obstacles, and/or automatically recenters one or more virtual objects in response to the display generation component changing state, selectively recenters content associated with a communication session between multiple users in response to detected user input, changes the visual prominence of content included in virtual objects based on viewpoint and/or based on a detected user attention of a user, modifies visual prominence of one or more virtual objects to resolve apparent obscuring of the one or more virtual objects, modifies visual prominence based on user viewpoint relative to virtual objects, concurrently modifies visual prominence based on various types of user interaction, and/or changes an amount of visual impact of an environmental effect in response to detected user input.
    Type: Application
    Filed: January 19, 2023
    Publication date: October 5, 2023
    Inventors: Shih-Sang CHIU, Benjamin H. BOESEL, Jonathan PERRON, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, Jonathan RAVASZ, Nathan GITTER
  • Publication number: 20230281933
    Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video, including determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, including determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content from moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
    Type: Application
    Filed: November 10, 2022
    Publication date: September 7, 2023
    Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY