Patents by Inventor Jonathan PERRON
Jonathan PERRON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12287913
Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
Type: Grant
Filed: September 1, 2023
Date of Patent: April 29, 2025
Assignee: Apple Inc.
Inventors: Jason D. Rickwald, Andrew R. Bacon, Kristi E. Bauerly, Rupert Burton, Jordan A. Cazamias, Tong Chen, Shih-Sang Chiu, Jonathan Perron, Giancarlo Yerkes
-
Publication number: 20250110607
Abstract: Some examples of the disclosure are directed to systems and methods for displaying virtual presentations associated with a theater application in an augmented or fully-immersive three-dimensional environment. In one or more examples of the disclosure, the systems and methods include receiving a request to join a virtual presentation and, in response to receiving the request, displaying the virtual presentation in a three-dimensional environment. The virtual presentation is displayed in a manner that facilitates efficient communication between one or more presenters and one or more audience members who are part of the virtual presentation.
Type: Application
Filed: September 23, 2024
Publication date: April 3, 2025
Inventors: Jordan A. CAZAMIAS, Jonathan PERRON
-
Publication number: 20250111596
Abstract: Generating a 3D representation of a subject includes obtaining image data of the subject, obtaining tracking data for the subject based on the image data, and determining, for a particular frame of the image data, a velocity of the subject in the image data. A transparency treatment is applied to a portion of the virtual representation in accordance with the determined velocity. The portion of the virtual representation to which the transparency treatment is applied includes a shoulder region of the subject.
Type: Application
Filed: September 26, 2024
Publication date: April 3, 2025
Inventors: Tong CHEN, Jonathan PERRON, Shih-Sang CHIU
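The velocity-to-transparency mapping this abstract describes can be sketched roughly as follows. The linear-falloff shape, the function name, and the threshold constants are illustrative assumptions, not details from the filing:

```python
# Hypothetical sketch: map a per-frame subject velocity to an opacity
# applied to the shoulder region of the 3D representation. The falloff
# shape and the v_min/v_max constants are illustrative assumptions.
def shoulder_alpha(velocity, v_min=0.1, v_max=1.0):
    """Return an opacity in [0, 1]: fully opaque at or below v_min,
    fading linearly to fully transparent at v_max and above."""
    if velocity <= v_min:
        return 1.0
    if velocity >= v_max:
        return 0.0
    return 1.0 - (velocity - v_min) / (v_max - v_min)
```

A renderer could apply this alpha only to vertices tagged as belonging to the shoulder region, leaving the face fully opaque.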
-
Publication number: 20250103132
Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
Type: Application
Filed: December 6, 2024
Publication date: March 27, 2025
Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Stephen O. LEMAY, Jonathan PERRON, William A. SORRENTINO, III, Giancarlo YERKES, Alan C. DYE
-
Patent number: 12254579
Abstract: Various implementations disclosed herein include devices, systems, and methods for creating a 3D video, including determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods for playing back a 3D video, including determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
Type: Grant
Filed: November 10, 2022
Date of Patent: March 18, 2025
Assignee: Apple Inc.
Inventors: Timothy R. Pease, Alexandre Da Veiga, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu, Spencer H. Ray
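The two-stage adjustment this abstract describes (remove capture-camera motion when creating the 3D video, then remove viewing-device motion at playback) can be sketched with a deliberately simplified, translation-only camera model. The function names and the reduction of a full transform to a position offset are illustrative assumptions, not the patent's method:

```python
# Hypothetical sketch of the two adjustments described above, reduced to a
# translation-only camera model for clarity. Names and the simplified pose
# representation are illustrative assumptions, not from the filing.

def remove_camera_motion(point, camera_position):
    """First adjustment: re-express a captured point in a fixed world
    coordinate system by subtracting the capturing camera's motion."""
    return tuple(p - c for p, c in zip(point, camera_position))

def remove_viewer_motion(point, viewer_position):
    """Second adjustment at playback: compensate for the viewing device's
    movement relative to the viewing environment."""
    return tuple(p - v for p, v in zip(point, viewer_position))

def stabilized_playback(point, camera_position, viewer_position):
    """Apply both adjustments so content appears anchored for the viewer."""
    return remove_viewer_motion(
        remove_camera_motion(point, camera_position), viewer_position)
```

A real pipeline would use full 6-DoF poses (rotation and translation) per frame rather than plain position offsets.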
-
Publication number: 20250008057
Abstract: In some embodiments, a computer system changes visual appearance of visual representations of participants moving within a simulated threshold distance of a user of the computer system. In some embodiments, a computer system arranges representations of users according to templates. In some embodiments, a computer system arranges representations of users based on shared content. In some embodiments, a computer system changes a spatial arrangement of participants in accordance with a quantity of participants that are a first type of participant. In some embodiments, a computer system changes a spatial arrangement of elements of a real-time communication session to join a group of participants. In some embodiments, a computer system facilitates interaction with groups of spatial representations of participants of a communication session. In some embodiments, a computer system facilitates updates of a spatial arrangement of participants based on a spatial distribution of the participants.
Type: Application
Filed: June 3, 2024
Publication date: January 2, 2025
Inventors: Shih-Sang CHIU, Jason D. RICKWALD, Rupert BURTON, Giancarlo YERKES, Stephen O. LEMAY, Jonathan PERRON, Wei WANG
-
Publication number: 20240310907
Abstract: In one implementation, a method of activating a user interface element is performed at a device including an input device, an eye tracker, a display, one or more processors, and non-transitory memory. The method includes displaying, on the display, a plurality of user interface elements and receiving, via the input device, a user input corresponding to an input location. The method includes determining, using the eye tracker, a gaze location. The method includes, in response to determining that the input location is at least a threshold distance from the gaze location, activating a first user interface element at the gaze location and, in response to determining that the input location is not at least the threshold distance from the gaze location, activating a second user interface element at the input location.
Type: Application
Filed: June 14, 2022
Publication date: September 19, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
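The threshold-based disambiguation this abstract describes (trust gaze when the input lands far from where the user is looking, otherwise trust the input) can be sketched as follows. All names, the 2D coordinate model, and the threshold value are illustrative assumptions, not details from the filing:

```python
# Hypothetical sketch of the gaze/input disambiguation described above.
# Names, the 2D coordinates, and the threshold are illustrative assumptions.
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Element:
    name: str
    x: float
    y: float

def element_to_activate(elements, input_loc, gaze_loc, threshold=0.1):
    """If the input landed at least `threshold` away from the gaze
    location, activate at the gaze location; otherwise at the input."""
    far_apart = math.dist(input_loc, gaze_loc) >= threshold
    target = gaze_loc if far_apart else input_loc
    # Activate the element nearest the chosen location.
    return min(elements, key=lambda e: math.dist((e.x, e.y), target))
```

The idea is that a touch or controller input far from the gaze point is likely imprecise, so the eye tracker becomes the more reliable signal.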
-
Publication number: 20240241616
Abstract: In one implementation, a method for navigating windows in 3D is performed. The method includes: displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field; detecting a user input directed to the input field; and, in response to detecting the user input directed to the input field: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing it from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment.
Type: Application
Filed: May 11, 2022
Publication date: July 18, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
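The pane behavior this abstract describes (focusing the input field pushes the current pane to a deeper z-depth with a changed appearance and places a fresh pane at the front depth) can be modeled as a simple stack. The class, depths, and appearance labels are illustrative assumptions, not details from the filing:

```python
# Hypothetical sketch of the z-depth pane navigation described above.
# The depth values and appearance labels are illustrative assumptions.
from dataclasses import dataclass

FRONT_Z, BACK_Z = 0.0, -0.5  # illustrative z-depths, front is larger

@dataclass
class Pane:
    content: str
    z: float = FRONT_Z
    appearance: str = "normal"

def focus_input_field(stack, new_content):
    """stack[-1] is the frontmost pane. On input-field focus, recede and
    restyle the front pane, then show a new pane at the front depth."""
    front = stack[-1]
    front.z, front.appearance = BACK_Z, "receded"  # second appearance
    stack.append(Pane(new_content))                # new front pane
    return stack
```

Keeping the receded panes on the stack also gives a natural "back" navigation: pop the front pane and restore the previous pane's depth and appearance.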
-
Publication number: 20240231569
Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes displaying, in a first area, a first content pane including first content including a link to second content. The method includes, while displaying the first content pane in the first area, receiving a user input selecting the link to the second content and indicating a second area that is separate from the first area and is not displaying a content pane. The method includes, in response to receiving the user input selecting the link to the second content and indicating the second area, displaying, in the second area, a second content pane including the second content.
Type: Application
Filed: May 31, 2022
Publication date: July 11, 2024
Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
-
Patent number: 12008216
Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes obtaining a first volumetric object associated with a first content region. The first content region is associated with a first tab. The method includes generating a first volumetric representation of the first volumetric object based on a function of the first tab. The first volumetric representation is displayable within the first tab. The method includes concurrently displaying, on the display, the first content region and the first volumetric representation within the first tab. In some implementations, the method includes changing a view of the first volumetric representation, such as by rotating the first volumetric representation or in accordance with a positional change of the electronic device. In some implementations, the method includes generating a plurality of volumetric representations and classifying the plurality of volumetric representations.
Type: Grant
Filed: May 19, 2021
Date of Patent: June 11, 2024
Assignee: APPLE INC.
Inventors: Benjamin Hunter Boesel, Jonathan Perron, Shih Sang Chiu, David H. Y. Huang, Jonathan Ravasz, Jordan Alexander Cazamias
-
Publication number: 20240112419
Abstract: In one implementation, a method for dynamically determining presentation and transitional regions for content delivery is performed. The method includes obtaining a first set of characteristics associated with a physical environment and detecting a request to cause presentation of virtual content. In response to detecting the request, the method also includes obtaining a second set of characteristics associated with the virtual content; generating a presentation region for the virtual content based at least in part on the first and second sets of characteristics; and generating a transitional region provided to at least partially surround the presentation region based at least in part on the first and second sets of characteristics. The method further includes concurrently presenting the virtual content within the presentation region and the transitional region at least partially surrounding the presentation region.
Type: Application
Filed: March 20, 2023
Publication date: April 4, 2024
Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
-
Publication number: 20240077937
Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of a user in environments, such as during a live communication session and/or a live collaboration session.
Type: Application
Filed: September 1, 2023
Publication date: March 7, 2024
Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Jonathan PERRON, Giancarlo YERKES
-
Publication number: 20240037886
Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more detail, photo-quality images, live content, etc.) of a portion not represented by the 3D representation.
Type: Application
Filed: October 16, 2023
Publication date: February 1, 2024
Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
-
Publication number: 20230419625
Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
Type: Application
Filed: September 13, 2023
Publication date: December 28, 2023
Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
-
Publication number: 20230343027
Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
Type: Application
Filed: March 20, 2023
Publication date: October 26, 2023
Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
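The proximity-based co-movement this abstract describes can be sketched as a threshold test run while an object is being dragged. The function name, the dictionary representation of the scene, and the threshold value are illustrative assumptions, not details from the filing:

```python
# Hypothetical sketch: while one virtual object is dragged, any object
# that comes within a threshold distance joins the concurrent movement.
# Names and the threshold value are illustrative assumptions.
import math

def dragged_group(dragged_pos, others, threshold=0.25):
    """Return the names of objects that should move concurrently with
    the drag: every object within `threshold` of the dragged position."""
    return [name for name, pos in others.items()
            if math.dist(dragged_pos, pos) <= threshold]
```

In a real scene graph this test would run each frame of the drag gesture, so objects can be swept up as the first object passes near them.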
-
Publication number: 20230333644
Abstract: Various implementations disclosed herein include devices, systems, and methods for organizing virtual objects within an environment. In some implementations, a method includes obtaining a user input corresponding to a command to associate a virtual object with a region of an environment. A gaze input corresponding to a user focus location in the region is detected. A movement of the virtual object to an object placement location proximate the user focus location is displayed.
Type: Application
Filed: March 20, 2023
Publication date: October 19, 2023
Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
-
Publication number: 20230333641
Abstract: In accordance with various implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes determining an engagement score associated with an object that is visible at the display. The engagement score characterizes a level of user engagement with respect to the object. The method includes, in response to determining that the engagement score satisfies an engagement criterion, determining an ambience vector associated with the object and presenting content based on the ambience vector. The ambience vector represents a target ambient environment.
Type: Application
Filed: December 23, 2022
Publication date: October 19, 2023
Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
-
Publication number: 20230334724
Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects are arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
Type: Application
Filed: March 20, 2023
Publication date: October 19, 2023
Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
-
Publication number: 20230316634
Abstract: In some embodiments, a computer system selectively recenters virtual content to a viewpoint of a user in the presence of physical or virtual obstacles, and/or automatically recenters one or more virtual objects in response to the display generation component changing state; selectively recenters content associated with a communication session between multiple users in response to detected user input; changes the visual prominence of content included in virtual objects based on viewpoint and/or based on a detected user attention of a user; modifies visual prominence of one or more virtual objects to resolve apparent obscuring of the one or more virtual objects; modifies visual prominence based on user viewpoint relative to virtual objects; concurrently modifies visual prominence based on various types of user interaction; and/or changes an amount of visual impact of an environmental effect in response to detected user input.
Type: Application
Filed: January 19, 2023
Publication date: October 5, 2023
Inventors: Shih-Sang CHIU, Benjamin H. BOESEL, Jonathan PERRON, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, Jonathan RAVASZ, Nathan GITTER
-
Publication number: 20230281933
Abstract: Various implementations disclosed herein include devices, systems, and methods for creating a 3D video, including determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of capturing-camera motion. Various implementations disclosed herein include devices, systems, and methods for playing back a 3D video, including determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
Type: Application
Filed: November 10, 2022
Publication date: September 7, 2023
Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY