Patents by Inventor Jonathan PERRON

Jonathan PERRON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12008216
    Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes obtaining a first volumetric object associated with a first content region. The first content region is associated with a first tab. The method includes generating a first volumetric representation of the first volumetric object based on a function of the first tab. The first volumetric representation is displayable within the first tab. The method includes concurrently displaying, on the display, the first content region and the first volumetric representation within the first tab. In some implementations, the method includes changing a view of the first volumetric representation, such as by rotating the first volumetric representation or in response to a positional change of the electronic device. In some implementations, the method includes generating a plurality of volumetric representations and classifying the plurality of volumetric representations.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: June 11, 2024
    Assignee: APPLE INC.
    Inventors: Benjamin Hunter Boesel, Jonathan Perron, Shih Sang Chiu, David H. Y. Huang, Jonathan Ravasz, Jordan Alexander Cazamias
  • Publication number: 20240112419
    Abstract: In one implementation, a method dynamically determines presentation and transitional regions for content delivery. The method includes obtaining a first set of characteristics associated with a physical environment and detecting a request to cause presentation of virtual content. In response to detecting the request, the method also includes obtaining a second set of characteristics associated with the virtual content; generating a presentation region for the virtual content based at least in part on the first and second sets of characteristics; and generating a transitional region provided to at least partially surround the presentation region based at least in part on the first and second sets of characteristics. The method further includes concurrently presenting the virtual content within the presentation region and the transitional region at least partially surrounding the presentation region.
    Type: Application
    Filed: March 20, 2023
    Publication date: April 4, 2024
    Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
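    Illustrative sketch (Swift): the region-generation step in the abstract above can be pictured as deriving a presentation region from the content and environment characteristics and then padding it to obtain a surrounding transitional region. This is a minimal sketch under that reading; the type names, the padding value, and the clamping rule are hypothetical and are not taken from the published application.
        struct Region {
            var x: Double, y: Double, width: Double, height: Double
        }
        struct EnvironmentCharacteristics {
            var availableWidth: Double
            var availableHeight: Double
        }
        struct ContentCharacteristics {
            var preferredWidth: Double
            var preferredHeight: Double
        }
        func makeRegions(environment: EnvironmentCharacteristics,
                         content: ContentCharacteristics,
                         padding: Double = 0.25) -> (presentation: Region, transitional: Region) {
            // Clamp the presentation region to what the physical environment can host.
            let w = min(content.preferredWidth, environment.availableWidth)
            let h = min(content.preferredHeight, environment.availableHeight)
            let presentation = Region(x: 0, y: 0, width: w, height: h)
            // The transitional region at least partially surrounds the presentation region.
            let transitional = Region(x: -padding, y: -padding,
                                      width: w + 2 * padding, height: h + 2 * padding)
            return (presentation, transitional)
        }
        let regions = makeRegions(
            environment: EnvironmentCharacteristics(availableWidth: 3.0, availableHeight: 2.0),
            content: ContentCharacteristics(preferredWidth: 1.6, preferredHeight: 0.9))
        print(regions.presentation, regions.transitional)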
  • Publication number: 20240077937
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of users in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 7, 2024
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Jonathan PERRON, Giancarlo YERKES
  • Publication number: 20240037886
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more details, photo quality images, live, etc.) of a portion not represented by the 3D representation.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230419625
    Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
    Type: Application
    Filed: September 13, 2023
    Publication date: December 28, 2023
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230343027
    Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 26, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
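    Illustrative sketch (Swift): a minimal sketch of the threshold-distance behavior in the abstract above: once a dragged object comes within a threshold distance of another object, movement is applied to both. The types, the 0.1 m threshold, and the function names are assumptions for illustration, not the claimed implementation.
        struct Vec2 { var x: Double; var y: Double }
        func distance(_ a: Vec2, _ b: Vec2) -> Double {
            let dx = a.x - b.x, dy = a.y - b.y
            return (dx * dx + dy * dy).squareRoot()
        }
        final class VirtualObject {
            var position: Vec2
            init(position: Vec2) { self.position = position }
        }
        // Apply a gesture delta to the dragged object; if it is within the
        // threshold distance of the other object, move both concurrently.
        func drag(_ dragged: VirtualObject, by delta: Vec2,
                  near other: VirtualObject, threshold: Double = 0.1) {
            dragged.position.x += delta.x
            dragged.position.y += delta.y
            if distance(dragged.position, other.position) <= threshold {
                other.position.x += delta.x
                other.position.y += delta.y
            }
        }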
  • Publication number: 20230333644
    Abstract: Various implementations disclosed herein include devices, systems, and methods for organizing virtual objects within an environment. In some implementations, a method includes obtaining a user input corresponding to a command to associate a virtual object with a region of an environment. A gaze input corresponding to a user focus location in the region is detected. A movement of the virtual object to an object placement location proximate the user focus location is displayed.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
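    Illustrative sketch (Swift): the placement step in the abstract above can be read as choosing a location proximate to the gaze focus while keeping it inside the target region. A minimal sketch under that reading; the clamping rule and all names are assumptions.
        struct Point { var x: Double; var y: Double }
        struct Rect { var minX: Double; var minY: Double; var maxX: Double; var maxY: Double }
        // Pick an object placement location proximate the user focus location,
        // constrained to the region the user associated the object with.
        func placementLocation(gazeFocus: Point, in region: Rect) -> Point {
            let x = min(max(gazeFocus.x, region.minX), region.maxX)
            let y = min(max(gazeFocus.y, region.minY), region.maxY)
            return Point(x: x, y: y)
        }
        let target = placementLocation(gazeFocus: Point(x: 1.4, y: 0.8),
                                       in: Rect(minX: 0, minY: 0, maxX: 1.2, maxY: 1.0))
        print(target)  // Point(x: 1.2, y: 0.8)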
  • Publication number: 20230333641
    Abstract: In accordance with various implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes determining an engagement score associated with an object that is visible at the display. The engagement score characterizes a level of user engagement with respect to the object. The method includes, in response to determining that the engagement score satisfies an engagement criterion, determining an ambience vector associated with the object and presenting content based on the ambience vector. The ambience vector represents a target ambient environment.
    Type: Application
    Filed: December 23, 2022
    Publication date: October 19, 2023
    Inventors: Benjamin H. Boesel, David H. Huang, Jonathan Perron, Shih-Sang Chiu
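    Illustrative sketch (Swift): a minimal sketch of the engagement-to-ambience flow described above: when an object's engagement score satisfies a criterion, an ambience vector associated with the object is used to present content. The vector components, the 0.7 threshold, and all names are assumptions.
        struct AmbienceVector { var warmth: Double; var brightness: Double; var loudness: Double }
        struct TrackedObject {
            var name: String
            var engagementScore: Double   // level of user engagement with the object
            var ambience: AmbienceVector  // target ambient environment for the object
        }
        func presentAmbientContent(for object: TrackedObject, threshold: Double = 0.7) {
            // Only react once the engagement criterion is satisfied.
            guard object.engagementScore >= threshold else { return }
            let a = object.ambience
            print("Presenting content for \(object.name): warmth \(a.warmth), " +
                  "brightness \(a.brightness), loudness \(a.loudness)")
        }
        presentAmbientContent(for: TrackedObject(
            name: "fireplace",
            engagementScore: 0.85,
            ambience: AmbienceVector(warmth: 0.9, brightness: 0.3, loudness: 0.2)))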
  • Publication number: 20230334724
    Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects is arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships among the virtual objects in the set. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
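    Illustrative sketch (Swift): one way to read the mapping step above is to normalize each object's position within the first region and re-apply the normalized coordinates in the second region, which preserves the relative spatial relationships. A sketch under that assumption; the type names and normalization rule are illustrative.
        struct Placement { var x: Double; var y: Double }
        struct Area { var originX: Double; var originY: Double; var width: Double; var height: Double }
        // Map placements from the first region to the second while keeping
        // the objects' relative spatial arrangement.
        func remap(_ placements: [Placement], from first: Area, to second: Area) -> [Placement] {
            placements.map { p in
                let nx = (p.x - first.originX) / first.width
                let ny = (p.y - first.originY) / first.height
                return Placement(x: second.originX + nx * second.width,
                                 y: second.originY + ny * second.height)
            }
        }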
  • Publication number: 20230316634
    Abstract: In some embodiments, a computer system selectively recenters virtual content to a viewpoint of a user, in the presence of physical or virtual obstacles, and/or automatically recenters one or more virtual objects in response to the display generation component changing state, selectively recenters content associated with a communication session between multiple users in response to detected user input, changes the visual prominence of content included in virtual objects based on viewpoint and/or based on detected attention of a user, modifies visual prominence of one or more virtual objects to resolve apparent obscuring of the one or more virtual objects, modifies visual prominence based on user viewpoint relative to virtual objects, concurrently modifies visual prominence based on various types of user interaction, and/or changes an amount of visual impact of an environmental effect in response to detected user input.
    Type: Application
    Filed: January 19, 2023
    Publication date: October 5, 2023
    Inventors: Shih-Sang CHIU, Benjamin H. BOESEL, Jonathan PERRON, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, Jonathan RAVASZ, Nathan GITTER
  • Publication number: 20230281933
    Abstract: Various implementations disclosed herein include devices, systems, and methods that create a 3D video, which includes determining first adjustments (e.g., first transforms) to video frames (e.g., one or more RGB images and depth images per frame) to align content in a coordinate system and remove the effects of the capturing camera's motion. Various implementations disclosed herein include devices, systems, and methods that play back a 3D video, which includes determining second adjustments (e.g., second transforms) to remove the effects of movement of a viewing electronic device relative to a viewing environment during playback of the 3D video. Some implementations distinguish static content and moving content of the video frames to play back only moving objects or to facilitate concurrent playback of multiple spatially related 3D videos. The 3D video may include images, audio, or 3D video of a video-capture-device user.
    Type: Application
    Filed: November 10, 2022
    Publication date: September 7, 2023
    Inventors: Timothy R. PEASE, Alexandre DA VEIGA, Benjamin H. BOESEL, David H. HUANG, Jonathan PERRON, Shih-Sang CHIU, Spencer H. RAY
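    Illustrative sketch (Swift): the first adjustment described above amounts to expressing each frame's content in a shared coordinate system by undoing the capture camera's per-frame motion. In this sketch the pose is reduced to a translation for brevity; a real transform would also include rotation, and all names are assumptions.
        struct Vec3 { var x: Double; var y: Double; var z: Double }
        struct Frame {
            var cameraPosition: Vec3  // where the capturing camera was for this frame
            var points: [Vec3]        // content observed in camera-relative coordinates
        }
        // Undo the camera's motion so content from different frames lines up
        // in a single coordinate system.
        func alignToWorld(_ frame: Frame) -> [Vec3] {
            frame.points.map { p in
                Vec3(x: p.x + frame.cameraPosition.x,
                     y: p.y + frame.cameraPosition.y,
                     z: p.z + frame.cameraPosition.z)
            }
        }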
  • Publication number: 20230273706
    Abstract: Some examples of the disclosure are directed to methods for spatial placement of avatars in a communication session. In some examples, while a first electronic device is presenting a three-dimensional environment, the first electronic device may receive an input corresponding to a request to enter a communication session with a second electronic device. In some examples, in response to receiving the input, the first electronic device may scan an environment surrounding the first electronic device. In some examples, the first electronic device may identify a placement location in the three-dimensional environment at which to display a virtual object representing a user of the second electronic device. In some examples, the first electronic device displays the virtual object representing the user of the second electronic device at the placement location in the three-dimensional environment. Some examples of the disclosure are directed to methods for spatial refinement in the communication session.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 31, 2023
    Inventors: Connor A. SMITH, Benjamin H. BOESEL, David H. HUANG, Jeffrey S. NORRIS, Jonathan PERRON, Jordan A. CAZAMIAS, Miao REN, Shih-Sang CHIU
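    Illustrative sketch (Swift): the placement-location step above can be pictured as choosing, among candidate spots found by scanning the surroundings, an unoccupied spot whose distance from the viewer is closest to a preferred conversational distance. The candidate model, the 1.5 m preference, and all names are assumptions for illustration.
        struct Spot { var x: Double; var z: Double; var isOccupied: Bool }
        // From scanned candidates, pick a free spot whose distance to the
        // viewer is closest to the preferred distance for the avatar.
        func placementSpot(candidates: [Spot],
                           viewerX: Double, viewerZ: Double,
                           preferredDistance: Double = 1.5) -> Spot? {
            candidates
                .filter { !$0.isOccupied }
                .min(by: { a, b in
                    let da = ((a.x - viewerX) * (a.x - viewerX) + (a.z - viewerZ) * (a.z - viewerZ)).squareRoot()
                    let db = ((b.x - viewerX) * (b.x - viewerX) + (b.z - viewerZ) * (b.z - viewerZ)).squareRoot()
                    return abs(da - preferredDistance) < abs(db - preferredDistance)
                })
        }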
  • Patent number: 11557102
    Abstract: In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on a viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, the electronic device modifies an appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, the electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
    Type: Grant
    Filed: September 17, 2021
    Date of Patent: January 17, 2023
    Assignee: Apple Inc.
    Inventors: Alexis Henri Palangie, Peter D. Anton, Stephen O. Lemay, Christopher D. Mckenzie, Israel Pastrana Vicente, Dorian D. Dargan, Shih-Sang Chiu, Jonathan Perron, Tong Chen
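    Illustrative sketch (Swift): automatically updating an object's orientation based on the user's viewpoint can be reduced, in the horizontal plane, to computing the yaw that turns the object toward the viewpoint. A minimal sketch of that idea; the coordinate convention and names are assumptions.
        import Foundation  // atan2
        struct Position { var x: Double; var z: Double }  // horizontal plane
        // Yaw (radians) about the vertical axis that makes an object at
        // `object` face the user's viewpoint.
        func yawFacing(viewpoint: Position, from object: Position) -> Double {
            atan2(viewpoint.x - object.x, viewpoint.z - object.z)
        }
        let yaw = yawFacing(viewpoint: Position(x: 1.0, z: 2.0),
                            from: Position(x: 0.0, z: 0.0))
        print(yaw)  // about 0.46 rad: the object turns toward the user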
  • Publication number: 20210405743
    Abstract: In one implementation, a method for dynamic media item delivery is provided. The method includes: presenting, via the display device, a first set of media items associated with first metadata; obtaining user reaction information gathered by one or more input devices while presenting the first set of media items; obtaining, via a qualitative feedback classifier, an estimated user reaction state to the first set of media items based on the user reaction information; obtaining one or more target metadata characteristics based on the estimated user reaction state and the first metadata; obtaining a second set of media items associated with second metadata that corresponds to the one or more target metadata characteristics; and presenting, via the display device, the second set of media items associated with the second metadata.
    Type: Application
    Filed: May 18, 2021
    Publication date: December 30, 2021
    Inventors: Benjamin Hunter Boesel, Shih Sang Chiu, Jonathan Perron, David H. Y. Huang
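    Illustrative sketch (Swift): the feedback loop above, estimating a reaction to the first set of items and then selecting a second set whose metadata matches the derived target characteristics, can be sketched with a simple tag model. The reaction categories, tag rules, and names are assumptions; the qualitative feedback classifier itself is not reproduced.
        enum Reaction { case positive, neutral, negative }
        struct MediaItem { var title: String; var tags: Set<String> }
        // Derive target metadata characteristics from the estimated reaction
        // to the first set and the first set's own metadata (tags).
        func targetTags(for reaction: Reaction, firstSetTags: Set<String>) -> Set<String> {
            switch reaction {
            case .positive: return firstSetTags                   // more of the same
            case .neutral:  return firstSetTags.union(["novel"])  // broaden slightly
            case .negative: return ["novel"]                      // change direction
            }
        }
        // Select a second set of media items whose metadata matches the targets.
        func nextSet(from library: [MediaItem], reaction: Reaction,
                     firstSetTags: Set<String>) -> [MediaItem] {
            let wanted = targetTags(for: reaction, firstSetTags: firstSetTags)
            return library.filter { !$0.tags.isDisjoint(with: wanted) }
        }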
  • Patent number: 10650578
    Abstract: Dynamic soft shadows may be generated without resorting to computationally expensive multiple render passes and sampling, or lightmap generation. With disclosed systems and methods, a dynamic soft shadow may be rendered in a single pass, which is sufficiently efficient to run on an untethered virtual reality (VR) device, such as a head-mounted device (HMD). Despite the efficiency, the shadow quality may be markedly superior to that of shadows generated with other methods. In some embodiments, a script may be used with a shader to render a shadow having a realistic size, shape, position, fading factor, and sharpness, based on a position and size of a shadow-casting element and a light vector.
    Type: Grant
    Filed: August 7, 2018
    Date of Patent: May 12, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Tong Chen, Jonathan Perron
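    Illustrative sketch (Swift): in the spirit of the single-pass approach above, shadow parameters can be derived directly from the caster's position and size and the light vector, so no extra render passes are needed. The projection onto a ground plane and the falloff constants below are assumptions; the actual shader is not reproduced.
        struct ShadowParams {
            var offsetX: Double, offsetZ: Double  // where the shadow lands on the ground
            var radius: Double                    // shadow size
            var opacity: Double                   // fading factor
            var sharpness: Double                 // edge softness
        }
        func shadowParams(casterHeight: Double, casterRadius: Double,
                          lightX: Double, lightY: Double, lightZ: Double) -> ShadowParams {
            // Project along the (downward) light vector onto the ground plane y = 0.
            let t = casterHeight / max(-lightY, 0.0001)
            let offsetX = lightX * t
            let offsetZ = lightZ * t
            // Higher casters produce larger, fainter, blurrier shadows.
            let radius = casterRadius * (1.0 + 0.5 * casterHeight)
            let opacity = max(0.0, 1.0 - 0.25 * casterHeight)
            let sharpness = 1.0 / (1.0 + casterHeight)
            return ShadowParams(offsetX: offsetX, offsetZ: offsetZ,
                                radius: radius, opacity: opacity, sharpness: sharpness)
        }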
  • Publication number: 20190347849
    Abstract: Dynamic soft shadows may be generated without resorting to computationally expensive multiple render passes and sampling, or lightmap generation. With disclosed systems and methods, a dynamic soft shadow may be rendered in a single pass, which is sufficiently efficient to run on an untethered virtual reality (VR) device, such as a head-mounted device (HMD). Despite the efficiency, the shadow quality may be markedly superior to that of shadows generated with other methods. In some embodiments, a script may be used with a shader to render a shadow having a realistic size, shape, position, fading factor, and sharpness, based on a position and size of a shadow-casting element and a light vector.
    Type: Application
    Filed: August 7, 2018
    Publication date: November 14, 2019
    Inventors: Tong CHEN, Jonathan PERRON