Patents by Inventor Jordan A. CAZAMIAS

Jordan A. CAZAMIAS has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11995301
    Abstract: Methods for displaying user interfaces in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, user interfaces can have different immersion levels. In some embodiments, a user interface can have a respective immersion level based on its location in the three-dimensional environment or distance from the user. In some embodiments, a user interface can have a respective immersion level based on the state of the user interface. In some embodiments, a user interface can switch from one immersion level to another in response to the user's interaction with the user interface.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: May 28, 2024
    Assignee: Apple Inc.
    Inventors: Benjamin Hylak, Alexis H. Palangie, Jordan A. Cazamias, Nathan Gitter, Aaron M. Burns
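
As a rough illustration of the behavior this abstract describes, the Swift sketch below maps an interface's distance from the user and its state to an immersion level. The type names, distance thresholds, and the take-the-higher-level rule are assumptions for illustration only, not details taken from the patent.

```swift
// Hypothetical sketch: choosing an immersion level from distance and UI state.
// ImmersionLevel, UIState, and the thresholds are illustrative, not from the patent.
enum ImmersionLevel: Int, Comparable {
    case minimal = 0, partial, full
    static func < (lhs: ImmersionLevel, rhs: ImmersionLevel) -> Bool {
        lhs.rawValue < rhs.rawValue
    }
}

enum UIState { case background, focused, mediaPlayback }

/// Pick an immersion level from the interface's distance to the user and its state.
func immersionLevel(distanceToUser: Double, state: UIState) -> ImmersionLevel {
    // Closer interfaces and actively used interfaces get more immersion.
    let byDistance: ImmersionLevel = distanceToUser < 1.0 ? .full
        : distanceToUser < 3.0 ? .partial : .minimal
    let byState: ImmersionLevel = state == .mediaPlayback ? .full
        : state == .focused ? .partial : .minimal
    return max(byDistance, byState)
}

print(immersionLevel(distanceToUser: 0.8, state: .focused))    // full
print(immersionLevel(distanceToUser: 4.0, state: .background)) // minimal
```
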
  • Publication number: 20240152256
    Abstract: A computer system concurrently displays, via a display generation component, a browser toolbar, for a browser that includes a plurality of tabs and a window including first content associated with a first tab of the plurality of tabs. The browser toolbar and the window are overlaying a view of a three-dimensional environment. While displaying the browser toolbar and the window that includes the first content overlaying the view of the three-dimensional environment, the computer system detects a first air gesture that meets first gesture criteria, the air gesture comprising a gaze input directed at a location in the view of the three-dimensional environment that is occupied by the browser toolbar and a hand movement. In response to detecting the first air gesture that meets the first gesture criteria, the computer system displays second content in the window, the second content associated with a second tab of the plurality of tabs.
    Type: Application
    Filed: September 18, 2023
    Publication date: May 9, 2024
    Inventors: Jonathan R. Dascola, Nathan Gitter, Jay Moon, Stephen O. Lemay, Joseph M.W. Luxton, Angel Suet Y. Cheung, Danielle M. Price, Hugo D. Verweij, Kristi E.S. Bauerly, Katherine W. Kolombatovich, Jordan A. Cazamias
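
The plain-Swift sketch below illustrates the kind of gating this abstract describes: a tab switch happens only when the gaze point falls within the toolbar's bounds and the hand movement is large enough to be deliberate. The geometry types, the 5 cm threshold, and the wrap-around behavior are illustrative assumptions, not Apple API or claim language.

```swift
// Hypothetical sketch of a gaze-plus-hand-movement gesture gate for tab switching.
struct Rect3D {
    var minX, maxX, minY, maxY, minZ, maxZ: Double
    func contains(_ p: (x: Double, y: Double, z: Double)) -> Bool {
        (minX...maxX).contains(p.x) && (minY...maxY).contains(p.y) && (minZ...maxZ).contains(p.z)
    }
}

struct AirGesture {
    var gazeTarget: (x: Double, y: Double, z: Double) // where the user is looking
    var handDelta: Double                             // horizontal hand travel, metres
}

/// Return the tab index after a gesture; unchanged unless the gesture criteria are met.
func tabAfterGesture(current: Int, tabCount: Int, gesture: AirGesture, toolbar: Rect3D) -> Int {
    // Gesture criteria: gaze on the toolbar and enough hand movement to be deliberate.
    guard toolbar.contains(gesture.gazeTarget), abs(gesture.handDelta) > 0.05 else { return current }
    let step = gesture.handDelta > 0 ? 1 : -1
    return (current + step + tabCount) % tabCount   // wrap around the tab strip
}
```
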
  • Patent number: 11960657
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Grant
    Filed: March 21, 2023
    Date of Patent: April 16, 2024
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
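
A minimal Swift sketch of the two-stage drop described above: the dragged object must come within a proximity threshold of the drop region, and the tracked focus point must also satisfy the region's focus criterion before the object snaps in. The type names and radii are assumptions, not claim language.

```swift
// Hypothetical sketch: snap an object into a drop region only when both the
// proximity threshold and the focus criterion are satisfied.
struct Point3D {
    var x, y, z: Double
    func distance(to other: Point3D) -> Double {
        let (dx, dy, dz) = (x - other.x, y - other.y, z - other.z)
        return (dx * dx + dy * dy + dz * dz).squareRoot()
    }
}

struct DropRegion {
    var center: Point3D
    var proximityThreshold: Double   // how close the dragged object must be
    var focusRadius: Double          // how close the gaze/focus point must be
}

/// Returns the object's next position: snapped into the drop region when both
/// criteria hold, otherwise left where extremity tracking put it.
func nextPosition(object: Point3D, focus: Point3D, region: DropRegion) -> Point3D {
    let nearRegion = object.distance(to: region.center) <= region.proximityThreshold
    let focused = focus.distance(to: region.center) <= region.focusRadius
    return (nearRegion && focused) ? region.center : object
}
```
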
  • Publication number: 20240077937
    Abstract: The present disclosure generally relates to techniques and user interfaces for controlling and displaying representations of users in environments, such as during a live communication session and/or a live collaboration session.
    Type: Application
    Filed: September 1, 2023
    Publication date: March 7, 2024
    Inventors: Jason D. RICKWALD, Andrew R. BACON, Kristi E. BAUERLY, Rupert BURTON, Jordan A. CAZAMIAS, Tong CHEN, Shih-Sang CHIU, Jonathan PERRON, Giancarlo YERKES
  • Publication number: 20240037886
    Abstract: Various implementations disclosed herein include devices, systems, and methods that generate and share/transmit a 3D representation of a physical environment during a communication session. Some of the elements (e.g., points) of the 3D representation may be replaced to improve the quality and/or efficiency of the modeling and transmitting processes. A user's device may provide a view and/or feedback during a scan of the physical environment during the communication session to facilitate accurate understanding of what is being transmitted. Additional information, e.g., a second representation of a portion of the physical environment, may also be transmitted during a communication session. The second representation may represent an aspect (e.g., more details, photo quality images, live, etc.) of a portion not represented by the 3D representation.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 1, 2024
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
  • Publication number: 20230419625
    Abstract: Various implementations provide a representation of at least a portion of a user within a three-dimensional (3D) environment other than the user's physical environment. Based on detecting a condition, a representation of another object of the user's physical environment is shown to provide context. As examples, a representation of a sitting surface may be shown based on detecting that the user is sitting down, representations of a table and coffee cup may be shown based on detecting that the user is reaching out to pick up a coffee cup, a representation of a second user may be shown based on detecting a voice or the user turning his attention towards a moving object or sound, and a depiction of a puppy may be shown when the puppy's bark is detected.
    Type: Application
    Filed: September 13, 2023
    Publication date: December 28, 2023
    Inventors: Shih-Sang Chiu, Alexandre Da Veiga, David H. Huang, Jonathan Perron, Jordan A. Cazamias
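
A small Swift sketch of the condition-to-context mapping described above. The condition names and the objects revealed for each are illustrative examples drawn from the abstract's own examples, not an exhaustive or authoritative list.

```swift
// Hypothetical sketch: each detected condition reveals representations of the
// related physical objects inside the otherwise virtual 3D environment.
enum DetectedCondition {
    case userSittingDown, userReachingForObject, voiceDetected, petSoundDetected
}

/// Physical-environment elements to surface for a given detected condition.
func contextToShow(for condition: DetectedCondition) -> [String] {
    switch condition {
    case .userSittingDown:       return ["sitting surface"]
    case .userReachingForObject: return ["table", "coffee cup"]
    case .voiceDetected:         return ["second user"]
    case .petSoundDetected:      return ["puppy"]
    }
}

print(contextToShow(for: .userReachingForObject)) // ["table", "coffee cup"]
```
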
  • Publication number: 20230384907
    Abstract: In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a viewpoint of a user of the computer system. In some embodiments, a computer system facilitates manipulation of virtual objects in a virtual environment. In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a reference point determined based on attention of a user of the computer system.
    Type: Application
    Filed: April 11, 2023
    Publication date: November 30, 2023
    Inventors: Benjamin H. BOESEL, Jonathan RAVASZ, Shih-Sang CHIU, Jordan A. CAZAMIAS, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, David H. HUANG
  • Publication number: 20230350537
    Abstract: Devices, methods, and graphical interfaces for content applications displayed in an XR environment provide for an efficient and intuitive user experience. In some embodiments, a content application is displayed in a three-dimensional computer-generated environment. In some embodiments, different viewing modes and user interfaces are available for a content application in a three-dimensional computer-generated environment. In some embodiments, different interactions are available with content items displayed in the XR environment.
    Type: Application
    Filed: December 25, 2022
    Publication date: November 2, 2023
    Inventors: Benjamin HYLAK, Aaron M. BURNS, Nathan GITTER, Jordan A. CAZAMIAS, Alexis H. PALANGIE, James J. OWEN
  • Publication number: 20230343027
    Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 26, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
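
A minimal Swift sketch of the proximity grouping described above: once the dragged object comes within a threshold distance of a second object, the same drag moves both. The 2D vector type and the 0.1 m threshold are assumptions made for illustration.

```swift
// Hypothetical sketch: dragging one object near another makes them move together.
struct Vec2 {
    var x, y: Double
    func distance(to o: Vec2) -> Double {
        ((x - o.x) * (x - o.x) + (y - o.y) * (y - o.y)).squareRoot()
    }
    static func + (a: Vec2, b: Vec2) -> Vec2 { Vec2(x: a.x + b.x, y: a.y + b.y) }
}

/// Apply a drag delta to the first object; if it ends up within the threshold
/// distance of the second object, carry the second object along too.
func applyDrag(first: Vec2, second: Vec2, delta: Vec2, threshold: Double = 0.1) -> (Vec2, Vec2) {
    let movedFirst = first + delta
    let grouped = movedFirst.distance(to: second) <= threshold
    return (movedFirst, grouped ? second + delta : second)
}
```
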
  • Publication number: 20230333644
    Abstract: Various implementations disclosed herein include devices, systems, and methods for organizing virtual objects within an environment. In some implementations, a method includes obtaining a user input corresponding to a command to associate a virtual object with a region of an environment. A gaze input corresponding to a user focus location in the region is detected. A movement of the virtual object to an object placement location proximate the user focus location is displayed.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
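
A brief Swift sketch of the gaze-guided placement described above: on a placement command, the object is assigned the nearest unoccupied slot to the user's gaze point in the target region. The slot model is an assumption, not the filing's method.

```swift
// Hypothetical sketch: choose an object placement location near the gaze focus.
struct Slot {
    var position: (x: Double, y: Double)
    var occupied: Bool
}

/// Return the nearest unoccupied slot to the user's gaze point, if any.
func placementSlot(gaze: (x: Double, y: Double), slots: [Slot]) -> Slot? {
    slots.filter { !$0.occupied }
        .min(by: { a, b in
            let da = (a.position.x - gaze.x) * (a.position.x - gaze.x)
                   + (a.position.y - gaze.y) * (a.position.y - gaze.y)
            let db = (b.position.x - gaze.x) * (b.position.x - gaze.x)
                   + (b.position.y - gaze.y) * (b.position.y - gaze.y)
            return da < db
        })
}
```
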
  • Publication number: 20230334724
    Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects is arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
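
A short Swift sketch of the arrangement mapping described above: objects keep their relative (reading) order from the first arrangement when laid out in the second region, so neighbours stay neighbours. The row layout and spacing parameter are illustrative assumptions.

```swift
// Hypothetical sketch: map a free-form first arrangement onto an evenly spaced
// row in the second region while preserving the objects' relative ordering.
struct Placed {
    var name: String
    var x: Double
    var y: Double
}

func remap(_ objects: [Placed], rowOriginX: Double, rowY: Double, spacing: Double) -> [Placed] {
    // Preserve reading order (top-to-bottom, then left-to-right) from the first arrangement.
    let ordered = objects.sorted { ($0.y, $0.x) < ($1.y, $1.x) }
    return ordered.enumerated().map { index, object in
        Placed(name: object.name, x: rowOriginX + Double(index) * spacing, y: rowY)
    }
}
```
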
  • Publication number: 20230325003
    Abstract: Methods for displaying selectable options in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, one or more selectable options are displayed in a three-dimensional computer-generated environment in accordance with the determination that the one or more criteria have been satisfied, including a criterion that a hand of the user is oriented in a predetermined manner with respect to an electronic device. In some embodiments, a user is able to perform one-handed actuation of a selectable option with a plurality of user inputs and/or gestures that satisfy one or more activation criteria.
    Type: Application
    Filed: March 10, 2023
    Publication date: October 12, 2023
    Inventors: Nathan GITTER, Jordan A. CAZAMIAS, Alexis H. PALANGIE, Aaron M. BURNS, Benjamin HYLAK
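
A minimal Swift sketch of the display criterion described above: options are shown only while a tracked hand is oriented in a predetermined way, modeled here as the palm facing the device within an angular tolerance. The pose model and the 25-degree tolerance are assumptions.

```swift
// Hypothetical sketch: show selectable options only when the hand-orientation
// criterion is satisfied (one criterion among possibly several in the filing).
struct HandPose {
    var palmFacingDeviceAngle: Double // degrees; 0 = palm pointed directly at the device
}

func shouldShowOptions(hand: HandPose?, toleranceDegrees: Double = 25) -> Bool {
    guard let hand else { return false }           // no tracked hand, no options
    return abs(hand.palmFacingDeviceAngle) <= toleranceDegrees
}

print(shouldShowOptions(hand: HandPose(palmFacingDeviceAngle: 10))) // true
print(shouldShowOptions(hand: nil))                                 // false
```
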
  • Publication number: 20230315270
    Abstract: Methods for displaying user interfaces in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, user interfaces can have different immersion levels. In some embodiments, a user interface can have a respective immersion level based on its location in the three-dimensional environment or distance from the user. In some embodiments, a user interface can have a respective immersion level based on the state of the user interface. In some embodiments, a user interface can switch from one immersion level to another in response to the user's interaction with the user interface.
    Type: Application
    Filed: March 10, 2023
    Publication date: October 5, 2023
    Inventors: Benjamin HYLAK, Alexis H. PALANGIE, Jordan A. CAZAMIAS, Nathan GITTER, Aaron M. BURNS
  • Publication number: 20230316674
    Abstract: The present disclosure relates to techniques for improving a user experience for modifying display of an avatar in XR environments. In some embodiments, the techniques include modifying a portion of an avatar based on one or more heuristics. In some embodiments, the techniques include modifying a portion of an avatar after tracking is lost.
    Type: Application
    Filed: March 23, 2023
    Publication date: October 5, 2023
    Inventors: Benjamin H. BOESEL, Rupert BURTON, Jordan A. CAZAMIAS, Shih-Sang CHIU, Jason D. RICKWALD, William A. SORRENTINO, III, Nicolas V. SCAPEL, Giancarlo YERKES
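
A small Swift sketch of one plausible heuristic in the spirit of the abstract above: when tracking of a body part is lost, the corresponding avatar portion fades out over time instead of freezing at its last pose. The fade duration and data model are assumptions, not the patent's method.

```swift
// Hypothetical sketch: modify an avatar portion after tracking for it is lost.
struct AvatarPart {
    var name: String
    var opacity: Double // 1.0 = fully shown
}

/// Fade an untracked portion out over two seconds rather than leaving it frozen.
func updatePart(_ part: AvatarPart, isTracked: Bool, secondsSinceLoss: Double) -> AvatarPart {
    var updated = part
    updated.opacity = isTracked ? 1.0 : max(0.0, 1.0 - secondsSinceLoss / 2.0)
    return updated
}
```
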
  • Publication number: 20230297172
    Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
    Type: Application
    Filed: March 21, 2023
    Publication date: September 21, 2023
    Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
  • Publication number: 20230273706
    Abstract: Some examples of the disclosure are directed to methods for spatial placement of avatars in a communication session. In some examples, while a first electronic device is presenting a three-dimensional environment, the first electronic device may receive an input corresponding to a request to enter a communication session with a second electronic device. In some examples, in response to receiving the input, the first electronic device may scan an environment surrounding the first electronic device. In some examples, the first electronic device may identify a placement location in the three-dimensional environment at which to display a virtual object representing a user of the second electronic device. In some examples, the first electronic device displays the virtual object representing the user of the second electronic device at the placement location in the three-dimensional environment. Some examples of the disclosure are directed to methods for spatial refinement in the communication session.
    Type: Application
    Filed: February 24, 2023
    Publication date: August 31, 2023
    Inventors: Connor A. SMITH, Benjamin H. BOESEL, David H. HUANG, Jeffrey S. NORRIS, Jonathan PERRON, Jordan A. CAZAMIAS, Miao REN, Shih-Sang CHIU
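
A brief Swift sketch of the placement step described above: from candidate locations produced by scanning the surroundings, pick the nearest unobstructed spot in front of the user at which to display the remote participant's avatar. The candidate representation and selection rule are illustrative assumptions.

```swift
// Hypothetical sketch: select a placement location for a remote user's avatar
// from scanned candidate spots around the local user (at the origin).
struct Candidate {
    var x: Double
    var z: Double            // positive z = in front of the user
    var isUnobstructed: Bool
}

/// Choose the nearest unobstructed candidate in front of the user, if any.
func avatarPlacement(from scan: [Candidate]) -> Candidate? {
    scan.filter { $0.isUnobstructed && $0.z > 0 }
        .min(by: { ($0.x * $0.x + $0.z * $0.z) < ($1.x * $1.x + $1.z * $1.z) })
}
```
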