Patents by Inventor Ryan S. Burgoyne

Ryan S. Burgoyne has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230325140
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, a placement point for a selected object is designated at a first position based on a gaze position. In response to a user input, the placement point is moved to a second position that is not based on the gaze position, and the object is placed at the second position.
    Type: Application
    Filed: June 14, 2023
    Publication date: October 12, 2023
    Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR, Timothy R. ORIOL, Alexis H. PALANGIE
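The flow in the abstract above can be sketched roughly as follows: a placement point initially tracks the gaze position, and a user input decouples it and moves it to a second position. This is only an illustrative reading of the abstract; the class and method names are invented, not from the patent.

```python
# Hypothetical sketch of the gaze-based placement flow; all names invented.
from dataclasses import dataclass

@dataclass
class Point:
    x: float
    y: float

class PlacementController:
    def __init__(self, gaze_position: Point):
        # Initially, the placement point is designated at the gaze position.
        self.placement_point = gaze_position
        self.gaze_locked = True

    def on_user_input(self, offset: Point):
        # A user input moves the placement point to a second position
        # that is no longer based on the gaze position.
        self.gaze_locked = False
        self.placement_point = Point(
            self.placement_point.x + offset.x,
            self.placement_point.y + offset.y,
        )

    def place(self, obj):
        # The selected object is placed at the current placement point.
        return (obj, self.placement_point)

ctrl = PlacementController(Point(1.0, 2.0))
ctrl.on_user_input(Point(0.5, -0.5))
obj, where = ctrl.place("chair")
print(where)  # Point(x=1.5, y=1.5)
```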
  • Patent number: 11714592
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Grant
    Filed: September 27, 2021
    Date of Patent: August 1, 2023
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
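The selection logic this abstract describes, gaze at an affordance plus a confirming first input, might look like the following minimal sketch. The gaze test and input event are simplified stand-ins; the function names and tolerance are invented.

```python
# Simplified stand-in for gaze-plus-input affordance selection.
def gaze_at_affordance(gaze_direction, gaze_depth, affordance):
    # The gaze corresponds to the affordance when both the direction
    # and the depth fall within the affordance's region (tolerance invented).
    return (affordance["direction"] == gaze_direction
            and abs(affordance["depth"] - gaze_depth) < 0.1)

def handle_input(gaze_direction, gaze_depth, affordance, first_input):
    # The affordance is selected only while gazed at AND a first input
    # representing a user instruction is received.
    if gaze_at_affordance(gaze_direction, gaze_depth, affordance) and first_input:
        affordance["selected"] = True
    return affordance

aff = {"direction": "center", "depth": 2.0, "selected": False}
handle_input("center", 2.05, aff, first_input=True)
print(aff["selected"])  # True
```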
  • Publication number: 20230031832
    Abstract: A three-dimensional preview of content can be generated and presented at an electronic device in a three-dimensional environment. The three-dimensional preview of content can be presented concurrently with a two-dimensional representation of the content in a content generation environment presented in the three-dimensional environment. While the three-dimensional preview of content is presented in the three-dimensional environment, one or more affordances can be provided for interacting with the one or more computer-generated virtual objects of the three-dimensional preview. The one or more affordances may be displayed with the three-dimensional preview of content in the three-dimensional environment. The three-dimensional preview of content may be presented on a three-dimensional tray and the one or more affordances may be presented in a control bar or other grouping of controls outside the perimeter of the tray and/or along the perimeter of the tray.
    Type: Application
    Filed: July 15, 2022
    Publication date: February 2, 2023
    Inventors: David A. LIPTON, Ryan S. BURGOYNE, Michelle CHUA, Zachary Z. BECKER, Karen N. WONG, Eric G. THIVIERGE, Mahdi NABIYOUNI, Eric CHIU, Tyler L. CASELLA
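A rough data model for the layout described above, a preview on a tray with affordances grouped along or outside the tray's perimeter, could look like this sketch. All types and the circular-tray simplification are assumptions, not from the patent.

```python
# Hypothetical layout model for the 3-D preview on a tray; names invented.
from dataclasses import dataclass, field

@dataclass
class Tray:
    radius: float  # tray perimeter, modeled here as a circle

@dataclass
class Affordance:
    name: str
    distance_from_center: float

@dataclass
class Preview3D:
    tray: Tray
    affordances: list = field(default_factory=list)

    def placement(self, a: Affordance) -> str:
        # Affordances beyond the perimeter go in a control bar;
        # affordances at the perimeter sit along the tray's edge.
        if a.distance_from_center > self.tray.radius:
            return "control bar (outside perimeter)"
        return "along perimeter"

preview = Preview3D(Tray(radius=1.0))
bar = Affordance("rotate", distance_from_center=1.5)
rim = Affordance("scale", distance_from_center=1.0)
print(preview.placement(bar))  # control bar (outside perimeter)
print(preview.placement(rim))  # along perimeter
```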
  • Patent number: 11315215
    Abstract: A magnified portion and an unmagnified portion of a computer-generated reality (CGR) environment are displayed from a first position. In response to receiving an input, a magnified portion of the CGR environment from a second position is displayed with a magnification less than that of the magnified portion of the CGR environment from the first position and a field of view greater than that of the magnified portion of the CGR environment from the first position. A first unmagnified portion of the CGR environment from a third position is displayed with a field of view greater than that of the magnified portion of the CGR environment from the second position. Then, a second unmagnified portion of the CGR environment from the third position is displayed with a field of view greater than that of the first unmagnified portion of the CGR environment from the third position.
    Type: Grant
    Filed: February 20, 2020
    Date of Patent: April 26, 2022
    Assignee: Apple Inc.
    Inventors: Ryan S. Burgoyne, Bradley Peebler, Philipp Rockel
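The staged transition in the abstract above, each step reducing magnification and widening the field of view, can be checked as a sequence of invariants. The numeric values below are invented for illustration; only the ordering matches the abstract.

```python
# Illustrative staged zoom-out: magnification never increases and the
# field of view strictly grows from step to step (values invented).
steps = [
    {"position": "first",  "magnified": True,  "magnification": 4.0, "fov_deg": 20},
    {"position": "second", "magnified": True,  "magnification": 2.0, "fov_deg": 40},
    {"position": "third",  "magnified": False, "magnification": 1.0, "fov_deg": 60},
    {"position": "third",  "magnified": False, "magnification": 1.0, "fov_deg": 90},
]

for prev, cur in zip(steps, steps[1:]):
    assert cur["fov_deg"] > prev["fov_deg"]               # field of view grows
    assert cur["magnification"] <= prev["magnification"]  # magnification shrinks
```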
  • Publication number: 20220012002
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Application
    Filed: September 27, 2021
    Publication date: January 13, 2022
    Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR, Timothy R. ORIOL, Alexis H. PALANGIE
  • Publication number: 20210365107
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable more intuitive and efficient positioning of an object in a 3D layout, for example, in an enhanced reality (ER) setting provided on a device. In some implementations, objects are automatically positioned based on simulated physics that is selectively enabled during the positioning of the object. In some implementations, objects are automatically positioned based on simulated physics and alignment rules. In some implementations, objects are automatically grouped together based on criteria such that a first object that is grouped with a second object moves with the second object automatically in response to movement of the second object but is moveable independent of the second object.
    Type: Application
    Filed: August 3, 2021
    Publication date: November 25, 2021
    Inventors: Austin C. GERMER, Gregory DUQUESNE, Novaira MASOOD, Ryan S. BURGOYNE
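The one-way grouping behavior described above, the grouped object follows its anchor but still moves independently, can be sketched as below. The class and method names are illustrative assumptions, not from the patent.

```python
# Hypothetical sketch of one-way object grouping: followers move with
# their anchor, but moving a follower does not move the anchor.
class SceneObject:
    def __init__(self, name, position):
        self.name = name
        self.position = list(position)
        self.followers = []  # objects grouped to move with this one

    def group(self, other):
        self.followers.append(other)

    def move(self, dx, dy, dz):
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]
        for f in self.followers:
            f.move(dx, dy, dz)

table = SceneObject("table", (0, 0, 0))
lamp = SceneObject("lamp", (0, 1, 0))
table.group(lamp)

table.move(2, 0, 0)   # lamp follows the table
lamp.move(0, 0, 1)    # lamp moves independently; table stays put
print(lamp.position)  # [2, 1, 1]
print(table.position) # [2, 0, 0]
```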
  • Patent number: 11137967
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: October 5, 2021
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair
  • Patent number: 11132162
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Grant
    Filed: March 24, 2020
    Date of Patent: September 28, 2021
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
  • Patent number: 11099634
Abstract: In one implementation, a method of manipulating virtual objects using tracked physical objects is disclosed. The method involves presenting content including a virtual object and a virtual representation of a proxy device physically unassociated with an electronic device on a display of the electronic device. Input is received from the proxy device using an input device of the proxy device that represents a request to create a fixed alignment between the virtual object and the virtual representation in a three-dimensional (“3-D”) coordinate space defined for the content. The fixed alignment is created in response to receiving the input. A position and an orientation of the virtual object in the 3-D coordinate space is dynamically updated using position data that defines movement of the proxy device in the physical environment.
    Type: Grant
    Filed: January 17, 2020
    Date of Patent: August 24, 2021
    Assignee: Apple Inc.
    Inventors: Austin C. Germer, Ryan S. Burgoyne
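The fixed alignment described above can be sketched as storing the virtual object's offset from the proxy at alignment time, then reapplying that offset as proxy position data arrives. Orientation is omitted and all names are invented for illustration.

```python
# Hypothetical sketch of fixed alignment to a tracked proxy device.
class AlignedObject:
    def __init__(self):
        self.offset = None
        self.position = [0.0, 0.0, 0.0]

    def create_alignment(self, proxy_pos, object_pos):
        # Store the object's offset from the proxy at alignment time.
        self.offset = [o - p for o, p in zip(object_pos, proxy_pos)]
        self.position = list(object_pos)

    def on_proxy_moved(self, proxy_pos):
        # Dynamically update the object's position from proxy position
        # data, preserving the fixed alignment.
        self.position = [p + o for p, o in zip(proxy_pos, self.offset)]

aligned = AlignedObject()
aligned.create_alignment(proxy_pos=[1, 0, 0], object_pos=[1, 2, 0])
aligned.on_proxy_moved([3, 1, 0])
print(aligned.position)  # [3, 3, 0]
```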
  • Publication number: 20200273146
    Abstract: A magnified portion and an unmagnified portion of a computer-generated reality (CGR) environment are displayed from a first position. In response to receiving an input, a magnified portion of the CGR environment from a second position is displayed with a magnification less than that of the magnified portion of the CGR environment from the first position and a field of view greater than that of the magnified portion of the CGR environment from the first position. A first unmagnified portion of the CGR environment from a third position is displayed with a field of view greater than that of the magnified portion of the CGR environment from the second position. Then, a second unmagnified portion of the CGR environment from the third position is displayed with a field of view greater than that of the first unmagnified portion of the CGR environment from the third position.
    Type: Application
    Filed: February 20, 2020
    Publication date: August 27, 2020
    Inventors: Ryan S. BURGOYNE, Bradley PEEBLER, Philipp ROCKEL
  • Publication number: 20200241629
Abstract: In one implementation, a method of manipulating virtual objects using tracked physical objects is disclosed. The method involves presenting content including a virtual object and a virtual representation of a proxy device physically unassociated with an electronic device on a display of the electronic device. Input is received from the proxy device using an input device of the proxy device that represents a request to create a fixed alignment between the virtual object and the virtual representation in a three-dimensional (“3-D”) coordinate space defined for the content. The fixed alignment is created in response to receiving the input. A position and an orientation of the virtual object in the 3-D coordinate space is dynamically updated using position data that defines movement of the proxy device in the physical environment.
    Type: Application
    Filed: January 17, 2020
    Publication date: July 30, 2020
    Inventors: Austin C. Germer, Ryan S. Burgoyne
  • Publication number: 20200225746
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
  • Publication number: 20200225747
    Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
    Type: Application
    Filed: March 24, 2020
    Publication date: July 16, 2020
    Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR