Patents by Inventor Ryan S. Burgoyne
Ryan S. Burgoyne has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230325140
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, a placement point for a selected object is designated at a first position based on a gaze position. In response to a user input, the placement point is moved to a second position that is not based on the gaze position, and the object is placed at the second position.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR, Timothy R. ORIOL, Alexis H. PALANGIE
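The abstract describes a two-phase placement flow: the placement point first tracks the gaze, then a user input decouples it from gaze for precise adjustment. A minimal sketch of that flow, using hypothetical names (`PlacementSession`, `nudge`) not drawn from the patent itself:

```python
# Hypothetical sketch of the gaze-then-refine placement flow: the placement
# point tracks the gaze until a user input decouples it, after which it can
# be moved to a second position independent of gaze.

class PlacementSession:
    def __init__(self, gaze_position):
        # Phase 1: the placement point is designated at the gaze position.
        self.placement_point = gaze_position
        self.follows_gaze = True

    def update_gaze(self, gaze_position):
        # While gaze-driven, the placement point tracks the gaze.
        if self.follows_gaze:
            self.placement_point = gaze_position

    def nudge(self, dx, dy):
        # Phase 2: a user input moves the point independently of gaze.
        self.follows_gaze = False
        x, y = self.placement_point
        self.placement_point = (x + dx, y + dy)

    def place(self):
        # The object is placed at the (possibly refined) placement point.
        return self.placement_point


session = PlacementSession(gaze_position=(10.0, 5.0))
session.update_gaze((12.0, 6.0))   # still gaze-driven
session.nudge(0.5, -0.25)          # decouple from gaze and refine
print(session.place())             # (12.5, 5.75)
```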
-
Patent number: 11714592
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Grant
Filed: September 27, 2021
Date of Patent: August 1, 2023
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
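The gaze-conditioned selection the abstract describes (shared by several entries below) can be sketched as a gating check: an input acts on the affordance only while the gaze is determined to correspond to it. The tolerance-based test below is a hypothetical illustration, not the patented method:

```python
# Hypothetical sketch of gaze-conditioned affordance selection: a confirming
# input selects the affordance only while the gaze direction and gaze depth
# are determined to correspond to the affordance (here, via tolerances).

def gaze_hits_affordance(gaze_dir_deg, gaze_depth_m,
                         affordance_dir_deg, affordance_depth_m,
                         dir_tol_deg=2.0, depth_tol_m=0.1):
    """Return True if gaze direction and depth both fall within tolerance."""
    dir_err = abs(gaze_dir_deg - affordance_dir_deg)
    depth_err = abs(gaze_depth_m - affordance_depth_m)
    return dir_err <= dir_tol_deg and depth_err <= depth_tol_m

def handle_input(gaze, affordance, confirmed):
    """Select the affordance only if it is gazed at when the input arrives."""
    if confirmed and gaze_hits_affordance(*gaze, *affordance):
        return "selected"
    return "ignored"

print(handle_input((1.0, 0.95), (0.0, 1.0), confirmed=True))   # selected
print(handle_input((30.0, 0.95), (0.0, 1.0), confirmed=True))  # ignored
```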
-
Publication number: 20230031832
Abstract: A three-dimensional preview of content can be generated and presented at an electronic device in a three-dimensional environment. The three-dimensional preview of content can be presented concurrently with a two-dimensional representation of the content in a content generation environment presented in the three-dimensional environment. While the three-dimensional preview of content is presented in the three-dimensional environment, one or more affordances can be provided for interacting with the one or more computer-generated virtual objects of the three-dimensional preview. The one or more affordances may be displayed with the three-dimensional preview of content in the three-dimensional environment. The three-dimensional preview of content may be presented on a three-dimensional tray, and the one or more affordances may be presented in a control bar or other grouping of controls outside the perimeter of the tray and/or along the perimeter of the tray.
Type: Application
Filed: July 15, 2022
Publication date: February 2, 2023
Inventors: David A. LIPTON, Ryan S. BURGOYNE, Michelle CHUA, Zachary Z. BECKER, Karen N. WONG, Eric G. THIVIERGE, Mahdi NABIYOUNI, Eric CHIU, Tyler L. CASELLA
-
Patent number: 11315215
Abstract: A magnified portion and an unmagnified portion of a computer-generated reality (CGR) environment are displayed from a first position. In response to receiving an input, a magnified portion of the CGR environment from a second position is displayed with a magnification less than that of the magnified portion of the CGR environment from the first position and a field of view greater than that of the magnified portion of the CGR environment from the first position. A first unmagnified portion of the CGR environment from a third position is displayed with a field of view greater than that of the magnified portion of the CGR environment from the second position. Then, a second unmagnified portion of the CGR environment from the third position is displayed with a field of view greater than that of the first unmagnified portion of the CGR environment from the third position.
Type: Grant
Filed: February 20, 2020
Date of Patent: April 26, 2022
Assignee: Apple Inc.
Inventors: Ryan S. Burgoyne, Bradley Peebler, Philipp Rockel
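The abstract steps through a sequence of views in which each successive view has lower magnification and a wider field of view. Under a simple thin-lens-style assumption (not taken from the patent) that angular field of view scales inversely with magnification, the monotonic widening can be sketched as:

```python
# Hypothetical sketch of the magnification/field-of-view trade-off: stepping
# from a higher-magnification view to a lower-magnification one widens the
# field of view, with the unmagnified view widest of all. Assumes a simple
# inverse relationship between magnification and angular FOV.

def field_of_view_deg(base_fov_deg, magnification):
    # Simplified model: angular FOV shrinks in proportion to magnification
    # (real optics are more involved; this only illustrates the ordering).
    return base_fov_deg / magnification

views = [
    ("magnified, first position", 8.0),
    ("magnified, second position", 4.0),   # lower magnification -> wider FOV
    ("unmagnified, third position", 1.0),  # widest FOV
]
fovs = [field_of_view_deg(60.0, m) for _, m in views]
assert fovs == sorted(fovs)  # each step widens the field of view
print(fovs)  # [7.5, 15.0, 60.0]
```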
-
Publication number: 20220012002
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: September 27, 2021
Publication date: January 13, 2022
Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR, Timothy R. ORIOL, Alexis H. PALANGIE
-
Publication number: 20210365107
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable more intuitive and efficient positioning of an object in a 3D layout, for example, in an enhanced reality (ER) setting provided on a device. In some implementations, objects are automatically positioned based on simulated physics that is selectively enabled during the positioning of the object. In some implementations, objects are automatically positioned based on simulated physics and alignment rules. In some implementations, objects are automatically grouped together based on criteria such that a first object that is grouped with a second object moves with the second object automatically in response to movement of the second object, but is moveable independent of the second object.
Type: Application
Filed: August 3, 2021
Publication date: November 25, 2021
Inventors: Austin C. GERMER, Gregory DUQUESNE, Novaira MASOOD, Ryan S. BURGOYNE
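The asymmetric grouping behavior in the abstract (a grouped object follows its parent's movement but remains independently moveable) can be sketched as a one-way parent-child relationship. The class and method names below are hypothetical, for illustration only:

```python
# Hypothetical sketch of the grouping behavior described in the abstract:
# an object grouped with a second object moves with it automatically, but
# can still be moved on its own without moving the second object.

class SceneObject:
    def __init__(self, position):
        self.position = list(position)
        self.children = []

    def group(self, other):
        # Group `other` with this object so it follows this object's moves.
        self.children.append(other)

    def move(self, dx, dy, dz):
        self.position = [p + d for p, d in zip(self.position, (dx, dy, dz))]
        # Moving this object also moves every object grouped with it.
        for child in self.children:
            child.move(dx, dy, dz)


table = SceneObject((0, 0, 0))
lamp = SceneObject((0, 0, 0))
table.group(lamp)
table.move(1, 0, 0)   # the lamp follows the table
lamp.move(0, 1, 0)    # ...but the lamp is still moveable on its own
print(table.position, lamp.position)  # [1, 0, 0] [1, 1, 0]
```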
-
Patent number: 11137967
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Grant
Filed: March 24, 2020
Date of Patent: October 5, 2021
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair
-
Patent number: 11132162
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Grant
Filed: March 24, 2020
Date of Patent: September 28, 2021
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
-
Patent number: 11099634
Abstract: In one implementation, a method of manipulating virtual objects using tracked physical objects is disclosed. The method involves presenting content including a virtual object and a virtual representation of a proxy device physically unassociated with an electronic device on a display of the electronic device. Input is received from the proxy device using an input device of the proxy device that represents a request to create a fixed alignment between the virtual object and the virtual representation in a three-dimensional ("3-D") coordinate space defined for the content. The fixed alignment is created in response to receiving the input. A position and an orientation of the virtual object in the 3-D coordinate space is dynamically updated using position data that defines movement of the proxy device in the physical environment.
Type: Grant
Filed: January 17, 2020
Date of Patent: August 24, 2021
Assignee: Apple Inc.
Inventors: Austin C. Germer, Ryan S. Burgoyne
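The fixed alignment the abstract describes can be modeled as a constant pose offset captured when the alignment is created, after which the virtual object's position and orientation are derived from the proxy device's tracked pose. The sketch below is a hypothetical simplification (orientation reduced to a single yaw angle), not the disclosed implementation:

```python
# Hypothetical sketch of the fixed-alignment behavior: once aligned, the
# virtual object's pose is the tracked proxy device's pose plus a constant
# offset captured at alignment time. Orientation is a single yaw angle here.

class AlignedVirtualObject:
    def __init__(self, position, yaw_deg):
        self.position = tuple(position)
        self.yaw_deg = yaw_deg
        self._offset = None  # no alignment yet

    def align_to(self, proxy_position, proxy_yaw_deg):
        # Capture the constant offset between object and proxy at align time.
        self._offset = (
            tuple(o - p for o, p in zip(self.position, proxy_position)),
            self.yaw_deg - proxy_yaw_deg,
        )

    def on_proxy_moved(self, proxy_position, proxy_yaw_deg):
        # Dynamically update the object's pose from the proxy's new pose.
        if self._offset is None:
            return  # not aligned; the object stays where it is
        pos_off, yaw_off = self._offset
        self.position = tuple(p + o for p, o in zip(proxy_position, pos_off))
        self.yaw_deg = proxy_yaw_deg + yaw_off


obj = AlignedVirtualObject(position=(2.0, 0.0, 0.0), yaw_deg=90.0)
obj.align_to(proxy_position=(0.0, 0.0, 0.0), proxy_yaw_deg=0.0)
obj.on_proxy_moved(proxy_position=(1.0, 1.0, 0.0), proxy_yaw_deg=45.0)
print(obj.position, obj.yaw_deg)  # (3.0, 1.0, 0.0) 135.0
```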
-
Publication number: 20200273146
Abstract: A magnified portion and an unmagnified portion of a computer-generated reality (CGR) environment are displayed from a first position. In response to receiving an input, a magnified portion of the CGR environment from a second position is displayed with a magnification less than that of the magnified portion of the CGR environment from the first position and a field of view greater than that of the magnified portion of the CGR environment from the first position. A first unmagnified portion of the CGR environment from a third position is displayed with a field of view greater than that of the magnified portion of the CGR environment from the second position. Then, a second unmagnified portion of the CGR environment from the third position is displayed with a field of view greater than that of the first unmagnified portion of the CGR environment from the third position.
Type: Application
Filed: February 20, 2020
Publication date: August 27, 2020
Inventors: Ryan S. BURGOYNE, Bradley PEEBLER, Philipp ROCKEL
-
Publication number: 20200241629
Abstract: In one implementation, a method of manipulating virtual objects using tracked physical objects is disclosed. The method involves presenting content including a virtual object and a virtual representation of a proxy device physically unassociated with an electronic device on a display of the electronic device. Input is received from the proxy device using an input device of the proxy device that represents a request to create a fixed alignment between the virtual object and the virtual representation in a three-dimensional ("3-D") coordinate space defined for the content. The fixed alignment is created in response to receiving the input. A position and an orientation of the virtual object in the 3-D coordinate space is dynamically updated using position data that defines movement of the proxy device in the physical environment.
Type: Application
Filed: January 17, 2020
Publication date: July 30, 2020
Inventors: Austin C. Germer, Ryan S. Burgoyne
-
Publication number: 20200225746
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: March 24, 2020
Publication date: July 16, 2020
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
-
Publication number: 20200225747
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: March 24, 2020
Publication date: July 16, 2020
Inventors: Avi BAR-ZEEV, Ryan S. BURGOYNE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Rahul NAIR