Patents by Inventor Aaron M. Burns
Aaron M. Burns has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240144562
Abstract: Systems and processes for operating a digital assistant are provided. An example method includes, at an electronic device having one or more processors and memory, receiving an audio input including an utterance, determining, based on a speaker profile, an identity of a speaker of the utterance, determining whether the identity of the speaker matches a predetermined identity, and in accordance with a determination that the identity of the speaker matches the predetermined identity, selectively adjusting a volume of the utterance relative to a volume of other sound of the audio input and providing an output of the adjusted utterance.
Type: Application
Filed: January 8, 2024
Publication date: May 2, 2024
Inventors: Shiraz AKMAL, Aaron M. BURNS, Brad K. HERMAN
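The claimed flow — identify the speaker, check the identity against a predetermined one, and only then boost that utterance relative to other sound — can be sketched in Python. Everything below (function name, gain parameters) is an illustrative assumption; the application does not disclose an implementation, and speaker identification itself is treated as already done.

```python
def adjust_mix(utterance_gain, background_gain, speaker_id, target_id,
               boost=2.0, duck=0.5):
    """Selectively adjust the utterance volume relative to other sound,
    but only when the identified speaker matches the predetermined identity."""
    if speaker_id != target_id:
        # Identity does not match: leave the audio mix unchanged.
        return utterance_gain, background_gain
    # Identity matches: raise the utterance and lower the other sound.
    return utterance_gain * boost, background_gain * duck
```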
-
Patent number: 11966510
Abstract: A method includes displaying a plurality of computer-generated objects, and obtaining finger manipulation data from a finger-wearable device via a communication interface. In some implementations, the method includes receiving an untethered input vector that includes a plurality of untethered input indicator values. Each of the plurality of untethered input indicator values is associated with one of a plurality of untethered input modalities. In some implementations, the method includes obtaining proxy object manipulation data from a physical proxy object via the communication interface. The proxy object manipulation data corresponds to sensor data associated with one or more sensors integrated in the physical proxy object. The method includes registering an engagement event with respect to a first one of the plurality of computer-generated objects based on a combination of the finger manipulation data, the untethered input vector, and the proxy object manipulation data.
Type: Grant
Filed: February 27, 2023
Date of Patent: April 23, 2024
Assignee: APPLE INC.
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
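A hedged sketch of the registration logic: an engagement event fires only when the finger-wearable data, the untethered input vector, and the proxy-object sensor data jointly indicate engagement. The data model and threshold below are invented for illustration, not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class InputState:
    finger_pinch: bool       # finger manipulation data (wearable device)
    untethered_vector: list  # one indicator value per untethered input modality
    proxy_accel: float       # sensor reading from the physical proxy object

def register_engagement(state, object_hit, accel_threshold=1.0):
    """Register an engagement with a computer-generated object based on a
    combination of all three input sources."""
    untethered_active = any(v > 0 for v in state.untethered_vector)
    proxy_active = state.proxy_accel > accel_threshold
    return bool(object_hit and state.finger_pinch
                and (untethered_active or proxy_active))
```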
-
Publication number: 20240126362
Abstract: In accordance with some embodiments, an exemplary process for controlling representations of virtual objects based on respective user contexts that each correspond to different respective locations in a computer-generated reality (CGR) environment is described.
Type: Application
Filed: December 27, 2023
Publication date: April 18, 2024
Inventors: Aaron M. BURNS, Nathan GITTER, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Patent number: 11960641
Abstract: The present disclosure relates to determining when the head position of a user viewing user interfaces in a computer-generated reality environment is not in a comfortable and/or ergonomic position and repositioning the displayed user interface so that the user will reposition her/his head to view the user interface at a more comfortable and/or ergonomic head position.
Type: Grant
Filed: June 21, 2022
Date of Patent: April 16, 2024
Assignee: Apple Inc.
Inventor: Aaron M. Burns
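The repositioning idea — detect an uncomfortable head pitch and move the interface so the head returns to an ergonomic pose — might look like the following. The comfort range and the clamping strategy are assumptions for illustration, not values from the patent.

```python
def reposition_ui(head_pitch_deg, ui_pitch_deg, comfort_range=(-15.0, 10.0)):
    """If the user's head pitch is outside an ergonomic range, return a new
    UI pitch inside that range so the user re-centers naturally; otherwise
    leave the interface where it is."""
    lo, hi = comfort_range
    if lo <= head_pitch_deg <= hi:
        return ui_pitch_deg
    # Clamp the target toward the nearest edge of the comfort zone.
    return max(lo, min(hi, head_pitch_deg))
```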
-
Patent number: 11960657
Abstract: A method includes, while displaying a computer-generated object at a first position within an environment, obtaining extremity tracking data from an extremity tracker. The first position is outside of a drop region that is viewable using the display. The method includes moving the computer-generated object from the first position to a second position within the environment based on the extremity tracking data. The method includes, in response to determining that the second position satisfies a proximity threshold with respect to the drop region, detecting an input that is associated with a spatial region of the environment. The method includes moving the computer-generated object from the second position to a third position that is within the drop region, based on determining that the spatial region satisfies a focus criterion associated with the drop region.
Type: Grant
Filed: March 21, 2023
Date of Patent: April 16, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Arun Rakesh Yoganandan, Benjamin Hylak, Benjamin R. Blachnitzky, Jordan A. Cazamias, Nicolai Georg
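The two-stage gating — a proximity threshold on position plus a focus criterion on a spatial region (for example, where the user is looking) — can be sketched as follows. The names and the 0.3-unit threshold are illustrative assumptions.

```python
import math

def drop_position(obj_pos, drop_center, focused_region, proximity=0.3):
    """Snap the object into the drop region only when it is both near the
    region AND the attended spatial region satisfies the focus criterion;
    otherwise keep its current position."""
    near = math.dist(obj_pos, drop_center) <= proximity
    focused = focused_region == "drop_region"
    return drop_center if (near and focused) else obj_pos
```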
-
Publication number: 20240112649
Abstract: Exemplary processes are described, including processes to move and/or resize user interface elements in a computer-generated reality environment.
Type: Application
Filed: December 14, 2023
Publication date: April 4, 2024
Inventors: Aaron Mackay BURNS, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Patent number: 11935168
Abstract: Systems and processes for operating a digital assistant are provided. An example method includes, at an electronic device having one or more processors and memory, receiving an audio input including an utterance, determining, based on a speaker profile, an identity of a speaker of the utterance, determining whether the identity of the speaker matches a predetermined identity, and in accordance with a determination that the identity of the speaker matches the predetermined identity, selectively adjusting a volume of the utterance relative to a volume of other sound of the audio input and providing an output of the adjusted utterance.
Type: Grant
Filed: April 6, 2022
Date of Patent: March 19, 2024
Assignee: Apple Inc.
Inventors: Shiraz Akmal, Aaron M. Burns, Brad K. Herman
-
Publication number: 20240086032
Abstract: Methods for displaying and manipulating user interfaces in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, user interfaces can be grouped together into a container. In some embodiments, a user interface that is a member of a container can be manipulated. In some embodiments, manipulating a user interface that is a member of a container can cause the other user interfaces in the same container to be manipulated. In some embodiments, manipulating user interfaces in a container can cause the user interfaces to change one or more orientations and/or rotate about one or more axes.
Type: Application
Filed: November 20, 2023
Publication date: March 14, 2024
Inventors: Alexis H. PALANGIE, Aaron M. BURNS
-
Publication number: 20240086031
Abstract: Methods for displaying and organizing user interfaces in a computer-generated environment provide for an efficient and intuitive user experience. In some embodiments, user interfaces can be grouped together into a container. In some embodiments, user interfaces can be added to a container, removed from a container, or moved from one location in the container to another location. In some embodiments, a visual indication is displayed before a user interface is added to a container. In some embodiments, a user interface can replace an existing user interface in a container. In some embodiments, when moving a user interface in a computer-generated environment, the transparency of a user interface that is obscured can be modified.
Type: Application
Filed: November 20, 2023
Publication date: March 14, 2024
Inventors: Alexis H. PALANGIE, Aaron M. BURNS, Benjamin HYLAK
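The container behaviors described in this and the preceding publication — adding, removing, moving, and replacing user interfaces — reduce to ordered-collection operations. A minimal sketch (the `Container` class and its method names are invented for illustration):

```python
class Container:
    """Groups user interfaces and preserves their ordering."""
    def __init__(self):
        self.windows = []

    def add(self, win, index=None):
        # Insert at a specific location, or append at the end.
        if index is None:
            self.windows.append(win)
        else:
            self.windows.insert(index, win)

    def remove(self, win):
        self.windows.remove(win)

    def move(self, win, index):
        # Move a member from one location in the container to another.
        self.remove(win)
        self.windows.insert(index, win)

    def replace(self, old, new):
        # A user interface can replace an existing one in place.
        self.windows[self.windows.index(old)] = new
```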
-
Publication number: 20240056492
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
-
Publication number: 20240054746
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
-
Publication number: 20240054736
Abstract: An electronic device such as a head-mounted device may present extended reality content such as a representation of a three-dimensional environment. The representation of the three-dimensional environment may be changed between different viewing modes having different immersion levels in response to user input. The three-dimensional environment may represent a multiuser communication session. A multiuser communication session may be saved and subsequently viewed as a replay. There may be an interactive virtual object within the replay of the multiuser communication session. The pose of the interactive virtual object may be manipulated by a user while the replay is paused. Some multiuser communication sessions may be hierarchical multiuser communication sessions with a presenter and audience members. The presenter and audience members may receive generalized feedback based on the audience members during the presentation.
Type: Application
Filed: June 30, 2023
Publication date: February 15, 2024
Inventors: Aaron M. Burns, Adam G. Poulos, Alexis H. Palangie, Benjamin R. Blachnitzky, Charilaos Papadopoulos, David M. Schattel, Ezgi Demirayak, Jia Wang, Reza Abbasian, Ryan S. Carlin
-
Publication number: 20240045579
Abstract: A representation displayed in a three-dimensional environment may be selected with different types of selection inputs. When a representation displayed in the three-dimensional environment is selected, an application corresponding to the representation may be launched in the three-dimensional environment in accordance with the type of selection received. In such embodiments, when the representation is selected with a first type of selection, the application corresponding to the selected representation may be launched in a first manner, and when the representation is selected with a second type of selection, the application corresponding to the selected representation may be launched in a second manner. In some embodiments, the manner in which the application is launched may determine whether one or more previously launched applications continue to be displayed or cease to be displayed in the three-dimensional environment.
Type: Application
Filed: December 21, 2021
Publication date: February 8, 2024
Inventors: Nathan GITTER, Aaron M. BURNS, Benjamin HYLAK, Jonathan R. DASCOLA, Alexis H. PALANGIE
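The selection-dependent launch behavior could be modeled as below, where one selection type replaces previously launched applications and the other launches alongside them. The selection-type names ("primary"/"secondary") are assumptions; the publication does not name the two manners.

```python
def launch(environment, representation, selection_type):
    """Launch the app for a selected representation in one of two manners."""
    app = f"app:{representation}"
    if selection_type == "primary":
        environment.clear()   # previously launched apps cease to be displayed
    environment.append(app)   # the newly launched app is displayed
    return environment
```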
-
Publication number: 20240029371
Abstract: Various implementations disclosed herein include devices, systems, and methods that enable presenting environments including visual representations of multiple applications. In one implementation, a method includes presenting a view of an environment at an electronic device on a display. The view includes visual representations of a plurality of applications. The method further includes determining to provide a first application with access to a control parameter. The control parameter is configured to modify at least a portion of the view of the environment with virtual content, and the portion of the view includes at least a portion of content outside of a view of a visual representation associated with the first application. The method further includes restricting access to the control parameter by other applications, which prevents the other applications from modifying the at least the portion of the view of the environment via the control parameter.
Type: Application
Filed: September 29, 2023
Publication date: January 25, 2024
Inventors: Aaron M. Burns, Alexis H. Palangie, Nathan Gitter, Pol Pla I Conesa
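The access-control idea — grant one application the control parameter that modifies the shared view, and restrict every other application from using it — might be sketched like this. The class, the `tint` parameter, and the method names are invented for illustration.

```python
class SharedView:
    """Holds a view-wide control parameter that only one app may modify."""
    def __init__(self):
        self._holder = None   # app currently granted access
        self.tint = None      # example view-modifying control parameter

    def grant(self, app):
        self._holder = app

    def set_tint(self, app, value):
        if app != self._holder:
            # Other applications are prevented from modifying the view.
            raise PermissionError(f"{app} lacks access to the control parameter")
        self.tint = value
```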
-
Patent number: 11861056
Abstract: In accordance with some embodiments, an exemplary process for controlling representations of virtual objects based on respective user contexts that each correspond to different respective locations in a computer-generated reality (CGR) environment is described.
Type: Grant
Filed: August 6, 2021
Date of Patent: January 2, 2024
Assignee: Apple Inc.
Inventors: Aaron M. Burns, Nathan Gitter, Alexis H. Palangie, Pol Pla I Conesa, David M. Schattel
-
Publication number: 20230376110
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, a display, and an extremity tracker. The method includes obtaining extremity tracking data via the extremity tracker. The method includes displaying a computer-generated representation of a trackpad that is spatially associated with a physical surface. The physical surface is viewable within the display along with a content manipulation region that is separate from the computer-generated representation of the trackpad. The method includes identifying a first location within the computer-generated representation of the trackpad based on the extremity tracking data. The method includes mapping the first location to a corresponding location within the content manipulation region. The method includes displaying an indicator indicative of the mapping. The indicator may overlap the corresponding location within the content manipulation region.
Type: Application
Filed: February 27, 2023
Publication date: November 23, 2023
Inventors: Adam G. Poulos, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky, Nicolai Georg
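The core of this claim is a coordinate mapping from a location on the virtual trackpad to the corresponding location in the separate content manipulation region. A normalized-coordinate sketch (the rectangle layout and function name are assumptions):

```python
def map_trackpad_to_region(touch, trackpad_rect, region_rect):
    """Map an (x, y) touch on the trackpad to the content manipulation
    region. Rects are (x, y, width, height)."""
    tx, ty, tw, th = trackpad_rect
    rx, ry, rw, rh = region_rect
    u = (touch[0] - tx) / tw           # normalized horizontal position
    v = (touch[1] - ty) / th           # normalized vertical position
    return (rx + u * rw, ry + v * rh)  # indicator location in the region
```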
-
Publication number: 20230350537
Abstract: Devices, methods, and graphical interfaces for content applications displayed in an XR environment provide for an efficient and intuitive user experience. In some embodiments, a content application is displayed in a three-dimensional computer-generated environment. In some embodiments, different viewing modes and user interfaces are available for a content application in a three-dimensional computer-generated environment. In some embodiments, different interactions are available with content items displayed in the XR environment.
Type: Application
Filed: December 25, 2022
Publication date: November 2, 2023
Inventors: Benjamin HYLAK, Aaron M. BURNS, Nathan GITTER, Jordan A. CAZAMIAS, Alexis H. PALANGIE, James J. OWEN
-
Publication number: 20230343027
Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
Type: Application
Filed: March 20, 2023
Publication date: October 26, 2023
Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
-
Publication number: 20230333645
Abstract: In one implementation, a method of processing input for multiple devices is performed by a first electronic device with one or more processors and non-transitory memory. The method includes determining a gaze direction. The method includes selecting a target electronic device based on determining that the gaze direction is directed to the target electronic device. The method includes receiving, via an input device, one or more inputs. The method includes processing the one or more inputs based on the target electronic device.
Type: Application
Filed: May 12, 2023
Publication date: October 19, 2023
Inventors: Alexis H. Palangie, Aaron M. Burns, Arun Rakesh Yoganandan, Benjamin R. Blachnitzky
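The routing logic — pick the target device from the gaze direction, then deliver subsequent inputs to it — can be sketched with a simple lookup. The dictionary-of-queues model below is an assumption for illustration:

```python
def route_input(gaze_direction, devices, event):
    """Select the device the gaze is directed at and process the input on
    that device; return the chosen device key, or None if no device matches."""
    queue = devices.get(gaze_direction)
    if queue is None:
        return None       # gaze not directed at any known device
    queue.append(event)   # input is processed based on the target device
    return gaze_direction
```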
-
Publication number: 20230334724
Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects are arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
Type: Application
Filed: March 20, 2023
Publication date: October 19, 2023
Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
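One way to read the mapping step: derive the second spatial arrangement from a spatial relationship in the first arrangement, here left-to-right order. The ordering rule is purely an illustrative assumption; the publication does not specify which relationships drive the mapping.

```python
def remap_arrangement(positions):
    """Map free-form 2D positions (first viewing arrangement) to ordered
    slots in a second arrangement, preserving the objects' left-to-right
    spatial relationship."""
    ranked = sorted(positions, key=lambda p: p[0])  # order by x-coordinate
    return {tuple(p): slot for slot, p in enumerate(ranked)}
```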