Patents by Inventor Aaron Mackay Burns
Aaron Mackay Burns has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240112649
Abstract: Exemplary processes are described, including processes to move and/or resize user interface elements in a computer-generated reality environment.
Type: Application
Filed: December 14, 2023
Publication date: April 4, 2024
Inventors: Aaron Mackay BURNS, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Publication number: 20240062487
Abstract: In an exemplary process, a computer-generated reality environment comprising a virtual object is presented, and user movement that occurs in a physical environment is detected. In response to determining that the detected user movement is toward the virtual object and that the virtual object obstructs a real object from the physical environment, a determination is made whether the detected user movement is directed to the virtual object or the real object. In accordance with a determination that the detected user movement is directed to the real object, a visual appearance of the virtual object is modified, where modifying the visual appearance of the virtual object comprises presenting at least a portion of the real object. In accordance with a determination that the detected user movement is directed to the virtual object, the presentation of the virtual object is maintained to obstruct the real object.
Type: Application
Filed: October 31, 2023
Publication date: February 22, 2024
Inventors: Alexis PALANGIE, Aaron Mackay BURNS
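The decision logic in this abstract can be sketched as follows. This is an illustrative reading, not the patented implementation: when the detected movement targets the obstructed real object, the occluding virtual object's appearance is modified (here, by lowering its opacity) so a portion of the real object shows through; otherwise the virtual object keeps obstructing it. The function name and the opacity threshold are assumptions.

```python
def update_virtual_object(movement_target: str, virtual_opacity: float = 1.0) -> float:
    """Return the opacity the occluding virtual object should render with.

    movement_target: "real" if the detected user movement is directed to the
    obstructed real object, "virtual" if it is directed to the virtual object.
    """
    if movement_target == "real":
        # Modify the visual appearance so at least a portion of the
        # real object behind the virtual object is presented.
        return min(virtual_opacity, 0.3)
    # Movement is directed to the virtual object: maintain presentation.
    return virtual_opacity
```

The key point the abstract makes is that the same movement toward the virtual object is disambiguated into two cases, and only the "directed to the real object" case triggers the appearance change.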
-
Patent number: 11893964
Abstract: In accordance with some embodiments, an exemplary process is described for dynamically controlling the size of a display based on movement of a visual object meeting a criterion in a computer-generated reality (CGR) environment.
Type: Grant
Filed: January 20, 2023
Date of Patent: February 6, 2024
Assignee: Apple Inc.
Inventors: Aaron Mackay Burns, Alexis H. Palangie, Pol Pla I Conesa, David M. Schattel
-
Patent number: 11842449
Abstract: In an exemplary process, a computer-generated reality environment comprising a virtual object is presented, and user movement that occurs in a physical environment is detected. In response to determining that the detected user movement is toward the virtual object and that the virtual object obstructs a real object from the physical environment, a determination is made whether the detected user movement is directed to the virtual object or the real object. In accordance with a determination that the detected user movement is directed to the real object, a visual appearance of the virtual object is modified, where modifying the visual appearance of the virtual object comprises presenting at least a portion of the real object. In accordance with a determination that the detected user movement is directed to the virtual object, the presentation of the virtual object is maintained to obstruct the real object.
Type: Grant
Filed: August 25, 2020
Date of Patent: December 12, 2023
Assignee: Apple Inc.
Inventors: Alexis Palangie, Aaron Mackay Burns
-
Publication number: 20230334793
Abstract: In accordance with some embodiments, an exemplary process is described for dynamically controlling the size of a display based on movement of a visual object meeting a criterion in a computer-generated reality (CGR) environment.
Type: Application
Filed: January 20, 2023
Publication date: October 19, 2023
Inventors: Aaron Mackay BURNS, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Patent number: 11768546
Abstract: In one implementation, a method for visually indicating positional/rotational information of a finger-wearable device includes: determining a set of translational values and a set of rotational values for the finger-wearable device, wherein the finger-wearable device is worn on a finger of a user; displaying, via a display, a visual representation of a location of the finger-wearable device based on the set of translational values; generating a visual representation of a grasp region of the user based on the set of translational values and the set of rotational values; and concurrently displaying, via the display, the visual representation of the grasp region with the visual representation of the location of the finger-wearable device.
Type: Grant
Filed: August 22, 2021
Date of Patent: September 26, 2023
Assignee: APPLE INC.
Inventors: Adam Gabriel Poulos, Benjamin Rolf Blachnitzky, Nicolai Philip Georg, Arun Rakesh Yoganandan, Aaron Mackay Burns
-
Publication number: 20230035941
Abstract: Systems and processes for speech interpretation based on environmental context are provided. For example, a user gaze direction is detected, and a speech input is received from a first user of the electronic device. In accordance with a determination that the user gaze is directed at a digital assistant object, the speech input is processed by the digital assistant. In accordance with a determination that the user gaze is not directed at a digital assistant object, contextual information associated with the electronic device is obtained, wherein the contextual information includes speech from a second user. A determination is made whether the speech input is directed to a digital assistant of the electronic device. In accordance with a determination that the speech input is directed to a digital assistant of the electronic device, the speech input is processed by the digital assistant.
Type: Application
Filed: October 13, 2021
Publication date: February 2, 2023
Inventors: Brad Kenneth HERMAN, Shiraz AKMAL, Aaron Mackay BURNS
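The gaze-gated routing this abstract describes can be sketched as a two-stage decision: gaze at the assistant object routes speech to the assistant directly; otherwise contextual information (including a second user's speech) determines whether the utterance was meant for the assistant at all. This is a hypothetical illustration; the function names and the string return values are assumptions, not an actual API.

```python
from typing import Callable

def route_speech(
    gaze_on_assistant: bool,
    speech: str,
    context_says_directed: Callable[[str], bool],
) -> str:
    """Decide whether a speech input is handled by the digital assistant."""
    if gaze_on_assistant:
        # Gaze at the digital assistant object: process immediately.
        return f"assistant handles: {speech}"
    # Gaze elsewhere: consult contextual information (e.g. whether the
    # utterance fits an ongoing conversation with a second user).
    if context_says_directed(speech):
        return f"assistant handles: {speech}"
    return "ignored"
```

In practice `context_says_directed` would stand in for the contextual classifier the abstract alludes to; here any callable taking the speech string works.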
-
Patent number: 11521581
Abstract: In accordance with some embodiments, an exemplary process is described for dynamically controlling the size of a display based on movement of a visual object meeting a criterion in a computer-generated reality (CGR) environment.
Type: Grant
Filed: August 9, 2021
Date of Patent: December 6, 2022
Assignee: Apple Inc.
Inventors: Aaron Mackay Burns, Alexis H. Palangie, Pol Pla I Conesa, David M. Schattel
-
Publication number: 20220121344
Abstract: In some embodiments, an electronic device enhances interactions with virtual objects in a three-dimensional environment. In some embodiments, an electronic device enhances interactions with selectable user interface elements. In some embodiments, an electronic device enhances interactions with slider user interface elements. In some embodiments, an electronic device moves virtual objects in a three-dimensional environment and facilitates accessing actions associated with virtual objects.
Type: Application
Filed: September 25, 2021
Publication date: April 21, 2022
Inventors: Israel PASTRANA VICENTE, Jonathan R. DASCOLA, Wesley M. HOLDER, Alexis Henri PALANGIE, Aaron Mackay BURNS, Pol PLA I CONESA, William A. SORRENTINO, III, Stephen O. LEMAY, Christopher D. MCKENZIE, Shih-Sang Chiu, Benjamin Hunter Boesel, Jonathan Ravasz
-
Publication number: 20210365108
Abstract: In accordance with some embodiments, an exemplary process for controlling representations of virtual objects based on respective user contexts that each correspond to different respective locations in a computer-generated reality (CGR) environment is described.
Type: Application
Filed: August 6, 2021
Publication date: November 25, 2021
Inventors: Aaron Mackay BURNS, Nathan GITTER, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Publication number: 20210366440
Abstract: In accordance with some embodiments, an exemplary process is described for dynamically controlling the size of a display based on movement of a visual object meeting a criterion in a computer-generated reality (CGR) environment.
Type: Application
Filed: August 9, 2021
Publication date: November 25, 2021
Inventors: Aaron Mackay BURNS, Alexis H. PALANGIE, Pol PLA I CONESA, David M. SCHATTEL
-
Publication number: 20210216146
Abstract: A method includes detecting, via a first one of a plurality of input devices, a primary input directed to a first candidate virtual spatial location of a computer-generated reality (CGR) environment. The first candidate virtual spatial location is an output of an extremity tracking function based on the primary input. The method includes detecting, via a second one of the plurality of input devices, a secondary input directed to a second candidate virtual spatial location of the CGR environment. The second candidate virtual spatial location is an output of an eye tracking function based on the secondary input. The method includes positioning a user-controlled spatial selector to a virtual spatial location of the CGR environment as a function of the first and second candidate virtual spatial locations.
Type: Application
Filed: January 13, 2021
Publication date: July 15, 2021
Inventors: Aaron Mackay Burns, Jordan Alexander Cazamias, Nicolai Philip Georg
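The abstract's "function of the first and second candidate virtual spatial locations" is left open; one plausible sketch is a confidence-weighted blend of the extremity-tracking candidate and the eye-tracking candidate. The weighting scheme below is purely an illustrative assumption, not the claimed method.

```python
from typing import Tuple

Vec3 = Tuple[float, float, float]

def fuse_candidates(extremity: Vec3, eye: Vec3, extremity_weight: float = 0.7) -> Vec3:
    """Position a spatial selector from two candidate virtual spatial locations.

    extremity: candidate from the extremity (hand) tracking function.
    eye: candidate from the eye tracking function.
    extremity_weight: assumed relative confidence in extremity tracking.
    """
    w = extremity_weight
    # Component-wise weighted average of the two candidate locations.
    return tuple(w * a + (1 - w) * b for a, b in zip(extremity, eye))
```

Other combining functions fit the claim language equally well, e.g. snapping to the eye-tracked candidate when the two candidates diverge beyond a threshold.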
-
Publication number: 20210097729
Abstract: In one implementation, a method of resolving focal conflict in a computer-generated reality (CGR) environment is performed by a device including a processor, non-transitory memory, an image sensor, and a display. The method includes capturing, using the image sensor, an image of a scene including a real object in a particular direction at a first distance from the device. The method includes displaying, on the display, a CGR environment including a virtual object in the particular direction at a second distance from the device. In accordance with a determination that the second distance is less than the first distance, the CGR environment includes the virtual object overlaid on the scene. In accordance with a determination that the second distance is greater than the first distance, the CGR environment includes the virtual object with an obfuscation area that obfuscates at least a portion of the real object within the obfuscation area.
Type: Application
Filed: June 23, 2020
Publication date: April 1, 2021
Inventors: Alexis Henri Palangie, Shih Sang Chiu, Bruno M. Sommer, Connor Alexander Smith, Aaron Mackay Burns
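The distance comparison at the heart of this abstract reduces to a small branch: a virtual object nearer than the real object along the same direction is simply overlaid; one farther away is rendered with an obfuscation area so depth cues stay consistent. A minimal sketch, with hypothetical mode labels:

```python
def focal_conflict_mode(real_distance: float, virtual_distance: float) -> str:
    """Resolve a focal conflict between a real and a virtual object
    that lie in the same direction from the device."""
    if virtual_distance < real_distance:
        # Virtual object is closer: draw it over the captured scene.
        return "overlay"
    # Virtual object is farther: obfuscate at least a portion of the
    # real object within an obfuscation area around the virtual object.
    return "obfuscation-area"
```

The obfuscation case is the interesting one: without it, a "farther" virtual object would still draw on top of a nearer real object, contradicting the depth the user perceives.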
-
Publication number: 20210097766
Abstract: In an exemplary process, a computer-generated reality environment comprising a virtual object is presented, and user movement that occurs in a physical environment is detected. In response to determining that the detected user movement is toward the virtual object and that the virtual object obstructs a real object from the physical environment, a determination is made whether the detected user movement is directed to the virtual object or the real object. In accordance with a determination that the detected user movement is directed to the real object, a visual appearance of the virtual object is modified, where modifying the visual appearance of the virtual object comprises presenting at least a portion of the real object. In accordance with a determination that the detected user movement is directed to the virtual object, the presentation of the virtual object is maintained to obstruct the real object.
Type: Application
Filed: August 25, 2020
Publication date: April 1, 2021
Inventors: Alexis PALANGIE, Aaron Mackay BURNS
-
Patent number: 10754496
Abstract: Examples are disclosed herein that relate to receiving virtual reality input. An example provides a head-mounted display device comprising a sensor system, a display, a logic machine, and a storage machine holding instructions executable by the logic machine. The instructions are executable to execute a 3D virtual reality experience on the head-mounted display device; track, via the sensor system, a touch-sensitive input device; render, on the display, in a 3D location in the 3D virtual reality experience based on the tracking of the touch-sensitive input device, a user interface; receive, via a touch sensor of the touch-sensitive input device, a user input; and, in response to receiving the user input, control the 3D virtual reality experience to thereby vary visual content being rendered on the display.
Type: Grant
Filed: August 24, 2017
Date of Patent: August 25, 2020
Assignee: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam Kiemele, Michael Robert Thomas, Alexandre Da Veiga, Christian Michael Sadak, Bryant Daniel Hawthorne, Aaron D. Krauss, Aaron Mackay Burns
-
Patent number: 10545900
Abstract: In various embodiments, methods and systems are provided for detecting a physical configuration of a device based on sensor data from one or more configuration sensors. The physical configuration includes a position of a first display region of the device with respect to a second display region of the device, where the position is physically adjustable. A configuration profile is selected from a plurality of configuration profiles based on the detected physical configuration of the device. Each configuration profile is a representation of at least one respective physical configuration of the device. An interaction mode corresponding to the selected configuration profile is activated, where the interaction mode includes a set of mode input/output (I/O) features available while the interaction mode is active. Device interfaces of the device are managed using at least some mode I/O features in the set of mode I/O features based on the activating of the interaction mode.
Type: Grant
Filed: September 23, 2016
Date of Patent: January 28, 2020
Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Aaron Mackay Burns, Riccardo Giraldi, Christian Klein, Roger Sebastian Kevin Sylvan, John Benjamin George Hesketh, Scott G Wade
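The profile-selection pipeline this abstract describes (sensed configuration → configuration profile → interaction mode with its I/O feature set) can be sketched for a hinged dual-screen device. The profile names, angle thresholds, and feature sets below are assumptions for illustration only.

```python
def select_profile(hinge_angle_deg: float) -> str:
    """Map a sensed hinge angle (the adjustable position of one display
    region relative to the other) to a configuration profile."""
    if hinge_angle_deg < 30:
        return "closed"
    if hinge_angle_deg < 180:
        return "laptop"
    return "flat"

# Each interaction mode carries the mode I/O features available while active.
MODE_IO_FEATURES = {
    "closed": set(),
    "laptop": {"keyboard", "touch"},
    "flat": {"touch", "stylus"},
}

def activate_interaction_mode(hinge_angle_deg: float) -> set:
    """Activate the interaction mode for the selected configuration profile
    and return its set of mode I/O features."""
    return MODE_IO_FEATURES[select_profile(hinge_angle_deg)]
```

Device interfaces would then be managed against the returned feature set, e.g. suppressing the on-screen keyboard while `"keyboard"` is available.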
-
Publication number: 20190251884
Abstract: In many computing scenarios, multiple users share a display to view and/or interact with content. Typically, one user provides input that interacts with the content; a second user can interact with the view only if the first user cedes control. Some interfaces permit split views, but typically support only a single input device that manipulates both panes. In the present disclosure, when a second user desires a different view of content during the first user's interaction, the device inserts a second view of the content into the display. Each view is associated with and manipulated by a particular user (e.g., via input devices associated with individual users) without altering the views of other users. The device may automatically manage the concurrent views, such as positioning and resizing; reflecting each user's perspective in other users' views; merging content changes; and terminating a view due to idleness or merging with another view.
Type: Application
Filed: February 14, 2018
Publication date: August 15, 2019
Inventors: Aaron Mackay Burns, John Benjamin HESKETH, Donna Katherine LONG, Jamie Ruth CABACCANG, Kathleen Patricia MULCAHY, Timothy David KVIZ
-
Publication number: 20190065026
Abstract: Examples are disclosed herein that relate to receiving virtual reality input. An example provides a head-mounted display device comprising a sensor system, a display, a logic machine, and a storage machine holding instructions executable by the logic machine. The instructions are executable to execute a 3D virtual reality experience on the head-mounted display device; track, via the sensor system, a touch-sensitive input device; render, on the display, in a 3D location in the 3D virtual reality experience based on the tracking of the touch-sensitive input device, a user interface; receive, via a touch sensor of the touch-sensitive input device, a user input; and, in response to receiving the user input, control the 3D virtual reality experience to thereby vary visual content being rendered on the display.
Type: Application
Filed: August 24, 2017
Publication date: February 28, 2019
Applicant: Microsoft Technology Licensing, LLC
Inventors: Kenneth Liam KIEMELE, Michael Robert THOMAS, Alexandre DA VEIGA, Christian Michael SADAK, Bryant Daniel HAWTHORNE, Aaron D. KRAUSS, Aaron Mackay BURNS
-
Publication number: 20180348518
Abstract: Tracking a user head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. Upon the camera position not aligning with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera moved in 1:1 alignment with the user's head to the next camera position.
Type: Application
Filed: June 5, 2017
Publication date: December 6, 2018
Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
Inventors: Alexandre DA VEIGA, Roger Sebastian Kevin SYLVAN, Kenneth Liam KIEMELE, Nikolai Michael FAALAND, Aaron Mackay BURNS
-
Patent number: 10139631
Abstract: Tracking a user head position detects a change to a new head position and, in response, a remote camera is instructed to move to a next camera position. A camera image frame, having an indication of camera position, is received from the camera. Upon the camera position not aligning with the next camera position, an assembled image frame is formed, using image data from past views, and rendered to appear to the user as if the camera moved in 1:1 alignment with the user's head to the next camera position.
Type: Grant
Filed: June 5, 2017
Date of Patent: November 27, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Alexandre Da Veiga, Roger Sebastian Kevin Sylvan, Kenneth Liam Kiemele, Nikolai Michael Faaland, Aaron Mackay Burns
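The fallback this abstract describes can be sketched as follows: when the frame returned by the remote camera was not captured at the pose the head tracker requested, an "assembled" frame is synthesized from previously cached views so the rendered result still appears 1:1 with the user's head. Representing poses as scalars, caching by pose, and the alignment tolerance are all simplifying assumptions; the patent's assembled frame would be composited from multiple past views rather than a single cached one.

```python
def frame_for_render(requested_pose: float, camera_frame, cache: dict, tolerance: float = 0.01):
    """Return the image to render for the requested head pose.

    camera_frame: (pose, image) as received from the remote camera,
    where pose is the camera's indicated position.
    cache: past views keyed by the pose they were captured at.
    """
    pose, image = camera_frame
    if abs(pose - requested_pose) <= tolerance:
        # Camera aligned with the next camera position: use and cache it.
        cache[pose] = image
        return image
    # Camera lags the head: assemble a frame from past views (here,
    # simply the cached view nearest to the requested pose).
    nearest = min(cache, key=lambda p: abs(p - requested_pose))
    return cache[nearest]
```

The effect is that rendering is driven by the head tracker's requested pose, never by whatever pose the laggy camera happens to report.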