Patents by Inventor Gregory Lutter
Gregory Lutter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12366917
Abstract: Various implementations disclosed herein include devices, systems, and methods for modifying display of virtual content. In some implementations, the method is performed by an electronic device including a non-transitory memory, one or more processors, a display, an eye tracker and a movement sensor. In some implementations, the method includes displaying, via the display, virtual content. In some implementations, the method includes detecting, via the eye tracker, a movement of a gaze in a first direction. In some implementations, the method includes detecting, via the movement sensor, a movement of a head in a second direction while detecting the movement of the gaze in the first direction. In some implementations, the method includes, in accordance with a determination that the movement of the gaze in the first direction and the movement of the head in the second direction are corresponding opposing movements, modifying a display characteristic of the virtual content.
Type: Grant
Filed: September 1, 2023
Date of Patent: July 22, 2025
Assignee: Apple Inc.
Inventors: Luis R. Deliz Centeno, Gregory Lutter
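The key determination in this abstract — that a gaze movement and a head movement are "corresponding opposing movements" — can be sketched as a sign test on the two motion vectors. This is an illustrative reconstruction, not the patented implementation; the function names, the noise floor `min_magnitude`, and the dimming behavior are all assumptions.

```python
def are_opposing_movements(gaze_delta, head_delta, min_magnitude=0.01):
    """Return True when gaze and head move in roughly opposite directions.

    gaze_delta, head_delta: (dx, dy) displacements sampled over the same
    time window. Movements below min_magnitude are ignored as noise.
    """
    gx, gy = gaze_delta
    hx, hy = head_delta
    # Both movements must be large enough to count as deliberate motion.
    if (gx * gx + gy * gy) < min_magnitude ** 2:
        return False
    if (hx * hx + hy * hy) < min_magnitude ** 2:
        return False
    # A negative dot product means the two directions oppose each other.
    return (gx * hx + gy * hy) < 0


def display_opacity(gaze_delta, head_delta, base_opacity=1.0):
    """One possible display-characteristic change (assumed): dim the
    virtual content while opposing gaze/head movement is detected."""
    return 0.3 if are_opposing_movements(gaze_delta, head_delta) else base_opacity
```

The dot-product test makes the check direction-agnostic: any pair of movements more than 90 degrees apart counts as opposing, regardless of axis.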
-
Patent number: 12333643
Abstract: In accordance with some embodiments, exemplary processes for changing (e.g., non-linearly) the size of a virtual object are described.
Type: Grant
Filed: February 14, 2023
Date of Patent: June 17, 2025
Assignee: Apple Inc.
Inventors: Rahul Nair, Gregory Lutter
-
Publication number: 20250147578
Abstract: Various implementations disclosed herein include devices, systems, and methods for using a gaze vector and head pose information to activate a display interface in an environment. In some implementations, a device includes a sensor for sensing a head pose of a user, a display, one or more processors, and a memory. In various implementations, a method includes displaying an environment comprising a field of view. Based on a gaze vector, it is determined that a gaze of the user is directed to a first location within the field of view. A head pose value corresponding to the head pose of the user is obtained. On a condition that the head pose value corresponds to a motion of the head of the user toward the first location, a user interface is displayed in the environment.
Type: Application
Filed: May 13, 2022
Publication date: May 8, 2025
Inventors: Thomas G. Salter, Bart Trzynadlowski, Bryce L. Schmidtchen, Devin W. Chalmers, Gregory Lutter
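The condition in this abstract — show the interface only when the head is also moving toward the gazed-at location — amounts to checking the alignment between the head-motion vector and the direction to the gaze target. A minimal sketch, assuming both are expressed in the same 2D view space; the function name and threshold are hypothetical.

```python
def should_show_interface(gaze_target, head_motion, threshold=0.2):
    """Decide whether to display a UI element at the gazed-at location.

    gaze_target: (x, y) offset of the gaze location from the view center.
    head_motion: (dx, dy) recent head displacement in the same view space.
    Returns True only when the head is moving toward the target, i.e. the
    cosine of the angle between the two vectors exceeds the threshold.
    """
    tx, ty = gaze_target
    dx, dy = head_motion
    t_len = (tx * tx + ty * ty) ** 0.5
    m_len = (dx * dx + dy * dy) ** 0.5
    if t_len == 0 or m_len == 0:
        # No gaze offset or no head motion: nothing to confirm.
        return False
    alignment = (tx * dx + ty * dy) / (t_len * m_len)
    return alignment > threshold
```

Requiring head motion as confirmation filters out incidental glances, so the interface does not appear every time the gaze merely passes over a location.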
-
Publication number: 20250117081
Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
Type: Application
Filed: December 16, 2024
Publication date: April 10, 2025
Inventors: Devin W. CHALMERS, William D. LINDMEIER, Gregory LUTTER, Jonathan C. MOISANT-THOMPSON, Rahul NAIR
-
Publication number: 20250111472
Abstract: Some examples of the disclosure are directed to systems and methods for changing a level of zoom of displayed content. In some examples, an electronic device displays visual content. In some examples, in response to detecting a change in position and/or orientation of the user of the electronic device, in accordance with a determination that the position and/or orientation of the user satisfies the one or more criteria, the electronic device increases the size of the content and/or zooms in on the content.
Type: Application
Filed: September 27, 2024
Publication date: April 3, 2025
Inventors: Gregory LUTTER, Ioana NEGOITA, Thomas G. SALTER
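The abstract's pattern — zoom in only once the user's change in position satisfies one or more criteria — can be sketched as a thresholded, clamped mapping from lean distance to zoom level. This is an illustrative sketch; the threshold, gain, and clamp values are assumptions, not values from the filing.

```python
def updated_zoom(current_zoom, lean_distance, lean_threshold=0.05,
                 gain=2.0, max_zoom=4.0):
    """Increase content zoom as the user leans toward the content.

    lean_distance: meters the user's head has moved toward the content
    since a baseline pose. The zoom only changes once the movement passes
    lean_threshold (the abstract's 'one or more criteria'), which keeps
    normal postural sway from triggering a zoom.
    """
    if lean_distance <= lean_threshold:
        return current_zoom
    zoom = current_zoom + gain * (lean_distance - lean_threshold)
    # Clamp so extreme leaning cannot zoom without bound.
    return min(zoom, max_zoom)
```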
-
Publication number: 20250106582
Abstract: Some examples of the disclosure are directed to systems and methods for dynamically updating simulated source locations of audio sources within spatialized audio content based on detection of a change in the location and/or orientation of an audio output device (e.g., earbuds or speakers that are optionally worn by a user and are optionally included in a headset). In some examples, the simulated source locations are updated if the change in the location and/or orientation of the audio output device is greater than a threshold amount of change, and the simulated source locations are not updated if the change in the location and/or orientation of the audio output device is not greater than the threshold amount of change. In some examples, a simulated source location is identified in a three-dimensional environment. In some examples, a mode of outputting audio is changed in response to movement of an electronic device.
Type: Application
Filed: September 18, 2024
Publication date: March 27, 2025
Inventors: Gregory LUTTER, Robert T. HELD, Luis R. DELIZ CENTENO, Sam D. SMITH, Sylvain J. CHOISEL, Shai MESSINGHER LANG
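The thresholded update described here — re-anchor simulated source locations only when the output device has moved more than a threshold amount — can be sketched directly. This is a simplified reconstruction handling translation only (the abstract also covers orientation); the threshold value and compensation scheme are assumptions.

```python
def maybe_update_sources(source_locations, device_delta, threshold=0.1):
    """Return simulated audio source locations, re-anchored to compensate
    for device movement only when that movement exceeds a threshold.

    source_locations: list of (x, y, z) source positions relative to the
    audio output device.
    device_delta: (dx, dy, dz) change in the device's location in meters.
    """
    dx, dy, dz = device_delta
    magnitude = (dx * dx + dy * dy + dz * dz) ** 0.5
    if magnitude <= threshold:
        # Sub-threshold movement (e.g., sensor jitter or small head
        # adjustments) leaves the simulated locations untouched.
        return source_locations
    # Larger movement shifts each source so it stays world-anchored.
    return [(x - dx, y - dy, z - dz) for (x, y, z) in source_locations]
```

The threshold acts as a hysteresis band: small pose noise does not cause the spatialized sources to drift, while genuine movement keeps them fixed in the world.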
-
Publication number: 20250106581
Abstract: Some examples of the disclosure are directed to systems and methods for dynamically updating simulated source locations of audio sources within spatialized audio content based on detection of a change in the location and/or orientation of an audio output device (e.g., earbuds or speakers that are optionally worn by a user and are optionally included in a headset). In some examples, the simulated source locations are updated if the change in the location and/or orientation of the audio output device is greater than a threshold amount of change, and the simulated source locations are not updated if the change in the location and/or orientation of the audio output device is not greater than the threshold amount of change. In some examples, a simulated source location is identified in a three-dimensional environment. In some examples, a mode of outputting audio is changed in response to movement of an electronic device.
Type: Application
Filed: September 18, 2024
Publication date: March 27, 2025
Inventors: Gregory LUTTER, Robert T. HELD, Luis R. DELIZ CENTENO
-
Publication number: 20250094016
Abstract: Some examples of the disclosure are directed to systems and methods for moving virtual objects in three-dimensional environments in accordance with detected movement of the electronic device. In some examples, the electronic device detects movement according to a first or second movement pattern described in more detail herein. In some examples, in response to detecting the first movement pattern, the electronic device applies a first correction factor to movement of a virtual object in the environment.
Type: Application
Filed: September 4, 2024
Publication date: March 20, 2025
Inventors: Ioana NEGOITA, Ian PERRY, Trent A. GREENE, Thomas J. MOORE, David LOEWENTHAL, Brian W. TEMPLE, Gregory LUTTER, Allison W. DRYER, Thomas G. SALTER
-
Publication number: 20250077066
Abstract: Some examples of the disclosure are directed to systems and methods for scrolling a user interface element. A user interface element including a plurality of selectable options scrollable along a first axis can be presented in an environment by a device. In some examples, the device detects an input corresponding to a request to scroll the user interface element. In accordance with a determination that the input satisfies one or more first criteria, including a criterion that is satisfied when the input includes movement of the device that corresponds to movement of a head of a user that exceeds a threshold amount of movement over a period of time, the device scrolls the user interface element in a first mode of operation. In accordance with a determination that the input satisfies one or more second criteria, the device scrolls the user interface element in a second mode of operation.
Type: Application
Filed: August 28, 2024
Publication date: March 6, 2025
Inventor: Gregory LUTTER
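The mode selection in this abstract hinges on whether head movement exceeds a threshold amount over a period of time. A minimal sketch of that branch; the mode labels, speed units, and threshold values are assumptions for illustration.

```python
def scroll_mode(head_speed, duration, speed_threshold=0.5, min_duration=0.25):
    """Pick a scrolling mode of operation from head movement.

    head_speed: average angular speed of the head (rad/s) over the
    measurement window.
    duration: length of the measurement window in seconds.
    Sustained movement above the speed threshold satisfies the first
    criteria and selects the first mode; anything else falls through to
    the second mode.
    """
    if head_speed > speed_threshold and duration >= min_duration:
        return "first"
    return "second"
```

Requiring the movement to persist for a minimum duration distinguishes a deliberate scrolling gesture from a brief head twitch.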
-
Patent number: 12222512
Abstract: A head-mounted device may determine contextual information by analyzing sensor data. The head-mounted device may use computer vision analysis to determine contextual information from images of the physical environment around the head-mounted device. Instead or in addition, the head-mounted device may determine contextual information by receiving state information directly from external equipment within the physical environment. Based on the received state information, the head-mounted device may display content, play audio, change a device setting on the head-mounted device, the external equipment, and/or additional external equipment, and/or may open an application. The head-mounted device may receive the state information in accordance with identifying the external equipment in images of the physical environment.
Type: Grant
Filed: June 21, 2023
Date of Patent: February 11, 2025
Assignee: Apple Inc.
Inventor: Gregory Lutter
-
Patent number: 12189848
Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
Type: Grant
Filed: December 22, 2023
Date of Patent: January 7, 2025
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, William D. Lindmeier, Gregory Lutter, Jonathan C. Moisant-Thompson, Rahul Nair
-
Publication number: 20240370082
Abstract: A head-mounted device may be used to control one or more external electronic devices. Gaze input and camera images may be used to determine a point of gaze relative to a display for an external electronic device. The external electronic device may receive information regarding the user's gaze input from the head-mounted device and may highlight one out of multiple user interface elements that is targeted by the gaze input. The head-mounted device may receive input such as keystroke information from an accessory device and relay the input to an external electronic device that is being viewed. The head-mounted device may receive a display configuration request, determine layout information for displays in the physical environment of the head-mounted device, and transmit the layout information to an external device associated with the displays.
Type: Application
Filed: April 24, 2024
Publication date: November 7, 2024
Inventors: Guilherme Klink, Andrew Muehlhausen, Gregory Lutter, Luis R. Deliz Centeno, Paulo R. Jansen dos Reis, Peter Burgner, Rahul Nair, Yutaka Yokokawa, Anshu K. Chimalamarri, Ashwin K. Vijay, Benjamin S. Phipps, Brandon K. Schmuck
-
Publication number: 20240319789
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine whether the user is reading text or intends an interaction with a portion of the text in order to initiate an interaction event during the presentation of the content. For example, an example process may include obtaining physiological data associated with an eye of a user during presentation of content, wherein the content includes text, determining whether the user is reading a portion of the text or intends an interaction with the portion of the text based on an assessment of the physiological data with respect to a reading characteristic, and in accordance with a determination that the user intends the interaction with the portion of the text, initiating an interaction event associated with the portion of the text.
Type: Application
Filed: June 3, 2024
Publication date: September 26, 2024
Inventors: Richard Ignatius Punsal LOZADA, Anshu K. CHIMALAMARRI, Bryce L. SCHMIDTCHEN, Gregory LUTTER, Paul EWERS, Peter BURGNER, Thomas J. MOORE, Trevor J. MCINTYRE
-
Publication number: 20240302898
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. The method includes, while displaying the first UI element, determining, based on eye tracking data, that a targeting criterion is satisfied with respect to the first selection region or the second selection region. The eye tracking data is associated with one or more eyes of a user of the electronic device. The method includes, while displaying the first UI element, selecting the first UI element based at least in part on determining that the targeting criterion is satisfied.
Type: Application
Filed: April 30, 2024
Publication date: September 12, 2024
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
-
Publication number: 20240256039
Abstract: In one implementation, a method is performed for selecting a UI element with eye tracking-based attention accumulators. The method includes: while a first UI element is currently selected, detecting a gaze direction directed to a second UI element; in response to detecting the gaze direction directed to the second UI element, decreasing a first attention accumulator value associated with the first UI element and increasing a second attention accumulator value associated with the second UI element; in accordance with a determination that the second attention accumulator value exceeds the first attention accumulator value, deselecting the first UI element and selecting the second UI element; and in accordance with a determination that the second attention accumulator value does not exceed the first attention accumulator value, maintaining selection of the first UI element.
Type: Application
Filed: April 10, 2024
Publication date: August 1, 2024
Inventors: Gregory Lutter, Bryce L. Schmidtchen
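This abstract describes a complete control loop: the gazed-at element's accumulator rises, the others decay, and selection transfers only when the challenger's accumulator exceeds the incumbent's. A minimal sketch of that loop; the class name, gain, and decay rates are illustrative assumptions.

```python
class AttentionSelector:
    """Eye-tracking-based UI selection via per-element attention accumulators.

    Each update, the element under the gaze gains attention and every
    other element decays toward zero. Selection transfers to another
    element only once its accumulator exceeds that of the currently
    selected element, which prevents flicker from brief glances.
    """

    def __init__(self, elements, gain=1.0, decay=0.5):
        self.values = {e: 0.0 for e in elements}
        self.selected = None
        self.gain = gain
        self.decay = decay

    def update(self, gazed_element, dt):
        """Advance accumulators by dt seconds; return the selected element."""
        for element in self.values:
            if element == gazed_element:
                self.values[element] += self.gain * dt
            else:
                self.values[element] = max(0.0, self.values[element] - self.decay * dt)
        if self.selected is None:
            self.selected = gazed_element
        elif (gazed_element is not None
              and self.values[gazed_element] > self.values[self.selected]):
            # Challenger overtakes incumbent: deselect old, select new.
            self.selected = gazed_element
        return self.selected
```

The accumulator acts as a dwell-time filter: a quick glance at another element nudges its value up but does not steal selection until sustained attention pushes it past the current element's value.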
-
Publication number: 20240248532
Abstract: In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in pose of at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change of pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
Type: Application
Filed: January 11, 2022
Publication date: July 25, 2024
Inventors: Thomas G. Salter, Brian W. Temple, Gregory Lutter
-
Patent number: 12026302
Abstract: A head-mounted device may use head pose changes for user input. In particular, a display in the head-mounted device may display a slider with an indicator. The slider may be a visual representation of a scalar quantity of a device setting such as volume or brightness. Based on head pose changes, the scalar quantity of the device setting and the position of the indicator on the slider may be updated. The direction of a head movement may correspond to the direction of movement of the indicator in the slider. The scalar quantity of a device setting may only be updated when gaze input from a user targets the slider. The slider may be displayed in response to gaze input targeting an icon associated with the slider.Type: Grant
Filed: April 4, 2023
Date of Patent: July 2, 2024
Assignee: Apple Inc.
Inventor: Gregory Lutter
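The control described here — a head-driven slider whose value only changes while gaze targets it — reduces to a gaze-gated, clamped update of a scalar setting. A minimal sketch under those assumptions; the sensitivity constant and [0, 1] range are hypothetical choices, not values from the patent.

```python
def update_slider(value, head_delta, gaze_on_slider, sensitivity=0.5):
    """Update a slider's scalar value (e.g., volume or brightness).

    value: current setting, normalized to [0, 1].
    head_delta: signed head rotation since the last frame; its sign sets
    the direction the slider indicator moves.
    gaze_on_slider: per the abstract, the setting only changes while the
    user's gaze targets the slider.
    """
    if not gaze_on_slider:
        # Head movement with gaze elsewhere leaves the setting untouched,
        # so ordinary looking around cannot change volume or brightness.
        return value
    new_value = value + sensitivity * head_delta
    # Clamp to the valid range of the setting.
    return max(0.0, min(1.0, new_value))
```

Gating on gaze is the disambiguation mechanism: head pose is a noisy, always-on signal, so it only counts as input while attention is demonstrably on the control.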
-
Publication number: 20240192773
Abstract: Some examples of the disclosure are directed to systems and methods for minimizing display of an object from a maximized state to a minimized state in a three-dimensional environment. In some examples, an electronic device presents a computer-generated environment that includes a virtual object displayed in a maximized state in the three-dimensional environment. While displaying the three-dimensional environment, the electronic device detects that a first event has occurred. In response to detecting that the first event has occurred, in accordance with a determination that the first event satisfies one or more criteria, the electronic device displays the virtual object in a minimized state in the three-dimensional environment from a viewpoint of a user of the electronic device. In accordance with a determination that the first event does not satisfy the one or more criteria, the electronic device maintains display of the virtual object in the maximized state from the viewpoint.
Type: Application
Filed: December 1, 2023
Publication date: June 13, 2024
Inventors: Gregory LUTTER, Peter BURGNER, Manuel C. CLEMENT, In Young YANG, Thomas G. SALTER, Alesha UNPINGCO, Lee SPARKS
-
Publication number: 20240192772
Abstract: Some examples of the disclosure are directed to systems and methods for transitioning an object between orientations in a three-dimensional environment. In some examples, an electronic device presents a three-dimensional environment including a virtual object that is displayed in a first orientation in the three-dimensional environment. While displaying the three-dimensional environment, the electronic device detects movement of a viewpoint of a user of the electronic device. In response to detecting the movement of the viewpoint, the electronic device moves the virtual object based on the movement of the viewpoint. In accordance with a determination that the movement of the viewpoint exceeds a threshold movement, the electronic device displays the virtual object in a second orientation in the three-dimensional environment.
Type: Application
Filed: November 29, 2023
Publication date: June 13, 2024
Inventors: Gregory LUTTER, Manuel C. CLEMENT
-
Publication number: 20240193892
Abstract: Some examples of the disclosure are directed to systems and methods for correlating rotation of a three-dimensional object to rotation of a viewpoint of a user. In some examples, an electronic device presents a computer-generated environment that includes an object. In some examples, while presenting the computer-generated environment, the electronic device detects an input that includes rotation of a viewpoint of a user of the electronic device relative to the computer-generated environment. In response to detecting the input, in accordance with a determination that the rotation of the viewpoint is in a first direction, the electronic device rotates the object in a first respective direction, based on the first direction, relative to the viewpoint. In accordance with a determination that the rotation of the viewpoint is in a second direction, the electronic device rotates the object in a second respective direction, based on the second direction, relative to the viewpoint.
Type: Application
Filed: November 29, 2023
Publication date: June 13, 2024
Inventors: Gregory LUTTER, Manuel C. CLEMENT