Patents by Inventor Anshu K. Chimalamarri
Anshu K. Chimalamarri has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250103196
Abstract: A method includes displaying a user interface (UI) including a first UI element that is associated with a first UI operation and a second UI element that is associated with a second UI operation. The method includes, in response to detecting a first gaze input directed to one of the first UI element and the second UI element, displaying a visual indicator at a fixed location that is separate from the first UI element and the second UI element. The method includes, in response to detecting, via an eye tracker, a second gaze input directed to the visual indicator: performing the first UI operation in response to the first gaze input being directed to the first UI element, and performing the second UI operation in response to the first gaze input being directed to the second UI element.
Type: Application. Filed: September 25, 2024. Publication date: March 27, 2025.
Inventors: Anshu K. Chimalamarri, Allison W. Dryer, Benjamin S. Phipps, Jessica Trinh, Nahckjoon Kim
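The two-step interaction the abstract describes — a first gaze targets an element, a second gaze at a separate indicator confirms it — can be sketched as follows. This is a hypothetical illustration only; the class and names (`GazeConfirmUI`, `on_gaze`, `"indicator"`) are not from the patent.

```python
# Hypothetical sketch of the two-step gaze-confirmation interaction.
# Names and structure are illustrative, not taken from the patent.

class GazeConfirmUI:
    """Performs a UI operation only after gaze moves to a confirmation indicator."""

    def __init__(self, operations):
        # operations: maps a UI element id -> callable UI operation
        self.operations = operations
        self.pending = None          # element targeted by the first gaze input
        self.indicator_visible = False

    def on_gaze(self, target):
        """Handle a gaze input directed at `target` (an element id or 'indicator')."""
        if target in self.operations:
            # First gaze input: remember the targeted element and display the
            # indicator at a fixed location separate from the UI elements.
            self.pending = target
            self.indicator_visible = True
            return None
        if target == "indicator" and self.pending is not None:
            # Second gaze input at the indicator: perform the operation
            # associated with the element the first gaze targeted.
            result = self.operations[self.pending]()
            self.pending = None
            self.indicator_visible = False
            return result
        return None

ui = GazeConfirmUI({"play": lambda: "playing", "stop": lambda: "stopped"})
ui.on_gaze("play")                  # first gaze: targets the 'play' element
print(ui.on_gaze("indicator"))      # second gaze confirms -> prints 'playing'
```

Routing confirmation through a fixed indicator avoids accidental activations from a single stray glance at an element.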
-
Publication number: 20250104549
Abstract: This relates generally to systems and methods of presenting extended reality environments and, more particularly, to displaying notifications while operating in different modes during presentation of an extended reality environment. In some situations, the electronic device detects an event that satisfies one or more first criteria, including a criterion that is satisfied when an interaction with a person is detected using at least an optical sensor. In some examples, in response to detecting the event, the electronic device transitions from a first mode to a second mode. In some examples, the electronic device receives a notification while operating in the second mode. In some examples, if one or more second criteria are satisfied, the electronic device does not present the notification while in the second mode.
Type: Application. Filed: September 23, 2024. Publication date: March 27, 2025.
Inventors: Thomas G. Salter, Jeffrey S. Norris, Christopher I. Word, Alexandria G. Heston, Thomas J. Moore, Michael J. Rockwell, Anshu K. Chimalamarri, Benjamin S. Phipps, Jessica Trinh, Sanjana Wadhwa
-
Patent number: 12236634
Abstract: In accordance with some implementations, a method is performed at an electronic device with one or more processors, a non-transitory memory, an image sensor, and a positional sensor. The method includes capturing image data of an eye using the image sensor. The method includes, while the electronic device is in a first position, determining a gaze vector based on the image data. The method includes detecting, based on positional sensor data from the positional sensor, a positional change of the electronic device from the first position to a second position. The method includes, in response to detecting the positional change, updating the gaze vector based on the positional sensor data. In some implementations, updating the gaze vector includes repositioning the gaze vector. In some implementations, updating the gaze vector includes increasing a targeting tolerance associated with the gaze vector.
Type: Grant. Filed: April 26, 2023. Date of Patent: February 25, 2025. Assignee: Apple Inc.
Inventor: Anshu K. Chimalamarri
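Both update strategies the abstract mentions — repositioning the gaze vector and widening its targeting tolerance — can be illustrated with a small sketch. The function name, 2D simplification, and the tolerance-scaling factor are assumptions for illustration, not details from the patent.

```python
# Hypothetical sketch: update a gaze vector after a device position change.
# The 2D rotation and the 0.1 tolerance factor are illustrative assumptions.
import math

def update_gaze(gaze_vector, rotation_deg, tolerance_deg):
    """Reposition the gaze vector by the device's detected rotation and
    widen the targeting tolerance to absorb the added uncertainty."""
    theta = math.radians(rotation_deg)
    x, y = gaze_vector
    # Reposition: rotate the gaze direction by the positional change.
    rx = x * math.cos(theta) - y * math.sin(theta)
    ry = x * math.sin(theta) + y * math.cos(theta)
    # Increase targeting tolerance in proportion to how far the device moved.
    new_tolerance = tolerance_deg + 0.1 * abs(rotation_deg)
    return (rx, ry), new_tolerance

(gx, gy), tol = update_gaze((1.0, 0.0), 90.0, 2.0)
print(round(gx, 6), round(gy, 6), tol)   # 0.0 1.0 11.0
```

Widening the tolerance lets targeting keep working even when the repositioned vector is only approximately correct.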
-
Publication number: 20240393996
Abstract: A head-mounted device may be used to perform a visual search on a physical environment around the head-mounted device. A user may wish to visually search one out of multiple physical objects in the physical environment. To clearly show the user which physical object was the target of a visual search, the head-mounted device may present a thumbnail of the candidate physical object on a display. The thumbnail may be cropped and/or zoomed using an image from a camera on the head-mounted device. By displaying a thumbnail that is taken by a camera on the head-mounted device, the thumbnail will directly match the physical object in the user's physical environment, eliminating ambiguity regarding which physical object is the subject of the visual search.
Type: Application. Filed: April 19, 2024. Publication date: November 28, 2024.
Inventors: Thomas J. Moore, Alesha Unpingco, Anshu K. Chimalamarri, Ashwin K. Vijay, Christopher D. Fu, Guilherme Klink, In Young Yang, Paul Ewers, Paulo R. Jansen dos Reis, Peter Burgner, Thomas G. Salter, Tigran Khachatryan
-
Publication number: 20240370082
Abstract: A head-mounted device may be used to control one or more external electronic devices. Gaze input and camera images may be used to determine a point of gaze relative to a display for an external electronic device. The external electronic device may receive information regarding the user's gaze input from the head-mounted device and may highlight one out of multiple user interface elements that is targeted by the gaze input. The head-mounted device may receive input such as keystroke information from an accessory device and relay the input to an external electronic device that is being viewed. The head-mounted device may receive a display configuration request, determine layout information for displays in the physical environment of the head-mounted device, and transmit the layout information to an external device associated with the displays.
Type: Application. Filed: April 24, 2024. Publication date: November 7, 2024.
Inventors: Guilherme Klink, Andrew Muehlhausen, Gregory Lutter, Luis R. Deliz Centeno, Paulo R. Jansen dos Reis, Peter Burgner, Rahul Nair, Yutaka Yokokawa, Anshu K. Chimalamarri, Ashwin K. Vijay, Benjamin S. Phipps, Brandon K. Schmuck
-
Publication number: 20240338104
Abstract: A drive unit for driving a load, such as a centrifugal compressor or a pump, comprises a driving shaft that is connected to the load to be driven. The drive unit comprises a plurality of electric motors connected to the driving shaft and a plurality of variable frequency drives electrically connected to the power grid (G) used to feed each electric motor.
Type: Application. Filed: January 13, 2022. Publication date: October 10, 2024.
Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers
-
Publication number: 20240338160
Abstract: Various implementations disclosed herein include devices, systems, and methods for displaying presentation notes at varying positions within a presenter's field of view. In some implementations, a device includes a display, one or more processors, and a memory. A first portion of a media content item corresponding to a presentation is displayed at a first location in a three-dimensional environment. Audience engagement data corresponding to an engagement level of a member of an audience is received. A second portion of the media content item is displayed at a second location in the three-dimensional environment. The second location is selected based on the audience engagement data.
Type: Application. Filed: August 22, 2022. Publication date: October 10, 2024.
Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Brian W. Temple, Paul Ewers
-
Publication number: 20240319789
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine whether the user is reading text or intends an interaction with a portion of the text, in order to initiate an interaction event during the presentation of the content. For example, an example process may include obtaining physiological data associated with an eye of a user during presentation of content, wherein the content includes text; determining whether the user is reading a portion of the text or intends an interaction with the portion of the text based on an assessment of the physiological data with respect to a reading characteristic; and, in accordance with a determination that the user intends the interaction with the portion of the text, initiating an interaction event associated with the portion of the text.
Type: Application. Filed: June 3, 2024. Publication date: September 26, 2024.
Inventors: Richard Ignatius Punsal Lozada, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Gregory Lutter, Paul Ewers, Peter Burgner, Thomas J. Moore, Trevor J. McIntyre
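One plausible "reading characteristic" is fixation behavior: reading produces short fixations advancing across a line, while a long dwell on one spot suggests intent to interact. The sketch below illustrates that idea only; the thresholds and data format are assumptions, not values from the patent.

```python
# Hypothetical sketch: classify gaze over text as reading vs. intent to interact.
# The 600 ms dwell threshold and sample format are illustrative assumptions.

def classify_gaze(fixations):
    """fixations: list of (duration_ms, horizontal_step) gaze samples.
    Short fixations with steady horizontal steps look like reading;
    a long dwell with no movement looks like intent to interact."""
    DWELL_MS = 600
    for duration, step in fixations:
        if duration >= DWELL_MS and abs(step) < 0.01:
            return "interact"
    return "reading"

print(classify_gaze([(180, 0.05), (200, 0.04), (210, 0.05)]))  # reading
print(classify_gaze([(180, 0.05), (750, 0.0)]))                # interact
```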
-
Publication number: 20240302898
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, and a display. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. The method includes, while displaying the first UI element, determining, based on eye tracking data, that a targeting criterion is satisfied with respect to the first selection region or the second selection region. The eye tracking data is associated with one or more eyes of a user of the electronic device. The method includes, while displaying the first UI element, selecting the first UI element based at least in part on determining that the targeting criterion is satisfied.
Type: Application. Filed: April 30, 2024. Publication date: September 12, 2024.
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
-
Patent number: 12008160
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
Type: Grant. Filed: March 6, 2023. Date of Patent: June 11, 2024. Assignee: Apple Inc.
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
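The granted claim requires targeting criteria to be satisfied for both of an element's selection regions before the element is selected. A minimal sketch of that conjunction, with an assumed hit-count criterion and rectangular regions (neither is specified in the patent):

```python
# Hypothetical sketch of the dual-region targeting check described above.
# The hit-count criterion and rectangle regions are illustrative assumptions.

def element_selected(samples, region_a, region_b, min_hits=3):
    """Select the element only when the eye tracking samples satisfy a
    targeting criterion for BOTH of its selection regions."""
    def hits(region):
        x0, y0, x1, y1 = region
        return sum(1 for (x, y) in samples if x0 <= x <= x1 and y0 <= y <= y1)
    return hits(region_a) >= min_hits and hits(region_b) >= min_hits

samples = [(1, 1), (1, 2), (2, 1), (5, 5), (5, 6), (6, 5)]
print(element_selected(samples, (0, 0, 3, 3), (4, 4, 7, 7)))  # True
```

Requiring both regions makes selection more robust: a noisy gaze that clips only one region does not trigger it.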
-
Publication number: 20240103678
Abstract: The present disclosure generally relates to user interfaces for electronic devices, including user interfaces for navigating between and/or interacting with extended reality user interfaces.
Type: Application. Filed: September 15, 2023. Publication date: March 28, 2024.
Inventors: Allison W. Dryer, Anshu K. Chimalamarri, Giancarlo Yerkes, Nahckjoon Kim, Stephen O. Lemay, Jessica Trinh
-
Publication number: 20240019928
Abstract: Various implementations disclosed herein include devices, systems, and methods for using a gaze vector and head pose information to effectuate a user interaction with a virtual object. In some implementations, a device includes a sensor for sensing a head pose of a user, a display, one or more processors, and a memory. In various implementations, a method includes displaying a set of virtual objects. Based on a gaze vector, it is determined that a gaze of the user is directed to a first virtual object of the set of virtual objects. A head pose value corresponding to the head pose of the user is obtained. An action relative to the first virtual object is performed based on the head pose value satisfying a head pose criterion.
Type: Application. Filed: September 28, 2023. Publication date: January 18, 2024.
Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers, Gregory Lutter
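The division of labor described here — gaze picks the target, a head pose gesture triggers the action — can be sketched as below. The nod threshold, the nearest-direction gaze test, and all names are hypothetical illustrations, not details from the patent.

```python
# Hypothetical sketch: gaze selects the virtual object, a head pose value
# (here, a downward nod past an assumed threshold) triggers the action.

def handle_input(gaze_vector, objects, head_pitch_deg, nod_threshold_deg=15.0):
    """Return the action performed, or None. `objects` maps a name to the
    unit direction vector of a displayed virtual object."""
    def dot(a, b):
        return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
    # Gaze targeting: the object whose direction best aligns with the gaze vector.
    target = max(objects, key=lambda name: dot(gaze_vector, objects[name]))
    # Head pose criterion: act only when the nod exceeds the threshold.
    if head_pitch_deg >= nod_threshold_deg:
        return f"activate {target}"
    return None

objs = {"button": (0.0, 0.0, -1.0), "slider": (0.7, 0.0, -0.7)}
print(handle_input((0.1, 0.0, -0.99), objs, head_pitch_deg=20.0))  # activate button
```

Splitting targeting (gaze) from confirmation (head pose) avoids the classic "Midas touch" problem of gaze-only interfaces, where everything looked at gets activated.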
-
Publication number: 20230333643
Abstract: A method is performed at an electronic device with one or more processors, a non-transitory memory, an eye tracker, and a display. The eye tracker receives eye tracking data associated with one or more eyes of a user of the electronic device. The method includes displaying, on the display, a first user interface (UI) element that is associated with a first selection region and a second selection region. While displaying the first UI element, the method includes determining, based on the eye tracking data, that a first targeting criterion is satisfied with respect to the first selection region, and determining, based on the eye tracking data, that a second targeting criterion is satisfied with respect to the second selection region. The method includes selecting the first UI element based at least in part on determining that the first targeting criterion is satisfied and the second targeting criterion is satisfied.
Type: Application. Filed: March 6, 2023. Publication date: October 19, 2023.
Inventors: Bryce L. Schmidtchen, Ioana Negoita, Anshu K. Chimalamarri, Gregory Lutter, Thomas J. Moore, Trevor J. McIntyre
-
Publication number: 20230333642
Abstract: A method includes displaying a plurality of visual elements. The method includes determining, based on respective characteristic values of the plurality of visual elements, an expected gaze target that indicates a first display region where a user of the device intends to gaze while the plurality of visual elements is being displayed. The method includes obtaining, via the image sensor, an image that includes a set of pixels corresponding to a pupil of the user of the device. The method includes determining, by a gaze tracker, based on the set of pixels corresponding to the pupil, a measured gaze target that indicates a second display region where the user is measured to be gazing. The method includes adjusting a calibration parameter of the gaze tracker based on a difference between the first display region indicated by the expected gaze target and the second display region indicated by the measured gaze target.
Type: Application. Filed: March 1, 2023. Publication date: October 19, 2023.
Inventors: Anshu K. Chimalamarri, Bryce L. Schmidtchen, Gregory Lutter
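The final step — adjusting a calibration parameter from the difference between the expected and measured gaze targets — can be sketched as a simple offset nudge. The offset representation and the learning rate are assumptions for illustration; the patent does not specify the calibration model.

```python
# Hypothetical sketch: nudge a gaze tracker's (dx, dy) calibration offset
# toward the error between the expected and measured gaze targets.
# The offset model and the 0.5 rate are illustrative assumptions.

def adjust_calibration(offset, expected, measured, rate=0.5):
    """Move the offset a fraction of the way toward the difference between
    where the user was expected to gaze and where gaze was measured."""
    ex, ey = expected
    mx, my = measured
    err = (ex - mx, ey - my)
    return (offset[0] + rate * err[0], offset[1] + rate * err[1])

offset = adjust_calibration((0.0, 0.0), expected=(100, 40), measured=(96, 44))
print(offset)  # (2.0, -2.0)
```

Using a fractional rate rather than the full error smooths out noise in any single expected-vs-measured comparison, so calibration drifts toward the truth over repeated observations.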