Patents by Inventor Devin W. Chalmers
Devin W. Chalmers has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250147578
Abstract: Various implementations disclosed herein include devices, systems, and methods for using a gaze vector and head pose information to activate a display interface in an environment. In some implementations, a device includes a sensor for sensing a head pose of a user, a display, one or more processors, and a memory. In various implementations, a method includes displaying an environment comprising a field of view. Based on a gaze vector, it is determined that a gaze of the user is directed to a first location within the field of view. A head pose value corresponding to the head pose of the user is obtained. On a condition that the head pose value corresponds to a motion of the head of the user toward the first location, a user interface is displayed in the environment.
Type: Application
Filed: May 13, 2022
Publication date: May 8, 2025
Inventors: Thomas G. Salter, Bart Trzynadlowski, Bryce L. Schmidtchen, Devin W. Chalmers, Gregory Lutter
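The activation condition this abstract describes — show the interface only when the head is also moving toward the gazed-at location — could be sketched roughly as follows. The vector representation, the angle test, and the 30-degree threshold are all illustrative assumptions, not details from the filing.

```python
import math

def angle_between(u, v):
    """Angle in degrees between two 3D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(a * a for a in v))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (nu * nv)))))

def should_display_interface(to_gaze_target, head_motion, threshold_deg=30.0):
    """Display the UI only when the head's motion direction roughly
    points toward the location the gaze is directed at."""
    return angle_between(to_gaze_target, head_motion) <= threshold_deg
```

Gaze alone would trigger on any glance; gating on a corroborating head motion is what the abstract presents as the activation condition.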
-
Patent number: 12294812
Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
Type: Grant
Filed: April 23, 2024
Date of Patent: May 6, 2025
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, Jae Hwang Lee
-
Publication number: 20250111614
Abstract: Various multilayer handling techniques for head-mounted display devices may smooth motion of a viewing area that results from head movement, may restrict the viewing area to a defined display boundary, and may variously apply different motion criteria to the content and the viewing area of the head-mounted display devices. This may shift the content and the viewing area of the head-mounted display devices differently, which may in turn cause the content presentation to appear less shaky than if the content were fully head-locked, resulting in a more pleasant and usable viewing experience.
Type: Application
Filed: September 18, 2024
Publication date: April 3, 2025
Inventors: Elena J. Nattinger, Devin W. Chalmers, Anna L. Brewer
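The smoothing-plus-boundary behavior described here resembles a "lazy follow": each frame the viewing area moves only a fraction toward the head pose and is clamped to the display boundary. The one-dimensional model, the smoothing factor, and the boundary values below are illustrative assumptions.

```python
def smooth_follow(current, target, alpha=0.15, bound=(-10.0, 10.0)):
    """One smoothing step: move the viewing area a fraction `alpha`
    toward the head-pose target, then clamp it to the display boundary."""
    new = current + alpha * (target - current)
    lo, hi = bound
    return max(lo, min(hi, new))
```

Content drawn inside the viewing area can use a different (e.g. stiffer) criterion, which is the "different motion criteria" the abstract attributes to the two layers.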
-
Publication number: 20250111163
Abstract: A head-mounted device may include one or more cameras that detect text in a physical environment surrounding the head-mounted device. The head-mounted device may send information regarding the text in the physical environment, contextual information, response length parameters, and/or user questions associated with the text in the physical environment to a trained model. The trained model may be a large language model. The head-mounted device may receive a text summary from the trained model that is based on the information regarding the text, contextual information, response length parameters, and user questions. The head-mounted device may present the text summary on one or more displays.
Type: Application
Filed: August 8, 2024
Publication date: April 3, 2025
Inventors: Anna L. Brewer, Anshu K. Chimalamarri, Devin W. Chalmers, Thomas G. Salter
-
Publication number: 20250111471
Abstract: Embodiments disclosed herein are directed to devices, systems, and methods for presenting a magnified view in an extended reality environment. Specifically, a magnified view includes a zoom reticle that is presented at a display location of a display. The zoom reticle includes magnified content that includes a magnified portion of a user's field of view. For example, the magnified content may be generated from image data selected from a corresponding portion of a field of view of a camera. The position of the zoom reticle on the display, as well as the portion of the field of view that is magnified, may vary in different circumstances such as described herein.
Type: Application
Filed: September 5, 2024
Publication date: April 3, 2025
Inventors: Elena J. Nattinger, Michael J. Rockwell, Christopher I. Word, Devin W. Chalmers, Paulo R. Jansen dos Reis, Paul Ewers, Peter Burgner, Anna L. Brewer, Jeffrey S. Norris, Allison W. Dryer, Andrew Muehlhausen, Luis R. Deliz Centeno, Thomas J. Moore, Alesha Unpingco, Thomas G. Salter
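Generating reticle content "from image data selected from a corresponding portion of a field of view of a camera" amounts to computing a crop whose size is the reticle size divided by the zoom factor. The function below is a minimal sketch of that mapping; the pixel-space convention and parameter names are assumptions.

```python
def reticle_crop(center, reticle_size, zoom, image_size):
    """Compute the camera-image crop that, scaled up by `zoom`, fills a
    reticle of `reticle_size` centered on `center` (all in image pixels).
    Returns (left, top, right, bottom), clamped to the image bounds."""
    cx, cy = center
    w = reticle_size[0] / zoom
    h = reticle_size[1] / zoom
    left = max(0, int(cx - w / 2))
    top = max(0, int(cy - h / 2))
    right = min(image_size[0], int(cx + w / 2))
    bottom = min(image_size[1], int(cy + h / 2))
    return left, top, right, bottom
```

Moving the crop center as the user's gaze or head moves gives the varying magnified portion the abstract mentions.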
-
Publication number: 20250093642
Abstract: In a head-mounted device, position and motion sensors may be included to determine the orientation of the head-mounted device. A motion sensor may experience error that accumulates over time, sometimes referred to as drift. To mitigate the effect of drift in a motion sensor, a reference orientation for the motion sensor may be reset when a qualifying motion is detected. The qualifying motion may be detected using one or more criteria such as a total change in angular orientation or rate of change in angular orientation. The reference orientation for the motion sensor may also be reset when a duration of time elapses without a qualifying motion being detected.
Type: Application
Filed: August 8, 2024
Publication date: March 20, 2025
Inventors: Luis R. Deliz Centeno, Devin W. Chalmers
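The reset logic described — reset the reference on a qualifying motion (total angular change or angular rate exceeding a criterion) or after a timeout with no qualifying motion — could be sketched as below. The threshold values, the one-axis orientation model, and the class name are illustrative assumptions.

```python
class DriftMitigator:
    """Reset an IMU reference orientation on qualifying motion or timeout."""

    def __init__(self, angle_threshold=20.0, rate_threshold=60.0, timeout=30.0):
        self.angle_threshold = angle_threshold  # degrees of total change
        self.rate_threshold = rate_threshold    # degrees per second
        self.timeout = timeout                  # seconds without reset
        self.reference = 0.0
        self.last_reset = 0.0

    def update(self, t, orientation, angular_rate):
        """Return True when the reference orientation was reset at time t."""
        qualifying = (abs(orientation - self.reference) >= self.angle_threshold
                      or abs(angular_rate) >= self.rate_threshold)
        if qualifying or (t - self.last_reset) >= self.timeout:
            self.reference = orientation
            self.last_reset = t
            return True
        return False
```

Resetting on large, deliberate motions keeps accumulated drift from ever exceeding roughly one threshold's worth of error between resets.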
-
Patent number: 12249033
Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with virtual objects in an extended reality environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects in an extended reality environment, including repositioning virtual objects relative to the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, in an extended reality environment, including virtual objects that aid a user in navigating within the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, including objects displayed based on changes in a field-of-view of a user, in an extended reality environment, including repositioning virtual objects relative to the environment.
Type: Grant
Filed: September 6, 2023
Date of Patent: March 11, 2025
Assignee: Apple Inc.
Inventors: Yiqiang Nie, Giovanni Agnoli, Devin W. Chalmers, Allison W. Dryer, Thomas G. Salter, Giancarlo Yerkes
-
Publication number: 20250060821
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Application
Filed: November 4, 2024
Publication date: February 20, 2025
Inventors: Grant H. Mulliken, Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Holly Gerhard, Lilli I. Jonsson
-
Publication number: 20250045324
Abstract: In one implementation, a method of storing object information in association with contextual information is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment. The method includes detecting a user engagement with an object in the environment based on the image of the environment. The method includes, in response to detecting the user engagement with the object, obtaining information regarding the object, obtaining contextual information, and storing, in a database, an entry including the information regarding the object in association with the contextual information.
Type: Application
Filed: September 16, 2022
Publication date: February 6, 2025
Inventors: Christopher D. Fu, Devin W. Chalmers, Matthias Dantone, Paulo R. Jansen dos Reis
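The final step — storing a database entry that pairs object information with contextual information — could look like the sketch below. The schema, table name, and example strings are assumptions for illustration; the filing does not specify a storage format.

```python
import sqlite3
import time

def store_engagement(conn, object_info, context):
    """Insert one entry pairing object information with its context."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS engagements (ts REAL, object TEXT, context TEXT)"
    )
    conn.execute(
        "INSERT INTO engagements VALUES (?, ?, ?)",
        (time.time(), object_info, context),
    )
    conn.commit()

# Hypothetical usage: the device detected engagement with the user's keys.
conn = sqlite3.connect(":memory:")
store_engagement(conn, "keys", "on the kitchen counter, 8:04 AM")
rows = conn.execute("SELECT object, context FROM engagements").fetchall()
```

Such entries would later support queries like "where did I last see my keys?", which is the retrieval side implied by storing object and context together.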
-
Patent number: 12189848
Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
Type: Grant
Filed: December 22, 2023
Date of Patent: January 7, 2025
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, William D. Lindmeier, Gregory Lutter, Jonathan C. Moisant-Thompson, Rahul Nair
-
Publication number: 20250005873
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data in a physical environment that includes one or more objects. The method may further include detecting a reflection of a first object of the one or more objects upon a reflective surface of a reflective object based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment based on determining a 3D position of the reflection of the first object. The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
Type: Application
Filed: September 10, 2024
Publication date: January 2, 2025
Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
-
Publication number: 20240412516
Abstract: In one implementation, a method of tracking contexts is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment at a particular time. The method includes detecting a context based at least in part on the image of the environment. The method includes, in accordance with a determination that the context is included within a predefined set of contexts, storing, in a database, an entry including data indicating detection of the context in association with data indicating the particular time. The method includes receiving a query regarding the context. The method includes providing a response to the query based on the data indicating the particular time.
Type: Application
Filed: September 16, 2022
Publication date: December 12, 2024
Inventors: Elizabeth V. Petrov, Devin W. Chalmers, Ioana Negoita
-
Patent number: 12164687
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Grant
Filed: August 6, 2021
Date of Patent: December 10, 2024
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Grant H. Mulliken, Holly E. Gerhard, Lilli I. Jonsson
-
Publication number: 20240402798
Abstract: Systems and methods for controlling an electronic device using the gaze of a user. Movement of the gaze of the user to activation regions within a gaze field of view may activate a function of the electronic device. The activation regions may be dynamically modified to prevent accidental triggering of functions associated therewith.
Type: Application
Filed: May 16, 2024
Publication date: December 5, 2024
Inventors: Elena J. Nattinger, Devin W. Chalmers, Trent A. Greene, Luis R. Deliz Centeno, Robert T. Held, Allison W. Dryer
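Gaze-driven activation regions that can be dynamically disabled might be modeled as below. The rectangular region shapes, the suppression set, and the function names are all illustrative assumptions, not details from the filing.

```python
def active_function(gaze_xy, regions, suppressed):
    """Return the function name for the activation region the gaze is in,
    skipping regions currently suppressed (dynamically disabled to prevent
    accidental triggering). Regions map name -> (x0, y0, x1, y1)."""
    x, y = gaze_xy
    for name, (x0, y0, x1, y1) in regions.items():
        if name in suppressed:
            continue
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

# Hypothetical layout: two regions in gaze-space coordinates.
regions = {"volume_up": (0, 0, 1, 1), "skip_track": (2, 0, 3, 1)}
```

Suppressing a region, for example while the user is reading near it, is one way to realize the "dynamically modified" behavior the abstract describes.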
-
Publication number: 20240393919
Abstract: Systems and methods for facilitating selection of a target object required by a request from a user from a set of candidate objects include identifying the set of candidate objects in a gaze region corresponding to a gaze of the user and generating a graphical user interface allowing a user to select the target object from the set of candidate objects.
Type: Application
Filed: May 16, 2024
Publication date: November 28, 2024
Inventors: Andrew Muehlhausen, Elena J. Nattinger, Devin W. Chalmers, Paul Ewers, Paulo R. Jansen dos Reis, Peter Burgner, Christopher D. Fu, Richard P. Lozada
-
Publication number: 20240353891
Abstract: Various implementations disclosed herein include devices, systems, and methods for associating chronology with a physical article. In some implementations, a device includes a display, one or more processors, and a memory. The method may include presenting an environment comprising a representation of a physical article. An amount of time since a previous event associated with the physical article may be monitored. An indicator of the amount of time may be displayed proximate the representation of the physical article.
Type: Application
Filed: July 26, 2022
Publication date: October 24, 2024
Inventor: Devin W. Chalmers
-
Publication number: 20240354177
Abstract: An electronic device that is in communication with one or more wearable audio output devices detects occurrence of one or more first events while the one or more wearable audio output devices are being worn by a user. In response, the electronic device outputs, via the one or more wearable audio output devices, audio content corresponding to the one or more first events. After outputting the audio content corresponding to the one or more first events, the electronic device detects movement of the one or more wearable audio output devices, and in response to detecting the movement, and in accordance with a determination that a first movement of the one or more wearable audio output devices meets first movement criteria, outputs, via the one or more wearable audio output devices, additional audio content corresponding to one or more events.
Type: Application
Filed: April 17, 2024
Publication date: October 24, 2024
Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
-
Publication number: 20240338104
Abstract: A drive unit for driving a load, like a centrifugal compressor, a pump, or the like, comprising a driving shaft, is connected to the load to be driven. The drive unit comprises a plurality of electric motors connected to the driving shaft and a plurality of variable frequency drives electrically connected to the power grid (G) used to feed each electric motor.
Type: Application
Filed: January 13, 2022
Publication date: October 10, 2024
Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers
-
Patent number: 12112441
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
Type: Grant
Filed: June 27, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
-
Publication number: 20240256215
Abstract: A method performed by an audio system comprising a headset. The method sends a playback signal containing user-desired audio content to drive a speaker of the headset that is being worn by a user, receives a microphone signal from a microphone that is arranged to capture sounds within an ambient environment in which the user is located, performs a speech detection algorithm upon the microphone signal to detect speech contained therein, in response to a detection of speech, determines that the user intends to engage in a conversation with a person who is located within the ambient environment, and, in response to determining that the user intends to engage in the conversation, adjusts the playback signal based on the user-desired audio content.
Type: Application
Filed: October 16, 2023
Publication date: August 1, 2024
Inventors: Christopher T. Eubank, Devin W. Chalmers, Kirill Kalinichev, Rahul Nair, Thomas G. Salter
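The decision step at the end of this abstract — adjust playback once speech is detected and conversational intent is determined — could be sketched as a simple ducking rule. The duck level and the boolean intent signal are illustrative assumptions; the filing leaves the specific adjustment open.

```python
def adjust_playback(volume, speech_detected, intends_conversation, duck_to=0.2):
    """Duck (lower) the playback volume when ambient speech is detected
    and the user appears to intend a conversation; otherwise leave the
    user-desired playback level untouched."""
    if speech_detected and intends_conversation:
        return min(volume, duck_to)
    return volume
```

Gating on intent rather than on speech alone avoids ducking for every background voice, which is the distinction the abstract draws between detection and the determination step.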