Patents by Inventor Devin W. Chalmers

Devin W. Chalmers has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250117081
    Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
    Type: Application
    Filed: December 16, 2024
    Publication date: April 10, 2025
    Inventors: Devin W. Chalmers, William D. Lindmeier, Gregory Lutter, Jonathan C. Moisant-Thompson, Rahul Nair
  • Publication number: 20250111614
    Abstract: Various multilayer handling techniques for head-mounted display devices may smooth motion of a viewing area that results from head movement, may restrict the viewing area to a defined display boundary, and may variously apply different motion criteria to the content and the viewing area of the head-mounted display devices. This may shift the content and the viewing area of the head-mounted display devices differently, which may in turn cause the content presentation to appear less shaky than if the content were fully head-locked, resulting in a more pleasant and usable viewing experience.
    Type: Application
    Filed: September 18, 2024
    Publication date: April 3, 2025
    Inventors: Elena J. Nattinger, Devin W. Chalmers, Anna L. Brewer
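The multilayer handling described in this abstract (a viewing area that follows head motion through a smoothing filter rather than being fully head-locked, then is restricted to a display boundary) can be illustrated with a minimal sketch. The filter constant, coordinate range, and function name here are illustrative assumptions, not details from the patent:

```python
def smooth_and_clamp(prev_pos, head_pos, alpha=0.2, bounds=(-1.0, 1.0)):
    """Move the viewing area only part of the way toward the current head
    pose (exponential smoothing), then restrict it to a defined display
    boundary. A sketch of the abstract's idea, not the patented method."""
    lo, hi = bounds
    # Smoothing: follow head motion gradually rather than locking to it.
    smoothed = prev_pos + alpha * (head_pos - prev_pos)
    # Restrict the viewing area to the display boundary.
    return max(lo, min(hi, smoothed))
```

Because the viewing area lags the head pose, small head jitters shift it only slightly, which is the "less shaky than fully head-locked" behavior the abstract describes.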
  • Publication number: 20250111163
    Abstract: A head-mounted device may include one or more cameras that detect text in a physical environment surrounding the head-mounted device. The head-mounted device may send information regarding the text in the physical environment, contextual information, response length parameters, and/or user questions associated with the text in the physical environment to a trained model. The trained model may be a large language model. The head-mounted device may receive a text summary from the trained model that is based on the information regarding the text, contextual information, response length parameters, and user questions. The head-mounted device may present the text summary on one or more displays.
    Type: Application
    Filed: August 8, 2024
    Publication date: April 3, 2025
    Inventors: Anna L. Brewer, Anshu K. Chimalamarri, Devin W. Chalmers, Thomas G. Salter
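The abstract above describes the inputs sent to the trained model: detected text, contextual information, response length parameters, and user questions. A hypothetical sketch of assembling such a request follows; all field and function names are assumptions for illustration, not the patent's actual interface:

```python
def build_summary_request(detected_text, context=None, max_words=None,
                          user_question=None):
    """Assemble a request for a trained model (e.g., a large language
    model) from text detected by the head-mounted device's cameras.
    Optional fields are included only when provided."""
    request = {"text": detected_text}
    if context is not None:
        request["context"] = context          # contextual information
    if max_words is not None:
        request["max_response_words"] = max_words  # response length parameter
    if user_question is not None:
        request["question"] = user_question   # user question about the text
    return request
```

The model's text summary would then be presented on the device's displays, as the abstract describes.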
  • Publication number: 20250111626
    Abstract: Some examples of the disclosure are directed to systems and methods for presenting content associated with a real-world user interface in an environment. A user interface of a first object in a physical environment can be detected by an electronic device. In some examples, in response to detecting the user interface of the first object, in accordance with one or more criteria being satisfied, the electronic device presents content associated with the user interface of the first object in the environment independent of a location of the first object in the physical environment. In some examples, the content associated with the user interface of the first object includes a timer. In some examples, the content associated with the user interface of the first object includes information corresponding to video content. The environment is optionally a computer-generated environment, and the electronic device optionally includes a head-mounted display.
    Type: Application
    Filed: September 23, 2024
    Publication date: April 3, 2025
    Inventors: Luis R. Deliz Centeno, Devin W. Chalmers, Thomas G. Salter, Michael J. Rockwell, Christopher I. Word, Jeffrey S. Norris
  • Publication number: 20250111471
    Abstract: Embodiments disclosed herein are directed to devices, systems, and methods for presenting a magnified view in an extended reality environment. Specifically, a magnified view includes a zoom reticle that is presented at a display location of a display. The zoom reticle includes magnified content that includes a magnified portion of a user's field of view. For example, the magnified content may be generated from image data selected from a corresponding portion of a field of view of a camera. The position of the zoom reticle on the display, as well as the portion of the field of view that is magnified, may vary in different circumstances such as described herein.
    Type: Application
    Filed: September 5, 2024
    Publication date: April 3, 2025
    Inventors: Elena J. Nattinger, Michael J. Rockwell, Christopher I. Word, Devin W. Chalmers, Paulo R. Jansen dos Reis, Paul Ewers, Peter Burgner, Anna L. Brewer, Jeffrey S. Norris, Allison W. Dryer, Andrew Muehlhausen, Luis R. Deliz Centeno, Thomas J. Moore, Alesha Unpingco, Thomas G. Salter
  • Publication number: 20250106356
    Abstract: Some examples of the disclosure are directed to systems and methods for augmenting and/or minimizing environment audio based on video characteristics associated with a video communication session facilitated by a video communications application. The video characteristics include activation of an outward-facing camera. In response to detecting activation of an outward-facing camera, an electronic device augments the environment audio stream associated with the video communication session and attenuates the first-person audio stream, so that a user listening to the session hears the environmental audio emphasized and the first-person audio deemphasized. In response to detecting activation of an inward-facing camera, the device emphasizes the first-person audio stream and deemphasizes the environmental audio stream.
    Type: Application
    Filed: September 24, 2024
    Publication date: March 27, 2025
    Inventors: Luis R. Deliz Centeno, Ronald J. Guglielmone, Jr., Devin W. Chalmers
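The gain logic this abstract describes (outward-facing camera emphasizes environment audio over first-person audio; inward-facing camera does the reverse) can be sketched in a few lines. The specific gain values are illustrative assumptions:

```python
def mix_gains(active_camera):
    """Return per-stream gains for a video communication session based on
    which camera is active. A sketch of the abstract's behavior; the
    actual gain values and camera identifiers are assumptions."""
    if active_camera == "outward":
        # Emphasize the environment, attenuate the first-person stream.
        return {"environment": 1.0, "first_person": 0.3}
    elif active_camera == "inward":
        # Emphasize the first-person stream, attenuate the environment.
        return {"environment": 0.3, "first_person": 1.0}
    raise ValueError(f"unknown camera: {active_camera!r}")
```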
  • Publication number: 20250093642
    Abstract: In a head-mounted device, position and motion sensors may be included to determine the orientation of the head-mounted device. A motion sensor may experience error that accumulates over time, sometimes referred to as drift. To mitigate the effect of drift in a motion sensor, a reference orientation for the motion sensor may be reset when a qualifying motion is detected. The qualifying motion may be detected using one or more criteria such as a total change in angular orientation or rate of change in angular orientation. The reference orientation for the motion sensor may also be reset when a duration of time elapses without a qualifying motion being detected.
    Type: Application
    Filed: August 8, 2024
    Publication date: March 20, 2025
    Inventors: Luis R. Deliz Centeno, Devin W. Chalmers
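The drift-mitigation scheme in this abstract (reset the motion sensor's reference orientation when a qualifying motion is detected, or when too long elapses without one) lends itself to a short sketch. All thresholds and names below are illustrative assumptions, not values from the patent:

```python
class DriftMitigator:
    """Reset a motion sensor's reference orientation on a qualifying
    motion (total angular change or rate of change exceeding a criterion)
    or after an idle timeout, mitigating accumulated drift."""

    def __init__(self, angle_threshold_deg=30.0, rate_threshold_dps=90.0,
                 idle_timeout_s=60.0):
        self.angle_threshold = angle_threshold_deg
        self.rate_threshold = rate_threshold_dps
        self.idle_timeout = idle_timeout_s
        self.reference_deg = 0.0
        self.last_reset_s = 0.0

    def update(self, orientation_deg, rate_dps, now_s):
        """Return True if the reference orientation was reset."""
        total_change = abs(orientation_deg - self.reference_deg)
        qualifying = (total_change >= self.angle_threshold
                      or abs(rate_dps) >= self.rate_threshold)
        timed_out = (now_s - self.last_reset_s) >= self.idle_timeout
        if qualifying or timed_out:
            self.reference_deg = orientation_deg
            self.last_reset_s = now_s
            return True
        return False
```

Frequent resets keep the window for drift accumulation short, which is the effect the abstract is after.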
  • Patent number: 12249033
    Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with virtual objects in an extended reality environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects in an extended reality environment, including repositioning virtual objects relative to the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, in an extended reality environment, including virtual objects that aid a user in navigating within the environment. In some embodiments, the techniques and user interfaces are for interacting with virtual objects, including objects displayed based on changes in a field-of-view of a user, in an extended reality environment, including repositioning virtual objects relative to the environment.
    Type: Grant
    Filed: September 6, 2023
    Date of Patent: March 11, 2025
    Assignee: Apple Inc.
    Inventors: Yiqiang Nie, Giovanni Agnoli, Devin W. Chalmers, Allison W. Dryer, Thomas G. Salter, Giancarlo Yerkes
  • Publication number: 20250060821
    Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
    Type: Application
    Filed: November 4, 2024
    Publication date: February 20, 2025
    Inventors: Grant H. Mulliken, Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Holly Gerhard, Lilli I. Jonsson
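The pattern-detection step this abstract describes (physiological data such as pupil diameter varies over time, and a detected pattern is used to identify interest or intention) can be sketched as a sustained-deviation check. The threshold, sample count, and function name are assumptions for illustration:

```python
def detect_interest(pupil_diameters_mm, baseline_mm, threshold_mm=0.4,
                    min_samples=3):
    """Treat a sustained pupil-diameter increase over a baseline as a
    pattern indicating user interest. A sketch of the abstract's idea;
    the real system may use a very different pattern detector."""
    run = 0
    for d in pupil_diameters_mm:
        if d - baseline_mm >= threshold_mm:
            run += 1
            if run >= min_samples:
                return True  # pattern detected: initiate the interaction
        else:
            run = 0  # deviation not sustained; start over
    return False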
  • Publication number: 20250045324
    Abstract: In one implementation, a method of storing object information in association with contextual information is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment. The method includes detecting a user engagement with an object in the environment based on the image of the environment. The method includes, in response to detecting the user engagement with the object, obtaining information regarding the object, obtaining contextual information, and storing, in a database, an entry including the information regarding the object in association with the contextual information.
    Type: Application
    Filed: September 16, 2022
    Publication date: February 6, 2025
    Inventors: Christopher D. Fu, Devin W. Chalmers, Matthias Dantone, Paulo R. Jansen dos Reis
  • Patent number: 12189848
    Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
    Type: Grant
    Filed: December 22, 2023
    Date of Patent: January 7, 2025
    Assignee: Apple Inc.
    Inventors: Devin W. Chalmers, William D. Lindmeier, Gregory Lutter, Jonathan C. Moisant-Thompson, Rahul Nair
  • Publication number: 20250005873
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data in a physical environment that includes one or more objects. The method may further include detecting a reflection of a first object of the one or more objects upon a reflective surface of a reflective object based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment based on determining a 3D position of the reflection of the first object. The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
    Type: Application
    Filed: September 10, 2024
    Publication date: January 2, 2025
    Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
  • Publication number: 20240412516
    Abstract: In one implementation, a method of tracking contexts is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes capturing, using the image sensor, an image of an environment at a particular time. The method includes detecting a context based at least in part on the image of the environment. The method includes, in accordance with a determination that the context is included within a predefined set of contexts, storing, in a database, an entry including data indicating detection of the context in association with data indicating the particular time. The method includes receiving a query regarding the context. The method includes providing a response to the query based on the data indicating the particular time.
    Type: Application
    Filed: September 16, 2022
    Publication date: December 12, 2024
    Inventors: Elizabeth V. Petrov, Devin W. Chalmers, Ioana Negoita
  • Patent number: 12164687
    Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
    Type: Grant
    Filed: August 6, 2021
    Date of Patent: December 10, 2024
    Assignee: Apple Inc.
    Inventors: Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Grant H. Mulliken, Holly E. Gerhard, Lilli I. Jonsson
  • Publication number: 20240402798
    Abstract: Systems and methods for controlling an electronic device using the gaze of a user. Movement of the gaze of the user to activation regions within a gaze field of view may activate a function of the electronic device. The activation regions may be dynamically modified to prevent accidental triggering of functions associated therewith.
    Type: Application
    Filed: May 16, 2024
    Publication date: December 5, 2024
    Inventors: Elena J. Nattinger, Devin W. Chalmers, Trent A. Greene, Luis R. Deliz Centeno, Robert T. Held, Allison W. Dryer
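The activation-region idea in this abstract (gaze entering a region activates a function, and regions are dynamically modified to prevent accidental triggering) can be sketched with simple circular regions. The geometry and shrink policy are illustrative assumptions:

```python
class ActivationRegion:
    """A circular region within the gaze field of view. Gaze landing
    inside it would activate an associated function; shrinking it is one
    hypothetical way to dynamically reduce accidental triggering."""

    def __init__(self, center, radius):
        self.center = center
        self.radius = radius

    def contains(self, gaze_point):
        """Return True if the gaze point falls inside the region."""
        dx = gaze_point[0] - self.center[0]
        dy = gaze_point[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius ** 2

    def shrink(self, factor=0.5):
        # Dynamically modify the region to prevent accidental triggering.
        self.radius *= factor
```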
  • Publication number: 20240393919
    Abstract: Systems and methods for facilitating selection, from a set of candidate objects, of a target object required by a request from a user include identifying the set of candidate objects in a gaze region corresponding to the gaze of the user and generating a graphical user interface that allows the user to select the target object from the set of candidate objects.
    Type: Application
    Filed: May 16, 2024
    Publication date: November 28, 2024
    Inventors: Andrew Muehlhausen, Elena J. Nattinger, Devin W. Chalmers, Paul Ewers, Paulo R. Jansen dos Reis, Peter Burgner, Christopher D. Fu, Richard P. Lozada
  • Publication number: 20240354177
    Abstract: An electronic device that is in communication with one or more wearable audio output devices detects occurrence of one or more first events while the one or more wearable audio output devices are being worn by a user. In response, the electronic device outputs, via the one or more wearable audio output devices, audio content corresponding to the one or more first events. After outputting the audio content corresponding to the one or more first events, the electronic device detects movement of the one or more wearable audio output devices, and in response to detecting the movement, and in accordance with a determination that a first movement of the one or more wearable audio output devices meets first movement criteria, outputs, via the one or more wearable audio output devices, additional audio content corresponding to one or more events.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 24, 2024
    Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
  • Publication number: 20240353891
    Abstract: Various implementations disclosed herein include devices, systems, and methods for associating chronology with a physical article. In some implementations, a device includes a display, one or more processors, and a memory. The method may include presenting an environment comprising a representation of a physical article. An amount of time since a previous event associated with the physical article may be monitored. An indicator of the amount of time may be displayed proximate the representation of the physical article.
    Type: Application
    Filed: July 26, 2022
    Publication date: October 24, 2024
    Inventor: Devin W. Chalmers
  • Publication number: 20240338104
    Abstract: A drive unit for driving a load, such as a centrifugal compressor or a pump, comprises a driving shaft connected to the load to be driven. The drive unit comprises a plurality of electric motors connected to the driving shaft and a plurality of variable frequency drives, electrically connected to the power grid (G), used to feed each electric motor.
    Type: Application
    Filed: January 13, 2022
    Publication date: October 10, 2024
    Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers
  • Patent number: 12112441
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
    Type: Grant
    Filed: June 27, 2023
    Date of Patent: October 8, 2024
    Assignee: Apple Inc.
    Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter