Patents by Inventor Devin W. Chalmers

Devin W. Chalmers has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240353891
    Abstract: Various implementations disclosed herein include devices, systems, and methods for associating chronology with a physical article. In some implementations, a device includes a display, one or more processors, and a memory. A method performed by the device may include presenting an environment comprising a representation of a physical article. An amount of time since a previous event associated with the physical article may be monitored. An indicator of the amount of time may be displayed proximate to the representation of the physical article.
    Type: Application
    Filed: July 26, 2022
    Publication date: October 24, 2024
    Inventor: Devin W. Chalmers
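The entry above (publication 20240353891) describes tracking how long it has been since an event tied to a physical article and showing that duration next to the article's representation. Below is a minimal Swift sketch of that bookkeeping; the type, field names, and the plant example are assumptions made for illustration, and rendering the label in an XR scene is omitted.

```swift
import Foundation

// Hypothetical sketch: track the time since the last event associated with a
// physical article and produce a short label to display near its
// representation. Names are illustrative, not taken from the filing.
struct ArticleChronology {
    let articleName: String
    var lastEventDate: Date

    // Elapsed time since the previous event (e.g., last watering of a plant).
    func elapsed(asOf now: Date = Date()) -> TimeInterval {
        now.timeIntervalSince(lastEventDate)
    }

    // Human-readable indicator, e.g. "3d 4h since last event".
    func indicatorText(asOf now: Date = Date()) -> String {
        let formatter = DateComponentsFormatter()
        formatter.allowedUnits = [.day, .hour, .minute]
        formatter.unitsStyle = .abbreviated
        let duration = formatter.string(from: elapsed(asOf: now)) ?? "unknown"
        return "\(duration) since last event"
    }
}

let plant = ArticleChronology(articleName: "Fern",
                              lastEventDate: Date(timeIntervalSinceNow: -273_600))
print(plant.indicatorText())   // e.g. "3d 4h since last event"
```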
  • Publication number: 20240354177
    Abstract: An electronic device that is in communication with one or more wearable audio output devices detects occurrence of one or more first events while the one or more wearable audio output devices are being worn by a user. In response, the electronic device outputs, via the one or more wearable audio output devices, audio content corresponding to the one or more first events. After outputting the audio content corresponding to the one or more first events, the electronic device detects movement of the one or more wearable audio output devices, and in response to detecting the movement, and in accordance with a determination that a first movement of the one or more wearable audio output devices meets first movement criteria, outputs, via the one or more wearable audio output devices, additional audio content corresponding to one or more events.
    Type: Application
    Filed: April 17, 2024
    Publication date: October 24, 2024
    Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
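Publication 20240354177 above describes a two-stage flow: brief audio for newly detected events, then additional audio if the wearer's subsequent head movement meets movement criteria. The sketch below models that flow as a small state machine; the gesture type, time window, and all names are illustrative assumptions rather than details from the filing.

```swift
import Foundation

// Illustrative sketch: brief audio for incoming events, then expanded audio
// if the wearer makes a qualifying head movement shortly afterwards.
enum HeadMovement { case nod, shake, none }

struct NotificationAudioController {
    var pendingDetail: [String] = []
    let followUpWindow: TimeInterval = 5.0
    var lastSummaryTime: Date?

    // An event occurs while the earbuds are worn: speak a short summary, keep detail.
    mutating func handleEvent(summary: String, detail: String, at time: Date = Date()) {
        play(summary)
        pendingDetail.append(detail)
        lastSummaryTime = time
    }

    // Movement detected after the summary: if it meets the criteria, play more.
    mutating func handleMovement(_ movement: HeadMovement, at time: Date = Date()) {
        guard movement == .nod,
              let last = lastSummaryTime,
              time.timeIntervalSince(last) <= followUpWindow,
              !pendingDetail.isEmpty else { return }
        pendingDetail.forEach { play($0) }
        pendingDetail.removeAll()
    }

    private func play(_ audio: String) {
        print("Playing audio: \(audio)")   // stand-in for audio output on the headset
    }
}

var controller = NotificationAudioController()
controller.handleEvent(summary: "New message from Ana", detail: "Ana: lunch at noon?")
controller.handleMovement(.nod)
```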
  • Publication number: 20240338104
    Abstract: A drive unit for driving a load, such as a centrifugal compressor, a pump, or the like, comprises a driving shaft that is connected to the load to be driven. The drive unit comprises a plurality of electric motors connected to the driving shaft and a plurality of variable frequency drives, electrically connected to the power grid (G), used to feed each electric motor.
    Type: Application
    Filed: January 13, 2022
    Publication date: October 10, 2024
    Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers
  • Patent number: 12112441
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
    Type: Grant
    Filed: June 27, 2023
    Date of Patent: October 8, 2024
    Assignee: Apple Inc.
    Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
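Patent 12112441 above involves estimating the plane of a reflective object and positioning virtual content relative to it. One geometric step such placement plausibly relies on is reflecting a 3D point across the mirror plane, sketched below with an assumed minimal vector type; detection of the mirror itself is out of scope.

```swift
// Minimal sketch, assuming the mirror's plane is already known as a point on
// the plane plus a unit normal. The Vec3 type and the example values are
// illustrative assumptions.
struct Vec3 {
    var x, y, z: Double
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    static func * (a: Vec3, s: Double) -> Vec3 { Vec3(x: a.x * s, y: a.y * s, z: a.z * s) }
    func dot(_ b: Vec3) -> Double { x * b.x + y * b.y + z * b.z }
}

// Reflect a 3D point across the plane through `planePoint` with unit `normal`.
func mirrored(_ point: Vec3, planePoint: Vec3, normal: Vec3) -> Vec3 {
    let distance = (point - planePoint).dot(normal)   // signed distance to the plane
    return point - normal * (2 * distance)
}

// Example: a mirror in the x = 2 plane reflects a point at x = 0.5 to x = 3.5.
let mirrorPoint = Vec3(x: 2, y: 0, z: 0)
let mirrorNormal = Vec3(x: 1, y: 0, z: 0)
print(mirrored(Vec3(x: 0.5, y: 1, z: -0.2), planePoint: mirrorPoint, normal: mirrorNormal))
```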
  • Publication number: 20240275922
    Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
    Type: Application
    Filed: April 23, 2024
    Publication date: August 15, 2024
    Inventors: Devin W. Chalmers, Jae Hwang Lee
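Publication 20240275922 above conditions the virtual representation of a remote participant on a capability obtained for that participant. The sketch below shows the general shape of such a selection; the capability levels and representation kinds are assumptions for illustration only.

```swift
// Hedged sketch of the selection logic: the representation shown for a remote
// participant depends on a capability the device obtains for that participant.
enum ParticipantCapability { case audioOnly, video, spatialAvatar }
enum Representation { case audioBadge, videoTile, animatedAvatar }

func representation(for capability: ParticipantCapability) -> Representation {
    switch capability {
    case .audioOnly:     return .audioBadge      // fall back to a simple badge
    case .video:         return .videoTile       // flat video surface in the scene
    case .spatialAvatar: return .animatedAvatar  // fully spatial representation
    }
}

print(representation(for: .audioOnly))   // audioBadge
```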
  • Publication number: 20240256215
    Abstract: A method performed by an audio system comprising a headset. The method sends a playback signal containing user-desired audio content to drive a speaker of the headset that is being worn by a user, receives a microphone signal from a microphone that is arranged to capture sounds within an ambient environment in which the user is located, performs a speech detection algorithm upon the microphone signal to detect speech contained therein, in response to a detection of speech, determines that the user intends to engage in a conversation with a person who is located within the ambient environment, and, in response to determining that the user intends to engage in the conversation, adjusts the playback signal based on the user-desired audio content.
    Type: Application
    Filed: October 16, 2023
    Publication date: August 1, 2024
    Inventors: Christopher T. Eubank, Devin W. Chalmers, Kirill Kalinichev, Rahul Nair, Thomas G. Salter
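Publication 20240256215 above adjusts headset playback once ambient speech suggests the wearer intends to converse. The sketch below illustrates that decision with a simple ducking rule; the confidence threshold, the "facing speaker" cue, and all names are assumptions, and real speech detection is out of scope.

```swift
// Illustrative sketch: when detected ambient speech suggests the wearer wants
// to converse, the playback signal is adjusted (here, simply ducked).
struct ConversationAwarePlayback {
    var playbackGain: Double = 1.0          // 1.0 = user-desired level
    let duckedGain: Double = 0.2
    let speechConfidenceThreshold: Double = 0.8

    // `speechConfidence` stands in for the output of a speech-detection
    // algorithm run on the ambient microphone signal; `facingSpeaker` stands in
    // for any additional cue used to infer intent to engage in conversation.
    mutating func update(speechConfidence: Double, facingSpeaker: Bool) {
        let intendsToConverse = speechConfidence >= speechConfidenceThreshold && facingSpeaker
        playbackGain = intendsToConverse ? duckedGain : 1.0
    }
}

var playback = ConversationAwarePlayback()
playback.update(speechConfidence: 0.93, facingSpeaker: true)
print(playback.playbackGain)   // 0.2 while the conversation is inferred
```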
  • Publication number: 20240219998
    Abstract: In one implementation, a method dynamically changes sensory and/or input modes associated with content based on a current contextual state. The method includes: while in a first contextual state, presenting extended reality (XR) content, via the display device, according to a first presentation mode and enabling a first set of input modes to be directed to the XR content; detecting a change from the first contextual state to a second contextual state; and in response to detecting the change from the first contextual state to the second contextual state, presenting, via the display device, the XR content according to a second presentation mode different from the first presentation mode and enabling a second set of input modes to be directed to the XR content that are different from the first set of input modes.
    Type: Application
    Filed: July 13, 2022
    Publication date: July 4, 2024
    Inventors: Bryce L. Schmidtchen, Brian W. Temple, Devin W. Chalmers
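Publication 20240219998 above pairs each contextual state with a presentation mode and a set of permitted input modes. The sketch below shows one way such a mapping could look; the particular states, modes, and pairings are illustrative assumptions.

```swift
// Minimal sketch: a change in contextual state switches both how the XR
// content is presented and which input modes are enabled for it.
enum ContextualState { case seatedAtDesk, walking }
enum PresentationMode { case headLocked2D, worldLockedVolumetric }
enum InputMode: Hashable { case gaze, handGesture, voice, hardwareKeyboard }

func configuration(for state: ContextualState) -> (PresentationMode, Set<InputMode>) {
    switch state {
    case .seatedAtDesk:
        return (.worldLockedVolumetric, [.gaze, .handGesture, .hardwareKeyboard])
    case .walking:
        // Simpler presentation and hands-free input while the user is moving.
        return (.headLocked2D, [.gaze, .voice])
    }
}

let (mode, inputs) = configuration(for: .walking)
print(mode, inputs.count)   // headLocked2D 2
```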
  • Publication number: 20240211200
    Abstract: Various implementations disclosed herein include devices, systems, and methods that sense, assess, measure, or otherwise determine user attention to selectively transmit or deliver audio from an audio source (e.g., a talking user's voice captured by their device, a TV, etc.) to one or more listening users' devices and/or adjust audio cancellation/transparency of environmental noise on the one or more listening users' devices in a multi-person setting.
    Type: Application
    Filed: December 20, 2023
    Publication date: June 27, 2024
    Inventors: Robert D. Silfvast, Izzet B. Yildiz, Daniel Javaheri Zadeh, Grant H. Mulliken, Srinath Nizampatnam, Devin W. Chalmers
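Publication 20240211200 above uses an estimate of listener attention to decide whether to deliver a source's audio and how much environmental transparency to allow. The sketch below reduces that to a threshold plus a transparency level; the threshold and the linear mapping are assumptions made for illustration.

```swift
// Hedged sketch: an attention estimate toward a given audio source drives both
// whether that source's audio is delivered and how much transparency is applied.
struct AttentionRouting {
    let deliveryThreshold: Double = 0.6

    // `attention` is a 0...1 estimate of the listener's attention to the source
    // (e.g., a talking user or a TV).
    func route(attention: Double) -> (deliver: Bool, transparency: Double) {
        let deliver = attention >= deliveryThreshold
        // More attention to the source -> more transparency, less cancellation.
        let transparency = min(max(attention, 0), 1)
        return (deliver, transparency)
    }
}

let router = AttentionRouting()
print(router.route(attention: 0.8))   // (deliver: true, transparency: 0.8)
```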
  • Publication number: 20240212272
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflection and determining the context associated with a use of the electronic device in the physical environment. For example, an example process may include obtaining sensor data from one or more sensors of the electronic device in a physical environment that includes one or more objects, detecting a reflected image amongst the one or more objects based on the sensor data, and in accordance with detecting the reflected image, determining a context associated with a use of the electronic device in the physical environment based on the sensor data, and presenting virtual content based on the context, wherein the virtual content is positioned at a three-dimensional (3D) location based on a 3D position of the reflected image.
    Type: Application
    Filed: March 8, 2024
    Publication date: June 27, 2024
    Inventors: Brian W. Temple, Devin W. Chalmers, Rahul Nair, Thomas G. Salter
  • Publication number: 20240200962
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide directional awareness indicators based on context detected in a physical environment. For example, an example process may include obtaining sensor data from one or more sensors of the device in a physical environment, detecting a context associated with a use of the device in the physical environment based on the sensor data, determining whether to present a directional awareness indicator based on determining that the context represents a state in which the user would benefit from the directional awareness indicator, and in accordance with determining to present the directional awareness indicator, identifying a direction for the directional awareness indicator, wherein the direction corresponds to a cardinal direction or a direction towards an anchored location or an anchored device, and presenting the directional awareness indicator based on the identified direction.
    Type: Application
    Filed: March 4, 2024
    Publication date: June 20, 2024
    Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter, Yiqiang Nie
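Publication 20240200962 above presents a directional awareness indicator pointing toward a cardinal direction or an anchored location. The sketch below computes the indicator's direction toward an anchored point relative to the device's heading on a flat plane; the coordinate convention and names are simplifying assumptions.

```swift
import Foundation

// Illustrative sketch: given the device's position/heading and an anchored
// location, compute which way an awareness indicator should point.
struct Point { var x: Double; var y: Double }

// Angle (radians) the indicator should point at, relative to the device's
// current heading (0 = straight ahead).
func indicatorAngle(device: Point, headingRadians: Double, anchor: Point) -> Double {
    let toAnchor = atan2(anchor.y - device.y, anchor.x - device.x)
    var relative = toAnchor - headingRadians
    // Normalize into (-pi, pi] so the indicator takes the short way around.
    while relative > Double.pi { relative -= 2 * Double.pi }
    while relative <= -Double.pi { relative += 2 * Double.pi }
    return relative
}

let angle = indicatorAngle(device: Point(x: 0, y: 0),
                           headingRadians: 0,
                           anchor: Point(x: 0, y: 5))
print(angle)   // ≈ 1.5708 rad: the anchor is 90° off the current heading
```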
  • Publication number: 20240184986
    Abstract: In one implementation, a method of displaying text suggestions is performed at a device including an input device, a display, an image sensor, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, one or more images of a physical environment. The method includes obtaining one or more semantic labels associated with the physical environment based on the one or more images of the physical environment. The method includes receiving, via the input device, text. The method includes determining one or more text suggestions based on the one or more semantic labels associated with the physical environment and the text. The method includes displaying, on the display, the one or more text suggestions.
    Type: Application
    Filed: April 26, 2022
    Publication date: June 6, 2024
    Inventors: Christopher D. Fu, Devin W. Chalmers, Paulo R. Jansen dos Reis
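Publication 20240184986 above ranks text suggestions using both the typed text and semantic labels derived from images of the physical environment. The sketch below shows one plausible scoring pass; the vocabulary, labels, and preference for in-scene words are assumptions for illustration.

```swift
// Hedged sketch: candidate suggestions are filtered by the typed prefix and
// ranked so that words matching a semantic label from the environment come first.
func suggestions(forTyped typed: String,
                 sceneLabels: Set<String>,
                 vocabulary: [String],
                 limit: Int = 3) -> [String] {
    let prefix = typed.lowercased()
    let candidates = vocabulary.filter { $0.lowercased().hasPrefix(prefix) }
    let ranked = candidates.sorted { a, b in
        let aInScene = sceneLabels.contains(a.lowercased())
        let bInScene = sceneLabels.contains(b.lowercased())
        if aInScene != bInScene { return aInScene }
        return a < b
    }
    return Array(ranked.prefix(limit))
}

// Typing "co" in a kitchen (labels: coffee, counter) surfaces those words first.
print(suggestions(forTyped: "co",
                  sceneLabels: ["coffee", "counter", "mug"],
                  vocabulary: ["cold", "coffee", "counter", "code", "couch"]))
// ["coffee", "counter", "code"]
```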
  • Patent number: 12003890
    Abstract: An electronic device is described. In some embodiments, the electronic device includes instructions for: while presenting an extended reality environment, receiving, by the first electronic device, a request to present a virtual representation of a remote participant of a communication session, where the first electronic device is connected to the communication session; obtaining a capability of the remote participant of the communication session; and in response to receiving the request to present the virtual representation of the remote participant of the communication session, presenting the virtual representation of the remote participant of the communication session based on the obtained capability of the remote participant of the communication session.
    Type: Grant
    Filed: August 8, 2023
    Date of Patent: June 4, 2024
    Assignee: Apple Inc.
    Inventors: Devin W. Chalmers, Jae Hwang Lee
  • Patent number: 11995483
    Abstract: An electronic device that is in communication with one or more wearable audio output devices detects occurrence of an event while the one or more wearable audio output devices are being worn by a user. In response to detecting the occurrence of the event, the electronic device outputs, via the one or more wearable audio output devices, one or more audio notifications corresponding to the event, including: in accordance with a determination that the user of the electronic device is currently engaged in a conversation, delaying outputting the one or more audio notifications corresponding to the event until the conversation has ended; and, in accordance with a determination that the user of the electronic device is not currently engaged in a conversation, outputting the one or more audio notifications corresponding to the event without delaying the outputting.
    Type: Grant
    Filed: December 13, 2021
    Date of Patent: May 28, 2024
    Assignee: Apple Inc.
    Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
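Patent 11995483 above delays audio notifications while the wearer is judged to be in a conversation and delivers them once it ends. The sketch below captures that queue-and-flush behavior; conversation detection itself is out of scope and all names are illustrative assumptions.

```swift
// Minimal sketch: notifications are spoken immediately unless the wearer is in
// a conversation, in which case they are held and delivered when it ends.
struct NotificationScheduler {
    var delayed: [String] = []
    var userInConversation = false

    mutating func notify(_ message: String) {
        if userInConversation {
            delayed.append(message)          // hold until the conversation ends
        } else {
            speak(message)
        }
    }

    mutating func conversationEnded() {
        userInConversation = false
        delayed.forEach { speak($0) }        // flush everything that was held
        delayed.removeAll()
    }

    private func speak(_ message: String) {
        print("Audio notification: \(message)")
    }
}

var scheduler = NotificationScheduler()
scheduler.userInConversation = true
scheduler.notify("Calendar: stand-up in 5 minutes")   // held
scheduler.conversationEnded()                         // spoken now
```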
  • Publication number: 20240143071
    Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
    Type: Application
    Filed: December 22, 2023
    Publication date: May 2, 2024
    Inventors: Devin W. Chalmers, William D. Lindmeier, Gregory Lutter, Jonathan C. Moisant-Thompson, Rahul Nair
  • Publication number: 20240103614
    Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with graphical user interfaces using gaze. In some embodiments, the present disclosure includes techniques and user interfaces for repositioning virtual objects. In some embodiments, the present disclosure includes techniques and user interfaces for transitioning modes of a camera capture user interface.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Allison W. Dryer, Giancarlo Yerkes, Gregory Lutter, Brian W. Temple, Devin W. Chalmers, Luis R. Deliz Centeno, Elena J. Nattinger, Anna L. Brewer
  • Publication number: 20240104862
    Abstract: Videos are presented in an extended reality environment in a spatially aware manner such that a position of each frame of the video in the extended reality environment is based on one or more of a position, orientation, and field of view of a camera that captured the video.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Gregory Lutter, Anna L. Brewer, Devin W. Chalmers, Elena J. Nattinger
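Publication 20240104862 above positions each video frame in the XR environment according to the capturing camera's position, orientation, and field of view. The sketch below shows a flat, single-axis version of that placement; the x/z layout, one-meter viewing distance, and names are simplifying assumptions.

```swift
import Foundation

// Illustrative sketch: a frame is shown on a surface in front of where the
// capturing camera was, facing along its view direction, sized from its FOV.
struct FramePlacement {
    var centerX: Double
    var centerZ: Double
    var yaw: Double       // the frame faces back along the camera's view direction
    var width: Double
}

func placement(cameraX: Double, cameraZ: Double, cameraYaw: Double,
               horizontalFOV: Double, distance: Double = 1.0) -> FramePlacement {
    // Width a surface must have at `distance` to span the camera's FOV.
    let width = 2 * distance * tan(horizontalFOV / 2)
    return FramePlacement(centerX: cameraX + distance * cos(cameraYaw),
                          centerZ: cameraZ + distance * sin(cameraYaw),
                          yaw: cameraYaw,
                          width: width)
}

// A frame captured looking along +x with a 60° FOV sits 1 m ahead, ~1.15 m wide.
let p = placement(cameraX: 0, cameraZ: 0, cameraYaw: 0, horizontalFOV: Double.pi / 3)
print(p.centerX, p.width)   // 1.0 1.1547...
```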
  • Publication number: 20240104863
    Abstract: Images of a physical environment are evaluated to determine candidate presentation locations for extended reality content items in an extended reality environment, each having associated presentation criteria. Extended reality content items satisfying the associated presentation criteria are presented in the extended reality environment at the candidate presentation location.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Anna L. Brewer, Elena J. Nattinger, Devin W. Chalmers
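Publication 20240104863 above matches content items against candidate presentation locations, each carrying its own presentation criteria. The sketch below models the matching step with two assumed criteria (surface type and free area); the actual criteria in the filing are not specified here.

```swift
// Hedged sketch: a content item is placed only at candidate locations whose
// presentation criteria it satisfies.
enum Surface { case wall, table, floor }

struct CandidateLocation {
    let id: String
    let surface: Surface
    let freeArea: Double        // square meters of unobstructed space
}

struct ContentItem {
    let name: String
    let requiredSurface: Surface
    let requiredArea: Double
}

func placements(for item: ContentItem, candidates: [CandidateLocation]) -> [String] {
    candidates
        .filter { $0.surface == item.requiredSurface && $0.freeArea >= item.requiredArea }
        .map { $0.id }
}

let candidates = [
    CandidateLocation(id: "wall-over-sofa", surface: .wall, freeArea: 1.2),
    CandidateLocation(id: "kitchen-table", surface: .table, freeArea: 0.6),
    CandidateLocation(id: "hallway-wall", surface: .wall, freeArea: 0.3),
]
let poster = ContentItem(name: "Poster", requiredSurface: .wall, requiredArea: 0.5)
print(placements(for: poster, candidates: candidates))   // ["wall-over-sofa"]
```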
  • Publication number: 20240104871
    Abstract: Electronic devices provide extended reality experiences. In some embodiments, a media capture user interface is displayed, including a capture guide. In some embodiments, gaze information is used for targeting. In some embodiments, a virtual object is manipulated.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 28, 2024
    Inventors: Anna L. Brewer, Devin W. Chalmers, Allison W. Dryer, Elena J. Nattinger, Giancarlo Yerkes
  • Publication number: 20240107257
    Abstract: One or more sound components are identified, isolated, and processed such that they are relocated to a different location in a sound field of spatial audio content. The one or more sounds may be voices, and in particular voices of a predetermined user.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Anna L. Brewer, Elena J. Nattinger, Devin W. Chalmers
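Publication 20240107257 above isolates sound components such as a particular voice and relocates them within the sound field. The sketch below shows only the relocation step, approximated as equal-power stereo panning of an already-isolated mono track; source separation and true spatial rendering are out of scope, and all names are assumptions.

```swift
import Foundation

// Illustrative sketch: once a voice has been isolated from the mix, a stereo
// approximation of relocating it is equal-power panning toward a target azimuth.
// `azimuth` in [-1, 1]: -1 = fully left, 0 = center, +1 = fully right.
func equalPowerGains(azimuth: Double) -> (left: Double, right: Double) {
    let clamped = min(max(azimuth, -1), 1)
    let angle = (clamped + 1) * Double.pi / 4      // map [-1, 1] to [0, pi/2]
    return (cos(angle), sin(angle))
}

// Move an isolated (mono) voice track toward the listener's right.
let voiceSamples: [Double] = [0.1, 0.3, -0.2, 0.05]
let gains = equalPowerGains(azimuth: 0.8)
let relocated = voiceSamples.map { (left: $0 * gains.left, right: $0 * gains.right) }
print(gains.left, gains.right)   // ≈ 0.156 0.988
print(relocated[1])              // the second sample, panned mostly right
```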
  • Patent number: 11907420
    Abstract: One or more techniques for managing virtual objects between one or more displays are described. In accordance with some embodiments, exemplary techniques for displaying a virtual object are described.
    Type: Grant
    Filed: August 9, 2021
    Date of Patent: February 20, 2024
    Assignee: Apple Inc.
    Inventors: Devin W. Chalmers, Gregory L. Lutter, Jonathan C. Moisant-Thompson, Rahul Nair