Patents by Inventor Devin W. Chalmers
Devin W. Chalmers has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230297607
Abstract: In one implementation, a method of presenting virtual content is performed by a device including an image sensor, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment. The method includes detecting, in the image of the physical environment, machine-readable content associated with an object. The method includes determining an object type of the object. The method includes obtaining virtual content based on a search query created using the machine-readable content and the object type. The method includes displaying the virtual content.
Type: Application
Filed: March 15, 2023
Publication date: September 21, 2023
Inventors: Thomas G. Salter, Christopher D. Fu, Devin W. Chalmers, Paulo R. Jansen dos Reis
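Concretely, the query-building step might look like the following Swift sketch. The `ObjectType` enum, the function name, and the payload-plus-type composition are illustrative assumptions; the abstract does not specify an implementation.

```swift
// Hypothetical sketch: combine a decoded machine-readable payload (e.g. a
// barcode's text) with a classified object type to form a search query.
// `ObjectType` and `makeSearchQuery` are assumptions, not the patent's API.

enum ObjectType: String {
    case book, wineBottle = "wine bottle", poster, unknown
}

func makeSearchQuery(payload: String, objectType: ObjectType) -> String {
    // Scope the raw payload by the object's type so results stay relevant
    // (e.g. a code on a wine bottle searches wines, not general products).
    switch objectType {
    case .unknown:
        return payload
    default:
        return "\(objectType.rawValue) \(payload)"
    }
}

let query = makeSearchQuery(payload: "9781491949863", objectType: .book)
print(query)  // "book 9781491949863"
```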
-
Publication number: 20230290270
Abstract: Devices, systems, and methods that facilitate learning a language in an extended reality (XR) environment. This may involve identifying objects or activities in the environment, identifying a context associated with the user or the environment, and providing language teaching content based on the objects, activities, or contexts. In one example, the language teaching content provides individual words, phrases, or sentences corresponding to the objects, activities, or contexts. In another example, the language teaching content requests user interaction (e.g., via quiz questions or educational games) corresponding to the objects, activities, or contexts. Context may be used to determine whether or how to provide the language teaching content. For example, based on a user's current course of language study (e.g., this week's vocabulary list), corresponding objects or activities may be identified in the environment for use in providing the language teaching content.
Type: Application
Filed: February 21, 2023
Publication date: September 14, 2023
Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter
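A minimal Swift sketch of the vocabulary-list filtering the last example describes, with hypothetical scene labels and vocabulary data standing in for real scene understanding and course content:

```swift
// Only objects that appear on the user's current vocabulary list produce
// teaching content. All data below is an illustrative placeholder.

struct VocabularyEntry {
    let english: String
    let target: String   // the word in the language being learned
}

let thisWeeksList = [
    VocabularyEntry(english: "cup", target: "la taza"),
    VocabularyEntry(english: "window", target: "la ventana"),
]

// Labels a scene-understanding pass might report for the XR environment.
let detectedObjects = ["cup", "laptop", "window"]

for entry in thisWeeksList where detectedObjects.contains(entry.english) {
    print("Annotate \(entry.english): \"\(entry.target)\"")
}
```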
-
Publication number: 20230102686
Abstract: In one implementation, a method of localizing a device is performed at a device including one or more processors and non-transitory memory. The method includes obtaining an estimate of a pose of the device in an environment. The method includes obtaining an environmental model of the environment including a spatial feature in the environment defined by a first spatial feature location. The method includes determining a second spatial feature location of the spatial feature based on the estimate of the pose of the device. The method includes determining an updated estimate of the pose of the device based on the first spatial feature location and the second spatial feature location.
Type: Application
Filed: June 28, 2022
Publication date: March 30, 2023
Inventors: Anna L. Brewer, Devin W. Chalmers, Thomas G. Salter
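The update step can be illustrated with a translation-only Swift sketch: if the model places a feature at one location but the pose estimate implies another, the residual corrects the estimate. A real localizer would also solve for rotation, so treat this as an assumption made for brevity.

```swift
// Translation-only pose refinement from a single feature correspondence.
// `Vector3` and the direct residual correction are illustrative assumptions.

struct Vector3 {
    var x, y, z: Double
    static func + (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z)
    }
    static func - (a: Vector3, b: Vector3) -> Vector3 {
        Vector3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z)
    }
}

func refinePosition(estimate: Vector3, mapped: Vector3, observed: Vector3) -> Vector3 {
    // The residual between the model's feature location (first location)
    // and the location implied by the pose estimate (second location)
    // shifts the estimate directly.
    estimate + (mapped - observed)
}

let updated = refinePosition(
    estimate: Vector3(x: 0, y: 0, z: 0),
    mapped:   Vector3(x: 2.0, y: 0, z: 1.0),
    observed: Vector3(x: 2.5, y: 0, z: 0.75))
print(updated)  // Vector3(x: -0.5, y: 0.0, z: 0.25)
```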
-
Publication number: 20220291743
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine that a user is interested in audio content by determining that a movement (e.g., a user's head bob) has a time-based relationship with detected audio content (e.g., the beat of music playing in the background). Some implementations involve obtaining first sensor data and second sensor data corresponding to a physical environment, the first sensor data corresponding to audio in the physical environment and the second sensor data corresponding to a body movement in the physical environment. A time-based relationship between one or more elements of the audio and one or more aspects of the body movement is identified based on the first sensor data and the second sensor data. An interest in content of the audio is identified based on identifying the time-based relationship. Various actions may be performed proactively based on identifying the interest in the content.
Type: Application
Filed: March 8, 2022
Publication date: September 15, 2022
Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter
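One hedged reading of "time-based relationship" is the fraction of movement peaks that coincide with beat onsets, as in this Swift sketch; the timestamps, tolerance, and 0.75 threshold are all illustrative:

```swift
// Score how strongly head-movement peaks line up with beat onsets.
// Timestamps are in seconds; both streams are assumed to come from
// upstream processing (beat tracking, IMU peak picking) not shown here.

func alignmentScore(beats: [Double], movementPeaks: [Double],
                    tolerance: Double = 0.1) -> Double {
    guard !movementPeaks.isEmpty else { return 0 }
    // Fraction of movement peaks landing within `tolerance` of some beat.
    let matched = movementPeaks.filter { peak in
        beats.contains { abs($0 - peak) <= tolerance }
    }
    return Double(matched.count) / Double(movementPeaks.count)
}

let beats = [0.0, 0.5, 1.0, 1.5, 2.0]   // ~120 BPM music in the background
let bobs  = [0.52, 1.04, 1.48, 2.03]    // head bobs roughly on the beat
if alignmentScore(beats: beats, movementPeaks: bobs) > 0.75 {
    print("User appears interested in the audio content")
}
```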
-
Patent number: 11297371
Abstract: A method includes obtaining a video based on images detected with cameras mounted on a vehicle and displaying a portion of the video corresponding to a time offset and a viewing angle.
Type: Grant
Filed: April 23, 2019
Date of Patent: April 5, 2022
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, Sean B. Kelly
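A toy Swift sketch of the two lookups the abstract implies, selecting the nearest camera for a viewing angle and a frame index for a time offset; the camera headings and frame rate are invented for the example:

```swift
// Map a requested viewing angle to the nearest vehicle camera and a time
// offset to a frame index. Values here are placeholders, not the patent's.

struct Camera {
    let name: String
    let headingDegrees: Double  // direction the camera faces
}

let cameras = [
    Camera(name: "front", headingDegrees: 0),
    Camera(name: "right", headingDegrees: 90),
    Camera(name: "rear",  headingDegrees: 180),
    Camera(name: "left",  headingDegrees: 270),
]

func select(viewingAngle: Double, timeOffset: Double,
            frameRate: Double = 30) -> (camera: Camera, frame: Int) {
    // Angular distance on a circle, so 350 degrees is near the front camera.
    func distance(_ a: Double, _ b: Double) -> Double {
        let d = abs(a - b).truncatingRemainder(dividingBy: 360)
        return min(d, 360 - d)
    }
    let camera = cameras.min {
        distance($0.headingDegrees, viewingAngle) < distance($1.headingDegrees, viewingAngle)
    }!
    return (camera, Int(timeOffset * frameRate))
}

let view = select(viewingAngle: 85, timeOffset: 2.5)
print(view.camera.name, view.frame)  // right 75
```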
-
Publication number: 20220100590
Abstract: An electronic device that is in communication with one or more wearable audio output devices detects occurrence of an event while the one or more wearable audio output devices are being worn by a user. In response to detecting the occurrence of the event, the electronic device outputs, via the one or more wearable audio output devices, one or more audio notifications corresponding to the event, including: in accordance with a determination that the user of the electronic device is currently engaged in a conversation, delaying outputting the one or more audio notifications corresponding to the event until the conversation has ended; and, in accordance with a determination that the user of the electronic device is not currently engaged in a conversation, outputting the one or more audio notifications corresponding to the event without delaying the outputting.
Type: Application
Filed: December 13, 2021
Publication date: March 31, 2022
Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
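The delay-until-the-conversation-ends behavior reduces to a queue gated on a conversation flag. This Swift sketch stubs out the conversation detection entirely; the names and structure are assumptions, not the patent's design:

```swift
// Notifications arriving mid-conversation are queued and flushed once a
// hypothetical conversation signal (e.g. speech detection) clears.

struct AudioNotification { let text: String }

final class NotificationGate {
    private var pending: [AudioNotification] = []
    private(set) var inConversation = false

    func deliver(_ note: AudioNotification) {
        if inConversation {
            pending.append(note)          // hold until the conversation ends
        } else {
            print("play: \(note.text)")   // output without delay
        }
    }

    func conversationStarted() { inConversation = true }

    func conversationEnded() {
        inConversation = false
        pending.forEach { print("play: \($0.text)") }  // flush deferred audio
        pending.removeAll()
    }
}

let gate = NotificationGate()
gate.conversationStarted()
gate.deliver(AudioNotification(text: "Meeting in 10 minutes"))
gate.conversationEnded()   // the queued notification is spoken now
```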
-
Patent number: 11231975
Abstract: An electronic device is in communication with one or more wearable audio output devices. The electronic device detects occurrence of an event and outputs, via the one or more wearable audio output devices, one or more audio notifications corresponding to the event. After beginning to output the one or more audio notifications, the electronic device detects an input directed to the one or more wearable audio output devices. In response, if the input is detected within a predefined time period with respect to the one or more audio notifications corresponding to the event, the electronic device performs a first operation associated with the one or more audio notifications corresponding to the event; and, if the input is detected after the predefined time period has elapsed, the electronic device performs a second operation not associated with the one or more audio notifications corresponding to the event.
Type: Grant
Filed: September 18, 2019
Date of Patent: January 25, 2022
Assignee: Apple Inc.
Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
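The predefined-time-period branching might be sketched as below, assuming the system records when the last notification finished; the five-second window and the operation names are placeholders:

```swift
// Within the window, an input on the earbuds acts on the recent
// notification; after it, the same input runs a default action.

import Foundation

let predefinedWindow: TimeInterval = 5.0
var lastNotificationTime: Date? = nil

func handleInputOnEarbuds(now: Date = Date()) {
    if let t = lastNotificationTime, now.timeIntervalSince(t) <= predefinedWindow {
        print("First operation: act on the recent notification")
    } else {
        print("Second operation: default gesture (e.g. play/pause)")
    }
}

lastNotificationTime = Date()
handleInputOnEarbuds()                                    // first operation
handleInputOnEarbuds(now: Date().addingTimeInterval(10))  // second operation
```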
-
Publication number: 20220012002
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: September 27, 2021
Publication date: January 13, 2022
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
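A simplified Swift sketch of the two-step selection: a confirming input only selects the affordance while the gaze corresponds to it. The angular dot-product test is an assumption standing in for real gaze-to-target hit testing:

```swift
// Select an affordance only when a confirming input arrives while the
// gaze direction points at it. Geometry is deliberately simplified.

struct Affordance {
    let name: String
    let direction: (x: Double, y: Double, z: Double)  // unit vector from eye
}

func gazeIsOn(_ affordance: Affordance,
              gaze: (x: Double, y: Double, z: Double),
              minDot: Double = 0.99) -> Bool {
    // Both vectors are assumed normalized; a dot product near 1 means the
    // gaze direction and the affordance direction nearly coincide.
    let dot = gaze.x * affordance.direction.x
            + gaze.y * affordance.direction.y
            + gaze.z * affordance.direction.z
    return dot >= minDot
}

func onConfirmInput(gaze: (x: Double, y: Double, z: Double),
                    affordance: Affordance) {
    if gazeIsOn(affordance, gaze: gaze) {
        print("Selected \(affordance.name)")
    }
}

let playButton = Affordance(name: "Play", direction: (0, 0, -1))
onConfirmInput(gaze: (0.01, 0.0, -0.9999), affordance: playButton)
```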
-
Publication number: 20210397407
Abstract: A method is performed by an audio system comprising a headset. The method sends a playback signal containing user-desired audio content to drive a speaker of the headset that is being worn by a user, receives a microphone signal from a microphone that is arranged to capture sounds within an ambient environment in which the user is located, performs a speech detection algorithm upon the microphone signal to detect speech contained therein, in response to a detection of speech, determines that the user intends to engage in a conversation with a person who is located within the ambient environment, and, in response to determining that the user intends to engage in the conversation, adjusts the playback signal based on the user-desired audio content.
Type: Application
Filed: May 17, 2021
Publication date: December 23, 2021
Inventors: Christopher T. Eubank, Devin W. Chalmers, Kirill Kalinichev, Rahul Nair, Thomas G. Salter
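As a sketch, the "adjust the playback signal" step could be a gain duck gated on the conversation determination; the two boolean inputs below are stand-ins for the richer inference the abstract describes:

```swift
// Duck playback when detected speech is attributed to an intent to
// converse. The detector itself is out of scope and stubbed as booleans.

func playbackGain(speechDetected: Bool, userFacingSpeaker: Bool) -> Double {
    // The two conditions together stand in for "the user intends to
    // engage in a conversation"; a real system would weigh more context.
    (speechDetected && userFacingSpeaker) ? 0.1 : 1.0
}

print(playbackGain(speechDetected: true,  userFacingSpeaker: true))   // 0.1
print(playbackGain(speechDetected: true,  userFacingSpeaker: false))  // 1.0
```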
-
Publication number: 20210365116
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Application
Filed: August 6, 2021
Publication date: November 25, 2021
Inventors: Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Grant H. Mulliken, Holly E. Gerhard, Lilli I. Jonsson
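One illustrative reading of "a pattern is detected" is a sustained rise of pupil diameter above a baseline, as in this Swift sketch; the thresholds and the physiological interpretation are assumptions for the example only:

```swift
// Flag interest when pupil diameter rises a set fraction above its
// initial baseline and stays there for several consecutive samples.

func detectInterest(diameters: [Double],
                    baselineSamples: Int = 5,
                    riseFactor: Double = 1.15,
                    sustained: Int = 3) -> Bool {
    guard diameters.count > baselineSamples + sustained else { return false }
    let baseline = diameters.prefix(baselineSamples).reduce(0, +)
                 / Double(baselineSamples)
    // All trailing samples must exceed the dilated threshold.
    return diameters.suffix(sustained).allSatisfy { $0 >= baseline * riseFactor }
}

let samples = [3.0, 3.1, 3.0, 3.0, 2.9, 3.1, 3.6, 3.7, 3.8]  // mm
print(detectInterest(diameters: samples))  // true: initiate the interaction
```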
-
Publication number: 20210325960
Abstract: Aspects of the subject technology relate to gaze-based control of an electronic device. The gaze-based control can include enabling an option to provide user authorization when it is determined that the user has viewed and/or read text associated with a request for the user authorization. The gaze-based control can also include modifying a user interface or a user interface element based on user views and/or reads. The gaze-based control can be based on determining whether a user has viewed and/or read an electronic document and/or a physical document.
Type: Application
Filed: February 26, 2021
Publication date: October 21, 2021
Inventors: Samuel L. Iglesias, Mark A. Ebbole, Andrew P. Richardson, Tyler R. Calderone, Michael E. Buerli, Devin W. Chalmers
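The enable-after-viewing behavior can be sketched as a set of text lines the gaze has covered; the hit-testing that maps fixations to lines is abstracted away, and the class is a hypothetical illustration:

```swift
// The consent button unlocks only after gaze samples have covered every
// line of the displayed authorization request.

final class ConsentPrompt {
    private var linesViewed: Set<Int> = []
    private let totalLines: Int

    init(totalLines: Int) { self.totalLines = totalLines }

    // Called as gaze tracking attributes a fixation to a line of text.
    func gazeLanded(onLine line: Int) {
        linesViewed.insert(line)
    }

    var authorizeEnabled: Bool { linesViewed.count == totalLines }
}

let prompt = ConsentPrompt(totalLines: 3)
(0..<3).forEach { prompt.gazeLanded(onLine: $0) }
print(prompt.authorizeEnabled)  // true: the user has viewed every line
```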
-
Patent number: 11137967
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Grant
Filed: March 24, 2020
Date of Patent: October 5, 2021
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair
-
Patent number: 11132162
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Grant
Filed: March 24, 2020
Date of Patent: September 28, 2021
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
-
Patent number: 11119573
Abstract: One exemplary implementation provides an improved user experience on a device by using physiological data to initiate a user interaction for the user experience based on an identified interest or intention of a user. For example, a sensor may obtain physiological data (e.g., pupil diameter) of a user during a user experience in which content is displayed on a display. The physiological data varies over time during the user experience and a pattern is detected. The detected pattern is used to identify an interest of the user in the content or an intention of the user regarding the content. The user interaction is then initiated based on the identified interest or the identified intention.
Type: Grant
Filed: September 12, 2019
Date of Patent: September 14, 2021
Assignee: Apple Inc.
Inventors: Avi Bar-Zeev, Devin W. Chalmers, Fletcher R. Rothkopf, Grant H. Mulliken, Holly E. Gerhard, Lilli I. Jonsson
-
Publication number: 20210004133
Abstract: The present disclosure relates generally to remote touch detection. In some examples, a first electronic device obtains first image data and second image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the first image data and the second image data. In some examples, a first electronic device causes emission of infrared light by an infrared source of a second electronic device, obtains image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the image data.
Type: Application
Filed: September 22, 2020
Publication date: January 7, 2021
Inventors: Samuel L. Iglesias, Devin W. Chalmers, Rohit Sethi, Lejing Wang
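The "set of one or more criteria" over two sources of image data might be sketched as requiring that both cameras' fingertip estimates agree, as in this hypothetical Swift example:

```swift
// Treat an input as a touch only when both views' estimates put the
// fingertip near the surface with sufficient confidence. The estimates
// would come from image processing that is out of scope here.

struct FingertipEstimate {
    let heightAboveSurface: Double  // meters, from one camera's image data
    let confidence: Double          // 0...1
}

func isTouch(first: FingertipEstimate, second: FingertipEstimate,
             maxHeight: Double = 0.005, minConfidence: Double = 0.8) -> Bool {
    // The criteria: both views near the surface, both sufficiently confident.
    [first, second].allSatisfy {
        $0.heightAboveSurface <= maxHeight && $0.confidence >= minConfidence
    }
}

let a = FingertipEstimate(heightAboveSurface: 0.003, confidence: 0.92)
let b = FingertipEstimate(heightAboveSurface: 0.004, confidence: 0.88)
if isTouch(first: a, second: b) {
    print("Perform the operation for the detected touch")
}
```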
-
Patent number: 10809910
Abstract: The present disclosure relates generally to remote touch detection. In some examples, a first electronic device obtains first image data and second image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the first image data and the second image data. In some examples, a first electronic device causes emission of infrared light by an infrared source of a second electronic device, obtains image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the image data.
Type: Grant
Filed: August 28, 2019
Date of Patent: October 20, 2020
Assignee: Apple Inc.
Inventors: Samuel L. Iglesias, Devin W. Chalmers, Rohit Sethi, Lejing Wang
-
Publication number: 20200225747
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: March 24, 2020
Publication date: July 16, 2020
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair
-
Publication number: 20200225746
Abstract: In an exemplary process for interacting with user interface objects using an eye gaze, an affordance associated with a first object is displayed. A gaze direction or a gaze depth is determined. While the gaze direction or the gaze depth is determined to correspond to a gaze at the affordance, a first input representing user instruction to take action on the affordance is received, and the affordance is selected responsive to receiving the first input.
Type: Application
Filed: March 24, 2020
Publication date: July 16, 2020
Inventors: Avi Bar-Zeev, Ryan S. Burgoyne, Devin W. Chalmers, Luis R. Deliz Centeno, Rahul Nair, Timothy R. Oriol, Alexis H. Palangie
-
Publication number: 20200104194
Abstract: An electronic device is in communication with one or more wearable audio output devices. The electronic device detects occurrence of an event and outputs, via the one or more wearable audio output devices, one or more audio notifications corresponding to the event. After beginning to output the one or more audio notifications, the electronic device detects an input directed to the one or more wearable audio output devices. In response, if the input is detected within a predefined time period with respect to the one or more audio notifications corresponding to the event, the electronic device performs a first operation associated with the one or more audio notifications corresponding to the event; and, if the input is detected after the predefined time period has elapsed, the electronic device performs a second operation not associated with the one or more audio notifications corresponding to the event.
Type: Application
Filed: September 18, 2019
Publication date: April 2, 2020
Inventors: Devin W. Chalmers, Sean B. Kelly, Karlin Y. Bark
-
Publication number: 20200104025
Abstract: The present disclosure relates generally to remote touch detection. In some examples, a first electronic device obtains first image data and second image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the first image data and the second image data. In some examples, a first electronic device causes emission of infrared light by an infrared source of a second electronic device, obtains image data about an input, and performs an operation based on the input in accordance with a determination that a set of one or more criteria is met based on the image data.
Type: Application
Filed: August 28, 2019
Publication date: April 2, 2020
Inventors: Samuel L. Iglesias, Devin W. Chalmers, Rohit Sethi, Lejing Wang