Patents by Inventor Brian W. Temple
Brian W. Temple has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250111862
Abstract: In some examples, an electronic device presents, via a display, a representation of a prediction of a food being consumed by a user of the electronic device in a computer-generated environment. In some examples, the electronic device presents, via the display, an indication of possible medication non-compliance in the computer-generated environment. In some examples, the electronic device initiates a smoking detection mode in response to the acquisition and processing of data from the user of the electronic device or from the physical environment of the user of the electronic device.
Type: Application
Filed: August 13, 2024
Publication date: April 3, 2025
Inventors: Ioana NEGOITA, Brian W. TEMPLE, Ian PERRY, David LOEWENTHAL, Trent A. GREENE
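The publication describes the smoking detection mode only at the level of inputs and outcomes. Purely as an illustrative sketch, the Swift snippet below shows one way a device might gate such a mode on combined user-derived and environment-derived signals; every type, field, and threshold here is hypothetical rather than taken from the application.

```swift
import Foundation

// Hypothetical signal sources; the publication does not specify these.
struct UserSignals { var handToMouthGesture: Bool; var exhalePattern: Bool }
struct EnvironmentSignals { var smokeParticulateLevel: Double }

enum DeviceMode { case idle, smokingDetection }

/// Enters smoking-detection mode when either user-derived or
/// environment-derived evidence crosses an assumed threshold.
func updateMode(user: UserSignals, environment: EnvironmentSignals) -> DeviceMode {
    let userEvidence = user.handToMouthGesture && user.exhalePattern
    let environmentEvidence = environment.smokeParticulateLevel > 0.5  // assumed threshold
    return (userEvidence || environmentEvidence) ? .smokingDetection : .idle
}

// Example: environment data alone can initiate the mode.
print(updateMode(user: UserSignals(handToMouthGesture: false, exhalePattern: false),
                 environment: EnvironmentSignals(smokeParticulateLevel: 0.8)))
```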
-
Publication number: 20250103695
Abstract: This relates generally to systems and methods for tracking and recording pupil dilation and/or signs of unconsciousness in response to detecting specific movements of the electronic device. In some examples, the electronic device captures and tracks first biometric data including pupil dilation using one or more input devices. In some examples, in response to detecting a movement of the electronic device, such as a rapid acceleration or deceleration of the electronic device, the electronic device captures second biometric data. In some examples, the electronic device displays a virtual object while presenting an extended reality environment, such as a visual indication in response to detecting that the second biometric data meets one or more criteria based on a comparison of the second biometric data with the first biometric data. In some examples, the electronic device initiates an emergency response based on the second biometric data.
Type: Application
Filed: September 19, 2024
Publication date: March 27, 2025
Inventors: Ioana NEGOITA, Ian PERRY, Timothy PSIAKI, David LOEWENTHAL, Trent A. GREENE, Brian W. TEMPLE
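A minimal sketch of the comparison the abstract outlines, assuming the first biometric data forms a rolling pupil-diameter baseline and that a rapid acceleration or deceleration triggers capture of a second reading. The thresholds and type names are invented for illustration.

```swift
import Foundation

/// Rolling baseline of pupil diameter, standing in for the abstract's
/// "first biometric data".
struct PupilBaseline {
    private var samples: [Double] = []
    mutating func record(_ diameterMM: Double) { samples.append(diameterMM) }
    var mean: Double { samples.isEmpty ? 0 : samples.reduce(0, +) / Double(samples.count) }
}

/// Fires when a rapid acceleration/deceleration is followed by a second
/// reading that deviates from the baseline by an assumed margin.
func shouldInitiateEmergencyResponse(baseline: PupilBaseline,
                                     secondReadingMM: Double,
                                     accelerationG: Double) -> Bool {
    let rapidMovement = abs(accelerationG) > 4.0          // assumed g threshold
    let deviation = abs(secondReadingMM - baseline.mean)
    return rapidMovement && deviation > 1.5               // assumed mm threshold
}

var baseline = PupilBaseline()
for sample in [3.1, 3.0, 3.2] { baseline.record(sample) }
print(shouldInitiateEmergencyResponse(baseline: baseline,
                                      secondReadingMM: 5.2,
                                      accelerationG: 6.0))  // true
```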
-
Publication number: 20250099814
Abstract: Some examples of the disclosure are directed to systems and methods for presenting extended reality environments and, more particularly, to displaying one or more images relating to exercises in a physical environment while presenting an extended reality environment. In some situations, the electronic device detects an initiation of an exercise activity of a user of the electronic device using at least an optical sensor. In some examples, the electronic device presents a user interface including a representation of the identified exercise activity in the extended reality environment. In some examples, in response to detecting progression of the identified exercise activity, the user interface is updated with the updated representation of the exercise activity. In some examples, the electronic device presents a rest user interface during rest periods and/or after detecting rest.
Type: Application
Filed: September 23, 2024
Publication date: March 27, 2025
Inventors: Thomas G. SALTER, Christopher I. WORD, Jeffrey S. NORRIS, Ioana NEGOITA, Trent A. GREENE, Finnegan N. SINCLAIR, Brian W. TEMPLE, Ian PERRY, Michael J. ROCKWELL
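As a rough illustration of the update loop implied here, the sketch below advances a workout user interface on each detected repetition and falls back to a rest view after a period without motion. The state names, rest timeout, and rep-detection input are all assumptions.

```swift
import Foundation

enum WorkoutUIState { case activity(name: String, repsCompleted: Int), rest }

/// Updates the interface as the identified activity progresses and
/// switches to a rest view when no motion is detected for a while.
func nextUIState(current: WorkoutUIState,
                 repDetected: Bool,
                 secondsSinceLastMotion: TimeInterval) -> WorkoutUIState {
    if secondsSinceLastMotion > 10 { return .rest }       // assumed rest threshold
    switch current {
    case .activity(let name, let reps):
        return .activity(name: name, repsCompleted: reps + (repDetected ? 1 : 0))
    case .rest:
        return .activity(name: "unknown", repsCompleted: repDetected ? 1 : 0)
    }
}

let state = nextUIState(current: .activity(name: "push-up", repsCompleted: 7),
                        repDetected: true,
                        secondsSinceLastMotion: 1)
print(state)  // activity(name: "push-up", repsCompleted: 8)
```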
-
Publication number: 20250094016
Abstract: Some examples of the disclosure are directed to systems and methods for moving virtual objects in three-dimensional environments in accordance with detected movement of the electronic device. In some examples, the electronic device detects movement according to a first or second movement pattern described in more detail herein. In some examples, in response to detecting the first movement pattern, the electronic device applies a first correction factor to movement of a virtual object in the environment. In some examples, in response to detecting the second movement pattern, the electronic device applies a second correction factor to movement of a virtual object in the environment.
Type: Application
Filed: September 4, 2024
Publication date: March 20, 2025
Inventors: Ioana NEGOITA, Ian PERRY, Trent A. GREENE, Thomas J. MOORE, David LOEWENTHAL, Brian W. TEMPLE, Gregory LUTTER, Allison W. DRYER, Thomas G. SALTER
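The movement patterns and correction factors are left abstract in the publication; the following sketch shows the general shape of the logic, with invented thresholds standing in for the unspecified pattern definitions.

```swift
import Foundation

enum MovementPattern { case first, second, none }

/// Classifies device movement from a short acceleration history.
/// Thresholds are assumptions; the publication only says the patterns
/// are "described in more detail herein".
func classify(accelerations: [Double]) -> MovementPattern {
    guard let peak = accelerations.map({ abs($0) }).max() else { return .none }
    if peak > 8.0 { return .second }   // e.g., riding in a vehicle
    if peak > 2.0 { return .first }    // e.g., walking
    return .none
}

/// Damps virtual-object motion by a pattern-specific correction factor.
func correctedDelta(_ rawDelta: Double, pattern: MovementPattern) -> Double {
    switch pattern {
    case .first:  return rawDelta * 0.5   // assumed first correction factor
    case .second: return rawDelta * 0.1   // assumed second correction factor
    case .none:   return rawDelta
    }
}

print(correctedDelta(1.0, pattern: classify(accelerations: [0.5, 3.2, 1.1])))  // 0.5
```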
-
Publication number: 20250090934
Abstract: Some examples of the disclosure are directed to systems and methods for displaying one or more user interfaces based on a context of an electronic device within a physical environment. In some examples, the electronic device detects initiation of an exercise activity associated with a user of the electronic device, optionally while a computer-generated environment is presented at the electronic device. In some examples, in response to detecting the initiation of the exercise activity, the electronic device activates an exercise tracking mode of operation. In some examples, while the exercise tracking mode of operation is active, the electronic device captures one or more images of a physical environment. In some examples, in accordance with detecting, in the one or more images, a feature of the physical environment, the electronic device performs a first operation associated with the exercise tracking mode of operation.
Type: Application
Filed: September 3, 2024
Publication date: March 20, 2025
Inventors: Ioana NEGOITA, Ian PERRY, Trent A. GREENE, Brian W. TEMPLE, David LOEWENTHAL, Thomas J. MOORE, Thomas G. SALTER
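A compact sketch of the capture-and-match step: while the tracking mode is active, each captured frame is scanned for a known feature, and a matching frame triggers an associated operation. The feature label and operation below are placeholders, not details from the publication.

```swift
import Foundation

struct Frame { var detectedFeatures: Set<String> }

/// While exercise tracking is active, scans captured frames for a
/// known feature and records the operation it triggers.
func processFrames(_ frames: [Frame], trackingActive: Bool) -> [String] {
    guard trackingActive else { return [] }
    var operations: [String] = []
    for frame in frames where frame.detectedFeatures.contains("lap-marker") {
        operations.append("increment-lap-count")
    }
    return operations
}

print(processFrames([Frame(detectedFeatures: ["tree"]),
                     Frame(detectedFeatures: ["lap-marker"])],
                    trackingActive: true))  // ["increment-lap-count"]
```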
-
Publication number: 20250005873
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data in a physical environment that includes one or more objects. The method may further include detecting a reflection of a first object of the one or more objects upon a reflective surface of a reflective object based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment based on determining a 3D position of the reflection of the first object. The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
Type: Application
Filed: September 10, 2024
Publication date: January 2, 2025
Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
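One standard geometric reading of this step: if the 3D positions of an object and of its virtual image behind the mirror are both known, the reflective plane passes through their midpoint with its normal along the line joining them. The self-contained Swift sketch below encodes that construction; the vector types are defined inline, and nothing here is taken from the publication's actual implementation.

```swift
import Foundation

struct Vec3 { var x, y, z: Double }
func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
func + (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x + b.x, y: a.y + b.y, z: a.z + b.z) }
func * (a: Vec3, s: Double) -> Vec3 { Vec3(x: a.x * s, y: a.y * s, z: a.z * s) }
func dot(_ a: Vec3, _ b: Vec3) -> Double { a.x * b.x + a.y * b.y + a.z * b.z }
func length(_ a: Vec3) -> Double { sqrt(dot(a, a)) }

struct Plane { var point: Vec3; var normal: Vec3 }

/// The mirror plane bisects the segment between a real object and the
/// 3D position of its virtual image: it passes through the midpoint,
/// with a normal along the object-to-image direction.
func mirrorPlane(object: Vec3, reflectionImage: Vec3) -> Plane {
    let midpoint = (object + reflectionImage) * 0.5
    let d = reflectionImage - object
    let n = d * (1.0 / length(d))
    return Plane(point: midpoint, normal: n)
}

let plane = mirrorPlane(object: Vec3(x: 0, y: 0, z: 1),
                        reflectionImage: Vec3(x: 0, y: 0, z: -1))
print(plane)  // point at the origin, normal along -z
```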
-
Publication number: 20240338160
Abstract: Various implementations disclosed herein include devices, systems, and methods for displaying presentation notes at varying positions within a presenter's field of view. In some implementations, a device includes a display, one or more processors, and a memory. A first portion of a media content item corresponding to a presentation is displayed at a first location in a three-dimensional environment. Audience engagement data corresponding to an engagement level of a member of an audience is received. A second portion of the media content item is displayed at a second location in the three-dimensional environment. The second location is selected based on the audience engagement data.
Type: Application
Filed: August 22, 2022
Publication date: October 10, 2024
Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Brian W. Temple, Paul Ewers
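A plausible selection rule, not stated in the publication: when an audience member's engagement drops below a threshold, display the next portion of the notes near that member so the presenter's gaze travels toward them. The threshold, scale, and coordinates below are invented.

```swift
import Foundation

struct Location3D { var x, y, z: Double }

/// Chooses where to show the next portion of the notes based on a 0-1
/// engagement score for one audience member.
func nextNotesLocation(engagement: Double,
                       audienceMemberLocation: Location3D,
                       defaultLocation: Location3D) -> Location3D {
    engagement < 0.4 ? audienceMemberLocation : defaultLocation  // assumed threshold
}

let chosen = nextNotesLocation(engagement: 0.25,
                               audienceMemberLocation: Location3D(x: 1, y: 1.6, z: -4),
                               defaultLocation: Location3D(x: 0, y: 1.5, z: -1))
print(chosen)  // the location near the disengaged audience member
```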
-
Patent number: 12112441
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
Type: Grant
Filed: June 27, 2023
Date of Patent: October 8, 2024
Assignee: Apple Inc.
Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
-
Publication number: 20240248532
Abstract: In one implementation, a method for visualizing multi-modal inputs includes: displaying a first user interface element within an extended reality (XR) environment; determining a gaze direction based on first input data; in response to determining that the gaze direction is directed to the first user interface element, displaying a focus indicator with a first appearance in association with the first user interface element; detecting a change in pose of at least one of a head pose or a body pose of a user of the computing system; and, in response to detecting the change of pose, modifying the focus indicator from the first appearance to a second appearance different from the first appearance.
Type: Application
Filed: January 11, 2022
Publication date: July 25, 2024
Inventors: Thomas G. Salter, Brian W. Temple, Gregory Lutter
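The sketch below condenses the claimed flow into a small decision function: gaze on the element yields a first appearance, and a subsequent head or body pose change promotes it to a second, visually distinct appearance. The concrete appearances are placeholders of my own.

```swift
import Foundation

enum FocusAppearance { case none, ring, ringWithArrow }

struct InputState {
    var gazeOnElement: Bool
    var headPoseChanged: Bool
    var bodyPoseChanged: Bool
}

/// Gaze alone yields the first appearance; a subsequent head/body pose
/// change yields the second.
func focusIndicator(for input: InputState) -> FocusAppearance {
    guard input.gazeOnElement else { return .none }
    return (input.headPoseChanged || input.bodyPoseChanged) ? .ringWithArrow : .ring
}

print(focusIndicator(for: InputState(gazeOnElement: true,
                                     headPoseChanged: true,
                                     bodyPoseChanged: false)))  // ringWithArrow
```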
-
Publication number: 20240219998
Abstract: In one implementation, a method is provided for dynamically changing sensory and/or input modes associated with content based on a current contextual state. The method includes: while in a first contextual state, presenting extended reality (XR) content, via the display device, according to a first presentation mode and enabling a first set of input modes to be directed to the XR content; detecting a change from the first contextual state to a second contextual state; and, in response to detecting the change from the first contextual state to the second contextual state, presenting, via the display device, the XR content according to a second presentation mode different from the first presentation mode and enabling a second set of input modes to be directed to the XR content that are different from the first set of input modes.
Type: Application
Filed: July 13, 2022
Publication date: July 4, 2024
Inventors: Bryce L. Schmidtchen, Brian W. Temple, Devin W. Chalmers
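In code, this amounts to a mapping from contextual state to a (presentation mode, enabled input modes) pair. The states and pairings below are illustrative; the publication does not enumerate them.

```swift
import Foundation

enum ContextualState { case seated, walking }
enum PresentationMode { case fullImmersion, headLocked }
enum InputMode { case gaze, hand, voice }

/// Maps each contextual state to a presentation mode and the input
/// modes enabled for the XR content.
func configuration(for state: ContextualState) -> (PresentationMode, Set<InputMode>) {
    switch state {
    case .seated:  return (.fullImmersion, [.gaze, .hand, .voice])
    case .walking: return (.headLocked, [.voice])
    }
}

let (mode, inputs) = configuration(for: .walking)
print(mode, inputs)  // headLocked, voice only
```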
-
Publication number: 20240212272
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflection and determining the context associated with a use of the electronic device in the physical environment. For example, an example process may include obtaining sensor data from one or more sensors of the electronic device in a physical environment that includes one or more objects, detecting a reflected image amongst the one or more objects based on the sensor data, and in accordance with detecting the reflected image, determining a context associated with a use of the electronic device in the physical environment based on the sensor data, and presenting virtual content based on the context, wherein the virtual content is positioned at a three-dimensional (3D) location based on a 3D position of the reflected image.
Type: Application
Filed: March 8, 2024
Publication date: June 27, 2024
Inventors: Brian W. TEMPLE, Devin W. CHALMERS, Rahul NAIR, Thomas G. SALTER
-
Publication number: 20240200962
Abstract: Various implementations disclosed herein include devices, systems, and methods that provide directional awareness indicators based on context detected in a physical environment. For example, an example process may include obtaining sensor data from one or more sensors of the device in a physical environment, detecting a context associated with a use of the device in the physical environment based on the sensor data, determining whether to present a directional awareness indicator based on determining that the context represents a state in which the user would benefit from the directional awareness indicator, and in accordance with determining to present the directional awareness indicator, identifying a direction for the directional awareness indicator, wherein the direction corresponds to a cardinal direction or a direction towards an anchored location or an anchored device, and presenting the directional awareness indicator based on the identified direction.
Type: Application
Filed: March 4, 2024
Publication date: June 20, 2024
Inventors: Brian W. TEMPLE, Devin W. CHALMERS, Thomas G. SALTER, Yiqiang NIE
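For the anchored-location case, the direction reduces to an initial bearing between two coordinates. This is the standard great-circle formula, not anything specific to this publication; the coordinates in the example are invented.

```swift
import Foundation

/// Initial great-circle bearing from the user to an anchored location,
/// in degrees clockwise from north.
func bearingDegrees(fromLat: Double, fromLon: Double,
                    toLat: Double, toLon: Double) -> Double {
    let phi1 = fromLat * .pi / 180, phi2 = toLat * .pi / 180
    let dLambda = (toLon - fromLon) * .pi / 180
    let y = sin(dLambda) * cos(phi2)
    let x = cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dLambda)
    let theta = atan2(y, x) * 180 / .pi
    return (theta + 360).truncatingRemainder(dividingBy: 360)
}

// Example: roughly which way an anchored device (say, a parked car) lies.
print(bearingDegrees(fromLat: 37.33, fromLon: -122.03,
                     toLat: 37.34, toLon: -122.01))  // ~58 degrees (northeast)
```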
-
Publication number: 20240103614
Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with graphical user interfaces using gaze. In some embodiments, the present disclosure includes techniques and user interfaces for repositioning virtual objects. In some embodiments, the present disclosure includes techniques and user interfaces for transitioning modes of a camera capture user interface.
Type: Application
Filed: September 20, 2023
Publication date: March 28, 2024
Inventors: Allison W. DRYER, Giancarlo YERKES, Gregory LUTTER, Brian W. TEMPLE, Devin W. CHALMERS, Luis R. DELIZ CENTENO, Elena J. NATTINGER, Anna L. BREWER
-
Publication number: 20240005612
Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, an example process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
Type: Application
Filed: June 27, 2023
Publication date: January 4, 2024
Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
-
Publication number: 20240005921
Abstract: In one implementation, a method of changing a state of an object is performed at a device including an image sensor, one or more processors, and non-transitory memory. The method includes receiving a vocal command. The method includes obtaining, using the image sensor, an image of a physical environment. The method includes detecting, in the image of the physical environment, an object based on a visual model of the object stored in the non-transitory memory in association with an object identifier of the object. The method includes generating, based on the vocal command and detection of the object, an instruction including the object identifier of the object. The method includes effectuating the instruction to change a state of the object.
Type: Application
Filed: June 21, 2023
Publication date: January 4, 2024
Inventors: Devin W. Chalmers, Brian W. Temple, Carlo Eduardo C. Del Mundo, Harry J. Saddler, Jean-Charles Bernard Marcel Bazin
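A deliberately naive sketch of the resolution step: match words from the vocal command against labels of objects detected in the image, then emit an instruction carrying the matched object's identifier. The parsing, labels, and actions are invented for the example.

```swift
import Foundation

struct DetectedObject { var identifier: String; var label: String }
struct Instruction { var objectIdentifier: String; var action: String }

/// Resolves a vocal command like "turn on the lamp" against objects
/// detected in the camera image, producing an instruction that carries
/// the matched object's identifier.
func makeInstruction(vocalCommand: String,
                     detected: [DetectedObject]) -> Instruction? {
    let words = vocalCommand.lowercased().split(separator: " ").map(String.init)
    guard let target = detected.first(where: { words.contains($0.label) }) else {
        return nil
    }
    let action = words.contains("on") ? "power-on" : "power-off"
    return Instruction(objectIdentifier: target.identifier, action: action)
}

let lamp = DetectedObject(identifier: "obj-42", label: "lamp")
print(makeInstruction(vocalCommand: "turn on the lamp", detected: [lamp]) as Any)
// Instruction(objectIdentifier: "obj-42", action: "power-on")
```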
-
Publication number: 20230290270
Abstract: Devices, systems, and methods that facilitate learning a language in an extended reality (XR) environment. This may involve identifying objects or activities in the environment, identifying a context associated with the user or the environment, and providing language teaching content based on the objects, activities, or contexts. In one example, the language teaching content provides individual words, phrases, or sentences corresponding to the objects, activities, or contexts. In another example, the language teaching content requests user interaction (e.g., via quiz questions or educational games) corresponding to the objects, activities, or contexts. Context may be used to determine whether or how to provide the language teaching content. For example, based on a user's current course of language study (e.g., this week's vocabulary list), corresponding objects or activities may be identified in the environment for use in providing the language teaching content.
Type: Application
Filed: February 21, 2023
Publication date: September 14, 2023
Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter
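As a sketch of the vocabulary-list case, the snippet below filters objects recognized in the environment down to those on the user's current word list and pairs each with its translation. The word list and translations are invented for the example.

```swift
import Foundation

/// Keeps only the recognized objects that appear on the user's current
/// vocabulary list, pairing each with its translation.
func teachableWords(recognizedObjects: [String],
                    vocabulary: [String: String]) -> [(object: String, translation: String)] {
    recognizedObjects.compactMap { object in
        vocabulary[object].map { (object, $0) }
    }
}

let thisWeek = ["apple": "manzana", "chair": "silla"]
print(teachableWords(recognizedObjects: ["apple", "television"], vocabulary: thisWeek))
// [("apple", "manzana")]
```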
-
Publication number: 20220291743
Abstract: Various implementations disclosed herein include devices, systems, and methods that determine that a user is interested in audio content by determining that a movement (e.g., a user's head bob) has a time-based relationship with detected audio content (e.g., the beat of music playing in the background). Some implementations involve obtaining first sensor data and second sensor data corresponding to a physical environment, the first sensor data corresponding to audio in the physical environment and the second sensor data corresponding to a body movement in the physical environment. A time-based relationship between one or more elements of the audio and one or more aspects of the body movement is identified based on the first sensor data and the second sensor data. An interest in content of the audio is identified based on identifying the time-based relationship. Various actions may be performed proactively based on identifying the interest in the content.
Type: Application
Filed: March 8, 2022
Publication date: September 15, 2022
Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter
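One simple way to test such a time-based relationship, assuming beat timestamps and head-movement peak timestamps have already been extracted from the two sensor streams: compare the tempos they imply. The publication does not specify its method, and the tolerance here is invented.

```swift
import Foundation

/// Compares the tempo implied by detected audio beats with the tempo of
/// head-movement peaks; if they agree within a tolerance, infers interest.
func movementMatchesBeat(beatTimes: [Double],
                         movementPeakTimes: [Double],
                         toleranceBPM: Double = 5) -> Bool {
    func meanTempoBPM(_ times: [Double]) -> Double? {
        guard times.count > 1 else { return nil }
        let span = times.last! - times.first!
        return span > 0 ? 60.0 * Double(times.count - 1) / span : nil
    }
    guard let beatBPM = meanTempoBPM(beatTimes),
          let moveBPM = meanTempoBPM(movementPeakTimes) else { return false }
    return abs(beatBPM - moveBPM) < toleranceBPM
}

// 120 BPM music, head bobbing at roughly the same rate.
let beats = Array(stride(from: 0.0, through: 5.0, by: 0.5))
let bobs  = Array(stride(from: 0.1, through: 5.1, by: 0.5))
print(movementMatchesBeat(beatTimes: beats, movementPeakTimes: bobs))  // true
```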