Patents by Inventor Thomas G. Salter

Thomas G. Salter has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240111911
    Abstract: In one implementation, a method is performed for spatially designating private content. The method includes: presenting, via a display device, an indication of a private viewing region relative to a location of the computing system; determining a first location for presentation of graphical content; and presenting, via the display device, the graphical content at the first location. The method further includes: transmitting a characterization vector associated with the graphical content to at least one other device for display thereon according to a determination that the first location of the graphical content is outside of the private viewing region; and forgoing transmission of the characterization vector associated with the graphical content to the at least one other device according to a determination that the first location of the graphical content is inside of the private viewing region.
    Type: Application
    Filed: December 13, 2023
    Publication date: April 4, 2024
    Inventors: Bart Colin Trzynadlowski, Thomas G. Salter, Devin William Chalmers, Anshu Kameswar Chimalamarri, Gregory Patrick Lane Lutter
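
The core of the method above is a spatial gate: the content's descriptor is shared with other devices only when the content is placed outside the private viewing region. A minimal sketch of that gate, assuming a spherical region and hypothetical names (PrivateRegion, should_transmit) that do not come from the filing:

```python
import math
from dataclasses import dataclass

@dataclass
class PrivateRegion:
    """A private viewing region anchored to the presenting device (assumed spherical)."""
    center: tuple  # (x, y, z) of the device / region origin
    radius: float  # meters

def should_transmit(content_location, region: PrivateRegion) -> bool:
    """Transmit the content's characterization vector only if the content
    is placed outside the private viewing region; otherwise forgo it."""
    dx, dy, dz = (c - r for c, r in zip(content_location, region.center))
    return math.sqrt(dx * dx + dy * dy + dz * dz) > region.radius

if __name__ == "__main__":
    region = PrivateRegion(center=(0.0, 0.0, 0.0), radius=1.5)
    print(should_transmit((3.0, 0.0, 0.0), region))  # True: outside the region, so share
    print(should_transmit((0.5, 0.2, 0.0), region))  # False: inside the region, keep private
```
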
  • Publication number: 20240104849
    Abstract: In some embodiments, the present disclosure includes techniques and user interfaces for interacting with virtual objects in an extended reality environment. In some embodiments, the techniques and user interfaces support repositioning virtual objects relative to the environment. In some embodiments, the techniques and user interfaces involve virtual objects that aid a user in navigating within the environment. In some embodiments, the techniques and user interfaces involve objects that are displayed based on changes in a user's field of view and repositioned relative to the environment.
    Type: Application
    Filed: September 6, 2023
    Publication date: March 28, 2024
    Inventors: Yiqiang Nie, Giovanni Agnoli, Devin Chalmers, Allison W. Dryer, Thomas G. Salter, Giancarlo Yerkes
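
One behavior the abstract mentions is displaying and repositioning objects based on changes in the user's field of view. A very rough sketch of that idea, assuming a yaw-only 2D field-of-view test and hypothetical helpers (in_field_of_view, reposition_into_view) that are not from the filing:

```python
import math

def in_field_of_view(user_pos, user_yaw_deg, obj_pos, fov_deg=90.0):
    """Return True if obj_pos lies within the user's horizontal field of view."""
    dx, dz = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    angle_to_obj = math.degrees(math.atan2(dx, dz))
    delta = (angle_to_obj - user_yaw_deg + 180.0) % 360.0 - 180.0
    return abs(delta) <= fov_deg / 2.0

def reposition_into_view(user_pos, user_yaw_deg, obj_pos):
    """Move the object to the same distance, centered on the user's facing direction."""
    dx, dz = obj_pos[0] - user_pos[0], obj_pos[1] - user_pos[1]
    dist = math.hypot(dx, dz)
    yaw = math.radians(user_yaw_deg)
    return (user_pos[0] + dist * math.sin(yaw), user_pos[1] + dist * math.cos(yaw))

if __name__ == "__main__":
    user, yaw = (0.0, 0.0), 0.0   # user at the origin, facing +z
    obj = (2.0, -1.0)             # off to the side and behind
    if not in_field_of_view(user, yaw, obj):
        obj = reposition_into_view(user, yaw, obj)
    print(obj)
```
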
  • Publication number: 20240089695
    Abstract: A method includes obtaining an image of a machine-readable data representation that is located on a physical object using a camera of an electronic device. The machine-readable data representation includes an encoded form of a data value. The method further includes decoding the machine-readable data representation to determine the data value, whereby the data value includes a content identifier and a content source identifier. The method also includes selecting a content source based on the content source identifier, obtaining a content item and content location information based on the content identifier from the content source, determining a content position and a content orientation for the content item relative to the physical object based on the content location information, and displaying a representation of the content item using the electronic device according to the content position and the content orientation.
    Type: Application
    Filed: November 16, 2023
    Publication date: March 14, 2024
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
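
The abstract describes decoding a data value into a content identifier plus a content source identifier, then using the source identifier to choose where the content is fetched and the location information to place it on the object. A minimal sketch of that flow, with a made-up "source:content" payload layout and an illustrative registry (CONTENT_SOURCES) not taken from the filing:

```python
# Hypothetical registry mapping content-source identifiers to fetch functions.
CONTENT_SOURCES = {
    "museum": lambda content_id: {"item": f"exhibit-{content_id}", "offset": (0.0, 0.3, 0.0)},
    "retail": lambda content_id: {"item": f"product-{content_id}", "offset": (0.1, 0.0, 0.0)},
}

def decode_data_value(data_value: str):
    """Split the decoded value into (content_source_id, content_id).
    The 'source:content' layout is an assumption for illustration."""
    source_id, content_id = data_value.split(":", 1)
    return source_id, content_id

def resolve_content(data_value: str, object_position):
    source_id, content_id = decode_data_value(data_value)
    fetch = CONTENT_SOURCES[source_id]      # select a content source
    record = fetch(content_id)              # content item + content location information
    # Position the content relative to the physical object that carried the marker.
    position = tuple(p + o for p, o in zip(object_position, record["offset"]))
    return record["item"], position

if __name__ == "__main__":
    print(resolve_content("museum:42", (1.0, 0.0, 2.0)))
```
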
  • Publication number: 20240070931
    Abstract: In one implementation, a method of distributed content rendering is performed at a first device including a display, one or more processors, and non-transitory memory. The method includes determining a pose of a virtual object in a volumetric environment. The method includes generating a request for content rendering instructions based on the pose of the virtual object. The method includes sending, to a second device, the request for the content rendering instructions. The method includes receiving, from the second device, the content rendering instructions. The method includes displaying, based on the content rendering instructions, a content rendering on the virtual object.
    Type: Application
    Filed: February 25, 2022
    Publication date: February 29, 2024
    Inventors: Richard P. Lozada, Thomas G. Salter
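
The method above is a simple request/response split: the first device packages the virtual object's pose, the second device returns content rendering instructions, and the first device renders content onto the object. A minimal sketch of that exchange, with placeholder message formats and function names (build_render_request, answer_render_request) that are assumptions, not the filing's protocol:

```python
from dataclasses import dataclass

@dataclass
class Pose:
    position: tuple     # (x, y, z) of the virtual object in the volumetric environment
    orientation: tuple  # quaternion (w, x, y, z)

def build_render_request(pose: Pose) -> dict:
    """First device: package the virtual object's pose into a request."""
    return {"type": "render_request", "position": pose.position, "orientation": pose.orientation}

def answer_render_request(request: dict) -> dict:
    """Second device: return rendering instructions for that pose.
    The 'instructions' here are a stand-in dictionary, purely illustrative."""
    x, y, z = request["position"]
    return {"type": "render_instructions", "texture": "content_frame_0", "scale": max(0.25, 1.0 / (1.0 + z))}

def display_on_object(pose: Pose, instructions: dict) -> str:
    """First device: apply the instructions to render content on the virtual object."""
    return f"drawing {instructions['texture']} at {pose.position} scale {instructions['scale']:.2f}"

if __name__ == "__main__":
    pose = Pose(position=(0.0, 1.2, 3.0), orientation=(1.0, 0.0, 0.0, 0.0))
    req = build_render_request(pose)      # generated on the first device
    instr = answer_render_request(req)    # produced by the second device
    print(display_on_object(pose, instr))
```
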
  • Patent number: 11915097
    Abstract: Various implementations disclosed herein include devices, systems, and methods that provide color visual markers that include colored markings that encode data, where the colors of the colored markings are determined by scanning the visual marker itself (e.g., detecting the visual marker using a sensor of an electronic device). In some implementations, a visual marker is detected in an image of a physical environment. In some implementations, the visual marker is detected in the image by detecting a predefined shape of a first portion of the visual marker in the image. Then, a color-interpretation scheme is determined for interpreting colored markings of the visual marker that encode data by identifying a set of colors at a corresponding set of predetermined locations on the visual marker. Then, the data of the visual marker is decoded using the colored markings and the set of colors of the color-interpretation scheme.
    Type: Grant
    Filed: January 7, 2021
    Date of Patent: February 27, 2024
    Assignee: Apple Inc.
    Inventors: Mohamed Selim Ben Himane, Anselm Grundhoefer, Arun Srivatsan Rangaprasad, Jeffrey S. Norris, Paul Ewers, Scott G. Wade, Thomas G. Salter, Tom Sengelaub
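
The decoding step above amounts to reading reference colors at known positions to build a color-interpretation scheme, then classifying each data-carrying marking against those references. A minimal sketch of that nearest-color decoding, with illustrative RGB values and function names not taken from the patent:

```python
def nearest_color_index(rgb, palette):
    """Map an observed color to the index of the closest palette color (squared distance)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(palette)), key=lambda i: dist2(rgb, palette[i]))

def decode_marker(calibration_colors, markings):
    """calibration_colors: colors sampled at predetermined locations on the marker,
    defining the color-interpretation scheme (index -> symbol).
    markings: observed colors of the data-carrying markings.
    Returns the decoded symbol sequence."""
    return [nearest_color_index(c, calibration_colors) for c in markings]

if __name__ == "__main__":
    # Suppose the marker's reference patches came out slightly tinted by the scene lighting.
    scheme = [(250, 40, 30), (30, 240, 50), (40, 50, 245), (245, 240, 235)]  # ~red, green, blue, white
    observed = [(240, 60, 45), (35, 45, 230), (240, 230, 220)]
    print(decode_marker(scheme, observed))  # [0, 2, 3]
```
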
  • Publication number: 20240062485
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Application
    Filed: October 30, 2023
    Publication date: February 22, 2024
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
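
The late-stage shift described above reprojects only one masked region of the rendered image when the pose prediction is updated, leaving the other region untouched. A minimal sketch using plain 2D lists, a binary mask, and a horizontal pixel shift standing in for the pose-derived correction (all assumptions for illustration):

```python
def late_stage_shift(image, mask, shift_px):
    """Shift the mask==1 region (e.g. world-locked content) by shift_px pixels,
    then keep mask==0 pixels (e.g. head-locked UI) exactly where they were."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    # 1) shift the first region according to the updated pose prediction
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 1 and 0 <= x + shift_px < w:
                out[y][x + shift_px] = image[y][x]
    # 2) re-apply the unshifted second region on top
    for y in range(h):
        for x in range(w):
            if mask[y][x] == 0:
                out[y][x] = image[y][x]
    return out

if __name__ == "__main__":
    image = [[1, 2, 3, 4],
             [5, 6, 7, 8]]
    mask  = [[1, 1, 0, 0],   # first two columns world-locked, rest head-locked
             [1, 1, 0, 0]]
    # Suppose the second predicted pose implies the scene moved one pixel to the right.
    for row in late_stage_shift(image, mask, shift_px=1):
        print(row)
```
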
  • Patent number: 11886625
    Abstract: In one implementation, a method is performed for spatially designating private content. The method includes: presenting, via a display device, an indication of a private viewing region relative to a location of the computing system; determining a first location for presentation of graphical content; and presenting, via the display device, the graphical content at the first location. The method further includes: transmitting a characterization vector associated with the graphical content to at least one other device for display thereon according to a determination that the first location of the graphical content is outside of the private viewing region; and forgoing transmission of the characterization vector associated with the graphical content to the at least one other device according to a determination that the first location of the graphical content is inside of the private viewing region.
    Type: Grant
    Filed: December 14, 2021
    Date of Patent: January 30, 2024
    Assignee: Apple Inc.
    Inventors: Bart Colin Trzynadlowski, Thomas G. Salter, Devin William Chalmers, Anshu Kameswar Chimalamarri, Gregory Patrick Lane Lutter
  • Publication number: 20240023830
    Abstract: In one implementation, a method is performed for tiered posture awareness. The method includes: while presenting a three-dimensional (3D) environment, via the display device, obtaining head pose information for a user associated with the computing system; determining an accumulated strain value for the user based on the head pose information; and in accordance with a determination that the accumulated strain value for the user exceeds a first posture awareness threshold: determining a location for virtual content based on a height value associated with the user and a depth value associated with the 3D environment; and presenting, via the display device, the virtual content at the determined location while continuing to present the 3D environment via the display device.
    Type: Application
    Filed: May 22, 2023
    Publication date: January 25, 2024
    Inventors: Thomas G. Salter, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Edith M. Arnold, Edwin Iskandar, Ioana Negoita, James J. Dunne, Johahn Y. Leung, Karthik Jayaraman Raghuram, Matthew S. DeMers, Thomas J. Moore
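
The tiered behavior above hinges on an accumulated strain value crossing a posture-awareness threshold, after which content is placed using the user's height and the scene's depth. A minimal sketch with made-up units, a hypothetical tier-1 threshold, and illustrative placement math, none of which come from the filing:

```python
def accumulated_strain(pitch_samples_deg, neutral_deg=0.0, tolerance_deg=15.0):
    """Accumulate a strain value from head-pitch samples: only deviation beyond
    a comfortable band contributes (units are arbitrary, for illustration)."""
    return sum(max(0.0, abs(p - neutral_deg) - tolerance_deg) for p in pitch_samples_deg)

def posture_content_location(user_height_m, scene_depth_m):
    """Place the posture cue roughly at eye height and within the available depth."""
    return (0.0, 0.92 * user_height_m, min(1.5, scene_depth_m))

if __name__ == "__main__":
    samples = [5, 10, 35, 40, 42, 38]        # head pitched well past neutral for a while
    strain = accumulated_strain(samples)
    FIRST_AWARENESS_THRESHOLD = 50.0         # hypothetical tier-1 threshold
    if strain > FIRST_AWARENESS_THRESHOLD:
        print("show cue at", posture_content_location(user_height_m=1.7, scene_depth_m=3.0))
    else:
        print("keep presenting the 3D environment unchanged")
```
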
  • Publication number: 20240019928
    Abstract: Various implementations disclosed herein include devices, systems, and methods for using a gaze vector and head pose information to effectuate a user interaction with a virtual object. In some implementations, a device includes a sensor for sensing a head pose of a user, a display, one or more processors, and a memory. In various implementations, a method includes displaying a set of virtual objects. Based on a gaze vector, it is determined that a gaze of the user is directed to a first virtual object of the set of virtual objects. A head pose value corresponding to the head pose of the user is obtained. An action relative to the first virtual object is performed based on the head pose value satisfying a head pose criterion.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 18, 2024
    Inventors: Thomas G. Salter, Anshu K. Chimalamarri, Bryce L. Schmidtchen, Devin W. Chalmers, Gregory Lutter
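
The interaction above combines two signals: the gaze vector selects which virtual object is targeted, and a head-pose value (for example a nod) confirms the action. A minimal sketch of that two-step check, with a cone-angle gaze test and a pitch-delta criterion that are assumptions for illustration:

```python
import math

def gaze_target(gaze_origin, gaze_dir, objects, max_angle_deg=5.0):
    """Return the object whose direction is closest to the gaze vector, if within a cone."""
    def angle_to(obj_pos):
        v = [o - g for o, g in zip(obj_pos, gaze_origin)]
        dot = sum(a * b for a, b in zip(v, gaze_dir))
        norm = math.sqrt(sum(a * a for a in v)) * math.sqrt(sum(a * a for a in gaze_dir))
        return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))
    best = min(objects, key=lambda name: angle_to(objects[name]))
    return best if angle_to(objects[best]) <= max_angle_deg else None

def head_pose_confirms(pitch_delta_deg, threshold_deg=12.0):
    """A simple head-pose criterion: a downward nod beyond a threshold confirms selection."""
    return pitch_delta_deg >= threshold_deg

if __name__ == "__main__":
    objects = {"play_button": (0.0, 0.0, 2.0), "settings": (1.0, 0.5, 2.0)}
    target = gaze_target((0, 0, 0), (0.0, 0.0, 1.0), objects)
    if target and head_pose_confirms(pitch_delta_deg=15.0):
        print("activate", target)
```
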
  • Publication number: 20240005612
    Abstract: Various implementations disclosed herein include devices, systems, and methods that present virtual content based on detecting a reflective object and determining a three-dimensional (3D) position of the reflective object in a physical environment. For example, a process may include obtaining sensor data (e.g., image, sound, motion, etc.) from a sensor of an electronic device in a physical environment that includes one or more objects. The method may further include detecting a reflective object amongst the one or more objects based on the sensor data. The method may further include determining a 3D position of the reflective object in the physical environment (e.g., where the plane of the mirror is located). The method may further include presenting virtual content in a view of the physical environment. The virtual content may be positioned at a 3D location based on the 3D position of the reflective object.
    Type: Application
    Filed: June 27, 2023
    Publication date: January 4, 2024
    Inventors: Yutaka Yokokawa, Devin W. Chalmers, Brian W. Temple, Rahul Nair, Thomas G. Salter
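
Once the reflective object's plane has been located, virtual content can be positioned using that plane, for example by mirroring an anchor point across it. A minimal sketch of the point-reflection step only (the detection itself is not sketched), with illustrative coordinates:

```python
def reflect_point(point, plane_point, plane_normal):
    """Reflect a 3D point across a plane given by a point on it and a unit normal.
    Useful once the mirror's plane has been located in the physical environment."""
    d = sum((p - q) * n for p, q, n in zip(point, plane_point, plane_normal))
    return tuple(p - 2.0 * d * n for p, n in zip(point, plane_normal))

if __name__ == "__main__":
    mirror_point = (0.0, 1.0, 2.0)     # a point on the detected mirror plane
    mirror_normal = (0.0, 0.0, -1.0)   # unit normal facing the user
    hat_anchor = (0.1, 1.7, 0.5)       # e.g. a virtual hat above the user's head
    # Place the content's mirrored counterpart behind the mirror plane.
    print(reflect_point(hat_anchor, mirror_point, mirror_normal))
```
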
  • Patent number: 11836872
    Abstract: In one implementation, a method of performing late-stage shift is performed at a device including a display, one or more processors, and non-transitory memory. The method includes generating, based on a first predicted pose of the device for a display time period, a first image. The method includes generating a mask indicating a first region of the first image and a second region of the first image. The method includes generating a second image by shifting, based on a second predicted pose of the device for the display time period, the first region of the first image without shifting the second region of the first image. The method includes displaying, on the display at the display time period, the second image.
    Type: Grant
    Filed: February 1, 2022
    Date of Patent: December 5, 2023
    Assignee: Apple Inc.
    Inventors: Thomas G. Salter, Ganghun Kim, Ioana Negoita, Devin William Chalmers, Anshu Kameswar Chimalamarri, Thomas Justin Moore
  • Publication number: 20230377480
    Abstract: In some implementations, a method includes: while presenting a 3D environment, obtaining user profile and head pose information for a user; determining locations for visual cues within the 3D environment for a first portion of a guided stretching session based on the user profile and head pose information; presenting the visual cues at the determined locations within the 3D environment and a directional indicator; and in response to detecting a change to the head pose information: updating a location for the directional indicator based on the change to the head pose information; and in accordance with a determination that the change to the head pose information satisfies a criterion associated with a first visual cue among the visual cues, providing at least one of audio, haptic, or visual feedback indicating that the first visual cue has been completed for the first portion of the guided stretching session.
    Type: Application
    Filed: May 22, 2023
    Publication date: November 23, 2023
    Inventors: James J. Dunne, Adeeti V. Ullal, Alexander G. Bruno, Daniel M. Trietsch, Ioana Negoita, Irida Mance, Matthew S. DeMers, Thomas G. Salter
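
The session above lays out visual cues from the user's profile and head pose, then gives feedback when a head-pose change satisfies a cue's criterion. A minimal sketch, assuming cues placed on an arc at eye height and a yaw-tolerance completion test, both of which are illustrative choices rather than the filing's method:

```python
import math

def cue_positions(eye_height_m, radius_m=1.0, yaw_targets_deg=(-60, 0, 60)):
    """Lay out neck-stretch cues on an arc at eye height, at the target yaw angles."""
    return [(radius_m * math.sin(math.radians(a)), eye_height_m, radius_m * math.cos(math.radians(a)))
            for a in yaw_targets_deg]

def cue_completed(head_yaw_deg, target_yaw_deg, tolerance_deg=8.0):
    """A cue counts as completed when the head yaw comes within tolerance of the target."""
    return abs(head_yaw_deg - target_yaw_deg) <= tolerance_deg

if __name__ == "__main__":
    cues = cue_positions(eye_height_m=1.6)
    targets = (-60, 0, 60)
    head_yaw_stream = [0, -20, -45, -58, -61]   # simulated head-pose updates
    for yaw in head_yaw_stream:
        if cue_completed(yaw, targets[0]):
            print(f"haptic/audio feedback: first cue done at yaw {yaw} deg")
            break
```
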
  • Patent number: 11825375
    Abstract: A method includes determining a device location of an electronic device, transmitting a request to a content source, the request including the device location of the electronic device, and receiving, from the content source in response to the request, a content item that is associated with display location information that describes a content position for the content item relative to a physical environment. The content item is selected by the content source based on the content position for the content item being within an area that is defined based on the device location. The method also includes displaying a representation of the content item as part of a computer-generated reality scene in which the representation of the content item is positioned relative to the physical environment according to the content position for the content item from the display location information for the content item.
    Type: Grant
    Filed: November 17, 2022
    Date of Patent: November 21, 2023
    Assignee: Apple Inc.
    Inventors: David W. Padgett, Christopher D. Fu, Scott G. Wade, Paul Ewers, Ioana Negoita, Thomas G. Salter, Dhruv Aditya Govil, Dimitris Ladopoulos
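
The selection step above happens on the content-source side: the device reports its location, and the source returns only items whose content position falls inside an area defined around that location. A minimal sketch with a toy catalog and a crude latitude/longitude distance test (both assumptions for illustration):

```python
import math

# Hypothetical content catalog: each item carries a position in the physical environment.
CATALOG = [
    {"id": "cafe_menu", "position": (40.7130, -74.0060)},
    {"id": "bus_times", "position": (40.7200, -74.0100)},
]

def items_near(device_location, radius_deg=0.002):
    """Content-source side: return items whose content position falls inside an
    area defined around the reported device location (crude planar distance)."""
    lat, lon = device_location
    def close(pos):
        return math.hypot(pos[0] - lat, pos[1] - lon) <= radius_deg
    return [item for item in CATALOG if close(item["position"])]

if __name__ == "__main__":
    request = {"device_location": (40.7128, -74.0060)}   # sent by the electronic device
    response = items_near(request["device_location"])    # selected by the content source
    for item in response:
        print("display", item["id"], "anchored at", item["position"])
```
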
  • Patent number: 11822367
    Abstract: A method is performed by an audio system comprising a headset. The method sends a playback signal containing user-desired audio content to drive a speaker of the headset that is being worn by a user; receives a microphone signal from a microphone that is arranged to capture sounds within an ambient environment in which the user is located; performs a speech detection algorithm upon the microphone signal to detect speech contained therein; in response to a detection of speech, determines that the user intends to engage in a conversation with a person who is located within the ambient environment; and, in response to determining that the user intends to engage in the conversation, adjusts the playback signal based on the user-desired audio content.
    Type: Grant
    Filed: May 17, 2021
    Date of Patent: November 21, 2023
    Assignee: Apple Inc.
    Inventors: Christopher T. Eubank, Devin W. Chalmers, Kirill Kalinichev, Rahul Nair, Thomas G. Salter
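
The behavior above reduces to: detect ambient speech, infer that the wearer intends to converse, and adjust playback in a way that depends on what is playing. A minimal sketch of that decision, with a ducking heuristic that is purely illustrative and not the filing's algorithm:

```python
def adjust_playback(gain, speech_detected, user_is_speaking, music_playing):
    """Lower (duck) the playback gain when detected speech suggests the wearer
    intends to converse; the heuristic here is purely illustrative."""
    intends_conversation = speech_detected and user_is_speaking
    if intends_conversation:
        return gain * 0.2 if music_playing else 0.0   # duck music, or pause spoken content
    return gain

if __name__ == "__main__":
    gain = 1.0
    # A speech detector flagged speech in the ambient microphone signal,
    # and the wearer's own voice activity suggests an exchange is starting.
    print(adjust_playback(gain, speech_detected=True, user_is_speaking=True, music_playing=True))
```
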
  • Patent number: 11768535
    Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a first input device. The method includes detecting, via the first input device, an input directed to a first location within a physical environment. The first location is identified by an extremity tracking function based on the input. The method includes determining an interaction event based on a function of a semantic identifier that is associated with a portion of the first location. The method includes presenting computer-generated content that is a function of the interaction event.
    Type: Grant
    Filed: April 22, 2021
    Date of Patent: September 26, 2023
    Assignee: Apple Inc.
    Inventors: Bart Colin Trzynadlowski, Gregory Patrick Lane Lutter, Thomas G. Salter, Rahul Nair, Devin William Chalmers
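
The method above maps a tracked extremity location to a semantic identifier for that part of the environment, and the semantic identifier determines the interaction event. A minimal sketch with a toy semantic classifier and lookup table, both hypothetical:

```python
# Hypothetical semantic labels for regions of the physical environment.
SEMANTIC_MAP = {
    "wall": {"interaction": "place_virtual_poster"},
    "table": {"interaction": "place_board_game"},
    "window": {"interaction": "show_weather_overlay"},
}

def semantic_label_at(location):
    """Stand-in for scene understanding: classify the tracked fingertip location.
    Above 1.5 m counts as 'wall', below 0.9 m as 'table', otherwise 'window' (illustrative)."""
    x, y, z = location
    if y > 1.5:
        return "wall"
    return "table" if y < 0.9 else "window"

def interaction_event(location):
    label = semantic_label_at(location)        # semantic identifier for the location
    return SEMANTIC_MAP[label]["interaction"]  # interaction event as a function of it

if __name__ == "__main__":
    fingertip = (0.2, 0.75, 1.1)        # location identified by extremity tracking
    print(interaction_event(fingertip))  # -> place_board_game
```
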
  • Publication number: 20230297607
    Abstract: In one implementation, a method of presenting virtual content is performed by a device including an image sensor, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment. The method includes detecting, in the image of the physical environment, machine-readable content associated with an object. The method includes determining an object type of the object. The method includes obtaining virtual content based on a search query created using the machine-readable content and the object type. The method includes displaying the virtual content.
    Type: Application
    Filed: March 15, 2023
    Publication date: September 21, 2023
    Inventors: Thomas G. Salter, Christopher D. Fu, Devin W. Chalmers, Paulo R. Jansen dos Reis
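
The abstract combines what was read off the object (the machine-readable content) with what the object is (its type) to form a more specific search query. A minimal sketch of that query construction and a stand-in content lookup, with example strings that are purely illustrative:

```python
def build_search_query(machine_readable_text: str, object_type: str) -> str:
    """Combine what was read off the object with what the object is,
    so the query is more specific than either alone."""
    return f"{object_type} {machine_readable_text}".strip()

def lookup_virtual_content(query: str) -> str:
    # Illustrative lookup table in place of a real content service.
    catalog = {
        "wine bottle 2019 pinot noir": "tasting notes card",
        "book 978-0262033848": "reviews panel",
    }
    return catalog.get(query.lower(), "no virtual content found")

if __name__ == "__main__":
    query = build_search_query("2019 Pinot Noir", "wine bottle")
    print(query)                          # wine bottle 2019 Pinot Noir
    print(lookup_virtual_content(query))  # tasting notes card
```
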
  • Publication number: 20230298349
    Abstract: In one implementation, a method of displaying sports data is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment including a sporting event. The method includes detecting, in the image of the physical environment, an object. The method includes obtaining data regarding a current state of the sporting event with respect to the object. The method includes displaying, on the display in association with the physical environment, a representation of the data.
    Type: Application
    Filed: May 23, 2023
    Publication date: September 21, 2023
    Inventors: Thomas G. Salter, Brian Warren Temple
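
The display step above keys live state data to an object detected in the camera image and renders a representation alongside it. A minimal sketch with a hypothetical live-state feed and a text overlay standing in for the displayed representation:

```python
# Hypothetical live-state feed keyed by detected object (e.g. a recognized player or scoreboard).
LIVE_STATE = {
    "player_23": {"points": 31, "fouls": 2},
    "scoreboard": {"home": 87, "away": 84, "clock": "4:12"},
}

def overlay_for(detected_object: str) -> str:
    """Build the text shown in association with the detected object."""
    state = LIVE_STATE.get(detected_object)
    if state is None:
        return ""
    return ", ".join(f"{k}: {v}" for k, v in state.items())

if __name__ == "__main__":
    # Suppose the image analysis detected the scoreboard in the camera frame.
    print(overlay_for("scoreboard"))   # home: 87, away: 84, clock: 4:12
```
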
  • Publication number: 20230290270
    Abstract: Devices, systems, and methods are described that facilitate learning a language in an extended reality (XR) environment. This may involve identifying objects or activities in the environment, identifying a context associated with the user or the environment, and providing language teaching content based on the objects, activities, or contexts. In one example, the language teaching content provides individual words, phrases, or sentences corresponding to the objects, activities, or contexts. In another example, the language teaching content requests user interaction (e.g., via quiz questions or educational games) corresponding to the objects, activities, or contexts. Context may be used to determine whether or how to provide the language teaching content. For example, based on a user's current course of language study (e.g., this week's vocabulary list), corresponding objects or activities may be identified in the environment for use in providing the language teaching content.
    Type: Application
    Filed: February 21, 2023
    Publication date: September 14, 2023
    Inventors: Brian W. Temple, Devin W. Chalmers, Thomas G. Salter
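
The abstract gates language teaching content by context, for example the user's current course of study. A minimal sketch where detected objects are matched against a weekly vocabulary list (the object list, vocabulary, and phrasing are all illustrative):

```python
# Hypothetical detections and a learner's current course context.
detected_objects = ["cup", "window", "dog"]
weekly_vocabulary = {"cup": "la taza", "dog": "el perro", "chair": "la silla"}

def teaching_prompts(objects, vocabulary):
    """Only objects that appear in the user's current study list produce prompts,
    so context (this week's vocabulary) gates what is taught."""
    return [f"That is '{vocabulary[obj]}' ({obj})." for obj in objects if obj in vocabulary]

if __name__ == "__main__":
    for prompt in teaching_prompts(detected_objects, weekly_vocabulary):
        print(prompt)
```
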
  • Patent number: 11715220
    Abstract: In one implementation, a method of activating a depth sensor is performed by a device including a depth sensor including a plurality of depth sensor elements, a display, one or more processors, and non-transitory memory. The method includes obtaining content to be displayed on the display in association with a physical environment. The method includes selecting a subset of the plurality of depth sensor elements. The method includes activating the subset of the plurality of depth sensor elements to obtain a depth map of the physical environment. The method includes displaying, on the display, at least a portion of the content based on the depth map of the physical environment.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: August 1, 2023
    Inventors: Thomas G. Salter, Anshu Kameswar Chimalamarri
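
The power-saving idea above is to activate only the depth-sensor elements whose coverage overlaps the region where content will be displayed. A minimal sketch, assuming a rectangular element grid and a normalized content region, neither of which is specified by the filing:

```python
def select_sensor_elements(content_region, grid_cols=8, grid_rows=6, fov=(1.0, 1.0)):
    """Pick the subset of depth-sensor elements whose coverage cells overlap the
    normalized region [x0, y0, x1, y1] where content will be displayed."""
    x0, y0, x1, y1 = content_region
    cell_w, cell_h = fov[0] / grid_cols, fov[1] / grid_rows
    selected = []
    for r in range(grid_rows):
        for c in range(grid_cols):
            cx0, cy0 = c * cell_w, r * cell_h
            if cx0 < x1 and cx0 + cell_w > x0 and cy0 < y1 and cy0 + cell_h > y0:
                selected.append((r, c))
    return selected

if __name__ == "__main__":
    # Content will occupy the lower-left quarter of the view; only about a quarter of
    # the elements need to be powered on to get the depth map for that area.
    subset = select_sensor_elements((0.0, 0.5, 0.5, 1.0))
    print(len(subset), "of", 8 * 6, "elements activated")
```
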
  • Patent number: 11688168
    Abstract: In one implementation, a method of displaying sports data is performed by a device including an image sensor, a display, one or more processors, and non-transitory memory. The method includes obtaining, using the image sensor, an image of a physical environment including a sporting event. The method includes detecting, in the image of the physical environment, an object. The method includes obtaining data regarding a current state of the sporting event with respect to the object. The method includes displaying, on the display in association with the physical environment, a representation of the data.
    Type: Grant
    Filed: July 19, 2021
    Date of Patent: June 27, 2023
    Assignee: Apple Inc.
    Inventors: Thomas G. Salter, Brian Warren Temple