Patents by Inventor Aaron Faucher

Aaron Faucher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12242666
    Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in the multiple modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input. A minimal code sketch of this multi-modal indicator control appears after this listing.
    Type: Grant
    Filed: April 8, 2022
    Date of Patent: March 4, 2025
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Roger Ibars Martinez, Johnathon Simmons, Pol Pla I Conesa, Nathan Aschenbach, Aaron Faucher, Chris Rojas, Emron Jackson Henry, Bryan Sparks
  • Publication number: 20240338086
    Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time. A minimal code sketch of this gaze-plus-trigger selection appears after this listing.
    Type: Application
    Filed: June 18, 2024
    Publication date: October 10, 2024
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Aaron Faucher, Pol Pla I Conesa, Daniel Rosas, Nathan Aschenbach
  • Patent number: 12093462
    Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time.
    Type: Grant
    Filed: April 11, 2022
    Date of Patent: September 17, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Aaron Faucher, Pol Pla I Conesa, Daniel Rosas, Nathan Aschenbach
  • Patent number: 11995776
    Abstract: Extended reality interactions include capturing, with a first device, video of a first user and conveying it to a second, heterogeneous device. A 3D mesh is received by the first device from the second device for rendering an extended reality environment, which is simultaneously displayed on the second device. Video of a second user and pose transforms for compositing the video of the second user in the extended reality environment displayed on the first device are received. A view perspective of the video of the second user composited in the extended reality environment is based on the pose transforms. Input to the first device changes the view perspective. View perspective data is conveyed from the first device to the second device that causes a corresponding change in view perspective of the video of the first user composited in the extended reality environment simultaneously displayed on the second device. A minimal sketch of this message exchange appears after this listing.
    Type: Grant
    Filed: June 25, 2021
    Date of Patent: May 28, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Edgar Charles Evangelista, Aaron Faucher, Jaehyun Kim, Andrew R McHugh
  • Publication number: 20240104870
    Abstract: In some implementations, the disclosed systems and methods can detect an interaction with respect to a set of virtual objects, which can start with a particular gesture, and take an action with respect to one or more virtual objects based on a further interaction (e.g., holding the gesture for a particular amount of time, moving the gesture in a particular direction, releasing the gesture, etc.). In some implementations, the disclosed systems and methods can automatically review a 3D video to determine a depicted user or avatar movement pattern (e.g., dance moves, repair procedure, playing an instrument, etc.). In some implementations, the disclosed systems and methods can allow the gesture to include a flat hand with the user's thumb next to the palm, with the gesture oriented toward the user's face. A minimal code sketch of this gesture dispatch appears after this listing.
    Type: Application
    Filed: December 7, 2023
    Publication date: March 28, 2024
    Inventors: Anna Fuste Lleixa, Pol Pla I Conesa, Daniel Rosas, Aaron Faucher, Roger Ibars Martinez, Nathan Aschenbach, Hae Jin Lee, Jing Ma, Ana Garcia Puyol, Amber Choo
  • Patent number: 11907521
    Abstract: A calling method can include determining, with a device, a position of a nearby device in response to detecting a signal transmitted from the nearby device and capturing, with a camera of the device, an image of an area near the device. Responsive to identifying an image of a call candidate appearing within the image of the area, a position of the call candidate can be determined from the image. The position of the call candidate can be correlated with the position of the nearby device based on proximity. Information associated with the call candidate can be retrieved based on the correlating. Based on the information retrieved, a visual identifier token corresponding to the call candidate can be generated. The visual identifier token can be presented on a display of the device and can be used by the user to initiate a call between the device and the nearby device. A minimal code sketch of the candidate-to-device correlation appears after this listing.
    Type: Grant
    Filed: October 11, 2021
    Date of Patent: February 20, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Sylvia Leung, Aaron Faucher, Jaehyun Kim
  • Publication number: 20230324992
    Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in one or more of the modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
    Type: Application
    Filed: March 27, 2023
    Publication date: October 12, 2023
    Applicant: Meta Platforms Technologies, LLC
    Inventors: Roger Ibars Martinez, Johnathon Simmons, Pol Pla I Conesa, Nathan Aschenbach, Aaron Faucher, Chris Rojas, Emron Jackson Henry, Bryan Sparks
  • Publication number: 20230324997
    Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 12, 2023
    Inventors: Aaron Faucher, Pol Pla I Conesa, Daniel Rosas, Nathan Aschenbach
  • Publication number: 20230324986
    Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in the multiple modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
    Type: Application
    Filed: April 8, 2022
    Publication date: October 12, 2023
    Inventors: Roger Ibars Martinez, Johnathon Simmons, Pol Pla I Conesa, Nathan Aschenbach, Aaron Faucher, Chris Rojas, Emron Jackson Henry, Bryan Sparks
  • Publication number: 20220236846
    Abstract: A calling method can include determining, with a device, a position of a nearby device in response to detecting a signal transmitted from the nearby device and capturing, with a camera of the device, an image of an area near the device. Responsive to identifying an image of a call candidate appearing within the image of the area, a position of the call candidate can be determined from the image. The position of the call candidate can be correlated with the position of the nearby device based on proximity. Information associated with the call candidate can be retrieved based on the correlating. Based on the information retrieved, a visual identifier token corresponding to the call candidate can be generated. The visual identifier token can be presented on a display of the device and can be used by the user to initiate a call between the device and the nearby device.
    Type: Application
    Filed: October 11, 2021
    Publication date: July 28, 2022
    Inventors: Sylvia Leung, Aaron Faucher, Jaehyun Kim
  • Publication number: 20220230399
    Abstract: Extended reality interactions include capturing, with a first device, video of a first user and conveying it to a second, heterogeneous device. A 3D mesh is received by the first device from the second device for rendering an extended reality environment, which is simultaneously displayed on the second device. Video of a second user and pose transforms for compositing the video of the second user in the extended reality environment displayed on the first device are received. A view perspective of the video of the second user composited in the extended reality environment is based on the pose transforms. Input to the first device changes the view perspective. View perspective data is conveyed from the first device to the second device that causes a corresponding change in view perspective of the video of the first user composited in the extended reality environment simultaneously displayed on the second device.
    Type: Application
    Filed: June 25, 2021
    Publication date: July 21, 2022
    Inventors: Edgar Charles Evangelista, Aaron Faucher, Jaehyun Kim, Andrew R McHugh
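
The sketches below restate each distinct invention above as a short, self-contained Python fragment. Patent 12242666 (and the related publications 20230324992 and 20230324986) describes a virtual keyboard driven by two modalities: gaze positions an indicator coarsely, controller trackpad input refines it, and an interface manager resolves the character under the indicator. This is a minimal sketch, not the patented implementation; the class and method names and the QWERTY grid are assumptions introduced for illustration.

```python
from dataclasses import dataclass

# Assumed key layout: a plain QWERTY grid of characters arranged as elements.
KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

@dataclass
class IndicatorState:
    row: int = 0
    col: int = 0

class InterfaceManager:  # hypothetical name, not from the patent
    """Blends two input modalities to drive one indicator: gaze snaps
    it to a key, trackpad swipes nudge it, and a trackpad press
    resolves the character selection under it."""

    def __init__(self) -> None:
        self.indicator = IndicatorState()

    def on_gaze(self, row: int, col: int) -> None:
        # First modality: gaze coarsely positions the indicator,
        # clamped to the keyboard grid.
        self.indicator.row = max(0, min(row, len(KEY_ROWS) - 1))
        self.indicator.col = max(0, min(col, len(KEY_ROWS[self.indicator.row]) - 1))

    def on_trackpad_move(self, d_row: int, d_col: int) -> None:
        # Second modality: trackpad input refines the indicator key by key.
        self.on_gaze(self.indicator.row + d_row, self.indicator.col + d_col)

    def on_trackpad_press(self) -> str:
        # Resolve the character selection under the current indicator.
        return KEY_ROWS[self.indicator.row][self.indicator.col]
```

For example, gazing at the "g" key (`on_gaze(1, 4)`), swiping one key to the right (`on_trackpad_move(0, 1)`), and pressing would resolve "h".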
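
Patent 12093462 and publications 20240338086 and 20230324997 pair continuous gaze tracking with a hand-motion trigger: the selection resolves to whichever key the gaze rested on at the instant the hand motion meets the trigger criterion. The sketch below assumes the criterion is a thumb-to-index pinch with an invented distance threshold; the patent does not prescribe these specifics.

```python
import math
from typing import Optional

PINCH_THRESHOLD_M = 0.02  # assumed thumb-to-index distance, in meters

class GazeTriggeredKeyboard:  # hypothetical name
    """Resolves a key selection from gaze at the moment a pinch begins."""

    def __init__(self) -> None:
        self._gazed_key: Optional[str] = None
        self._was_pinched = False

    def on_gaze_sample(self, key_under_gaze: str) -> None:
        # First modality: continuously track which key the gaze rests on.
        self._gazed_key = key_under_gaze

    def on_hand_sample(self, thumb_tip, index_tip) -> Optional[str]:
        # Second modality: the trigger criterion here is the pinch
        # beginning, i.e. fingertip distance falling below the threshold.
        is_pinched = math.dist(thumb_tip, index_tip) < PINCH_THRESHOLD_M
        selected = self._gazed_key if is_pinched and not self._was_pinched else None
        self._was_pinched = is_pinched
        return selected  # the key gazed at the trigger instant, or None
```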
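
Patent 11995776 and publication 20220230399 describe two heterogeneous devices exchanging user video, a 3D mesh, pose transforms, and view-perspective updates so each side composites the other user's video consistently. The sketch below only shapes that message exchange; all class names, fields, and the `send` callback are assumptions.

```python
from dataclasses import dataclass

@dataclass
class MeshMessage:        # second device -> first device, for rendering
    vertices: list
    faces: list

@dataclass
class VideoFrame:         # streamed in both directions
    user_id: str
    rgb_bytes: bytes

@dataclass
class PoseTransform:      # how to composite the remote user's video
    matrix: list          # e.g. a flattened 4x4 transform

@dataclass
class ViewPerspective:    # conveyed whenever local input changes the view
    yaw: float
    pitch: float
    distance: float

class XRSession:
    """One side of the exchange; `send` conveys a message to the peer."""

    def __init__(self, send) -> None:
        self.send = send
        self.remote_view = ViewPerspective(0.0, 0.0, 1.5)

    def on_local_input(self, yaw: float, pitch: float, distance: float) -> None:
        # Local input changes our view of the peer's composited video,
        # and the same data is conveyed so the peer makes the
        # corresponding change for our video in its environment.
        self.send(ViewPerspective(yaw, pitch, distance))

    def on_peer_view(self, view: ViewPerspective) -> None:
        # Re-composite the peer's video from the updated perspective.
        self.remote_view = view
```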
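
Publication 20240104870 takes different actions on virtual objects depending on what follows a start gesture: holding it for a time, moving it in a direction, or releasing it. A sketch of that dispatch as a small state machine follows; the dwell-time and displacement thresholds and the action names are assumptions.

```python
import time

HOLD_SECONDS = 1.0   # assumed dwell time that counts as a "hold"
MOVE_EPSILON = 0.05  # assumed displacement that counts as a "move"

class GestureInteraction:  # hypothetical name
    """Maps the interaction that follows a detected start gesture
    (hold, directional move, or release) to distinct actions."""

    def __init__(self, on_action) -> None:
        self.on_action = on_action  # callback receiving the action name
        self._start_t = None
        self._start_pos = None

    def gesture_started(self, pos) -> None:
        self._start_t, self._start_pos = time.monotonic(), pos

    def gesture_updated(self, pos) -> None:
        if self._start_t is None:
            return
        dx = pos[0] - self._start_pos[0]
        if abs(dx) > MOVE_EPSILON:
            self._fire("move_right" if dx > 0 else "move_left")
        elif time.monotonic() - self._start_t > HOLD_SECONDS:
            self._fire("hold")

    def gesture_released(self, pos) -> None:
        if self._start_t is not None:
            self._fire("release")

    def _fire(self, action: str) -> None:
        self._start_t = self._start_pos = None  # one action per gesture
        self.on_action(action)
```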
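
Patent 11907521 and publication 20220236846 correlate call candidates seen by the camera with nearby devices located via their signals, matching by spatial proximity before generating a visual identifier token for each match. A sketch of the proximity-matching step, assuming 2D positions in a shared frame and an invented distance cutoff:

```python
import math

def correlate_candidates(candidates, devices, max_distance=0.5):
    """Pair each call candidate seen in the camera image with the nearest
    device located from its signal, if one lies within max_distance.

    candidates: iterable of (candidate_id, (x, y)) image-derived positions
    devices:    iterable of (device_id, (x, y)) signal-derived positions
    Returns a list of (candidate_id, device_id) matches."""
    matches = []
    for cid, cpos in candidates:
        best_id, best_d = None, max_distance
        for did, dpos in devices:
            d = math.dist(cpos, dpos)
            if d < best_d:
                best_id, best_d = did, d
        if best_id is not None:
            matches.append((cid, best_id))
    return matches
```

Each returned pair would then seed a visual identifier token (e.g., a name badge rendered over the candidate) that the user can select to initiate the call.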