Patents by Inventor Aaron Faucher
Aaron Faucher has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12242666
Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in the multiple modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
Type: Grant
Filed: April 8, 2022
Date of Patent: March 4, 2025
Assignee: Meta Platforms Technologies, LLC
Inventors: Roger Ibars Martinez, Johnathon Simmons, Pol Pla I Conesa, Nathan Aschenbach, Aaron Faucher, Chris Rojas, Emron Jackson Henry, Bryan Sparks
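The abstract describes a coarse-plus-fine scheme: gaze places a selection indicator on the virtual keyboard, and a second modality (trackpad touch) refines it and confirms the selection. Below is a minimal sketch of that idea; the class, key layout, and method names are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass

KEY_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]

@dataclass
class Indicator:
    row: int = 0
    col: int = 0

class InterfaceManager:
    """Illustrative two-modality controller: gaze snaps the indicator to a
    key, trackpad deltas nudge it, and a trackpad press resolves the key."""

    def __init__(self):
        self.indicator = Indicator()

    def on_gaze(self, row: int, col: int) -> None:
        # First modality: gaze sets a coarse indicator position.
        self.indicator.row = max(0, min(row, len(KEY_ROWS) - 1))
        self.indicator.col = max(0, min(col, len(KEY_ROWS[self.indicator.row]) - 1))

    def on_trackpad_swipe(self, d_row: int, d_col: int) -> None:
        # Second modality: touch input refines the gaze-derived position.
        self.on_gaze(self.indicator.row + d_row, self.indicator.col + d_col)

    def on_trackpad_press(self) -> str:
        # Resolve the character selection at the indicator's current position.
        return KEY_ROWS[self.indicator.row][self.indicator.col]

mgr = InterfaceManager()
mgr.on_gaze(1, 4)                # user looks near "g"
mgr.on_trackpad_swipe(0, 1)      # small touch correction over to "h"
print(mgr.on_trackpad_press())   # -> "h"
```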
-
Publication number: 20240338086
Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time.
Type: Application
Filed: June 18, 2024
Publication date: October 10, 2024
Applicant: Meta Platforms Technologies, LLC
Inventors: Aaron FAUCHER, Pol PLA I CONESA, Daniel ROSAS, Nathan ASCHENBACH
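This abstract resolves the keyboard selection from wherever the user is gazing at the instant the hand motion satisfies the trigger criterion. A hedged sketch follows; the pinch-distance criterion and its threshold are assumptions standing in for whatever trigger the actual system uses.

```python
from dataclasses import dataclass

PINCH_THRESHOLD = 0.015  # meters between thumb and index tips; assumed value

@dataclass
class Frame:
    t: float               # timestamp in seconds
    gaze_key: str          # key the gaze ray currently intersects
    pinch_distance: float  # thumb-to-index distance from hand tracking

def resolve_selections(frames: list[Frame]) -> list[str]:
    """Emit one selection per trigger onset, using the gaze target
    sampled at the moment the trigger criterion is first met."""
    selections = []
    was_pinched = False
    for frame in frames:
        is_pinched = frame.pinch_distance < PINCH_THRESHOLD
        if is_pinched and not was_pinched:
            # Trigger criterion met at this point in time: resolve the
            # selection from the tracked gaze at the same instant.
            selections.append(frame.gaze_key)
        was_pinched = is_pinched
    return selections

frames = [
    Frame(0.00, "h", 0.040),
    Frame(0.05, "h", 0.012),  # pinch onset while gazing at "h"
    Frame(0.10, "i", 0.010),  # still pinched; no new selection
    Frame(0.15, "i", 0.050),
    Frame(0.20, "i", 0.011),  # second pinch while gazing at "i"
]
print(resolve_selections(frames))  # -> ['h', 'i']
```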
-
Patent number: 12093462
Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time.
Type: Grant
Filed: April 11, 2022
Date of Patent: September 17, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Aaron Faucher, Pol Pla I Conesa, Daniel Rosas, Nathan Aschenbach
-
Patent number: 11995776
Abstract: Extended reality interactions include capturing, with a first device, video of a first user and conveying same to a second, heterogeneous device. A 3D mesh is received by the first device from the second device for rendering an extended reality environment, which is simultaneously displayed on the second device. Video of a second user and pose transforms for compositing the video of the second user in the extended reality environment displayed on the first device are received. A view perspective of the video of the second user composited in the extended reality environment is based on the pose transforms. Input to the first device changes the view perspective. View perspective data is conveyed from the first device to the second device that causes a corresponding change in view perspective of the video of the first user composited in the extended reality environment simultaneously displayed on the second device.
Type: Grant
Filed: June 25, 2021
Date of Patent: May 28, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Edgar Charles Evangelista, Aaron Faucher, Jaehyun Kim, Andrew R McHugh
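The mechanism hinges on exchanging pose transforms so each device can composite the other user's video at a consistent view perspective within the shared 3D mesh. A rough sketch of the geometry, assuming standard 4x4 homogeneous transforms; the helper names and values are illustrative, not taken from the patent.

```python
import numpy as np

def pose_transform(yaw_deg: float, position: tuple[float, float, float]) -> np.ndarray:
    """Build a 4x4 homogeneous transform: rotation about Y, then translation."""
    yaw = np.radians(yaw_deg)
    c, s = np.cos(yaw), np.sin(yaw)
    m = np.eye(4)
    m[:3, :3] = [[c, 0, s], [0, 1, 0], [-s, 0, c]]
    m[:3, 3] = position
    return m

# Device B sends a pose transform placing user B's video quad in the shared
# environment; device A composites the quad by transforming its corner points.
quad_local = np.array([  # unit quad in the video's local space (homogeneous)
    [-0.5, -0.5, 0, 1], [0.5, -0.5, 0, 1],
    [0.5, 0.5, 0, 1], [-0.5, 0.5, 0, 1],
]).T

remote_pose = pose_transform(yaw_deg=30.0, position=(0.0, 1.6, -2.0))
quad_world = remote_pose @ quad_local  # where device A draws user B's video
print(np.round(quad_world.T, 3))

# When input on device A changes the view perspective, A conveys view
# perspective data so B updates how user A's video is composited on its side.
view_update = {"yaw_deg": -15.0, "position": (0.2, 1.6, -2.0)}
print(view_update)
```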
-
Publication number: 20240104870
Abstract: In some implementations, the disclosed systems and methods can detect an interaction with respect to a set of virtual objects, which can start with a particular gesture, and take an action with respect to one or more virtual objects based on a further interaction (e.g., holding the gesture for a particular amount of time, moving the gesture in a particular direction, releasing the gesture, etc.). In some implementations, the disclosed systems and methods can automatically review a 3D video to determine a depicted user or avatar movement pattern (e.g., dance moves, repair procedure, playing an instrument, etc.). In some implementations, the disclosed systems and methods can allow the gesture to include a flat hand with the user's thumb next to the palm, with the gesture toward the user's face.
Type: Application
Filed: December 7, 2023
Publication date: March 28, 2024
Inventors: Anna FUSTE LLEIXA, Pol PLA I CONESA, Daniel ROSAS, Aaron FAUCHER, Roger IBARS MARTINEZ, Nathan ASCHENBACH, Hae Jin LEE, Jing MA, Ana GARCIA PUYOL, Amber CHOO
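The first interaction described (start with a particular gesture, then act on hold, move, or release) maps naturally onto a small state machine. The sketch below is one plausible reading; the thresholds and action names are invented for illustration.

```python
import time

HOLD_SECONDS = 1.0   # assumed hold threshold
MOVE_METERS = 0.10   # assumed displacement threshold

class GestureInteraction:
    """Tracks one recognized gesture (e.g., a flat hand with the thumb next
    to the palm) and maps hold / move / release continuations to actions."""

    def __init__(self, start_pos, started_at=None):
        self.start_pos = start_pos
        self.started_at = started_at if started_at is not None else time.monotonic()

    def update(self, pos, now=None) -> str | None:
        now = now if now is not None else time.monotonic()
        dist = sum((a - b) ** 2 for a, b in zip(pos, self.start_pos)) ** 0.5
        if dist > MOVE_METERS:
            return "move_virtual_object"   # gesture moved in a direction
        if now - self.started_at > HOLD_SECONDS:
            return "open_context_menu"     # gesture held in place
        return None                        # no further interaction yet

    def release(self) -> str:
        return "select_virtual_object"     # gesture released

g = GestureInteraction(start_pos=(0.0, 0.0, 0.0), started_at=0.0)
print(g.update((0.0, 0.02, 0.0), now=0.3))  # None: neither held nor moved yet
print(g.update((0.15, 0.0, 0.0), now=0.5))  # move_virtual_object
print(g.release())                          # select_virtual_object
```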
-
Patent number: 11907521
Abstract: A calling method can include determining, with a device, a position of a nearby device in response to detecting a signal transmitted from the nearby device and capturing, with a camera of the device, an image of an area near the device. Responsive to identifying an image of a call candidate appearing within the image of the area, a position of the call candidate can be determined from the image. The position of the call candidate can be correlated with the position of the nearby device based on proximity. Information associated with the call candidate can be retrieved based on the correlating. Based on the information retrieved, a visual identifier token corresponding to the call candidate can be generated. The visual identifier token can be presented on a display of the device and can be used to initiate a call between the device and the nearby device.
Type: Grant
Filed: October 11, 2021
Date of Patent: February 20, 2024
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sylvia Leung, Aaron Faucher, Jaehyun Kim
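The core step correlates two independent position estimates by proximity: where a call candidate appears in the camera image versus where a nearby device's transmitted signal places it. A sketch of that matching step, assuming both estimates are reduced to a bearing angle from the device; the identifiers, tolerance, and token fields are hypothetical.

```python
MAX_BEARING_ERROR_DEG = 10.0  # assumed tolerance for "proximity"

def correlate(call_candidates: dict[str, float],
              nearby_devices: dict[str, float]) -> dict[str, str]:
    """Match each call candidate (face id -> bearing from the camera image)
    to the nearby device (device id -> bearing from its signal) whose
    position estimate is closest, within a tolerance."""
    matches = {}
    for face_id, face_bearing in call_candidates.items():
        device_id, error = min(
            ((d, abs(b - face_bearing)) for d, b in nearby_devices.items()),
            key=lambda pair: pair[1],
        )
        if error <= MAX_BEARING_ERROR_DEG:
            matches[face_id] = device_id
    return matches

def visual_identifier_token(face_id: str, device_id: str,
                            directory: dict[str, str]) -> dict:
    # Retrieve info for the correlated device and build the on-screen token
    # the user taps to initiate a call with the nearby device.
    return {"anchor": face_id, "label": directory[device_id], "call_target": device_id}

candidates = {"face_0": 12.0, "face_1": -40.0}            # bearings in degrees
devices = {"dev_a": 10.5, "dev_b": -38.0, "dev_c": 95.0}
matched = correlate(candidates, devices)
print(matched)  # -> {'face_0': 'dev_a', 'face_1': 'dev_b'}
print(visual_identifier_token("face_0", matched["face_0"],
                              {"dev_a": "Sylvia", "dev_b": "Jae"}))
```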
-
Publication number: 20230324992
Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in one or more of the modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
Type: Application
Filed: March 27, 2023
Publication date: October 12, 2023
Applicant: Meta Platforms Technologies, LLC
Inventors: Roger IBARS MARTINEZ, Johnathon SIMMONS, Pol PLA I CONESA, Nathan ASCHENBACH, Aaron FAUCHER, Chris ROJAS, Emron Jackson HENRY, Bryan SPARKS
-
Publication number: 20230324997
Abstract: Aspects of the present disclosure are directed to triggering virtual keyboard selections using multiple input modalities. An interface manager can display an interface, such as a virtual keyboard, to a user in an artificial reality environment. Implementations of the interface manager can track user eye gaze input and user hand input (e.g., hand or finger motion). The interface manager can resolve a character selection on the virtual keyboard according to the tracked user gaze input based on detection that the user's hand motion meets a trigger criterion. For example, the interface manager can: detect that the tracked user hand motion meets the trigger criterion at a given point in time; and resolve a selection from the virtual keyboard (e.g., selection of a displayed character) according to the tracked user gaze on the virtual keyboard at the given point in time.
Type: Application
Filed: April 11, 2022
Publication date: October 12, 2023
Inventors: Aaron FAUCHER, Pol PLA I CONESA, Daniel ROSAS, Nathan ASCHENBACH
-
Publication number: 20230324986
Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in the multiple modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
Type: Application
Filed: April 8, 2022
Publication date: October 12, 2023
Inventors: Roger IBARS MARTINEZ, Johnathon SIMMONS, Pol PLA I CONESA, Nathan ASCHENBACH, Aaron FAUCHER, Chris ROJAS, Emron Jackson HENRY, Bryan SPARKS
-
Publication number: 20220236846
Abstract: A calling method can include determining, with a device, a position of a nearby device in response to detecting a signal transmitted from the nearby device and capturing, with a camera of the device, an image of an area near the device. Responsive to identifying an image of a call candidate appearing within the image of the area, a position of the call candidate can be determined from the image. The position of the call candidate can be correlated with the position of the nearby device based on proximity. Information associated with the call candidate can be retrieved based on the correlating. Based on the information retrieved, a visual identifier token corresponding to the call candidate can be generated. The visual identifier token can be presented on a display of the device and can be used to initiate a call between the device and the nearby device.
Type: Application
Filed: October 11, 2021
Publication date: July 28, 2022
Inventors: Sylvia Leung, Aaron Faucher, Jaehyun Kim
-
Publication number: 20220230399
Abstract: Extended reality interactions include capturing, with a first device, video of a first user and conveying same to a second, heterogeneous device. A 3D mesh is received by the first device from the second device for rendering an extended reality environment, which is simultaneously displayed on the second device. Video of a second user and pose transforms for compositing the video of the second user in the extended reality environment displayed on the first device are received. A view perspective of the video of the second user composited in the extended reality environment is based on the pose transforms. Input to the first device changes the view perspective. View perspective data is conveyed from the first device to the second device that causes a corresponding change in view perspective of the video of the first user composited in the extended reality environment simultaneously displayed on the second device.
Type: Application
Filed: June 25, 2021
Publication date: July 21, 2022
Inventors: Edgar Charles Evangelista, Aaron Faucher, Jaehyun Kim, Andrew R McHugh