Patents by Inventor Chris Rojas
Chris Rojas has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11917011
Abstract: A method by a rendering device includes receiving a request to render multiple surfaces corresponding to multiple virtual objects to be concurrently displayed on an augmented-reality (AR) headset. The AR headset is connected to the rendering device via a wireless link. In response to a determination that a network quality of the wireless link is below a threshold condition, the method further includes selecting a first subset of the multiple surfaces that are higher priority than a second subset of the multiple surfaces. The method includes transmitting the first subset of surfaces to the AR headset for display, and transmitting the second subset of surfaces to the AR headset for display after transmitting the first subset. The method includes rendering the surfaces in accordance with a set of rendering parameters so as to satisfy one or more network constraints.
Type: Grant
Filed: January 10, 2022
Date of Patent: February 27, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Zhiqing Rao, Eugene Gorbatov, Chris Rojas, Dong Zheng, Cheng Chang, Yuting Fan
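The abstract above describes splitting surfaces into priority subsets and transmitting the higher-priority subset first when link quality degrades. A minimal illustrative sketch of that scheduling idea, assuming a simple numeric priority per surface and a scalar link-quality measure (all names and thresholds here are assumptions, not the patented implementation):

```python
from dataclasses import dataclass


@dataclass
class Surface:
    name: str
    priority: int  # higher value = more important to display promptly


def schedule_transmission(surfaces, link_quality, quality_threshold, high_priority_cutoff):
    """Return surfaces grouped into ordered transmission batches.

    If link quality meets the threshold, everything goes in one batch.
    Otherwise, surfaces at or above the priority cutoff are sent first,
    and the remainder are deferred to a second batch.
    """
    if link_quality >= quality_threshold:
        return [list(surfaces)]
    first = [s for s in surfaces if s.priority >= high_priority_cutoff]
    second = [s for s in surfaces if s.priority < high_priority_cutoff]
    return [first, second]
```

For example, a heads-up display surface would land in the first batch while decorative surfaces wait, only when the link is degraded.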
-
Publication number: 20230324992
Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in one or more of the modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
Type: Application
Filed: March 27, 2023
Publication date: October 12, 2023
Applicant: Meta Platforms Technologies, LLC
Inventors: Roger IBARS MARTINEZ, Johnathon SIMMONS, Pol PLA I CONESA, Nathan ASCHENBACH, Aaron FAUCHER, Chris ROJAS, Emron Jackson HENRY, Bryan SPARKS
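The interface manager described above combines a coarse modality (gaze) with a fine one (trackpad touch) to position an indicator and commit a character. A toy sketch of that combination, assuming gaze selects a keyboard row and trackpad swipes move the indicator within it (the class and method names are illustrative, not from the filing):

```python
KEYS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]


class InterfaceManager:
    """Fuse gaze (coarse) and trackpad (fine) input to select a key."""

    def __init__(self):
        self.row = 0
        self.col = 0

    def on_gaze(self, row):
        # Gaze picks which keyboard row the user is looking at.
        self.row = max(0, min(len(KEYS) - 1, row))
        self.col = min(self.col, len(KEYS[self.row]) - 1)

    def on_trackpad(self, dcol):
        # A trackpad swipe nudges the indicator within the current row.
        self.col = max(0, min(len(KEYS[self.row]) - 1, self.col + dcol))

    def on_tap(self):
        # A tap commits the character under the indicator.
        return KEYS[self.row][self.col]
```

The design point is that neither modality alone needs to be precise: gaze narrows the search space cheaply, and touch refines within it.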
-
Publication number: 20230325967
Abstract: In some implementations, the disclosed systems and methods can provide a set of triggers that cause a degradation system to degrade output graphics. In further implementations, the disclosed systems and methods can display a virtual object in an artificial reality environment to a user, where the displayed virtual object corresponds to the captured video. In yet further implementations, the disclosed systems and methods can evaluate one or more contexts corresponding to activity of a user while using the artificial reality device to view images.
Type: Application
Filed: June 16, 2023
Publication date: October 12, 2023
Applicant: Meta Platforms Technologies, LLC
Inventors: Chris ROJAS, Joshuah VINCENT, Paul Armistead HOOVER
-
Publication number: 20230324986
Abstract: Aspects of the disclosure are directed to an interface for receiving input using multiple modalities in an artificial reality environment. The interface can be a virtual keyboard displayed in an artificial reality environment that includes characters arranged as elements. Implementations include an artificial reality device/system for displaying the artificial reality environment and receiving user input in a first modality, and a controller device for receiving user input in an additional input modality. For example, the artificial reality system can be configured to receive user gaze input as a first input modality and the controller device can be configured to receive input in a second modality, such as touch input received at a trackpad. An interface manager can process input in the multiple modalities to control an indicator on the virtual interface. The interface manager can also resolve character selections from the virtual interface according to the input.
Type: Application
Filed: April 8, 2022
Publication date: October 12, 2023
Inventors: Roger IBARS MARTINEZ, Johnathon SIMMONS, Pol PLA I CONESA, Nathan ASCHENBACH, Aaron FAUCHER, Chris ROJAS, Emron Jackson HENRY, Bryan SPARKS
-
Publication number: 20230224369
Abstract: A method by a rendering device includes receiving a request to render multiple surfaces corresponding to multiple virtual objects to be concurrently displayed on an augmented-reality (AR) headset. The AR headset is connected to the rendering device via a wireless link. In response to a determination that a network quality of the wireless link is below a threshold condition, the method further includes selecting a first subset of the multiple surfaces that are higher priority than a second subset of the multiple surfaces. The method includes transmitting the first subset of surfaces to the AR headset for display, and transmitting the second subset of surfaces to the AR headset for display after transmitting the first subset. The method includes rendering the surfaces in accordance with a set of rendering parameters so as to satisfy one or more network constraints.
Type: Application
Filed: January 10, 2022
Publication date: July 13, 2023
Inventors: Zhiqing Rao, Eugene Gorbatov, Chris Rojas, Dong Zheng, Cheng Chang, Yuting Fan
-
Publication number: 20220139041
Abstract: In some implementations, the disclosed systems and methods can automatically generate seller listing titles and descriptions for products; set a follow-me mode for various virtual objects, causing the virtual objects to be displayed as world-locked or body-locked in response to a current mode for the virtual objects and the location of the user of the XR device in relation to various anchor points for the virtual objects; create and/or apply XR profiles that specify one or more triggers for one or more effects that are applied to a user when the triggers are satisfied; and/or enable addition of external content in 3D applications.
Type: Application
Filed: January 19, 2022
Publication date: May 5, 2022
Inventors: Ziliu Li, Gagneet Singh Mac, Annika Rodrigues, Camila Cortes De Almeida E De Vincenzo, Cody Char, Yeliz Karadayi, Chris Rojas, Jenna Velez, Jenny Kam, Vaishali Parekh, Aaron Draczynski, John Meurer, Jr., Jonathan Koehmstedt, Jiakang Lu, Iurii Ashaiev
-
Patent number: 11294475
Abstract: Embodiments described herein disclose methods and systems directed to input mode selection in artificial reality. In some implementations, various input modes enable a user to perform precise interactions with a target object without occluding the target object. Some input modes can include rays that extend along a line that intersects an origin point, a control point, and an interaction point. An interaction model can specify when the system switches between input modes, such as modes based solely on gaze, using long or short ray input, or with direct interaction between the user's hand(s) and objects. These transitions can be performed by evaluating rules that take context factors such as whether a user's hands are in view of the user, what posture the hands are in, whether a target object is selected, and whether a target object is within a threshold distance from the user.
Type: Grant
Filed: February 8, 2021
Date of Patent: April 5, 2022
Assignee: Facebook Technologies, LLC
Inventors: Etienne Pinchon, Jennifer Lynn Spurlock, Nathan Aschenbach, Gerrit Hendrik Hofmeester, Roger Ibars Martinez, Christopher Alan Baker, Chris Rojas
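The interaction model above switches modes by evaluating rules over context factors (hands in view, hand posture, target selection, target distance). A minimal rule-evaluation sketch, assuming invented mode names and a made-up distance threshold purely for illustration:

```python
def select_input_mode(hands_in_view, hand_in_point_posture,
                      target_selected, target_distance,
                      near_threshold=0.6):
    """Pick an interaction mode from context factors (illustrative rules).

    Rules are checked in priority order: no visible hands falls back to
    gaze; a selected, nearby target allows direct hand interaction; a
    pointing posture yields a short or long ray depending on distance.
    """
    if not hands_in_view:
        return "gaze"
    if target_selected and target_distance <= near_threshold:
        return "direct-touch"
    if hand_in_point_posture:
        return "long-ray" if target_distance > near_threshold else "short-ray"
    return "gaze"
```

Ordering the rules by priority keeps each transition explainable: the first rule whose context factors match wins.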
-
Patent number: 9494415
Abstract: In embodiments, apparatuses, methods and storage media for human-computer interaction are described. In embodiments, an apparatus may include one or more light sources and a camera. Through capture of images by the camera, the computing device may detect positions of objects of a user, within a three-dimensional (3-D) interaction region within which to track positions of the objects of the user. The apparatus may utilize multiple light sources, which may be disposed at different distances to the display and may illuminate the objects in a direction other than the image capture direction. The apparatus may selectively illuminate individual light sources to facilitate detection of the objects in the direction toward the display. The camera may also capture images in synchronization with the selective illumination. Other embodiments may be described and claimed.
Type: Grant
Filed: November 7, 2013
Date of Patent: November 15, 2016
Assignee: Intel Corporation
Inventors: John N. Sweetser, Anders Grunnet-Jepsen, Paul Winer, Leonid M. Keselman, Steven S. Bateman, Chris Rojas, Akihiro Takagi, Chandrika Jayant
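The abstract above describes lighting individual sources one at a time and capturing camera frames in synchronization with that selective illumination. A simple control-loop sketch of the idea, using a stubbed light-source class and a caller-supplied capture function (all names here are assumptions for illustration, not the patented hardware interface):

```python
class LightSource:
    """Stub for a controllable illuminator."""

    def __init__(self, name):
        self.name = name
        self.lit = False

    def on(self):
        self.lit = True

    def off(self):
        self.lit = False


def capture_with_selective_illumination(sources, capture):
    """Illuminate one source at a time and grab a frame while it is lit.

    Returns (source name, frame) pairs, one per source, so later
    processing knows which illumination geometry produced each frame.
    """
    frames = []
    for src in sources:
        src.on()
        try:
            frames.append((src.name, capture(src)))
        finally:
            src.off()  # guarantee the source is extinguished before the next one
    return frames
```

Tagging each frame with its source is what lets downstream detection exploit the differing illumination directions.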
-
Publication number: 20150131852
Abstract: In embodiments, apparatuses, methods and storage media for human-computer interaction are described. In embodiments, an apparatus may include one or more light sources and a camera. Through capture of images by the camera, the computing device may detect positions of objects of a user, within a three-dimensional (3-D) interaction region within which to track positions of the objects of the user. The apparatus may utilize multiple light sources, which may be disposed at different distances to the display and may illuminate the objects in a direction other than the image capture direction. The apparatus may selectively illuminate individual light sources to facilitate detection of the objects in the direction toward the display. The camera may also capture images in synchronization with the selective illumination. Other embodiments may be described and claimed.
Type: Application
Filed: November 7, 2013
Publication date: May 14, 2015
Inventors: John N. Sweetser, Anders Grunnet-Jepsen, Paul Winer, Leonid M. Keselman, Steven S. Bateman, Chris Rojas, Akihiro Takagi, Chandrika Jayant