Patents by Inventor Evgenii Krivoruchko
Evgenii Krivoruchko has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240152245
Abstract: A computer system displays a first object that includes at least a first portion of the first object and a second portion of the first object and detects a first gaze input that meets first criteria, wherein the first criteria require that the first gaze input is directed to the first portion of the first object in order for the first criteria to be met. In response, the computer system displays a first control element that corresponds to a first operation associated with the first object, wherein the first control element was not displayed prior to detecting that the first gaze input met the first criteria, and detects a first user input directed to the first control element. In response to detecting the first user input directed to the first control element, the computer system performs the first operation with respect to the first object.
Type: Application
Filed: September 21, 2023
Publication date: May 9, 2024
Inventors: Lee S. Broughton, Israel Pastrana Vicente, Matan Stauber, Miquel Estany Rodriguez, James J. Owen, Jonathan R. Dascola, Stephen O. Lemay, Christian Schnorr, Zoey C. Taylor, Jay Moon, Benjamin H. Boesel, Benjamin Hylak, Richard D. Lyons, William A. Sorrentino, III, Lynn I. Streja, Jonathan Ravasz, Nathan Gitter, Peter D. Anton, Michael J. Rockwell, Peter L. Hajas, Evgenii Krivoruchko, Mark A. Ebbole, James Magahern, Andrew J. Sawyer, Christopher D. McKenzie, Michael E. Buerli, Olivier D. R. Gutknecht
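The gaze-then-reveal flow this abstract describes can be pictured as a simple dwell timer. The sketch below is illustrative only, not the patented implementation; the class name, dwell threshold, and update loop are all hypothetical:

```python
from dataclasses import dataclass

DWELL_SECONDS = 0.5  # hypothetical dwell threshold, not from the patent


@dataclass
class RevealOnGaze:
    """Shows a control element once gaze has dwelt on a target region."""
    dwell: float = 0.0
    control_visible: bool = False

    def update(self, gaze_on_target: bool, dt: float) -> None:
        """Advance the dwell timer by dt seconds for one frame of gaze data."""
        if gaze_on_target:
            self.dwell += dt
            if self.dwell >= DWELL_SECONDS:
                self.control_visible = True   # criteria met: reveal the control
        else:
            self.dwell = 0.0
            self.control_visible = False      # hide again when gaze leaves

    def activate(self) -> bool:
        """True if an input directed at the (visible) control should trigger
        the associated operation."""
        return self.control_visible
```

A host app would call `update` each frame with its own gaze-hit test result, then route taps or pinches through `activate`.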
-
Patent number: 11947111
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Grant
Filed: September 2, 2022
Date of Patent: April 2, 2024
Assignee: Meta Platforms Technologies, LLC
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
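One common way to realize the kind of "projection" interaction this abstract describes is a ray cast from the user's hand or gaze toward distant objects. The sketch below is a hypothetical nearest-hit picker for illustration only; the function name, scene representation, and hit radius are invented, not taken from the patent:

```python
import math


def ray_pick(origin, direction, objects, radius=0.5):
    """Return the id of the nearest object whose center lies within `radius`
    of the ray cast from `origin` along unit vector `direction`, or None.

    `objects` maps object ids to 3D center points.
    """
    best, best_t = None, math.inf
    for obj_id, center in objects.items():
        v = [c - o for c, o in zip(center, origin)]        # origin -> center
        t = sum(vi * di for vi, di in zip(v, direction))   # distance along ray
        if t < 0:
            continue                                       # behind the user
        closest = [o + t * d for o, d in zip(origin, direction)]
        if math.dist(closest, center) <= radius and t < best_t:
            best, best_t = obj_id, t
    return best
```

Disambiguation between near and far candidates here is simply "closest hit along the ray wins"; a fuller system would also weigh angular distance and gesture context.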
-
Publication number: 20240103701
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel Pastrana Vicente, Evgenii Krivoruchko, Kristi E. Bauerly, Lorena S. Pazmino
-
Publication number: 20240103684
Abstract: In some embodiments, a computer system displays a virtual surface for containing one or more virtual objects in a three-dimensional environment. In some embodiments, a computer system automatically resizes virtual surfaces that contain objects. In some embodiments, a computer system displays feedback related to removal and/or addition of objects to virtual surfaces.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Agatha Y. Yu, Benjamin H. Boesel, Stephen O. Lemay, Matthew J. Sundstrom, Matan Stauber, Evgenii Krivoruchko, Zoey C. Taylor
-
Publication number: 20240103687
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel Pastrana Vicente, Evgenii Krivoruchko, Kristi E. Bauerly, Zoey C. Taylor
-
Publication number: 20240103803
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Evgenii Krivoruchko, Kristi E. Bauerly, Zoey C. Taylor, Miquel Estany Rodriguez, James J. Owen, Jose Antonio Checa Oloriz, Jay Moon, Pedro Mari, Lorena S. Pazmino
-
Publication number: 20240103704
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel Pastrana Vicente, Evgenii Krivoruchko, Kristi E. Bauerly, Zoey C. Taylor
-
Publication number: 20240103676
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel Pastrana Vicente, Danielle M. Price, Evgenii Krivoruchko, Jonathan R. Dascola, Jonathan Ravasz, Marcos Alonso, Hugo D. Verweij, Zoey C. Taylor, Lorena S. Pazmino
-
Publication number: 20240103716
Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
Type: Application
Filed: September 22, 2023
Publication date: March 28, 2024
Inventors: Israel Pastrana Vicente, Evgenii Krivoruchko, Kristi E. Bauerly
-
Publication number: 20230334808
Abstract: In some embodiments, a computer system performs different object selection-related operations. In some embodiments, a computer system places objects at locations in a displayed region based on attention of a user. In some embodiments, a computer system displays a container virtual object with curvature in a three-dimensional environment.
Type: Application
Filed: January 12, 2023
Publication date: October 19, 2023
Inventors: Matthew J. Sundstrom, Matan Stauber, Evgenii Krivoruchko, Zoey C. Taylor
-
Publication number: 20230315247
Abstract: A computer system displays a first user interface object at a first position in a three-dimensional environment that has a first spatial arrangement relative to a respective portion of a user. While displaying the first user interface object, the computer system detects an input that corresponds to movement of a viewpoint of the user, and in response, maintains display of the first user interface object at a respective position in the three-dimensional environment having the first spatial arrangement relative to the respective portion of the user. While displaying the first user interface object, the computer system detects a first gaze input directed to the first user interface object, and in response, in accordance with a determination that the first gaze input satisfies attention criteria with respect to the first user interface object, displays a plurality of affordances for accessing system functions of the computer system.
Type: Application
Filed: February 15, 2023
Publication date: October 5, 2023
Inventors: Israel Pastrana Vicente, Jonathan R. Dascola, Stephen O. Lemay, Christopher D. McKenzie, Jay Moon, Jesse Chand, Dorian D. Dargan, Amy E. DeDonato, Matan Stauber, Lorena S. Pazmino, Evgenii Krivoruchko
-
Publication number: 20230259265
Abstract: In some embodiments, a computer system scrolls scrollable content in response to a variety of user inputs. In some embodiments, a computer system enters text into a text entry field in response to voice inputs. In some embodiments, a computer system facilitates interactions with a soft keyboard. In some embodiments, a computer system facilitates interactions with a cursor. In some embodiments, a computer system facilitates deletion of text. In some embodiments, a computer system facilitates interactions with hardware input devices.
Type: Application
Filed: January 3, 2023
Publication date: August 17, 2023
Inventors: Evgenii Krivoruchko, Kristi E. Bauerly, Stephen O. Lemay, Israel Pastrana Vicente
-
Publication number: 20230114043
Abstract: A computer system concurrently displays a view of a physical environment; and a computer-generated user interface element overlaid on the view of the physical environment. An appearance of the computer-generated user interface element is based on an appearance of the view of the physical environment on which the computer-generated user interface element is overlaid. In response to an appearance of the physical environment changing, the appearance of the computer-generated user interface element is updated at a first time based on a graphical composition of the appearance of one or more portions of the physical environment at different times prior to the first time, including: an appearance of a first portion of the physical environment at a second time that is before the first time; and an appearance of a second portion of the physical environment at a third time that is before the second time.
Type: Application
Filed: September 21, 2022
Publication date: April 13, 2023
Inventors: Wan Si Wan, Gregory M. Apodaca, William A. Sorrentino, III, Miquel Estany Rodriguez, James J. Owen, Pol Pla I. Conesa, Alan C. Dye, Stephen O. Lemay, Richard D. Lyons, Israel Pastrana Vicente, Evgenii Krivoruchko, Giancarlo Yerkes
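One plausible way to build the "graphical composition of portions of the physical environment at different times" that this abstract describes is an exponential moving average over background samples, so older content fades out gradually. The sketch below is an assumption-laden illustration, not the patented method; the function name, blend factor, and flat pixel-buffer format are all invented:

```python
def composite_background(samples, alpha=0.6):
    """Blend same-size per-pixel luminance buffers, ordered oldest -> newest.

    Each newer sample contributes with weight `alpha` (hypothetical value),
    so earlier appearances of the environment fade out gradually rather
    than disappearing all at once.
    """
    acc = list(samples[0])
    for frame in samples[1:]:
        acc = [(1 - alpha) * a + alpha * f for a, f in zip(acc, frame)]
    return acc
```

The blended result could then tint or light the overlaid UI element so it visually tracks the scene behind it with a slight temporal lag.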
-
Publication number: 20230095282
Abstract: In one implementation, a method displays a first pairing affordance that is world-locked to a first peripheral device. The method may be performed by an electronic device including a non-transitory memory, one or more processors, a display, and one or more input devices. The method includes detecting the first peripheral device within a three-dimensional (3D) environment via a computer vision technique. The method includes receiving, via the one or more input devices, a first user input that is directed to the first peripheral device within the 3D environment. The method includes, in response to receiving the first user input, displaying, on the display, the first pairing affordance that is world-locked to the first peripheral device within the 3D environment.
Type: Application
Filed: September 22, 2022
Publication date: March 30, 2023
Inventors: Benjamin R. Blachnitzky, Aaron M. Burns, Anette L. Freiin von Kapri, Arun Rakesh Yoganandan, Benjamin H. Boesel, Evgenii Krivoruchko, Jonathan Ravasz, Shih-Sang Chiu
-
Publication number: 20230092874
Abstract: A computer system detects whether the user satisfies attention criteria with respect to a first user interface object displayed in a first view of a three-dimensional environment. In response to detecting that the user does not satisfy the attention criteria with respect to the first user interface object, the computer system displays the first user interface object with a modified appearance. The computer system detects a first movement of a viewpoint of the user relative to a physical environment and detects that the user satisfies the attention criteria with respect to the first user interface object. In response, the computer system displays the first user interface object in a second view of the three-dimensional environment, including displaying the first user interface object with an appearance that emphasizes the first user interface object more than when the first user interface object was displayed with the modified appearance.
Type: Application
Filed: September 19, 2022
Publication date: March 23, 2023
Inventors: Evgenii Krivoruchko, Israel Pastrana Vicente, Stephen O. Lemay, Christopher D. McKenzie
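The emphasize/de-emphasize behavior this abstract describes resembles a small hysteresis state machine: dim when attention criteria fail, restore once attention is sustained. The sketch below is hypothetical; the opacity values, attention window, and class structure are assumptions, not from the patent:

```python
from dataclasses import dataclass


@dataclass
class EmphasisController:
    """Dims an object while attention criteria are unmet; restores full
    emphasis once attention has been sustained for `restore_after` seconds."""
    restore_after: float = 0.3   # hypothetical hysteresis window (seconds)
    dim_opacity: float = 0.4     # hypothetical "modified appearance"
    emphasized: bool = True
    attended: float = 0.0

    def update(self, attention_met: bool, dt: float) -> float:
        """Advance by dt seconds; return the opacity to render the object at."""
        if attention_met:
            self.attended += dt
            if self.attended >= self.restore_after:
                self.emphasized = True   # sustained attention: re-emphasize
        else:
            self.attended = 0.0
            self.emphasized = False      # modified (dimmed) appearance
        return 1.0 if self.emphasized else self.dim_opacity
```

The brief window before re-emphasizing avoids flicker when gaze merely sweeps across the object while the viewpoint moves.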
-
Publication number: 20220414999
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: September 2, 2022
Publication date: December 29, 2022
Applicant: Meta Platforms Technologies, LLC
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Patent number: 11468644
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Grant
Filed: September 9, 2020
Date of Patent: October 11, 2022
Assignee: Meta Platforms Technologies, LLC
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Publication number: 20220317776
Abstract: In some embodiments, an electronic device displays virtual objects on a user interface element. In some embodiments, an electronic device displays indications of a content entry tool. In some embodiments, an electronic device provides for interactions with virtual objects in a three-dimensional environment. In some embodiments, an electronic device facilitates marking input associated with a three-dimensional object. In some embodiments, an electronic device indicates a boundary of a marking canvas or volume for creating marks associated with an object in a three-dimensional environment.
Type: Application
Filed: March 17, 2022
Publication date: October 6, 2022
Inventors: Matthew J. Sundstrom, Matan Stauber, Evgenii Krivoruchko
-
Publication number: 20220130121
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Application
Filed: January 10, 2022
Publication date: April 28, 2022
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
-
Patent number: 11257295
Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
Type: Grant
Filed: March 24, 2021
Date of Patent: February 22, 2022
Assignee: Facebook Technologies, LLC
Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko