Patents by Inventor Jonathan Ravasz

Jonathan Ravasz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210090331
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
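The projection-based selection this abstract describes — casting a projection toward a distant object and disambiguating between nearby candidates — can be sketched roughly as follows. This is an illustrative sketch only: the function name, the angular-threshold approach, and the 5-degree default are assumptions, not details from the filing.

```python
import math

def select_distant_object(origin, direction, objects, max_angle_deg=5.0):
    """Pick the object whose center lies closest to the projection ray.

    origin, direction: 3-tuples describing the ray; objects: dict mapping
    name -> center point. Returns the name of the candidate within the
    angular threshold that is nearest the ray, or None if nothing qualifies.
    """
    def norm(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    d = norm(direction)
    best, best_angle = None, max_angle_deg
    for name, center in objects.items():
        to_obj = norm(tuple(c - o for c, o in zip(center, origin)))
        cos_a = max(-1.0, min(1.0, sum(a * b for a, b in zip(d, to_obj))))
        angle = math.degrees(math.acos(cos_a))
        if angle < best_angle:
            # Disambiguation: keep only the candidate closest to the ray.
            best, best_angle = name, angle
    return best
```

Scoring by angular distance rather than exact ray intersection is what lets a user select small or partially occluded distant objects without pixel-perfect aim.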
  • Publication number: 20210090341
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Application
    Filed: September 9, 2020
    Publication date: March 25, 2021
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Publication number: 20210090332
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Application
    Filed: September 20, 2019
    Publication date: March 25, 2021
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Patent number: 10955929
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs the artificial reality content. The artificial reality system identifies, from the image data, a gesture comprising a motion of a first digit of a hand and a second digit of the hand to form a pinching configuration a particular number of times within a threshold amount of time. The artificial reality system assigns one or more input characters to one or more of a plurality of digits of the hand and processes a selection of a first input character of the one or more input characters assigned to the second digit of the hand in response to the identified gesture.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: March 23, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
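The core trigger in this abstract — a pinch formed a particular number of times within a threshold amount of time — reduces to a sliding-window count over pinch events. The sketch below assumes pinch timestamps have already been extracted from the tracked hand; the function name and window logic are illustrative, not from the patent.

```python
def detect_multi_pinch(pinch_timestamps, required_count, window_s):
    """Return True if `required_count` pinch events fall within any
    window of `window_s` seconds.

    pinch_timestamps: iterable of event times in seconds. In the described
    system these events would come from detecting the first and second
    digits of a hand forming a pinching configuration in captured images.
    """
    ts = sorted(pinch_timestamps)
    for i in range(len(ts) - required_count + 1):
        # Window check: span of `required_count` consecutive events.
        if ts[i + required_count - 1] - ts[i] <= window_s:
            return True
    return False
```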
  • Patent number: 10921879
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content, the artificial reality content including an assistant element. The gesture detector identifies, from the image data, a gesture that includes a gripping motion of two or more digits of a hand to form a gripping configuration at a location that corresponds to the assistant element, and subsequent to the gripping motion, a throwing motion of the hand with respect to the assistant element. The UI engine generates a UI element in response to identifying the gesture.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: February 16, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Patent number: 10890983
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned. The implementation of the artificial reality system does not require use of both hands or use of other input devices in order to interact with the artificial reality system.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: January 12, 2021
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200387213
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content, the artificial reality content including an assistant element. The gesture detector identifies, from the image data, a gesture that includes a gripping motion of two or more digits of a hand to form a gripping configuration at a location that corresponds to the assistant element, and subsequent to the gripping motion, a throwing motion of the hand with respect to the assistant element. The UI engine generates a UI element in response to identifying the gesture.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200388247
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device captures image data representative of a physical environment. The HMD outputs artificial reality content. The gesture detector identifies, from the image data, a gesture including a configuration of a hand that is substantially stationary for at least a threshold period of time and positioned such that an index finger and a thumb of the hand form approximately a right angle. The UI engine generates a UI element in response to the identified gesture. The rendering engine renders the UI element as an overlay to the artificial reality content.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
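The trigger in this abstract combines two tests: the hand is substantially stationary for a threshold period, and the index finger and thumb form approximately a right angle. A minimal sketch of that check, assuming finger directions are already available from hand tracking; the 0.5 s hold time and 15-degree tolerance are illustrative guesses, not values from the filing:

```python
import math

def is_corner_gesture(index_dir, thumb_dir, stationary_s,
                      hold_s=0.5, tol_deg=15.0):
    """True when the hand has been stationary for at least `hold_s` seconds
    and the index and thumb direction vectors form approximately a right
    angle (within `tol_deg` degrees).
    """
    dot = sum(a * b for a, b in zip(index_dir, thumb_dir))
    mag = (math.sqrt(sum(a * a for a in index_dir))
           * math.sqrt(sum(b * b for b in thumb_dir)))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot / mag))))
    return stationary_s >= hold_s and abs(angle - 90.0) <= tol_deg
```

Requiring both stillness and a distinctive hand shape is a common way to keep incidental hand motion from spuriously spawning UI elements.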
  • Publication number: 20200387214
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment, renders artificial reality content and a virtual keyboard with a plurality of virtual keys as an overlay to the artificial reality content, and outputs the artificial reality content and the virtual keyboard. The artificial reality system identifies, from the image data, a gesture comprising a first digit of a hand being brought in contact with a second digit of the hand, wherein a point of the contact corresponds to a location of a first virtual key of the plurality of virtual keys of the virtual keyboard. The artificial reality system processes a selection of the first virtual key in response to the identified gesture.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200387229
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system captures image data representative of a physical environment and outputs the artificial reality content. The artificial reality system identifies, from the image data, a gesture comprising a motion of a first digit of a hand and a second digit of the hand to form a pinching configuration a particular number of times within a threshold amount of time. The artificial reality system assigns one or more input characters to one or more of a plurality of digits of the hand and processes a selection of a first input character of the one or more input characters assigned to the second digit of the hand in response to the identified gesture.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200387286
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The image capture device is configured to capture image data representative of a physical environment. The HMD outputs artificial reality content. The gesture detector is configured to identify, from the image data, a gesture including a configuration of a wrist that is substantially stationary for at least a threshold period of time and positioned such that a normal from the wrist is facing the HMD. The UI engine is configured to generate a UI element in response to the identified gesture. The rendering engine renders the UI element overlaid on an image of the wrist.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200387228
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system can include a menu that can be activated and interacted with using one hand. In response to detecting a menu activation gesture performed using one hand, the artificial reality system can cause a menu to be rendered. A menu sliding gesture (e.g., horizontal motion) of the hand can be used to cause a slidably engageable user interface (UI) element to move along a horizontal dimension of the menu while horizontal positioning of the UI menu is held constant. Motion of the hand orthogonal to the menu sliding gesture (e.g., non-horizontal motion) can cause the menu to be repositioned.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Publication number: 20200387287
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. In one example, an artificial reality system comprises an image capture device configured to capture image data representative of a physical environment; a head-mounted display (HMD) configured to output artificial reality content; a gesture detector configured to identify, from the image data, a gesture comprising a motion of two fingers from a hand to form a pinching configuration and a subsequent pulling motion while in the pinching configuration; a user interface (UI) engine configured to generate a UI input element in response to identifying the gesture; and a rendering engine configured to render the UI input element as an overlay to at least some of the artificial reality content.
    Type: Application
    Filed: June 7, 2019
    Publication date: December 10, 2020
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Patent number: 10852839
    Abstract: An artificial reality system is described that renders, presents, and controls user interface elements within an artificial reality environment, and performs actions in response to one or more detected gestures of the user. The artificial reality system includes an image capture device, a head-mounted display (HMD), a gesture detector, a user interface (UI) engine, and a rendering engine. The HMD outputs artificial reality content, the artificial reality content including a display element that appears superimposed on and attached to an arm. The gesture detector identifies, from image data captured by the image capture device, a gesture that includes a gripping motion of a hand with respect to the display element. The UI engine is configured to, in response to the identification of the gesture, (i) update the display element to appear detached from and separate from the arm, and (ii) generate a UI element that appears detached from the arm.
    Type: Grant
    Filed: June 7, 2019
    Date of Patent: December 1, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Jasper Stevens, Adam Tibor Varga, Etienne Pinchon, Simon Charles Tickner, Jennifer Lynn Spurlock, Kyle Eric Sorge-Toomey, Robert Ellis, Barrett Fox
  • Patent number: 10802600
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Grant
    Filed: September 20, 2019
    Date of Patent: October 13, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Patent number: 10747302
    Abstract: In one embodiment, a method includes displaying a horizontal screen visible to a user through a display, determining a horizontal distance between a position of the user and the horizontal screen, determining a vertical distance between a controller associated with the user and the horizontal screen, creating an interaction screen, where the interaction screen and the horizontal screen intersect at a closest point, where the interaction screen is tilted toward the user from the horizontal screen by an angle, detecting a first event that a ray cast from a virtual representation of the controller hits a first point on the interaction screen, translating the first event to a second event that the ray cast hits a second point on the horizontal screen, and displaying a curved line from the controller to the second point on the horizontal screen that is visible to the user through the display.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: August 18, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: William Arthur Hugh Steptoe, Jonathan Ravasz, Michael James LeBeau
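The translation step in this abstract — mapping a ray hit on the tilted interaction screen to a corresponding point on the horizontal screen — can be read as a distance-preserving rotation of the hit point about the line where the two planes meet. The sketch below encodes that reading; the coordinate convention and the interpretation of the mapping are assumptions, since the abstract does not publish the exact formula.

```python
import math

def unfold_to_horizontal(hit, hinge_z, tilt_deg):
    """Rotate a hit point on the tilted interaction plane down onto the
    horizontal screen, about the line where the two planes intersect.

    Coordinates: x is lateral, z is depth away from the user, y is up.
    The planes meet along the line z = hinge_z, y = 0, and the interaction
    plane is tilted toward the user by `tilt_deg` about that line.
    """
    x, y, z = hit
    dz, dy = z - hinge_z, y
    a = math.radians(-tilt_deg)  # rotate the tilted plane back down flat
    new_dz = dz * math.cos(a) - dy * math.sin(a)
    new_dy = dz * math.sin(a) + dy * math.cos(a)  # ~0 for on-plane points
    return (x, new_dy, hinge_z + new_dz)
```

Because the unfolding preserves in-plane distance from the hinge line, pointing at the more comfortably angled interaction screen lands the event at the intuitively matching spot on the horizontal screen.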
  • Patent number: 10592104
    Abstract: In one embodiment, a method includes displaying a virtual keyboard at a first position within a virtual scene that is visible to a user through a display, detecting that the user touches a first point on a trackpad of a controller that is associated with the display, re-positioning, in response to the detection, the virtual keyboard from the first position to a second position, where the second position is determined based on the first point on the trackpad, and displaying, in response to the detection, a pointing indicator, where the pointing indicator is displayed on top of the virtual keyboard, and where the pointing indicator marks the area of the virtual keyboard at which the user is pointing.
    Type: Grant
    Filed: June 8, 2018
    Date of Patent: March 17, 2020
    Assignee: Facebook Technologies, LLC
    Inventors: William Arthur Hugh Steptoe, Jonathan Ravasz, Michael James LeBeau
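The repositioning step in this abstract determines the keyboard's second position from the first touch point on the trackpad. One plausible sketch is a linear mapping from normalized trackpad coordinates to an offset around the keyboard's base position; the function name, the linear mapping, and the scale factors are illustrative assumptions, not taken from the patent.

```python
def reposition_keyboard(base_pos, touch_uv, pad_scale=(0.4, 0.2)):
    """Shift the virtual keyboard from its base position according to
    where the user first touches the trackpad.

    touch_uv: normalized trackpad coordinates in [0, 1] x [0, 1], with
    (0.5, 0.5) at the pad center. pad_scale converts pad units to scene
    units. Returns the keyboard's new 2-D position.
    """
    u, v = touch_uv
    dx = (u - 0.5) * pad_scale[0]
    dy = (v - 0.5) * pad_scale[1]
    return (base_pos[0] + dx, base_pos[1] + dy)
```

Anchoring the keyboard to the initial touch point means the user's thumb starts centered on the keyboard regardless of where the touch began, which shortens subsequent pointing motions.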
  • Publication number: 20190377406
    Abstract: In one embodiment, a method includes displaying a horizontal screen visible to a user through a display, determining a horizontal distance between a position of the user and the horizontal screen, determining a vertical distance between a controller associated with the user and the horizontal screen, creating an interaction screen, where the interaction screen and the horizontal screen intersect at a closest point, where the interaction screen is tilted toward the user from the horizontal screen by an angle, detecting a first event that a ray cast from a virtual representation of the controller hits a first point on the interaction screen, translating the first event to a second event that the ray cast hits a second point on the horizontal screen, and displaying a curved line from the controller to the second point on the horizontal screen that is visible to the user through the display.
    Type: Application
    Filed: June 8, 2018
    Publication date: December 12, 2019
    Inventors: William Arthur Hugh Steptoe, Jonathan Ravasz, Michael James LeBeau