Patents by Inventor Jonathan Ravasz

Jonathan Ravasz has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240241616
    Abstract: In one implementation, a method for navigating windows in 3D is provided. The method includes: displaying a first content pane with a first appearance at a first z-depth within an extended reality (XR) environment, wherein the first content pane includes first content and an input field; detecting a user input directed to the input field; and, in response to detecting the user input directed to the input field: moving the first content pane to a second z-depth within the XR environment, wherein the second z-depth is different from the first z-depth; modifying the first content pane by changing the first content pane from the first appearance to a second appearance; and displaying a second content pane with the first appearance at the first z-depth within the XR environment.
    Type: Application
    Filed: May 11, 2022
    Publication date: July 18, 2024
    Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
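    Sketch: A minimal Swift illustration of the flow this abstract describes (receding the focused pane and surfacing a second pane at the original depth). Every name below, including ContentPane and WindowNavigator, is a hypothetical reading of the abstract, not code from the patent or any Apple framework.
```swift
// Hypothetical sketch of the described z-depth window navigation.
struct ContentPane {
    var content: String
    var appearance: Appearance
    var zDepth: Float          // distance from the viewer, e.g. in meters
}

enum Appearance { case focused, receded }

struct WindowNavigator {
    var firstPane: ContentPane
    var secondPane: ContentPane?

    // Invoked when a user input is directed to the first pane's input field.
    mutating func handleInputFieldSelection(recededDepth: Float) {
        let originalDepth = firstPane.zDepth
        // Move the first pane to a different z-depth and change its appearance.
        firstPane.zDepth = recededDepth
        firstPane.appearance = .receded
        // Display a second pane with the original appearance at the original depth.
        secondPane = ContentPane(content: "second content",
                                 appearance: .focused,
                                 zDepth: originalDepth)
    }
}
```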
  • Publication number: 20240231569
    Abstract: In one implementation, a method of displaying content is performed at a device including a display, one or more processors, and non-transitory memory. The method includes displaying, in a first area, a first content pane including first content including a link to second content. The method includes, while displaying the first content pane in the first area, receiving a user input selecting the link to the second content and indicating a second area separate from the first area and not displaying a content pane. The method includes, in response to receiving the user input selecting the link to the second content and indicating the second area, displaying, in the second area, a second content pane including the second content.
    Type: Application
    Filed: May 31, 2022
    Publication date: July 11, 2024
    Inventors: Shih-Sang Chiu, Benjamin H. Boesel, David H. Huang, Jonathan Perron, Jonathan Ravasz, Jordan A. Cazamias, Tyson Erze
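    Sketch: A short, hypothetical Swift rendering of the described link-opening behavior; Area, Pane, and PaneManager are invented for illustration, and the resolve closure stands in for whatever fetches the second content.
```swift
// Hypothetical sketch: open a link from one pane into a user-indicated,
// currently empty area as a second pane.
struct Area: Hashable { let id: Int }

struct Pane {
    var content: String
    var link: String?              // optional link to second content
}

final class PaneManager {
    private var panes: [Area: Pane] = [:]

    func display(_ pane: Pane, in area: Area) { panes[area] = pane }

    // The user selects a link in the first pane while indicating a separate
    // area that is not currently displaying a content pane.
    func openLink(from sourceArea: Area, into targetArea: Area,
                  resolve: (String) -> String) {
        guard let link = panes[sourceArea]?.link,   // a link was selected
              targetArea != sourceArea,             // the areas are separate
              panes[targetArea] == nil              // the target shows no pane
        else { return }
        // Display the second content as a second pane in the indicated area.
        panes[targetArea] = Pane(content: resolve(link), link: nil)
    }
}
```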
  • Publication number: 20240201493
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Application
    Filed: February 26, 2024
    Publication date: June 20, 2024
    Inventors: Jonathan RAVASZ, Etienne PINCHON, Adam Tibor VARGA, Jasper STEVENS, Robert ELLIS, Jonah JONES, Evgenii KRIVORUCHKO
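    Sketch: One way to read the "projection" idea in code is a ray cast from the hand that selects the nearest intersected object. The vector math below is a standard ray/sphere test; every type here is an assumption rather than anything from the patent or Meta's SDKs.
```swift
// Hypothetical sketch of projection-based distant-object selection.
// Assumes rayDirection is normalized.
struct Vec3 {
    var x, y, z: Float
    static func - (a: Vec3, b: Vec3) -> Vec3 { Vec3(x: a.x - b.x, y: a.y - b.y, z: a.z - b.z) }
    func dot(_ o: Vec3) -> Float { x * o.x + y * o.y + z * o.z }
    var length: Float { dot(self).squareRoot() }
}

struct SceneObject { let name: String; let center: Vec3; let radius: Float }

// Select the nearest object whose bounding sphere the projection passes through,
// which also disambiguates when several objects lie along the ray.
func selectDistantObject(rayOrigin: Vec3, rayDirection: Vec3,
                         objects: [SceneObject]) -> SceneObject? {
    objects
        .compactMap { obj -> (SceneObject, Float)? in
            let toCenter = obj.center - rayOrigin
            let t = toCenter.dot(rayDirection)        // distance along the ray
            guard t > 0 else { return nil }           // ignore objects behind the hand
            let closest = Vec3(x: rayOrigin.x + rayDirection.x * t,
                               y: rayOrigin.y + rayDirection.y * t,
                               z: rayOrigin.z + rayDirection.z * t)
            let miss = (obj.center - closest).length  // distance from ray to center
            return miss <= obj.radius ? (obj, t) : nil
        }
        .min(by: { $0.1 < $1.1 })?                    // nearest hit wins
        .0
}
```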
  • Patent number: 12008216
    Abstract: A method is performed at an electronic device including one or more processors, a non-transitory memory, and a display. The method includes obtaining a first volumetric object associated with a first content region. The first content region is associated with a first tab. The method includes generating a first volumetric representation of the first volumetric object based on a function of the first tab. The first volumetric representation is displayable within the first tab. The method includes concurrently displaying, on the display, the first content region and the first volumetric representation within the first tab. In some implementations, the method includes changing a view of the first volumetric representation, such as by rotating the first volumetric representation or according to a positional change of the electronic device. In some implementations, the method includes generating a plurality of volumetric representations and classifying the plurality of volumetric representations.
    Type: Grant
    Filed: May 19, 2021
    Date of Patent: June 11, 2024
    Assignee: APPLE INC.
    Inventors: Benjamin Hunter Boesel, Jonathan Perron, Shih Sang Chiu, David H. Y. Huang, Jonathan Ravasz, Jordan Alexander Cazamias
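    Sketch: A hypothetical Swift reduction of the claimed method: scale a volumetric object as a function of the tab so its representation fits inside the tab, then let the view change by rotation. The types and the specific scaling rule are assumptions.
```swift
// Hypothetical sketch; the scaling rule is one plausible "function of the tab".
struct VolumetricObject { let meshName: String; let boundingRadius: Float }

struct Tab {
    let title: String
    let previewRadius: Float        // volume the tab can devote to the representation
}

struct VolumetricRepresentation {
    let meshName: String
    let scale: Float
    var rotation: Float = 0         // radians about the vertical axis
}

// Generate a representation displayable within the tab, shown concurrently
// with the first content region.
func makeRepresentation(of object: VolumetricObject, for tab: Tab) -> VolumetricRepresentation {
    let scale = min(1, tab.previewRadius / object.boundingRadius)
    return VolumetricRepresentation(meshName: object.meshName, scale: scale)
}

// Change the view of the representation, e.g. by rotating it (a positional
// change of the device could drive the same update).
func rotate(_ rep: inout VolumetricRepresentation, by deltaRadians: Float) {
    rep.rotation += deltaRadians
}
```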
  • Publication number: 20240152245
    Abstract: A computer system displays a first object that includes at least a first portion of the first object and a second portion of the first object and detects a first gaze input that meets first criteria, wherein the first criteria require that the first gaze input is directed to the first portion of the first object in order for the first criteria to be met. In response, the computer system displays a first control element that corresponds to a first operation associated with the first object, wherein the first control element was not displayed prior to detecting that the first gaze input met the first criteria, and detects a first user input directed to the first control element. In response to detecting the first user input directed to the first control element, the computer system performs the first operation with respect to the first object.
    Type: Application
    Filed: September 21, 2023
    Publication date: May 9, 2024
    Inventors: Lee S. Broughton, Israel Pastrana Vicente, Matan Stauber, Miquel Estany Rodriguez, James J. Owen, Jonathan R. Dascola, Stephen O. Lemay, Christian Schnorr, Zoey C. Taylor, Jay Moon, Benjamin H. Boesel, Benjamin Hylak, Richard D. Lyons, William A. Sorrentino, III, Lynn I. Streja, Jonathan Ravasz, Nathan Gitter, Peter D. Anton, Michael J. Rockwell, Peter L. Hajas, Evgenii Krivoruchko, Mark A. Ebbole, James Magahern, Andrew J. Sawyer, Christopher D. McKenzie, Michael E. Buerli, Olivier D. R. Gutknecht
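    Sketch: A compact, hypothetical Swift version of the gaze-revealed control flow: gaze at the first portion of an object meets the criteria, a control element appears, and an input on that control performs the operation. The dwell threshold is an assumption.
```swift
// Hypothetical sketch of the gaze-revealed control element.
enum ObjectPortion { case first, second }

struct GazeInput {
    let target: ObjectPortion
    let dwellSeconds: Double
}

final class GazeRevealedControl {
    private(set) var controlVisible = false
    let requiredDwellSeconds = 0.5          // assumed threshold, not from the patent

    // First criteria: the gaze must be directed to the first portion.
    func handle(_ gaze: GazeInput) {
        if gaze.target == .first && gaze.dwellSeconds >= requiredDwellSeconds {
            controlVisible = true           // display the first control element
        }
    }

    // A user input directed to the visible control performs the first operation.
    func handleControlInput(perform operation: () -> Void) {
        guard controlVisible else { return }
        operation()
    }
}
```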
  • Patent number: 11947111
    Abstract: The present technology relates to artificial reality systems. Such systems provide projections a user can create to specify object interactions. For example, when a user wishes to interact with an object outside her immediate reach, she can use a projection to select, move, or otherwise interact with the distant object. The present technology also includes object selection techniques for identifying and disambiguating between objects, allowing a user to select objects both near and distant from the user. Yet further aspects of the present technology include techniques for interpreting various bimanual (two-handed) gestures for interacting with objects. The present technology further includes a model for differentiating between global and local modes for, e.g., providing different input modalities or interpretations of user gestures.
    Type: Grant
    Filed: September 2, 2022
    Date of Patent: April 2, 2024
    Assignee: Meta Platforms Technologies, LLC
    Inventors: Jonathan Ravasz, Etienne Pinchon, Adam Tibor Varga, Jasper Stevens, Robert Ellis, Jonah Jones, Evgenii Krivoruchko
  • Publication number: 20240103680
    Abstract: A computer system displays a user interface that includes a first region and a second region separated by a third region and displays a focus indicator within the first region. In response to detecting an input to move the focus indicator relative to the user interface, the input being associated with movement toward the second region, if the input meets respective criteria based on the movement associated with the input, the computer system moves the focus indicator from the first region to the second region in accordance with the movement associated with the input without displaying the focus indicator in the third region; and, if the input does not meet the respective criteria, the computer system changes an appearance of the focus indicator in accordance with the movement associated with the input while continuing to display at least a portion of the focus indicator within the first region.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 28, 2024
    Inventors: Jonathan Ravasz, Israel Pastrana Vicente, Stephen O. Lemay, Kristi E.S. Bauerly, Zoey C. Taylor
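    Sketch: A hypothetical Swift take on the focus-indicator behavior: movement toward the second region either jumps the indicator across the separating third region (criteria met) or merely deforms it inside the first region (criteria not met). The threshold and the stretch model are assumptions.
```swift
// Hypothetical sketch of focus movement across a separating region.
enum Region { case first, second, third }

struct FocusIndicator {
    var region: Region = .first
    var stretch: Float = 0          // visual change while resisting the gap
}

func handleMove(_ indicator: inout FocusIndicator,
                movementTowardSecond: Float,
                crossingThreshold: Float = 1.0) {
    if movementTowardSecond >= crossingThreshold {
        // Criteria met: move to the second region without ever displaying
        // the indicator in the third region.
        indicator.region = .second
        indicator.stretch = 0
    } else {
        // Criteria not met: keep the indicator (at least partly) in the first
        // region but change its appearance in proportion to the movement.
        indicator.stretch = movementTowardSecond / crossingThreshold
    }
}
```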
  • Publication number: 20240103682
    Abstract: A computer system displays a first application user interface at a first location in a three-dimensional environment. While displaying the first application user interface at the first location in the three-dimensional environment, the computer system detects, at a first time, a first input corresponding to a request to close the first application user interface. In response to detecting the first input corresponding to a request to close the first application user interface: the computer system closes the first application user interface, including ceasing to display the first application user interface in the three-dimensional environment; and, in accordance with a determination that respective criteria are met, the computer system displays a home menu user interface at a respective home menu position that is determined based on the first location of the first application user interface in the three-dimensional environment.
    Type: Application
    Filed: September 21, 2023
    Publication date: March 28, 2024
    Inventors: Stephen O. Lemay, Zoey C. Taylor, Benjamin Hylak, William A. Sorrentino, III, Jonathan Ravasz, Peter D. Anton, Michael J. Rockwell
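    Sketch: A minimal, hypothetical Swift model of the close-then-show-home-menu behavior; the placement rule (reusing the closed window's position with a small offset) is an assumption the abstract does not spell out.
```swift
// Hypothetical sketch of home-menu placement after closing an app window.
struct Position3D { var x, y, z: Float }

final class Environment3D {
    var appWindowPosition: Position3D?      // nil once the window is closed
    var homeMenuPosition: Position3D?

    // Close the first application user interface; if the respective criteria
    // are met, display the home menu at a position based on where it was.
    func closeAppWindow(criteriaMet: Bool) {
        guard let closedAt = appWindowPosition else { return }
        appWindowPosition = nil             // cease displaying the app UI
        if criteriaMet {
            // Assumed rule: same spot, nudged slightly toward the viewer.
            homeMenuPosition = Position3D(x: closedAt.x, y: closedAt.y, z: closedAt.z - 0.2)
        }
    }
}
```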
  • Publication number: 20240103676
    Abstract: A gaze virtual object is displayed that is selectable based on attention directed to the gaze virtual object to perform an operation associated with a selectable virtual object. An indication of attention of a user is displayed. An enlarged view of a region of a user interface is displayed. A value of a slider element is adjusted based on attention of a user. A user interface element is moved at a respective rate based on attention of a user. Text is entered into a text entry field in response to speech inputs. A value for a value selection user interface object is updated based on attention of a user. Movement of a virtual object is facilitated based on direct touch interactions. A user input is facilitated for displaying a selection refinement user interface object. A visual indicator is displayed indicating progress toward selecting a virtual object when criteria are met.
    Type: Application
    Filed: September 22, 2023
    Publication date: March 28, 2024
    Inventors: Israel PASTRANA VICENTE, Danielle M. PRICE, Evgenii KRIVORUCHKO, Jonathan R. DASCOLA, Jonathan RAVASZ, Marcos ALONSO, Hugo D. VERWEIJ, Zoey C. TAYLOR, Lorena S. PAZMINO
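    Sketch: The abstract lists several attention-driven behaviors; the hypothetical Swift fragment below illustrates just one of them, adjusting a slider's value while the user's attention dwells on it. The rate and range are assumptions.
```swift
// Hypothetical sketch: advance a slider while attention rests on its
// increase region, clamped to a normalized 0...1 range.
struct GazeSlider {
    var value: Double = 0
    let ratePerSecond = 0.25        // assumed adjustment rate

    mutating func update(attentionOnIncreaseRegion: Bool, deltaTime: Double) {
        guard attentionOnIncreaseRegion else { return }
        value = min(1, value + ratePerSecond * deltaTime)
    }
}
```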
  • Publication number: 20240103712
    Abstract: A computer system detects a gaze input directed to a region in an environment and, while detecting the gaze input, detects a touch input. In response, the computer system displays a focus indicator at a location corresponding to the region. The computer system detects a continuation of the touch input that includes movement of the touch input along an input surface while being maintained on the input surface. In response, the computer system moves the focus indicator in accordance with the movement of the touch input: within a user interface of an application, if the movement corresponds to a request to move the focus indicator within the user interface; and within the user interface without moving the focus indicator outside of the boundary of the user interface, if the movement corresponds to a request to move the focus indicator outside of a boundary of the user interface.
    Type: Application
    Filed: September 19, 2023
    Publication date: March 28, 2024
    Inventors: Jonathan Ravasz, Israel Pastrana Vicente, Stephen O. Lemay, Kristi E.S. Bauerly, Zoey C. Taylor
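    Sketch: A hypothetical Swift controller for the gaze-plus-touch behavior: the gaze chooses where the focus indicator appears, and continued touch movement drags it without letting it leave the user interface's boundary. The types and the clamping approach are assumptions.
```swift
// Hypothetical sketch of gaze-placed, touch-driven focus movement.
struct Point { var x, y: Float }
struct Bounds { var minX, minY, maxX, maxY: Float }

final class TrackpadFocusController {
    private(set) var indicator: Point?
    let uiBounds: Bounds                    // boundary of the application UI

    init(uiBounds: Bounds) { self.uiBounds = uiBounds }

    // The touch input arrives while a gaze input is directed to a region;
    // the focus indicator appears at the gazed location.
    func touchBegan(atGazeLocation gaze: Point) { indicator = gaze }

    // Movement of the touch along the input surface moves the indicator,
    // but never outside the boundary of the user interface.
    func touchMoved(delta: Point) {
        guard var p = indicator else { return }
        p.x = min(max(p.x + delta.x, uiBounds.minX), uiBounds.maxX)
        p.y = min(max(p.y + delta.y, uiBounds.minY), uiBounds.maxY)
        indicator = p
    }
}
```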
  • Publication number: 20240028177
    Abstract: The present disclosure generally relates to user interfaces for electronic devices, including user interfaces for viewing and interacting with media items.
    Type: Application
    Filed: September 28, 2023
    Publication date: January 25, 2024
    Inventors: Israel PASTRANA VICENTE, Benjamin H. BOESEL, Shih-Sang CHIU, Graham R. CLARKE, Miquel ESTANY RODRIGUEZ, Chia Yang LIN, James J. OWEN, Jonathan RAVASZ, William A. SORRENTINO, III
  • Publication number: 20230384907
    Abstract: In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a viewpoint of a user of the computer system. In some embodiments, a computer system facilitates manipulation of virtual objects in a virtual environment. In some embodiments, a computer system facilitates manipulation of a three-dimensional environment relative to a reference point determined based on attention of a user of the computer system.
    Type: Application
    Filed: April 11, 2023
    Publication date: November 30, 2023
    Inventors: Benjamin H. BOESEL, Jonathan RAVASZ, Shih-Sang CHIU, Jordan A. CAZAMIAS, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, David H. HUANG
  • Publication number: 20230343027
    Abstract: Various implementations disclosed herein include devices, systems, and methods for selecting multiple virtual objects within an environment. In some implementations, a method includes receiving a first gesture associated with a first virtual object in an environment. A movement of the first virtual object in the environment within a threshold distance of a second virtual object in the environment is detected. In response to detecting the movement of the first virtual object in the environment within the threshold distance of the second virtual object in the environment, a concurrent movement of the first virtual object and the second virtual object is displayed in the environment based on the first gesture.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 26, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
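    Sketch: A small, hypothetical Swift model of the described grouping gesture: once the dragged first object comes within a threshold distance of the second, both move together for the rest of the gesture. The threshold value is an assumption.
```swift
// Hypothetical sketch of proximity-based multi-object selection while dragging.
struct Vec2 { var x, y: Float }

final class MultiSelectDrag {
    var first: Vec2
    var second: Vec2
    private var grouped = false
    let threshold: Float = 0.1      // assumed proximity threshold, in meters

    init(first: Vec2, second: Vec2) { self.first = first; self.second = second }

    // Apply one step of the first gesture's movement to the first object;
    // once within the threshold of the second object, move both concurrently.
    func drag(by delta: Vec2) {
        first.x += delta.x; first.y += delta.y
        let dx = first.x - second.x, dy = first.y - second.y
        if (dx * dx + dy * dy).squareRoot() <= threshold { grouped = true }
        if grouped {
            second.x += delta.x; second.y += delta.y
        }
    }
}
```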
  • Publication number: 20230334724
    Abstract: Various implementations disclosed herein include devices, systems, and methods for determining a placement of virtual objects in a collection of virtual objects when changing from a first viewing arrangement to a second viewing arrangement based on their respective positions in one of the viewing arrangements. In some implementations, a method includes displaying a set of virtual objects in a first viewing arrangement in a first region of an environment. The set of virtual objects are arranged in a first spatial arrangement. A user input corresponding to a request to change to a second viewing arrangement in a second region of the environment is obtained. A mapping is determined between the first spatial arrangement and a second spatial arrangement based on spatial relationships between the set of virtual objects. The set of virtual objects is displayed in the second viewing arrangement in the second region of the environment.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
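    Sketch: One hypothetical way to express the arrangement mapping in Swift: preserve the objects' relative order from the first spatial arrangement and re-space them across the second region. The 1-D positions and uniform spacing are simplifying assumptions.
```swift
// Hypothetical sketch: map a first spatial arrangement to a second one while
// keeping the objects' relative spatial relationships (their ordering).
struct VirtualObject { let name: String; let position: Float }   // offset within a region

func remap(_ objects: [VirtualObject],
           secondRegionStart: Float, spacing: Float) -> [VirtualObject] {
    objects
        .sorted { $0.position < $1.position }     // preserve left-to-right relationships
        .enumerated()
        .map { index, obj in
            VirtualObject(name: obj.name,
                          position: secondRegionStart + Float(index) * spacing)
        }
}
```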
  • Publication number: 20230333644
    Abstract: Various implementations disclosed herein include devices, systems, and methods for organizing virtual objects within an environment. In some implementations, a method includes obtaining a user input corresponding to a command to associate a virtual object with a region of an environment. A gaze input corresponding to a user focus location in the region is detected. A movement of the virtual object to an object placement location proximate the user focus location is displayed.
    Type: Application
    Filed: March 20, 2023
    Publication date: October 19, 2023
    Inventors: Jordan A. Cazamias, Aaron M. Burns, David M. Schattel, Jonathan Perron, Jonathan Ravasz, Shih-Sang Chiu
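    Sketch: A hypothetical Swift helper matching the described placement step: take the gaze focus location in the region and pick a nearby placement location, nudging aside if that spot is already occupied. The separation distance and the nudge direction are assumptions.
```swift
// Hypothetical sketch of gaze-directed object placement.
struct Point3 { var x, y, z: Float }

func placementLocation(forGazeFocus focus: Point3,
                       avoiding occupied: [Point3],
                       minimumSeparation: Float = 0.15) -> Point3 {
    var candidate = focus
    // Nudge sideways until the candidate is clear of every occupied spot,
    // keeping the placement proximate to the user focus location.
    while occupied.contains(where: { o in
        let dx = o.x - candidate.x, dy = o.y - candidate.y, dz = o.z - candidate.z
        return (dx * dx + dy * dy + dz * dz).squareRoot() < minimumSeparation
    }) {
        candidate.x += minimumSeparation
    }
    return candidate
}
```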
  • Publication number: 20230336865
    Abstract: The present disclosure generally relates to techniques and user interfaces for capturing media, displaying a preview of media, displaying a recording indicator, displaying a camera user interface, and/or displaying previously captured media.
    Type: Application
    Filed: November 22, 2022
    Publication date: October 19, 2023
    Inventors: Alexandre DA VEIGA, Lee S. Broughton, Angel Suet Yan CHEUNG, Stephen O. LEMAY, Chia Yang LIN, Behkish J. MANZARI, Ivan MARKOVIC, Alexander MENZIES, Aaron MORING, Jonathan RAVASZ, Tobias RICK, Bryce L. SCHMIDTCHEN, William A. SORRENTINO, III
  • Publication number: 20230316634
    Abstract: In some embodiments, a computer system selectively recenters virtual content to a viewpoint of a user, in the presence of physical or virtual obstacles, and/or automatically recenters one or more virtual objects in response to the display generation component changing state, selectively recenters content associated with a communication session between multiple users in response to detected user input, changes the visual prominence of content included in virtual objects based on viewpoint and/or based on detected attention of a user, modifies visual prominence of one or more virtual objects to resolve apparent obscuring of the one or more virtual objects, modifies visual prominence based on user viewpoint relative to virtual objects, concurrently modifies visual prominence based on various types of user interaction, and/or changes an amount of visual impact of an environmental effect in response to detected user input.
    Type: Application
    Filed: January 19, 2023
    Publication date: October 5, 2023
    Inventors: Shih-Sang CHIU, Benjamin H. BOESEL, Jonathan PERRON, Stephen O. LEMAY, Christopher D. MCKENZIE, Dorian D. DARGAN, Jonathan RAVASZ, Nathan GITTER
  • Publication number: 20230154122
    Abstract: In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on a viewpoint of a user in the three-dimensional environment. In some embodiments, an electronic device automatically updates the orientation of a virtual object in a three-dimensional environment based on viewpoints of a plurality of users in the three-dimensional environment. In some embodiments, the electronic device modifies an appearance of a real object that is between a virtual object and the viewpoint of a user in a three-dimensional environment. In some embodiments, the electronic device automatically selects a location for a user in a three-dimensional environment that includes one or more virtual objects and/or other users.
    Type: Application
    Filed: January 13, 2023
    Publication date: May 18, 2023
    Inventors: Jonathan R. DASCOLA, Alexis Henri PALANGIE, Peter D. ANTON, Stephen O. LEMAY, Jonathan RAVASZ, Shih-Sang CHIU, Christopher D. MCKENZIE, Dorian D. DARGAN
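    Sketch: The core of the first two behaviors is reorienting a virtual object toward one or more viewpoints. The hypothetical Swift below computes a facing yaw for a single viewpoint and, for multiple users, faces the centroid of their viewpoints; the centroid policy is an assumption.
```swift
import Foundation

// Hypothetical sketch of viewpoint-based reorientation (billboarding about
// the vertical axis). Assumes at least one viewpoint.
struct P3 { var x, y, z: Double }

// Yaw (radians) that turns an object at objectPosition to face the viewpoint.
func facingYaw(objectPosition: P3, viewpoint: P3) -> Double {
    atan2(viewpoint.x - objectPosition.x, viewpoint.z - objectPosition.z)
}

// For several users, one simple policy is to face the centroid of their viewpoints.
func facingYaw(objectPosition: P3, viewpoints: [P3]) -> Double {
    let n = Double(viewpoints.count)
    let centroid = P3(x: viewpoints.map(\.x).reduce(0, +) / n,
                      y: viewpoints.map(\.y).reduce(0, +) / n,
                      z: viewpoints.map(\.z).reduce(0, +) / n)
    return facingYaw(objectPosition: objectPosition, viewpoint: centroid)
}
```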
  • Publication number: 20230100689
    Abstract: In some embodiments, an electronic device facilitates cursor interactions in different regions in a three-dimensional environment. In some embodiments, an electronic device facilitates cursor interactions in content. In some embodiments, an electronic device facilitates cursor movement. In some embodiments, an electronic device facilitates interaction with multiple input devices. In some embodiments, a computer system facilitates cursor movement based on movement of a hand of a user of the computer system and a location of a gaze of the user in the three-dimensional environment. In some embodiments, a computer system facilitates cursor selection and scrolling of content in the three-dimensional environment.
    Type: Application
    Filed: September 24, 2022
    Publication date: March 30, 2023
    Inventors: Shih-Sang CHIU, Christopher D. MCKENZIE, Pol PLA I CONESA, Jonathan RAVASZ
  • Publication number: 20230095282
    Abstract: In one implementation, a method is provided for displaying a first pairing affordance that is world-locked to a first peripheral device. The method may be performed by an electronic device including a non-transitory memory, one or more processors, a display, and one or more input devices. The method includes detecting the first peripheral device within a three-dimensional (3D) environment via a computer vision technique. The method includes receiving, via the one or more input devices, a first user input that is directed to the first peripheral device within the 3D environment. The method includes, in response to receiving the first user input, displaying, on the display, the first pairing affordance that is world-locked to the first peripheral device within the 3D environment.
    Type: Application
    Filed: September 22, 2022
    Publication date: March 30, 2023
    Inventors: Benjamin R. Blachnitzky, Aaron M. Burns, Anette L. Freiin von Kapri, Arun Rakesh Yoganandan, Benjamin H. Boesel, Evgenii Krivoruchko, Jonathan Ravasz, Shih-Sang Chiu
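    Sketch: A hypothetical Swift outline of the pairing flow: a peripheral detected in the 3D environment gets a pairing affordance anchored (world-locked) to its position, and a user input directed at it starts pairing. The detection and pairing callbacks are stand-ins, not real APIs.
```swift
// Hypothetical sketch of a world-locked pairing affordance.
struct WorldPosition { var x, y, z: Float }

struct Peripheral { let id: String; let position: WorldPosition }

struct PairingAffordance {
    let peripheralID: String
    var anchor: WorldPosition       // world-locked to the peripheral, not the view
}

final class PairingUI {
    private(set) var affordances: [PairingAffordance] = []

    // Called when a peripheral is detected in the 3D environment
    // (e.g. via a computer vision technique).
    func peripheralDetected(_ p: Peripheral) {
        affordances.append(PairingAffordance(peripheralID: p.id, anchor: p.position))
    }

    // A first user input directed to the peripheral begins pairing with it.
    func userSelected(peripheralID: String, beginPairing: (String) -> Void) {
        guard affordances.contains(where: { $0.peripheralID == peripheralID }) else { return }
        beginPairing(peripheralID)
    }
}
```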