Patents by Inventor Jonathan TOMPSON

Jonathan TOMPSON has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230274548
    Abstract: Techniques are disclosed that enable processing a video capturing a periodic activity using a repetition network to generate periodic output (e.g., a period length of the periodic activity captured in the video and/or a frame-wise periodicity indication of the video). Various implementations include a class-agnostic repetition network which can be used to generate periodic output for a wide variety of periodic activities. Additional or alternative implementations include generating synthetic repetition videos which can be utilized to train the repetition network. (A brief illustrative sketch of the synthetic-repetition idea follows this entry.)
    Type: Application
    Filed: June 10, 2020
    Publication date: August 31, 2023
    Inventors: Debidatta Dwibedi, Yusuf Aytar, Jonathan Tompson, Andrew Zisserman, Pierre Sermanet
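The abstract above describes generating synthetic repetition videos with a known period, which can be used to train a class-agnostic repetition network. The snippet below is a minimal sketch of that data-generation idea only, not the patented implementation; the function name, array shapes, and parameters are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the patented method): build a synthetic
# repetition video by tiling a randomly chosen segment of an arbitrary clip,
# so the period length is known by construction and can serve as a training label.
import numpy as np

def make_synthetic_repetition(frames: np.ndarray, period: int, num_repeats: int,
                              rng: np.random.Generator) -> tuple[np.ndarray, int]:
    """Tile a random `period`-frame segment of `frames` (T, H, W, C) `num_repeats` times."""
    start = rng.integers(0, len(frames) - period)
    segment = frames[start:start + period]           # the repeating unit
    video = np.concatenate([segment] * num_repeats)  # ground-truth period = `period`
    return video, period

# Hypothetical usage with random pixels standing in for a real source clip.
rng = np.random.default_rng(0)
clip = rng.integers(0, 255, size=(120, 64, 64, 3), dtype=np.uint8)
video, period = make_synthetic_repetition(clip, period=12, num_repeats=6, rng=rng)
assert video.shape[0] == period * 6
```

A trained repetition network would then be asked to recover `period` (and a frame-wise periodicity indication) from videos like `video`.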
  • Publication number: 20220004883
    Abstract: An encoder neural network is described which can encode a data item, such as a frame of a video, to form a respective encoded data item. Data items of a first data sequence are associated with respective data items of a second sequence by determining which of the encoded data items of the second sequence is closest to the encoded data item produced from each data item of the first sequence. Thus, the two data sequences are aligned. The encoder neural network is trained automatically using a training set of data sequences, by an iterative process of successively increasing cycle consistency between pairs of the data sequences. (A brief illustrative sketch of this alignment step follows this entry.)
    Type: Application
    Filed: November 21, 2019
    Publication date: January 6, 2022
    Inventors: Yusuf Aytar, Debidatta Dwibedi, Andrew Zisserman, Jonathan Tompson, Pierre Sermanet
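The abstract above aligns two sequences by embedding each frame with an encoder and matching each embedded frame of one sequence to its nearest neighbour in the other, with the encoder trained to increase cycle consistency. Below is a minimal sketch of the nearest-neighbour alignment and a crude cycle-consistency measure, assuming per-frame embeddings are already available; all names and shapes are illustrative, not the patented training procedure.

```python
# Minimal sketch (illustrative only): nearest-neighbour alignment of two embedded
# sequences and a simple cycle-consistency score (fraction of frames in A that map
# to B and back to themselves).
import numpy as np

def nearest_neighbours(emb_a: np.ndarray, emb_b: np.ndarray) -> np.ndarray:
    """For each row of emb_a (N, D), return the index of the closest row of emb_b (M, D)."""
    dists = np.linalg.norm(emb_a[:, None, :] - emb_b[None, :, :], axis=-1)  # (N, M)
    return dists.argmin(axis=1)

def cycle_consistent_fraction(emb_a: np.ndarray, emb_b: np.ndarray) -> float:
    a_to_b = nearest_neighbours(emb_a, emb_b)
    b_to_a = nearest_neighbours(emb_b, emb_a)
    return float(np.mean(b_to_a[a_to_b] == np.arange(len(emb_a))))

# Hypothetical usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(1)
emb_a, emb_b = rng.normal(size=(40, 128)), rng.normal(size=(55, 128))
alignment = nearest_neighbours(emb_a, emb_b)      # frame-level alignment of A onto B
score = cycle_consistent_fraction(emb_a, emb_b)   # quantity a training process would increase
```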
  • Patent number: 11181986
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property. (A brief illustrative sketch of this flow follows this entry.)
    Type: Grant
    Filed: August 19, 2020
    Date of Patent: November 23, 2021
    Assignee: GOOGLE LLC
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
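The abstract above (shared by the related application and grant entries below) describes selecting an interaction mode from a contextual factor and combining it with an observed hand property to determine an interaction. The sketch below illustrates that control flow under assumed, hypothetical names and rules; it is not the patented system's API.

```python
# Minimal sketch (hypothetical names and rules): contextual factor -> interaction
# mode, then (mode, hand property) -> interaction with the immersive environment.
from dataclasses import dataclass
from enum import Enum, auto

class InteractionMode(Enum):
    PRECISE_POINTING = auto()   # e.g. for small UI targets
    COARSE_GRABBING = auto()    # e.g. for large virtual objects

@dataclass
class HandProperty:
    pinch_strength: float       # 0.0 (open hand) .. 1.0 (full pinch)
    palm_facing_user: bool

def select_mode(contextual_factor: str) -> InteractionMode:
    # Hypothetical rule: an open menu calls for precise pointing, otherwise grabbing.
    return (InteractionMode.PRECISE_POINTING if contextual_factor == "menu_open"
            else InteractionMode.COARSE_GRABBING)

def determine_interaction(mode: InteractionMode, hand: HandProperty) -> str:
    if mode is InteractionMode.PRECISE_POINTING:
        return "select" if hand.pinch_strength > 0.8 else "hover"
    return "grab" if hand.pinch_strength > 0.5 else "idle"

# Example: a menu is open, so a strong pinch is interpreted as a selection.
mode = select_mode("menu_open")
action = determine_interaction(mode, HandProperty(pinch_strength=0.9, palm_facing_user=True))
```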
  • Publication number: 20200379576
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Application
    Filed: August 19, 2020
    Publication date: December 3, 2020
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Patent number: 10782793
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Grant
    Filed: August 10, 2018
    Date of Patent: September 22, 2020
    Assignee: GOOGLE LLC
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Patent number: 10635161
    Abstract: In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode. (A brief illustrative sketch of this collision-mode flow follows this entry.)
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: April 28, 2020
    Assignee: GOOGLE LLC
    Inventors: Manuel Christian Clement, Alexander James Faaborg, Rahul Garg, Jonathan Tompson, Shiqi Chen
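The abstract above (shared by the related grant and application entries in this listing) describes detecting when a portion of the virtual user comes within a threshold distance of a virtual object's collision zone, then selecting a collision mode from that body part and object and modifying the virtual user accordingly. The sketch below illustrates that decision under assumed, hypothetical names, shapes, and rules; it is not the patented implementation.

```python
# Minimal sketch (hypothetical names and rules): threshold-distance check against a
# spherical collision zone, then a collision mode chosen from the (body part, object) pair.
import math
from dataclasses import dataclass

@dataclass
class CollisionZone:
    center: tuple[float, float, float]
    radius: float
    object_kind: str            # e.g. "button", "wall"

def within_threshold(part_pos: tuple[float, float, float],
                     zone: CollisionZone, threshold: float) -> bool:
    return math.dist(part_pos, zone.center) <= zone.radius + threshold

def select_collision_mode(part_name: str, zone: CollisionZone) -> str:
    # Hypothetical rule: a fingertip near a button gets fine-grained collisions,
    # anything else falls back to a coarse whole-hand mode.
    if part_name == "fingertip" and zone.object_kind == "button":
        return "fine"
    return "coarse"

# Example: the fingertip is near a button, so the virtual hand could be dynamically
# modified (e.g. reduced to a fingertip collider) for precise pressing.
zone = CollisionZone(center=(0.0, 1.2, 0.5), radius=0.05, object_kind="button")
if within_threshold((0.01, 1.21, 0.52), zone, threshold=0.03):
    mode = select_collision_mode("fingertip", zone)
```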
  • Patent number: 10599211
    Abstract: In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
    Type: Grant
    Filed: August 4, 2016
    Date of Patent: March 24, 2020
    Assignee: GOOGLE LLC
    Inventors: Manuel Christian Clement, Alexander James Faaborg, Rahul Garg, Jonathan Tompson, Shiqi Chen
  • Publication number: 20190050062
    Abstract: Systems and methods for context-sensitive hand interaction with an immersive environment are provided. An example method includes determining a contextual factor for a user and selecting an interaction mode based on the contextual factor. The example method may also include monitoring a hand of the user to determine a hand property and determining an interaction with an immersive environment based on the interaction mode and the hand property.
    Type: Application
    Filed: August 10, 2018
    Publication date: February 14, 2019
    Inventors: Shiqi Chen, Jonathan Tompson, Rahul Garg
  • Publication number: 20170038830
    Abstract: In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
    Type: Application
    Filed: August 4, 2016
    Publication date: February 9, 2017
    Inventors: Manuel Christian Clement, Alexander James Faaborg, Rahul Garg, Jonathan Tompson, Shiqi Chen