Patents by Inventor Guru Guruganesh

Guru Guruganesh has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250111210
    Abstract: Systems and methods for processing inputs using attention neural networks. In particular, one or more of the attention layers within the attention neural network compute relative position biases using functional interpolation. (A sketch of this mechanism follows the listing.)
    Type: Application
    Filed: September 27, 2024
    Publication date: April 3, 2025
    Inventors: Chong You, Guru Guruganesh, Joshua Timothy Ainslie, Manzil Zaheer, Sanjiv Kumar, Santiago Ontañón, Shanda Li, Venkata Sesha Pavana Srinadh Bhojanapalli, Sumit Sanghai
  • Publication number: 20250028556
    Abstract: Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for distributing resources to client devices are described. A computer-implemented system receives a request for a resource distribution constraint for a given resource of a first type, determines a second type of resource that has sufficient historical distribution data to generate a distribution constraint, and determines the resource distribution constraint using the historical distribution data for the second type of resource. (A sketch of this fallback logic follows the listing.)
    Type: Application
    Filed: July 19, 2023
    Publication date: January 23, 2025
    Inventors: Joshua Ruizhi Wang, Brian Mulford, Qiaoran Li, Michael John de Ridder, Pawel Opalinski, Tresa Johnson, Paul David Duetting, Guru Guruganesh, Jonathan Schneider
  • Publication number: 20220156553
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset. (A sketch of this sparse-attention pattern follows the listing.)
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
  • Patent number: 11238332
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
  • Publication number: 20210383191
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 9, 2021
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
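
The entries above cover three distinct mechanisms; the sketches below illustrate each in turn. For publication 20250111210, the inventor list appears to correspond to the published FIRE method (Functional Interpolation for Relative Positions), so this first sketch assumes that form: a small learned MLP maps a progressively normalized query-key distance to a scalar bias added to the attention logits. The MLP shapes, the log1p transform, and the causal masking are assumptions for illustration, not details taken from the claims.

```python
import numpy as np

def fire_position_bias(seq_len, W1, b1, W2, b2):
    """Causal relative-position bias via functional interpolation:
    normalize the query-key distance by a monotone transform of the
    query position, then map it through a small MLP to a scalar bias."""
    i = np.arange(seq_len)[:, None].astype(float)
    j = np.arange(seq_len)[None, :].astype(float)
    dist = np.maximum(i - j, 0.0)                  # distance for j <= i
    psi = np.log1p                                 # assumed transform
    x = psi(dist) / np.maximum(psi(np.maximum(i, 1.0)), 1e-6)
    h = np.maximum(0.0, x[..., None] * W1 + b1)    # (L, L, hidden) ReLU layer
    bias = h @ W2 + b2                             # (L, L) scalar biases
    return np.where(j <= i, bias, -np.inf)         # mask future positions

# Hypothetical usage: add the bias to standard attention logits.
rng = np.random.default_rng(0)
hidden = 32
W1, b1 = rng.normal(size=hidden) * 0.1, np.zeros(hidden)
W2, b2 = rng.normal(size=hidden) * 0.1, 0.0
Q = K = rng.normal(size=(8, 4))
logits = Q @ K.T / np.sqrt(4) + fire_position_bias(8, W1, b1, W2, b2)
```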
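For publication 20250028556, this sketch shows the fallback logic described in the abstract: when the requested resource type lacks sufficient history, the historical distribution data of a second, similar type is borrowed to derive the constraint. Every name here (`distribution_constraint`, `MIN_HISTORY`, the `similarity` callback, the percentile statistic) is hypothetical; the abstract does not specify how "sufficient" history or the constraint itself are computed.

```python
from statistics import quantiles

MIN_HISTORY = 30  # assumed threshold for "sufficient" historical data

def distribution_constraint(target_type, history, similarity):
    """Return a per-period distribution cap for target_type, falling back
    to the most similar resource type that has enough historical data."""
    records = history.get(target_type, [])
    if len(records) < MIN_HISTORY:
        # Determine a second resource type with sufficient history.
        candidates = [t for t, r in history.items()
                      if t != target_type and len(r) >= MIN_HISTORY]
        if not candidates:
            raise LookupError("no resource type with sufficient history")
        proxy = max(candidates, key=lambda t: similarity(target_type, t))
        records = history[proxy]
    # Assumed statistic: cap at the 90th percentile of observed volumes.
    return quantiles(records, n=10)[-1]

# Hypothetical usage with toy data: type_b has too little history,
# so the constraint is derived from type_a's records.
history = {"type_a": [100, 120, 90] * 12, "type_b": [5, 7]}
cap = distribution_constraint("type_b", history,
                              similarity=lambda a, b: 1.0)
```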
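Publications 20220156553 and 20210383191 and patent 11238332 share one abstract: a sparse attention mechanism that treats a proper subset of input positions differently from the rest. The inventor list matches the BigBird work, so this sketch assumes its global-plus-local pattern: positions in the subset attend everywhere and are attended to from everywhere, while the remaining positions attend only within a local window. The mask is materialized in full here for clarity; a real implementation would use blocked computation to avoid the quadratic cost.

```python
import numpy as np

def sparse_attention_mask(seq_len, global_idx, window=3):
    """True where query i may attend to key j: positions in global_idx
    (the 'first proper subset') are fully connected; all other positions
    attend only within a local window around themselves."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    local = np.abs(i - j) <= window
    g = np.zeros(seq_len, dtype=bool)
    g[list(global_idx)] = True
    return local | g[:, None] | g[None, :]

def sparse_attention(Q, K, V, mask):
    # Scaled dot-product attention with disallowed pairs masked out.
    logits = Q @ K.T / np.sqrt(Q.shape[-1])
    logits = np.where(mask, logits, -np.inf)
    w = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return (w / w.sum(axis=-1, keepdims=True)) @ V

# Hypothetical usage: two global positions, local window of 3.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(16, 8))
out = sparse_attention(Q, K, V, sparse_attention_mask(16, [0, 1]))
```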