Patents by Inventor Manzil Zaheer

Manzil Zaheer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11960867
    Abstract: Using a natural language (NL) latent representation in the automated conversion of source code from a base programming language (e.g., C++) to a target programming language (e.g., Python). A base-to-NL model can be used to generate an NL latent representation by processing a base source code snippet in the base programming language. Further, an NL-to-target model can be used to generate a target source code snippet in the target programming language (that is functionally equivalent to the base source code snippet) by processing the NL latent representation. In some implementations, output(s) from the NL-to-target model indicate canonical representation(s) of variables, and in generating the target source code snippet, technique(s) are used to match those canonical representation(s) to variable(s) of the base source code snippet. In some implementations, multiple candidate target source code snippets are generated, and a subset (e.g., one) is selected based on evaluation(s).
    Type: Grant
    Filed: May 17, 2023
    Date of Patent: April 16, 2024
    Assignee: Google LLC
    Inventors: Rishabh Singh, Hanjun Dai, Manzil Zaheer, Artem Goncharuk, Karen Davis, David Andre
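    The variable-matching step above can be made concrete with a small sketch. The two translation models themselves are not shown; what follows is a minimal Python illustration of mapping canonical placeholders back to base-snippet variables, assuming a hypothetical VAR_i placeholder format and a regex-based tokenizer (both illustrative assumptions, not details from the patent, which recurs below as the earlier grant 11693637):

        import re

        # Keywords to skip when collecting variable names; a real system
        # would use a proper lexer for the base language.
        KEYWORDS = {"int", "for", "if", "else", "while", "return", "void"}

        def rename_canonical_vars(target_snippet, base_snippet):
            """Map canonical placeholders (VAR_0, VAR_1, ...) emitted by a
            hypothetical NL-to-target model back to the variable names of
            the base snippet, in order of first appearance."""
            names = []
            for tok in re.findall(r"[A-Za-z_]\w*", base_snippet):
                if tok not in KEYWORDS and tok not in names:
                    names.append(tok)
            def substitute(match):
                i = int(match.group(1))
                return names[i] if i < len(names) else match.group(0)
            return re.sub(r"VAR_(\d+)", substitute, target_snippet)

        base = "int total = 0; for (int i = 0; i < n; i++) total += i;"
        print(rename_canonical_vars("VAR_0 = sum(range(VAR_2))", base))
        # -> total = sum(range(n))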
  • Publication number: 20230394310
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Application
    Filed: August 22, 2023
    Publication date: December 7, 2023
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
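    A concrete way to read "an adaptive effective learning rate that is non-increasing" is an AMSGrad-style update, where the denominator uses a running maximum of the second-moment estimate. The NumPy sketch below is a minimal illustration under that assumption (function name and hyperparameters are illustrative); the same abstract recurs for the related grants and filings further down this list, so the sketch is given once here:

        import numpy as np

        def adaptive_step(w, g, m, v, v_max, lr=1e-2, b1=0.9, b2=0.999,
                          eps=1e-8):
            """One update whose per-coordinate effective learning rate
            lr / sqrt(v_max) can never increase, because v_max is a
            running maximum of the second-moment estimate."""
            m = b1 * m + (1 - b1) * g
            v = b2 * v + (1 - b2) * g * g
            v_max = np.maximum(v_max, v)   # enforces non-increasing rate
            w = w - lr * m / (np.sqrt(v_max) + eps)
            return w, m, v, v_max

        # Toy usage: minimize f(w) = ||w||^2.
        w = np.array([1.0, -2.0])
        m, v, v_max = np.zeros(2), np.zeros(2), np.zeros(2)
        for _ in range(500):
            w, m, v, v_max = adaptive_step(w, 2 * w, m, v, v_max)
        print(w)  # approaches [0, 0]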
  • Publication number: 20230350657
    Abstract: Techniques are described herein for translating source code using sparse self-attention. In various implementations, a source code snippet in a first programming language may be processed to obtain graph(s) representing snippet tokens and relationships therebetween. Based on the graph(s), a subset of snippet token pairs may be identified from a superset of all possible token pairs in the source code snippet. Each token pair of the subset may include snippet tokens that are represented by nodes connected by one or more edges of the one or more graphs. A self-attention network of a translation machine learning model may be adapted to sparsely attend across the identified subset of token pairs. The source code snippet may then be processed based on the adapted translation machine learning model to generate a translation of the source code snippet in a second programming language.
    Type: Application
    Filed: April 28, 2022
    Publication date: November 2, 2023
    Inventors: Rishabh Singh, Bin Ni, Manzil Zaheer
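    As a concrete illustration, the sketch below builds a boolean mask from graph edges (e.g., AST or data-flow edges between token positions) and attends only across that subset of pairs. This is a minimal NumPy sketch of masked single-head attention, not the patented model; all names are illustrative:

        import numpy as np

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def graph_sparse_attention(q, k, v, edges):
            """Attend only across token pairs connected by a graph edge
            (plus self-pairs), instead of all n*n pairs."""
            n, d = q.shape
            mask = np.eye(n, dtype=bool)
            for i, j in edges:
                mask[i, j] = mask[j, i] = True
            scores = q @ k.T / np.sqrt(d)
            scores = np.where(mask, scores, -1e9)  # block non-edge pairs
            return softmax(scores) @ v

        rng = np.random.default_rng(0)
        q = k = v = rng.normal(size=(5, 8))
        out = graph_sparse_attention(q, k, v, edges=[(0, 1), (1, 2), (3, 4)])
        print(out.shape)  # (5, 8)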
  • Publication number: 20230325164
    Abstract: Techniques are described herein for translating a source code snippet from a first programming language to a second programming language independently of sequence-to-sequence decoding. In various implementations, the source code snippet written in the first programming language may be processed using an encoder portion of a transformer network to generate an embedding of the source code snippet. The embedding of the source code snippet may be processed using an all-pair attention layer to generate an attended embedding of the source code snippet. The attended embedding of the source code snippet may be processed using an output layer to generate, by way of a single transformation of the attended embedding of the source code snippet, data indicative of a translation of the source code snippet in the second programming language.
    Type: Application
    Filed: April 11, 2022
    Publication date: October 12, 2023
    Inventors: Rishabh Singh, Manzil Zaheer
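    The decoding strategy above replaces token-by-token generation with one parallel step. A minimal NumPy sketch of that shape of computation follows (random weights stand in for trained parameters; the layer names are assumptions, not the publication's architecture):

        import numpy as np

        def softmax(x, axis=-1):
            e = np.exp(x - x.max(axis=axis, keepdims=True))
            return e / e.sum(axis=axis, keepdims=True)

        def translate_one_shot(embeddings, W_attn, W_out):
            """All-pair attention over the encoder embedding, then one
            linear transformation producing a token for every output
            position at once (no sequence-to-sequence decoding loop)."""
            d = embeddings.shape[-1]
            scores = embeddings @ W_attn @ embeddings.T / np.sqrt(d)
            attended = softmax(scores) @ embeddings  # all-pair attention
            logits = attended @ W_out                # single transformation
            return logits.argmax(axis=-1)            # one token per position

        rng = np.random.default_rng(0)
        emb = rng.normal(size=(6, 16))               # 6 encoded tokens
        tokens = translate_one_shot(emb, rng.normal(size=(16, 16)),
                                    rng.normal(size=(16, 100)))
        print(tokens)  # 6 output-token ids from a 100-word vocabulary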
  • Patent number: 11775823
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: October 3, 2023
    Assignee: Google LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 11693637
    Abstract: Using a natural language (NL) latent representation in the automated conversion of source code from a base programming language (e.g., C++) to a target programming language (e.g., Python). A base-to-NL model can be used to generate an NL latent representation by processing a base source code snippet in the base programming language. Further, an NL-to-target model can be used to generate a target source code snippet in the target programming language (that is functionally equivalent to the base source code snippet) by processing the NL latent representation. In some implementations, output(s) from the NL-to-target model indicate canonical representation(s) of variables, and in generating the target source code snippet, technique(s) are used to match those canonical representation(s) to variable(s) of the base source code snippet. In some implementations, multiple candidate target source code snippets are generated, and a subset (e.g., one) is selected based on evaluation(s).
    Type: Grant
    Filed: May 13, 2021
    Date of Patent: July 4, 2023
    Assignee: Google LLC
    Inventors: Rishabh Singh, Hanjun Dai, Manzil Zaheer, Artem Goncharuk, Karen Davis, David Andre
  • Patent number: 11636308
    Abstract: According to embodiments, a recurrent neural network (RNN) is equipped with a set data structure whose operations are differentiable, which data structure can be used to store information for a long period of time. This differentiable set data structure can “remember” an event in the sequence of sequential data that may impact another event much later in the sequence, thereby allowing the RNN to classify the sequence based on many kinds of long dependencies. An RNN that is equipped with the differentiable set data structure can be properly trained with backpropagation and gradient descent optimizations. According to embodiments, a differentiable set data structure can be used to store and retrieve information with a simple set-like interface. According to further embodiments, the RNN can be extended to support several add operations, which can make the differentiable set data structure behave like a Bloom filter.
    Type: Grant
    Filed: October 31, 2016
    Date of Patent: April 25, 2023
    Assignee: Oracle International Corporation
    Inventors: Jean-Baptiste Tristan, Michael Wick, Manzil Zaheer
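    One way to realize such a structure is to replace hard hashing with softmax addressing and hard bit-setting with a saturating, differentiable update, which makes membership queries behave like a soft Bloom filter. The sketch below is a forward-pass-only illustration under those assumptions (slot counts, the addressing scheme, and all names are illustrative, not the patented design; the corresponding application appears later in this list):

        import numpy as np

        def soft_address(x, W):
            """Differentiable 'hash': a softmax over memory slots."""
            logits = W @ x
            e = np.exp(logits - logits.max())
            return e / e.sum()

        class DifferentiableSet:
            def __init__(self, n_slots=64, dim=8, n_hashes=2, seed=0):
                rng = np.random.default_rng(seed)
                # One random projection per hash function, Bloom-style.
                self.W = rng.normal(size=(n_hashes, n_slots, dim))
                self.slots = np.zeros((n_hashes, n_slots))

            def add(self, x):
                for h in range(len(self.W)):
                    a = soft_address(x, self.W[h])
                    # Saturating update keeps slots in [0, 1] and is
                    # differentiable, so gradients flow through 'add'.
                    self.slots[h] += (1 - self.slots[h]) * a

            def contains(self, x):
                # Bloom-style query: every hash position must look set.
                return min(float(soft_address(x, self.W[h]) @ self.slots[h])
                           for h in range(len(self.W)))

        s = DifferentiableSet()
        a, b = np.ones(8), -np.ones(8)
        s.add(a)
        print(s.contains(a) > s.contains(b))  # True: 'a' was stored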
  • Publication number: 20220335274
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for multi-stage, computationally efficient inference using a first and a second neural network.
    Type: Application
    Filed: April 14, 2022
    Publication date: October 20, 2022
    Inventors: Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Sanjiv Kumar, Amr Ahmed
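    A common realization of multi-stage inference is a confidence-thresholded cascade: evaluate the cheap first network and fall back to the expensive second network only on uncertain inputs. The sketch below assumes that realization (the publication's specific deferral criterion may differ, and the models here are random stand-ins):

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def cascade_predict(x, small_model, large_model, threshold=0.9):
            """Run the cheap network; invoke the expensive one only
            when the first prediction is not confident enough."""
            p = softmax(small_model(x))
            if p.max() >= threshold:
                return int(p.argmax())   # early exit: cheap path
            return int(softmax(large_model(x)).argmax())

        rng = np.random.default_rng(0)
        W1, W2 = rng.normal(size=(10, 4)), rng.normal(size=(10, 4))
        small = lambda x: W1 @ x         # stand-ins for trained networks
        large = lambda x: (W1 + W2) @ x
        print(cascade_predict(rng.normal(size=4), small, large))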
  • Publication number: 20220156553
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
    Type: Application
    Filed: January 31, 2022
    Publication date: May 19, 2022
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
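    The "first proper subset" can be read as a handful of global positions that attend everywhere while all remaining positions attend only locally. The mask builder below sketches that pattern (treating the first n_global tokens as the subset is an illustrative choice, not the publication's definition); the same abstract recurs for the granted patent and related publication below, so the sketch is given once here:

        import numpy as np

        def sparse_attention_mask(n, n_global=2, window=1):
            """Global positions attend to, and are attended by, every
            position; the rest attend only within a local window."""
            mask = np.zeros((n, n), dtype=bool)
            for i in range(n):
                lo, hi = max(0, i - window), min(n, i + window + 1)
                mask[i, lo:hi] = True    # local attention band
            mask[:n_global, :] = True    # global rows
            mask[:, :n_global] = True    # global columns
            return mask

        print(sparse_attention_mask(6).astype(int))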
  • Patent number: 11238332
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
    Type: Grant
    Filed: June 7, 2021
    Date of Patent: February 1, 2022
    Assignee: Google LLC
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
  • Publication number: 20210383191
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
    Type: Application
    Filed: June 7, 2021
    Publication date: December 9, 2021
    Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
  • Publication number: 20210319339
    Abstract: Generally, the present disclosure provides systems and methods for performing machine learning in hyperbolic space. Specifically, techniques are provided which enable the learning of a classifier (e.g., large-margin classifier) for data defined within a hyperbolic space (e.g., which may be particularly beneficial for data that possesses a hierarchical structure).
    Type: Application
    Filed: April 12, 2021
    Publication date: October 14, 2021
    Inventors: Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Sanjiv Kumar, Melanie Weber
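    To ground the geometry, the sketch below computes geodesic distance on the Poincare-ball model of hyperbolic space and uses it in a nearest-prototype decision rule whose runner-up gap acts as a margin. This illustrates classification in hyperbolic space generally, not the patented large-margin construction; prototypes and names are illustrative:

        import numpy as np

        def poincare_dist(u, v):
            """Geodesic distance on the Poincare ball (norms < 1)."""
            diff = u - v
            denom = (1 - u @ u) * (1 - v @ v)
            return np.arccosh(1 + 2 * (diff @ diff) / denom)

        def classify(x, prototypes):
            """Nearest prototype under hyperbolic distance; the gap
            between the two smallest distances acts as a margin."""
            d = np.array([poincare_dist(x, p) for p in prototypes])
            order = np.argsort(d)
            return int(order[0]), float(d[order[1]] - d[order[0]])

        protos = [np.array([0.3, 0.0]), np.array([-0.3, 0.0])]
        print(classify(np.array([0.25, 0.1]), protos))  # class 0, margin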
  • Publication number: 20210073639
    Abstract: A computing system and method can be used to implement a version of federated learning (FL) that incorporates adaptivity (e.g., leverages an adaptive learning rate). In particular, the present disclosure provides a general optimization framework in which (1) clients perform multiple epochs of training using a client optimizer to minimize loss on their local data and (2) a server system updates its global model by applying a gradient-based server optimizer to the average of the clients' model updates. This framework can seamlessly incorporate adaptivity by using adaptive optimizers as client and/or server optimizers. Building upon this general framework, the present disclosure also provides example specific adaptive optimization techniques for FL which use per-coordinate methods as server optimizers. By focusing on adaptive server optimization, the use of adaptive learning rates is enabled without increase in client storage or communication costs and compatibility with cross-device FL can be ensured.
    Type: Application
    Filed: November 20, 2020
    Publication date: March 11, 2021
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Zachary Charles, Zach Garrett, Keith Rush, Jakub Konecny, Hugh Brendan McMahan
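    The framework is straightforward to sketch: clients run a few epochs of a local optimizer, and the server applies a per-coordinate adaptive update to the average client delta, treated as a pseudo-gradient. The NumPy toy below assumes a least-squares client objective and an Adam-style server optimizer (hyperparameters and names are illustrative):

        import numpy as np

        def client_update(w, X, y, lr=0.1, epochs=5):
            """Client optimizer: local gradient descent on the
            client's least-squares loss."""
            for _ in range(epochs):
                w = w - lr * X.T @ (X @ w - y) / len(y)
            return w

        def server_round(w, clients, m, v, lr=0.1, b1=0.9, b2=0.99,
                         eps=1e-3):
            """Server optimizer: the average client delta is treated as
            a pseudo-gradient for an Adam-style per-coordinate update."""
            delta = np.mean([client_update(w, X, y) - w
                             for X, y in clients], axis=0)
            m = b1 * m + (1 - b1) * delta
            v = b2 * v + (1 - b2) * delta * delta
            return w + lr * m / (np.sqrt(v) + eps), m, v

        rng = np.random.default_rng(0)
        w_true = np.array([1.0, -1.0])
        clients = []
        for _ in range(4):
            X = rng.normal(size=(20, 2))
            clients.append((X, X @ w_true))
        w, m, v = np.zeros(2), np.zeros(2), np.zeros(2)
        for _ in range(100):
            w, m, v = server_round(w, clients, m, v)
        print(w)  # approaches w_true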
  • Publication number: 20200401893
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Application
    Filed: September 8, 2020
    Publication date: December 24, 2020
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 10769529
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: September 8, 2020
    Assignee: Google LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Publication number: 20200175365
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Application
    Filed: October 18, 2019
    Publication date: June 4, 2020
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 10394872
    Abstract: Herein is described an unsupervised learning method to discover topics and reduce the dimensionality of documents by designing and simulating a stochastic cellular automaton. A key formula that appears in many inference methods for latent Dirichlet allocation (LDA) is used as the local update rule of the cellular automaton. Approximate counters may be used to represent counter values being tracked by the inference algorithms. Also, sparsity may be used to reduce the amount of computation needed for sampling a topic for particular words in the corpus being analyzed.
    Type: Grant
    Filed: November 4, 2015
    Date of Patent: August 27, 2019
    Assignee: Oracle International Corporation
    Inventors: Jean-Baptiste Tristan, Stephen J. Green, Guy L. Steele, Jr., Manzil Zaheer
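    The "key formula" is the collapsed-Gibbs conditional for LDA: p(z = k) is proportional to (n_dk + alpha)(n_kw + beta)/(n_k + V*beta). The patent runs this rule as a stochastic cellular automaton with approximate counters and exploits sparsity; the sketch below shows only the formula itself, used inside an ordinary sequential Gibbs sweep (all names are illustrative, and the same abstract recurs below as the earlier application):

        import numpy as np

        def lda_gibbs(docs, V, K, alpha=0.1, beta=0.01, iters=50, seed=0):
            """Collapsed Gibbs sampling for LDA over docs given as
            lists of integer word ids in a vocabulary of size V."""
            rng = np.random.default_rng(seed)
            z = [rng.integers(K, size=len(d)) for d in docs]
            ndk = np.zeros((len(docs), K))   # doc-topic counts
            nkw = np.zeros((K, V))           # topic-word counts
            nk = np.zeros(K)                 # topic totals
            for d, doc in enumerate(docs):
                for i, w in enumerate(doc):
                    k = z[d][i]
                    ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            for _ in range(iters):
                for d, doc in enumerate(docs):
                    for i, w in enumerate(doc):
                        k = z[d][i]
                        ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                        # The key local update rule:
                        p = ((ndk[d] + alpha) * (nkw[:, w] + beta)
                             / (nk + V * beta))
                        k = rng.choice(K, p=p / p.sum())
                        z[d][i] = k
                        ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
            return z, nkw

        docs = [[0, 1, 2, 1], [3, 4, 3, 4], [0, 2, 1, 0]]
        z, nkw = lda_gibbs(docs, V=5, K=2)
        print(nkw)  # per-topic word counts after sampling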
  • Publication number: 20180121792
    Abstract: According to embodiments, a recurrent neural network (RNN) is equipped with a set data structure whose operations are differentiable, which data structure can be used to store information for a long period of time. This differentiable set data structure can “remember” an event in the sequence of sequential data that may impact another event much later in the sequence, thereby allowing the RNN to classify the sequence based on many kinds of long dependencies. An RNN that is equipped with the differentiable set data structure can be properly trained with backpropagation and gradient descent optimizations. According to embodiments, a differentiable set data structure can be used to store and retrieve information with a simple set-like interface. According to further embodiments, the RNN can be extended to support several add operations, which can make the differentiable set data structure behave like a Bloom filter.
    Type: Application
    Filed: October 31, 2016
    Publication date: May 3, 2018
    Inventors: Jean-Baptiste Tristan, Michael Wick, Manzil Zaheer
  • Publication number: 20160350411
    Abstract: Herein is described an unsupervised learning method to discover topics and reduce the dimensionality of documents by designing and simulating a stochastic cellular automaton. A key formula that appears in many inference methods for latent Dirichlet allocation (LDA) is used as the local update rule of the cellular automaton. Approximate counters may be used to represent counter values being tracked by the inference algorithms. Also, sparsity may be used to reduce the amount of computation needed for sampling a topic for particular words in the corpus being analyzed.
    Type: Application
    Filed: November 4, 2015
    Publication date: December 1, 2016
    Inventors: Jean-Baptiste Tristan, Stephen J. Green, Guy L. Steele, Jr., Manzil Zaheer