Patents by Inventor Manzil Zaheer
Manzil Zaheer has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
- Patent number: 11960867
  Abstract: Using a natural language (NL) latent representation in the automated conversion of source code from a base programming language (e.g., C++) to a target programming language (e.g., Python). A base-to-NL model can be used to generate an NL latent representation by processing a base source code snippet in the base programming language. Further, an NL-to-target model can be used to generate a target source code snippet in the target programming language (that is functionally equivalent to the base source code snippet) by processing the NL latent representation. In some implementations, output(s) from the NL-to-target model indicate canonical representation(s) of variables, and in generating the target source code snippet, technique(s) are used to match those canonical representation(s) to variable(s) of the base source code snippet. In some implementations, multiple candidate target source code snippets are generated, and a subset (e.g., one) is selected based on evaluation(s).
  Type: Grant
  Filed: May 17, 2023
  Date of Patent: April 16, 2024
  Assignee: GOOGLE LLC
  Inventors: Rishabh Singh, Hanjun Dai, Manzil Zaheer, Artem Goncharuk, Karen Davis, David Andre
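The abstract describes a two-stage pipeline: base code to NL, then NL to candidate target snippets, with evaluation-based selection. A minimal sketch of that flow follows; the class names, methods, and evaluation hook are hypothetical placeholders, not the implementation behind the patent:

```python
# Hedged sketch of the two-stage base->NL->target translation pipeline.
# All names below are illustrative assumptions.

class BaseToNLModel:
    def generate_nl(self, base_snippet: str) -> str:
        """Map base-language source (e.g., C++) to an NL latent representation."""
        raise NotImplementedError

class NLToTargetModel:
    def generate_candidates(self, nl_repr: str, n: int = 5) -> list[str]:
        """Map the NL representation to candidate target-language snippets."""
        raise NotImplementedError

def passes_evaluation(candidate: str) -> bool:
    """Stand-in for the evaluation step (e.g., running shared unit tests)."""
    raise NotImplementedError

def translate(base_snippet: str, base_to_nl: BaseToNLModel,
              nl_to_target: NLToTargetModel) -> str | None:
    nl_repr = base_to_nl.generate_nl(base_snippet)          # base code -> NL
    candidates = nl_to_target.generate_candidates(nl_repr)  # NL -> target candidates
    for candidate in candidates:
        if passes_evaluation(candidate):   # select a subset (here: the first pass)
            return candidate
    return None
```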
- Publication number: 20230394310
  Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
  Type: Application
  Filed: August 22, 2023
  Publication date: December 7, 2023
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
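One well-known way to make the effective learning rate non-increasing is an AMSGrad-style running maximum over second-moment estimates. The sketch below illustrates that idea in NumPy for a single parameter vector; it shows the general technique, not the patented method verbatim:

```python
import numpy as np

# Hedged sketch: AMSGrad-style step. Because v_max never decreases,
# the effective per-coordinate learning rate lr / sqrt(v_max) never increases.

def amsgrad_step(w, grad, m, v, v_max, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * grad             # first-moment (momentum) estimate
    v = b2 * v + (1 - b2) * grad ** 2        # second-moment estimate
    v_max = np.maximum(v_max, v)             # enforce a non-decreasing v_max ...
    w = w - lr * m / (np.sqrt(v_max) + eps)  # ... so the effective rate is non-increasing
    return w, m, v, v_max
```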
- Publication number: 20230350657
  Abstract: Techniques are described herein for translating source code using sparse self-attention. In various implementations, a source code snippet in a first programming language may be processed to obtain graph(s) representing snippet tokens, and relationships therebetween. Based on the graph(s), a subset of snippet token pairs may be identified from a superset of all possible token pairs in the source code snippet. Each token pair of the subset may include snippet tokens that are represented by nodes connected by one or more edges of the one or more graphs. A self-attention network of a translation machine learning model may be adapted to sparsely attend across the identified subset of token pairs. The source code snippet may then be processed based on the adapted translation machine learning model to generate a translation of the source code snippet in a second programming language.
  Type: Application
  Filed: April 28, 2022
  Publication date: November 2, 2023
  Inventors: Rishabh Singh, Bin Ni, Manzil Zaheer
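A minimal sketch of the idea, assuming the graph edges (e.g., from an AST or data-flow analysis) have already been extracted as token-index pairs; the mask construction and masked attention below are illustrative, not the patent's implementation:

```python
import numpy as np

def sparse_attention_mask(num_tokens, edges):
    """Allow attention only between token pairs connected in the graph(s)."""
    mask = np.zeros((num_tokens, num_tokens), dtype=bool)
    np.fill_diagonal(mask, True)             # every token attends to itself
    for i, j in edges:                       # e.g., AST or data-flow edges (assumed given)
        mask[i, j] = mask[j, i] = True
    return mask

def masked_self_attention(q, k, v, mask):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores = np.where(mask, scores, -1e9)    # block all pairs outside the subset
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v
```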
- Publication number: 20230325164
  Abstract: Techniques are described herein for translating a source code snippet from a first programming language to a second programming language independently of sequence-to-sequence decoding. In various implementations, the source code snippet written in the first programming language may be processed using an encoder portion of a transformer network to generate an embedding of the source code snippet. The embedding of the source code snippet may be processed using an all-pair attention layer to generate an attended embedding of the source code snippet. The attended embedding of the source code snippet may be processed using an output layer to generate, by way of a single transformation of the attended embedding of the source code snippet, data indicative of a translation of the source code snippet in the second programming language.
  Type: Application
  Filed: April 11, 2022
  Publication date: October 12, 2023
  Inventors: Rishabh Singh, Manzil Zaheer
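A minimal sketch of decoding "by way of a single transformation": the encoder embedding is attended once across all pairs, then one output projection produces every target token in parallel, with no left-to-right decoding loop. The shapes, random-weight placeholders, and greedy argmax readout are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of non-autoregressive, one-shot translation.
# token_embeddings: (n, d) encoder output; W_out: (d, vocab_size); vocab: id -> token.

def translate_one_shot(token_embeddings, W_out, vocab):
    d = token_embeddings.shape[-1]
    scores = token_embeddings @ token_embeddings.T / np.sqrt(d)  # all-pair attention
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    attended = weights @ token_embeddings    # attended embedding of the snippet
    logits = attended @ W_out                # single transformation to vocabulary space
    ids = logits.argmax(axis=-1)             # all output positions decoded at once
    return [vocab[i] for i in ids]
```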
- Patent number: 11775823
  Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
  Type: Grant
  Filed: September 8, 2020
  Date of Patent: October 3, 2023
  Assignee: GOOGLE LLC
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
- Patent number: 11693637
  Abstract: Using a natural language (NL) latent representation in the automated conversion of source code from a base programming language (e.g., C++) to a target programming language (e.g., Python). A base-to-NL model can be used to generate an NL latent representation by processing a base source code snippet in the base programming language. Further, an NL-to-target model can be used to generate a target source code snippet in the target programming language (that is functionally equivalent to the base source code snippet) by processing the NL latent representation. In some implementations, output(s) from the NL-to-target model indicate canonical representation(s) of variables, and in generating the target source code snippet, technique(s) are used to match those canonical representation(s) to variable(s) of the base source code snippet. In some implementations, multiple candidate target source code snippets are generated, and a subset (e.g., one) is selected based on evaluation(s).
  Type: Grant
  Filed: May 13, 2021
  Date of Patent: July 4, 2023
  Assignee: GOOGLE LLC
  Inventors: Rishabh Singh, Hanjun Dai, Manzil Zaheer, Artem Goncharuk, Karen Davis, David Andre
- Patent number: 11636308
  Abstract: According to embodiments, a recurrent neural network (RNN) is equipped with a set data structure whose operations are differentiable, which data structure can be used to store information for a long period of time. This differentiable set data structure can “remember” an event in the sequence of sequential data that may impact another event much later in the sequence, thereby allowing the RNN to classify the sequence based on many kinds of long dependencies. An RNN that is equipped with the differentiable set data structure can be properly trained with backpropagation and gradient descent optimizations. According to embodiments, a differentiable set data structure can be used to store and retrieve information with a simple set-like interface. According to further embodiments, the RNN can be extended to support several add operations, which can make the differentiable set data structure behave like a Bloom filter.
  Type: Grant
  Filed: October 31, 2016
  Date of Patent: April 25, 2023
  Assignee: Oracle International Corporation
  Inventors: Jean-Baptiste Tristan, Michael Wick, Manzil Zaheer
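A rough sketch of what a differentiable, Bloom-filter-like set could look like: items are hashed through fixed random projections into soft bucket distributions, `add` accumulates them into a memory vector, and `contains` checks (Bloom-filter style) that all hash positions are "set" by taking a minimum over hashes. Every operation is smooth, so gradients can flow through it inside an RNN. The sizes and scoring rule are illustrative choices, not the patented design:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

class DifferentiableSet:
    """Hedged sketch: a soft, Bloom-filter-like set with add/contains."""

    def __init__(self, item_dim, num_buckets=64, num_hashes=3, seed=0):
        rng = np.random.default_rng(seed)
        # Fixed random projections play the role of hash functions.
        self.projections = rng.normal(size=(num_hashes, num_buckets, item_dim))
        self.memory = np.zeros(num_buckets)

    def _bucket_dists(self, item):
        # One soft bucket distribution per "hash function".
        return np.stack([softmax(P @ item) for P in self.projections])

    def add(self, item):
        self.memory += self._bucket_dists(item).sum(axis=0)

    def contains(self, item):
        dists = self._bucket_dists(item)
        # All hash positions must be set; take the min over hashes, as a
        # Bloom filter ANDs its bit probes.
        return float(np.min((dists * self.memory).sum(axis=1)))
```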
- Publication number: 20220335274
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for multi-stage computationally-efficient inference using a first and second neural network.
  Type: Application
  Filed: April 14, 2022
  Publication date: October 20, 2022
  Inventors: Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Sanjiv Kumar, Amr Ahmed
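A minimal sketch of the two-stage idea: a cheap first network answers when it is confident, and only uncertain inputs are deferred to the larger second network. Both models are assumed to be callables returning class-probability arrays, and the max-probability confidence rule is an illustrative choice:

```python
# Hedged sketch of two-stage, computationally efficient inference.

def cascade_predict(x, small_model, large_model, threshold=0.9):
    probs = small_model(x)           # cheap first pass
    if probs.max() >= threshold:     # confident enough: stop early, save compute
        return int(probs.argmax())
    return int(large_model(x).argmax())  # defer to the expensive second model
```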
- Publication number: 20220156553
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
  Type: Application
  Filed: January 31, 2022
  Publication date: May 19, 2022
  Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
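A minimal sketch of such a mask, in the spirit of BigBird-style sparse attention: positions in a small first subset ("global" tokens) attend to and from everywhere, while the remaining positions attend only within a local window plus a few random positions. The exact pattern claimed may differ; this illustrates the asymmetric treatment of the two position subsets:

```python
import numpy as np

def sparse_mask(n, num_global=2, window=1, num_random=1, seed=0):
    """Boolean (n, n) mask: True where position i may attend to position j."""
    rng = np.random.default_rng(seed)
    mask = np.zeros((n, n), dtype=bool)
    mask[:num_global, :] = True      # global tokens attend everywhere ...
    mask[:, :num_global] = True      # ... and every position attends to them
    for i in range(num_global, n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        mask[i, lo:hi] = True        # local sliding window
        mask[i, rng.integers(0, n, size=num_random)] = True  # a few random links
    return mask
```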
- Patent number: 11238332
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
  Type: Grant
  Filed: June 7, 2021
  Date of Patent: February 1, 2022
  Assignee: Google LLC
  Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
- Publication number: 20210383191
  Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for processing network inputs using an attention neural network that has one or more sparse attention sub-layers. Each sparse attention sub-layer is configured to apply a sparse attention mechanism that attends differently for input positions that are in a first proper subset of the input positions in the input to the sub-layer than for positions that are not in the first proper subset.
  Type: Application
  Filed: June 7, 2021
  Publication date: December 9, 2021
  Inventors: Joshua Timothy Ainslie, Santiago Ontañón, Philip Pham, Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Amr Ahmed
- Publication number: 20210319339
  Abstract: Generally, the present disclosure provides systems and methods for performing machine learning in hyperbolic space. Specifically, techniques are provided which enable the learning of a classifier (e.g., large-margin classifier) for data defined within a hyperbolic space (e.g., which may be particularly beneficial for data that possesses a hierarchical structure).
  Type: Application
  Filed: April 12, 2021
  Publication date: October 14, 2021
  Inventors: Ankit Singh Rawat, Manzil Zaheer, Aditya Krishna Menon, Sanjiv Kumar, Melanie Weber
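A minimal sketch of classification in the Poincaré-ball model of hyperbolic space, where distances grow rapidly near the boundary and thus suit hierarchical data. Nearest-prototype assignment under the hyperbolic distance is used here as an illustrative stand-in for the large-margin classifier; all points are assumed to lie strictly inside the unit ball:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Hyperbolic distance between points inside the unit (Poincaré) ball."""
    uu, vv = np.dot(u, u), np.dot(v, v)
    duv = np.dot(u - v, u - v)
    x = 1 + 2 * duv / ((1 - uu) * (1 - vv) + eps)  # always >= 1 inside the ball
    return np.arccosh(x)

def classify(point, prototypes):
    """Assign the class whose prototype is hyperbolically closest."""
    dists = [poincare_distance(point, p) for p in prototypes]
    return int(np.argmin(dists))
```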
- Publication number: 20210073639
  Abstract: A computing system and method can be used to implement a version of federated learning (FL) that incorporates adaptivity (e.g., leverages an adaptive learning rate). In particular, the present disclosure provides a general optimization framework in which (1) clients perform multiple epochs of training using a client optimizer to minimize loss on their local data and (2) a server system updates its global model by applying a gradient-based server optimizer to the average of the clients' model updates. This framework can seamlessly incorporate adaptivity by using adaptive optimizers as client and/or server optimizers. Building upon this general framework, the present disclosure also provides example specific adaptive optimization techniques for FL which use per-coordinate methods as server optimizers. By focusing on adaptive server optimization, the use of adaptive learning rates is enabled without an increase in client storage or communication costs, and compatibility with cross-device FL can be ensured.
  Type: Application
  Filed: November 20, 2020
  Publication date: March 11, 2021
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Zachary Charles, Zach Garrett, Keith Rush, Jakub Konecny, Hugh Brendan McMahan
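A minimal sketch of the framework in NumPy: clients run local SGD epochs, then the server averages their model deltas and feeds the average into an Adam-style per-coordinate server optimizer (in the spirit of FedAdam). The `grad_fn` hook, datasets, and lack of client sampling are placeholder simplifications:

```python
import numpy as np

def client_update(w, data, grad_fn, lr=0.01, epochs=1):
    """Client optimizer: a few epochs of plain SGD on local data."""
    w = w.copy()
    for _ in range(epochs):
        for x, y in data:
            w = w - lr * grad_fn(w, x, y)  # grad_fn: task-specific loss gradient
    return w

def server_round(w, client_datasets, grad_fn, m, v,
                 server_lr=0.1, b1=0.9, b2=0.99, eps=1e-3):
    """Server optimizer: Adam-style step on the average client delta."""
    deltas = [client_update(w, d, grad_fn) - w for d in client_datasets]
    delta = np.mean(deltas, axis=0)              # average of clients' model updates
    m = b1 * m + (1 - b1) * delta                # server-side moment estimates
    v = b2 * v + (1 - b2) * delta ** 2
    w = w + server_lr * m / (np.sqrt(v) + eps)   # per-coordinate adaptive step
    return w, m, v
```

Because all adaptive state (m, v) lives on the server, clients store and transmit nothing beyond their model update, which is what keeps the scheme compatible with cross-device FL.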
- Publication number: 20200401893
  Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
  Type: Application
  Filed: September 8, 2020
  Publication date: December 24, 2020
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
- Patent number: 10769529
  Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
  Type: Grant
  Filed: October 18, 2019
  Date of Patent: September 8, 2020
  Assignee: Google LLC
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
- Publication number: 20200175365
  Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as, for example, a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
  Type: Application
  Filed: October 18, 2019
  Publication date: June 4, 2020
  Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
- Patent number: 10394872
  Abstract: Herein is described an unsupervised learning method to discover topics and reduce the dimensionality of documents by designing and simulating a stochastic cellular automaton. A key formula that appears in many inference methods for latent Dirichlet allocation (LDA) is used as the local update rule of the cellular automaton. Approximate counters may be used to represent counter values being tracked by the inference algorithms. Also, sparsity may be used to reduce the amount of computation needed for sampling a topic for particular words in the corpus being analyzed.
  Type: Grant
  Filed: November 4, 2015
  Date of Patent: August 27, 2019
  Assignee: Oracle International Corporation
  Inventors: Jean-Baptiste Tristan, Stephen J. Green, Guy L. Steele, Jr., Manzil Zaheer
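The "key formula" in question is presumably the collapsed-Gibbs-style LDA update p(z = k) ∝ (n_dk + α)(n_kw + β)/(n_k + Vβ). A minimal sketch of that local rule follows; in the described method it would drive each cell of the stochastic cellular automaton. Exact counter arrays are shown here, whereas the patent also allows approximate counters:

```python
import numpy as np

# Hedged sketch of the LDA local update rule used to resample the topic of
# one word occurrence. n_dk: (docs, topics) doc-topic counts; n_kw:
# (topics, vocab) topic-word counts; n_k: (topics,) topic totals.

def sample_topic(d, w, n_dk, n_kw, n_k, alpha, beta, rng):
    V = n_kw.shape[1]                                  # vocabulary size
    p = (n_dk[d] + alpha) * (n_kw[:, w] + beta) / (n_k + V * beta)
    p /= p.sum()                                       # normalize to a distribution
    return rng.choice(len(p), p=p)                     # new topic for this occurrence
```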
- Publication number: 20180121792
  Abstract: According to embodiments, a recurrent neural network (RNN) is equipped with a set data structure whose operations are differentiable, which data structure can be used to store information for a long period of time. This differentiable set data structure can “remember” an event in the sequence of sequential data that may impact another event much later in the sequence, thereby allowing the RNN to classify the sequence based on many kinds of long dependencies. An RNN that is equipped with the differentiable set data structure can be properly trained with backpropagation and gradient descent optimizations. According to embodiments, a differentiable set data structure can be used to store and retrieve information with a simple set-like interface. According to further embodiments, the RNN can be extended to support several add operations, which can make the differentiable set data structure behave like a Bloom filter.
  Type: Application
  Filed: October 31, 2016
  Publication date: May 3, 2018
  Inventors: Jean-Baptiste Tristan, Michael Wick, Manzil Zaheer
- Publication number: 20160350411
  Abstract: Herein is described an unsupervised learning method to discover topics and reduce the dimensionality of documents by designing and simulating a stochastic cellular automaton. A key formula that appears in many inference methods for latent Dirichlet allocation (LDA) is used as the local update rule of the cellular automaton. Approximate counters may be used to represent counter values being tracked by the inference algorithms. Also, sparsity may be used to reduce the amount of computation needed for sampling a topic for particular words in the corpus being analyzed.
  Type: Application
  Filed: November 4, 2015
  Publication date: December 1, 2016
  Inventors: Jean-Baptiste Tristan, Stephen J. Green, Guy L. Steele, Jr., Manzil Zaheer