Patents by Inventor Sashank Jakkam Reddi

Sashank Jakkam Reddi has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents that have been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230394310
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing. (A code sketch of this idea appears after the listing.)
    Type: Application
    Filed: August 22, 2023
    Publication date: December 7, 2023
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 11775823
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Grant
    Filed: September 8, 2020
    Date of Patent: October 3, 2023
    Assignee: Google LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 11676033
    Abstract: A method for training a machine learning model, e.g., a neural network, using a regularization scheme is disclosed. The method includes generating regularized partial gradients of losses computed using an objective function for training the machine learning model. (A hypothetical sketch of one reading of this appears after the listing.)
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: June 13, 2023
    Assignee: Google LLC
    Inventors: Aditya Krishna Menon, Ankit Singh Rawat, Sashank Jakkam Reddi, Sanjiv Kumar
  • Publication number: 20230153700
    Abstract: Provided are systems and methods that more efficiently train embedding models through the use of a cache of item embeddings for candidate items over a number of training iterations. The cached item embeddings can be “stale” embeddings that were generated by a previous version of the model at a previous training iteration. Specifically, at each iteration, the (potentially stale) item embeddings included in the cache can be used when generating similarity scores that are the basis for sampling a number of items to use as negatives in the current training iteration. For example, a Gumbel-Max sampling approach can be used to sample negative items that will enable an approximation of a true gradient. New embeddings can be generated for the sampled negative items and can be used to train the model at the current iteration. (A code sketch of this sampling step appears after the listing.)
    Type: Application
    Filed: November 8, 2022
    Publication date: May 18, 2023
    Inventors: Erik Michael Lindgren, Sashank Jakkam Reddi, Ruiqi Guo, Sanjiv Kumar
  • Publication number: 20230113984
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive learning rate while also ensuring that the learning rate is non-increasing.
    Type: Application
    Filed: December 14, 2022
    Publication date: April 13, 2023
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Satyen Chandrakant Kale
  • Patent number: 11586904
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive learning rate while also ensuring that the learning rate is non-increasing.
    Type: Grant
    Filed: September 13, 2018
    Date of Patent: February 21, 2023
    Assignee: Google LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Satyen Chandrakant Kale
  • Publication number: 20210073639
    Abstract: A computing system and method can be used to implement a version of federated learning (FL) that incorporates adaptivity (e.g., leverages an adaptive learning rate). In particular, the present disclosure provides a general optimization framework in which (1) clients perform multiple epochs of training using a client optimizer to minimize loss on their local data and (2) a server system updates its global model by applying a gradient-based server optimizer to the average of the clients' model updates. This framework can seamlessly incorporate adaptivity by using adaptive optimizers as client and/or server optimizers. Building upon this general framework, the present disclosure also provides example specific adaptive optimization techniques for FL which use per-coordinate methods as server optimizers. By focusing on adaptive server optimization, the use of adaptive learning rates is enabled without an increase in client storage or communication costs, and compatibility with cross-device FL can be ensured. (A code sketch of the server step appears after the listing.)
    Type: Application
    Filed: November 20, 2020
    Publication date: March 11, 2021
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Zachary Charles, Zach Garrett, Keith Rush, Jakub Konecny, Hugh Brendan McMahan
  • Publication number: 20210049298
    Abstract: Methods, systems, and apparatus, including computer programs encoded on computer storage media, for privacy-preserving training of a machine learning model.
    Type: Application
    Filed: August 14, 2020
    Publication date: February 18, 2021
    Inventors: Ananda Theertha Suresh, Xinnan Yu, Sanjiv Kumar, Sashank Jakkam Reddi, Venkatadheeraj Pichapati
  • Publication number: 20200401893
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Application
    Filed: September 8, 2020
    Publication date: December 24, 2020
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Patent number: 10769529
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Grant
    Filed: October 18, 2019
    Date of Patent: September 8, 2020
    Assignee: Google LLC
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Publication number: 20200175365
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive effective learning rate while also ensuring that the effective learning rate is non-increasing.
    Type: Application
    Filed: October 18, 2019
    Publication date: June 4, 2020
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Manzil Zaheer, Satyen Chandrakant Kale
  • Publication number: 20200090031
    Abstract: Generally, the present disclosure is directed to systems and methods that perform adaptive optimization with improved convergence properties. The adaptive optimization techniques described herein are useful in various optimization scenarios, including, for example, training a machine-learned model such as a neural network. In particular, according to one aspect of the present disclosure, a system implementing the adaptive optimization technique can, over a plurality of iterations, employ an adaptive learning rate while also ensuring that the learning rate is non-increasing.
    Type: Application
    Filed: September 13, 2018
    Publication date: March 19, 2020
    Inventors: Sashank Jakkam Reddi, Sanjiv Kumar, Satyen Chandrakant Kale
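
The adaptive optimization entries above (patent 11586904, patent 11775823, patent 10769529, and their related publications) all describe keeping the (effective) learning rate non-increasing across iterations. The abstracts do not spell out the update rule, but one published method with exactly this property, from the same authors, is AMSGrad (Reddi et al., 2018), which replaces Adam's second-moment estimate with its running maximum. A minimal NumPy sketch of that idea, offered as an illustration rather than as the claimed method:

    import numpy as np

    def amsgrad_step(param, grad, state, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # One update step. Tracking the running maximum v_hat keeps the
        # effective per-coordinate step size lr / (sqrt(v_hat) + eps)
        # non-increasing over iterations.
        m, v, v_hat = state
        m = beta1 * m + (1 - beta1) * grad       # first-moment (momentum) estimate
        v = beta2 * v + (1 - beta2) * grad**2    # second-moment estimate
        v_hat = np.maximum(v_hat, v)             # never let the denominator shrink
        param = param - lr * m / (np.sqrt(v_hat) + eps)
        return param, (m, v, v_hat)

    # The state starts as three zero arrays shaped like the parameters:
    # state = (np.zeros_like(w), np.zeros_like(w), np.zeros_like(w))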
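
Patent 11676033's abstract mentions "regularized partial gradients" without defining them. Purely as one hypothetical reading, the sketch below regularizes per-example (partial) gradients by rescaling them onto an L2 ball before averaging; the clipping rule and all names are assumptions, not taken from the patent:

    import numpy as np

    def aggregate_regularized_gradients(per_example_grads, max_norm=1.0):
        # Rescale each partial gradient to have L2 norm at most max_norm
        # (an assumed regularizer), then average to form the model update.
        regularized = []
        for g in per_example_grads:
            scale = min(1.0, max_norm / (np.linalg.norm(g) + 1e-12))
            regularized.append(scale * g)
        return np.mean(regularized, axis=0)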
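
Publication 20230153700 samples negative items with a Gumbel-Max approach over similarity scores computed against cached, possibly stale, item embeddings. The sketch below assumes inner-product scores and uses the Gumbel-top-k trick to draw several distinct negatives at once; the function name and cache layout are illustrative:

    import numpy as np

    def sample_negative_items(query_emb, cache_embs, k, rng=None):
        if rng is None:
            rng = np.random.default_rng()
        # Similarity logits against the (possibly stale) cached embeddings.
        logits = cache_embs @ query_emb
        # Gumbel-Max trick: the argmax of logits + Gumbel noise is a sample
        # from softmax(logits); the top-k indices give k distinct negatives.
        gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))
        return np.argpartition(-(logits + gumbel), k)[:k]

Per the abstract, fresh embeddings would then be computed for the sampled items and used for the current training step.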
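
Publication 20210073639 has clients run multiple local epochs while the server applies a gradient-based optimizer to the average of the clients' model updates. The sketch below assumes an Adam-style per-coordinate server optimizer, one instance of the "per-coordinate methods" the abstract mentions; hyperparameters and names are illustrative:

    import numpy as np

    def adaptive_server_update(global_w, client_updates, state,
                               lr=0.1, beta1=0.9, beta2=0.99, eps=1e-3):
        # Each client update is its local model minus the global model; their
        # average acts as a pseudo-gradient for an Adam-style server step.
        delta = np.mean(client_updates, axis=0)
        m, v = state
        m = beta1 * m + (1 - beta1) * delta
        v = beta2 * v + (1 - beta2) * delta**2
        return global_w + lr * m / (np.sqrt(v) + eps), (m, v)

Because the adaptivity lives entirely on the server in this sketch, clients can keep running plain SGD locally, which matches the abstract's point about avoiding extra client storage or communication costs.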