Patents by Inventor Varun Praveen

Varun Praveen has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230342666
    Abstract: Devices, systems, and techniques for experiment-based training of machine learning models (MLMs) using early stopping. The techniques include starting training tracks (TTs) that train candidate MLMs using the same training data and respective sets of training settings, performing a first evaluation of a first candidate MLM prior to completion of a corresponding first TT, and responsive to the first evaluation, placing the first TT on inactive status, the inactive status indicating that further training of the first candidate MLM is to cease. The techniques further include continuing at least a second TT using the training data, and responsive to conclusion of the TTs, selecting, as one or more final MLMs, the first candidate MLM or a second candidate MLM.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 26, 2023
    Inventors: Steve Masson, Farzin Aghdasi, Parthasarathy Sriram, Arvind Sai Kumar, Varun Praveen
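The early-stopping flow this abstract describes can be sketched as a toy loop over parallel training tracks. The function name, the settings dictionary, and the random scoring stand-in are all illustrative assumptions, not details from the filing:

```python
import random

def run_training_tracks(settings_list, total_steps=100, eval_step=20):
    """Toy sketch: one training track per settings dict, shared training data,
    with one mid-training evaluation that deactivates the weakest track."""
    tracks = [{"settings": s, "score": 0.0, "active": True} for s in settings_list]
    for step in range(1, total_steps + 1):
        for track in tracks:
            if track["active"]:
                # Stand-in for a real training step on the shared data.
                track["score"] += track["settings"]["lr"] * random.random()
        if step == eval_step and len(tracks) > 1:
            # First evaluation prior to completion: place the weakest
            # candidate's track on inactive status (its training ceases).
            weakest = min((t for t in tracks if t["active"]),
                          key=lambda t: t["score"])
            weakest["active"] = False
    # At conclusion of the tracks, select the best candidate as the final MLM.
    return max(tracks, key=lambda t: t["score"])
```

A real system would evaluate on held-out data rather than an accumulated score, but the control flow (parallel tracks, early deactivation, final selection) is the same.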
  • Publication number: 20230342600
    Abstract: Devices, systems, and techniques for provisioning of cloud-based machine learning training, optimization, and deployment services. The techniques include providing, to a remote client device, a list of available machine learning models (MLMs), receiving from the remote client device an indication of selected MLM(s) from the provided list, identifying training settings for the selected MLM(s), identifying training data for the selected MLM(s), configuring, using the identified training settings, execution of one or more processes to train the selected MLM(s) using the identified training data, and providing to the remote client device a representation of completed training of at least one MLM.
    Type: Application
    Filed: April 25, 2023
    Publication date: October 26, 2023
    Inventors: Steve Masson, Farzin Aghdasi, Parthasarathy Sriram, Arvind Sai Kumar, Varun Praveen
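The client/service exchange this abstract outlines can be mocked with a small in-process class. The class, the method names, and the catalog entries are hypothetical illustrations of the described flow, not an API from the filing:

```python
class ModelTrainingService:
    """Minimal sketch of the described provisioning flow."""

    def __init__(self):
        # Hypothetical catalog of available MLMs and their training settings.
        self._catalog = {"detector_a": {"epochs": 10},
                         "classifier_b": {"epochs": 20}}
        self._results = {}

    def list_models(self):
        # Provide the remote client device a list of available MLMs.
        return sorted(self._catalog)

    def start_training(self, model_name, dataset):
        # Identify training settings and training data for the selected MLM,
        # then configure execution of a training process (elided here).
        settings = self._catalog[model_name]
        self._results[model_name] = {"dataset": dataset, **settings,
                                     "state": "completed"}
        # Return a representation of the completed training to the client.
        return self._results[model_name]
```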
  • Publication number: 20220261631
    Abstract: Apparatuses, systems, and techniques for provisioning pipelines for efficient training, retraining, configuring, deploying, and use of machine learning models for inference in user-specific platforms.
    Type: Application
    Filed: February 12, 2021
    Publication date: August 18, 2022
    Inventors: Jonathan Michael Cohen, Ryan Edward Leary, Scot Duane Junkin, Purnendu Mukherjee, Joao Felipe Santos, Tomasz Kornuta, Varun Praveen
  • Publication number: 20220044114
    Abstract: Apparatuses, systems, and techniques to use low-precision quantization to train a neural network. In at least one embodiment, one or more weights of a trained model are represented by low-bit integer numbers instead of full floating-point precision. Changing the precision of the one or more weights is performed by first quantizing all weights and activations of a neural network, except for layers that require finer granularity in representation than 8-bit quantization can provide, to generate a first trained model. Subsequently, the precision of the one or more weights of the first trained model is changed again to generate a second trained model. For the second trained model, the precision of one or more weights of at least one additional layer is changed, in addition to the layers whose precision was changed while training the neural network to generate the first trained model.
    Type: Application
    Filed: June 9, 2021
    Publication date: February 10, 2022
    Inventors: Parthasarathy Sriram, Varun Praveen, Farzin Aghdasi
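A per-tensor symmetric 8-bit quantizer of the kind the first pass describes can be sketched in a few lines. The scale choice (max-abs over 127) and the set-based layer-selection mechanism are illustrative assumptions; the filing does not fix either:

```python
def quantize_int8(weights):
    """Symmetric per-tensor 8-bit quantization: floats -> int8 values + scale."""
    scale = max((abs(w) for w in weights), default=0.0) / 127.0 or 1.0
    quantized = [max(-127, min(127, round(w / scale))) for w in weights]
    return quantized, scale

def dequantize(quantized, scale):
    return [q * scale for q in quantized]

def quantize_model(layers, keep_full_precision):
    """First pass from the abstract: quantize every layer's weights except
    those needing finer granularity than 8-bit quantization can provide."""
    return {name: (w if name in keep_full_precision else quantize_int8(w))
            for name, w in layers.items()}
```

Layers in `keep_full_precision` pass through untouched in the first pass; per the abstract, a later pass would then revisit at least one additional layer.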
  • Publication number: 20210089921
    Abstract: Transfer learning can be used to enable a user to obtain a machine learning model that is fully trained for an intended inferencing task without having to train the model from scratch. A pre-trained model can be obtained that is relevant for that inferencing task. Additional training data, as may correspond to at least one additional class of data, can be used to further train this model. This model can then be pruned and retrained in order to obtain a smaller model that retains high accuracy for the intended inferencing task.
    Type: Application
    Filed: September 23, 2020
    Publication date: March 25, 2021
    Inventors: Farzin Aghdasi, Varun Praveen, FNU Ratnesh Kumar, Partha Sriram
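The fine-tune, prune, and retrain loop this abstract outlines can be illustrated with a magnitude-based pruning criterion. Both the criterion and the keep fraction are assumptions for the sketch; the filing does not commit to them:

```python
def magnitude_prune(weights, keep_fraction=0.5):
    """Keep only the largest-magnitude weights, shrinking the model."""
    n_keep = max(1, int(len(weights) * keep_fraction))
    return sorted(weights, key=abs, reverse=True)[:n_keep]

def transfer_learn(pretrained_weights, train_step, keep_fraction=0.5):
    """Illustrative pipeline: further train a pre-trained model on data for
    additional classes, prune it, then retrain the smaller model so it
    retains high accuracy for the intended inferencing task."""
    weights = train_step(pretrained_weights)   # train on additional class data
    weights = magnitude_prune(weights, keep_fraction)
    return train_step(weights)                 # retrain the pruned model
```

Here `train_step` stands in for any training routine; the point is the order of operations: start from a pre-trained model, extend it, prune, retrain.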
  • Publication number: 20200160185
    Abstract: Input layers of an element-wise operation in a neural network can be pruned such that the shape (e.g., the height, the width, and the depth) of the pruned layers matches. A pruning engine identifies all of the input layers into the element-wise operation. For each set of corresponding neurons in the input layers, the pruning engine equalizes the metrics associated with the neurons to generate an equalized metric associated with the set. The pruning engine prunes the input layers based on the equalized metrics generated for each unique set of corresponding neurons.
    Type: Application
    Filed: November 21, 2018
    Publication date: May 21, 2020
    Inventors: Varun Praveen, Anil Ubale, Parthasarathy Sriram, Greg Heinrich, Tayfun Gurel
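The equalize-then-prune step above can be sketched as follows, using the mean as the equalization function and a threshold as the pruning rule; both are hypothetical choices, since the abstract leaves the equalization and pruning criteria open:

```python
def equalize_metrics(layer_metrics):
    """For each set of corresponding neurons across the input layers of an
    element-wise op, collapse the per-layer metrics into one equalized value
    (the mean, as an illustrative choice)."""
    return [sum(group) / len(group) for group in zip(*layer_metrics)]

def prune_element_wise_inputs(layer_metrics, threshold):
    """Prune the same neuron indices from every input layer, so the pruned
    layers' shapes still match for the element-wise operation."""
    equalized = equalize_metrics(layer_metrics)
    keep = [i for i, m in enumerate(equalized) if m >= threshold]
    return keep, equalized
```

Because every input layer keeps exactly the indices in `keep`, the pruned layers retain identical height, width, and depth, which is what the element-wise operation requires.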