Patents by Inventor Venkatanathan Varadarajan

Venkatanathan Varadarajan has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20200334569
    Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by using hyperparameter predictors. In an embodiment, for each mini-machine learning model (MML model) of a plurality of MML models, a respective hyperparameter predictor set that predicts a respective set of hyperparameter settings for a first data set is trained. Each MML model represents a respective reference machine learning model (RML model) of a plurality of RML models. A first plurality of data set samples is generated from the first data set. A first plurality of first meta-feature sets is generated, each first meta-feature set describing a respective first data set sample of said first plurality. A respective target set of hyperparameter settings is generated for each MML model using a hypertuning algorithm. The first plurality of first meta-feature sets and the respective target set of hyperparameter settings are used to train the respective hyperparameter predictor set.
    Type: Application
    Filed: April 18, 2019
    Publication date: October 22, 2020
    Inventors: Hesam Fathi Moghadam, Sandeep Agrawal, Venkatanathan Varadarajan, Anatoly Yakovlev, Sam Idicula, Nipun Agarwal
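
    The following is a minimal, hypothetical sketch of the idea described in this abstract; the meta-features, the hypertune stub, and all names are assumptions for illustration, not the patented implementation. It trains a per-model hyperparameter predictor that maps data-set meta-features to the hyperparameter settings a tuning routine produced for data-set samples.

    ```python
    # Hypothetical sketch (assumed names): train a hyperparameter predictor
    # that maps data-set meta-features to good hyperparameter settings, using
    # targets produced by a hypertuning routine on data-set samples.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def meta_features(sample: np.ndarray) -> np.ndarray:
        """Simple meta-features describing one data-set sample (assumed set)."""
        return np.array([sample.shape[0], sample.shape[1],
                         sample.mean(), sample.std()])

    def hypertune(sample: np.ndarray) -> np.ndarray:
        """Stand-in for a hypertuning algorithm that returns target
        hyperparameter settings (e.g. learning rate, depth) for this sample."""
        rng = np.random.default_rng(int(sample.sum()) % 2**32)
        return np.array([10 ** rng.uniform(-4, -1), rng.integers(2, 10)])

    # Generate data-set samples, their meta-features, and target hyperparameters.
    rng = np.random.default_rng(0)
    samples = [rng.normal(size=(rng.integers(50, 200), 8)) for _ in range(20)]
    X = np.stack([meta_features(s) for s in samples])
    y = np.stack([hypertune(s) for s in samples])

    # The hyperparameter predictor: meta-features in, hyperparameter values out.
    predictor = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    print(predictor.predict(meta_features(samples[0]).reshape(1, -1)))
    ```
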
  • Publication number: 20200327448
    Abstract: Herein are techniques for exploring hyperparameters of a machine learning model (MLM) and for training a regressor to predict the time needed to train the MLM based on a hyperparameter configuration and a dataset. In an embodiment that is deployed in production inferencing mode, for each landmark configuration, each of which contains values for hyperparameters of an MLM, a computer configures the MLM based on the landmark configuration and measures the time spent training the MLM on a dataset. An already trained regressor predicts the time needed to train the MLM based on a proposed configuration of the MLM, dataset meta-feature values, and the training durations and hyperparameter values of landmark configurations of the MLM. When instead in training mode, a regressor in training ingests a training corpus of MLM performance history to learn, by reinforcement, to predict a training time for the MLM for new datasets and/or new hyperparameter configurations.
    Type: Application
    Filed: April 15, 2019
    Publication date: October 15, 2020
    Inventors: Anatoly Yakovlev, Venkatanathan Varadarajan, Sandeep Agrawal, Hesam Fathi Moghadam, Sam Idicula, Nipun Agarwal
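
    The sketch below is a rough illustration of the training-time prediction idea, under assumed names and a toy model; it is not the patent's code. Timings are measured at a few landmark configurations, and a regressor is fit on (proposed configuration, data-set meta-features, landmark timings) to predict the training time of a new configuration.

    ```python
    # Hypothetical sketch: predict the time needed to train a model from a
    # proposed hyperparameter configuration, data-set meta-features, and
    # timings measured at a few landmark configurations.
    import time
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.neural_network import MLPClassifier

    def train_and_time(hidden_units: int, X, y) -> float:
        """Configure the model with one hyperparameter and measure training time."""
        start = time.perf_counter()
        MLPClassifier(hidden_layer_sizes=(hidden_units,), max_iter=30).fit(X, y)
        return time.perf_counter() - start

    rng = np.random.default_rng(0)
    X, y = rng.normal(size=(400, 10)), rng.integers(0, 2, size=400)
    meta = [X.shape[0], X.shape[1]]                      # data-set meta-features
    landmarks = [8, 64, 256]                             # landmark configurations
    landmark_times = [train_and_time(h, X, y) for h in landmarks]

    # Training corpus: (proposed config, meta-features, landmark times) -> time.
    configs = [4, 16, 32, 128, 512]
    rows = [[c] + meta + landmark_times for c in configs]
    times = [train_and_time(c, X, y) for c in configs]
    regressor = Ridge().fit(rows, times)

    proposed = [[1024] + meta + landmark_times]          # a new configuration
    print("predicted seconds:", regressor.predict(proposed)[0])
    ```
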
  • Publication number: 20200125961
    Abstract: Techniques are described for generating and applying mini-machine learning variants of machine learning algorithms to save computational resources in tuning and selection of machine learning algorithms. In an embodiment, at least one of the hyper-parameter values for a reference variant is modified to a new hyper-parameter value, thereby generating a new variant of a machine learning algorithm from the reference variant of the machine learning algorithm. A performance score is determined for the new variant of the machine learning algorithm using a training dataset, the performance score representing the accuracy of the new machine learning model for the training dataset. By performing training of the new variant of the machine learning algorithm with the training dataset, a cost metric of the new variant is measured by measuring the computing resources used for the training.
    Type: Application
    Filed: October 19, 2018
    Publication date: April 23, 2020
    Inventors: Sandeep Agrawal, Venkatanathan Varadarajan, Sam Idicula, Nipun Agarwal
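
    As a rough, hypothetical illustration of this abstract (the specific hyper-parameters and models are assumptions): a cheaper "mini" variant is derived from a reference variant by shrinking a hyper-parameter, and both its accuracy (performance score) and the resources spent training it (cost metric) are recorded.

    ```python
    # Hypothetical sketch: derive a cheaper "mini" variant of a reference
    # algorithm by modifying hyper-parameters, then record its performance
    # score (accuracy) and its cost metric (training resources used).
    import time
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 20))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    reference = {"n_estimators": 200, "max_depth": None}    # reference variant
    mini = dict(reference, n_estimators=10, max_depth=3)    # modified variant

    def score_and_cost(params):
        start = time.perf_counter()
        model = RandomForestClassifier(random_state=0, **params).fit(X_tr, y_tr)
        cost = time.perf_counter() - start                  # cost metric
        return model.score(X_te, y_te), cost                # performance score

    print("reference:", score_and_cost(reference))
    print("mini:     ", score_and_cost(mini))
    ```
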
  • Patent number: 10630957
    Abstract: Techniques described herein provide methods and systems for scalable distribution of computer vision workloads. In an embodiment, a method comprises receiving, at each of a first node and a second node of a distributed system of nodes, two images. The first image comprises a first set of pixels and the second image comprises a second set of pixels. The method further comprises shifting, at the first node, each pixel of the first set of pixels of the first image in a uniform direction by a first number of pixels to form a first shifted image and shifting, at the second node, each pixel of the first set of pixels of the first image in the uniform direction by a second number of pixels to form a second shifted image. The second number of pixels is different from the first number of pixels.
    Type: Grant
    Filed: October 1, 2019
    Date of Patent: April 21, 2020
    Assignee: Oracle International Corporation
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
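
    A minimal sketch of the shifting step described in this abstract (and in the related publications below), assuming small NumPy arrays in place of real images and simple node functions; it is not the patented system. Each "node" shifts the first image by a different, node-specific number of pixels so that candidate shifts can be evaluated in parallel.

    ```python
    # Hypothetical sketch (assumed shapes): each node of a distributed system
    # receives the same two images and shifts the first image by a
    # node-specific number of pixels before comparing it with the second.
    import numpy as np

    rng = np.random.default_rng(0)
    first_image = rng.integers(0, 256, size=(64, 64))
    second_image = rng.integers(0, 256, size=(64, 64))

    def node_work(node_shift: int) -> tuple[int, float]:
        """Shift every pixel of the first image in a uniform direction by
        node_shift pixels, then compare the shifted image with the second."""
        shifted = np.roll(first_image, shift=node_shift, axis=1)
        difference = float(np.abs(shifted - second_image).mean())
        return node_shift, difference

    # Node 1 and node 2 use different shift amounts, as in the abstract.
    results = [node_work(shift) for shift in (1, 2)]
    print(results)
    ```
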
  • Publication number: 20200036954
    Abstract: Techniques described herein provide methods and systems for scalable distribution of computer vision workloads. In an embodiment, a method comprises receiving, at each of a first node and a second node of a distributed system of nodes, two images. The first image comprises a first set of pixels and the second image comprises a second set of pixels. The method further comprises shifting, at the first node, each pixel of the first set of pixels of the first image in a uniform direction by a first number of pixels to form a first shifted image and shifting, at the second node, each pixel of the first set of pixels of the first image in the uniform direction by a second number of pixels to form a second shifted image. The second number of pixels is different from the first number of pixels.
    Type: Application
    Filed: October 1, 2019
    Publication date: January 30, 2020
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
  • Patent number: 10529049
    Abstract: Techniques are provided herein for generating an integral image of an input image in parallel across the cores of a multi-core processor. The input image is split into a plurality of tiles, each of which is stored in a scratchpad memory associated with a distinct core. At each tile, a partial integral image of the tile is first computed over the tile, using a Single-Pass Algorithm. This is followed by aggregating partial sums belonging to subsets of tiles using a 2D Inclusive Parallel Prefix Algorithm. A summation is finally performed over the aggregated partial sums to generate the integral image over the entire input image.
    Type: Grant
    Filed: March 27, 2017
    Date of Patent: January 7, 2020
    Assignee: Oracle International Corporation
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
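
    The sketch below illustrates the tile-based decomposition described in this abstract under simplifying assumptions (a serial loop stands in for per-core parallelism, and tile sizes divide the image evenly); it is not the patented algorithm. Each tile's partial integral image is computed independently, and the partial sums of tiles above and to the left are then aggregated.

    ```python
    # Hypothetical sketch: integral image computed tile-by-tile. Each tile's
    # partial integral can be computed independently (e.g. one tile per core),
    # and the partial sums are then aggregated across tiles.
    import numpy as np

    def tiled_integral_image(image: np.ndarray, tile: int) -> np.ndarray:
        H, W = image.shape
        assert H % tile == 0 and W % tile == 0
        out = np.empty_like(image, dtype=np.int64)

        # Phase 1: per-tile partial integral images (independent work per core).
        local = {}
        for r0 in range(0, H, tile):
            for c0 in range(0, W, tile):
                block = image[r0:r0 + tile, c0:c0 + tile]
                local[(r0, c0)] = block.cumsum(axis=0).cumsum(axis=1)

        # Phase 2: aggregate partial sums from tiles above and to the left.
        for r0 in range(0, H, tile):
            for c0 in range(0, W, tile):
                li = local[(r0, c0)]
                left = sum(local[(r0, c)][:, -1:] for c in range(0, c0, tile))
                above = sum(local[(r, c0)][-1:, :] for r in range(0, r0, tile))
                corner = sum(local[(r, c)][-1, -1]
                             for r in range(0, r0, tile)
                             for c in range(0, c0, tile))
                out[r0:r0 + tile, c0:c0 + tile] = li + left + above + corner
        return out

    img = np.arange(64).reshape(8, 8)
    assert np.array_equal(tiled_integral_image(img, 4),
                          img.cumsum(axis=0).cumsum(axis=1))
    print("tiled integral image matches the single-pass result")
    ```
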
  • Patent number: 10469822
    Abstract: Techniques described herein provide methods and systems for scalable distribution of computer vision workloads. In an embodiment, a method comprises receiving, at each of a first node and a second node of a distributed system of nodes, two images. The first image comprises a first set of pixels and the second image comprises a second set of pixels. The method further comprises shifting, at the first node, each pixel of the first set of pixels of the first image in a uniform direction by a first number of pixels to form a first shifted image and shifting, at the second node, each pixel of the first set of pixels of the first image in the uniform direction by a second number of pixels to form a second shifted image. The second number of pixels is different from the first number of pixels.
    Type: Grant
    Filed: March 28, 2017
    Date of Patent: November 5, 2019
    Assignee: Oracle International Corporation
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
  • Publication number: 20190244139
    Abstract: Techniques are provided herein for optimal initialization of value ranges of machine learning algorithm hyperparameters and other predictions based on dataset meta-features. In an embodiment, for each particular hyperparameter of a machine learning algorithm, a computer invokes, based on an inference dataset, a distinct trained metamodel for the particular hyperparameter to detect an improved subrange of possible values for the particular hyperparameter. The machine learning algorithm is configured based on the improved subranges of possible values for the hyperparameters. The machine learning algorithm is invoked to obtain a result. In an embodiment, a gradient-based search space reduction (GSSR) finds an optimal value within the improved subrange of values for the particular hyperparameter. In an embodiment, the metamodel is trained based on performance data from exploratory sampling of the configuration hyperspace, such as by GSSR.
    Type: Application
    Filed: March 7, 2018
    Publication date: August 8, 2019
    Inventors: Venkatanathan Varadarajan, Sandeep Agrawal, Sam Idicula, Nipun Agarwal
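
    A rough sketch of the metamodel idea in this abstract, with assumed meta-features, an assumed hyperparameter, and a toy stand-in for the gradient-based reduction; it is not the patented method. A trained metamodel predicts an improved sub-range of values for a hyperparameter from data-set meta-features, and the search then stays inside that sub-range.

    ```python
    # Hypothetical sketch: a per-hyperparameter metamodel predicts an improved
    # sub-range of values from data-set meta-features; the algorithm is then
    # configured and tuned inside that narrowed range.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Training corpus for the metamodel: meta-features of past data sets mapped
    # to the (low, high) sub-range of a hyperparameter that worked well for them.
    past_meta = rng.uniform(size=(100, 3))                   # n_rows, n_cols, sparsity
    past_subranges = np.column_stack([past_meta[:, 0] * 0.01,
                                      past_meta[:, 0] * 0.1])  # e.g. learning rate
    metamodel = RandomForestRegressor(random_state=0).fit(past_meta, past_subranges)

    # Inference: predict an improved sub-range for a new data set, then search
    # only inside that sub-range (a toy stand-in for gradient-based reduction).
    new_meta = np.array([[0.4, 0.2, 0.7]])
    low, high = metamodel.predict(new_meta)[0]
    candidates = np.linspace(low, high, num=8)
    scores = -np.abs(candidates - (low + high) / 2)          # toy objective
    best = candidates[scores.argmax()]
    print(f"sub-range [{low:.4f}, {high:.4f}], chosen value {best:.4f}")
    ```
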
  • Publication number: 20190095818
    Abstract: Herein, horizontally scalable techniques efficiently configure machine learning algorithms for optimal accuracy without informed inputs. In an embodiment, for each particular hyperparameter, and for each epoch, a computer processes the particular hyperparameter. An epoch explores one hyperparameter based on hyperparameter tuples. A respective score is calculated from each tuple. The tuple contains a distinct combination of values, each of which is contained in a value range of a distinct hyperparameter. All values of a tuple that belong to the particular hyperparameter are distinct. All values of a tuple that belong to other hyperparameters are held constant. The value range of the particular hyperparameter is narrowed based on an intersection point of a first line based on the scores and a second line based on the scores. A machine learning algorithm is optimally configured from the repeatedly narrowed value ranges of hyperparameters. The configured algorithm is invoked to obtain a result.
    Type: Application
    Filed: January 31, 2018
    Publication date: March 28, 2019
    Inventors: Venkatanathan Varadarajan, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
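
    The sketch below is a toy illustration of the range-narrowing step in this abstract, with an assumed scoring function; it is not the patented procedure. Scores are sampled while varying one hyperparameter, two lines are fitted through the rising and falling samples, and the range is narrowed around their intersection.

    ```python
    # Hypothetical sketch: narrow one hyperparameter's value range using the
    # intersection of two lines fitted to scores observed while varying only
    # that hyperparameter and holding the others constant.
    import numpy as np

    def score(value: float) -> float:
        """Stand-in for training/testing the model at this hyperparameter value."""
        return -(value - 0.3) ** 2          # best score near 0.3

    low, high = 0.0, 1.0
    values = np.linspace(low, high, num=8)
    scores = np.array([score(v) for v in values])

    # Fit one line through the rising samples and one through the falling
    # samples, then intersect them to estimate where the peak lies.
    peak = scores.argmax()
    m1, b1 = np.polyfit(values[:peak + 1], scores[:peak + 1], deg=1)
    m2, b2 = np.polyfit(values[peak:], scores[peak:], deg=1)
    x_star = (b2 - b1) / (m1 - m2)

    # Narrow the range to a window around the intersection; repeat next epoch.
    width = (high - low) / 4
    low, high = max(low, x_star - width), min(high, x_star + width)
    print(f"narrowed range: [{low:.3f}, {high:.3f}]")
    ```
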
  • Publication number: 20190095819
    Abstract: Herein are techniques for automatic tuning of hyperparameters of machine learning algorithms. System throughput is maximized by horizontally scaling and asynchronously dispatching the configuration, training, and testing of an algorithm. In an embodiment, a computer stores a best cost achieved by executing a target model based on best values of the target algorithm's hyperparameters. The best values and their cost are updated by epochs that asynchronously execute. Each epoch has asynchronous costing tasks that explore a distinct hyperparameter. Each costing task has a sample of exploratory values that differs from the best values along the distinct hyperparameter. The asynchronous costing tasks of a same epoch have different values for the distinct hyperparameter, which accomplishes an exploration. In an embodiment, an excessive update of best values or best cost creates a major epoch for exploration in a subspace that is more or less unrelated to other epochs, thereby avoiding local optima.
    Type: Application
    Filed: September 21, 2018
    Publication date: March 28, 2019
    Inventors: Venkatanathan Varadarajan, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
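
    A minimal, hypothetical sketch of the asynchronous dispatch described in this abstract, using a toy cost function and a thread pool as stand-ins; the names and structure are assumptions. Costing tasks each vary one hyperparameter away from the current best values, and the shared best configuration is updated when a lower cost is found.

    ```python
    # Hypothetical sketch: asynchronously dispatch "costing tasks" that each
    # vary one hyperparameter from the current best values, then update the
    # shared best configuration whenever a lower cost is found.
    from concurrent.futures import ThreadPoolExecutor

    def cost(config: dict) -> float:
        """Stand-in for configuring, training, and testing the target model."""
        return (config["lr"] - 0.1) ** 2 + (config["depth"] - 6) ** 2

    best = {"lr": 0.5, "depth": 2}
    best_cost = cost(best)

    with ThreadPoolExecutor(max_workers=4) as pool:
        for epoch in range(3):
            # One epoch per hyperparameter: explore it while holding others fixed.
            for name, samples in {"lr": [0.05, 0.1, 0.3],
                                  "depth": [4, 6, 8]}.items():
                configs = [dict(best, **{name: v}) for v in samples]
                for config, c in zip(configs, pool.map(cost, configs)):
                    if c < best_cost:
                        best, best_cost = config, c

    print(best, best_cost)
    ```
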
  • Publication number: 20190095756
    Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by trained algorithm-specific regressors. In an embodiment, a computer derives meta-feature values from an inference dataset by, for each meta-feature, deriving a respective meta-feature value from the inference dataset. For each trainable algorithm and each regression meta-model that is respectively associated with the algorithm, a respective score is calculated by invoking the meta-model based on at least one of: a respective subset of meta-feature values, and/or hyperparameter values of a respective subset of hyperparameters of the algorithm. The algorithm(s) are selected based on the respective scores. Based on the inference dataset, the selected algorithm(s) may be invoked to obtain a result. In an embodiment, the trained regressors are distinctly configured artificial neural networks. In an embodiment, the trained regressors are contained within algorithm-specific ensembles.
    Type: Application
    Filed: January 30, 2018
    Publication date: March 28, 2019
    Inventors: Sandeep Agrawal, Sam Idicula, Venkatanathan Varadarajan, Nipun Agarwal
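
    As a rough illustration of the algorithm-selection idea in this abstract (the meta-features, algorithm names, and performance history are assumptions, not the patented regressors): each candidate algorithm has its own regression meta-model that scores a new data set from its meta-features, and the highest-scoring algorithm is selected.

    ```python
    # Hypothetical sketch: score each candidate algorithm with an
    # algorithm-specific regression meta-model over data-set meta-features,
    # then select the highest-scoring algorithm.
    import numpy as np
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)

    def meta_feature_values(dataset: np.ndarray) -> np.ndarray:
        return np.array([dataset.shape[0], dataset.shape[1],
                         dataset.mean(), dataset.std()])

    # Pretend history: meta-features of past data sets and the accuracy each
    # algorithm achieved on them, used to train one meta-model per algorithm.
    past = [rng.normal(size=(rng.integers(50, 500), 12)) for _ in range(30)]
    X_meta = np.stack([meta_feature_values(d) for d in past])
    meta_models = {
        name: LinearRegression().fit(X_meta, rng.uniform(0.5, 1.0, size=len(past)))
        for name in ("random_forest", "svm", "gradient_boosting")
    }

    # Inference: derive meta-features of the new data set and rank algorithms.
    inference = rng.normal(size=(200, 12))
    x = meta_feature_values(inference).reshape(1, -1)
    scores = {name: float(m.predict(x)[0]) for name, m in meta_models.items()}
    print(max(scores, key=scores.get), scores)
    ```
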
  • Publication number: 20180288384
    Abstract: Techniques described herein provide methods and systems for scalable distribution of computer vision workloads. In an embodiment, a method comprises receiving, at each of a first node and a second node of a distributed system of nodes, two images. The first image comprises a first set of pixels and the second image comprises a second set of pixels. The method further comprises shifting, at the first node, each pixel of the first set of pixels of the first image in a uniform direction by a first number of pixels to form a first shifted image and shifting, at the second node, each pixel of the first set of pixels of the first image in the uniform direction by a second number of pixels to form a second shifted image. The second number of pixels is different from the first number of pixels.
    Type: Application
    Filed: March 28, 2017
    Publication date: October 4, 2018
    Inventors: Venkatanathan Varadarajan, Arun Raghavan, Sam Idicula, Nipun Agarwal
  • Publication number: 20180276784
    Abstract: Techniques are provided herein for generating an integral image of an input image in parallel across the cores of a multi-core processor. The input image is split into a plurality of tiles, each of which is stored in a scratchpad memory associated with a distinct core. At each tile, a partial integral image of the tile is first computed over the tile, using a Single-Pass Algorithm. This is followed by aggregating partial sums belonging to subsets of tiles using a 2D Inclusive Parallel Prefix Algorithm. A summation is finally performed over the aggregated partial sums to generate the integral image over the entire input image.
    Type: Application
    Filed: March 27, 2017
    Publication date: September 27, 2018
    Inventors: Venkatanathan Varadarajan, Arun Henry Benjamin Raghavan, Sam Idicula, Nipun Agarwal
  • Patent number: 9128739
    Abstract: A method includes the step of running a set of instances on at least one cloud for a first time interval, each of the instances comprising a bundle of virtualized resources. The method also includes the step of evaluating one or more performance characteristics of each of the instances in the set of instances over the first time interval. The method further includes the step of determining a first subset of the set of instances to maintain for a second time interval and a second subset of the set of instances to terminate for the second time interval responsive to the evaluating step. The steps are performed by at least one processing device comprising a processor coupled to a memory.
    Type: Grant
    Filed: December 31, 2012
    Date of Patent: September 8, 2015
    Assignee: EMC Corporation
    Inventors: Ari Juels, Kevin D. Bowers, Benjamin Farley, Venkatanathan Varadarajan, Thomas Ristenpart, Michael M. Swift
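
    A toy sketch of the evaluate-and-prune loop described in this last abstract, with simulated measurements in place of real cloud instances; the metric and threshold are assumptions. A set of instances is evaluated over one time interval, the better-performing subset is kept for the next interval, and the rest are marked for termination.

    ```python
    # Hypothetical sketch (simulated measurements): evaluate a performance
    # characteristic of each running instance over one interval, keep the
    # better-performing subset, and mark the rest for termination.
    import random

    random.seed(0)
    instances = [f"instance-{i}" for i in range(8)]

    def measure_throughput(instance: str) -> float:
        """Stand-in for benchmarking an instance over the first time interval."""
        return random.uniform(50.0, 150.0)   # e.g. requests handled per second

    observed = {inst: measure_throughput(inst) for inst in instances}
    median = sorted(observed.values())[len(observed) // 2]

    keep = [inst for inst, perf in observed.items() if perf >= median]
    terminate = [inst for inst in instances if inst not in keep]
    print("maintain for next interval:", keep)
    print("terminate:", terminate)
    ```
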