Patents by Inventor Sandeep Agrawal
Sandeep Agrawal has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20200327448
Abstract: Herein are techniques for exploring hyperparameters of a machine learning model (MLM) and for training a regressor to predict the time needed to train the MLM based on a hyperparameter configuration and a dataset. In an embodiment deployed in production inferencing mode, for each landmark configuration, each containing values for hyperparameters of an MLM, a computer configures the MLM based on the landmark configuration and measures the time spent training the MLM on a dataset. An already-trained regressor predicts the time needed to train the MLM based on a proposed configuration of the MLM, dataset meta-feature values, and the training durations and hyperparameter values of landmark configurations of the MLM. When instead in training mode, a regressor in training ingests a training corpus of MLM performance history to learn, by reinforcement, to predict a training time for the MLM for new datasets and/or new hyperparameter configurations.
Type: Application
Filed: April 15, 2019
Publication date: October 15, 2020
Inventors: Anatoly Yakovlev, Venkatanathan Varadarajan, Sandeep Agrawal, Hesam Fathi Moghadam, Sam Idicula, Nipun Agarwal
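The prediction step in this abstract can be sketched in miniature. The snippet below is purely illustrative and not the patented regressor: it estimates training time for a proposed configuration by inverse-distance weighting over timed landmark configurations, and all hyperparameter names and timings are invented.

```python
# Illustrative stand-in for the trained regressor: estimate training time for a
# proposed hyperparameter configuration from measured "landmark" configurations
# using inverse-distance weighting. All names and numbers are hypothetical.

def predict_training_time(proposed, landmarks):
    """proposed: dict of hyperparameter -> value.
    landmarks: list of (config_dict, measured_seconds) pairs."""
    weights, total = [], 0.0
    for config, seconds in landmarks:
        # Euclidean distance in hyperparameter space
        dist = sum((proposed[k] - config[k]) ** 2 for k in proposed) ** 0.5
        if dist == 0:
            return seconds          # exact landmark match: reuse its timing
        w = 1.0 / dist
        weights.append((w, seconds))
        total += w
    return sum(w * s for w, s in weights) / total

landmarks = [({"depth": 2, "trees": 10}, 1.0),
             ({"depth": 8, "trees": 100}, 9.0)]
estimate = predict_training_time({"depth": 2, "trees": 10}, landmarks)
```

A real implementation would replace the distance weighting with a regressor trained on dataset meta-features and landmark timings, as the abstract describes.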
-
Patent number: 10740362
Abstract: Aspects generate a container structure wherein processors are configured to compare attributes of a new container to attributes of each of a plurality of existing containers within a container model as a function of a weighted Jaccard coefficient distance matrix. The aspects identify a neighbor subset of the model containers that each have attributes nearest to the new container's attributes, relative to the remaining model containers; select, as a nearest container, a container of the neighbor subset whose statistical properties most closely match those calculated for the new container's attributes; and generate a sizing recommendation for the new container to meet future utilization needs predicted as a function of usage pattern data of the nearest container.
Type: Grant
Filed: December 22, 2017
Date of Patent: August 11, 2020
Assignee: International Business Machines Corporation
Inventors: Anmol Sandeep Agrawal, Albee Jhoney, Suman Mondal, Pothuraju Srinivas, Vijay K. Sukthankar
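The weighted Jaccard coefficient distance named in this abstract can be sketched as follows. This is a minimal illustration, not the patent's implementation; the attribute names and weights are invented.

```python
# Sketch of a weighted Jaccard coefficient distance between two containers'
# resource-attribute vectors. Attribute names and weights are hypothetical.

def weighted_jaccard_distance(a, b, weights):
    """a, b: dicts of attribute -> non-negative value; weights: attribute -> weight."""
    num = den = 0.0
    for attr, w in weights.items():
        x, y = a.get(attr, 0.0), b.get(attr, 0.0)
        num += w * min(x, y)
        den += w * max(x, y)
    # identical vectors give distance 0.0; disjoint ones give 1.0
    return 1.0 - (num / den) if den else 0.0

new_container = {"cpu": 2.0, "mem_gb": 4.0, "iops": 100.0}
old_container = {"cpu": 2.0, "mem_gb": 8.0, "iops": 100.0}
attr_weights = {"cpu": 1.0, "mem_gb": 2.0, "iops": 0.5}
d = weighted_jaccard_distance(new_container, old_container, attr_weights)
```

Computing this distance against every existing container yields the distance matrix from which the nearest-neighbor subset is drawn.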
-
Publication number: 20200221466
Abstract: The present disclosure relates to a method and system for selecting a communication channel from a plurality of available channels in an infrastructure basic service set network having at least an access point and a plurality of access points, the method comprising: monitoring the available channels for a predetermined scan time by a co-located radio station to capture the signal level from the plurality of access points on each channel; calculating the weighted channel power level of each channel by the processor to determine the channel with minimum power and the free channels among the available channels; receiving the signal level of each channel from the co-located radio station at the processor and calculating the weighted channel power level of each channel by the processor to select the communication channel, wherein selecting the communication channel comprises: identifying free channels; where there is only one free channel, that free channel is selected; where there is a plurality of free channels, the free channel with minimum interference…
Type: Application
Filed: August 13, 2018
Publication date: July 9, 2020
Inventors: Vipin Tyagi, N.V. Vishnu Murthy, Sandeep Agrawal, Suja S., Naveen K. V., Krishnam Raju M., Manjula B. R., Kavita Mathur, Diganta Jena, Sridhar K., Aswathy A., Biswaranjan Sahoov
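The selection rule the (truncated) abstract describes can be sketched briefly. The threshold, power values, and fallback behavior below are assumptions for illustration, not the claimed method.

```python
# Minimal sketch of the channel-selection rule: from a scan of weighted channel
# power levels, pick the lone free channel, or the free channel with the least
# interference. The dBm threshold and readings are made up.

def select_channel(weighted_power_dbm, free_threshold_dbm=-85.0):
    """weighted_power_dbm: dict of channel number -> weighted power level (dBm)."""
    free = [c for c, p in weighted_power_dbm.items() if p < free_threshold_dbm]
    if len(free) == 1:
        return free[0]                       # only one free channel: take it
    if free:                                 # several free: least interference
        return min(free, key=lambda c: weighted_power_dbm[c])
    # no free channel: fall back to the channel with minimum power
    return min(weighted_power_dbm, key=weighted_power_dbm.get)

scan = {1: -70.0, 6: -90.0, 11: -95.0}       # channels 6 and 11 are "free"
best = select_channel(scan)
```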
-
Publication number: 20200125961
Abstract: Techniques are described for generating and applying mini-machine learning variants of machine learning algorithms to save computational resources in the tuning and selection of machine learning algorithms. In an embodiment, at least one of the hyper-parameter values of a reference variant is modified to a new hyper-parameter value, thereby generating a new variant of the machine learning algorithm from the reference variant. A performance score is determined for the new variant using a training dataset, the performance score representing the accuracy of the new machine learning model for the training dataset. By training the new variant of the machine learning algorithm with the training dataset, a cost metric of the new variant is measured by measuring the computing resources used for the training.
Type: Application
Filed: October 19, 2018
Publication date: April 23, 2020
Inventors: Sandeep Agrawal, Venkatanathan Varadarajan, Sam Idicula, Nipun Agarwal
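The variant-generation and costing loop can be sketched as below. The `train` stub, hyperparameter names, and the use of wall time as the cost metric are assumptions for illustration only.

```python
# Hypothetical sketch: derive a "mini" variant from a reference configuration
# by shrinking one hyperparameter, then record a performance score and a cost
# metric from timing the training. train() here is a placeholder callable.

import time

def make_variant(reference, hyperparam, new_value):
    variant = dict(reference)       # copy, then override one hyperparameter
    variant[hyperparam] = new_value
    return variant

def score_variant(variant, train):
    start = time.perf_counter()
    accuracy = train(variant)                # train on the training dataset
    cost = time.perf_counter() - start       # resource-usage proxy: wall time
    return accuracy, cost

reference = {"n_estimators": 500, "max_depth": 12}
mini = make_variant(reference, "n_estimators", 50)
acc, cost = score_variant(mini, train=lambda v: 0.9)  # stubbed trainer
```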
-
Publication number: 20190244139
Abstract: Techniques are provided herein for optimal initialization of value ranges of machine learning algorithm hyperparameters and other predictions based on dataset meta-features. In an embodiment, for each particular hyperparameter of a machine learning algorithm, a computer invokes, based on an inference dataset, a distinct trained metamodel for the particular hyperparameter to detect an improved subrange of possible values for that hyperparameter. The machine learning algorithm is configured based on the improved subranges of possible values for the hyperparameters. The machine learning algorithm is invoked to obtain a result. In an embodiment, a gradient-based search space reduction (GSSR) finds an optimal value within the improved subrange of values for the particular hyperparameter. In an embodiment, the metamodel is trained based on performance data from exploratory sampling of the configuration hyperspace, such as by GSSR.
Type: Application
Filed: March 7, 2018
Publication date: August 8, 2019
Inventors: Venkatanathan Varadarajan, Sandeep Agrawal, Sam Idicula, Nipun Agarwal
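A per-hyperparameter metamodel mapping dataset meta-features to a narrowed subrange might look like the toy below. The linear metamodel, the meta-feature names, and the 20% width are invented for illustration; the patent's metamodels are trained on exploratory sampling data.

```python
# Illustrative sketch: one "metamodel" per hyperparameter maps dataset
# meta-features to an improved value subrange inside the full legal range.
# The linear form and all coefficients are assumptions.

def predict_subrange(meta_features, coeffs, full_range):
    """Predict a narrowed (low, high) subrange inside full_range."""
    center = sum(coeffs[k] * v for k, v in meta_features.items())
    lo, hi = full_range
    center = max(lo, min(hi, center))        # clamp into the legal range
    width = (hi - lo) * 0.2                  # keep 20% of the original range
    return (max(lo, center - width / 2), min(hi, center + width / 2))

meta = {"n_rows_log10": 5.0, "n_cols_log10": 2.0}
coeffs = {"n_rows_log10": 20.0, "n_cols_log10": 10.0}
# e.g. a subrange for an "n_estimators"-style hyperparameter in [1, 1000]
sub = predict_subrange(meta, coeffs, full_range=(1.0, 1000.0))
```

The narrowed subrange then seeds a finer search (such as GSSR) instead of the full range.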
-
Publication number: 20190197178
Abstract: Aspects generate a container structure wherein processors are configured to compare attributes of a new container to attributes of each of a plurality of existing containers within a container model as a function of a weighted Jaccard coefficient distance matrix. The aspects identify a neighbor subset of the model containers that each have attributes nearest to the new container's attributes, relative to the remaining model containers; select, as a nearest container, a container of the neighbor subset whose statistical properties most closely match those calculated for the new container's attributes; and generate a sizing recommendation for the new container to meet future utilization needs predicted as a function of usage pattern data of the nearest container.
Type: Application
Filed: December 22, 2017
Publication date: June 27, 2019
Inventors: Anmol Sandeep Agrawal, Albee Jhoney, Suman Mondal, Pothuraju Srinivas, Vijay K. Sukthankar
-
Publication number: 20190095818
Abstract: Herein, horizontally scalable techniques efficiently configure machine learning algorithms for optimal accuracy and without informed inputs. In an embodiment, for each particular hyperparameter, and for each epoch, a computer processes the particular hyperparameter. An epoch explores one hyperparameter based on hyperparameter tuples. A respective score is calculated from each tuple. The tuple contains a distinct combination of values, each of which is contained in a value range of a distinct hyperparameter. All values of a tuple that belong to the particular hyperparameter are distinct. All values of a tuple that belong to other hyperparameters are held constant. The value range of the particular hyperparameter is narrowed based on an intersection point of a first line based on the scores and a second line based on the scores. A machine learning algorithm is optimally configured from repeatedly narrowed value ranges of hyperparameters. The configured algorithm is invoked to obtain a result.
Type: Application
Filed: January 31, 2018
Publication date: March 28, 2019
Inventors: Venkatanathan Varadarajan, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
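The line-intersection narrowing step can be sketched concretely. The least-squares fit and the sample scores below are illustrative assumptions; the abstract only specifies that two lines based on the scores are intersected.

```python
# Sketch of narrowing a hyperparameter's range via the intersection of two
# lines fitted to scores on either flank of the best region. The fitting
# choice and the toy scores are assumptions.

def fit_line(points):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(points)
    sx = sum(x for x, _ in points); sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points); sxy = sum(x * y for x, y in points)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return slope, (sy - slope * sx) / n

def intersection_x(line1, line2):
    (m1, b1), (m2, b2) = line1, line2
    return (b2 - b1) / (m1 - m2)    # x where the two lines cross

# Scores rise then fall across the hyperparameter's range; fit a line to each
# flank and take their intersection as the center of the narrowed range.
rising = [(1.0, 0.2), (2.0, 0.4), (3.0, 0.6)]
falling = [(5.0, 0.6), (6.0, 0.4), (7.0, 0.2)]
peak = intersection_x(fit_line(rising), fit_line(falling))
```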
-
Publication number: 20190095819
Abstract: Herein are techniques for automatic tuning of hyperparameters of machine learning algorithms. System throughput is maximized by horizontally scaling and asynchronously dispatching the configuration, training, and testing of an algorithm. In an embodiment, a computer stores a best cost achieved by executing a target model based on best values of the target algorithm's hyperparameters. The best values and their cost are updated by epochs that execute asynchronously. Each epoch has asynchronous costing tasks that explore a distinct hyperparameter. Each costing task has a sample of exploratory values that differs from the best values along the distinct hyperparameter. The asynchronous costing tasks of a same epoch have different values for the distinct hyperparameter, which accomplishes an exploration. In an embodiment, an excessive update of best values or best cost creates a major epoch for exploration in a subspace that is more or less unrelated to other epochs, thereby avoiding local optima.
Type: Application
Filed: September 21, 2018
Publication date: March 28, 2019
Inventors: Venkatanathan Varadarajan, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
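One epoch of asynchronous costing tasks can be sketched with a thread pool. The `cost` callable stands in for the configure/train/test pipeline, and the toy quadratic cost surface is an assumption.

```python
# Hedged sketch of one epoch: each costing task costs a different exploratory
# value of one hyperparameter while all other hyperparameters stay at their
# best-known values. cost() is a stand-in for configure/train/test.

from concurrent.futures import ThreadPoolExecutor

def run_epoch(best_config, hyperparam, exploratory_values, cost):
    def costing_task(value):
        trial = dict(best_config, **{hyperparam: value})
        return cost(trial), trial
    with ThreadPoolExecutor() as pool:       # tasks dispatched asynchronously
        results = list(pool.map(costing_task, exploratory_values))
    return min(results, key=lambda r: r[0])  # (best_cost, best_config)

best = {"lr": 0.1, "batch": 64}
# toy cost surface: quadratic in the learning rate, minimum at lr = 0.05
best_cost, new_best = run_epoch(best, "lr", [0.01, 0.05, 0.2],
                                cost=lambda c: (c["lr"] - 0.05) ** 2)
```

A full tuner would run one such epoch per hyperparameter and fold improved costs back into the shared best configuration.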
-
Publication number: 20190095756
Abstract: Techniques are provided for selection of machine learning algorithms based on performance predictions by trained algorithm-specific regressors. In an embodiment, a computer derives meta-feature values from an inference dataset by, for each meta-feature, deriving a respective meta-feature value from the inference dataset. For each trainable algorithm and each regression meta-model that is respectively associated with the algorithm, a respective score is calculated by invoking the meta-model based on at least one of: a respective subset of meta-feature values, and/or hyperparameter values of a respective subset of hyperparameters of the algorithm. The algorithm(s) are selected based on the respective scores. Based on the inference dataset, the selected algorithm(s) may be invoked to obtain a result. In an embodiment, the trained regressors are distinctly configured artificial neural networks. In an embodiment, the trained regressors are contained within algorithm-specific ensembles.
Type: Application
Filed: January 30, 2018
Publication date: March 28, 2019
Inventors: Sandeep Agrawal, Sam Idicula, Venkatanathan Varadarajan, Nipun Agarwal
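The selection step can be sketched as scoring each candidate algorithm with its own regressor over dataset meta-features. The algorithm names, meta-features, and scoring functions below are invented placeholders, not the patent's trained neural-network regressors.

```python
# Illustrative sketch: score each candidate algorithm with its own trained
# regressor (here plain callables) over dataset meta-features, then pick the
# highest-scoring algorithm. All names and scores are hypothetical.

def select_algorithm(meta_feature_values, regressors):
    """regressors: dict of algorithm name -> callable(meta_features) -> score."""
    scores = {name: reg(meta_feature_values) for name, reg in regressors.items()}
    return max(scores, key=scores.get), scores

meta = {"n_rows": 10_000, "n_cols": 20}
regressors = {
    "random_forest": lambda m: 0.85,
    "svm": lambda m: 0.70 if m["n_rows"] > 5_000 else 0.90,
}
chosen, scores = select_algorithm(meta, regressors)
```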
-
Publication number: 20190095399
Abstract: Techniques are described herein for performing efficient matrix multiplication in architectures with scratchpad memories or associative caches, using asymmetric allocation of space for the different matrices. The system receives a left matrix and a right matrix. In an embodiment, the system allocates, in a scratchpad memory, asymmetric memory space for tiles of each of the two matrices as well as a dot product matrix. The system then performs dot product matrix multiplication involving the tiles of the left and right matrices, storing the resulting dot product values in the corresponding allocated dot product matrix tiles. Finally, the system writes the stored dot product values from the scratchpad memory into main memory.
Type: Application
Filed: September 26, 2017
Publication date: March 28, 2019
Inventors: Gaurav Chadha, Sam Idicula, Sandeep Agrawal, Nipun Agarwal
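The tiled dot-product structure can be sketched in plain Python. The asymmetric tile shapes below (different sizes along each matrix dimension) merely mimic asymmetric scratchpad allocation; the actual sizes would be derived from the scratchpad capacity.

```python
# Sketch of tiled matrix multiplication with asymmetric tile shapes along the
# left-matrix rows, the shared dimension, and the right-matrix columns,
# mimicking asymmetric scratchpad allocation. Tile sizes here are arbitrary.

def tiled_matmul(A, B, tile_m=2, tile_k=4, tile_n=1):
    m, k, n = len(A), len(A[0]), len(B[0])
    C = [[0.0] * n for _ in range(m)]        # the dot product matrix
    for i0 in range(0, m, tile_m):           # tile of A rows
        for j0 in range(0, n, tile_n):       # tile of B columns
            for p0 in range(0, k, tile_k):   # tile of the shared dimension
                # multiply one pair of tiles, accumulating into C's tile
                for i in range(i0, min(i0 + tile_m, m)):
                    for j in range(j0, min(j0 + tile_n, n)):
                        C[i][j] += sum(A[i][p] * B[p][j]
                                       for p in range(p0, min(p0 + tile_k, k)))
    return C

A = [[1.0, 2.0], [3.0, 4.0]]
B = [[5.0, 6.0], [7.0, 8.0]]
C = tiled_matmul(A, B)
```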
-
Patent number: 8331427
Abstract: A universal asynchronous receiver-transmitter module that includes a sampling controller that assigns a variable number of active edges in a clock signal to respective bits in a serial data signal. A serial data reception path derives a bit from the serial data signal on the basis of the variable number of active edges that the sampling controller has assigned to the bit.
Type: Grant
Filed: September 7, 2010
Date of Patent: December 11, 2012
Assignee: ST Ericsson SA
Inventors: Sandeep Agrawal, Sathya Jaganathan, Johannes Boonstra
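The idea of assigning a variable number of clock edges per bit can be illustrated with a fractional accumulator: when the clock-to-baud ratio is not an integer, some bits get one extra edge so the average sampling rate stays correct. This accumulator scheme is an assumption for illustration, not necessarily the patented circuit.

```python
# Sketch: distribute clock active edges over serial bits when clock/baud is
# fractional, carrying the fractional remainder forward so the total edge
# count stays exact. The scheme is an illustrative assumption.

def edges_per_bit(clock_hz, baud_rate, n_bits):
    ideal = clock_hz / baud_rate             # edges per bit, often fractional
    counts, residue = [], 0.0
    for _ in range(n_bits):
        residue += ideal
        n = round(residue)                   # whole edges granted to this bit
        counts.append(n)
        residue -= n                         # carry the fraction forward
    return counts

# 10 clock edges spread over 3 bits: the middle bit absorbs the extra edge
counts = edges_per_bit(clock_hz=10, baud_rate=3, n_bits=3)
```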
-
Patent number: 8266360
Abstract: An electronic circuit has an interface for an I2C-bus. The interface comprises a first node for a clock line of the I2C-bus; a second node for a data line of the I2C-bus; and an I2C-bus controller for controlling an operation of the interface under combined control of the clock line and the data line. The circuit has a plurality of further nodes for connecting to a plurality of further data lines. The controller has an operational mode for control of receiving from the further nodes, or for control of supplying to the further nodes, a plurality of data bits in parallel under combined control of the clock line and the data line.
Type: Grant
Filed: August 13, 2008
Date of Patent: September 11, 2012
Assignee: NXP B.V.
Inventor: Sandeep Agrawal
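The parallel mode can be illustrated with a small simulation: at each clock pulse, the controller samples one bit from every further data line and packs them into a word. The LSB-first line ordering below is an assumption, not specified by the abstract.

```python
# Hypothetical sketch of the parallel operational mode: sample one bit from
# each further data line at every clock pulse and pack the bits into a word.
# LSB-first line ordering is an assumption.

def sample_parallel_words(n_pulses, further_lines):
    """further_lines: list of per-line bit sequences, one bit per clock pulse."""
    words = []
    for t in range(n_pulses):
        word = 0
        for i, line in enumerate(further_lines):
            word |= (line[t] & 1) << i       # line i supplies bit i of the word
        words.append(word)
    return words

# two clock pulses, four further data lines
lines = [[1, 0],   # line 0 -> bit 0
         [0, 1],   # line 1 -> bit 1
         [1, 1],   # line 2 -> bit 2
         [0, 0]]   # line 3 -> bit 3
words = sample_parallel_words(2, lines)
```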
-
Publication number: 20110197009
Abstract: An electronic circuit has an interface for an I2C-bus. The interface comprises a first node for a clock line of the I2C-bus; a second node for a data line of the I2C-bus; and an I2C-bus controller for controlling an operation of the interface under combined control of the clock line and the data line. The circuit has a plurality of further nodes for connecting to a plurality of further data lines. The controller has an operational mode for control of receiving from the further nodes, or for control of supplying to the further nodes, a plurality of data bits in parallel under combined control of the clock line and the data line.
Type: Application
Filed: August 13, 2008
Publication date: August 11, 2011
Applicant: NXP B.V.
Inventor: Sandeep Agrawal
-
Publication number: 20110116557
Abstract: A universal asynchronous receiver-transmitter module that includes a sampling controller that assigns a variable number of active edges in a clock signal to respective bits in a serial data signal. A serial data reception path derives a bit from the serial data signal on the basis of the variable number of active edges that the sampling controller has assigned to the bit.
Type: Application
Filed: September 7, 2010
Publication date: May 19, 2011
Applicant: ST-Ericsson SA
Inventors: Sandeep Agrawal, Sathya Jaganathan, Johannes Boonstra