Patents by Inventor Nilesh Jain

Nilesh Jain has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240144030
    Abstract: Methods, apparatus, systems, and articles of manufacture to modify pre-trained models to apply neural architecture search are disclosed. Example instructions, when executed, cause processor circuitry to at least access a pre-trained machine learning model, create a super-network based on the pre-trained machine learning model, create a plurality of subnetworks based on the super-network, and search the plurality of subnetworks to select a subnetwork.
    Type: Application
    Filed: June 8, 2022
    Publication date: May 2, 2024
    Inventors: Juan Pablo Muñoz, Nilesh Jain, Chaunté Lacewell, Alexander Kozlov, Nikolay Lyalyushkin, Vasily Shamporov, Anastasia Senina
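
The entry above describes deriving a super-network from a pre-trained model, sampling subnetworks, and searching among them. As a rough illustration only, here is a minimal sketch of that kind of workflow in plain Python; the layer widths, sampling strategy, and scoring heuristic are invented for the example and are not the patented method.

```python
import random

# Hypothetical pre-trained model: layer name -> channel width.
pretrained = {"conv1": 64, "conv2": 128, "conv3": 256}

def build_supernet(model):
    """Give each layer a set of candidate widths (elastic choices)."""
    return {name: [width // 4, width // 2, width] for name, width in model.items()}

def sample_subnetwork(supernet):
    """Pick one candidate width per layer to form a subnetwork."""
    return {name: random.choice(choices) for name, choices in supernet.items()}

def score(subnet):
    """Placeholder objective: prefer smaller subnetworks (proxy for latency)."""
    return -sum(subnet.values())

supernet = build_supernet(pretrained)
candidates = [sample_subnetwork(supernet) for _ in range(20)]
print("selected subnetwork:", max(candidates, key=score))
```
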
  • Patent number: 11914869
    Abstract: Systems and methods for cognitive encryption of data are disclosed. The methods may include maintaining a plurality of data storage systems in communication with an external metadata management system, operating the metadata management system to store metadata corresponding to data residing on the plurality of data storage systems, identifying a candidate data set residing on at least one of the plurality of data storage systems on which at least one security action should be performed using information included in the metadata management system, and in response to identifying the candidate data set, identifying the at least one security action.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: February 27, 2024
    Assignee: International Business Machines Corporation
    Inventors: Joseph Dain, Nilesh P. Bhosale, Abhishek Jain, Gregory Kishi
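
The abstract above centers on using an external metadata catalog to pick out data sets that need a security action. The sketch below illustrates that general idea under invented assumptions (catalog fields and a "PII implies encrypt" policy); it is not the claimed system.

```python
# Illustrative metadata catalog for data sets spread across storage systems.
catalog = [
    {"dataset": "hr_records", "system": "nas-01", "tags": {"pii"}, "encrypted": False},
    {"dataset": "build_logs", "system": "obj-02", "tags": set(), "encrypted": False},
    {"dataset": "payroll", "system": "nas-01", "tags": {"pii", "finance"}, "encrypted": True},
]

def find_candidates(catalog):
    """Use the metadata alone to flag data sets that need a security action."""
    for entry in catalog:
        if "pii" in entry["tags"] and not entry["encrypted"]:
            yield entry["dataset"], entry["system"], "encrypt"

for dataset, system, action in find_candidates(catalog):
    print(f"recommended action: {action} {dataset} on {system}")
```
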
  • Publication number: 20240045685
    Abstract: Systems, methods, and apparatuses relating to sparsity-based FMA. In some examples, an instance of a single FMA instruction has one or more fields for an opcode, one or more fields to identify a source/destination matrix operand, one or more fields to identify a first plurality of source matrix operands, one or more fields to identify a second plurality of matrix operands, wherein the opcode is to indicate that execution circuitry is to select a proper subset of FP8 data elements from the first plurality of source matrix operands based on sparsity controls from a first matrix operand of the second plurality of matrix operands and perform an FMA.
    Type: Application
    Filed: October 1, 2022
    Publication date: February 8, 2024
    Inventors: Menachem Adelman, Amit Gradstein, Alexander Heinecke, Christopher Hughes, Naveen Mellempudi, Shahar Mizrahi, Dana Rip, Simon Rubanovich, Uri Sherman, Guy Boudoukh, Evangelos Georganas, Nilesh Jain, Barukh Ziv
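
To make the sparsity-controlled FMA idea above concrete, here is a software emulation of a multiply-accumulate in which control indices pick two of every four positions (2:4-style structured sparsity). The data layout, group size, and use of float32 in place of FP8 are assumptions for the sketch, not the instruction's actual semantics.

```python
import numpy as np

def sparse_fma(acc, a_compressed, b, controls, group=4, keep=2):
    """Return acc plus the products of compressed values with the b elements
    selected by the per-group sparsity controls."""
    total = float(acc)
    for g, ctrl in enumerate(controls):            # ctrl: kept positions in group g
        for k, pos in enumerate(ctrl):
            total += float(a_compressed[g * keep + k]) * float(b[g * group + pos])
    return total

b = np.arange(8, dtype=np.float32)                 # one dense row of 8 values
controls = [(0, 3), (1, 2)]                        # two kept positions per group of 4
a_compressed = np.array([2.0, 1.0, 0.5, 4.0], dtype=np.float32)  # 2 values per group
print(sparse_fma(0.0, a_compressed, b, controls))  # 2*0 + 1*3 + 0.5*5 + 4*6 = 29.5
```
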
  • Publication number: 20240029455
    Abstract: Systems, apparatuses and methods may provide for technology that encodes multi-view visual data into latent features via an aggregator encoder, decodes the latent features into one or more novel target views different from views of the multi-view visual data via a rendering decoder, and decodes the latent features into an object label via a label decoder. The operations to decode the latent features via the rendering decoder and to decode the latent features via the label decoder occur at least partially at the same time. The operation to encode, via the aggregator encoder, the multi-view visual data into the latent features further includes operations to: perform, via the aggregator encoder, semantic object recognition operations based on radiance field view synthesis operations, and perform, via the aggregator encoder, radiance field view synthesis operations based on semantic object recognition operations.
    Type: Application
    Filed: September 27, 2023
    Publication date: January 25, 2024
    Inventors: Peixi Xiong, Nilesh Jain, Ravishankar Iyer, Mrutunjayya Mrutunjayya
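
A toy stand-in for the aggregator-encoder / dual-decoder layout described above, written with PyTorch: views are encoded into a shared latent that one head decodes into a novel view and another decodes into an object label. Layer choices, sizes, and mean-pooling aggregation are illustrative assumptions, not the claimed architecture.

```python
import torch
from torch import nn

class AggregatorEncoder(nn.Module):
    """Encode a set of views into one shared latent vector (mean-pooled)."""
    def __init__(self, view_dim=256, latent_dim=64):
        super().__init__()
        self.proj = nn.Linear(view_dim, latent_dim)

    def forward(self, views):                  # views: (batch, n_views, view_dim)
        return self.proj(views).mean(dim=1)    # aggregate across views

class RenderingDecoder(nn.Module):
    """Decode the latent into a (flattened) novel target view."""
    def __init__(self, latent_dim=64, view_dim=256):
        super().__init__()
        self.net = nn.Linear(latent_dim, view_dim)

    def forward(self, latent):
        return self.net(latent)

class LabelDecoder(nn.Module):
    """Decode the same latent into object-label logits."""
    def __init__(self, latent_dim=64, n_classes=10):
        super().__init__()
        self.net = nn.Linear(latent_dim, n_classes)

    def forward(self, latent):
        return self.net(latent)

views = torch.randn(2, 4, 256)                 # 2 scenes, 4 flattened views each
latent = AggregatorEncoder()(views)
novel_view = RenderingDecoder()(latent)        # shape (2, 256)
label_logits = LabelDecoder()(latent)          # shape (2, 10)
print(novel_view.shape, label_logits.shape)
```
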
  • Publication number: 20240020219
    Abstract: An aspect of the present disclosure determines test cases to be run upon changes in software application code. In one embodiment, a system receives a test suite containing multiple test cases designed to perform the testing of a software application, the software application containing one or more components. The system executes each test case to determine a corresponding sequence of components executed in the software application for the test case, and then stores a dependency data indicating for each test case the corresponding determined sequence of components. Upon determining that a first component has been changed, the system identifies a first set of test cases that cause execution of the first component by performing a reverse look-up in the dependency data. The system then includes the identified first set of test cases in the test cases to be run for re-testing the software application.
    Type: Application
    Filed: July 14, 2022
    Publication date: January 18, 2024
    Inventors: Nilesh Jain, Krishnananda Subbarao
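
The reverse look-up described in the entry above can be illustrated with a small dependency table: record which components each test exercises, then invert the relationship to find the tests affected by a changed component. The test and component names below are made up.

```python
# Dependency data: test case -> sequence of components it executes.
dependency_data = {
    "test_login": ["auth", "session"],
    "test_checkout": ["cart", "payment", "session"],
    "test_search": ["catalog"],
}

def tests_for_changed_component(dependency_data, changed):
    """Reverse look-up: which test cases cause execution of the changed component?"""
    return sorted(test for test, components in dependency_data.items()
                  if changed in components)

print(tests_for_changed_component(dependency_data, "session"))
# ['test_checkout', 'test_login']
```
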
  • Publication number: 20240007414
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed to optimize resources in edge networks. An example apparatus includes agent managing circuitry to invoke an exploration agent to identify platform resource devices, select a first one of the identified platform resource devices, and generate first optimization metrics for the workload corresponding to the first one of the identified platform resource devices, the first optimization metrics corresponding to a first path. The example agent is to further select a second one of the identified platform resource devices, generate second optimization metrics for the workload corresponding to the second one of the identified platform resource devices, the second optimization metrics corresponding to a second path.
    Type: Application
    Filed: June 25, 2021
    Publication date: January 4, 2024
    Inventors: Nilesh Jain, Rajesh Poornachandran, Eriko Nurvitadhi, Anahita Bhiwandiwalla, Juan Pablo Munoz, Ravishankar Iyer, Chaunte W. Lacewell
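
A much-simplified version of the exploration loop sketched above: the agent walks candidate platform resource devices, generates optimization metrics for the workload on each one (a "path"), and keeps the best path. The devices and latency/power figures are fabricated, and the metric is an assumed weighted sum rather than the claimed method.

```python
devices = {
    "cpu-node": {"latency_ms": 42.0, "power_w": 35.0},
    "gpu-node": {"latency_ms": 11.0, "power_w": 120.0},
    "fpga-node": {"latency_ms": 18.0, "power_w": 40.0},
}

def path_metric(stats, latency_weight=1.0, power_weight=0.2):
    """Lower is better: assumed weighted combination of latency and power."""
    return latency_weight * stats["latency_ms"] + power_weight * stats["power_w"]

paths = {name: path_metric(stats) for name, stats in devices.items()}
best_device = min(paths, key=paths.get)
print("selected path:", best_device, "metric:", paths[best_device])
```
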
  • Publication number: 20230409326
    Abstract: Techniques and mechanisms for processor circuitry to execute a load and expand instruction of an instruction set to generate decompressed matrix data. In an embodiment, the instruction comprises a source operand which indicates a location from which compressed matrix data, and corresponding metadata, are to be accessed. A destination operand of the instruction indicates a location which is to receive decompressed matrix data, which is generated, during execution of the instruction, based on the compressed matrix data and the corresponding metadata. The metadata comprises compression mask information which identifies which elements of the matrix have been masked from the compressed matrix data. In another embodiment, the instruction further comprises a count operand which identifies a total number of the unmasked matrix elements which are represented in the compressed matrix data.
    Type: Application
    Filed: June 15, 2022
    Publication date: December 21, 2023
    Applicant: Intel Corporation
    Inventors: Menachem Adelman, Amit Gradstein, Simon Rubanovich, Barukh Ziv, Uri Sherman, Dana Rip, Shahar Mizrahi, Dan Baum, Rinat Rappoport, Nilesh Jain, Zeev Sperber, Gideon Stupp, Alexander Heinecke, Christopher Hughes, Evangelos Georganas
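
The load-and-expand behavior described above can be emulated in software: compressed (unmasked) elements plus a compression mask are expanded back into a dense tile. The tile shape and the zero-fill convention for masked positions are assumptions for this sketch, not the instruction's defined behavior.

```python
import numpy as np

def load_and_expand(compressed, mask, shape):
    """Scatter the compressed values into the positions where mask is True,
    filling masked-out positions with zero."""
    dense = np.zeros(shape, dtype=compressed.dtype)
    dense[mask] = compressed        # mask has exactly compressed.size True entries
    return dense

mask = np.array([[True, False, True, False],
                 [False, True, False, True]])
compressed = np.array([1.5, 2.0, 3.0, 4.5], dtype=np.float32)
print(load_and_expand(compressed, mask, mask.shape))
```
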
  • Patent number: 11637687
    Abstract: Methods, apparatus, systems and articles of manufacture to determine provenance for data supply chains are disclosed. Example instructions cause a machine to at least, in response to data being generated, generate a local data object and object metadata corresponding to the data; hash the local data object; generate a hash of a label of the local data object; generate a hierarchical data structure for the data including the hash of the local data object and the hash of the label of the local data object; generate a data supply chain object including the hierarchical data structure; and transmit the data and the data supply chain object to a device that requested access to the data.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 25, 2023
    Assignee: Intel Corporation
    Inventors: Ned Smith, Francesc Guim Bernat, Sanjay Bakshi, Paul O'Neill, Ben McCahill, Brian A. Keating, Adrian Hoban, Kapil Sood, Mona Vij, Nilesh Jain, Rajesh Poornachandran, Trevor Cooper, Kshitij A. Doshi, Marcin Spoczynski
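
A minimal sketch of the provenance flow in the abstract above: hash the local data object and its label, fold both hashes into a hierarchical structure, and wrap that in a data supply chain object that travels with the data. The JSON layout and field names are assumptions, not the patented format.

```python
import hashlib
import json

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def build_supply_chain_object(data: bytes, label: str) -> dict:
    """Hash the data object and its label, then fold both into a hierarchy."""
    hierarchy = {
        "object_hash": sha256_hex(data),
        "label_hash": sha256_hex(label.encode()),
    }
    hierarchy["root"] = sha256_hex(json.dumps(hierarchy, sort_keys=True).encode())
    return {"provenance": hierarchy}

payload = b"sensor readings: 21.5, 21.7, 21.6"
print(json.dumps(build_supply_chain_object(payload, "telemetry/room-7"), indent=2))
```
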
  • Publication number: 20230102279
    Abstract: Systems, methods, and apparatuses relating to sparsity-based FMA. In some examples, an instance of a single FMA instruction has one or more fields for an opcode, one or more fields to identify a source/destination matrix operand, one or more fields to identify a first plurality of source matrix operands, one or more fields to identify a second plurality of matrix operands, wherein the opcode is to indicate that execution circuitry is to select a proper subset of data elements from the first plurality of source matrix operands based on sparsity controls from a first matrix operand of the second plurality of matrix operands and perform an FMA.
    Type: Application
    Filed: September 25, 2021
    Publication date: March 30, 2023
    Inventors: Menachem Adelman, Robert Valentine, Dan Baum, Amit Gradstein, Simon Rubanovich, Regev Shemy, Zeev Sperber, Alexander Heinecke, Christopher Hughes, Evangelos Georganas, Mark Charney, Arik Narkis, Rinat Rappoport, Barukh Ziv, Yaroslav Pollak, Nilesh Jain, Yash Akhauri, Brinda Ganesh, Rajesh Poornachandran, Guy Boudoukh
  • Patent number: 11557085
    Abstract: Embodiments are directed to neural network processing for multi-object three-dimensional (3D) modeling. An embodiment of a computer-readable storage medium includes executable computer program instructions for obtaining data from multiple cameras, the data including multiple images, and generating a 3D model for 3D imaging based at least in part on the data from the cameras, wherein generating the 3D model includes one or more of performing processing with a first neural network to determine temporal direction based at least in part on motion of one or more objects identified in an image of the multiple images or performing processing with a second neural network to determine semantic content information for an image of the multiple images.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: January 17, 2023
    Assignee: Intel Corporation
    Inventors: Jill Boyce, Soethiha Soe, Selvakumar Panneer, Adam Lake, Nilesh Jain, Deepak Vembar, Glen J. Anderson, Varghese George, Carl Marshall, Scott Janus, Saurabh Tangri, Karthik Veeramani, Prasoonkumar Surti
  • Publication number: 20230011937
    Abstract: Example systems, methods, and apparatus to generate optimized models for Internet of Things devices are disclosed. An example apparatus includes a data receiver to collect data from a sensor of an Internet of Things device based on a first sampling frequency and a buffer having a first buffer size; a model trainer to train a model based on the data collected from the sensor; a buffer analyzer to select a second sampling frequency and to reduce the buffer to a second buffer size, the model trainer to update the model based on the second buffer size; and a platform analyzer to determine a duration of time that the Internet of Things device will take to analyze sensor data based on the updated model.
    Type: Application
    Filed: July 20, 2022
    Publication date: January 12, 2023
    Inventors: Nilesh Jain, Vui Seng Chua, Fahim Mohammad, Anindya Paul
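
The tuning loop implied by the abstract above might look roughly like the following: collect sensor data at an initial sampling frequency and buffer size, train a model, then shrink the buffer and sampling rate, retrain, and estimate how long the device would take to analyze data. The "model" and the timing formula are stand-ins for illustration.

```python
import random

def collect(freq_hz, buffer_size):
    """Pretend to sample a sensor at freq_hz until buffer_size readings are held."""
    return [random.random() for _ in range(buffer_size)]

def train(samples):
    """Trivial stand-in model: just remembers the mean and sample count."""
    return {"mean": sum(samples) / len(samples), "n": len(samples)}

def estimated_analysis_seconds(model, per_sample_cost=0.002):
    """Assumed linear cost model for on-device analysis time."""
    return model["n"] * per_sample_cost

# Initial sampling frequency and buffer size.
model = train(collect(freq_hz=100, buffer_size=1024))

# Second, reduced sampling frequency and buffer size; update the model.
model = train(collect(freq_hz=25, buffer_size=256))
print("estimated analysis time:", estimated_analysis_seconds(model), "seconds")
```
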
  • Publication number: 20220300795
    Abstract: Systems, apparatuses and methods may provide for technology that includes a performance-enhanced decompression pipeline having first decoder hardware to convert variable length weights to fixed length keys, wherein the variable length weights are non-uniform quantization values, and second decoder hardware to convert the fixed length keys to bit values. In one example, the fixed length keys are compressed representations of the variable length weights and the bit values are bit accurate representations of the fixed length keys.
    Type: Application
    Filed: June 9, 2022
    Publication date: September 22, 2022
    Inventors: Yash Akhauri, Nilesh Jain, Pasquale Cocchini, Eriko Nurvitadhi
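
A two-stage decode in the spirit of the entry above: a first stage maps variable-length codes (non-uniform quantization bins) to fixed-length keys, and a second stage maps those keys to bit-accurate values. The prefix codes and the value table are invented for the example.

```python
PREFIX_TO_KEY = {"0": 0, "10": 1, "110": 2, "111": 3}    # variable-length code -> key
KEY_TO_VALUE = {0: 0.0, 1: 0.25, 2: -0.5, 3: 1.0}        # key -> bit-accurate value

def decode(bitstream: str):
    keys, buf = [], ""
    for bit in bitstream:                  # first decoder: prefix codes -> fixed keys
        buf += bit
        if buf in PREFIX_TO_KEY:
            keys.append(PREFIX_TO_KEY[buf])
            buf = ""
    return [KEY_TO_VALUE[key] for key in keys]   # second decoder: keys -> values

print(decode("0101100111"))                # [0.0, 0.25, -0.5, 0.0, 1.0]
```
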
  • Patent number: 11411832
    Abstract: Example systems, methods, and apparatus to generate optimized models for Internet of Things devices are disclosed. An example apparatus includes a data receiver to collect data from a sensor of an Internet of Things device based on a first sampling frequency and a buffer having a first buffer size; a model trainer to train a model based on the data collected from the sensor; a buffer analyzer to select a second sampling frequency and to reduce the buffer to a second buffer size, the model trainer to update the model based on the second buffer size; and a platform analyzer to determine a duration of time that the Internet of Things device will take to analyze sensor data based on the updated model.
    Type: Grant
    Filed: December 28, 2018
    Date of Patent: August 9, 2022
    Assignee: Intel Corporation
    Inventors: Nilesh Jain, Vui Seng Chua, Fahim Mohammad, Anindya Paul
  • Publication number: 20220122215
    Abstract: Embodiments described herein include software, firmware, and hardware that provides techniques to enable deterministic scheduling across multiple general-purpose graphics processing units. One embodiment provides a multi-GPU architecture with uniform latency. One embodiment provides techniques to distribute memory output based on memory chip thermals. One embodiment provides techniques to enable thermally aware workload scheduling. One embodiment provides techniques to enable end to end contracts for workload scheduling on multiple GPUs.
    Type: Application
    Filed: March 14, 2020
    Publication date: April 21, 2022
    Applicant: Intel Corporation
    Inventors: Joydeep Ray, Selvakumar Panneer, Saurabh Tangri, Ben Ashbaugh, Scott Janus, Abhishek Appu, Varghese George, Ravishankar Iyer, Nilesh Jain, Pattabhiraman K, Altug Koker, Mike Macpherson, Josh Mastronarde, Elmoustapha Ould-Ahmed-Vall, Jayakrishna P. S, Eric Samson
  • Publication number: 20220114451
    Abstract: Methods, apparatus, systems, and articles of manufacture for data enhanced automated model generation are disclosed. Example instructions, when executed, cause at least one processor to access a request to generate a machine learning model to perform a selected task, generate task knowledge based on a previously generated machine learning model, create a search space based on the task knowledge, and generate a machine learning model using neural architecture search, the neural architecture search beginning based on the search space.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Chaunté W. Lacewell, Juan Pablo Muñoz, Rajesh Poornachandran, Nilesh Jain, Anahita Bhiwandiwalla, Eriko Nurvitadhi, Abhijit Davare
  • Publication number: 20220116284
    Abstract: Methods, apparatus, systems, and articles of manufacture for dynamic XPU hardware-aware deep learning model management are disclosed. An example method includes extracting a plurality of models from a dataset, respective ones of the plurality of models optimized for a selected quality of service (QoS) objective of a plurality of QoS objectives, identifying a plurality of feature differences between respective ones of the plurality of models, and identifying a plurality of feature similarities between respective ones of the plurality of models.
    Type: Application
    Filed: December 22, 2021
    Publication date: April 14, 2022
    Inventors: Ravishankar Iyer, Nilesh Jain, Juan Munoz, Eriko Nurvitadhi, Anahita Bhiwandiwalla, Rajesh Poornachandran
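
The comparison step described above, reduced to its simplest form: given models optimized for different QoS objectives, list which architectural features they share and where they differ. The feature sets below are invented.

```python
from itertools import combinations

# Models optimized for different QoS objectives, each described by a feature set.
models = {
    "latency_optimized": {"depthwise_conv", "int8", "early_exit"},
    "accuracy_optimized": {"full_conv", "fp16", "attention"},
    "energy_optimized": {"depthwise_conv", "int8", "pruning"},
}

for (name_a, feats_a), (name_b, feats_b) in combinations(models.items(), 2):
    shared = sorted(feats_a & feats_b)          # feature similarities
    different = sorted(feats_a ^ feats_b)       # feature differences
    print(f"{name_a} vs {name_b}: shared={shared} different={different}")
```
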
  • Publication number: 20220114495
    Abstract: Methods, apparatus, systems, and articles of manufacture are disclosed for composable machine learning compute nodes. An example apparatus includes interface circuitry to receive a workload, instructions in the apparatus, and processor circuitry to at least one of execute or instantiate the instructions to generate a first configuration of one or more machine-learning models based on a workload, generate a second configuration of hardware, determine an evaluation parameter based on an execution of the workload, the execution of the workload based on the first configuration and the second configuration, and, in response to the evaluation parameter satisfying a threshold, execute the one or more machine-learning models in the first configuration on the hardware in the second configuration, the one or more machine-learning models and the hardware to execute the workload.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Eriko Nurvitadhi, Rajesh Poornachandran, Abhijit Davare, Nilesh Jain, Chaunte Lacewell, Anahita Bhiwandiwalla, Juan Pablo Munoz, Andrew Boutros, Yash Akhauri
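
A compact version of the evaluate-then-deploy loop in the abstract above: pair a model configuration with a hardware configuration, compute an evaluation parameter for the workload, and accept the pairing only if it satisfies a threshold. The configurations, the throughput-style metric, and the threshold are illustrative assumptions.

```python
import itertools

model_configs = [{"layers": 4, "width": 128}, {"layers": 8, "width": 256}]
hardware_configs = [{"device": "cpu", "threads": 8}, {"device": "gpu", "batch": 32}]

def evaluate(model_cfg, hw_cfg):
    """Toy evaluation parameter: a made-up throughput-style score."""
    base = 1000 / (model_cfg["layers"] * model_cfg["width"])
    return base * (4.0 if hw_cfg["device"] == "gpu" else 1.0)

THRESHOLD = 5.0                                  # assumed acceptance threshold
for model_cfg, hw_cfg in itertools.product(model_configs, hardware_configs):
    score = evaluate(model_cfg, hw_cfg)
    if score >= THRESHOLD:
        print(f"execute: {model_cfg} on {hw_cfg} (score={score:.2f})")
```
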
  • Publication number: 20220114644
    Abstract: A recommendation system includes a recommendation model for generating a recommendation score for a user with respect to an item. The model is configured to receive a set of dense features, describing numerical information, and a set of sparse features, representing a subset of items from a relatively large group of items. To represent the subset of items in the sparse features, each item (or a symbol thereof) is processed by an encoder to represent each item with a plurality of positions in a sparse binary representation of the subset of items. The sparse binary representation is then processed by a model that determines a vector representation of the sparse category features used in the prediction in conjunction with the dense features.
    Type: Application
    Filed: December 21, 2021
    Publication date: April 14, 2022
    Inventors: Gopi Krishna Jha, Anthony Thomas, Nilesh Jain
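
The multi-position encoding described above resembles a Bloom-filter-style scheme: each item symbol is hashed to several positions in a fixed-width sparse binary vector so that a subset of a large catalog can be represented compactly. The width, number of hash functions, and hashing scheme below are assumptions for the example.

```python
import hashlib

def item_positions(item: str, width: int = 64, n_hashes: int = 3):
    """Hash one item symbol to several positions in a width-sized vector."""
    positions = set()
    for seed in range(n_hashes):
        digest = hashlib.sha256(f"{seed}:{item}".encode()).hexdigest()
        positions.add(int(digest, 16) % width)
    return positions

def encode_items(items, width: int = 64):
    """Sparse binary representation of a subset of items from a large catalog."""
    vector = [0] * width
    for item in items:
        for pos in item_positions(item, width):
            vector[pos] = 1
    return vector

watched = ["movie:1984", "movie:2001", "movie:42"]
print(sum(encode_items(watched)), "of 64 positions set")
```
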
  • Publication number: 20220108054
    Abstract: An architecture search system evaluates a search space of neural network and hardware architectures with a plurality of candidate controllers. Each controller attempts to identify an optimized architecture using a different optimization algorithm. To identify a controller for the search space, the architecture search system samples subspaces of the search space having a portion of the neural network search space and a portion of the hardware search space. For each subspace, candidate controllers are scored with respect to the optimized design determined by the respective candidate controllers. Using the scores for the various candidate controllers across the sampled subspaces, a controller is selected to optimize the overall network architecture search space.
    Type: Application
    Filed: December 16, 2021
    Publication date: April 7, 2022
    Applicant: Intel Corporation
    Inventors: Yash Akhauri, Nilesh Jain, Juan Pablo Munoz Chiabrando, Adithya M. Niranjan
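
The controller-selection idea above, in miniature: sample small subspaces of the joint network/hardware search space, let each candidate controller (a different optimization strategy) propose a design per subspace, score the proposals, and keep the controller with the best average score. The search space, controllers, and scoring function are toy stand-ins.

```python
import random

random.seed(0)

def score(design):
    accuracy, latency_ms = design
    return accuracy - 0.01 * latency_ms          # reward accuracy, penalize latency

def random_search(subspace, trials=8):
    """Candidate controller 1: evaluate a few random designs, keep the best."""
    return max(random.sample(subspace, min(trials, len(subspace))), key=score)

def greedy_first(subspace):
    """Candidate controller 2: a trivial baseline that takes the first design."""
    return subspace[0]

controllers = {"random_search": random_search, "greedy_first": greedy_first}
full_space = [(random.uniform(0.6, 0.95), random.uniform(5, 50)) for _ in range(200)]

scores = {name: 0.0 for name in controllers}
for _ in range(5):                               # a few sampled subspaces
    subspace = random.sample(full_space, 20)
    for name, controller in controllers.items():
        scores[name] += score(controller(subspace)) / 5

print("selected controller:", max(scores, key=scores.get))
```
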
  • Publication number: 20220092391
    Abstract: An apparatus is provided to use NEMO search to train GNNs that can be used for mixed-precision quantization of DNNs. For example, the apparatus generates a plurality of GNNs. The apparatus further generates a plurality of new GNNs based on the plurality of GNNs. The apparatus also generates a sequential graph for a first DNN. The first DNN includes a sequence of quantizable operations, each of which includes quantizable parameters and is represented by a different node in the sequential graph. The apparatus inputs the sequential graph into the GNNs and new GNNs and evaluates outputs of the GNNs and new GNNs based on conflicting objectives of reducing precisions of the quantizable parameters of the first DNN. The apparatus then selects a GNN from the GNNs and new GNNs based on the evaluation. The GNN is to be used for reducing precisions of quantizable parameters of a second DNN.
    Type: Application
    Filed: December 7, 2021
    Publication date: March 24, 2022
    Inventors: Santiago Miret, Vui Seng Chua, Mattias Marder, Mariano J. Phielipp, Nilesh Jain, Somdeb Majumdar
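
A heavily reduced sketch of the selection loop in the entry above: the candidate GNNs are replaced here by simple per-operation bit-width policies, and the two conflicting objectives (fewer bits versus a quantization-error proxy) are scalarized for ranking. Every name and number is illustrative; the actual patent trains graph neural networks rather than the stand-in policies used below.

```python
import random

random.seed(1)

sequential_graph = ["conv1", "conv2", "fc1", "fc2"]       # quantizable ops of a DNN

def make_policy():
    """Stand-in for one candidate GNN: a bit-width choice per quantizable op."""
    return {op: random.choice([2, 4, 8]) for op in sequential_graph}

def objectives(policy):
    avg_bits = sum(policy.values()) / len(policy)                # minimize (size)
    error_proxy = sum(2.0 ** -bits for bits in policy.values())  # minimize (accuracy loss)
    return avg_bits, error_proxy

def scalarized(policy, weight=0.5):
    """Collapse the two conflicting objectives into one ranking score."""
    avg_bits, error_proxy = objectives(policy)
    return weight * avg_bits + (1 - weight) * 10.0 * error_proxy

population = [make_policy() for _ in range(6)]             # initial candidates
offspring = [make_policy() for _ in range(6)]              # newly generated candidates
best = min(population + offspring, key=scalarized)
print("selected bit-widths:", best)
```
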