Patents by Inventor Ashish Verma

Ashish Verma has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230086499
    Abstract: Thin film transistors having fin structures integrated with two-dimensional (2D) channel materials are described. In an example, an integrated circuit structure includes a plurality of insulator fins above a substrate. A two-dimensional (2D) material layer is over the plurality of insulator fins. A gate dielectric layer is on the 2D material layer. A gate electrode is on the gate dielectric layer. A first conductive contact is on the 2D material layer adjacent to a first side of the gate electrode. A second conductive contact is on the 2D material layer adjacent to a second side of the gate electrode, the second side opposite the first side.
    Type: Application
    Filed: September 20, 2021
    Publication date: March 23, 2023
    Inventors: Kirby MAXEY, Ashish Verma PENUMATCHA, Kevin P. O'BRIEN, Chelsey DOROW, Uygar E. AVCI, Sudarat LEE, Carl NAYLOR, Tanay GOSAVI
  • Patent number: 11605624
    Abstract: Described is a resonator that uses ferroelectric (FE) material in a capacitive structure. The resonator includes a first plurality of metal lines extending in a first direction; an array of capacitors comprising ferroelectric material; a second plurality of metal lines extending in the first direction, wherein the array of capacitors is coupled between the first and second plurality of metal lines; and circuitry to switch polarization of at least one capacitor of the array of capacitors. The switching of polarization regenerates acoustic waves. In some embodiments, the acoustic mode of the resonator is isolated using phononic gratings formed all around the resonator, using metal lines above and adjacent to the FE-based capacitors.
    Type: Grant
    Filed: January 2, 2019
    Date of Patent: March 14, 2023
    Assignee: Intel Corporation
    Inventors: Tanay Gosavi, Chia-ching Lin, Raseong Kim, Ashish Verma Penumatcha, Uygar Avci, Ian Young
  • Publication number: 20230069913
    Abstract: Techniques for utilizing model and hyperparameter optimization for multi-objective machine learning are disclosed. In one example, a method comprises the following steps. One of a plurality of hyperparameter optimization operations and a plurality of model parameter optimization operations is performed to generate a first solution set. The other of the plurality of hyperparameter optimization operations and the plurality of model parameter optimization operations is performed to generate a second solution set. At least a portion of the first solution set and at least a portion of the second solution set are combined to generate a third solution set.
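    (See the illustrative code sketch after this entry.)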
    Type: Application
    Filed: September 9, 2021
    Publication date: March 9, 2023
    Inventors: Aswin Kannan, Vaibhav Saxena, Anamitra Roy Choudhury, Yogish Sabharwal, Parikshit Ram, Ashish Verma, Saurabh Manish Raje
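    The following minimal Python sketch (not the patented implementation; the toy objectives, the random-search passes, and the Pareto-style merge are assumptions for illustration) shows the general shape of the workflow the abstract describes: one pass optimizes a hyperparameter, a second pass optimizes a model parameter, and portions of the two solution sets are combined into a third.

      import random

      def objectives(lr, width):
          # Two toy objectives to minimize: a pseudo "error" and a "cost".
          error = (lr - 0.1) ** 2 + 1.0 / width
          cost = 0.01 * width + lr
          return error, cost

      def pareto_front(solutions):
          # Keep solutions that no other solution dominates on both objectives.
          front = []
          for s in solutions:
              dominated = any(
                  o["error"] <= s["error"] and o["cost"] <= s["cost"]
                  and (o["error"], o["cost"]) != (s["error"], s["cost"])
                  for o in solutions)
              if not dominated:
                  front.append(s)
          return front

      random.seed(0)

      # Pass 1: hyperparameter optimization (learning rate) -> first solution set.
      first = []
      for _ in range(20):
          lr = random.uniform(0.001, 0.5)
          err, cost = objectives(lr, width=64)
          first.append({"lr": lr, "width": 64, "error": err, "cost": cost})

      # Pass 2: model-parameter optimization (layer width) -> second solution set.
      second = []
      for _ in range(20):
          width = random.randint(8, 256)
          err, cost = objectives(lr=0.1, width=width)
          second.append({"lr": 0.1, "width": width, "error": err, "cost": cost})

      # Combine portions of both sets into a third, merged solution set.
      third = pareto_front(pareto_front(first) + pareto_front(second))
      print(len(third), "non-dominated solutions in the combined set")
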
  • Publication number: 20230065198
    Abstract: A memory device, an integrated circuit component including an array of the memory devices, and an integrated device assembly including the integrated circuit component. The memory device includes a first electrode; a second electrode including an antiferromagnetic (AFM) material; and a memory stack including: a first layer adjacent the second electrode and including a multilayer stack of adjacent layers comprising ferromagnetic materials; a second layer adjacent the first layer; and a third layer adjacent the second layer at one side thereof, and adjacent the first electrode at another side thereof, the second layer between the first layer and the third layer, the third layer including a ferromagnetic material. The memory device may correspond to a magnetic tunnel junction (MTJ) magnetic random access memory bit cell, and the memory stack may correspond to a MTJ device.
    Type: Application
    Filed: September 2, 2021
    Publication date: March 2, 2023
    Applicant: Intel Corporation
    Inventors: Ian Alexander Young, Dmitri Evgenievich Nikonov, Chia-Ching Lin, Tanay A. Gosavi, Ashish Verma Penumatcha, Kaan Oguz, Punyashloka Debashis
  • Publication number: 20230058938
    Abstract: A pbit device, in one embodiment, includes a first field-effect transistor (FET) that includes a source region, a drain region, a source electrode on the source region, a drain electrode on the drain region, a channel region between the source and drain regions, a dielectric layer on a surface over the channel region, an electrode layer above the dielectric layer, and a ferroelectric (FE) material layer between the dielectric layer and the electrode layer. The pbit device also includes a second FET comprising a source electrode, a drain electrode, and a gate electrode. The drain electrode of the second FET is connected to the drain electrode of the first FET.
    Type: Application
    Filed: August 23, 2021
    Publication date: February 23, 2023
    Applicant: Intel Corporation
    Inventors: Punyashloka Debashis, Dmitri Evgenievich Nikonov, Hai Li, Chia-Ching Lin, Raseong Kim, Tanay A. Gosavi, Ashish Verma Penumatcha, Uygar E. Avci, Marko Radosavljevic, Ian Alexander Young
  • Patent number: 11586932
    Abstract: A computer-implemented machine learning model training method and resulting machine learning model. One embodiment of the method may comprise receiving, at a computer memory, training data; and training, on a computer processor, a machine learning model on the received training data using a plurality of batch sizes to produce a trained machine learning model. The training may include calculating a plurality of activations during a forward pass of the training and discarding at least some of the calculated plurality of activations after the forward pass of the training.
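    (See the illustrative code sketch after this entry.)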
    Type: Grant
    Filed: March 10, 2020
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Saurabh Goyal, Anamitra Roy Choudhury, Yogish Sabharwal, Ashish Verma
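    A minimal PyTorch sketch of the same general idea, assuming activation checkpointing as the discard-and-recompute mechanism; the model, batch sizes, and optimizer settings below are illustrative and not taken from the patent.

      import torch
      import torch.nn as nn
      from torch.utils.checkpoint import checkpoint

      # Small illustrative model: a checkpointed "head" plus a classifier.
      head = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 64), nn.ReLU())
      classifier = nn.Linear(64, 10)
      params = list(head.parameters()) + list(classifier.parameters())
      optimizer = torch.optim.SGD(params, lr=0.01)
      loss_fn = nn.CrossEntropyLoss()

      for batch_size in (16, 32, 64):          # train with a plurality of batch sizes
          x = torch.randn(batch_size, 32)
          y = torch.randint(0, 10, (batch_size,))
          # Checkpointing discards the head's intermediate activations after the
          # forward pass and recomputes them during the backward pass.
          hidden = checkpoint(head, x, use_reentrant=False)
          loss = loss_fn(classifier(hidden), y)
          optimizer.zero_grad()
          loss.backward()
          optimizer.step()
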
  • Patent number: 11586475
    Abstract: One embodiment provides a method, including: receiving at least one deep learning job for scheduling and running on a distributed system comprising a plurality of nodes; receiving a batch size range indicating a minimum batch size and a maximum batch size that can be utilized for running the at least one deep learning job; determining a plurality of runtime estimations for running the at least one deep learning job; creating a list of optimal combinations of (i) batch sizes and (ii) numbers of the plurality of nodes for running both (a) the at least one deep learning job and (b) current deep learning jobs; and scheduling the at least one deep learning job at the distributed system, responsive to identifying, by utilizing the list, that the distributed system has necessary processing resources for running both (iii) the at least one deep learning job and (iv) the current deep learning jobs.
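    (See the illustrative code sketch after this entry.)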
    Type: Grant
    Filed: February 28, 2020
    Date of Patent: February 21, 2023
    Assignee: International Business Machines Corporation
    Inventors: Saurav Basu, Vaibhav Saxena, Yogish Sabharwal, Ashish Verma, Jayaram Kallapalayam Radhakrishnan
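    A rough sketch, in plain Python, of the scheduling pattern the abstract outlines; the cost model, function names, and thresholds below are assumptions for illustration, not the patented estimator.

      # Enumerate (batch size, node count) combinations for a new job, estimate
      # runtimes with a toy cost model, and admit the job only if a feasible
      # combination exists for the currently free nodes.
      def estimate_runtime(batch_size, nodes, samples=100_000, per_sample_ms=0.5):
          # Toy model: per-step time grows with batch size, step count shrinks
          # with it, and work is divided (imperfectly) across nodes.
          steps = samples / batch_size
          step_time = batch_size * per_sample_ms / (nodes * 0.9)
          return steps * step_time

      def plan_job(min_batch, max_batch, free_nodes, deadline_ms):
          combos = []
          for nodes in range(1, free_nodes + 1):
              for batch in range(min_batch, max_batch + 1, min_batch):
                  runtime = estimate_runtime(batch, nodes)
                  if runtime <= deadline_ms:
                      combos.append((runtime, batch, nodes))
          combos.sort()                      # best (fastest) combinations first
          return combos

      plans = plan_job(min_batch=32, max_batch=256, free_nodes=4, deadline_ms=5_000_000)
      if plans:
          runtime, batch, nodes = plans[0]
          print(f"schedule job with batch={batch} on {nodes} node(s), est. {runtime:.0f} ms")
      else:
          print("insufficient resources; defer the job")
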
  • Publication number: 20230015895
    Abstract: Methods, systems, and computer program products for accelerating inference of transformer-based models are provided herein. A computer-implemented method includes obtaining a machine learning model comprising a plurality of transformer blocks, a task, and a natural language dataset; generating a compressed version of the machine learning model based on the task and the natural language dataset, wherein the generating comprises: obtaining at least one set of tokens, wherein each token in the set corresponds to one of the items in the natural language dataset, identifying and removing one or more redundant output activations of different ones of the plurality of transformer blocks for the at least one set of tokens, and adding one or more input activations corresponding to the one or more removed output activations into the machine learning model at subsequent ones of the plurality of the transformer blocks; and outputting the compressed version of the machine learning model to at least one user.
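    (See the illustrative code sketch after this entry.)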
    Type: Application
    Filed: July 12, 2021
    Publication date: January 19, 2023
    Inventors: Saurabh Goyal, Anamitra Roy Choudhury, Saurabh Manish Raje, Venkatesan T. Chakaravarthy, Yogish Sabharwal, Ashish Verma
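    The toy PyTorch sketch below illustrates the general flavor of dropping redundant per-token activations between transformer blocks; the norm-based scoring rule and the keep ratios are invented for the example and are not the patented method.

      import torch
      import torch.nn as nn

      blocks = nn.ModuleList([nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
                              for _ in range(4)])
      keep_ratio = [1.0, 0.75, 0.5, 0.5]      # fraction of tokens kept after each block

      tokens = torch.randn(1, 32, 64)          # (batch, sequence length, hidden size)
      with torch.no_grad():
          for block, ratio in zip(blocks, keep_ratio):
              tokens = block(tokens)
              # Score tokens by activation norm and drop the least significant ones,
              # so later blocks never compute on the removed activations.
              k = max(1, int(ratio * tokens.shape[1]))
              scores = tokens.norm(dim=-1)                     # (batch, seq)
              top = scores.topk(k, dim=1).indices.sort(dim=1).values
              tokens = tokens.gather(1, top.unsqueeze(-1).expand(-1, -1, tokens.shape[-1]))
      print(tokens.shape)   # fewer tokens reach the final block
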
  • Patent number: 11551145
    Abstract: Systems, computer-implemented methods, and computer program products that can facilitate switching a model training process from a ground truth training phase to an adversarial training phase based on performance of a model trained in the ground truth training phase are provided. According to an embodiment, a system can comprise a memory that stores computer executable components and a processor that executes the computer executable components stored in the memory. The computer executable components can comprise an analysis component that identifies a performance condition of a model trained in a model training process. The computer executable components can further comprise a trainer component that switches the model training process from a ground truth training process to an adversarial training process based on the identified performance condition.
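    (See the illustrative code sketch after this entry.)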
    Type: Grant
    Filed: February 5, 2020
    Date of Patent: January 10, 2023
    Assignee: International Business Machines Corporation
    Inventors: Sidharth Gupta, Parijat Dube, Ashish Verma
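    A hedged PyTorch sketch of the switching pattern: monitor a performance condition during ground-truth training and move to an adversarial phase once it is met. The plateau threshold and the FGSM-style perturbation are assumptions chosen for illustration.

      import torch
      import torch.nn as nn

      model = nn.Sequential(nn.Linear(20, 32), nn.ReLU(), nn.Linear(32, 2))
      opt = torch.optim.SGD(model.parameters(), lr=0.05)
      loss_fn = nn.CrossEntropyLoss()
      x, y = torch.randn(256, 20), torch.randint(0, 2, (256,))

      phase, prev_loss = "ground_truth", float("inf")
      for step in range(200):
          inputs = x
          if phase == "adversarial":
              # Craft small adversarial perturbations of the ground-truth inputs.
              inputs = x.clone().requires_grad_(True)
              loss_fn(model(inputs), y).backward()
              inputs = (x + 0.05 * inputs.grad.sign()).detach()
          loss = loss_fn(model(inputs), y)
          opt.zero_grad()
          loss.backward()
          opt.step()
          # Performance condition: switch when ground-truth loss stops improving.
          if phase == "ground_truth" and prev_loss - loss.item() < 1e-4:
              phase = "adversarial"
          prev_loss = loss.item()
      print("final phase:", phase)
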
  • Patent number: 11532439
    Abstract: Described is an ultra-dense ferroelectric memory. The memory is fabricated using a patterning method by that applies atomic layer deposition with selective dry and/or wet etch to increase memory density at a given via opening. A ferroelectric capacitor in one example comprises: a first structure (e.g., first electrode) comprising metal; a second structure (e.g., a second electrode) comprising metal; and a third structure comprising ferroelectric material, wherein the third structure is between and adjacent to the first and second structures, wherein a portion of the third structure is interdigitated with the first and second structures to increase surface area of the third structure. The increased surface area allows for higher memory density.
    Type: Grant
    Filed: March 7, 2019
    Date of Patent: December 20, 2022
    Assignee: Intel Corporation
    Inventors: Chia-Ching Lin, Sou-Chi Chang, Nazila Haratipour, Seung Hoon Sung, Ashish Verma Penumatcha, Jack Kavalieros, Uygar E. Avci, Ian A. Young
  • Publication number: 20220374762
    Abstract: Techniques for distributed federated learning leverage a multi-layered defense strategy to provide for reduced information leakage. In lieu of aggregating model updates centrally, an aggregation function is decentralized into multiple independent and functionally-equivalent execution entities, each running within its own trusted execution environment (TEE). The TEEs enable confidential and remote-attestable federated aggregation. Preferably, each aggregator entity runs within an encrypted virtual machine that supports runtime in-memory encryption. Each party remotely authenticates the TEE before participating in the training. By using multiple decentralized aggregators, parties are enabled to partition their respective model updates at model-parameter granularity, and can map single weights to a specific aggregator entity. Parties also can dynamically shuffle fragmentary model updates at each training iteration to further obfuscate the information dispatched to each aggregator execution entity.
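    (See the illustrative code sketch after this entry.)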
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Jayaram Kallapalayam Radhakrishnan, Ashish Verma, Zhongshu Gu, Enriquillo Valdez, Pau-Chen Cheng, Hani Talal Jamjoom
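    A simplified, non-secure Python sketch of the aggregation topology only (the TEEs, encrypted VMs, and remote attestation are omitted): each party splits its model update at per-weight granularity across multiple aggregator entities, the weight-to-aggregator mapping is reshuffled every round, and the fragments are averaged and reassembled.

      import random

      NUM_WEIGHTS, NUM_PARTIES, NUM_AGGREGATORS = 8, 3, 2

      def local_update(party):
          # Stand-in for a party's locally computed model update.
          return [random.gauss(party, 0.1) for _ in range(NUM_WEIGHTS)]

      random.seed(1)
      for rnd in range(2):
          # Per-round shuffled mapping: weight index -> aggregator entity.
          assignment = [random.randrange(NUM_AGGREGATORS) for _ in range(NUM_WEIGHTS)]
          # Each aggregator only ever sees its own fragment of each party's update.
          fragments = {a: {i: [] for i in range(NUM_WEIGHTS) if assignment[i] == a}
                       for a in range(NUM_AGGREGATORS)}
          for party in range(NUM_PARTIES):
              update = local_update(party)
              for i, value in enumerate(update):
                  fragments[assignment[i]][i].append(value)
          # Each aggregator averages its weights; the results are then reassembled.
          merged = [0.0] * NUM_WEIGHTS
          for a, frag in fragments.items():
              for i, values in frag.items():
                  merged[i] = sum(values) / len(values)
          print(f"round {rnd}: aggregated update = {[round(v, 2) for v in merged]}")
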
  • Publication number: 20220374763
    Abstract: Techniques for distributed federated learning leverage a multi-layered defense strategy to provide for reduced information leakage. In lieu of aggregating model updates centrally, an aggregation function is decentralized into multiple independent and functionally-equivalent execution entities, each running within its own trusted execution environment (TEE). The TEEs enable confidential and remote-attestable federated aggregation. Preferably, each aggregator entity runs within an encrypted virtual machine that supports runtime in-memory encryption. Each party remotely authenticates the TEE before participating in the training. By using multiple decentralized aggregators, parties are enabled to partition their respective model updates at model-parameter granularity, and can map single weights to a specific aggregator entity. Parties also can dynamically shuffle fragmentary model updates at each training iteration to further obfuscate the information dispatched to each aggregator execution entity.
    Type: Application
    Filed: May 18, 2021
    Publication date: November 24, 2022
    Applicant: International Business Machines Corporation
    Inventors: Zhongshu Gu, Jayaram Kallapalayam Radhakrishnan, Ashish Verma, Enriquillo Valdez, Pau-Chen Cheng, Hani Talal Jamjoom, Kevin Eykholt
  • Publication number: 20220358358
    Abstract: Methods, systems, and computer program products for accelerating inference of neural network models via dynamic early exits are provided herein. A computer-implemented method includes determining a plurality of candidate exit points of a neural network model; obtaining a plurality of outputs of the neural network model for data samples in a target dataset, wherein the plurality of outputs comprises early outputs of the neural network model from the plurality of candidate exit points and regular outputs of the neural network model; and selecting a set of one or more exit points from the plurality of candidate exit points that are dependent on the target dataset, based at least in part on the plurality of outputs.
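    (See the illustrative code sketch after this entry.)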
    Type: Application
    Filed: May 4, 2021
    Publication date: November 10, 2022
    Inventors: Saurabh Manish Raje, Saurabh Goyal, Anamitra Roy Choudhury, Yogish Sabharwal, Ashish Verma
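    An illustrative PyTorch sketch of selecting dataset-dependent exit points; the agreement-with-final-output criterion and the 0.9 threshold are assumptions, not the patented selection rule.

      import torch
      import torch.nn as nn

      backbone = nn.ModuleList([nn.Sequential(nn.Linear(16, 16), nn.ReLU()) for _ in range(4)])
      exit_heads = nn.ModuleList([nn.Linear(16, 3) for _ in range(4)])  # candidate exit points
      final_head = nn.Linear(16, 3)

      target_data = torch.randn(512, 16)
      with torch.no_grad():
          h, early_preds = target_data, []
          for layer, head in zip(backbone, exit_heads):
              h = layer(h)
              early_preds.append(head(h).argmax(dim=1))        # early outputs
          regular_pred = final_head(h).argmax(dim=1)            # regular output

      # Keep exits whose early predictions agree closely enough with the regular output.
      selected = [i for i, pred in enumerate(early_preds)
                  if (pred == regular_pred).float().mean() >= 0.9]
      print("exit points selected for this dataset:", selected)
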
  • Publication number: 20220343218
    Abstract: Embodiments relate to an input-encoding technique in conjunction with federation. Participating entities are arranged in a collaborative relationship. Each participating entity trains a machine learning model with an encoder on a training data set. The performance of each of the models is measured and at least one of the models is selectively identified based on the measured performance. An encoder of the selectively identified machine learning model is shared with each of the participating entities. The shared encoder is configured to be applied by the participating entities to train the first and second machine learning models, which are configured to be merged and shared in the federated learning environment.
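    (See the illustrative code sketch after this entry.)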
    Type: Application
    Filed: April 26, 2021
    Publication date: October 27, 2022
    Applicant: International Business Machines Corporation
    Inventors: Hazar Yueksel, Brian E. D. Kingsbury, Kush Raj Varshney, Pradip Bose, Dinesh C. Verma, Shiqiang Wang, Augusto Vega, Ashish Verma, Supriyo Chakraborty
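    A rough PyTorch sketch under stated assumptions: every party trains its own encoder and classifier on toy data, the best-performing encoder is selected and shared, each party retrains a classifier on the frozen shared encoder, and the classifiers are merged by weight averaging. The data, the scoring, and the FedAvg-style merge are illustrative choices, not the patented protocol.

      import torch
      import torch.nn as nn

      def make_party_data(seed):
          g = torch.Generator().manual_seed(seed)
          x = torch.randn(256, 10, generator=g)
          y = (x.sum(dim=1) > 0).long()
          return x, y

      def train(encoder, classifier, x, y, steps=100, train_encoder=True):
          params = list(classifier.parameters())
          if train_encoder:
              params += list(encoder.parameters())
          opt = torch.optim.SGD(params, lr=0.1)
          for _ in range(steps):
              loss = nn.functional.cross_entropy(classifier(encoder(x)), y)
              opt.zero_grad()
              loss.backward()
              opt.step()

      def accuracy(encoder, classifier, x, y):
          with torch.no_grad():
              return (classifier(encoder(x)).argmax(dim=1) == y).float().mean().item()

      parties = [make_party_data(seed) for seed in (0, 1, 2)]

      # Each party trains a local model with its own encoder; performance is measured.
      local = []
      for x, y in parties:
          enc, clf = nn.Sequential(nn.Linear(10, 8), nn.ReLU()), nn.Linear(8, 2)
          train(enc, clf, x, y)
          local.append((accuracy(enc, clf, x, y), enc))

      # The encoder of the best-performing model is selected and shared.
      shared_encoder = max(local, key=lambda t: t[0])[1]

      # Each party retrains its classifier on the frozen shared encoder; merge by averaging.
      classifiers = []
      for x, y in parties:
          clf = nn.Linear(8, 2)
          train(shared_encoder, clf, x, y, steps=50, train_encoder=False)
          classifiers.append(clf)
      merged = nn.Linear(8, 2)
      with torch.no_grad():
          merged.weight.copy_(torch.stack([c.weight for c in classifiers]).mean(dim=0))
          merged.bias.copy_(torch.stack([c.bias for c in classifiers]).mean(dim=0))
      print("merged model accuracy on party 0 data:", accuracy(shared_encoder, merged, *parties[0]))
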
  • Patent number: 11418060
    Abstract: Systems, methods and apparatus for wireless charging are disclosed. A charging device has a resonant circuit comprising one or more transmitting coils, a driver circuit configured to provide a charging current to the resonant circuit, a zero-crossing detector configured to provide a zero-crossing signal that includes edges corresponding to transitions of a voltage measured across the resonant circuit through a zero volt level or corresponding to transitions of a current in the resonant circuit through a zero ampere level, and a controller. The controller may be configured to cause the driver circuit to provide the charging current to the resonant circuit when a receiving device is present on a surface of the charging device, and control a level of power that is wirelessly transferred to the receiving device by phase-aligning the charging current with a phase-modulation signal generated from the zero-crossing signal.
    Type: Grant
    Filed: June 4, 2020
    Date of Patent: August 16, 2022
    Assignee: AIRA, INC.
    Inventors: Eric Heindel Goodchild, James Scott, Ashish Verma
  • Patent number: 11410083
    Abstract: Methods, systems, and computer program products for determining operating range of hyperparameters are provided herein. A computer-implemented method includes obtaining a machine learning model, a list of candidate values for a hyperparameter, and a dataset; performing one or more hyperparameter range trials based on the machine learning model, the list of candidate values for the hyperparameter, and the dataset; automatically determining an operating range of the hyperparameter based on the one or more hyperparameter range trials; and training the machine learning model to convergence based at least in part on the determined operating range.
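    (See the illustrative code sketch after this entry.)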
    Type: Grant
    Filed: January 7, 2020
    Date of Patent: August 9, 2022
    Assignee: International Business Machines Corporation
    Inventors: Shrihari Vasudevan, Alind Khare, Koyel Mukherjee, Yogish Sabharwal, Ashish Verma
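    A hedged Python sketch of the overall pattern: run short trials across candidate hyperparameter values, keep the values whose trials make meaningful progress, and report their span as the operating range. The 1-D toy objective and the "halve the loss" rule are assumptions for illustration only.

      import random

      def short_trial(lr, steps=50):
          # Toy 1-D "training": a few gradient steps on f(w) = (w - 3)^2.
          random.seed(0)
          w = random.uniform(-5, 5)
          initial = (w - 3) ** 2
          for _ in range(steps):
              w -= lr * 2 * (w - 3)          # too-large lr oscillates or diverges
          return initial, (w - 3) ** 2

      candidates = [10 ** e for e in range(-4, 1)]      # 1e-4 ... 1e0
      viable = []
      for lr in candidates:
          initial, final = short_trial(lr)
          if final < 0.5 * initial:                     # the short trial made real progress
              viable.append(lr)
      if viable:
          print("operating range:", min(viable), "to", max(viable))
      else:
          print("no candidate value passed the range trials")
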
  • Publication number: 20220215290
    Abstract: Methods, systems, and computer program products for cognitive disambiguation of problem-solving tasks involving a power grid are provided herein. A computer-implemented method includes capturing user feedback pertaining to relevance of remote terminal unit measurements related to a grid event through user interface interactions carried out by the user, wherein the user interface is communicatively linked to at least one computing device; automatically inferring rules related to the grid event to curate remote terminal unit measurements across iterations of analysis by recognizing irrelevant data and/or distractions in a visual display associated with the user interface, wherein said automatically inferring comprises implementing machine learning via the at least one computing device based on the user feedback; and outputting candidate solutions to a problem-solving task involving the grid based on the inferred rules, wherein said outputting is carried out by the at least one computing device.
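    (See the illustrative code sketch after this entry.)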
    Type: Application
    Filed: November 15, 2021
    Publication date: July 7, 2022
    Applicant: Utopus Insights, Inc.
    Inventors: Chumki Basu, Ashish Verma
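    A loose, purely illustrative Python sketch of the feedback-to-rules loop described above; the majority-vote rule form and all measurement data are invented for the example and are not the patented method.

      from collections import defaultdict

      # User feedback from prior analysis iterations: (measurement type, judged relevant?)
      feedback = [("voltage", True), ("voltage", True), ("frequency", True),
                  ("ambient_temp", False), ("ambient_temp", False), ("frequency", True)]

      # Infer a simple relevance rule per remote terminal unit (RTU) measurement type.
      votes = defaultdict(list)
      for mtype, relevant in feedback:
          votes[mtype].append(relevant)
      rules = {mtype: sum(v) > len(v) / 2 for mtype, v in votes.items()}

      # Curate a new set of measurements before ranking candidate solutions.
      new_measurements = [("voltage", 0.93), ("ambient_temp", 31.0), ("frequency", 59.7)]
      curated = [(m, value) for m, value in new_measurements if rules.get(m, True)]
      print("curated measurements used for candidate solutions:", curated)
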
  • Publication number: 20220199519
    Abstract: Metal insulator metal capacitors are described. In an example, a metal-insulator-metal (MIM) capacitor includes a first electrode plate, and a first capacitor dielectric on the first electrode plate. The first capacitor dielectric is or includes a perovskite high-k dielectric material. A second electrode plate is on the first capacitor dielectric and has a portion over and parallel with the first electrode plate, and a second capacitor dielectric is on the second electrode plate. A third electrode plate is on the second capacitor dielectric and has a portion over and parallel with the second electrode plate.
    Type: Application
    Filed: December 21, 2020
    Publication date: June 23, 2022
    Inventors: Chia-Ching LIN, Sou-Chi CHANG, Kaan OGUZ, I-Cheng TUNG, Arnab SEN GUPTA, Ian A. YOUNG, Uygar E. AVCI, Matthew V. METZ, Ashish Verma PENUMATCHA, Anandi ROY
  • Publication number: 20220199838
    Abstract: A transistor includes a channel layer including a transition metal dichalcogenide (TMD) material, an encapsulation layer on a first portion of the channel layer, a gate electrode above the encapsulation layer, and a gate dielectric layer between the gate electrode and the encapsulation layer. The transistor further includes a source contact on a second portion of the channel layer and a drain contact on a third portion of the channel layer, where the gate structure is between the drain contact and the source contact.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Applicant: Intel Corporation
    Inventors: Chelsey Dorow, Kevin O'Brien, Carl Naylor, Uygar Avci, Sudarat Lee, Ashish Verma Penumatcha, Chia-Ching Lin, Tanay Gosavi, Shriram Shivaraman, Kirby Maxey
  • Publication number: 20220199783
    Abstract: A transistor includes a first channel layer over a second channel layer, where the first and the second channel layers include a monocrystalline transition metal dichalcogenide (TMD). The transistor structure further includes a source structure coupled to a first end of the first and second channel layers, a drain structure coupled to a second end of the first and second channel layers, a gate structure between the source material and the drain material, and between the first channel layer and the second channel layer. The transistor further includes a spacer laterally between the gate structure and the source structure and between the gate structure and the drain structure. A liner is between the spacer and the gate structure. The liner is in contact with the first channel layer and the second channel layer and extends between the gate structure and the respective source structure and the drain structure.
    Type: Application
    Filed: December 23, 2020
    Publication date: June 23, 2022
    Applicant: Intel Corporation
    Inventors: Ashish Verma Penumatcha, Kevin O'Brien, Chelsey Dorow, Kirby Maxey, Carl Naylor, Tanay Gosavi, Sudarat Lee, Chia-Ching Lin, Seung Hoon Sung, Uygar Avci