Patents by Inventor Suyog Gupta

Suyog Gupta has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20180130892
    Abstract: A method includes forming a tunneling dielectric layer on a semiconductor substrate, wherein a first portion of the tunneling dielectric layer is directly above a channel region in the semiconductor substrate, a second portion of the tunneling dielectric layer is directly above source-drain regions located on opposing sides of the channel region, and the second portion is thicker than the first portion; forming a floating gate directly above the first portion and the second portion of the tunneling dielectric layer; and forming a control dielectric layer directly above the floating gate.
    Type: Application
    Filed: January 5, 2018
    Publication date: May 10, 2018
    Inventors: Suyog Gupta, Bahman Hekmatshoartabari
  • Patent number: 9899485
    Abstract: A method includes forming a tunneling dielectric layer on a semiconductor substrate, wherein a first portion of the tunneling dielectric layer is directly above a channel region in the semiconductor substrate, a second portion of the tunneling dielectric layer is directly above source-drain regions located on opposing sides of the channel region, and the second portion is thicker than the first portion; forming a floating gate directly above the first portion and the second portion of the tunneling dielectric layer; and forming a control dielectric layer directly above the floating gate.
    Type: Grant
    Filed: June 7, 2016
    Date of Patent: February 20, 2018
    Assignee: International Business Machines Corporation
    Inventors: Suyog Gupta, Bahman Hekmatshoartabari
  • Publication number: 20170351530
    Abstract: Techniques facilitating synchronization of processing engines for parallel deep learning are provided. In one example, a first processing component associated with a processor and processing components can: generate first output data based on input data associated with a machine learning process, wherein the processing components are communicatively coupled with an assignment component via a network; transmit the first output data to a second processing component of the processing components, wherein the first processing component and the second processing component comprise a first group of the processing components and the first group of the processing components is determined by the assignment component based on a first defined criterion; receive communication data generated by the second processing component; and generate second output data based on the communication data, wherein the second output data is an updated version of the first output data stored in the memory of the first processing component.
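    (An illustrative code sketch of this grouped-synchronization scheme appears after this listing.)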
    Type: Application
    Filed: April 29, 2016
    Publication date: December 7, 2017
    Inventors: Suyog Gupta, Ravi Nair
  • Publication number: 20170352734
    Abstract: A method includes forming a tunneling dielectric layer on a semiconductor substrate, wherein a first portion of the tunneling dielectric layer is directly above a channel region in the semiconductor substrate, a second portion of the tunneling dielectric layer is directly above source-drain regions located on opposing sides of the channel region, and the second portion is thicker than the first portion; forming a floating gate directly above the first portion and the second portion of the tunneling dielectric layer; and forming a control dielectric layer directly above the floating gate.
    Type: Application
    Filed: June 7, 2016
    Publication date: December 7, 2017
    Inventors: Suyog Gupta, Bahman Hekmatshoartabari
  • Publication number: 20170161022
    Abstract: A method (and system) for generating random numbers includes setting a drain voltage Vd on a MOSFET device to maximize a transconductance of the MOSFET device, and setting a gate voltage Vg of the MOSFET device to tune, as desired, the statistical distribution of the random numbers output by the MOSFET device. The MOSFET device includes a gate structure with an oxide layer including at least one artificial trapping layer in which carrier traps are designed to occupy a predetermined distance from the conduction and valence bands of the material of the artificial trapping layer.
    Type: Application
    Filed: December 2, 2015
    Publication date: June 8, 2017
    Inventors: Chia-Yu Chen, Damon Farmer, Suyog Gupta, Shu-Jen Han
  • Publication number: 20170103309
    Abstract: Technical solutions are described to accelerate training of a multi-layer convolutional neural network. According to one aspect, a computer implemented method is described. A convolutional layer includes input maps, convolutional kernels, and output maps. The method includes a forward pass, a backward pass, and an update pass that each include convolution calculations. The described method performs the convolutional operations involved in the forward, the backward, and the update passes based on a first, a second, and a third perforation map respectively. The perforation maps are stochastically generated, and distinct from each other. The method further includes interpolating results of the selective convolution operations to obtain remaining results. The method includes iteratively repeating the forward pass, the backward pass, and the update pass until the convolutional neural network is trained. Other aspects such as a system, apparatus, and computer program product are also described.
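    (An illustrative code sketch of this perforated-convolution technique appears after this listing.)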
    Type: Application
    Filed: November 30, 2015
    Publication date: April 13, 2017
    Inventors: Leland Chang, Suyog Gupta
  • Publication number: 20170103308
    Abstract: Technical solutions are described to accelerate training of a multi-layer convolutional neural network. According to one aspect, a computer implemented method is described. A convolutional layer includes input maps, convolutional kernels, and output maps. The method includes a forward pass, a backward pass, and an update pass that each include convolution calculations. The described method performs the convolutional operations involved in the forward, the backward, and the update passes based on a first, a second, and a third perforation map respectively. The perforation maps are stochastically generated, and distinct from each other. The method further includes interpolating results of the selective convolution operations to obtain remaining results. The method includes iteratively repeating the forward pass, the backward pass, and the update pass until the convolutional neural network is trained. Other aspects such as a system, apparatus, and computer program product are also described.
    Type: Application
    Filed: October 8, 2015
    Publication date: April 13, 2017
    Inventors: Leland Chang, Suyog Gupta
  • Patent number: 9330907
    Abstract: Suspended structures are provided using selective etch technology. Such structures can be protected on all sides when the selective undercut etch is performed, thereby providing excellent control of feature geometry combined with superior material quality.
    Type: Grant
    Filed: October 10, 2014
    Date of Patent: May 3, 2016
    Assignee: The Board of Trustees of the Leland Stanford Junior University
    Inventors: Robert Chen, James S. Harris, Jr., Suyog Gupta
  • Publication number: 20150102465
    Abstract: Suspended structures are provided using selective etch technology. Such structures can be protected on all sides when the selective undercut etch is performed, thereby providing excellent control of feature geometry combined with superior material quality.
    Type: Application
    Filed: October 10, 2014
    Publication date: April 16, 2015
    Inventors: Robert Chen, James S. Harris, Jr., Suyog Gupta
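
The entry for publication 20170351530 above describes processing components that are assigned to groups by an assignment component and then synchronize by exchanging locally computed data within their group. The sketch below is a minimal, hypothetical illustration of that grouped-synchronization idea, not the patented implementation; the `assign_groups` helper, the `Worker` class, and the gradient-averaging update rule are illustrative assumptions.

```python
# Hypothetical sketch: workers are partitioned into groups by an "assignment"
# step, exchange locally computed updates within their group, and each worker
# refines its own copy of the parameters from the communicated data.
import numpy as np

def assign_groups(num_workers, group_size):
    """Toy assignment criterion: contiguous blocks of workers form a group."""
    return [list(range(i, min(i + group_size, num_workers)))
            for i in range(0, num_workers, group_size)]

class Worker:
    def __init__(self, params, data, targets):
        self.params = params.copy()   # local copy of the model parameters
        self.data = data              # local shard of training data
        self.targets = targets

    def local_gradient(self):
        # Gradient of a least-squares loss for a linear model (stand-in for the
        # "first output data" produced from the worker's input data).
        preds = self.data @ self.params
        return self.data.T @ (preds - self.targets) / len(self.targets)

    def apply_group_update(self, gradients, lr=0.1):
        # "Second output data": parameters updated from the communicated data.
        self.params -= lr * np.mean(gradients, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
workers = []
for _ in range(4):
    x = rng.normal(size=(32, 2))
    y = x @ true_w + 0.01 * rng.normal(size=32)
    workers.append(Worker(np.zeros(2), x, y))

for step in range(50):
    for group in assign_groups(len(workers), group_size=2):
        # Each worker transmits its gradient to the others in its group.
        grads = [workers[i].local_gradient() for i in group]
        for i in group:
            workers[i].apply_group_update(grads)

print(workers[0].params)  # approaches true_w as the groups keep synchronizing
```

In practice the grouping criterion, the data exchanged, and the update rule would all differ; the sketch only illustrates the pattern of computing locally, communicating within an assigned group, and producing updated output from the communicated data.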
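
The entries for publications 20170103309 and 20170103308 above describe accelerating convolutional-network training by evaluating convolutions only at output positions selected by stochastically generated perforation maps and interpolating the remaining results. The following is a minimal, hypothetical sketch of that idea under simple assumptions (a single 2-D cross-correlation-style convolution and nearest-neighbour fill-in); `perforation_map` and `perforated_conv2d` are illustrative names, not taken from the patents.

```python
# Hypothetical sketch: compute the convolution only where a stochastically
# generated perforation map selects an output position, then fill the skipped
# positions from the nearest computed neighbour.
import numpy as np

def perforation_map(shape, keep_fraction, rng):
    """Stochastically choose which output positions are actually computed."""
    return rng.random(shape) < keep_fraction

def perforated_conv2d(image, kernel, mask):
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.full((oh, ow), np.nan)
    for i, j in zip(*np.nonzero(mask)):          # convolve only where mask is True
        out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    # Interpolate the skipped positions from the nearest computed position.
    computed = np.argwhere(mask)
    for i, j in zip(*np.nonzero(~mask)):
        nearest = computed[np.argmin(np.abs(computed - [i, j]).sum(axis=1))]
        out[i, j] = out[tuple(nearest)]
    return out

rng = np.random.default_rng(1)
image = rng.normal(size=(16, 16))
kernel = rng.normal(size=(3, 3))
mask = perforation_map((14, 14), keep_fraction=0.5, rng=rng)
approx = perforated_conv2d(image, kernel, mask)
print(approx.shape)  # (14, 14); only about half the positions were computed exactly
```

A full training loop, as the abstracts describe, would draw distinct perforation maps for the forward, backward, and update passes and repeat them each iteration; the interpolation here is deliberately simple and would normally be vectorised.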