Patents by Inventor Kiran Kolar CHANDRASEKHARAN

Kiran Kolar CHANDRASEKHARAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11915118
    Abstract: A method and an apparatus for processing layers in a neural network fetch Input Feature Map (IFM) tiles of an IFM tensor and kernel tiles of a kernel tensor, perform a convolutional operation on the IFM tiles and the kernel tiles by exploiting IFM sparsity and kernel sparsity, and generate a plurality of Output Feature Map (OFM) tiles corresponding to the IFM tiles.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: February 27, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Saptarsi Das, Sabitha Kusuma, Sehwan Lee, Ankur Deshwal, Kiran Kolar Chandrasekharan
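
The four filings on this page that share the abstract above (11915118, 20230186050, 11604958, 20200293858) describe tile-level convolution that skips work when IFM or kernel tiles are sparse. The following is a minimal NumPy sketch of that idea for a 1x1 (pointwise) convolution; the function name, tile sizes, and the all-zero-tile test are illustrative assumptions, not the claimed hardware dataflow.

```python
import numpy as np

def tiled_pointwise_conv(ifm, kernel, tile_c=8, tile_k=8):
    """Pointwise (1x1) convolution computed tile by tile.

    ifm:    (C, H, W) input feature map
    kernel: (K, C)    1x1 kernel weights
    Returns a (K, H, W) output feature map.

    Tiles of the IFM (along C) and of the kernel (along K and C) that are
    entirely zero are skipped, which is the spirit of exploiting IFM and
    kernel sparsity; the real accelerator operates on hardware tiles, not
    NumPy slices.
    """
    C, H, W = ifm.shape
    K, _ = kernel.shape
    ofm = np.zeros((K, H, W), dtype=ifm.dtype)

    for k0 in range(0, K, tile_k):
        for c0 in range(0, C, tile_c):
            k_tile = kernel[k0:k0 + tile_k, c0:c0 + tile_c]
            i_tile = ifm[c0:c0 + tile_c]
            # A zero tile contributes nothing, so skip the multiply entirely.
            if not k_tile.any() or not i_tile.any():
                continue
            ofm[k0:k0 + tile_k] += np.einsum('kc,chw->khw', k_tile, i_tile)
    return ofm

rng = np.random.default_rng(1)
ifm = rng.standard_normal((16, 5, 5))
ifm[8:] = 0.0                                  # one all-zero channel tile to skip
kernel = rng.standard_normal((4, 16))
out = tiled_pointwise_conv(ifm, kernel)
print(np.allclose(out, np.einsum('kc,chw->khw', kernel, ifm)))  # True
```
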
  • Publication number: 20230325462
    Abstract: A processor-implemented apparatus includes a forward transform module configured to transform input feature maps (IFMs) by performing a forward transform operation in a Winograd convolution (WinConv) domain, multiply and accumulate array (MAA) units configured to multiply the transformed IFMs by transformed kernels and perform a first inverse transform operation based on results of the multiplying, and an inverse transform module configured to generate output feature maps (OFMs) based on a result of the first inverse transform operation.
    Type: Application
    Filed: April 5, 2023
    Publication date: October 12, 2023
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gopinath Vasanth MAHALE, Pramod Parameshwara UDUPA, Jun-Woo JANG, Kiran Kolar CHANDRASEKHARAN, Sehwan LEE
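
Publication 20230325462 above splits Winograd convolution into a forward transform, an element-wise multiply-accumulate stage, and an inverse transform. As a rough illustration of that pipeline, here is the textbook one-dimensional F(2,3) Winograd convolution; the transform matrices are the standard Lavin-Gray ones and are an assumption, since the filing does not state which tile size or matrices it uses.

```python
import numpy as np

# Classic Winograd F(2,3) transform matrices (Lavin & Gray); the filing's
# WinConv domain and tile sizes may differ. This only shows the forward
# transform / element-wise multiply / inverse transform structure.
BT = np.array([[1,  0, -1,  0],
               [0,  1,  1,  0],
               [0, -1,  1,  0],
               [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
AT = np.array([[1, 1,  1,  0],
               [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """Two outputs of a 3-tap correlation over a 4-sample input tile."""
    u = G @ g          # kernel transform (done once per kernel in practice)
    v = BT @ d         # forward transform of the input tile
    m = u * v          # element-wise multiply (the MAA units' role)
    return AT @ m      # inverse transform back to the output domain

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([0.5, 1.0, -1.0])
print(winograd_f23(d, g))                 # Winograd result
print(np.correlate(d, g, mode='valid'))   # direct result for comparison
```
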
  • Publication number: 20230186050
    Abstract: A method and an apparatus for processing layers in a neural network fetch Input Feature Map (IFM) tiles of an IFM tensor and kernel tiles of a kernel tensor, perform a convolutional operation on the IFM tiles and the kernel tiles by exploiting IFM sparsity and kernel sparsity, and generate a plurality of Output Feature Map (OFM) tiles corresponding to the IFM tiles.
    Type: Application
    Filed: February 8, 2023
    Publication date: June 15, 2023
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Saptarsi DAS, Sabitha KUSUMA, Sehwan LEE, Ankur DESHWAL, Kiran Kolar CHANDRASEKHARAN
  • Patent number: 11604958
    Abstract: A method and an apparatus for processing layers in a neural network fetch Input Feature Map (IFM) tiles of an IFM tensor and kernel tiles of a kernel tensor, perform a convolutional operation on the IFM tiles and the kernel tiles by exploiting IFM sparsity and kernel sparsity, and generate a plurality of Output Feature Map (OFM) tiles corresponding to the IFM tiles.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: March 14, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Saptarsi Das, Sabitha Kusuma, Sehwan Lee, Ankur Deshwal, Kiran Kolar Chandrasekharan
  • Publication number: 20220036243
    Abstract: An apparatus includes a global memory and a systolic array. The global memory is configured to store and provide an input feature map (IFM) vector stream from an IFM tensor and a kernel vector stream from a kernel tensor. The systolic array is configured to receive the IFM vector stream and the kernel vector stream from the global memory. The systolic array is on-chip together with the global memory. The systolic array includes a plurality of processing elements (PEs) each having a plurality of vector units, each of the plurality of vector units being configured to perform a dot-product operation on at least one IFM vector of the IFM vector stream and at least one kernel vector of the kernel vector stream per unit clock cycle to generate a plurality of output feature maps (OFMs).
    Type: Application
    Filed: January 13, 2021
    Publication date: February 3, 2022
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Saptarsi Das, Sabitha Kusuma, Arnab Roy, Ankur Deshwal, Kiran Kolar Chandrasekharan, Sehwan Lee
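
Publication 20220036243 above describes a systolic array of processing elements whose vector units each complete one IFM-vector/kernel-vector dot product per clock cycle. Below is a minimal, output-stationary software sketch of that behavior; the stream shapes, the grid organization, and the omission of data skewing between neighboring PEs are simplifying assumptions.

```python
import numpy as np

def output_stationary_grid(ifm_rows, kernel_cols):
    """Cycle-by-cycle sketch of an output-stationary PE grid.

    ifm_rows:    (R, T, L) -- R row streams, T vectors of length L each
    kernel_cols: (C, T, L) -- C column streams, T vectors of length L each
    On each 'cycle' t, PE (r, c) performs one vector dot product and adds
    it to its local accumulator, mimicking one dot product per unit clock
    cycle. Real vector propagation through the array is not modeled.
    """
    R, T, _ = ifm_rows.shape
    C = kernel_cols.shape[0]
    acc = np.zeros((R, C))
    for t in range(T):                     # one iteration == one clock cycle
        for r in range(R):
            for c in range(C):
                acc[r, c] += ifm_rows[r, t] @ kernel_cols[c, t]
    return acc                             # R x C accumulated OFM values

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5, 8))         # 3 row streams, 5 vectors of length 8
B = rng.standard_normal((4, 5, 8))         # 4 column streams
out = output_stationary_grid(A, B)
ref = A.reshape(3, -1) @ B.reshape(4, -1).T
print(np.allclose(out, ref))               # True
```
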
  • Publication number: 20210263738
    Abstract: A method performs a pooling operation in a bitwise manner. The method includes performing a pooling operation on ternary data upon receiving an input ternary vector, receiving an input binary vector, providing fused hardware for performing the pooling operation on either the received binary or ternary data, and executing the pooling operation bitwise through the fused hardware.
    Type: Application
    Filed: February 26, 2021
    Publication date: August 26, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Arnab ROY, Kiran Kolar CHANDRASEKHARAN, Sehwan LEE
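
Publication 20210263738 above concerns pooling executed bitwise on binary or ternary data through fused hardware. The sketch below shows one way max pooling can reduce to bitwise OR/AND once ternary values are split into two bit-planes; the (pos, neg) encoding and the 1-D window layout are assumptions made for illustration, not the filing's encoding.

```python
import numpy as np

def ternary_to_bitplanes(x):
    """Encode ternary values {-1, 0, +1} as (pos, neg) bit-planes.

    This two-bit encoding is an assumed illustration; the filing does not
    spell out its encoding here.
    """
    return (x > 0).astype(np.uint8), (x < 0).astype(np.uint8)

def bitwise_max_pool(x, window=2):
    """1-D max pooling done purely with bitwise OR/AND on bit-planes.

    The input length must be a multiple of `window`. Binary {0, 1} inputs
    are the special case where the 'neg' plane is all zeros, so the same
    (fused) datapath serves both binary and ternary data.
    """
    pos, neg = ternary_to_bitplanes(np.asarray(x))
    pos = pos.reshape(-1, window)
    neg = neg.reshape(-1, window)
    pos_out = np.bitwise_or.reduce(pos, axis=1)   # any +1 in window -> +1
    neg_out = np.bitwise_and.reduce(neg, axis=1)  # all -1 in window -> -1
    return pos_out.astype(int) - neg_out.astype(int)

print(bitwise_max_pool([-1, 1, 0, -1, -1, -1], window=2))  # [ 1  0 -1]
print(bitwise_max_pool([0, 1, 1, 0], window=2))            # binary: [1 1]
```
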
  • Publication number: 20210117755
    Abstract: Disclosed is a hybrid traversal apparatus and method for a convolutional neural network (CNN) accelerator architecture that receives input feature map (IFM) microbatches from a pixel memory and kernel microbatches from a kernel memory, multiplies the IFM microbatches by the kernel microbatches while reusing the kernel microbatches based on a kernel reuse factor for at least one of a direct convolution (DConv) or a Winograd convolution (WgConv) to obtain output feature map (OFM) microbatches, and writes the generated OFM microbatches to the pixel memory after quantization, a non-linear function, and pooling are applied to the result of the multiplying.
    Type: Application
    Filed: September 25, 2020
    Publication date: April 22, 2021
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Gopinath Vasanth Mahale, Pramod Parameshwara Udupa, Kiran Kolar Chandrasekharan, Sehwan Lee
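
Publication 20210117755 above centers on a kernel reuse factor: each kernel microbatch fetched from kernel memory is reused across several IFM microbatches before the next fetch. The sketch below illustrates only that traffic-saving idea; the microbatch shapes, the plain matrix multiply standing in for DConv/WgConv, and the omission of the quantization, non-linear-function, and pooling stages are assumptions.

```python
import numpy as np

def hybrid_traversal(ifm_mbs, kern_mbs, kernel_reuse=4):
    """Blocked traversal with explicit kernel reuse.

    Each kernel microbatch fetched from 'kernel memory' is reused across
    `kernel_reuse` IFM microbatches from 'pixel memory' before the next
    kernel fetch, so kernel-memory traffic shrinks by the reuse factor.
    A matrix multiply stands in for the DConv/WgConv datapath.
    """
    ofm = {}
    kernel_fetches = 0
    for i0 in range(0, len(ifm_mbs), kernel_reuse):       # block of IFM microbatches
        for k, k_mb in enumerate(kern_mbs):
            kernel_fetches += 1                           # one fetch per block
            for i in range(i0, min(i0 + kernel_reuse, len(ifm_mbs))):
                ofm[(i, k)] = ifm_mbs[i] @ k_mb           # DConv/WgConv stand-in
    return ofm, kernel_fetches

ifm_mbs = [np.ones((2, 3)) for _ in range(8)]
kern_mbs = [np.ones((3, 2)) for _ in range(2)]
_, fetches = hybrid_traversal(ifm_mbs, kern_mbs, kernel_reuse=4)
print(fetches)   # 4 kernel fetches instead of 16 with kernel_reuse=1
```
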
  • Publication number: 20210027151
    Abstract: A processor-implemented method for generating Output Feature Map (OFM) channels using a Convolutional Neural Network (CNN) that includes a plurality of kernels includes generating at least one encoded Similar or Identical Inter-Kernel Weight (S/I-IKW) stream, converting similar and identical weights in the at least one non-pivot kernel to zero to introduce sparsity into the at least one non-pivot kernel, broadcasting at least one value to the at least one non-pivot kernel, and generating at least one OFM channel by accumulating at least one previous OFM value with any one or any combination of any two or more of: a convolution of non-zero weights of the pivot kernel and pixels of the Input Feature Map (IFM), the at least one broadcast value, and a convolution of non-zero weights of the at least one non-pivot kernel and pixels of the IFM.
    Type: Application
    Filed: July 22, 2020
    Publication date: January 28, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Pramod Parameshwara UDUPA, Kiran Kolar CHANDRASEKHARAN, Sehwan LEE
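
Publication 20210027151 above reuses weights that repeat across kernels: a pivot kernel is convolved normally, matching weights in non-pivot kernels are zeroed (adding sparsity), and the pivot's partial products are broadcast in their place. The sketch below shows the arithmetic for exact-equality reuse at a single pixel of a pointwise convolution; the S/I-IKW stream encoding and the handling of merely "similar" weights are not modeled, and all names are illustrative.

```python
import numpy as np

def sikw_pointwise(weights, ifm, pivot=0):
    """Pointwise convolution at one pixel using a pivot kernel and weight reuse.

    weights: (K, C) kernel weights; ifm: (C,) input pixels across channels.
    Weights in a non-pivot kernel that are identical to the pivot's weight
    at the same position are zeroed (introducing sparsity) and their
    contribution is taken from the pivot's already-computed partial
    products instead (the 'broadcast' in the abstract).
    """
    K, C = weights.shape
    pivot_products = weights[pivot] * ifm            # computed once, reusable
    ofm = np.empty(K)
    ofm[pivot] = pivot_products.sum()
    for k in range(K):
        if k == pivot:
            continue
        reuse = weights[k] == weights[pivot]         # identical inter-kernel weights
        residual = np.where(reuse, 0.0, weights[k])  # sparsified non-pivot kernel
        ofm[k] = residual @ ifm + pivot_products[reuse].sum()
    return ofm

W = np.array([[1.0, -2.0, 0.5,  3.0],
              [1.0,  4.0, 0.5, -1.0]])   # kernel 1 shares two weights with the pivot
x = np.array([2.0, 1.0, -1.0, 0.5])
print(sikw_pointwise(W, x))
print(W @ x)                              # reference: identical results
```
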
  • Publication number: 20200293858
    Abstract: A method and an apparatus for processing layers in a neural network fetch Input Feature Map (IFM) tiles of an IFM tensor and kernel tiles of a kernel tensor, perform a convolutional operation on the IFM tiles and the kernel tiles by exploiting IFM sparsity and kernel sparsity, and generate a plurality of Output Feature Map (OFM) tiles corresponding to the IFM tiles.
    Type: Application
    Filed: March 12, 2020
    Publication date: September 17, 2020
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Saptarsi DAS, Sabitha KUSUMA, Sehwan LEE, Ankur DESHWAL, Kiran Kolar CHANDRASEKHARAN