Patents by Inventor Bijoy Pazhanimala

Bijoy Pazhanimala has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11640537
Abstract: An apparatus to facilitate execution of non-linear function operations is disclosed. The apparatus comprises accelerator circuitry including a compute grid having a plurality of processing elements to execute neural network computations, store values resulting from the neural network computations, and perform piecewise linear (PWL) approximations of one or more non-linear functions using the stored values as input data.
    Type: Grant
    Filed: April 8, 2019
    Date of Patent: May 2, 2023
    Assignee: Intel Corporation
    Inventors: Bharat Daga, Krishnakumar Nair, Pradeep Janedula, Aravind Babu Srinivasan, Bijoy Pazhanimala, Ambili Vengallur
  • Patent number: 11544191
    Abstract: Hardware accelerators for accelerated grouped convolution operations. A first buffer of a hardware accelerator may receive a first row of an input feature map (IFM) from a memory. A first group comprising a plurality of tiles may receive a first row of the IFM. A plurality of processing elements of the first group may compute a portion of a first row of an output feature map (OFM) based on the first row of the IFM and a kernel. A second buffer of the accelerator may receive a third row of the IFM from the memory. A second group comprising a plurality of tiles may receive the third row of the IFM. A plurality of processing elements of the second group may compute a portion of a third row of the OFM based on the third row of the IFM and the kernel as part of a grouped convolution operation.
    Type: Grant
    Filed: March 26, 2020
    Date of Patent: January 3, 2023
Assignee: Intel Corporation
    Inventors: Ambili Vengallur, Bharat Daga, Pradeep K. Janedula, Bijoy Pazhanimala, Aravind Babu Srinivasan
  • Publication number: 20220043884
    Abstract: One embodiment provides a compute apparatus to perform machine learning operations, the compute apparatus comprising a hardware accelerator including a compute unit to perform a Winograd convolution, the compute unit configurable to perform the Winograd convolution for a first kernel size using a transform associated with a second kernel size.
    Type: Application
    Filed: April 22, 2021
    Publication date: February 10, 2022
    Applicant: Intel Corporation
    Inventors: Pradeep K. Janedula, Bijoy Pazhanimala, Bharat Daga, Saurabh M. Dhoble
  • Patent number: 10990648
    Abstract: One embodiment provides a compute apparatus to perform machine learning operations, the compute apparatus comprising a hardware accelerator including a compute unit to perform a Winograd convolution, the compute unit configurable to perform the Winograd convolution for a first kernel size using a transform associated with a second kernel size.
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: April 27, 2021
Assignee: Intel Corporation
    Inventors: Pradeep Janedula, Bijoy Pazhanimala, Bharat Daga, Saurabh Dhoble
  • Publication number: 20200320403
Abstract: An apparatus to facilitate execution of non-linear function operations is disclosed. The apparatus comprises accelerator circuitry including a compute grid having a plurality of processing elements to execute neural network computations, store values resulting from the neural network computations, and perform piecewise linear (PWL) approximations of one or more non-linear functions using the stored values as input data.
    Type: Application
    Filed: April 8, 2019
    Publication date: October 8, 2020
    Applicant: Intel Corporation
    Inventors: Bharat Daga, Krishnakumar Nair, Pradeep Janedula, Aravind Babu Srinivasan, Bijoy Pazhanimala, Ambili Vengallur
  • Publication number: 20200233803
    Abstract: Hardware accelerators for accelerated grouped convolution operations. A first buffer of a hardware accelerator may receive a first row of an input feature map (IFM) from a memory. A first group comprising a plurality of tiles may receive a first row of the IFM. A plurality of processing elements of the first group may compute a portion of a first row of an output feature map (OFM) based on the first row of the IFM and a kernel. A second buffer of the accelerator may receive a third row of the IFM from the memory. A second group comprising a plurality of tiles may receive the third row of the IFM. A plurality of processing elements of the second group may compute a portion of a third row of the OFM based on the third row of the IFM and the kernel as part of a grouped convolution operation.
    Type: Application
    Filed: March 26, 2020
    Publication date: July 23, 2020
    Applicant: Intel Corporation
Inventors: Ambili Vengallur, Bharat Daga, Pradeep K. Janedula, Bijoy Pazhanimala, Aravind Babu Srinivasan
  • Publication number: 20190042923
    Abstract: One embodiment provides a compute apparatus to perform machine learning operations, the compute apparatus comprising a hardware accelerator including a compute unit to perform a Winograd convolution, the compute unit configurable to perform the Winograd convolution for a first kernel size using a transform associated with a second kernel size.
    Type: Application
    Filed: August 7, 2017
    Publication date: February 7, 2019
    Applicant: Intel Corporation
Inventors: Pradeep Janedula, Bijoy Pazhanimala, Bharat Daga, Saurabh Dhoble
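Patent 11640537 (and its published application 20200320403) centers on piecewise linear (PWL) approximation of non-linear activation functions. The sketch below illustrates the general PWL technique in NumPy: precompute per-segment slopes and intercepts for a function, then evaluate any input with a table lookup and one multiply-add. The choice of sigmoid, the input range, and the 32 uniform segments are illustrative assumptions, not details taken from the patent.

```python
import numpy as np

def build_pwl_table(fn, lo, hi, segments):
    """Precompute breakpoints and per-segment slope/intercept for fn."""
    xs = np.linspace(lo, hi, segments + 1)
    ys = fn(xs)
    slopes = (ys[1:] - ys[:-1]) / (xs[1:] - xs[:-1])
    intercepts = ys[:-1] - slopes * xs[:-1]
    return xs, slopes, intercepts

def pwl_eval(x, xs, slopes, intercepts):
    """Evaluate the PWL approximation: locate the segment, apply y = m*x + b."""
    x = np.clip(x, xs[0], xs[-1])
    idx = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, len(slopes) - 1)
    return slopes[idx] * x + intercepts[idx]

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
xs, m, b = build_pwl_table(sigmoid, -6.0, 6.0, 32)

x = np.linspace(-6.0, 6.0, 1000)
err = np.max(np.abs(pwl_eval(x, xs, m, b) - sigmoid(x)))
```

With 32 segments the maximum approximation error for sigmoid over this range stays well below 0.01, which is why PWL tables are attractive for fixed-function accelerator datapaths.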
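Patent 11544191 (published as 20200233803) concerns grouped convolution, where each group of output channels convolves only its own slice of the input channels. A minimal, naive NumPy sketch of grouped 2-D convolution semantics (valid padding, stride 1) is shown below; the hardware tiling and row-buffering scheme described in the abstract is not modeled here.

```python
import numpy as np

def grouped_conv2d(ifm, kernels, groups):
    """Naive grouped 2-D convolution (valid padding, stride 1).

    ifm:     (C_in, H, W) input feature map
    kernels: (C_out, C_in // groups, kH, kW) filter weights
    Each group of output channels sees only its slice of input channels.
    """
    c_in, h, w = ifm.shape
    c_out, c_in_g, kh, kw = kernels.shape
    oh, ow = h - kh + 1, w - kw + 1
    out = np.zeros((c_out, oh, ow))
    c_out_g = c_out // groups
    for g in range(groups):
        # This group's slice of input channels.
        in_slice = ifm[g * c_in_g:(g + 1) * c_in_g]
        for oc in range(g * c_out_g, (g + 1) * c_out_g):
            for i in range(oh):
                for j in range(ow):
                    out[oc, i, j] = np.sum(
                        in_slice[:, i:i + kh, j:j + kw] * kernels[oc])
    return out
```

With `groups` equal to the channel count this degenerates to a depthwise convolution; with `groups=1` it is an ordinary dense convolution.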
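Patent 10990648 and publications 20220043884 and 20190042923 involve Winograd convolution, where a compute unit applies transform matrices so a convolution can be done with fewer multiplications. The sketch below shows the classic 1-D F(2,3) Winograd algorithm, which produces two outputs of a 3-tap filter with four multiplications instead of six; the matrices are the standard published ones, and the patent's specific trick of reusing one kernel size's transform for another kernel size is not reproduced here.

```python
import numpy as np

# Standard Winograd F(2,3) transform matrices.
B_T = np.array([[1,  0, -1,  0],   # input transform
                [0,  1,  1,  0],
                [0, -1,  1,  0],
                [0,  1,  0, -1]], dtype=float)
G = np.array([[1.0,  0.0, 0.0],    # filter transform
              [0.5,  0.5, 0.5],
              [0.5, -0.5, 0.5],
              [0.0,  0.0, 1.0]])
A_T = np.array([[1, 1,  1,  0],    # output transform
                [0, 1, -1, -1]], dtype=float)

def winograd_f23(d, g):
    """d: 4 input samples, g: 3 filter taps -> 2 convolution outputs.

    Elementwise product of transformed input and filter uses only
    4 multiplies, versus 6 for the direct sliding dot product.
    """
    return A_T @ ((G @ g) * (B_T @ d))

d = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 0.0, -1.0])

# Direct computation for comparison: two sliding dot products.
direct = np.array([d[0:3] @ g, d[1:4] @ g])
```

The 2-D case used in accelerators nests this construction (e.g., F(2x2, 3x3)), applying the transforms along both spatial axes.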