Patents by Inventor Mikhail Volkov

Mikhail Volkov has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20220027716
    Abstract: Neural network inference may be performed by an apparatus or integrated circuit configured to perform mathematical operations on activation data stored in an activation data memory and weight values stored in a weight memory, and to store the values resulting from those operations onto an accumulation memory. The apparatus is further configured to perform activation operations on the values stored in the accumulation memory, to store the resulting activation data onto the activation data memory, and to perform inference of a neural network by feeding and synchronizing instructions from an external memory. (A hedged code sketch of this dataflow follows the listing.)
    Type: Application
    Filed: October 4, 2021
    Publication date: January 27, 2022
    Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
  • Publication number: 20210357732
    Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph of a neural network having a plurality of layers, along with a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers chosen based on an estimate of the duration and energy consumption needed by the hardware chip to perform inference of the neural network by performing mathematical operations, sequentially by layer, on activation data of the corresponding portions of the layers in each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups. (A hedged code sketch of this grouping step follows the listing.)
    Type: Application
    Filed: February 26, 2021
    Publication date: November 18, 2021
    Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
  • Patent number: 11176449
    Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph of a neural network having a plurality of layers, along with a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers chosen based on an estimate of the duration and energy consumption needed by the hardware chip to perform inference of the neural network by performing mathematical operations, sequentially by layer, on activation data of the corresponding portions of the layers in each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: November 16, 2021
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
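The abstract for publication 20220027716 describes an accelerator that reads activation data and weights from dedicated memories, accumulates the results of mathematical operations in an accumulation memory, applies activation operations, and writes the results back to the activation data memory under an externally fed instruction stream. The following is a minimal Python sketch of that dataflow only, not the patented hardware: every name (run_layer, activation_mem, weight_mem, instruction_stream) and the choice of ReLU as the activation operation are assumptions made for illustration.

```python
# Hypothetical, simplified model of the dataflow described in publication
# 20220027716. All names and the ReLU activation are invented for
# illustration and do not come from the patent text.
import numpy as np


def run_layer(activation_mem, weight_mem, layer_id):
    """Run one layer: multiply-accumulate, then activation, then write back."""
    activations = activation_mem[layer_id]   # read from activation data memory
    weights = weight_mem[layer_id]           # read from weight memory

    # Mathematical operations on activation data and weight values; the
    # results land in an accumulation memory (modeled as a plain array).
    accumulation_mem = activations @ weights

    # Activation operation on the accumulated values (ReLU as a stand-in).
    new_activations = np.maximum(accumulation_mem, 0.0)

    # Resulting activation data is stored back onto the activation data memory.
    activation_mem[layer_id + 1] = new_activations
    return new_activations


# Instructions are fed from an "external memory" (here, a simple list) and
# executed in order, standing in for the feeding/synchronization step.
instruction_stream = [0, 1]                  # layer indices to execute
activation_mem = {0: np.random.rand(1, 8)}
weight_mem = {0: np.random.rand(8, 16), 1: np.random.rand(16, 4)}

for layer_id in instruction_stream:
    run_layer(activation_mem, weight_mem, layer_id)

print(activation_mem[2].shape)  # (1, 4)
```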
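The entries for publication 20210357732 and patent 11176449 describe dividing a network's layers into groups of sequential layers based on estimated duration and energy consumption on the hardware chip, then generating instructions to run inference group by group. The sketch below shows one plausible greedy grouping under per-group duration and energy budgets; the cost model, the budget values, and names such as LayerEstimate and divide_into_groups are assumptions made for illustration, not the method claimed in the patent.

```python
# Hypothetical sketch of hardware-specific division of inference into groups
# of sequential layers, in the spirit of publication 20210357732 / patent
# 11176449. The greedy heuristic and cost model below are invented for
# illustration; the patent does not specify this particular algorithm.
from dataclasses import dataclass


@dataclass
class LayerEstimate:
    name: str
    duration_ms: float   # estimated time on the hardware chip
    energy_mj: float     # estimated energy on the hardware chip


def divide_into_groups(layers, max_duration_ms, max_energy_mj):
    """Greedily pack sequential layers into groups while the per-group
    duration and energy estimates stay under the given budgets."""
    groups, current = [], []
    duration, energy = 0.0, 0.0
    for layer in layers:
        over_budget = (duration + layer.duration_ms > max_duration_ms or
                       energy + layer.energy_mj > max_energy_mj)
        if current and over_budget:
            groups.append(current)
            current, duration, energy = [], 0.0, 0.0
        current.append(layer)
        duration += layer.duration_ms
        energy += layer.energy_mj
    if current:
        groups.append(current)
    return groups


layers = [LayerEstimate("conv1", 1.2, 3.0), LayerEstimate("conv2", 2.5, 6.0),
          LayerEstimate("conv3", 0.8, 2.0), LayerEstimate("fc", 0.5, 1.5)]
for group in divide_into_groups(layers, max_duration_ms=3.0, max_energy_mj=8.0):
    print([layer.name for layer in group])
# Instructions would then be generated so the chip runs inference
# sequentially by group, and within each group sequentially by layer.
```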