Patents Assigned to EDGECORTIX PTE. LTD.
  • Patent number: 11657260
    Abstract: Neural network hardware acceleration data parallelism is performed by an integrated circuit including: a plurality of memory banks, each configured to store values and to transmit stored values; a plurality of computation units, each including a processor with circuitry configured to perform a mathematical operation on an input data value and a weight value to produce a resultant data value; and a computation controller configured to cause a value transmission to be received by more than one computation unit or memory bank.
    Type: Grant
    Filed: October 26, 2021
    Date of Patent: May 23, 2023
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Nikolay Nez, Oleg Khavin, Tanvir Ahmed, Jens Huthmann, Sakyasingha Dasgupta
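    The core idea of the abstract above — one value transmission received by more than one computation unit — can be sketched in software. This is an illustrative simulation, not the patented circuit; the class and method names (`ComputeUnit`, `broadcast`) are invented for the example.

    ```python
    class ComputeUnit:
        """Toy stand-in for a computation unit: multiplies an input by a
        weight and accumulates the resultant data value."""
        def __init__(self):
            self.acc = 0

        def mac(self, x, w):
            self.acc += x * w

    class ComputationController:
        """Toy stand-in for the computation controller: a single value
        transmission is received by every computation unit at once."""
        def __init__(self, units):
            self.units = units

        def broadcast(self, x, weights):
            # one transmission of x, consumed by all units in parallel,
            # each unit applying its own weight
            for unit, w in zip(self.units, weights):
                unit.mac(x, w)

    units = [ComputeUnit() for _ in range(4)]
    ComputationController(units).broadcast(2, [1, 2, 3, 4])
    print([u.acc for u in units])  # [2, 4, 6, 8]
    ```

    The broadcast avoids re-reading the same input from memory once per unit, which is the data-parallelism benefit the abstract describes.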
  • Patent number: 11521052
    Abstract: Hardware and neural architecture co-search may be performed by operations including obtaining a specification of a function and a plurality of hardware design parameters. The hardware design parameters include a memory capacity, a number of computational resources, a communication bandwidth, and a template configuration for performing neural architecture inference. The operations further include determining, for each neural architecture among a plurality of neural architectures, an overall latency of performance of inference of the neural architecture by an accelerator within the hardware design parameters. Each neural architecture has been trained to perform the function with an accuracy. The operations further include selecting, from among the plurality of neural architectures, a neural architecture based on the overall latency and the accuracy.
    Type: Grant
    Filed: June 30, 2021
    Date of Patent: December 6, 2022
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Sakyasingha Dasgupta, Weiwen Jiang, Yiyu Shi
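    The selection step described above — picking an architecture from its overall latency and accuracy — might look like the following minimal sketch. The candidate pool and its numbers are illustrative assumptions, not values from the patent.

    ```python
    # Hypothetical candidate architectures with pre-estimated metrics.
    candidates = [
        {"name": "arch_a", "latency_ms": 8.0,  "accuracy": 0.89},
        {"name": "arch_b", "latency_ms": 14.0, "accuracy": 0.93},
        {"name": "arch_c", "latency_ms": 22.0, "accuracy": 0.95},
    ]

    def select_architecture(candidates, latency_budget_ms):
        """Keep only architectures whose estimated overall latency fits the
        hardware budget, then pick the most accurate feasible one."""
        feasible = [c for c in candidates if c["latency_ms"] <= latency_budget_ms]
        if not feasible:
            raise ValueError("no architecture meets the latency budget")
        return max(feasible, key=lambda c: c["accuracy"])

    best = select_architecture(candidates, latency_budget_ms=15.0)
    print(best["name"])  # arch_b
    ```

    Here arch_c is the most accurate overall but exceeds the budget, so the co-search trades it away for arch_b — the kind of latency/accuracy trade-off the abstract describes.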
  • Patent number: 11188300
    Abstract: Preparation and execution of quantized scaling may be performed by operations including: obtaining an original array and a scaling factor representing a ratio of a size of the original array to a size of a scaled array; determining, for each column of the scaled array, a horizontal coordinate of each of two nearest elements in the horizontal dimension of the original array, and, for each row of the scaled array, a vertical coordinate of each of two nearest elements in the vertical dimension of the original array; calculating, for each row and each column of the scaled array, a linear interpolation coefficient; converting each value of the original array from a floating point number into a quantized number; converting each linear interpolation coefficient from a floating point number into a fixed point number; and storing, in a memory, the horizontal and vertical coordinates as integers, the values as quantized numbers, and the linear interpolation coefficients as fixed point numbers.
    Type: Grant
    Filed: June 18, 2021
    Date of Patent: November 30, 2021
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Oleg Khavin, Nikolay Nez, Sakyasingha Dasgupta, Antonio Tomas Nevado Vilchez
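    The precompute-then-interpolate flow above can be sketched for one axis (the patent applies it per row and per column). This is an assumed reading, not the claimed method: the source-position formula `s = i * r` and the 8-bit fixed-point format are illustrative choices.

    ```python
    def precompute_axis(orig_len, scaled_len, frac_bits=8):
        """Precompute, for each output index, the coordinates of the two
        nearest original elements (as integers) and the linear interpolation
        coefficient (as a fixed point number with frac_bits fractional bits)."""
        r = orig_len / scaled_len  # scaling factor: original size / scaled size
        lo, hi, coef = [], [], []
        for i in range(scaled_len):
            s = i * r                       # source position for output index i
            j = min(int(s), orig_len - 1)   # nearer of the two neighbours
            lo.append(j)
            hi.append(min(j + 1, orig_len - 1))
            coef.append(round((s - j) * (1 << frac_bits)))
        return lo, hi, coef

    def scale_axis(values_q, lo, hi, coef, frac_bits=8):
        """Interpolate quantized (integer) values using only integer
        arithmetic: multiply, add, and shift out the fractional bits."""
        one = 1 << frac_bits
        return [(values_q[j0] * (one - c) + values_q[j1] * c) >> frac_bits
                for j0, j1, c in zip(lo, hi, coef)]

    lo, hi, coef = precompute_axis(orig_len=2, scaled_len=4)
    print(scale_axis([0, 100], lo, hi, coef))  # [0, 50, 100, 100]
    ```

    Because coordinates, values, and coefficients are all stored as integers up front, the per-element scaling step needs no floating-point hardware, which is the point of the preparation phase.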
  • Patent number: 11176449
    Abstract: Neural network accelerator hardware-specific division of inference may be performed by operations including obtaining a computational graph of a neural network including a plurality of layers, and a hardware chip configuration. The operations also include dividing inference of the plurality of layers into a plurality of groups. Each group includes a number of sequential layers based on an estimate of duration and energy consumption by the hardware chip to perform inference of the neural network by performing the mathematical operations on activation data, sequentially by layer, of corresponding portions of layers of each group. The operations further include generating instructions for the hardware chip to perform inference of the neural network, sequentially by group, of the plurality of groups.
    Type: Grant
    Filed: February 26, 2021
    Date of Patent: November 16, 2021
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Nikolay Nez, Antonio Tomas Nevado Vilchez, Hamid Reza Zohouri, Mikhail Volkov, Oleg Khavin, Sakyasingha Dasgupta
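    The grouping step above can be illustrated with a greedy sketch: pack consecutive layers into a group while an estimated per-group cost (standing in for the duration/energy estimate in the abstract) stays within a budget. Both the cost model and the greedy policy are assumptions for illustration, not the patented division method.

    ```python
    def group_layers(layer_costs, group_budget):
        """Divide sequential layers into groups so that the summed estimated
        cost of each group stays within the budget. Returns lists of layer
        indices; inference would then run sequentially, group by group."""
        groups, current, current_cost = [], [], 0.0
        for idx, cost in enumerate(layer_costs):
            if current and current_cost + cost > group_budget:
                groups.append(current)       # close the full group
                current, current_cost = [], 0.0
            current.append(idx)
            current_cost += cost
        if current:
            groups.append(current)
        return groups

    print(group_layers([3, 2, 4, 1, 5], group_budget=6))  # [[0, 1], [2, 3], [4]]
    ```

    Keeping several sequential layers in one group lets activations stay on-chip between those layers, which is why a duration/energy estimate, rather than a fixed group size, drives the split.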
  • Patent number: 11144822
    Abstract: Neural network inference may be performed by configuration of a device including a plurality of convolution modules, a plurality of adder modules, an accumulation memory, and a convolution output interconnect control module configured to open and close convolution output interconnects among a plurality of convolution output interconnects connecting the plurality of convolution modules, the plurality of adder modules, and the accumulation memory. Inference may be performed while the device is configured according to at least one convolution output connection scheme whereby each convolution module has no more than one open direct connection through the plurality of convolution output interconnects to the accumulation memory or one of the plurality of adder modules. The device includes a convolution output interconnect control module to configure the plurality of convolution output interconnects according to the at least one convolution output connection scheme.
    Type: Grant
    Filed: January 4, 2021
    Date of Patent: October 12, 2021
    Assignee: EDGECORTIX PTE. LTD.
    Inventors: Nikolay Nez, Hamid Reza Zohouri, Oleg Khavin, Antonio Tomas Nevado Vilchez, Sakyasingha Dasgupta
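    The connection-scheme constraint above — each convolution module has no more than one open direct connection to the accumulation memory or to an adder module — can be expressed as a simple check over the set of open interconnects. The `(module, destination)` pair encoding is an illustrative assumption.

    ```python
    def is_valid_scheme(open_links):
        """Validate a convolution output connection scheme: every convolution
        module may appear as the source of at most one open interconnect
        (to the accumulation memory or to one adder module)."""
        sources = [conv for conv, _dest in open_links]
        return len(sources) == len(set(sources))

    # conv0 and conv2 feed an adder, conv1 writes straight to accumulation
    # memory; no module fans out twice, so the scheme is valid.
    scheme = [("conv0", "adder0"), ("conv1", "accum"), ("conv2", "adder0")]
    print(is_valid_scheme(scheme))  # True

    # conv0 with two open connections violates the scheme.
    print(is_valid_scheme([("conv0", "adder0"), ("conv0", "accum")]))  # False
    ```

    Note that several convolution modules may share one adder (fan-in is allowed); the constraint only forbids fan-out from a single convolution module.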