Patents Assigned to SiMa Technologies, Inc.
  • Patent number: 12067465
Abstract: A machine learning network is implemented by executing a computer program of instructions on a machine learning accelerator (MLA) comprising a plurality of interconnected storage elements (SEs) and processing elements (PEs). The instructions are partitioned into blocks, which are retrieved from off-chip memory. Each block includes a set of deterministic instructions to be executed by on-chip storage elements and/or processing elements according to a static schedule. Each block also specifies the number of non-deterministic instructions that must complete before its set of deterministic instructions may execute. These non-deterministic instructions may be instructions for storage elements to retrieve data from off-chip memory and are contained in one or more prior blocks. Execution of these non-deterministic instructions is counted, for example through the use of tokens.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: August 20, 2024
    Assignee: SiMa Technologies, Inc.
    Inventor: Subba Rao Venkata Kalari
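The counting mechanism described in this abstract can be illustrated with a small sketch. This is not SiMa's implementation; the class and method names below are hypothetical, and the sketch only shows the general idea: each block carries a token count, completions of non-deterministic instructions (such as off-chip fetches issued by prior blocks) are tallied, and the block's deterministic phase starts only once the count is reached.

```python
# Illustrative sketch (hypothetical names, not SiMa's API): a block's header
# states how many non-deterministic instructions must finish before its
# statically scheduled instructions may run. Completions arrive as tokens.

class TokenGate:
    """Holds a block's deterministic phase until enough tokens have arrived."""

    def __init__(self, tokens_required: int):
        self.tokens_required = tokens_required
        self.tokens_seen = 0

    def signal_completion(self) -> None:
        """Called when one non-deterministic instruction (e.g. a DRAM fetch) completes."""
        self.tokens_seen += 1

    def ready(self) -> bool:
        """True once the deterministic phase of this block may begin."""
        return self.tokens_seen >= self.tokens_required


gate = TokenGate(tokens_required=3)
assert not gate.ready()           # fetches from prior blocks still outstanding
for _ in range(3):                # three off-chip fetches complete
    gate.signal_completion()
assert gate.ready()               # deterministic phase may now start
```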
  • Patent number: 12026510
    Abstract: A machine learning accelerator (MLA) implemented on a semiconductor die includes a computing mesh of interconnected compute elements that includes storage elements (SEs) and processing elements (PEs). The compute elements execute a program of instructions to implement a machine learning network according to a static schedule for execution of the instructions. A compiler determines allowable time windows for the transfer of instructions and/or data from off-chip memory to the compute elements in order to fulfill the static schedule. If instructions/data are available before the time window opens, they are held until the window opens. If the window is about to close and the transfer of instructions/data is not yet complete, the execution of statically scheduled instructions is suspended to allow the transfer to complete within the window.
    Type: Grant
    Filed: January 10, 2023
    Date of Patent: July 2, 2024
Assignee: SiMa Technologies, Inc.
    Inventors: Subba Rao Venkata Kalari, Saurabh Jain
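The time-window logic in this abstract reduces to a three-way decision per transfer. The function below is a hedged sketch under assumed names (nothing here comes from the patent's claims): transfers ready before the window opens are held, transfers that would overrun the window stall the static schedule, and everything else proceeds.

```python
# Hedged sketch: one compiler-assigned time window per transfer of
# instructions/data from off-chip memory. Names are illustrative.

def transfer_action(now: int, window_open: int, window_close: int,
                    transfer_done: bool) -> str:
    if now < window_open:
        return "hold"              # arrived early: wait for the window to open
    if now >= window_close and not transfer_done:
        return "stall_schedule"    # suspend statically scheduled instructions
    return "proceed"               # transfer fits inside its window


assert transfer_action(now=5, window_open=10, window_close=20,
                       transfer_done=True) == "hold"
assert transfer_action(now=20, window_open=10, window_close=20,
                       transfer_done=False) == "stall_schedule"
assert transfer_action(now=15, window_open=10, window_close=20,
                       transfer_done=False) == "proceed"
```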
  • Patent number: 11989581
    Abstract: A method, system, and apparatus are disclosed herein for bridging a deterministic phase of instructions with a non-deterministic phase of instructions when those instructions are executed by a machine learning accelerator while executing a machine learning network. Specifically, data is transferred from off-chip memory to on-chip memory (non-deterministic phase of instructions). The data transfer involves determining whether certain on-chip memory is already storing data that has not been consumed yet (e.g., certain memory locations on-chip may be storing data for future consumption and should not be overwritten). Based on determining that the certain on-chip memory is not storing data that has not been consumed yet, the data may be transferred from the off-chip memory to the on-chip memory and the target memory locations may be marked as storing data that has not been consumed yet. The deterministic phase of instructions may be started subsequently.
    Type: Grant
    Filed: April 17, 2020
    Date of Patent: May 21, 2024
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Reed Kotler
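The overwrite check described here can be modeled as a buffer whose slots carry an "unconsumed" flag. The toy model below uses hypothetical names and is only a sketch of the bridging idea: a transfer into on-chip memory is refused while the target still holds unconsumed data, and a successful write marks the target unconsumed until a reader drains it.

```python
# Toy model (hypothetical, not SiMa's design): slots marked 'unconsumed'
# hold data awaiting future use and must not be overwritten.

class OnChipBuffer:
    def __init__(self, num_slots: int):
        self.unconsumed = [False] * num_slots
        self.data = [None] * num_slots

    def try_write(self, slot: int, value) -> bool:
        """Transfer from off-chip memory; refused if the slot is still live."""
        if self.unconsumed[slot]:
            return False
        self.data[slot] = value
        self.unconsumed[slot] = True   # mark as holding fresh, unconsumed data
        return True

    def consume(self, slot: int):
        """Deterministic phase reads the data, freeing the slot for reuse."""
        value = self.data[slot]
        self.unconsumed[slot] = False
        return value


buf = OnChipBuffer(num_slots=2)
assert buf.try_write(0, "weights")      # empty slot: write succeeds
assert not buf.try_write(0, "clobber")  # unconsumed data: write refused
assert buf.consume(0) == "weights"
assert buf.try_write(0, "next")         # consumed: slot reusable
```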
  • Patent number: 11886981
    Abstract: A compiler generates a computer program implementing a machine learning network on a machine learning accelerator (MLA) including interconnected processing elements. The computer program includes data transfer instructions for non-colliding data transfers between the processing elements. To generate the data transfer instructions, the compiler determines non-conflicting data transfer paths for data transfers based on a topology of the interconnections between processing elements, on dependencies of the instructions and on a duration for execution of the instructions. Each data transfer path specifies a routing and a time slot for the data transfer. The compiler generates data transfer instructions that specify routing of the data transfers and generates a static schedule that schedules execution of the data transfer instructions during the time slots for the data transfers.
    Type: Grant
    Filed: May 1, 2020
    Date of Patent: January 30, 2024
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Srivathsa Dhruvanarayan, Reed Kotler
  • Patent number: 11803740
    Abstract: A compiler manages memory usage in the machine learning accelerator by intelligently ordering computations of a machine learning network. The compiler identifies partial networks of the machine learning network representing portions of the machine learning network across multiple layers on which an output or set of outputs are dependent. Because any given output may depend on only a limited subset of intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs from each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to sequentially implement partial networks. As each layer of a partial network is completed, the intermediate outputs can be released from memory. The described technique enables intermediate outputs to be directly streamed between processing elements of the machine learning accelerator without requiring large transfers to and from external memory.
    Type: Grant
    Filed: February 8, 2023
    Date of Patent: October 31, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
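The reason a partial network touches only a small fraction of each layer is the familiar receptive-field argument, which a one-dimensional sketch makes concrete. The function below is illustrative only (the patent is not limited to 1-D convolutions or stride 1): walking back from one output through a chain of k-wide kernels shows that it depends on a narrow window of inputs, so intermediate outputs outside that window need never be held in memory for it.

```python
# Hedged sketch: for a 1-D chain of convolutional layers (stride 1, no
# padding), the layer-0 inputs one final output depends on form a small
# contiguous window -- the basis for scheduling a "partial network".

def receptive_field(output_index: int, kernel_sizes: list) -> tuple:
    """Return the (lo, hi) inclusive window of layer-0 inputs feeding one output."""
    lo = hi = output_index
    for k in reversed(kernel_sizes):   # walk back from output toward input
        hi += k - 1                    # each k-wide kernel widens the window
    return (lo, hi)


# Three layers of 3-wide kernels: one output depends on only 7 inputs,
# however wide the full input row is.
assert receptive_field(0, [3, 3, 3]) == (0, 6)
assert receptive_field(10, [3, 3, 3]) == (10, 16)
```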
  • Patent number: 11782757
    Abstract: A machine learning network is implemented by executing a computer program of instructions on a machine learning accelerator (MLA) comprising a plurality of interconnected storage elements (SEs) and processing elements (PEs). The instructions are partitioned into blocks, which are retrieved from off-chip memory. The block includes a set of deterministic instructions (MLA instructions) to be executed by on-chip storage elements and/or processing elements according to a static schedule from a compiler. The MLA instructions may require data retrieved from off-chip memory by memory access instructions contained in prior blocks. The compiler also schedules the memory access instructions in a manner that avoids contention for access to the off-chip memory. By avoiding contention, the execution time of off-chip memory accesses becomes predictable enough and short enough that the memory access instructions may be scheduled so that they are known to complete before the retrieved data is required.
    Type: Grant
    Filed: May 7, 2021
    Date of Patent: October 10, 2023
    Assignee: SiMa Technologies, Inc.
    Inventor: Reed Kotler
  • Patent number: 11734605
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The compiler allocates instructions of the computer program to different groups of processing elements (Tiles) for execution such that different groups of Tiles implement different layers of the machine learning network. The compiler may determine the size of the different groups based on a partial computation metric associated with the computations performed to implement the corresponding layer. Furthermore, the compiler may assign specific Tiles to each group based on a set of predefined layout constraints. The compiler may statically schedule at least a portion of the instructions into one or more deterministic phases for execution by the groups of Tiles.
    Type: Grant
    Filed: April 29, 2020
    Date of Patent: August 22, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
  • Patent number: 11734549
    Abstract: A compiler receives a description of a machine learning network (MLN) and generates a computer program that implements the MLN on a machine learning accelerator (MLA). To implement the MLN, the compiler generates compute instructions that implement computations of the MLN on different processing units (Tiles), and data transfer instructions that transfer data used in the computations. The compiler may statically schedule at least a portion of the instructions for execution by the Tiles according to fixed timing. The compiler may initially implement data transfers between non-adjacent Tiles (or external memories) by implementing a sequence of transfers through one or more intermediate Tiles (or external memories) in accordance with a set of default routing rules that dictates the data path. The computer program may then be simulated to identify routing conflicts. When routing conflicts are detected, the compiler updates the computer program in a manner that avoids the conflicts.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: August 22, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
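The "default routing rules, then simulate to find conflicts" flow can be sketched on a small mesh. The sketch below assumes dimension-ordered (x-then-y) routing and one hop per cycle purely for illustration; the patent does not specify these rules, and all names are hypothetical. A conflict is two transfers occupying the same directed link in the same cycle, which is what the simulation step would surface for the compiler to repair.

```python
# Hedged sketch: a default routing rule gives every transfer a fixed path
# through intermediate Tiles; replaying all paths cycle by cycle exposes
# link conflicts. Assumed rules: x-first routing, one hop per cycle.

def xy_route(src, dst):
    """Default rule: hop along x to dst's column, then along y to dst's row."""
    (x, y), path = src, []
    while x != dst[0]:
        nxt = (x + (1 if dst[0] > x else -1), y)
        path.append(((x, y), nxt))
        (x, y) = nxt
    while y != dst[1]:
        nxt = (x, y + (1 if dst[1] > y else -1))
        path.append(((x, y), nxt))
        (x, y) = nxt
    return path

def find_conflicts(transfers):
    """transfers: list of (start_cycle, src, dst). A conflict is two transfers
    on the same directed link in the same cycle."""
    used, conflicts = set(), []
    for start, src, dst in transfers:
        for hop_i, link in enumerate(xy_route(src, dst)):
            key = (start + hop_i, link)
            if key in used:
                conflicts.append(key)
            used.add(key)
    return conflicts


# Two transfers launched the same cycle share their first link and collide;
# staggering the second by one cycle resolves the conflict.
assert find_conflicts([(0, (0, 0), (2, 0)), (0, (0, 0), (1, 1))])
assert not find_conflicts([(0, (0, 0), (2, 0)), (1, (0, 0), (1, 1))])
```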
  • Patent number: 11631001
    Abstract: A system-on-chip (SoC) integrated circuit product includes a machine learning accelerator (MLA). It also includes other processor cores, such as general purpose processors and application-specific processors. It also includes a network-on-chip for communication between the different modules. The SoC implements a heterogeneous compute environment because the processor cores are customized for different purposes and typically will use different instruction sets. Applications may use some or all of the functionalities offered by the processor cores, and the processor cores may be programmed into different pipelines to perform different tasks.
    Type: Grant
    Filed: April 10, 2020
    Date of Patent: April 18, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Srivathsa Dhruvanarayan, Nishit Shah, Bradley Taylor, Moenes Zaher Iskarous
  • Patent number: 11586894
    Abstract: A compiler efficiently manages memory usage in the machine learning accelerator by intelligently ordering computations of a machine learning network. The compiler identifies a set of partial networks of the machine learning network representing portions of the machine learning network across multiple layers on which an output or set of outputs are dependent. Because any given output may depend on only a limited subset of intermediate outputs from the prior layers, each partial network may include only a small fraction of the intermediate outputs from each layer. Instead of implementing the MLN by computing one layer at a time, the compiler schedules instructions to sequentially implement partial networks. As each layer of a partial network is completed, the intermediate outputs can be released from memory. The described technique enables intermediate outputs to be directly streamed between processing elements of the machine learning accelerator without requiring large transfers to and from external memory.
    Type: Grant
    Filed: May 4, 2020
    Date of Patent: February 21, 2023
    Assignee: SiMa Technologies, Inc.
    Inventors: Reed Kotler, Nishit Shah
  • Patent number: 11488066
Abstract: Convolutions of an input sample with multiple kernels are decomposed into matrix multiplications of a V×C matrix of input values times a C×K matrix of kernel values, producing a V×K product. For the second matrix, C is a channel dimension (i.e., each row of the second matrix is a different channel of the input sample and kernel) and K is the kernel dimension (i.e., each column of the second matrix is a different kernel), but all the values correspond to the same pixel position in the kernel. In the matrix product, V is the output dimension and K is the kernel dimension. Thus, each value in the output matrix is a partial product for a certain output pixel and kernel, and the matrix multiplication parallelizes the convolutions by calculating partial products for multiple output pixels and multiple kernels.
    Type: Grant
    Filed: April 21, 2020
    Date of Patent: November 1, 2022
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Srivathsa Dhruvanarayan
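The decomposition this abstract describes can be checked numerically. The NumPy sketch below is an illustration of the math, not of SiMa's hardware: for each kernel pixel position (r, s), a V×C matrix of input values is multiplied by a C×K matrix of kernel values, and summing the resulting V×K partial products over all positions reproduces a direct multi-kernel convolution.

```python
import numpy as np

# Illustrative sketch of the V x C times C x K decomposition (valid
# convolution, stride 1); variable names are our own, not the patent's.

def conv_by_partial_products(x, w):
    """x: (C, H, W) input; w: (K, C, R, S) kernels; returns (K, Ho, Wo)."""
    C, H, W = x.shape
    K, _, R, S = w.shape
    Ho, Wo = H - R + 1, W - S + 1
    V = Ho * Wo                                   # number of output pixels
    out = np.zeros((V, K))
    for r in range(R):
        for s in range(S):
            # V x C matrix: one input value per (output pixel, channel)
            a = x[:, r:r + Ho, s:s + Wo].reshape(C, V).T
            # C x K matrix: one kernel value per (channel, kernel),
            # all for the same kernel pixel position (r, s)
            b = w[:, :, r, s].T
            out += a @ b                          # accumulate V x K partials
    return out.T.reshape(K, Ho, Wo)


# Check against a direct nested-loop convolution:
rng = np.random.default_rng(0)
x = rng.standard_normal((3, 5, 5))
w = rng.standard_normal((4, 3, 3, 3))
direct = np.zeros((4, 3, 3))
for k in range(4):
    for i in range(3):
        for j in range(3):
            direct[k, i, j] = np.sum(x[:, i:i + 3, j:j + 3] * w[k])
assert np.allclose(conv_by_partial_products(x, w), direct)
```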
  • Patent number: 11403519
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The computer program includes statically scheduled instructions that are executed by a mesh of processing elements (Tiles). The instructions executed by the Tiles are statically scheduled because the compiler can determine which instructions are executed by which Tiles at what times. For example, for the statically scheduled instructions, there are no conditions, branching or data dependencies that can be resolved only at run-time, and which would affect the timing and order of the execution of the instructions.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: August 2, 2022
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Reed Kotler, Srivathsa Dhruvanarayan, Moenes Zaher Iskarous, Kavitha Prasad, Yogesh Laxmikant Chobe, Sedny S. J Attia, Spenser Don Gilliland
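A static schedule of the kind described here can be pictured as a compile-time table mapping each instruction to a (Tile, cycle) slot. The validator below is a hedged sketch with invented names, assuming unit-duration instructions for simplicity: the schedule is valid when no Tile is double-booked and every instruction's prerequisites start earlier, so no timing needs resolving at run time.

```python
# Hedged sketch (hypothetical representation): a static schedule fixes which
# instruction each Tile executes at which cycle, decided entirely at compile
# time. Assumes every instruction takes one cycle.

def validate_schedule(schedule, deps):
    """schedule: {instr: (tile, start_cycle)}; deps: {instr: [prerequisites]}."""
    booked = set()
    for instr, slot in schedule.items():
        if slot in booked:
            return False            # two instructions on one Tile in one cycle
        booked.add(slot)
    for instr, prereqs in deps.items():
        for p in prereqs:
            if schedule[p][1] >= schedule[instr][1]:
                return False        # prerequisite would not finish in time
    return True


sched = {"load_a": ("tile0", 0), "load_b": ("tile1", 0), "mac": ("tile0", 1)}
assert validate_schedule(sched, {"mac": ["load_a", "load_b"]})
assert not validate_schedule({"i": ("tile0", 0), "j": ("tile0", 0)}, {})
```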
  • Patent number: 11354570
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The computer program includes statically scheduled instructions that are executed by a mesh of processing elements (Tiles). The instructions executed by the Tiles are statically scheduled because the compiler can determine which instructions are executed by which Tiles at what times. For example, for the statically scheduled instructions, there are no conditions, branching or data dependencies that can be resolved only at run-time, and which would affect the timing and order of the execution of the instructions.
    Type: Grant
    Filed: April 6, 2020
    Date of Patent: June 7, 2022
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Reed Kotler, Srivathsa Dhruvanarayan, Moenes Zaher Iskarous, Kavitha Prasad, Yogesh Laxmikant Chobe, Sedny S. J Attia, Spenser Don Gilliland
  • Patent number: 11321607
    Abstract: A compiler receives a description of a machine learning network and generates a computer program that implements the machine learning network. The computer program includes statically scheduled instructions that are executed by a mesh of processing elements (Tiles). The instructions executed by the Tiles are statically scheduled because the compiler can determine which instructions are executed by which Tiles at what times. For example, for the statically scheduled instructions, there are no conditions, branching or data dependencies that can be resolved only at run-time, and which would affect the timing and order of the execution of the instructions.
    Type: Grant
    Filed: April 3, 2020
    Date of Patent: May 3, 2022
    Assignee: SiMa Technologies, Inc.
    Inventors: Nishit Shah, Reed Kotler, Srivathsa Dhruvanarayan, Moenes Zaher Iskarous, Kavitha Prasad, Yogesh Laxmikant Chobe, Sedny S. J Attia, Spenser Don Gilliland