Patents by Inventor Marko Vitez

Marko Vitez has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches of the compilation techniques described in these abstracts follow the listing.

  • Patent number: 11861337
    Abstract: A method of compiling neural network code to executable instructions for execution by a computational acceleration system having a memory circuit and one or more acceleration circuits having a maps data buffer and a kernel data buffer is disclosed, such as for execution by an inference engine circuit architecture which includes a matrix-matrix (MM) accelerator circuit having multiple operating modes to provide a complete matrix multiplication. A representative compiling method includes generating a list of neural network layer model objects; fusing available functions and layers in the list; selecting a cooperative mode, an independent mode, or a combined cooperative and independent mode for execution; selecting a data movement mode and an ordering of computations which reduces usage of the memory circuit; generating an ordered sequence of load objects, compute objects, and store objects; and converting the ordered sequence of load objects, compute objects, and store objects into the executable instructions.
    Type: Grant
    Filed: August 26, 2020
    Date of Patent: January 2, 2024
    Assignee: Micron Technology, Inc.
    Inventors: Andre Xian Ming Chang, Aliasger Zaidy, Eugenio Culurciello, Marko Vitez
  • Publication number: 20220147811
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler can identify a plurality of portions of an artificial neural network for implementation on a plurality of such integrated circuit devices respectively. The compiler converts a description of the artificial neural network into a plurality of compiler outputs executable on the plurality of devices to generate an output of the artificial neural network responsive to an input to the artificial neural network. Intermediate results are communicated among the devices in generating the output of the artificial neural network.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Jaime Cummins, Marko Vitez, Eugenio Culurciello, Andre Xian Ming Chang, Aliasger Tayeb Zaidy
  • Publication number: 20220147810
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A computing device running a compiler can interact with and/or probe an integrated circuit device to identify hardware characteristics of the integrated circuit device in performing matrix computations. The compiler can generate and optimize a result of compilation from a description of an artificial neural network based at least in part on the hardware characteristics of the integrated circuit device. The result of compilation can include first data representative of parameters of the artificial neural network and second data representative of instructions executable by the integrated circuit device to generate an output of the artificial neural network based on the first data and an input to the artificial neural network.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
  • Publication number: 20220147812
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler has an artificial neural network configured to identify an optimized compilation option for an artificial neural network to be compiled by the compiler and/or for a hardware platform of Deep Learning Accelerators. The artificial neural network of the compiler can be trained via machine learning to identify the optimized compilation option based on the features of the artificial neural network to be compiled and/or the features of the hardware platform on which the compiler output will be executed.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Marko Vitez, Michael Cody Glapa, Abhishek Chaurasia, Eugenio Culurciello
  • Publication number: 20220147808
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler can convert a description of an artificial neural network into a generic result of compilation according to a specification of a generic Deep Learning Accelerator and then map the generic result of compilation into a platform-specific result according to a specification of a specific hardware platform of Deep Learning Accelerators. The platform-specific result can be stored into the RAM of the integrated circuit device to enable the integrated circuit device to autonomously perform the computation of the artificial neural network in generating an output in response to an input to the artificial neural network.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Eugenio Culurciello, Jaime Cummins, Marko Vitez
  • Publication number: 20220147813
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory (RAM). A compiler is configured to generate instructions executable by the Deep Learning Accelerator from a description of a target artificial neural network. The instructions may call routines in a runtime library that has an embedded artificial neural network configured to predict optimized execution options available to implement the routines. The prediction is based at least in part on a pattern of data being processed in the target artificial neural network and/or a pattern of usages of the routines by the instructions.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Andre Xian Ming Chang, Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello
  • Publication number: 20220147809
    Abstract: Systems, devices, and methods related to a Deep Learning Accelerator and memory are described. For example, an integrated circuit device may be configured to execute instructions with matrix operands and configured with random access memory. A compiler can convert a description of an artificial neural network into a compiler output through optimization and/or selection of hardware options of the integrated circuit device. The compiler output can include parameters of the artificial neural network, instructions executable by processing units of the Deep Learning Accelerator to generate an output of the artificial neural network responsive to an input to the artificial neural network, and hardware options to be stored in registers connected to control hardware configurations of the processing units.
    Type: Application
    Filed: November 6, 2020
    Publication date: May 12, 2022
    Inventors: Aliasger Tayeb Zaidy, Marko Vitez, Eugenio Culurciello, Jaime Cummins, Andre Xian Ming Chang
  • Publication number: 20220066760
    Abstract: A method of compiling neural network code to executable instructions for execution by a computational acceleration system having a memory circuit and one or more acceleration circuits having a maps data buffer and a kernel data buffer is disclosed, such as for execution by an inference engine circuit architecture which includes a matrix-matrix (MM) accelerator circuit having multiple operating modes to provide a complete matrix multiplication. A representative compiling method includes generating a list of neural network layer model objects; fusing available functions and layers in the list; selecting a cooperative mode, an independent mode, or a combined cooperative and independent mode for execution; selecting a data movement mode and an ordering of computations which reduces usage of the memory circuit; generating an ordered sequence of load objects, compute objects, and store objects; and converting the ordered sequence of load objects, compute objects, and store objects into the executable instructions.
    Type: Application
    Filed: August 26, 2020
    Publication date: March 3, 2022
    Inventors: Andre Xian Ming Chang, Aliasger Zaidy, Eugenio Culurciello, Marko Vitez
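
Illustrative code sketches

Patent 11861337 and its parent application 20220066760 describe a compilation pipeline: build a list of layer model objects, fuse functions into layers, select an operating mode, schedule an ordered sequence of load/compute/store objects, and emit executable instructions. The following is a minimal Python sketch of that flow; every class, function, and mode name is an assumption made for illustration, not the actual Micron implementation.

```python
# Hypothetical sketch of the pipeline in patent 11861337 / application
# 20220066760. All names and the fusion rule are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum, auto

class Mode(Enum):
    COOPERATIVE = auto()   # accelerator circuits work on one layer together
    INDEPENDENT = auto()   # each circuit works on its own layer
    COMBINED = auto()      # mixture of both

@dataclass
class Layer:
    name: str
    op: str                          # e.g. "conv", "relu"
    fused_ops: list = field(default_factory=list)

def fuse(layers):
    """Fuse element-wise functions (e.g. ReLU) into the preceding layer."""
    fused = []
    for layer in layers:
        if layer.op in ("relu", "bias_add") and fused:
            fused[-1].fused_ops.append(layer.op)
        else:
            fused.append(layer)
    return fused

def schedule(layers, mode):
    """Emit an ordered sequence of load / compute / store objects."""
    sequence = []
    for layer in layers:
        sequence.append(("LOAD", layer.name))       # maps + kernel data to buffers
        sequence.append(("COMPUTE", layer.name, mode.name,
                         "+".join(layer.fused_ops) or "-"))
        sequence.append(("STORE", layer.name))      # results back to memory circuit
    return sequence

def to_instructions(sequence):
    """Convert load/compute/store objects into executable instruction strings."""
    return [" ".join(str(part) for part in step) for step in sequence]

layers = [Layer("conv1", "conv"), Layer("relu1", "relu"), Layer("conv2", "conv")]
for instr in to_instructions(schedule(fuse(layers), Mode.COOPERATIVE)):
    print(instr)
```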
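
Application 20220147811 describes splitting one artificial neural network across several Deep Learning Accelerator devices, compiling one output per device, and passing intermediate results between them. A minimal sketch of that partition-compile-pipeline flow, with the partitioning rule and placeholder arithmetic assumed for illustration:

```python
# Hypothetical sketch of application 20220147811: one network portion per
# device, intermediate results handed from device to device.

def partition(layers, num_devices):
    """Split an ordered layer list into contiguous portions, one per device."""
    size = -(-len(layers) // num_devices)  # ceiling division
    return [layers[i:i + size] for i in range(0, len(layers), size)]

def compile_portion(portion):
    """Stand-in for compiling one portion into a device-executable output."""
    def run(x):
        for _layer in portion:
            x = x * 2 + 1      # placeholder for the layer's matrix computation
        return x
    return run

def run_pipeline(compiled, x):
    """Each device consumes the previous device's intermediate result."""
    for device_program in compiled:
        x = device_program(x)  # intermediate result sent to the next device
    return x

layers = ["conv1", "conv2", "fc1", "fc2"]
compiled = [compile_portion(p) for p in partition(layers, num_devices=2)]
print(run_pipeline(compiled, 1.0))
```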
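
Application 20220147810 describes a compiler probing an integrated circuit device for its hardware characteristics in performing matrix computations and optimizing compilation around them. A toy sketch, assuming a probe interface and a tile-size heuristic that are not in the source:

```python
# Hypothetical sketch of application 20220147810: probe, then tune.

class Device:
    """Stand-in for an integrated circuit device answering probe queries."""
    def __init__(self, matrix_unit_size, ram_bytes):
        self.matrix_unit_size = matrix_unit_size
        self.ram_bytes = ram_bytes

    def probe(self, characteristic):
        return getattr(self, characteristic)

def compile_for(device, layer_dims):
    """Pick a tile size matching the probed matrix unit that fits in RAM."""
    unit = device.probe("matrix_unit_size")
    ram = device.probe("ram_bytes")
    tiles = []
    for rows, cols in layer_dims:
        tile = unit
        while tile * tile * 4 > ram:   # shrink until a 4-byte-element tile fits
            tile //= 2
        tiles.append(((rows, cols), tile))
    return tiles

device = Device(matrix_unit_size=64, ram_bytes=8192)
print(compile_for(device, [(256, 256), (512, 128)]))
```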
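
Application 20220147812 describes a compiler whose embedded, machine-learned model identifies an optimized compilation option from features of the network to be compiled and of the hardware platform. The sketch below substitutes a toy linear scorer for the trained artificial neural network; the option names, features, and weights are invented for illustration:

```python
# Hypothetical sketch of application 20220147812: a learned model inside the
# compiler scores compilation options from network/platform features.

OPTIONS = ["small_tiles", "large_tiles", "fuse_aggressive"]

# Toy "trained" weights: one weight vector per option over three features
# (layer_count, avg_matrix_dim, ram_kb), learned offline in the real system.
WEIGHTS = {
    "small_tiles":     [0.1, -0.02, -0.001],
    "large_tiles":     [-0.05, 0.03, 0.002],
    "fuse_aggressive": [0.08, 0.01, 0.0],
}

def predict_option(features):
    """Return the compilation option with the highest learned score."""
    def score(option):
        return sum(w * f for w, f in zip(WEIGHTS[option], features))
    return max(OPTIONS, key=score)

# Features of the network to be compiled and the target hardware platform.
print(predict_option([20, 128, 4096]))   # -> "large_tiles" with these weights
```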
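
Application 20220147808 describes two-stage compilation: first against a specification of a generic Deep Learning Accelerator, then mapping the generic result onto a specific hardware platform. A minimal sketch, with both specifications and the op names assumed:

```python
# Hypothetical sketch of application 20220147808: generic compile, then
# platform-specific mapping. Specs and op names are illustrative only.

GENERIC_OPS = {"conv": "GEN_MATMUL", "fc": "GEN_MATMUL", "relu": "GEN_EWISE"}

PLATFORM_SPECS = {
    "dla_v1": {"GEN_MATMUL": "MM32", "GEN_EWISE": "VEC_OP"},
    "dla_v2": {"GEN_MATMUL": "MM64", "GEN_EWISE": "SIMD_OP"},
}

def compile_generic(layers):
    """Stage 1: network description -> generic result of compilation."""
    return [GENERIC_OPS[layer] for layer in layers]

def map_to_platform(generic_result, platform):
    """Stage 2: generic result -> platform-specific instructions for the RAM."""
    spec = PLATFORM_SPECS[platform]
    return [spec[op] for op in generic_result]

generic = compile_generic(["conv", "relu", "fc"])
print(map_to_platform(generic, "dla_v1"))  # ['MM32', 'VEC_OP', 'MM32']
print(map_to_platform(generic, "dla_v2"))  # ['MM64', 'SIMD_OP', 'MM64']
```

One generic result can thus be retargeted to either platform without recompiling from the network description.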
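
Application 20220147813 describes runtime-library routines that consult an embedded predictor to choose among execution options based on the pattern of the data being processed. In the sketch below, a simple sparsity threshold stands in for the embedded artificial neural network; all names are illustrative:

```python
# Hypothetical sketch of application 20220147813: a library routine picks its
# implementation via an embedded predictor examining the data pattern.

def predict_execution_option(data):
    """Embedded predictor: pick an execution option from the data pattern."""
    zero_fraction = sum(1 for x in data if x == 0) / len(data)
    return "sparse_kernel" if zero_fraction > 0.5 else "dense_kernel"

def accumulate_routine(data):
    """Runtime-library routine with multiple implementations to choose from."""
    if predict_execution_option(data) == "sparse_kernel":
        return sum(x for x in data if x != 0)   # skip zero entries
    return sum(data)                            # straight dense accumulation

print(accumulate_routine([0, 0, 0, 5, 0, 2]))   # takes the sparse path
print(accumulate_routine([1, 2, 3, 4, 5, 6]))   # takes the dense path
```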
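
Application 20220147809 describes a compiler output bundling the network's parameters, instructions executable by the Deep Learning Accelerator's processing units, and hardware options to be written into configuration registers. A minimal sketch, with all field and register names assumed:

```python
# Hypothetical sketch of application 20220147809: parameters + instructions +
# register-backed hardware options in one compiler output.
from dataclasses import dataclass

@dataclass
class CompilerOutput:
    parameters: dict        # weights/biases of the network
    instructions: list      # executable by the DLA processing units
    hardware_options: dict  # register name -> value

def compile_network(description):
    # Parsing of the description is elided in this sketch.
    return CompilerOutput(
        parameters={"conv1.weight": [0.5, -0.5]},
        instructions=["LOAD conv1", "MATMUL conv1", "STORE conv1"],
        hardware_options={"REG_PRECISION": "int8", "REG_CLOCK_MODE": "turbo"},
    )

def deploy(output, registers):
    """Write hardware options into the registers controlling the units."""
    registers.update(output.hardware_options)
    return registers

out = compile_network("tiny-net")
print(deploy(out, {}))
print(out.instructions)
```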