Patents by Inventor Vladimir KIBARDIN

Vladimir KIBARDIN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230125522
    Abstract: Techniques in optimized placement for efficiency for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines optimized placement based on a description of a neural network. The determined placement is used to configure the routers, including usage of the respective colors (virtual channels). The determined placement is used to configure the compute elements, including the respective programmed instructions that each is configured to execute.
    Type: Application
    Filed: October 30, 2020
    Publication date: April 27, 2023
    Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
  • Publication number: 20230071424
    Abstract: Techniques in placement of compute and memory for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines placement of compute resources and memory resources based on a description of a neural network. The determined placement is used to configure the routers, including usage of the respective colors (virtual channels). The determined placement is used to configure the compute elements, including the respective programmed instructions that each is configured to execute.
    Type: Application
    Filed: October 29, 2020
    Publication date: March 9, 2023
    Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
  • Publication number: 20220374288
    Abstract: Techniques in distributed placement of linear operators for accelerated deep learning provide improvements in one or more of accuracy, performance, and energy efficiency. An array of processing elements comprising a portion of a neural network accelerator performs flow-based computations on wavelets of data. Each processing element comprises a compute element to execute programmed instructions using the data and a router to route the wavelets. The routing is in accordance with virtual channel specifiers of the wavelets and controlled by routing configuration information of the router. A software stack determines distributed placement of linear operators based on a description of a neural network. The determined placement is used to configure the routers, including usage of the respective colors (virtual channels). The determined placement is used to configure the compute elements, including the respective programmed instructions that each is configured to execute.
    Type: Application
    Filed: October 30, 2020
    Publication date: November 24, 2022
    Inventors: Vladimir KIBARDIN, Michael Edwin JAMES, Michael MORRISON, Sean LIE, Gary R. LAUTERBACH, Stanislav FUNIAK
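The abstracts above all describe the same pipeline: a software stack takes a neural-network description, places operators onto an array of processing elements, and uses that placement to configure each element's router (per-color routes for wavelets) and compute element (the instructions it will execute). The sketch below illustrates that idea only; it is not from the patents, and every name in it (`ProcessingElement`, `place`, `configure_routes`, the greedy row-major strategy) is a hypothetical stand-in for the actual placement algorithms the filings cover.

```python
# Illustrative sketch of placement-driven configuration, assuming a 2D PE
# array, one operator per PE, and one "color" (virtual channel) per
# producer->consumer edge. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class Router:
    # color -> (dx, dy): direction to forward wavelets of that color
    routes: dict = field(default_factory=dict)

@dataclass
class ProcessingElement:
    x: int
    y: int
    program: list = field(default_factory=list)   # operator names to execute
    router: Router = field(default_factory=Router)

def place(operators, width, height):
    """Greedy row-major placement: assign operators to PEs in dataflow order."""
    pes = [[ProcessingElement(x, y) for x in range(width)] for y in range(height)]
    placement = {}
    for i, op in enumerate(operators):
        x, y = i % width, i // width
        pes[y][x].program.append(op)        # configure the compute element
        placement[op] = (x, y)
    return pes, placement

def configure_routes(pes, placement, edges):
    """Give each producer->consumer edge a distinct color and point the
    producer's router one hop toward the consumer."""
    for color, (src, dst) in enumerate(edges):
        sx, sy = placement[src]
        tx, ty = placement[dst]
        step = ((tx > sx) - (tx < sx), (ty > sy) - (ty < sy))
        pes[sy][sx].router.routes[color] = step
    return pes

ops = ["conv1", "relu1", "conv2", "softmax"]
edges = [("conv1", "relu1"), ("relu1", "conv2"), ("conv2", "softmax")]
pes, placement = place(ops, width=2, height=2)
configure_routes(pes, placement, edges)
```

A real placer would optimize for wire length, congestion, and memory capacity rather than filling the array row by row, but the output is the same shape: per-PE programs plus per-color routing tables.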