Patents by Inventor Stephen Sangho YOUN

Stephen Sangho YOUN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230342320
    Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed-function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed-function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed-function datapath may be customizable based on the DNN or the operation.
    Type: Application
    Filed: July 5, 2023
    Publication date: October 26, 2023
    Inventors: Stephen Sangho YOUN, Steven Karl REINHARDT, Jeremy Halden FOWERS, Lok Chand KOPPAKA, Kalin OVTCHAROV
  • Publication number: 20230305967
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in the datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place local variables in the proper memory bank.
    Type: Application
    Filed: May 30, 2023
    Publication date: September 28, 2023
    Inventors: Stephen Sangho YOUN, Steven Karl REINHARDT, Hui GENG
  • Patent number: 11734214
    Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed-function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed-function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed-function datapath may be customizable based on the DNN or the operation.
    Type: Grant
    Filed: March 25, 2021
    Date of Patent: August 22, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Jeremy Halden Fowers, Lok Chand Koppaka, Kalin Ovtcharov
  • Patent number: 11704251
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in the datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place local variables in the proper memory bank.
    Type: Grant
    Filed: April 27, 2022
    Date of Patent: July 18, 2023
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Hui Geng
  • Publication number: 20220253384
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in the datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place local variables in the proper memory bank.
    Type: Application
    Filed: April 27, 2022
    Publication date: August 11, 2022
    Inventors: Stephen Sangho YOUN, Steven Karl REINHARDT, Hui GENG
  • Publication number: 20220245083
    Abstract: The present disclosure relates to devices for using a configurable stacked architecture for a fixed-function datapath with an accelerator for accelerating an operation or a layer of a deep neural network (DNN). The stacked architecture may have a fixed-function datapath that includes one or more configurable micro-execution units that execute a series of vector, scalar, reduction, broadcasting, and normalization operations for a DNN layer operation. The fixed-function datapath may be customizable based on the DNN or the operation.
    Type: Application
    Filed: March 25, 2021
    Publication date: August 4, 2022
    Inventors: Stephen Sangho YOUN, Steven Karl REINHARDT, Jeremy Halden FOWERS, Lok Chand KOPPAKA, Kalin OVTCHAROV
  • Patent number: 11347652
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in the datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place local variables in the proper memory bank.
    Type: Grant
    Filed: November 13, 2020
    Date of Patent: May 31, 2022
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Stephen Sangho Youn, Steven Karl Reinhardt, Hui Geng
  • Publication number: 20220066943
    Abstract: The present disclosure relates to devices and methods for using a banked memory structure with accelerators. The devices and methods may segment and isolate dataflows in the datapath and memory of the accelerator. The devices and methods may provide each data channel with its own register memory bank. The devices and methods may use a memory address decoder to place local variables in the proper memory bank.
    Type: Application
    Filed: November 13, 2020
    Publication date: March 3, 2022
    Inventors: Stephen Sangho YOUN, Steven Karl REINHARDT, Hui GENG
  • Publication number: 20210312266
    Abstract: Deep neural network (DNN) accelerators with independent datapaths for simultaneous processing of different classes of operations, and related methods, are described. An example DNN accelerator includes an instruction dispatcher for receiving chains of instructions having both instructions for performing a first class of operations and a second class of operations corresponding to a neural network model. The DNN accelerator further includes a first datapath and a second datapath, each configured to execute at least one instruction chain locally before outputting any results. The instruction dispatcher is configured to forward instructions for performing the first class of operations to the first datapath and instructions for performing the second class of operations to the second datapath, so that the performance of at least a subset of the first class of operations overlaps in time with the performance of at least a subset of the second class of operations.
    Type: Application
    Filed: April 1, 2020
    Publication date: October 7, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Stephen Sangho YOUN, Lok Chand KOPPAKA, Steven Karl REINHARDT
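The stacked fixed-function datapath described in publication 20230342320, patent 11734214, and publication 20220245083 can be loosely illustrated in software. The following is a hypothetical sketch only, not the patented hardware design: every name here (`vector_add`, `build_datapath`, and so on) is invented for illustration, and the actual invention stacks configurable micro-execution units in silicon rather than composing Python functions.

```python
import math
from typing import Callable, List

Vector = List[float]
Stage = Callable[[Vector], Vector]

def vector_add(bias: Vector) -> Stage:
    """Vector stage: element-wise addition with a bias vector."""
    return lambda x: [a + b for a, b in zip(x, bias)]

def scalar_mul(s: float) -> Stage:
    """Scalar stage: multiply every element by one scalar."""
    return lambda x: [a * s for a in x]

def reduce_sum() -> Stage:
    """Reduction stage: collapse the vector to its sum."""
    return lambda x: [sum(x)]

def broadcast(n: int) -> Stage:
    """Broadcasting stage: replicate a single value across n lanes."""
    return lambda x: x * n if len(x) == 1 else x

def normalize() -> Stage:
    """Normalization stage: scale the vector to unit L2 norm."""
    def f(x: Vector) -> Vector:
        norm = math.sqrt(sum(a * a for a in x)) or 1.0
        return [a / norm for a in x]
    return f

def build_datapath(stages: List[Stage]) -> Stage:
    """Stack the configured micro-execution units into one datapath."""
    def run(x: Vector) -> Vector:
        for stage in stages:
            x = stage(x)
        return x
    return run

# Two different "layer operations" built from the same unit library,
# mirroring how the datapath may be customized per DNN layer:
scaled_norm = build_datapath([scalar_mul(0.5), normalize()])
sum_bcast = build_datapath([vector_add([1.0, 1.0]), reduce_sum(), broadcast(4)])

print(scaled_norm([3.0, 4.0]))  # [0.6, 0.8]
print(sum_bcast([1.0, 2.0]))    # [5.0, 5.0, 5.0, 5.0]
```

The point of the sketch is the configurability: the same small set of vector, scalar, reduction, broadcasting, and normalization units is restacked per operation instead of building one monolithic pipeline.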
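The banked memory structure recited in patents 11704251 and 11347652 (and the corresponding publications) can be sketched similarly. Again this is an illustrative model with invented names (`BankedMemory`, the high-bits/low-bits decode), not the actual decoder described in the claims: it only shows the idea of giving each data channel its own register bank and using an address decoder to route a variable to the proper bank.

```python
class BankedMemory:
    """Toy model: one isolated register bank per data channel."""

    def __init__(self, num_channels: int, bank_size: int):
        self.bank_size = bank_size
        self.banks = [[0] * bank_size for _ in range(num_channels)]

    def decode(self, address: int):
        """Address decoder (illustrative): high bits of the address
        select the bank/channel, low bits select the word inside it."""
        return address // self.bank_size, address % self.bank_size

    def write(self, address: int, value: int) -> None:
        bank, offset = self.decode(address)
        self.banks[bank][offset] = value

    def read(self, address: int) -> int:
        bank, offset = self.decode(address)
        return self.banks[bank][offset]

mem = BankedMemory(num_channels=4, bank_size=256)
mem.write(3 * 256 + 7, 42)    # decodes to channel 3's bank, word 7
print(mem.read(3 * 256 + 7))  # 42; the other three banks are untouched
```

Because each channel's variables decode into a disjoint bank, one channel's dataflow cannot clobber another's, which is the isolation property the abstracts describe.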
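The dispatch scheme in publication 20210312266 can also be mimicked in software: a dispatcher forwards each instruction in a chain to the datapath that handles its class, so the two classes execute concurrently. The two operation classes ("matrix" vs. "vector"), the queue-and-thread model, and all identifiers below are assumptions made for this sketch, not details from the application.

```python
from queue import Queue
from threading import Thread

MATRIX, VECTOR = "matrix", "vector"  # two hypothetical operation classes

def datapath(name: str, q: Queue, log: list) -> None:
    """Each datapath drains its own queue, executing instructions
    locally; here 'executing' just records the instruction."""
    while True:
        instr = q.get()
        if instr is None:  # shutdown sentinel
            break
        log.append((name, instr))

def dispatch(chain, matrix_q: Queue, vector_q: Queue) -> None:
    """Instruction dispatcher: route each instruction in the chain
    to the datapath for its class."""
    for op_class, instr in chain:
        (matrix_q if op_class == MATRIX else vector_q).put(instr)

matrix_q, vector_q = Queue(), Queue()
log: list = []
workers = [Thread(target=datapath, args=("matrix", matrix_q, log)),
           Thread(target=datapath, args=("vector", vector_q, log))]
for w in workers:
    w.start()

# One chain mixing both classes; the two datapaths overlap in time.
chain = [(MATRIX, "gemm0"), (VECTOR, "bias0"),
         (MATRIX, "gemm1"), (VECTOR, "relu0")]
dispatch(chain, matrix_q, vector_q)

for q in (matrix_q, vector_q):
    q.put(None)  # signal each datapath to stop
for w in workers:
    w.join()

print(sorted(log))  # all four instructions ran, split by class
```

The design point the abstract emphasizes, and that this toy mirrors, is that neither datapath waits on the other: routing by operation class is what lets the two classes overlap.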