Patents by Inventor Jintaek KANG

Jintaek KANG has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO). Short illustrative code sketches of the three invention families described in the abstracts appear after the listing.

  • Publication number: 20230342311
    Abstract: An accelerator, an operation method of the accelerator, and an accelerator system including the accelerator are disclosed. The operation method includes receiving one or more workloads assigned on an accelerator, determining reuse data of the workloads based on hardware resource information and/or a memory access cost of the accelerator when a plurality of processing units included in the accelerator performs the workloads, and providing a result of performing the workloads.
    Type: Application
    Filed: June 22, 2023
    Publication date: October 26, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD, SNU R&DB FOUNDATION
    Inventors: Seung Wook LEE, Soojung RYU, Jintaek KANG, Sunjung LEE
  • Patent number: 11763153
    Abstract: A processor-implemented neural network method includes: generating a bit vector based on whether each of a plurality of input activations within a neural network is 0; merging the bit vector into the input activations such that bit values within the neural network included in the bit vector are most significant bits (MSBs) of multi bit expressions of the input activations; merging the bit vector into weights such that the bit values included in the bit vector are MSBs of multi bit expressions of the weights; sorting the input activations and the weights based on bits corresponding to the MSBs; and implementing the neural network, including performing operations between the sorted input activations and the sorted weights.
    Type: Grant
    Filed: October 12, 2022
    Date of Patent: September 19, 2023
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Yoojin Kim, Soonhoi Ha, Donghyun Kang, Jintaek Kang
  • Patent number: 11726929
    Abstract: An accelerator, an operation method of the accelerator, and an accelerator system including the accelerator are disclosed. The operation method includes receiving one or more workloads assigned by a host controller, determining reuse data of the workloads based on hardware resource information and/or a memory access cost of the accelerator when a plurality of processing units included in the accelerator performs the workloads, and providing a result of performing the workloads.
    Type: Grant
    Filed: February 2, 2021
    Date of Patent: August 15, 2023
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Seung Wook Lee, Soojung Ryu, Jintaek Kang, Sunjung Lee
  • Publication number: 20230229931
    Abstract: A processor-implemented method of a neural network includes obtaining intermediate pooling results, respectively corresponding to sub-pooling kernels obtained by decomposing an original pooling kernel, by performing a pooling operation on input pixels included in a current window in an input feature map with the sub-pooling kernels, obtaining a final pooling result corresponding to the current window by post-processing the intermediate pooling results, and determining an output pixel value of an output feature map, based on the final pooling result, wherein the current window is determined according to the original pooling kernel having been slid, according to a raster scan order, in the input feature map.
    Type: Application
    Filed: March 18, 2023
    Publication date: July 20, 2023
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Hyunsun PARK, Soonhoi HA, Donghyun KANG, Jintaek KANG
  • Patent number: 11640538
    Abstract: A processor-implemented method of a neural network includes obtaining intermediate pooling results, respectively corresponding to sub-pooling kernels obtained by decomposing an original pooling kernel, by performing a pooling operation on input pixels included in a current window in an input feature map with the sub-pooling kernels, obtaining a final pooling result corresponding to the current window by post-processing the intermediate pooling results, and determining an output pixel value of an output feature map, based on the final pooling result, wherein the current window is determined according to the original pooling kernel having been slid, according to a raster scan order, in the input feature map.
    Type: Grant
    Filed: March 23, 2020
    Date of Patent: May 2, 2023
    Assignees: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Hyunsun Park, Soonhoi Ha, Donghyun Kang, Jintaek Kang
  • Publication number: 20230031471
    Abstract: A processor-implemented neural network method includes: generating a bit vector based on whether each of a plurality of input activations within a neural network is 0; merging the bit vector into the input activations such that bit values within the neural network included in the bit vector are most significant bits (MSBs) of multi bit expressions of the input activations; merging the bit vector into weights such that the bit values included in the bit vector are MSBs of multi bit expressions of the weights; sorting the input activations and the weights based on bits corresponding to the MSBs; and implementing the neural network, including performing operations between the sorted input activations and the sorted weights.
    Type: Application
    Filed: October 12, 2022
    Publication date: February 2, 2023
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Yoojin KIM, Soonhoi HA, Donghyun KANG, Jintaek KANG
  • Patent number: 11501166
    Abstract: A processor-implemented neural network method includes: generating a bit vector based on whether each of a plurality of input activations within a neural network is 0; merging the bit vector into the input activations such that bit values within the neural network included in the bit vector are most significant bits (MSBs) of multi bit expressions of the input activations; merging the bit vector into weights such that the bit values included in the bit vector are MSBs of multi bit expressions of the weights; sorting the input activations and the weights based on bits corresponding to the MSBs; and implementing the neural network, including performing operations between the sorted input activations and the sorted weights.
    Type: Grant
    Filed: April 24, 2020
    Date of Patent: November 15, 2022
    Assignees: Samsung Electronics Co., Ltd., SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Yoojin Kim, Soonhoi Ha, Donghyun Kang, Jintaek Kang
  • Publication number: 20210263865
    Abstract: An accelerator, an operation method of the accelerator, and an accelerator system including the accelerator are disclosed. The operation method includes receiving one or more workloads assigned by a host controller, determining reuse data of the workloads based on hardware resource information and/or a memory access cost of the accelerator when a plurality of processing units included in the accelerator performs the workloads, and providing a result of performing the workloads.
    Type: Application
    Filed: February 2, 2021
    Publication date: August 26, 2021
    Applicants: SAMSUNG ELECTRONICS CO., LTD, SNU R&DB FOUNDATION
    Inventors: Seung Wook LEE, Soojung RYU, Jintaek KANG, Sunjung LEE
  • Publication number: 20210117781
    Abstract: A processor-implemented neural network method includes: generating a bit vector based on whether each of a plurality of input activations within a neural network is 0; merging the bit vector into the input activations such that bit values within the neural network included in the bit vector are most significant bits (MSBs) of multi bit expressions of the input activations; merging the bit vector into weights such that the bit values included in the bit vector are MSBs of multi bit expressions of the weights; sorting the input activations and the weights based on bits corresponding to the MSBs; and implementing the neural network, including performing operations between the sorted input activations and the sorted weights.
    Type: Application
    Filed: April 24, 2020
    Publication date: April 22, 2021
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Yoojin KIM, Soonhoi HA, Donghyun KANG, Jintaek KANG
  • Publication number: 20210097403
    Abstract: A processor-implemented method of a neural network includes obtaining intermediate pooling results, respectively corresponding to sub-pooling kernels obtained by decomposing an original pooling kernel, by performing a pooling operation on input pixels included in a current window in an input feature map with the sub-pooling kernels, obtaining a final pooling result corresponding to the current window by post-processing the intermediate pooling results, and determining an output pixel value of an output feature map, based on the final pooling result, wherein the current window is determined according to the original pooling kernel having been slid, according to a raster scan order, in the input feature map.
    Type: Application
    Filed: March 23, 2020
    Publication date: April 1, 2021
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Hyunsun PARK, Soonhoi HA, Donghyun KANG, Jintaek KANG
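
Several of the records above (patent numbers 11763153 and 11501166, publication numbers 20230031471 and 20210117781) share an abstract describing a bit-vector scheme for skipping zero input activations. The following Python/NumPy sketch only illustrates that idea under simplifying assumptions (dense 1-D arrays and software sorting rather than the hardware described in the patent documents); the function and variable names are not taken from the patents.

    import numpy as np

    def zero_skip_dot(activations, weights):
        """Minimal sketch of the bit-vector idea from the abstract: flag each
        nonzero input activation, use the flag as a sort key (conceptually the
        MSB of a tagged representation) so that nonzero activation/weight pairs
        become contiguous, then multiply-accumulate only those pairs."""
        activations = np.asarray(activations)
        weights = np.asarray(weights)

        # Bit vector: 1 where the input activation is nonzero, 0 where it is zero.
        bit_vector = (activations != 0).astype(np.int64)

        # Sorting by the flag groups all zero pairs together so they can be skipped.
        order = np.argsort(bit_vector, kind="stable")
        sorted_act = activations[order]
        sorted_wgt = weights[order]
        sorted_bits = bit_vector[order]

        # Only the pairs whose flag is 1 contribute to the dot product.
        start = np.searchsorted(sorted_bits, 1)
        return float(np.dot(sorted_act[start:], sorted_wgt[start:]))

    # Example: three of the six activations are zero, so only three MACs are needed.
    acts = np.array([0, 3, 0, 1, 0, 2])
    wgts = np.array([5, 2, 7, 4, 1, 3])
    print(zero_skip_dot(acts, wgts))  # 3*2 + 1*4 + 2*3 = 16.0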
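
Patent number 11640538 and publication numbers 20230229931 and 20210097403 describe decomposing an original pooling kernel into sub-pooling kernels whose intermediate results are post-processed into the final pooling result. The sketch below illustrates that idea for max pooling with 1-by-k row sub-kernels; the choice of decomposition, the stride handling, and the names are illustrative assumptions, not the claimed implementation.

    import numpy as np

    def max_pool_by_rows(feature_map, k=3, stride=1):
        """Sketch of decomposing a k x k max-pooling kernel into k 1-by-k
        sub-pooling kernels. Each sub-kernel yields an intermediate pooling
        result for one row of the current window, and a post-processing step
        (a max over the intermediates) produces the final pooling result.
        Windows are visited in raster scan order over the input feature map."""
        h, w = feature_map.shape
        out_h = (h - k) // stride + 1
        out_w = (w - k) // stride + 1
        out = np.empty((out_h, out_w), dtype=feature_map.dtype)

        for oy in range(out_h):        # raster scan: each output row ...
            for ox in range(out_w):    # ... is filled left to right
                y0, x0 = oy * stride, ox * stride
                # One intermediate pooling result per 1-by-k sub-kernel (row).
                intermediates = [feature_map[y0 + r, x0:x0 + k].max() for r in range(k)]
                # Post-processing: combine the intermediates into the final result.
                out[oy, ox] = max(intermediates)
        return out

    # Example: 4 x 4 input, 3 x 3 original kernel -> 2 x 2 output feature map.
    fm = np.arange(16).reshape(4, 4)
    print(max_pool_by_rows(fm))  # [[10 11]
                                 #  [14 15]]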
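
Patent number 11726929 and publication numbers 20230342311 and 20210263865 describe determining reuse data for assigned workloads from hardware resource information and a memory access cost. The sketch below is a toy cost-model reading of that abstract; the dictionary fields, the cost constants, and the greedy selection are assumptions made for illustration only, not the patented method.

    def choose_reuse_data(workload, hw, dram_cost_per_byte=1.0, sram_cost_per_byte=0.01):
        """Toy reading of the abstract: given hardware resource information
        (on-chip memory size) and a simple memory-access cost model, pick which
        tensor of the workload ('weights' or 'activations') to keep on chip as
        reuse data. Field names and the cost model are illustrative assumptions."""
        candidates = {}
        for name, size_bytes, reuse_count in [
            ("weights", workload["weight_bytes"], workload["weight_reuse"]),
            ("activations", workload["activation_bytes"], workload["activation_reuse"]),
        ]:
            if size_bytes > hw["sram_bytes"]:
                continue  # does not fit in on-chip memory, so it cannot be reused there
            # Saving: (reuse_count - 1) repeat fetches served from SRAM instead of DRAM.
            saving = (reuse_count - 1) * size_bytes * (dram_cost_per_byte - sram_cost_per_byte)
            candidates[name] = saving
        # Keep the tensor with the largest modeled saving, if any candidate fits.
        return max(candidates, key=candidates.get) if candidates else None

    # Example: the weights fit on chip and are reused across the processing units.
    hw = {"sram_bytes": 2 * 1024 * 1024}
    wl = {"weight_bytes": 1_000_000, "weight_reuse": 16,
          "activation_bytes": 4_000_000, "activation_reuse": 4}
    print(choose_reuse_data(wl, hw))  # -> weights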