Patents by Inventor Jaehyeong SIM

Jaehyeong SIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12175299
    Abstract: A computing device and method are disclosed. The computing device includes a plurality of processing cores, and a tile scheduler configured to update a cost matrix of each of the plurality of processing cores based on meta information of each of first tiles previously allocated to the plurality of processing cores and meta information of each of second tiles, and allocate the second tiles with respect to the plurality of processing cores using the updated cost matrix of each of the plurality of processing cores.
    Type: Grant
    Filed: April 6, 2021
    Date of Patent: December 24, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Jae-Eon Jo, Hyung-Dal Kwon, Hanmin Park, Jaehyeong Sim, Seung Wook Lee
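The abstract above describes per-core cost tracking followed by tile assignment. A minimal sketch of that flow, assuming a simple additive cost model and illustrative names (the patent does not specify the cost function or data structures):

```python
# Hypothetical tile-scheduling sketch: each core's cost is updated from tile
# metadata, then each new ("second") tile goes to the lowest-cost core.
# The cost model (load + tile size) is an assumption for illustration.

def update_cost(core_load, tile_size):
    # Assumed cost: the core's current load plus the candidate tile's size.
    return core_load + tile_size

def schedule_tiles(core_loads, second_tiles):
    """Greedily assign each (tile_id, tile_size) to the cheapest core."""
    allocation = {}
    for tile_id, tile_size in second_tiles:
        costs = [update_cost(load, tile_size) for load in core_loads]
        best = costs.index(min(costs))   # core with the lowest updated cost
        allocation[tile_id] = best
        core_loads[best] += tile_size    # the allocation updates future costs
    return allocation
```

With two idle cores and tiles of size 4 and 2, the first tile lands on core 0 and the second on the now-cheaper core 1, showing how each allocation feeds back into the cost update.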
  • Patent number: 12130756
    Abstract: An accelerator, a method of operating the accelerator, and an electronic device including the accelerator. A method of operating the accelerator configured to perform a target operation includes packing input data with a data layout determined based on a word width of a memory in the accelerator and a spatial size of a filter to be applied to the target operation and storing the packed input data in the memory, and performing the target operation between a portion of the input data stored in a same word in the memory and weights of the filter.
    Type: Grant
    Filed: August 3, 2023
    Date of Patent: October 29, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hanmin Park, Hyung-Dal Kwon, Jaehyeong Sim, Seungwook Lee, Jae-Eon Jo
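The packing idea in the abstract above, sketched under illustrative assumptions: input elements needed for one filter application are grouped into a single memory word, so the multiply-accumulate can read one word per output. The word width, padding, and function names are assumptions, not the patented layout:

```python
# Illustrative data-packing sketch: one memory word holds the sliding window
# needed for one filter application, padded to the (assumed) word width.

WORD_WIDTH = 4  # elements per memory word (assumed for illustration)

def pack_input(data, filter_size):
    """Pack each sliding window of `filter_size` elements into one word."""
    words = []
    for i in range(len(data) - filter_size + 1):
        window = data[i:i + filter_size]
        # Zero-pad so every word has the full word width.
        words.append(window + [0] * (WORD_WIDTH - filter_size))
    return words

def convolve_packed(words, weights):
    """Apply the filter weights to each packed word and accumulate."""
    return [sum(w * x for w, x in zip(weights, word)) for word in words]
```

Because each output needs exactly one packed word, the operation avoids gathering a window from multiple memory locations per step, which is the layout benefit the abstract points at.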
  • Publication number: 20240004809
    Abstract: An accelerator, a method of operating the accelerator, and an electronic device including the accelerator. A method of operating the accelerator configured to perform a target operation includes packing input data with a data layout determined based on a word width of a memory in the accelerator and a spatial size of a filter to be applied to the target operation and storing the packed input data in the memory, and performing the target operation between a portion of the input data stored in a same word in the memory and weights of the filter.
    Type: Application
    Filed: August 3, 2023
    Publication date: January 4, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hanmin PARK, Hyung-Dal KWON, Jaehyeong SIM, Seungwook LEE, Jae-Eon JO
  • Patent number: 11741026
    Abstract: An accelerator, a method of operating the accelerator, and an electronic device including the accelerator. A method of operating the accelerator configured to perform a target operation includes packing input data with a data layout determined based on a word width of a memory in the accelerator and a spatial size of a filter to be applied to the target operation and storing the packed input data in the memory, and performing the target operation between a portion of the input data stored in a same word in the memory and weights of the filter.
    Type: Grant
    Filed: February 23, 2021
    Date of Patent: August 29, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hanmin Park, Hyung-Dal Kwon, Jaehyeong Sim, Seungwook Lee, Jae-Eon Jo
  • Publication number: 20220164164
    Abstract: An apparatus with deep learning includes: a systolic adder tree including adder trees connected in row and column directions; and an input multiplexer connected to an input register of at least one of the adder trees and configured to determine column directional data movement between the adder trees based on operation modes.
    Type: Application
    Filed: June 24, 2021
    Publication date: May 26, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hyung-Dal KWON, Ho Young KIM, Hanmin PARK, Jaehyeong SIM, Seung Wook LEE, Jae-Eon JO
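The structure described above can be modeled in a few lines: each adder tree reduces its inputs pairwise, and a mode-controlled multiplexer decides whether a tree also consumes the neighboring tree's partial sum (the column-directional movement). This is a purely behavioral sketch of a hardware datapath; the names and modes are illustrative:

```python
# Toy behavioral model of a systolic adder-tree column with a mode mux.

def adder_tree(values):
    """Pairwise reduction, mirroring what a binary adder tree computes."""
    while len(values) > 1:
        values = [values[i] + (values[i + 1] if i + 1 < len(values) else 0)
                  for i in range(0, len(values), 2)]
    return values[0]

def systolic_column(trees_inputs, mode):
    """In 'chain' mode each tree's sum flows into the next tree's input;
    in 'independent' mode the trees compute separately (mux selects)."""
    carry = 0
    sums = []
    for inputs in trees_inputs:
        s = adder_tree(list(inputs)) + (carry if mode == "chain" else 0)
        sums.append(s)
        carry = s
    return sums
```

Selecting the mode changes only the mux routing, not the trees themselves, which matches the abstract's claim that operation modes determine the column-directional data movement.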
  • Publication number: 20220083390
    Abstract: A computing device and method are disclosed. The computing device includes a plurality of processing cores, and a tile scheduler configured to update a cost matrix of each of the plurality of processing cores based on meta information of each of first tiles previously allocated to the plurality of processing cores and meta information of each of second tiles, and allocate the second tiles with respect to the plurality of processing cores using the updated cost matrix of each of the plurality of processing cores.
    Type: Application
    Filed: April 6, 2021
    Publication date: March 17, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Jae-Eon JO, Hyung-Dal KWON, Hanmin PARK, Jaehyeong SIM, Seung Wook LEE
  • Publication number: 20220066960
    Abstract: An accelerator, a method of operating the accelerator, and an electronic device including the accelerator. A method of operating the accelerator configured to perform a target operation includes packing input data with a data layout determined based on a word width of a memory in the accelerator and a spatial size of a filter to be applied to the target operation and storing the packed input data in the memory, and performing the target operation between a portion of the input data stored in a same word in the memory and weights of the filter.
    Type: Application
    Filed: February 23, 2021
    Publication date: March 3, 2022
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hanmin PARK, Hyung-Dal KWON, Jaehyeong SIM, Seungwook LEE, Jae-Eon JO
  • Patent number: 10909418
    Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
    Type: Grant
    Filed: May 27, 2020
    Date of Patent: February 2, 2021
    Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Sehwan Lee, Leesup Kim, Hyeonuk Kim, Jaehyeong Sim, Yeongjae Choi
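The decomposition in the abstract above can be sketched concretely: a binary-weight kernel is split into a same-sign base sub-kernel plus a correction sub-kernel whose nonzero entries restore the original weights, so the two convolutions sum to the original one. The {-1, +1} weight set and names are illustrative assumptions:

```python
# Sketch of binary-weight kernel decomposition into a same-sign base
# sub-kernel and a sparse correction sub-kernel.

def decompose_kernel(kernel):
    """Split a {-1, +1} kernel into an all-(+1) base and a correction term."""
    base = [1 for _ in kernel]                          # first sub-kernel
    correction = [k - b for k, b in zip(kernel, base)]  # second sub-kernel
    return base, correction

def convolve(window, kernel):
    return sum(x * w for x, w in zip(window, kernel))

def convolve_decomposed(window, base, correction):
    # Base term reduces to a plain sum of the window; the correction is
    # nonzero only where the original weight was -1, so it is sparse.
    return convolve(window, base) + convolve(window, correction)
```

Combining the two partial results reproduces the direct convolution exactly, which is the equivalence the method relies on when combining results into the output feature map.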
  • Publication number: 20200285887
    Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
    Type: Application
    Filed: May 27, 2020
    Publication date: September 10, 2020
    Applicants: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Sehwan LEE, Leesup KIM, Hyeonuk KIM, Jaehyeong SIM, Yeongjae CHOI
  • Patent number: 10699160
    Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
    Type: Grant
    Filed: August 23, 2018
    Date of Patent: June 30, 2020
    Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Sehwan Lee, Leesup Kim, Hyeonuk Kim, Jaehyeong Sim, Yeongjae Choi
  • Publication number: 20190065896
    Abstract: A processor-implemented neural network method includes: obtaining, from a memory, data of an input feature map and kernels having a binary-weight, wherein the kernels are to be processed in a layer of a neural network; decomposing each of the kernels into a first type sub-kernel reconstructed with weights of a same sign, and a second type sub-kernel for correcting a difference between a respective kernel, among the kernels, and the first type sub-kernel; performing a convolution operation by using the input feature map and the first type sub-kernels and the second type sub-kernels decomposed from each of the kernels; and obtaining an output feature map by combining results of the convolution operation.
    Type: Application
    Filed: August 23, 2018
    Publication date: February 28, 2019
    Applicants: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Sehwan LEE, Leesup KIM, Hyeonuk KIM, Jaehyeong SIM, Yeongjae CHOI