Patents by Inventor Hyeonuk SIM

Hyeonuk SIM has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240143274
    Abstract: A neural network operation apparatus and method are disclosed. A neural network operation apparatus includes a receiver that receives data for a neural network operation, and a processor that performs a scaling operation by multiplying the data by a constant, performs a rounding operation by truncating bits forming a result of the scaling operation, performs a scaling back operation based on a result of the rounding operation, and generates a neural network operation result by accumulating results of the scaling back operation.
    Type: Application
    Filed: May 3, 2023
    Publication date: May 2, 2024
    Inventors: Hyeonuk SIM, Jongeun LEE, Azat AZAMAT
  • Publication number: 20240046082
    Abstract: A neural network device including an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Applicants: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
  • Patent number: 11829862
    Abstract: A neural network device includes: an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: November 28, 2023
    Assignees: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok Yu, Hyeonuk Sim, Jongeun Lee
  • Publication number: 20230085442
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 16, 2023
    Applicants: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
  • Patent number: 11531893
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: December 20, 2022
    Assignees: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok Yu, Hyeonuk Sim, Jongeun Lee
  • Publication number: 20220284262
    Abstract: A neural network operation apparatus and method implementing quantization is disclosed. The neural network operation method may include receiving a weight of a neural network, a candidate set of quantization points, and a bitwidth for representing the weight, extracting a subset of quantization points from the candidate set of quantization points based on the bitwidth, calculating a quantization loss based on the weight of the neural network and the subset of quantization points, and generating a target subset of quantization points based on the quantization loss.
    Type: Application
    Filed: July 6, 2021
    Publication date: September 8, 2022
    Applicants: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Sehwan LEE, Hyeonuk SIM, Jongeun LEE
  • Publication number: 20210174177
    Abstract: A neural network device includes: an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Application
    Filed: June 5, 2020
    Publication date: June 10, 2021
    Applicants: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
  • Publication number: 20200380360
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Application
    Filed: June 2, 2020
    Publication date: December 3, 2020
    Applicants: Samsung Electronics Co., Ltd., UNIST (Ulsan National Institute of Science and Technology)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
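The scale, round, and scale-back pipeline described in publication 20240143274 can be sketched in NumPy. This is a minimal illustration of the idea, not the patented circuit: the scale constant, the number of truncated bits, and the function name are assumptions chosen for the example.

```python
import numpy as np

def scaled_rounding_accumulate(data, scale=256, shift=4):
    """Illustrative sketch: scale, truncate low bits, scale back, accumulate."""
    # Scaling operation: multiply the data by a constant and go to integers.
    scaled = np.round(data * scale).astype(np.int64)
    # Rounding operation: truncate the low-order bits of the scaled result.
    truncated = scaled >> shift
    # Scaling back: undo both the truncation step size and the scale constant.
    scaled_back = truncated.astype(np.float64) * (2 ** shift) / scale
    # Generate the operation result by accumulating the scaled-back values.
    return scaled_back.sum()
```

With these parameters the per-element error is bounded by the truncation step `2**shift / scale`, which is what makes the accumulation well behaved.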
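The single-port buffer scheme of patent 11829862 (and publications 20240046082 and 20210174177) keeps a layer's input and output feature maps together in one on-chip buffer reached through one port. A software model of that access pattern, with region bases, sizes, and the toy per-element computation all being illustrative assumptions:

```python
class SinglePortBuffer:
    """Models an on-chip buffer memory reached through a single port."""
    def __init__(self, size):
        self.mem = [0.0] * size
        self.port_accesses = 0  # every read or write goes through the one port

    def read(self, addr):
        self.port_accesses += 1
        return self.mem[addr]

    def write(self, addr, value):
        self.port_accesses += 1
        self.mem[addr] = value


def run_layer(buf, in_base, out_base, n, weight):
    # Receive the layer's input feature map through the single port,
    # compute, and store the output feature map through the same port,
    # so both feature maps coexist in the one buffer.
    for i in range(n):
        x = buf.read(in_base + i)
        buf.write(out_base + i, x * weight)


buf = SinglePortBuffer(8)
for i in range(4):                # load the first layer's input feature map
    buf.write(i, float(i + 1))
run_layer(buf, in_base=0, out_base=4, n=4, weight=2.0)  # layer 1
run_layer(buf, in_base=4, out_base=0, n=4, weight=0.5)  # layer 2 swaps regions
```

The region swap between layers is the key point: the output of one layer sits in the buffer and becomes the next layer's input without ever leaving the chip.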
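The two-level log quantization of patent 11531893 (and publications 20230085442 and 20200380360) quantizes a parameter to a power of two, then log-quantizes the residual error whenever that error exceeds a threshold. A minimal sketch under assumptions of my own (nearest-integer exponent rounding, a sign/exponent pair encoding, and these function names):

```python
import math

def log_quantize(x):
    """Round |x| to the nearest power of two; keep the sign. Zero is skipped."""
    if x == 0:
        return None
    return (1 if x > 0 else -1, round(math.log2(abs(x))))

def dequantize(q):
    if q is None:
        return 0.0
    sign, exponent = q
    return sign * (2.0 ** exponent)

def two_level_log_quantize(x, threshold):
    # First quantization value: log-quantize the parameter itself.
    q1 = log_quantize(x)
    # Compare the dequantization error against the threshold.
    err = x - dequantize(q1)
    # Second quantization value: log-quantize the error only if it is too large.
    q2 = log_quantize(err) if abs(err) > threshold else None
    # The parameter is represented by the grouped pair (q1, q2).
    return q1, q2
```

For example, 0.3 first quantizes to 2**-2 = 0.25 (error 0.05); the residual then quantizes to 2**-4 = 0.0625, cutting the reconstruction error to 0.0125.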
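Publication 20220284262 selects, from a candidate set of quantization points, a bitwidth-sized subset that minimizes a quantization loss over the weights. A brute-force sketch of that selection, where exhaustive search over subsets and mean-squared error as the loss are both illustrative assumptions (the patent does not specify either):

```python
import itertools
import numpy as np

def quantize_to_points(weights, points):
    """Snap each weight to its nearest quantization point."""
    pts = np.asarray(points)
    idx = np.abs(weights[:, None] - pts[None, :]).argmin(axis=1)
    return pts[idx]

def best_subset(weights, candidate_points, bitwidth):
    """Extract the subset of quantization points with the lowest loss."""
    k = 2 ** bitwidth  # a bitwidth of b bits can index 2**b points
    best, best_loss = None, float("inf")
    for subset in itertools.combinations(candidate_points, k):
        q = quantize_to_points(weights, subset)
        loss = float(((weights - q) ** 2).mean())  # assumed loss: MSE
        if loss < best_loss:
            best, best_loss = subset, loss
    return best, best_loss
```

The exhaustive search is only practical for small candidate sets and bitwidths; it is meant to make the subset-selection objective concrete, not to be efficient.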