Patents by Inventor Hyeonuk SIM

Hyeonuk SIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240411516
    Abstract: A precision allocation method and device based on a neural network processor are provided. The precision allocation method includes allocating a weight of a neural network to a multiplier column of a neural network processor, determining a lower tolerance for the multiplier column, selecting a first data type for the multiplier column from a plurality of data types based on the lower tolerance, wherein each of the plurality of data types corresponds to a different precision level, and performing, by the neural network processor, a multiplication operation based on the weight and the first data type.
    Type: Application
    Filed: June 5, 2024
    Publication date: December 12, 2024
    Inventors: Penghui WEI, Gang SUN, Jiao WU, Hyeonuk SIM
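    The allocation flow described in the abstract can be sketched in a few lines: for each multiplier column's weights, pick the lowest-precision option whose quantization error stays within the column's tolerance. This is a minimal illustration only; the symmetric uniform quantization, the RMSE error metric, and the bit-width options (4, 8, 16) are assumptions, not taken from the patent:

    ```python
    import numpy as np

    def quant_error(weights, bits):
        """RMSE of symmetric uniform quantization at a given bit width."""
        scale = np.max(np.abs(weights)) / (2 ** (bits - 1) - 1)
        if scale == 0:
            return 0.0
        q = np.round(weights / scale).astype(int)
        return float(np.sqrt(np.mean((q * scale - weights) ** 2)))

    def select_data_type(weights, tolerance, bit_options=(4, 8, 16)):
        """Pick the lowest-precision option whose error stays within tolerance."""
        for bits in sorted(bit_options):
            if quant_error(weights, bits) <= tolerance:
                return bits
        return max(bit_options)  # fall back to the highest precision

    col = np.array([0.11, -0.42, 0.30, 0.05])  # one multiplier column's weights
    bits = select_data_type(col, tolerance=0.02)
    ```

    Here a loose tolerance lets the column drop to 4 bits; a tighter tolerance would force a higher-precision data type for that column.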
  • Publication number: 20240184533
    Abstract: A computing apparatus includes processing circuitry configured to detect a weight depth field, related to a range of a weight value of a plurality of weight values, within the weight value, and detect an activation depth field, related to a range of an activation value of a plurality of activation values, within the activation value; identify a first operand in the weight value, and identify a second operand in the activation value; and generate an output value having a resultant depth field determined based on the weight depth field and the activation depth field, by performing an operation based on the identified first and second operands.
    Type: Application
    Filed: May 31, 2023
    Publication date: June 6, 2024
    Applicant: SAMSUNG ELECTRONICS CO., LTD.
    Inventor: Hyeonuk SIM
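    One way to picture the depth-field scheme is as a small floating-point-like format: a high "depth" field encodes the value's range and a low field holds the operand, with the multiply combining depths like exponents and operands like mantissas. The bit layout below is purely an assumption for illustration; the patent does not specify this encoding:

    ```python
    def split_fields(value, operand_bits=8):
        """Split an encoded value into a depth (range) field and an operand.
        Hypothetical layout: low bits hold the operand, high bits the depth."""
        mask = (1 << operand_bits) - 1
        return value >> operand_bits, value & mask

    def multiply(weight, activation, operand_bits=8):
        """Multiply operands and combine depth fields into a resultant depth."""
        w_depth, w_op = split_fields(weight, operand_bits)
        a_depth, a_op = split_fields(activation, operand_bits)
        return (w_depth + a_depth, w_op * a_op)
    ```

    For example, a weight with depth 2 and operand 3 times an activation with depth 1 and operand 5 yields a resultant depth of 3 with operand 15.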
  • Publication number: 20240169190
    Abstract: An electronic device includes: a shifter configured to perform a shift operation based on a codebook supporting a plurality of quantization levels preset for data bits of a data set; and a decoder configured to control the shifter by setting quantization scales of the data bits differently for preset groups, wherein the shifter is configured to quantize and output the data bits under the control of the decoder.
    Type: Application
    Filed: May 9, 2023
    Publication date: May 23, 2024
    Applicants: SAMSUNG ELECTRONICS CO., LTD., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeonuk SIM, Sangyun OH, Jongeun LEE
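    The shifter/decoder pair can be read as power-of-two quantization restricted to a codebook of allowed shift amounts, with the decoder applying a different scale (an extra shift) per group. A hedged sketch, in which the codebook contents and the `group_scale` mechanism are assumptions:

    ```python
    import math

    def shift_quantize(x, codebook, group_scale=0):
        """Map |x| to the nearest power of two whose exponent is in the
        codebook, then apply a per-group scale as an additional shift."""
        if x == 0:
            return 0.0
        sign = 1.0 if x > 0 else -1.0
        target = math.log2(abs(x))
        best = min(codebook, key=lambda s: abs(s - target))  # nearest level
        return sign * 2.0 ** (best + group_scale)
    ```

    With a codebook of exponents [-3, -2, -1, 0], an input of 0.3 snaps to 0.25 (shift of -2), and a group scale of +1 doubles the result to 0.5, so only shifts are needed in hardware.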
  • Publication number: 20240143274
    Abstract: A neural network operation apparatus and method are disclosed. A neural network operation apparatus includes a receiver that receives data for a neural network operation, and a processor that performs a scaling operation by multiplying the data by a constant, performs a rounding operation by truncating bits forming a result of the scaling operation, performs a scaling back operation based on a result of the rounding operation, and generates a neural network operation result by accumulating results of the scaling back operation.
    Type: Application
    Filed: May 3, 2023
    Publication date: May 2, 2024
    Inventors: Hyeonuk SIM, Jongeun LEE, Azat AZAMAT
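    The scale, truncate-round, scale-back, accumulate pipeline in this abstract can be sketched directly. The scaling constant and the number of truncated bits below are illustrative choices, not values from the patent:

    ```python
    def nn_accumulate(data, constant=256, drop_bits=4):
        """Sketch of the scale -> truncate-round -> scale-back -> accumulate
        flow: scale each input by a constant, round by truncating low bits,
        scale back, and accumulate the results."""
        total = 0.0
        for x in data:
            scaled = int(x * constant)                    # scaling operation
            rounded = (scaled >> drop_bits) << drop_bits  # rounding by bit truncation
            total += rounded / constant                   # scaling back, then accumulate
        return total
    ```

    Values whose scaled form survives the truncation pass through (1.0 stays 1.0 here), while values below the truncation granularity round to zero.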
  • Publication number: 20240046082
    Abstract: A neural network device includes an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Application
    Filed: October 18, 2023
    Publication date: February 8, 2024
    Applicants: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
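    The single-port arrangement can be modeled in software: one buffer object allows one access at a time, and the controller writes each layer's output back through that same port so the input and output feature maps stay resident together. All class names, addresses, and sizes below are illustrative:

    ```python
    class SinglePortBuffer:
        """Toy model of an on-chip buffer with one access port: exactly one
        read or write per call, holding both input and output feature maps."""
        def __init__(self, size):
            self.mem = [0] * size
            self.accesses = 0

        def read(self, addr):
            self.accesses += 1
            return self.mem[addr]

        def write(self, addr, value):
            self.accesses += 1
            self.mem[addr] = value

    def run_layer(buf, in_base, out_base, length, op):
        """Read the input feature map and write results through the same port,
        keeping both maps resident in the one buffer."""
        for i in range(length):
            x = buf.read(in_base + i)
            buf.write(out_base + i, op(x))
        return out_base  # the output region becomes the next layer's input region

    buf = SinglePortBuffer(16)
    for i in range(4):
        buf.write(i, i + 1)  # load the first layer's input feature map
    next_in = run_layer(buf, in_base=0, out_base=8, length=4, op=lambda v: v * 2)
    ```

    After the layer runs, both the original input map and the new output map sit in the same buffer, and the controller can hand the output region to the next layer without a second memory.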
  • Patent number: 11829862
    Abstract: A neural network device includes: an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Grant
    Filed: June 5, 2020
    Date of Patent: November 28, 2023
    Assignees: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok Yu, Hyeonuk Sim, Jongeun Lee
  • Publication number: 20230085442
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Application
    Filed: November 15, 2022
    Publication date: March 16, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
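    The two-level log quantization described here can be sketched as: log-quantize the parameter, measure the residual error against a threshold, and log-quantize the residual too when it exceeds the threshold, grouping the two values. Representing each quantized value as a `(sign, exponent)` pair is an assumption for illustration:

    ```python
    import math

    def log_quantize(p):
        """Round |p| to the nearest power of two; return (sign, exponent)."""
        if p == 0:
            return (0, 0)
        sign = 1 if p > 0 else -1
        return (sign, round(math.log2(abs(p))))

    def dequantize(sq):
        sign, e = sq
        return 0.0 if sign == 0 else sign * 2.0 ** e

    def two_level_log_quantize(p, threshold):
        """Log-quantize the parameter; if the residual error exceeds the
        threshold, log-quantize the error too and group the two values."""
        q1 = log_quantize(p)
        err = p - dequantize(q1)
        if abs(err) > threshold:
            return (q1, log_quantize(err))
        return (q1,)
    ```

    A parameter that is already a power of two (0.25) needs only the first value, while 0.3 picks up a second log-quantized term for the 0.05 residual, tightening the approximation.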
  • Patent number: 11531893
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Grant
    Filed: June 2, 2020
    Date of Patent: December 20, 2022
    Assignees: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok Yu, Hyeonuk Sim, Jongeun Lee
  • Publication number: 20220284262
    Abstract: A neural network operation apparatus and method implementing quantization are disclosed. The neural network operation method may include receiving a weight of a neural network, a candidate set of quantization points, and a bitwidth for representing the weight, extracting a subset of quantization points from the candidate set of quantization points based on the bitwidth, calculating a quantization loss based on the weight of the neural network and the subset of quantization points, and generating a target subset of quantization points based on the quantization loss.
    Type: Application
    Filed: July 6, 2021
    Publication date: September 8, 2022
    Applicants: SAMSUNG ELECTRONICS CO., LTD., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Sehwan LEE, Hyeonuk SIM, Jongeun LEE
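    The subset-selection step can be illustrated by brute force: enumerate subsets of `2**bitwidth` candidate quantization points and keep the one with the lowest quantization loss. Mean squared error is used below as one plausible loss; the patent does not fix the loss function or the search strategy:

    ```python
    from itertools import combinations

    def quantize_to(points, w):
        """Snap a weight to its nearest quantization point."""
        return min(points, key=lambda p: abs(p - w))

    def quant_loss(weights, points):
        """Mean squared error when each weight snaps to its nearest point."""
        return sum((quantize_to(points, w) - w) ** 2 for w in weights) / len(weights)

    def best_subset(weights, candidates, bitwidth):
        """Exhaustively pick the subset of 2**bitwidth quantization points
        minimizing the quantization loss (brute force, for illustration)."""
        size = 2 ** bitwidth
        return min(combinations(candidates, size),
                   key=lambda s: quant_loss(weights, s))

    pts = best_subset([0.1, 0.5, -0.5], [-0.5, -0.25, 0, 0.125, 0.5], bitwidth=1)
    ```

    A practical implementation would prune the search rather than enumerate every subset, but the objective, choosing the target subset by quantization loss, is the same.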
  • Publication number: 20210174177
    Abstract: A neural network device includes: an on-chip buffer memory that stores an input feature map of a first layer of a neural network, a computational circuit that receives the input feature map of the first layer through a single port of the on-chip buffer memory and performs a neural network operation on the input feature map of the first layer to output an output feature map of the first layer corresponding to the input feature map of the first layer, and a controller that transmits the output feature map of the first layer to the on-chip buffer memory through the single port to store the output feature map of the first layer and the input feature map of the first layer together in the on-chip buffer memory.
    Type: Application
    Filed: June 5, 2020
    Publication date: June 10, 2021
    Applicants: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE
  • Publication number: 20200380360
    Abstract: A processor-implemented method includes determining a first quantization value by performing log quantization on a parameter from one of input activation values and weight values in a layer of a neural network, comparing a threshold value with an error between a first dequantization value obtained by dequantization of the first quantization value and the parameter, determining a second quantization value by performing log quantization on the error in response to the error being greater than the threshold value as a result of the comparing, and quantizing the parameter to a value in which the first quantization value and the second quantization value are grouped.
    Type: Application
    Filed: June 2, 2020
    Publication date: December 3, 2020
    Applicants: Samsung Electronics Co., Ltd., UNIST (ULSAN NATIONAL INSTITUTE OF SCIENCE AND TECHNOLOGY)
    Inventors: Hyeongseok YU, Hyeonuk SIM, Jongeun LEE