Patents by Inventor Ju Yeob Kim
Ju Yeob Kim has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240062809
Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
Type: Application
Filed: November 1, 2023
Publication date: February 22, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Ho HAN, Byung-Jo KIM, Ju-Yeob KIM, Hye-Ji KIM, Joo-Hyun LEE, Seong-Min KIM
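The partition-then-accumulate scheme in this abstract can be illustrated with a toy simulation. This is a minimal sketch under assumed semantics, not the patented design: the "basic unit operation" is taken to be a partial dot product, and the function names (`nvm_core_op`, `ai_processor`) are invented for illustration.

```python
def nvm_core_op(weights, inputs):
    # One NVM AI core: a basic unit operation (here, a partial dot
    # product) on the slice of data held in that core's non-volatile memory.
    return sum(w * x for w, x in zip(weights, inputs))

def ai_processor(weights, inputs, num_cores=4):
    # Split the work across cores, buffer the partial results (the SRAM
    # in the abstract), and accumulate them in a final step (the AI core).
    chunk = (len(weights) + num_cores - 1) // num_cores
    sram = [nvm_core_op(weights[i:i + chunk], inputs[i:i + chunk])
            for i in range(0, len(weights), chunk)]
    return sum(sram)  # accumulation over the cores' partial results
```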
-
Patent number: 11893392
Abstract: A method for processing floating point operations in a multi-processor system including a plurality of single processor cores is provided. In this method, upon receiving a group setting for performing an operation, the plurality of single processor cores are grouped into at least one group according to the group setting, and a single processor core set as a master in the group loads an instruction for performing the operation from an external memory and performs parallel operations by utilizing the floating point units (FPUs) of all single processor cores in the group according to the instruction.
Type: Grant
Filed: November 30, 2021
Date of Patent: February 6, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ju-Yeob Kim, Jin Ho Han
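The master/group arrangement can be sketched as a simple simulation. The `Core` class and the round-robin work distribution are illustrative assumptions; the patent's actual instruction-loading and scheduling are not specified here.

```python
class Core:
    """One single processor core with its own floating point unit (FPU)."""
    def fpu_mul_add(self, acc, a, b):
        return acc + a * b

def grouped_dot(cores, group_size, xs, ys):
    # Group the cores according to the group setting; the first core of
    # each group acts as master and fans work out to every FPU in the group.
    groups = [cores[i:i + group_size] for i in range(0, len(cores), group_size)]
    group = groups[0]                  # one group is enough for this sketch
    partials = [0.0] * len(group)
    for i, (a, b) in enumerate(zip(xs, ys)):
        c = i % len(group)             # master assigns element i round-robin
        partials[c] = group[c].fpu_mul_add(partials[c], a, b)
    return sum(partials)               # master combines the partial results
```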
-
Patent number: 11842764
Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
Type: Grant
Filed: December 7, 2021
Date of Patent: December 12, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Ho Han, Byung-Jo Kim, Ju-Yeob Kim, Hye-Ji Kim, Joo-Hyun Lee, Seong-Min Kim
-
Publication number: 20230259581
Abstract: Disclosed herein is a method for outer-product-based matrix multiplication for a floating-point data type. The method includes receiving first floating-point data and second floating-point data and performing matrix multiplication on the first floating-point data and the second floating-point data, and the result value of the matrix multiplication is calculated based on the suboperation result values of floating-point units.
Type: Application
Filed: February 14, 2023
Publication date: August 17, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Won JEON, Young-Su KWON, Ju-Yeob KIM, Hyun-Mi KIM, Hye-Ji KIM, Chun-Gi LYUH, Mi-Young LEE, Jae-Hoon CHUNG, Yong-Cheol CHO, Jin-Ho HAN
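The outer-product formulation of matrix multiplication, in which the product is built as a sum of rank-1 updates, can be sketched as below. The floating-point suboperation details the abstract refers to are elided; this shows only the outer-product accumulation order.

```python
def outer_product_matmul(A, B):
    # C = A @ B computed as a sum of rank-1 (outer-product) updates:
    # one update per shared index t, instead of one inner product per C[i][j].
    n, k, m = len(A), len(A[0]), len(B[0])
    C = [[0.0] * m for _ in range(n)]
    for t in range(k):
        col = [A[i][t] for i in range(n)]  # t-th column of A
        row = B[t]                         # t-th row of B
        for i in range(n):
            for j in range(m):
                C[i][j] += col[i] * row[j]  # accumulate suboperation results
    return C
```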
-
Patent number: 11494630
Abstract: The neuromorphic arithmetic device comprises an input monitoring circuit that outputs a monitoring result by monitoring that first bits of at least one first digit of a plurality of feature data and a plurality of weight data are all zeros, a partial sum data generator that skips an arithmetic operation that generates a first partial sum data corresponding to the first bits of a plurality of partial sum data in response to the monitoring result while performing the arithmetic operation of generating the plurality of partial sum data, based on the plurality of feature data and the plurality of weight data, and a shift adder that generates the first partial sum data with a zero value and result data, based on second partial sum data except for the first partial sum data among the plurality of partial sum data and the first partial sum data generated with the zero value.
Type: Grant
Filed: January 14, 2020
Date of Patent: November 8, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young-deuk Jeon, Byung Jo Kim, Ju-Yeob Kim, Jin Kyu Kim, Ki Hyuk Park, Mi Young Lee, Joo Hyun Lee, Min-Hyung Cho
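The zero-skipping idea reads as a bit-serial multiply-accumulate in which a partial sum is generated per bit position, and positions where every weight bit is zero are skipped. A toy sketch under that assumed reading (monitoring only the weight bits, which is a simplification):

```python
def zero_skip_mac(features, weights, bits=8):
    # Bit-serial MAC over non-negative integers: one partial sum per weight
    # bit position d; positions whose bits are all zero are skipped and
    # contribute a zero-valued partial sum instead.
    acc = 0
    skipped = 0
    for d in range(bits):
        col = [(w >> d) & 1 for w in weights]
        if not any(col):          # input monitoring: all bits at digit d are 0
            skipped += 1          # partial sum generated "with a zero value"
            continue
        partial = sum(f for f, b in zip(features, col) if b)
        acc += partial << d       # shift adder combines the partial sums
    return acc, skipped
```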
-
Patent number: 11488003
Abstract: An artificial neural network apparatus and an operating method including a plurality of layer processors for performing operations on input data are disclosed. The artificial neural network apparatus may include: a flag layer processor for outputting a flag according to a comparison result between a pooling output value of a current frame and a pooling output value of a previous frame; and a controller for stopping operation of a layer processor which performs operations after the flag layer processor among the plurality of layer processors when the flag is outputted from the flag layer processor, wherein the flag layer processor is a layer processor that performs a pooling operation first among the plurality of layer processors.
Type: Grant
Filed: May 10, 2019
Date of Patent: November 1, 2022
Assignee: Electronics and Telecommunications Research Institute
Inventors: Ju-Yeob Kim, Byung Jo Kim, Seong Min Kim, Jin Kyu Kim, Mi Young Lee, Joo Hyun Lee
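The early-stopping behavior can be sketched as follows. The toy pooling operation (`max`) and the reuse of the previous result when the flag is raised are illustrative assumptions about what "stopping" the downstream layers buys.

```python
class FlagPipeline:
    """The first pooling layer raises a flag when its output equals the
    previous frame's pooling output; downstream layers are then skipped."""
    def __init__(self, downstream):
        self.downstream = downstream   # the expensive layers after pooling
        self.prev_pool = None
        self.prev_result = None

    def pool(self, frame):
        return max(frame)              # toy stand-in for the pooling layer

    def run(self, frame):
        p = self.pool(frame)
        flag = (p == self.prev_pool)   # comparison with the previous frame
        self.prev_pool = p
        if flag and self.prev_result is not None:
            return self.prev_result, True   # downstream layers stopped
        result = self.downstream(p)
        self.prev_result = result
        return result, False
```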
-
Patent number: 11455539
Abstract: An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, and thereby reducing an error that may occur when quantizing the weight of the batch normalization layer.
Type: Grant
Filed: August 15, 2019
Date of Patent: September 27, 2022
Assignee: Electronics and Telecommunications Research Institute
Inventors: Mi Young Lee, Byung Jo Kim, Seong Min Kim, Ju-Yeob Kim, Jin Kyu Kim, Joo Hyun Lee
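The two-pass structure is easy to show in code. Using the maximum absolute value as the "distribution information" is an assumption made for this sketch; the patent's actual statistics are not specified here.

```python
def quantize(weights, stats, levels=127):
    # Uniform quantization whose step size comes from the distribution
    # information (here, just the max absolute value) of `stats`.
    scale = max(abs(w) for w in stats) or 1.0
    step = scale / levels
    return [round(w / step) * step for w in weights]

def two_stage_quantize(first_weights, levels=127):
    # First pass uses the original weights' distribution; the second pass
    # re-measures the distribution of the already-quantized weights.
    second = quantize(first_weights, first_weights, levels)
    final = quantize(second, second, levels)
    return final
```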
-
Patent number: 11449720
Abstract: Provided is an image recognition device. The image recognition device includes a frame data change detector that sequentially receives a plurality of frame data and detects a difference between two consecutive frame data, an ensemble section controller that sets an ensemble section in the plurality of frame data, based on the detected difference, an image recognizer that sequentially identifies classes respectively corresponding to a plurality of section frame data by applying different neural network classifiers to the plurality of section frame data in the ensemble section, and a recognition result classifier that sequentially identifies ensemble classes respectively corresponding to the plurality of section frame data by combining the classes in the ensemble section.
Type: Grant
Filed: May 8, 2020
Date of Patent: September 20, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ju-Yeob Kim, Byung Jo Kim, Seong Min Kim, Jin Kyu Kim, Ki Hyuk Park, Mi Young Lee, Joo Hyun Lee, Young-deuk Jeon, Min-Hyung Cho
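A sketch of the section-and-vote idea: frames whose pairwise difference stays below a threshold form an ensemble section, a different classifier handles each frame, and the reported class is the majority vote over the section so far. The threshold, the sum-of-absolute-differences change metric, and the majority-vote combiner are all assumptions for illustration.

```python
def ensemble_recognize(frames, classifiers, threshold=10):
    # frames: list of equal-length pixel lists; classifiers: callables
    # mapping a frame to a class label.
    results, votes, prev = [], [], None
    for frame in frames:
        diff = 0 if prev is None else sum(abs(a - b) for a, b in zip(frame, prev))
        if diff > threshold:
            votes = []          # scene changed: start a new ensemble section
        clf = classifiers[len(votes) % len(classifiers)]  # rotate classifiers
        votes.append(clf(frame))
        # ensemble class: majority vote over the section so far
        results.append(max(set(votes), key=votes.count))
        prev = frame
    return results
```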
-
Publication number: 20220180919
Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
Type: Application
Filed: December 7, 2021
Publication date: June 9, 2022
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Ho HAN, Byung-Jo KIM, Ju-Yeob KIM, Hye-Ji KIM, Joo-Hyun LEE, Seong-Min KIM
-
Publication number: 20220171631
Abstract: A method for processing floating point operations in a multi-processor system including a plurality of single processor cores is provided. In this method, upon receiving a group setting for performing an operation, the plurality of single processor cores are grouped into at least one group according to the group setting, and a single processor core set as a master in the group loads an instruction for performing the operation from an external memory and performs parallel operations by utilizing the floating point units (FPUs) of all single processor cores in the group according to the instruction.
Type: Application
Filed: November 30, 2021
Publication date: June 2, 2022
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ju-Yeob KIM, Jin Ho HAN
-
Patent number: 11204876
Abstract: A method for controlling a memory from which data is transferred to a neural network processor and an apparatus thereof are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data in the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing the next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred to the neural network processor from the prefetch buffer.
Type: Grant
Filed: November 19, 2020
Date of Patent: December 21, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung Jo Kim, Joo Hyun Lee, Seong Min Kim, Ju-Yeob Kim, Jin Kyu Kim, Mi Young Lee
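The fetch-current/prefetch-next loop can be sketched as below. The dictionary-based "memory" and the name `Prefetcher` are invented for illustration; the reference prediction table is modeled as a simple current-blob-to-next-blob map built (in the patent) from history information.

```python
class Prefetcher:
    """Descriptor-driven prefetch: a reference prediction table names the
    next blob, which is read into the prefetch buffer while the current
    blob is being consumed by the neural network processor."""
    def __init__(self, memory, prediction_table):
        self.memory = memory           # blob name -> data
        self.table = prediction_table  # blob name -> predicted next blob
        self.buffer = {}               # the prefetch buffer

    def fetch(self, blob):
        data = self.buffer.pop(blob, None)
        hit = data is not None
        if data is None:
            data = self.memory[blob]   # miss: read directly from memory
        nxt = self.table.get(blob)
        if nxt is not None and nxt not in self.buffer:
            self.buffer[nxt] = self.memory[nxt]  # prefetch the predicted next blob
        return data, hit
```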
-
Publication number: 20210357753
Abstract: A method and apparatus for multi-level stepwise quantization for a neural network are provided. The apparatus sets a reference level by selecting a value from among values of parameters of the neural network in a direction from a high value equal to or greater than a predetermined value to a lower value, and performs learning based on the reference level. The setting of a reference level and the performing of learning are iteratively performed until the result of the reference level learning satisfies a predetermined value and there is no variable parameter that is updated during learning among the parameters.
Type: Application
Filed: May 11, 2021
Publication date: November 18, 2021
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Kyu KIM, Byung Jo KIM, Seong Min KIM, Ju-Yeob KIM, Ki Hyuk PARK, Mi Young LEE, Joo Hyun LEE, Young-deuk JEON, Min-Hyung CHO
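One loose reading of the stepwise scheme: parameters are fixed to reference levels from the largest magnitude downward, with the still-free parameters re-learned between steps. The re-learning step is elided here, and the snap-by-magnitude rule and zeroing of leftovers are assumptions made for this sketch only.

```python
def stepwise_quantize(params, levels):
    # Fix parameters to reference levels from high magnitude to low;
    # a real flow would retrain the remaining free parameters after
    # each level is fixed (omitted in this sketch).
    fixed = {}
    free = dict(enumerate(params))
    for level in sorted(levels, key=abs, reverse=True):
        for i, p in list(free.items()):
            if abs(p) >= abs(level):           # snap high values to this level
                fixed[i] = level if p >= 0 else -level
                del free[i]
    for i in free:
        fixed[i] = 0.0                         # leftovers below all levels
    return [fixed[i] for i in range(len(params))]
```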
-
Publication number: 20210303982
Abstract: Disclosed is a neural network computing device. The neural network computing device includes a neural network accelerator including an analog MAC, a controller controlling the neural network accelerator in one of a first mode and a second mode, and a calibrator that calibrates a gain and a DC offset of the analog MAC. The calibrator includes a memory storing weight data, calibration weight data, and calibration input data, a gain and offset calculator reading the calibration weight data and the calibration input data from the memory, inputting the calibration weight data and the calibration input data to the analog MAC, receiving calibration output data from the analog MAC, and calculating the gain and the DC offset of the analog MAC, and an on-device quantizer reading the weight data, receiving the gain and the DC offset, and generating quantized weight data based on the gain and the DC offset.
Type: Application
Filed: March 18, 2021
Publication date: September 30, 2021
Applicant: Electronics and Telecommunications Research Institute
Inventors: Mi Young LEE, Young-deuk JEON, Byung Jo KIM, Ju-Yeob KIM, Jin Kyu KIM, Ki Hyuk PARK, Joo Hyun LEE, Min-Hyung CHO
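The gain/offset recovery step can be illustrated with an idealized linear error model, `output = gain * (w * x) + offset`. The two-point calibration inputs (products 0 and 1) are an assumption of this sketch, not the patent's stated calibration data.

```python
def calibrate(analog_mac):
    # Feed two known calibration products through the analog MAC:
    # 0*1 exposes the pure DC offset, then 1*1 exposes gain + offset.
    offset = analog_mac(0.0, 1.0)
    gain = analog_mac(1.0, 1.0) - offset
    return gain, offset

def compensated_mac(analog_mac, w, x):
    # Undo the measured gain and DC offset to recover the ideal product.
    gain, offset = calibrate(analog_mac)
    return (analog_mac(w, x) - offset) / gain
```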
-
Publication number: 20210157734
Abstract: A method for controlling a memory from which data is transferred to a neural network processor and an apparatus thereof are provided, the method including: generating prefetch information of data by using a blob descriptor and a reference prediction table after history information is input; reading the data in the memory based on the prefetch information and temporarily archiving the read data in a prefetch buffer; and accessing the next data in the memory based on the prefetch information and temporarily archiving the next data in the prefetch buffer after the data is transferred to the neural network processor from the prefetch buffer.
Type: Application
Filed: November 19, 2020
Publication date: May 27, 2021
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Byung Jo KIM, Joo Hyun LEE, Seong Min KIM, Ju-Yeob KIM, Jin Kyu KIM, Mi Young LEE
-
Patent number: 11003985
Abstract: Provided is a convolutional neural network system including a data selector configured to output an input value corresponding to a position of a sparse weight from among input values of input data on a basis of a sparse index indicating the position of a nonzero value in a sparse weight kernel, and a multiply-accumulate (MAC) computator configured to perform a convolution computation on the input value output from the data selector by using the sparse weight kernel.
Type: Grant
Filed: November 7, 2017
Date of Patent: May 11, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Kyu Kim, Byung Jo Kim, Seong Min Kim, Ju-Yeob Kim, Mi Young Lee, Joo Hyun Lee
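A 1-D sketch of sparse-index convolution: the kernel is stored as (nonzero values, their positions), so the data selector touches only the input values at those positions and the MAC unit never multiplies by zero. The 1-D setting and function name are simplifications for illustration.

```python
def sparse_conv1d(inputs, sparse_vals, sparse_idx, kernel_size):
    # sparse_vals: the nonzero kernel weights; sparse_idx: their positions
    # within the dense kernel (the "sparse index" of the abstract).
    out = []
    for start in range(len(inputs) - kernel_size + 1):
        acc = 0.0
        for v, k in zip(sparse_vals, sparse_idx):  # zero weights skipped entirely
            acc += v * inputs[start + k]           # MAC on selected inputs only
        out.append(acc)
    return out
```

For a dense kernel [2, 0, 3], the sparse representation is values [2, 3] at positions [0, 2]; the zero tap costs nothing.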
-
Publication number: 20200356804
Abstract: Provided is an image recognition device. The image recognition device includes a frame data change detector that sequentially receives a plurality of frame data and detects a difference between two consecutive frame data, an ensemble section controller that sets an ensemble section in the plurality of frame data, based on the detected difference, an image recognizer that sequentially identifies classes respectively corresponding to a plurality of section frame data by applying different neural network classifiers to the plurality of section frame data in the ensemble section, and a recognition result classifier that sequentially identifies ensemble classes respectively corresponding to the plurality of section frame data by combining the classes in the ensemble section.
Type: Application
Filed: May 8, 2020
Publication date: November 12, 2020
Inventors: Ju-Yeob KIM, Byung Jo KIM, Seong Min KIM, Jin Kyu KIM, Ki Hyuk PARK, Mi Young LEE, Joo Hyun LEE, Young-deuk JEON, Min-Hyung CHO
-
Publication number: 20200226456
Abstract: The neuromorphic arithmetic device comprises an input monitoring circuit that outputs a monitoring result by monitoring that first bits of at least one first digit of a plurality of feature data and a plurality of weight data are all zeros, a partial sum data generator that skips an arithmetic operation that generates a first partial sum data corresponding to the first bits of a plurality of partial sum data in response to the monitoring result while performing the arithmetic operation of generating the plurality of partial sum data, based on the plurality of feature data and the plurality of weight data, and a shift adder that generates the first partial sum data with a zero value and result data, based on second partial sum data except for the first partial sum data among the plurality of partial sum data and the first partial sum data generated with the zero value.
Type: Application
Filed: January 14, 2020
Publication date: July 16, 2020
Inventors: Young-deuk JEON, Byung Jo KIM, Ju-Yeob KIM, Jin Kyu KIM, Ki Hyuk PARK, Mi Young LEE, Joo Hyun LEE, Min-Hyung CHO
-
Publication number: 20200151568
Abstract: An embodiment of the present invention provides a quantization method for weights of a plurality of batch normalization layers, including: receiving a plurality of previously learned first weights of the plurality of batch normalization layers; obtaining first distribution information of the plurality of first weights; performing a first quantization on the plurality of first weights using the first distribution information to obtain a plurality of second weights; obtaining second distribution information of the plurality of second weights; and performing a second quantization on the plurality of second weights using the second distribution information to obtain a plurality of final weights, and thereby reducing an error that may occur when quantizing the weight of the batch normalization layer.
Type: Application
Filed: August 15, 2019
Publication date: May 14, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Mi Young LEE, Byung Jo KIM, Seong Min KIM, Ju-Yeob KIM, Jin Kyu KIM, Joo Hyun LEE
-
Publication number: 20200143228
Abstract: An embodiment of the present invention provides a neural network operator that performs a plurality of processes for each of a plurality of layers of a neural network, including: a memory that includes a data-storing space storing a plurality of data for performing the plurality of processes and a synapse code-storing space storing a plurality of descriptors with respect to the plurality of processes; a memory-transmitting processor that obtains the plurality of descriptors and transmits the plurality of data to the neural network operator based on the plurality of descriptors; an embedded instruction processor that obtains the plurality of descriptors from the memory-transmitting processor, transmits a first data set in a first descriptor to the neural network operator based on the first descriptor corresponding to the first process among the plurality of processes, reads a second descriptor corresponding to a second process, which is a next operation of the first process, based on the first descriptor, and …
Type: Application
Filed: August 15, 2019
Publication date: May 7, 2020
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Mi Young LEE, Joo Hyun LEE, Byung Jo KIM, Ju-Yeob KIM, Jin Kyu KIM
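The abstract is cut off in the source, but the linked-descriptor pattern it describes, where each descriptor names its process's data and points at the descriptor of the next process, can be sketched loosely. The dictionary layout and function name are assumptions made for illustration only.

```python
def run_descriptor_chain(memory, synapse_codes, first):
    # Walk the descriptor chain: each descriptor names its data blob and
    # the next process, so the transmit processor can stream one process's
    # data while already knowing where the next descriptor lives.
    transmitted = []
    desc = synapse_codes.get(first)
    while desc is not None:
        transmitted.append(memory[desc["data"]])  # send data to the operator
        desc = synapse_codes.get(desc["next"])    # follow link to next process
    return transmitted
```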
-
Publication number: 20190362224
Abstract: An artificial neural network apparatus and an operating method including a plurality of layer processors for performing operations on input data are disclosed. The artificial neural network apparatus may include: a flag layer processor for outputting a flag according to a comparison result between a pooling output value of a current frame and a pooling output value of a previous frame; and a controller for stopping operation of a layer processor which performs operations after the flag layer processor among the plurality of layer processors when the flag is outputted from the flag layer processor, wherein the flag layer processor is a layer processor that performs a pooling operation first among the plurality of layer processors.
Type: Application
Filed: May 10, 2019
Publication date: November 28, 2019
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ju-Yeob KIM, Byung Jo KIM, Seong Min KIM, Jin Kyu KIM, Mi Young LEE, Joo Hyun LEE