Patents by Inventor Yong Cheol Peter CHO
Yong Cheol Peter CHO has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12210952
Abstract: A reorganizable neural network computing device is provided. The computing device includes a data processing array unit including a plurality of operators disposed at locations corresponding to a row and a column. One or more chaining paths which transfer the first input data from the operator of the first row of the data processing array to the operator of the second row are optionally formed. The plurality of first data input processors of the computing device transfer the first input data for a layer of the neural network to the operators along rows of the data processing array unit, and the plurality of second data input processors of the computing device transfer the second input data to the operators along the columns of the data processing array.
Type: Grant
Filed: November 27, 2018
Date of Patent: January 28, 2025
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young-Su Kwon, Chan Kim, Hyun Mi Kim, Jeongmin Yang, Chun-Gi Lyuh, Jaehoon Chung, Yong Cheol Peter Cho
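The reorganizable array described in this abstract can be illustrated with a minimal behavioral model. This is my own sketch, not the patented implementation: operators multiply a row-wise input by a column-wise input, and a hypothetical `chain_rows` flag stands in for the optional chaining paths that forward the first row's input to subsequent rows.

```python
# Hypothetical model of a reorganizable operator array (names are illustrative).
def run_array(row_inputs, col_inputs, chain_rows=False):
    """row_inputs[i] feeds row i; col_inputs[j] feeds column j.
    With chain_rows=True, row 0's first input is chained into every row,
    standing in for the abstract's optional chaining paths."""
    rows, cols = len(row_inputs), len(col_inputs)
    acc = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        a = row_inputs[0] if chain_rows else row_inputs[i]  # chaining path
        for j in range(cols):
            acc[i][j] += a * col_inputs[j]  # each operator multiply-accumulates
    return acc

print(run_array([2, 3], [10, 100]))                   # rows fed independently
print(run_array([2, 3], [10, 100], chain_rows=True))  # row 0 chained onward
```

With chaining enabled, both rows see the same first input, which is the reorganization the abstract describes at the row level.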
-
Patent number: 11507429
Abstract: Provided is a neural network accelerator which performs a calculation of a neural network provided with layers, the neural network accelerator including a kernel memory configured to store kernel data related to a filter, a feature map memory configured to store feature map data which are outputs of the layers, and a Processing Element (PE) array including PEs arranged along first and second directions, wherein each of the PEs performs a calculation using the feature map data transmitted in the first direction from the feature map memory and the kernel data transmitted in the second direction from the kernel memory, and transmits a calculation result to the feature map memory in a third direction opposite to the first direction.
Type: Grant
Filed: July 18, 2018
Date of Patent: November 22, 2022
Assignee: Electronics and Telecommunications Research Institute
Inventors: Chun-Gi Lyuh, Young-Su Kwon, Chan Kim, Hyun Mi Kim, Jeongmin Yang, Jaehoon Chung, Yong Cheol Peter Cho
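Functionally, a PE array with feature data flowing in one direction and kernel data in a perpendicular direction computes dot products at each PE, with results returned toward the feature map memory. The sketch below is an assumption-laden model of that dataflow (function and variable names are mine, not the patent's):

```python
# Illustrative model: feature rows stream along the first direction, kernel
# columns along the second; each PE accumulates a dot product, and the result
# travels back opposite the feature direction (modeled here as the return value).
def pe_array_matmul(features, kernels):
    out = []
    for f_row in features:            # one PE row per feature stream
        out_row = []
        for k_col in zip(*kernels):   # one PE column per kernel stream
            out_row.append(sum(f * k for f, k in zip(f_row, k_col)))
        out.append(out_row)           # results flow back to feature map memory
    return out

print(pe_array_matmul([[1, 2], [3, 4]], [[5, 6], [7, 8]]))
# [[19, 22], [43, 50]]
```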
-
Patent number: 11494623
Abstract: A processing element and an operating method thereof in a neural network are disclosed. The processing element may include a first multiplexer selecting one of a first value stored in a first memory and a second value stored in a second memory, a second multiplexer selecting one of a first data input signal and an output value of the first multiplexer, a third multiplexer selecting one of the output value of the first multiplexer and a second data input signal, a multiplier multiplying an output value of the second multiplexer by an output value of the third multiplexer, a fourth multiplexer for selecting one of the output value of the second multiplexer and an output value of the multiplier, and a third memory storing an output value of the fourth multiplexer.
Type: Grant
Filed: November 30, 2018
Date of Patent: November 8, 2022
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Yong Cheol Peter Cho, Young-Su Kwon
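The four-multiplexer datapath in this abstract maps directly onto a small behavioral model. The sketch below follows the abstract's wiring; the select-signal names and encodings are my own assumptions:

```python
# Behavioral sketch of the described processing element (signal names are
# hypothetical). Each mux picks one of two inputs by a select bit.
def processing_element(mem1, mem2, in1, in2, sel1, sel2, sel3, sel4):
    m1 = mem1 if sel1 == 0 else mem2   # mux1: first memory vs second memory
    m2 = in1 if sel2 == 0 else m1      # mux2: first data input vs mux1 output
    m3 = m1 if sel3 == 0 else in2      # mux3: mux1 output vs second data input
    product = m2 * m3                  # multiplier
    m4 = m2 if sel4 == 0 else product  # mux4: bypass mux2 vs multiplier output
    return m4                          # value stored into the third memory

# e.g. multiply the first data input by a weight held in the first memory:
print(processing_element(mem1=3, mem2=0, in1=5, in2=0,
                         sel1=0, sel2=0, sel3=0, sel4=1))  # 15
```

Setting `sel4=0` instead bypasses the multiplier, which is how such a mux arrangement can pass data through the PE unmodified.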
-
Patent number: 11176395
Abstract: Provided is an image recognition processor. The image recognition processor includes a plurality of nano cores arranged in rows and columns and configured to perform a pattern recognition operation on an input feature using a kernel coefficient in response to each instruction, an instruction memory configured to provide the instruction to each of the plurality of nano cores, a feature memory configured to provide the input feature to each of the plurality of nano cores, a kernel memory configured to provide the kernel coefficients to the plurality of nano cores, and a functional safety processor core configured to receive a result of a pattern recognition operation outputted from the plurality of nano cores to detect the presence of a recognition error, and perform a fault tolerance function on the detected recognition error.
Type: Grant
Filed: November 25, 2019
Date of Patent: November 16, 2021
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Ho Han, Young-Su Kwon, Yong Cheol Peter Cho, Min-Seok Choi
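The abstract does not specify how the functional safety core detects and tolerates recognition errors; one common scheme for this kind of check (an assumption on my part, not the patented method) is majority voting over redundant nano-core results:

```python
# Illustrative fault-tolerance check: vote over redundant recognition results
# and flag a fault when any core disagrees with the majority.
from collections import Counter

def majority_vote(results):
    counts = Counter(results)
    label, votes = counts.most_common(1)[0]   # most frequent label and its count
    fault_detected = votes < len(results)     # any dissent implies a fault
    return label, fault_detected

print(majority_vote(["cat", "cat", "dog"]))  # ('cat', True): fault masked by vote
print(majority_vote(["cat", "cat", "cat"]))  # ('cat', False): all cores agree
```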
-
Publication number: 20200175355
Abstract: A neural network accelerator in which processing elements are configured in a systolic array structure includes a memory to store a plurality of feature data including first and second feature data and a plurality of kernel data including first and second kernel data, a first processing element to perform an operation based on the first feature data and the first kernel data and output the first feature data, a selection circuit to select one of the first feature data and the second feature data, based on a control signal, and output the selected feature data, a second processing element to perform an operation based on the selected feature data and one of the first and the second kernel data, and a controller to generate the control signal, based on a neural network characteristic associated with the plurality of feature data and kernel data.
Type: Application
Filed: November 8, 2019
Publication date: June 4, 2020
Inventors: Jaehoon CHUNG, Young-Su KWON, Chun-Gi LYUH, Chan KIM, Hyun Mi KIM, Jeongmin YANG, Yong Cheol Peter CHO
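The selection circuit in this abstract decides whether the second PE reuses the feature data passed along by the first PE or takes fresh feature data, under a controller-generated signal. A minimal sketch, with all names and the 0/1 signal encoding assumed by me:

```python
# Illustrative model of the selection circuit between two systolic PEs.
def select_feature(first_feature, second_feature, control):
    # control == 0: reuse the feature data forwarded by the first PE;
    # control == 1: take the second (fresh) feature data instead.
    return first_feature if control == 0 else second_feature

def second_pe(first_feature, second_feature, kernel, control):
    feature = select_feature(first_feature, second_feature, control)
    return feature * kernel  # the second PE's operation on the selected data

print(second_pe(first_feature=4, second_feature=9, kernel=2, control=0))  # 8
print(second_pe(first_feature=4, second_feature=9, kernel=2, control=1))  # 18
```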
-
Publication number: 20200175293
Abstract: Provided is an image recognition processor. The image recognition processor includes a plurality of nano cores arranged in rows and columns and configured to perform a pattern recognition operation on an input feature using a kernel coefficient in response to each instruction, an instruction memory configured to provide the instruction to each of the plurality of nano cores, a feature memory configured to provide the input feature to each of the plurality of nano cores, a kernel memory configured to provide the kernel coefficients to the plurality of nano cores, and a functional safety processor core configured to receive a result of a pattern recognition operation outputted from the plurality of nano cores to detect the presence of a recognition error, and perform a fault tolerance function on the detected recognition error.
Type: Application
Filed: November 25, 2019
Publication date: June 4, 2020
Inventors: Jin Ho HAN, Young-Su KWON, Yong Cheol Peter CHO, Min-Seok CHOI
-
Publication number: 20190244084
Abstract: The processing element may include a first multiplexer selecting one of a first value stored in a first memory and a second value stored in a second memory, a second multiplexer selecting one of a first data input signal and an output value of the first multiplexer, a third multiplexer selecting one of the output value of the first multiplexer and a second data input signal, a multiplier multiplying an output value of the second multiplexer by an output value of the third multiplexer, a fourth multiplexer for selecting one of the output value of the second multiplexer and an output value of the multiplier, and a third memory storing an output value of the fourth multiplexer.
Type: Application
Filed: November 30, 2018
Publication date: August 8, 2019
Inventors: Yong Cheol Peter CHO, Young-Su KWON
-
Publication number: 20190164037
Abstract: In the present invention, by providing an apparatus for processing a convolutional neural network (CNN), including a weight memory configured to store a first weight group of a first layer, a feature map memory configured to store an input feature map where the first weight group is to be applied, an address generator configured to determine a second position spaced from a first position of a first input pixel of the input feature map based on a size of the first weight group, and determine a plurality of adjacent pixels adjacent to the second position; and a processor configured to apply the first weight group to the plurality of adjacent pixels to obtain a first output pixel corresponding to the first position, a memory space may be efficiently used by saving the memory space.
Type: Application
Filed: November 29, 2018
Publication date: May 30, 2019
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Chan KIM, Young-Su KWON, Hyun Mi KIM, Chun-Gi LYUH, Yong Cheol Peter CHO, Min-Seok CHOI, Jeongmin YANG, Jaehoon CHUNG
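The core idea in this abstract, an address generator that uses the weight-group (filter) size to locate the adjacent input pixels contributing to one output pixel, can be sketched as a plain convolution-window reduction. The addressing scheme and all names below are my own illustration, not the application's exact method:

```python
# Illustrative sketch: from a first output position, use the weight-group size
# to address the adjacent input pixels and reduce them with the weights.
def conv_output_pixel(feature_map, weights, first_pos):
    k = len(weights)          # size of the first weight group (k x k filter)
    row0, col0 = first_pos
    total = 0
    for dr in range(k):       # walk the adjacent pixels the address
        for dc in range(k):   # generator would produce for this position
            total += feature_map[row0 + dr][col0 + dc] * weights[dr][dc]
    return total              # first output pixel for first_pos

fm = [[1, 2, 3],
      [4, 5, 6],
      [7, 8, 9]]
w = [[1, 0],
     [0, 1]]
print(conv_output_pixel(fm, w, (0, 0)))  # 1*1 + 5*1 = 6
```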
-
Publication number: 20190164035
Abstract: A reorganizable neural network computing device is provided. The computing device includes a data processing array unit including a plurality of operators disposed at locations corresponding to a row and a column. One or more chaining paths which transfer the first input data from the operator of the first row of the data processing array to the operator of the second row are optionally formed. The plurality of first data input processors of the computing device transfer the first input data for a layer of the neural network to the operators along rows of the data processing array unit, and the plurality of second data input processors of the computing device transfer the second input data to the operators along the columns of the data processing array.
Type: Application
Filed: November 27, 2018
Publication date: May 30, 2019
Inventors: Young-Su KWON, Chan KIM, Hyun Mi KIM, Jeongmin YANG, Chun-Gi LYUH, Jaehoon CHUNG, Yong Cheol Peter CHO
-
Publication number: 20190079801
Abstract: Provided is a neural network accelerator which performs a calculation of a neural network provided with layers, the neural network accelerator including a kernel memory configured to store kernel data related to a filter, a feature map memory configured to store feature map data which are outputs of the layers, and a Processing Element (PE) array including PEs arranged along first and second directions, wherein each of the PEs performs a calculation using the feature map data transmitted in the first direction from the feature map memory and the kernel data transmitted in the second direction from the kernel memory, and transmits a calculation result to the feature map memory in a third direction opposite to the first direction.
Type: Application
Filed: July 18, 2018
Publication date: March 14, 2019
Applicant: Electronics and Telecommunications Research Institute
Inventors: Chun-Gi LYUH, Young-Su KWON, Chan KIM, Hyun Mi KIM, Jeongmin YANG, Jaehoon CHUNG, Yong Cheol Peter CHO