Patents by Inventor ChanNoh KIM
ChanNoh KIM has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240232091
Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer of an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
Type: Application
Filed: March 27, 2024
Publication date: July 11, 2024
Applicant: Samsung Electronics Co., Ltd.
Inventors: Yoojin KIM, Channoh KIM, Hyun Sun PARK, Sehwan LEE, Jun-Woo JANG
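A minimal sketch of the loader / buffer-controller / sender flow the abstract describes, assuming the rotational allocation is a simple ring buffer and that neighboring senders read overlapping windows for adjacent convolution outputs. All names (ReuseBuffer, fetch_overlapping_windows, etc.) are illustrative assumptions, not the patented hardware.

```python
class ReuseBuffer:
    """Small buffer whose write address advances rotationally per loading unit."""

    def __init__(self, num_slots):
        self.slots = [None] * num_slots
        self.next_addr = 0                     # rotational allocation pointer

    def store(self, data):
        addr = self.next_addr
        self.slots[addr] = data
        self.next_addr = (self.next_addr + 1) % len(self.slots)   # wrap around
        return addr

    def read(self, addr):
        return self.slots[addr % len(self.slots)]


def fetch_overlapping_windows(feature_row, kernel_w, num_senders):
    """Load input data in loading order, buffer it, and let each sender pick the
    overlapping window it needs for its own convolution output."""
    buf = ReuseBuffer(num_slots=kernel_w + num_senders - 1)
    addrs = [buf.store(x) for x in feature_row[:kernel_w + num_senders - 1]]
    # Sender s forwards kernel_w consecutive elements starting at offset s, so
    # consecutive senders share (kernel_w - 1) elements: the data reuse.
    return [[buf.read(a) for a in addrs[s:s + kernel_w]] for s in range(num_senders)]


if __name__ == "__main__":
    row = [10, 11, 12, 13, 14, 15]
    for window in fetch_overlapping_windows(row, kernel_w=3, num_senders=4):
        print(window)   # successive windows overlap by two elements
```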
-
Patent number: 11971823
Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer of an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
Type: Grant
Filed: May 11, 2021
Date of Patent: April 30, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yoojin Kim, Channoh Kim, Hyun Sun Park, Sehwan Lee, Jun-Woo Jang
-
Patent number: 11797461
Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
Type: Grant
Filed: July 6, 2022
Date of Patent: October 24, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyunsun Park, Jun-Woo Jang, Yoojin Kim, Channoh Kim
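A hedged sketch of the zero-skipping idea described above, not the claimed hardware: 2D zero-value information marks which kernel positions are zero so the corresponding inputs need not be fetched, and per-weight (1D) zero information lets the sender forward only the inputs paired with non-zero weights. Function and variable names are assumptions.

```python
def zero_value_info(kernel):
    """2D map: True where the weight is zero, so its fetch/compute can be skipped."""
    return [[w == 0 for w in row] for row in kernel]


def convolve_with_zero_skip(patch, kernel):
    """Multiply-accumulate one output pixel, skipping zero weights entirely."""
    zmap = zero_value_info(kernel)
    acc = 0
    for i, krow in enumerate(kernel):
        if all(zmap[i]):            # whole kernel row is zero: skip loading this row
            continue
        for j, w in enumerate(krow):
            if zmap[i][j]:          # 1D zero info: skip the input for this weight
                continue
            acc += patch[i][j] * w
    return acc


if __name__ == "__main__":
    patch = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
    kernel = [[0, 0, 0], [1, 0, -1], [0, 2, 0]]
    print(convolve_with_zero_skip(patch, kernel))   # 4*1 + 6*(-1) + 8*2 = 14
```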
-
Publication number: 20230065528
Abstract: An apparatus with multi-format data support includes: a receiver configured to receive a plurality of data corresponding to a plurality of data formats; one or more processors configured to: multiply the plurality of data using one or more multipliers; perform a first alignment on a result of the multiplication based on an exponent value of the plurality of data; add a result of the first alignment; and perform a second alignment on a result of the addition based on the exponent value and an operation result of a previous cycle.
Type: Application
Filed: August 9, 2022
Publication date: March 2, 2023
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyeongseok YU, Donghyuk KWON, Channoh KIM, Seongwook PARK, Yeongon CHO
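An illustrative model of the multiply, align, add, re-align flow named in the abstract, with values as (mantissa, exponent) pairs; this is an assumption-laden sketch of the general technique, not the patented datapath. Note that real floating-point hardware usually right-shifts smaller terms toward the largest exponent; this toy integer model left-shifts toward the smallest exponent so the example stays exact.

```python
def multiply(a, b):
    # (ma, ea) * (mb, eb) = (ma * mb, ea + eb)
    return (a[0] * b[0], a[1] + b[1])


def align_and_add(terms):
    """First alignment: bring all products to a common exponent, then add."""
    emin = min(e for _, e in terms)
    return (sum(m << (e - emin) for m, e in terms), emin)


def accumulate(acc, new):
    """Second alignment: align the new partial sum with the previous-cycle
    accumulator, then add."""
    (ma, ea), (mb, eb) = acc, new
    emin = min(ea, eb)
    return ((ma << (ea - emin)) + (mb << (eb - emin)), emin)


if __name__ == "__main__":
    # values encoded as mantissa * 2**exponent
    xs = [(3, 0), (5, 1), (2, 2)]     # 3, 10, 8
    ws = [(2, 0), (1, 0), (1, 1)]     # 2, 1, 2
    products = [multiply(x, w) for x, w in zip(xs, ws)]
    acc = accumulate((0, 0), align_and_add(products))
    print(acc[0] * 2 ** acc[1])       # 3*2 + 10*1 + 8*2 = 32
```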
-
Publication number: 20220342833
Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
Type: Application
Filed: July 6, 2022
Publication date: October 27, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyunsun PARK, Jun-Woo JANG, Yoojin KIM, Channoh KIM
-
Publication number: 20220269597
Abstract: A memory mapping method includes storing a feature map including a plurality of sets of data used for a neural network operation in a memory, shifting a position of a portion of the data included in the feature map that is stored in the memory based on a parameter of the neural network operation, and outputting data requested by the neural network operation from the feature map in which the position of the portion is shifted based on a memory bandwidth.
Type: Application
Filed: June 21, 2021
Publication date: August 25, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Channoh KIM, Yoojin KIM
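One possible reading of this abstract is a shifted (skewed) feature-map layout so that the data an operation requests lines up better with the memory's access width. The sketch below is only that interpretation under stated assumptions: each row is rotated by a row-dependent offset on store, and reads undo the shift. The shift rule and names are hypothetical.

```python
def store_shifted(feature_map, shift):
    """Write each row r with a row-dependent rotation of (r * shift) positions."""
    return [row[(r * shift) % len(row):] + row[:(r * shift) % len(row)]
            for r, row in enumerate(feature_map)]


def load_element(memory, r, c, shift):
    """Read element (r, c) back from the shifted layout."""
    row = memory[r]
    return row[(c - r * shift) % len(row)]


if __name__ == "__main__":
    fmap = [[r * 10 + c for c in range(8)] for r in range(4)]
    mem = store_shifted(fmap, shift=1)
    # round-trip check: the shifted layout still returns the original values
    assert all(load_element(mem, r, c, 1) == fmap[r][c]
               for r in range(4) for c in range(8))
    print(mem[1])   # row 1 is stored rotated by one position
```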
-
Patent number: 11409675
Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
Type: Grant
Filed: May 25, 2021
Date of Patent: August 9, 2022
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hyunsun Park, Jun-Woo Jang, Yoojin Kim, Channoh Kim
-
Publication number: 20220197834
Abstract: A data transmission method for a convolution operation, and a convolution operation apparatus including a fetcher that includes a loader, at least one sender, a buffer controller, and a reuse buffer. The method includes loading, by the loader, input data of an input feature map according to a loading order, based on input data stored in the reuse buffer, a shape of a kernel to be used for a convolution operation, and two-dimensional (2D) zero-value information of weights of the kernel; storing, by the buffer controller, the loaded input data in the reuse buffer of an address cyclically assigned according to the loading order; and selecting, by each of the at least one sender, input data corresponding to each output data of a convolution operation among the input data stored in the reuse buffer, based on one-dimensional (1D) zero-value information of the weights, and outputting the selected input data.
Type: Application
Filed: May 25, 2021
Publication date: June 23, 2022
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Hyunsun PARK, Jun-Woo JANG, Yoojin KIM, Channoh KIM
-
Publication number: 20220164674
Abstract: A neural network device includes: a memory configured to store a first feature map and a second feature map; and a neural network processor configured to operate a neural network, and comprising: a fetcher configured to fetch input data from the first feature map of the memory; a buffer configured to store the input data; an operator configured to generate output data by performing a convolution operation between the input data and a kernel; a writer configured to write the output data in the second feature map of the memory; and a controller configured to control the fetcher to fetch the input data and control the writer to write the output data, according to one or more intervals and one or more offsets determined based on a dilation rate of the kernel in multiple steps.
Type: Application
Filed: November 10, 2021
Publication date: May 26, 2022
Applicant: Samsung Electronics Co., Ltd.
Inventors: Jun-Woo JANG, Yoojin KIM, Channoh KIM, Hyun Sun PARK
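An illustrative sketch of interval- and offset-based fetching for a dilated (atrous) convolution, the general technique the abstract refers to; the 1-D form and all names are assumptions, not the claimed controller. With dilation rate d, a k-wide kernel touches inputs at offsets 0, d, 2d, ..., so the fetcher walks the first feature map at interval d while outputs are written densely.

```python
def dilated_fetch_indices(base, kernel_size, dilation):
    """Input offsets read for one output element of a 1-D dilated convolution."""
    return [base + i * dilation for i in range(kernel_size)]


def dilated_conv1d(x, w, dilation):
    span = (len(w) - 1) * dilation          # receptive field of the dilated kernel
    out = []
    for base in range(len(x) - span):       # one output per valid base offset
        idxs = dilated_fetch_indices(base, len(w), dilation)
        out.append(sum(x[i] * wi for i, wi in zip(idxs, w)))
    return out


if __name__ == "__main__":
    x = [1, 2, 3, 4, 5, 6, 7]
    w = [1, 0, -1]
    print(dilated_conv1d(x, w, dilation=2))   # [1-5, 2-6, 3-7] = [-4, -4, -4]
```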
-
Publication number: 20220164289
Abstract: A computing method and device with data sharing are provided. The method includes loading, by a loader, input data of an input feature map stored in a memory in loading units according to a loading order, storing, by a buffer controller, the loaded input data in a reuse buffer of an address rotationally allocated according to the loading order, and transmitting, by each of a plurality of senders, to an executer respective input data corresponding to each output data of respective convolution operations among the input data stored in the reuse buffer, wherein portions of the transmitted respective input data overlap each other.
Type: Application
Filed: May 11, 2021
Publication date: May 26, 2022
Applicant: Samsung Electronics Co., Ltd.
Inventors: Yoojin KIM, Channoh KIM, Hyun Sun PARK, Sehwan LEE, Jun-Woo JANG
-
Patent number: 10977012
Abstract: Provided is a computing device according to an embodiment of the present disclosure including an integrated register file configured to store a first variable type and a first variable value of a first variable, and a second variable type and a second variable value of a second variable, a calculator configured to perform a first calculation on the first and second variables according to the first and second variable types, and output a first calculation result, and a type rule table comprising a plurality of entries and, when there is an entry corresponding to a type of the first calculation, and the first and second variable types, configured to output a type of the first calculation result.
Type: Grant
Filed: February 7, 2019
Date of Patent: April 13, 2021
Assignee: Seoul National University R&DB Foundation
Inventors: Jae Wook Lee, Channoh Kim, Jaehyeok Kim, Sungmin Kim
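A hedged software model of the idea above, not the patented microarchitecture: the register file holds a (type, value) pair per variable, and a type rule table maps (operation, left type, right type) to the result type in a single lookup. TYPE_RULES and typed_add are assumed names.

```python
INT, FLT, STR = "int", "float", "str"

# (operation, type of first operand, type of second operand) -> result type
TYPE_RULES = {
    ("add", INT, INT): INT,
    ("add", INT, FLT): FLT,
    ("add", FLT, INT): FLT,
    ("add", FLT, FLT): FLT,
    ("add", STR, STR): STR,
}


def typed_add(a, b):
    """a and b are (type, value) entries of the integrated register file."""
    result_type = TYPE_RULES.get(("add", a[0], b[0]))
    if result_type is None:
        # table miss: a real system would trap to a slow path here
        raise TypeError(f"no rule for add({a[0]}, {b[0]})")
    return (result_type, a[1] + b[1])


if __name__ == "__main__":
    print(typed_add((INT, 3), (FLT, 0.5)))    # ('float', 3.5)
    print(typed_add((STR, "a"), (STR, "b")))  # ('str', 'ab')
```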
-
Patent number: 10732977
Abstract: The bytecode processing device includes a branch target buffer including a tag field, a target address field corresponding to the tag field and an operation code bit field for representing whether a value stored in the tag field is an operation code, a bytecode fetch unit configured to fetch a bytecode including an operation code, an operation code extraction unit configured to extract the operation code from the bytecode, a branch target buffer search unit configured to perform a search to determine whether the extracted operation code exists in the tag field of the branch target buffer, and if the operation code exists in the tag field, extract a target address corresponding to the operation code from the target address field, and a bytecode execution unit configured to execute the bytecode by branching to the target address.
Type: Grant
Filed: June 13, 2018
Date of Patent: August 4, 2020
Assignee: Seoul National University R&DB Foundation
Inventors: JaeWook Lee, ChanNoh Kim, SungMin Kim
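A minimal software model of the mechanism described above (assumed names, not the hardware itself): branch target buffer entries carry an opcode bit, so an opcode-tagged entry maps a bytecode's operation code directly to the target address of its interpreter handler; a miss resolves the handler and fills the buffer.

```python
class BranchTargetBuffer:
    def __init__(self):
        self.entries = {}   # tag -> (target address, opcode bit)

    def insert(self, tag, target, is_opcode):
        self.entries[tag] = (target, is_opcode)

    def lookup_opcode(self, opcode):
        hit = self.entries.get(opcode)
        return hit[0] if hit and hit[1] else None   # only opcode-tagged entries match


def run(bytecodes, handlers, btb):
    acc = 0
    for opcode, operand in bytecodes:
        handler = btb.lookup_opcode(opcode)         # predicted handler address
        if handler is None:                         # BTB miss: resolve and fill
            handler = handlers[opcode]
            btb.insert(opcode, handler, is_opcode=True)
        acc = handler(acc, operand)                 # branch to the target address
    return acc


if __name__ == "__main__":
    handlers = {"PUSH": lambda acc, x: x, "ADD": lambda acc, x: acc + x}
    program = [("PUSH", 1), ("ADD", 2), ("ADD", 3)]
    print(run(program, handlers, BranchTargetBuffer()))   # 6
```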
-
Publication number: 20190310833
Abstract: Provided is a computing device according to an embodiment of the present disclosure including an integrated register file configured to store a first variable type and a first variable value of a first variable, and a second variable type and a second variable value of a second variable, a calculator configured to perform a first calculation on the first and second variables according to the first and second variable types, and output a first calculation result, and a type rule table comprising a plurality of entries and, when there is an entry corresponding to a type of the first calculation, and the first and second variable types, configured to output a type of the first calculation result.
Type: Application
Filed: February 7, 2019
Publication date: October 10, 2019
Applicant: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Jae Wook LEE, Channoh Kim, Jaehyeok Kim, Sungmin Kim
-
Publication number: 20180365015
Abstract: The bytecode processing device includes a branch target buffer including a tag field, a target address field corresponding to the tag field and an operation code bit field for representing whether a value stored in the tag field is an operation code, a bytecode fetch unit configured to fetch a bytecode including an operation code, an operation code extraction unit configured to extract the operation code from the bytecode, a branch target buffer search unit configured to perform a search to determine whether the extracted operation code exists in the tag field of the branch target buffer, and if the operation code exists in the tag field, extract a target address corresponding to the operation code from the target address field, and a bytecode execution unit configured to execute the bytecode by branching to the target address.
Type: Application
Filed: June 13, 2018
Publication date: December 20, 2018
Applicant: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: JaeWook LEE, ChanNoh KIM, SungMin KIM