Patents by Inventor Sang-Hyuck Ha
Sang-Hyuck Ha has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11823028
Abstract: An artificial neural network (ANN) quantization method for generating an output ANN by quantizing an input ANN includes: obtaining second parameters by quantizing first parameters of the input ANN; obtaining a sample distribution from an intermediate ANN in which the obtained second parameters have been applied to the input ANN; and obtaining a fractional length for the sample distribution by quantizing the obtained sample distribution.
Type: Grant
Filed: July 24, 2018
Date of Patent: November 21, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Do-yun Kim, Han-young Yim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
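The flow in this abstract (quantize the parameters, observe a sample distribution from the intermediate network, then derive a fractional length for that distribution) can be sketched roughly as below. This is a minimal illustration, not the patented method: the function names, the 8-bit width, and the magnitude-based rule for choosing the fractional length are all assumptions.

```python
import math

def fractional_length(samples, bit_width=8):
    """Pick a fractional length so the largest sample magnitude still
    fits in a signed fixed-point value of `bit_width` bits (assumed rule)."""
    max_abs = max(abs(s) for s in samples)
    # Integer bits needed for the magnitude (sign bit excluded).
    int_bits = math.floor(math.log2(max_abs)) + 1 if max_abs > 0 else 0
    return bit_width - 1 - int_bits

def quantize(value, frac_len, bit_width=8):
    """Round to the nearest representable fixed-point value, with saturation."""
    scale = 2 ** frac_len
    lo, hi = -(2 ** (bit_width - 1)), 2 ** (bit_width - 1) - 1
    return max(lo, min(hi, round(value * scale))) / scale

samples = [0.12, -0.8, 1.5, 0.3]
fl = fractional_length(samples)  # 1.5 needs one integer bit -> frac_len 6
print(fl, quantize(1.5, fl), quantize(0.3, fl))
```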
-
Patent number: 11789865
Abstract: A semiconductor device is provided. The semiconductor device comprises a first memory unit including a first memory area, and a first logic area electrically connected to the first memory area, the first logic area including a cache memory and a first interface port. The first memory unit executes a data transmission and reception operation with a memory unit adjacent to the first memory unit via the first interface port and the cache memory.
Type: Grant
Filed: December 8, 2021
Date of Patent: October 17, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sang Soo Ko, Jae Gon Kim, Kyoung Young Kim, Sang Hyuck Ha
-
Patent number: 11784760
Abstract: Apparatuses (including user equipment (UE) and modem chips for UEs), systems, and methods for UE downlink Hybrid Automatic Repeat reQuest (HARQ) buffer memory management are described. In one method, the entire UE DL HARQ buffer memory space is pre-partitioned according to the number and capacities of the UE's active carrier components. In another method, the UE DL HARQ buffer is split between on-chip and off-chip memory so that each partition and sub-partition is allocated between the on-chip and off-chip memories in accordance with an optimum ratio.
Type: Grant
Filed: December 1, 2020
Date of Patent: October 10, 2023
Inventors: Mostafa El-Khamy, Arvind Yedla, Sang-Hyuck Ha, Hyunsang Cho, Inyup Kang
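The two ideas above — pre-partitioning the DL HARQ buffer by carrier-component capacity, and splitting each partition between on-chip and off-chip memory by a ratio — can be sketched as follows. The proportional rule, the function names, and the fixed ratio are illustrative assumptions; the patent's actual partitioning and optimum-ratio derivation are more involved.

```python
def partition_harq_buffer(total_bytes, cc_capacities):
    """Pre-partition the DL HARQ buffer in proportion to each active
    carrier component's soft-buffer capacity (assumed proportional rule)."""
    total_cap = sum(cc_capacities)
    return [total_bytes * c // total_cap for c in cc_capacities]

def split_on_off_chip(partition_bytes, on_chip_ratio):
    """Split one partition between on-chip and off-chip memory."""
    on_chip = int(partition_bytes * on_chip_ratio)
    return on_chip, partition_bytes - on_chip

parts = partition_harq_buffer(1_000_000, [100, 300, 100])
print(parts)                              # [200000, 600000, 200000]
print(split_on_off_chip(parts[1], 0.25))  # (150000, 450000)
```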
-
Patent number: 11775807
Abstract: An artificial neural network (ANN) system includes a processor, a virtual overflow detection circuit and a data format controller. The processor performs node operations with respect to a plurality of nodes included in each layer of an ANN to obtain a plurality of result values of the node operations and performs a quantization operation on the obtained plurality of result values based on a k-th fixed-point format for a current quantization of each layer to obtain a plurality of quantization values. The virtual overflow detection circuit generates virtual overflow information indicating a distribution of valid bit numbers of the obtained plurality of quantization values. The data format controller determines a (k+1)-th fixed-point format for a next quantization of each layer based on the generated virtual overflow information. An overflow and/or an underflow is prevented efficiently by controlling the fixed-point format using the virtual overflow information.
Type: Grant
Filed: March 28, 2019
Date of Patent: October 3, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jae-Gon Kim, Kyoung-Young Kim, Do-Yun Kim, Jun-Seok Park, Sang-Hyuck Ha
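A toy model of the "virtual overflow" bookkeeping — a histogram over valid bit numbers of the quantized values, driving the choice of the next fixed-point format — might look like this. The 8-bit demo width and the adjust-by-one policy are illustrative assumptions, not the circuit described in the patent.

```python
def valid_bits(q):
    """Number of bits needed to represent |q| (the 'valid bit number')."""
    return max(1, abs(q).bit_length())

def virtual_overflow_histogram(quant_values, bit_width=16):
    """Distribution of valid bit numbers across one layer's quantized values."""
    hist = [0] * (bit_width + 1)
    for q in quant_values:
        hist[valid_bits(q)] += 1
    return hist

def next_fraction_length(hist, cur_frac_len, bit_width=16):
    """If any value used all bits, shrink the fractional part (avoid
    overflow); if the top bits went unused, grow it (avoid underflow)."""
    if hist[bit_width]:               # some value saturates the format
        return cur_frac_len - 1
    top_used = max(i for i, n in enumerate(hist) if n)
    if top_used < bit_width - 1:      # headroom left: gain precision
        return cur_frac_len + 1
    return cur_frac_len
```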
-
Patent number: 11694073
Abstract: A method and apparatus for generating a fixed point neural network are provided. The method includes selecting at least one layer of a neural network as an object layer, wherein the neural network includes a plurality of layers, each of the plurality of layers corresponding to a respective one of a plurality of quantization parameters; forming a candidate parameter set including candidate parameter values with respect to a quantization parameter of the plurality of quantization parameters corresponding to the object layer; determining an update parameter value from among the candidate parameter values based on levels of network performance of the neural network, wherein each of the levels of network performance corresponds to a respective one of the candidate parameter values; and updating the quantization parameter with respect to the object layer based on the update parameter value.
Type: Grant
Filed: November 20, 2018
Date of Patent: July 4, 2023
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-Woo Sung, Jong-han Lim, Sang-hyuck Ha
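The selection step described above — evaluate each candidate value of the object layer's quantization parameter against network performance and keep the best — reduces, in the simplest reading, to an argmax over candidates. A minimal sketch, where `evaluate` stands in for a full quantize-and-measure pass (an assumption, not the patent's procedure):

```python
def update_layer_parameter(evaluate, candidates):
    """Pick, from the candidate set for one object layer's quantization
    parameter, the value whose quantized network performs best.
    `evaluate(candidate)` returns network performance (higher is better)."""
    return max(candidates, key=evaluate)

# Hypothetical performance model: accuracy peaks near a fraction length of 6.
best = update_layer_parameter(lambda fl: -abs(fl - 6), [4, 5, 6, 7, 8])
print(best)  # 6
```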
-
Patent number: 11373087
Abstract: A method of generating a fixed-point type neural network by quantizing a floating-point type neural network, includes obtaining, by a device, a plurality of post-activation values by applying an activation function to a plurality of activation values that are received from a layer included in the floating-point type neural network, and deriving, by the device, a plurality of statistical characteristics for at least some of the plurality of post-activation values. The method further includes determining, by the device, a step size for the quantizing of the floating-point type neural network, based on the plurality of statistical characteristics, and determining, by the device, a final fraction length for the fixed-point type neural network, based on the step size.
Type: Grant
Filed: July 12, 2018
Date of Patent: June 28, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
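The two determinations above — a step size from statistics of the post-activations, then a fraction length from that step size — can be sketched as follows. The k-sigma clipping range and 256-level spread are one possible statistical choice, assumed for illustration; only the final mapping (fraction length as the nearest power-of-two exponent of the step) is generic.

```python
import math
import statistics

def step_size_from_stats(post_activations, num_levels=256, k=3.0):
    """Clip the range at k standard deviations (assumed choice) and
    spread `num_levels` quantization levels across it."""
    sigma = statistics.stdev(post_activations)
    return 2 * k * sigma / num_levels

def fraction_length_from_step(step):
    """Closest power-of-two step: frac_len = round(-log2(step))."""
    return round(-math.log2(step))

step = step_size_from_stats([0.0, 1.0, 2.0, 3.0, 4.0])
print(step, fraction_length_from_step(step))
```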
-
Publication number: 20220100656
Abstract: A semiconductor device is provided. The semiconductor device comprises a first memory unit including a first memory area, and a first logic area electrically connected to the first memory area, the first logic area including a cache memory and a first interface port. The first memory unit executes a data transmission and reception operation with a memory unit adjacent to the first memory unit via the first interface port and the cache memory.
Type: Application
Filed: December 8, 2021
Publication date: March 31, 2022
Inventors: Sang Soo Ko, Jae Gon Kim, Kyoung Young Kim, Sang Hyuck Ha
-
Patent number: 11275986
Abstract: A method of quantizing an artificial neural network includes dividing an input distribution of the artificial neural network into a plurality of segments, generating an approximated density function by approximating each of the plurality of segments, calculating at least one quantization error corresponding to at least one step size for quantizing the artificial neural network, based on the approximated density function, and determining a final step size for quantizing the artificial neural network based on the at least one quantization error.
Type: Grant
Filed: June 14, 2018
Date of Patent: March 15, 2022
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Do-yun Kim, Han-young Yim, In-yup Kang, Byeoung-su Kim, Nak-woo Sung, Jong-Han Lim, Sang-hyuck Ha
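The step-size search described above trades granular error (a coarser step inside the clip range) against overload error (mass clipped beyond it). A rough numeric sketch under assumed choices — a fixed number of positive levels, a hypothetical triangular density standing in for the segment-wise approximation, and brute-force integration:

```python
def quantization_error(density, step, levels=8, x_max=4.0, n=4000):
    """Mean-squared error of a symmetric uniform quantizer with `levels`
    positive levels: granular error inside the clip range plus overload
    error beyond it, integrated numerically against density f on [0, x_max]."""
    dx = x_max / n
    err = 0.0
    for i in range(n):
        x = (i + 0.5) * dx
        q = min(round(x / step), levels) * step  # saturate at the top level
        err += density(x) * (x - q) ** 2 * dx
    return 2 * err  # density assumed symmetric about zero

def best_step(density, candidates):
    """Final step size = the candidate with the smallest quantization error."""
    return min(candidates, key=lambda s: quantization_error(density, s))

# Hypothetical triangular density: f(|x|) = (4 - |x|)/16 for |x| <= 4.
tri = lambda x: max(0.0, (4 - x) / 16)
print(best_step(tri, [0.2, 0.4, 0.8]))
```

With these numbers, 0.2 clips too much mass, 0.8 is needlessly coarse, and the middle candidate minimizes the combined error.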
-
Patent number: 11200165
Abstract: A semiconductor device is provided. The semiconductor device comprises a first memory unit including a first memory area, and a first logic area electrically connected to the first memory area, the first logic area including a cache memory and a first interface port. The first memory unit executes a data transmission and reception operation with a memory unit adjacent to the first memory unit via the first interface port and the cache memory.
Type: Grant
Filed: July 30, 2019
Date of Patent: December 14, 2021
Assignee: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Sang Soo Ko, Jae Gon Kim, Kyoung Young Kim, Sang Hyuck Ha
-
Publication number: 20210083809
Abstract: Apparatuses (including user equipment (UE) and modem chips for UEs), systems, and methods for UE downlink Hybrid Automatic Repeat reQuest (HARQ) buffer memory management are described. In one method, the entire UE DL HARQ buffer memory space is pre-partitioned according to the number and capacities of the UE's active carrier components. In another method, the UE DL HARQ buffer is split between on-chip and off-chip memory so that each partition and sub-partition is allocated between the on-chip and off-chip memories in accordance with an optimum ratio.
Type: Application
Filed: December 1, 2020
Publication date: March 18, 2021
Inventors: Mostafa El-Khamy, Arvind Yedla, Sang-Hyuck Ha, Hyunsang Cho, Inyup Kang
-
Publication number: 20200210836
Abstract: A neural network optimizing device includes a performance estimating module that outputs estimated performance according to performing operations of a neural network based on limitation requirements on resources used to perform the operations of the neural network. A portion selecting module receives the estimated performance from the performance estimating module and selects a portion of the neural network which deviates from the limitation requirements. A new neural network generating module generates, through reinforcement learning, a subset by changing a layer structure included in the selected portion of the neural network, determines an optimal layer structure based on the estimated performance provided from the performance estimating module, and changes the selected portion to the optimal layer structure to generate a new neural network. A final neural network output module outputs the new neural network generated by the new neural network generating module as a final neural network.
Type: Application
Filed: August 24, 2019
Publication date: July 2, 2020
Inventors: Kyoung Young Kim, Sang Soo Ko, Byeoung-su Kim, Jae Gon Kim, Do Yun Kim, Sang Hyuck Ha
-
Publication number: 20200174928
Abstract: A semiconductor device is provided. The semiconductor device comprises a first memory unit including a first memory area, and a first logic area electrically connected to the first memory area, the first logic area including a cache memory and a first interface port. The first memory unit executes a data transmission and reception operation with a memory unit adjacent to the first memory unit via the first interface port and the cache memory.
Type: Application
Filed: July 30, 2019
Publication date: June 4, 2020
Inventors: Sang Soo Ko, Jae Gon Kim, Kyoung Young Kim, Sang Hyuck Ha
-
Publication number: 20200074285
Abstract: An artificial neural network (ANN) system includes a processor, a virtual overflow detection circuit and a data format controller. The processor performs node operations with respect to a plurality of nodes included in each layer of an ANN to obtain a plurality of result values of the node operations and performs a quantization operation on the obtained plurality of result values based on a k-th fixed-point format for a current quantization of each layer to obtain a plurality of quantization values. The virtual overflow detection circuit generates virtual overflow information indicating a distribution of valid bit numbers of the obtained plurality of quantization values. The data format controller determines a (k+1)-th fixed-point format for a next quantization of each layer based on the generated virtual overflow information. An overflow and/or an underflow is prevented efficiently by controlling the fixed-point format using the virtual overflow information.
Type: Application
Filed: March 28, 2019
Publication date: March 5, 2020
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Jae-Gon Kim, Kyoung-Young Kim, Do-Yun Kim, Jun-Seok Park, Sang-Hyuck Ha
-
Publication number: 20190180177
Abstract: A method and apparatus for generating a fixed point neural network are provided. The method includes selecting at least one layer of a neural network as an object layer, wherein the neural network includes a plurality of layers, each of the plurality of layers corresponding to a respective one of a plurality of quantization parameters; forming a candidate parameter set including candidate parameter values with respect to a quantization parameter of the plurality of quantization parameters corresponding to the object layer; determining an update parameter value from among the candidate parameter values based on levels of network performance of the neural network, wherein each of the levels of network performance corresponds to a respective one of the candidate parameter values; and updating the quantization parameter with respect to the object layer based on the update parameter value.
Type: Application
Filed: November 20, 2018
Publication date: June 13, 2019
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
-
Publication number: 20190147322
Abstract: An artificial neural network (ANN) quantization method for generating an output ANN by quantizing an input ANN includes: obtaining second parameters by quantizing first parameters of the input ANN; obtaining a sample distribution from an intermediate ANN in which the obtained second parameters have been applied to the input ANN; and obtaining a fractional length for the sample distribution by quantizing the obtained sample distribution.
Type: Application
Filed: July 24, 2018
Publication date: May 16, 2019
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Do-yun Kim, Han-young Yim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
-
Publication number: 20190130255
Abstract: A method of generating a fixed-point type neural network by quantizing a floating-point type neural network, includes obtaining, by a device, a plurality of post-activation values by applying an activation function to a plurality of activation values that are received from a layer included in the floating-point type neural network, and deriving, by the device, a plurality of statistical characteristics for at least some of the plurality of post-activation values. The method further includes determining, by the device, a step size for the quantizing of the floating-point type neural network, based on the plurality of statistical characteristics, and determining, by the device, a final fraction length for the fixed-point type neural network, based on the step size.
Type: Application
Filed: July 12, 2018
Publication date: May 2, 2019
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Han-young Yim, Do-yun Kim, Byeoung-su Kim, Nak-woo Sung, Jong-han Lim, Sang-hyuck Ha
-
Publication number: 20190095777
Abstract: A method of quantizing an artificial neural network includes dividing an input distribution of the artificial neural network into a plurality of segments, generating an approximated density function by approximating each of the plurality of segments, calculating at least one quantization error corresponding to at least one step size for quantizing the artificial neural network, based on the approximated density function, and determining a final step size for quantizing the artificial neural network based on the at least one quantization error.
Type: Application
Filed: June 14, 2018
Publication date: March 28, 2019
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Do-yun Kim, Han-young Yim, In-yup Kang, Byeoung-su Kim, Nak-woo Sung, Jong-Han Lim, Sang-hyuck Ha
-
Patent number: 7924779
Abstract: Disclosed is an apparatus and a method for effectively allocating frequency resources in a mobile communication system. The method includes: determining a first offset of a preset frequency band from among all frequency bands, determining a second offset corresponding to a symbol unit in the preset frequency band, and allocating frequency resources to data, the frequency resources corresponding to a sum of the first offset and the second offset.
Type: Grant
Filed: March 27, 2006
Date of Patent: April 12, 2011
Assignee: Samsung Electronics Co., Ltd.
Inventors: Hae-Dong Yeon, Yun-Sang Park, Min-Goo Kim, Bong-Gee Song, Young-Mo Gu, Jae-Yong Lee, Sung-Soo Kim, Sang-Hyuck Ha
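The allocation rule above — a band-level first offset plus a symbol-level second offset identifying the resources — can be sketched as a tiny index computation. The function name, the contiguous-subcarrier layout, and the numbers are hypothetical; the abstract specifies only that the resources correspond to the sum of the two offsets.

```python
def resource_indices(first_offset, second_offset, num_subcarriers):
    """Frequency resources for one allocation: the preset band's first
    offset plus the per-symbol second offset, spanning `num_subcarriers`
    contiguous tones (contiguity is an assumption for illustration)."""
    start = first_offset + second_offset
    return list(range(start, start + num_subcarriers))

print(resource_indices(120, 8, 4))  # [128, 129, 130, 131]
```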
-
Patent number: 7702970
Abstract: An apparatus and method for reading written symbols by deinterleaving to decode a written encoder packet in a receiver for a mobile communication system supporting turbo coding and interleaving, such that a turbo-coded/interleaved encoder packet has a bit shift value m, an up-limit value J and a remainder R, and a stream of symbols of the encoder packet is written in order of column to row. The apparatus and method perform the operations of generating an interim address by bit reversal order (BRO) assuming that the remainder R is 0 for the received symbols; calculating an address compensation factor for compensating the interim address in consideration of a column formed with the remainder; and generating a read address by adding the interim address and the address compensation factor for a decoding-required symbol, and reading a symbol written in the generated read address.
Type: Grant
Filed: October 29, 2003
Date of Patent: April 20, 2010
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sang-Hyuck Ha, Seo-Weon Heo, Nam-Yul Yu, Min-Goo Kim, Seong-Woo Ahn
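The bit reversal order (BRO) operation at the core of the interim-address generation is a standard permutation: reverse the low `bits` bits of an index. A minimal sketch of just that step follows; the patent's remainder-based address compensation factor is omitted here.

```python
def bro(value, bits):
    """Bit-reversal-order permutation of `value` within `bits` bits,
    e.g. 0b001 -> 0b100 for bits=3. BRO is its own inverse."""
    out = 0
    for _ in range(bits):
        out = (out << 1) | (value & 1)  # shift in the lowest bit of value
        value >>= 1
    return out

print([bro(i, 3) for i in range(8)])  # [0, 4, 2, 6, 1, 5, 3, 7]
```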
-
Patent number: 7639659
Abstract: A Hybrid Automatic Repeat reQuest (HARQ) control apparatus and method for detecting a message received over a packet data control channel in a mobile communication system are provided, in which a base station transmits a packet data control message over at least one packet data control channel and transmits packet data to a mobile station over a packet data channel. A control channel decoder decodes a control message received over the packet data control channel. Based on the decoded control message, the HARQ controller determines at least one of whether to perform demodulation and decoding on a packet received over the packet data channel, whether to update a Walsh mask, and whether to perform state transition on the mobile station. Based on the determination, the HARQ controller performs at least one of the following functions: outputs demodulation and decoding parameters, updates a Walsh mask, and outputs a state transition value to an upper layer.
Type: Grant
Filed: March 8, 2004
Date of Patent: December 29, 2009
Assignee: Samsung Electronics Co., Ltd.
Inventors: Sang-Hyuck Ha, Jin-Woo Heo, Min-Goo Kim
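The controller's three-way decision — demodulate/decode, update the Walsh mask, signal a state transition — can be sketched as a simple dispatch on the decoded control message. Every field name below is hypothetical; the abstract does not specify the message format.

```python
def handle_control_message(msg):
    """Decide which HARQ-controller actions a decoded packet-data-control
    message triggers (all dict keys are assumed, illustrative names)."""
    actions = {}
    if msg.get("addressed_to_us"):
        actions["demodulate"] = True                  # demod/decode the packet
        actions["decode_params"] = msg.get("packet_format")
    if "walsh_mask" in msg:
        actions["update_walsh_mask"] = msg["walsh_mask"]
    if "state" in msg:
        actions["state_transition"] = msg["state"]    # passed to an upper layer
    return actions
```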