Patents by Inventor Jin Ho Han

Jin Ho Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250199968
    Abstract: The present embodiment provides a physical layer (PHY) between a high-speed memory and a memory controller, including: an analog physical layer including a read data strobe (RDQS) delay unit that delays an RDQS signal of the high-speed memory and outputs the delayed RDQS signal and a data (DQ) delay unit that delays DQ signals and outputs the delayed DQ signals; and a digital physical layer including an asynchronous first-in first-out (FIFO) that samples the DQ signal with the RDQS signal and outputs the DQ signal, a DQ arrangement block that rearranges the DQ signal, and a validity signal forming unit that forms a validity signal indicating validity of data from data output from the asynchronous FIFO, in which the analog physical layer receives a clock having the same frequency as the high-speed memory and operates, and the asynchronous FIFO and the validity signal forming unit receive a clock having a lower frequency than the clock provided to the high-speed memory and operate.
    Type: Application
    Filed: December 18, 2024
    Publication date: June 19, 2025
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jaewoong CHOI, Jin Ho HAN, Young-Su KWON
  • Publication number: 20250190173
    Abstract: The present embodiment relates to a computing apparatus for computing an interpolated non-linear activation function for an input. The computing apparatus includes a plurality of unit processing elements (PEs), and each unit PE includes: a multiplier that multiplies the input by the output of an accumulator; an adder that adds the output of the multiplier to a coefficient of the interpolated non-linear activation function; and an accumulator that accumulates and outputs the output of the adder.
    Type: Application
    Filed: December 10, 2024
    Publication date: June 12, 2025
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin Kyu KIM, Jin Ho HAN
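
The multiply-add-accumulate datapath described in the abstract above reads like Horner evaluation of an interpolating polynomial. The sketch below is one possible interpretation of that loop, not the patented circuit; the function name and coefficient ordering are assumptions for illustration.

```python
def unit_pe_eval(x, coeffs):
    """Evaluate an interpolated activation polynomial with the unit-PE loop
    sketched in the abstract: multiplier (acc * x), adder (+ coefficient),
    accumulator (feeds the next iteration).

    coeffs: polynomial coefficients, highest order first (assumed ordering).
    """
    acc = 0.0
    for c in coeffs:
        acc = acc * x + c   # multiplier output plus coefficient, accumulated
    return acc
```

For example, evaluating x**2 - 1 at x = 2 with `unit_pe_eval(2.0, [1.0, 0.0, -1.0])` yields 3.0.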
  • Patent number: 12266400
    Abstract: A method for ZQ calibration for a data transmission driving circuit of each memory die in a memory chip package in which memory dies are stacked, includes generating a reference current through a reference resistor connected between a power terminal supplying a power voltage of the data transmission driving circuit and a ground terminal and a first transistor that is diode-connected; supplying first currents corresponding to the reference current to a pull-up driver of each memory die; performing ZQ calibration of a pull-up driver of a corresponding memory die by comparing a first voltage formed by each first current with a reference voltage formed by the reference current in each of the memory dies; and performing ZQ calibration of a pull-down driver of the corresponding memory die based on an output impedance of the ZQ calibrated pull-up driver in each of the memory dies.
    Type: Grant
    Filed: November 8, 2022
    Date of Patent: April 1, 2025
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Min-Hyung Cho, Young-Deuk Jeon, Jin Ho Han
  • Patent number: 12231100
    Abstract: An apparatus for receiving a strobe signal may include an amplifier for amplifying a strobe signal input thereto, an offset generator for controlling the setting of a threshold for detecting a preamble signal by generating an offset for the amplifier, and a preamble detector for detecting a first preamble signal occurring at a point at which the amplified strobe signal is equal to or greater than the threshold and turning off the offset generator when the first preamble signal is detected.
    Type: Grant
    Filed: September 6, 2022
    Date of Patent: February 18, 2025
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Yi-Gyeong Kim, Young-Deuk Jeon, Young-Su Kwon, Jin-Ho Han
  • Patent number: 12087392
    Abstract: Provided is a memory interface device comprising: a DQS input buffer configured to receive input data strobe signals and output a first intermediate data strobe signal, the DQS input buffer providing a static offset; an offset control circuit configured to receive the first intermediate data strobe signal and output a second intermediate data strobe signal; and a duty adjustment buffer configured to receive the second intermediate data strobe signal and output a clean data strobe signal, wherein the offset control circuit provides a dynamic offset using the clean data strobe signal.
    Type: Grant
    Filed: December 30, 2021
    Date of Patent: September 10, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Deuk Jeon, Min-Hyung Cho, Young-Su Kwon, Jin Ho Han
  • Publication number: 20240211401
    Abstract: Disclosed herein are a method for supporting cache coherency based on virtual addresses for an artificial intelligence processor having large on-chip memory and an apparatus for the same. The method for supporting cache coherency according to an embodiment of the present disclosure includes, by an artificial intelligence processor including multiple processor cores and multiple caches, setting external memory address areas that do not overlap each other for the respective caches; and providing virtual addresses with which the multiple processor cores access the multiple caches.
    Type: Application
    Filed: November 29, 2023
    Publication date: June 27, 2024
    Inventors: Jin-Ho HAN, Young-Su KWON
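
The address-area scheme above gives coherency by construction: if each cache owns a disjoint slice of external memory, no line can ever be resident in two caches at once. The sketch below is a minimal illustration of that partitioning idea; the function name, equal-sized areas, and parameters are my assumptions, not details from the patent.

```python
def owning_cache(addr, num_caches, mem_size):
    """Return the index of the cache whose (non-overlapping, equal-sized)
    external memory address area contains addr. Assumes mem_size is
    divisible into num_caches equal areas for simplicity."""
    area = mem_size // num_caches          # size of each disjoint area
    return min(addr // area, num_caches - 1)
```

With 4 caches over a 1024-byte space, addresses 0, 300, and 1023 resolve to caches 0, 1, and 3 respectively, and no address ever maps to two caches.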
  • Publication number: 20240176590
    Abstract: An embodiment of the present disclosure may provide a multiply-accumulate operation method performed by a multiply-accumulate operation apparatus, the multiply-accumulate operation method including accumulating, by an accumulation register, a value within a preset bit value of a mantissa bitwidth in a result of an addition operation of a shifted first mantissa value and a shifted second mantissa value, determining, by an overflow counter, an overflow count based on an overflow value by which the result of the addition operation of the shifted first mantissa value and the shifted second mantissa value exceeds the preset bit value of the mantissa bitwidth, performing normalization and rounding based on the value accumulated in the accumulation register and the overflow count, and updating, by an exponent updater, the exponent using a normalized and rounded value.
    Type: Application
    Filed: November 29, 2023
    Publication date: May 30, 2024
    Inventor: Jin-Ho HAN
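
One way to read the accumulation-register/overflow-counter mechanism above: the register keeps only a fixed mantissa bitwidth, and carries past that width are tallied separately for the later normalization step. The sketch below is a simplified integer model under that reading; the bitwidth, names, and the assumption that each addend fits within the register width are mine.

```python
MANT_BITS = 24                      # assumed mantissa bitwidth
MASK = (1 << MANT_BITS) - 1

def mac_accumulate(shifted_mantissa_sums):
    """Accumulate already-aligned (shifted) mantissa sums, keeping only
    MANT_BITS in the accumulation register and counting in 'overflow'
    how often an addition carried past that width."""
    acc, overflow = 0, 0
    for term in shifted_mantissa_sums:
        acc += term
        overflow += acc >> MANT_BITS   # carry past the register width
        acc &= MASK                    # register holds only MANT_BITS
    return acc, overflow
```

Normalization and rounding would then combine `acc` with the overflow count to update the exponent, as the abstract describes.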
  • Publication number: 20240062809
    Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
    Type: Application
    Filed: November 1, 2023
    Publication date: February 22, 2024
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho HAN, Byung-Jo KIM, Ju-Yeob KIM, Hye-Ji KIM, Joo-Hyun LEE, Seong-Min KIM
  • Patent number: 11893392
    Abstract: A method for processing floating point operations in a multi-processor system including a plurality of single processor cores is provided. In this method, upon receiving a group setting for performing an operation, the plurality of single processor cores are grouped into at least one group according to the group setting, and a single processor core set as a master in the group loads an instruction for performing the operation from an external memory, and performs parallel operations by utilizing floating point units (FPUs) of all single processor cores in the group according to the instruction.
    Type: Grant
    Filed: November 30, 2021
    Date of Patent: February 6, 2024
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Ju-Yeob Kim, Jin Ho Han
  • Patent number: 11847465
    Abstract: Disclosed is a parallel processor. The parallel processor includes a processing element array including a plurality of processing elements arranged in rows and columns, a row memory group including row memories corresponding to rows of the processing elements, a column memory group including column memories corresponding to columns of the processing elements, and a controller to generate a first address and a second address, to send the first address to the row memory group, and to send the second address to the column memory group. The controller supports convolution operations having mutually different forms, by changing a scheme of generating the first address.
    Type: Grant
    Filed: November 23, 2021
    Date of Patent: December 19, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Chun-Gi Lyuh, Hyun Mi Kim, Young-Su Kwon, Jin Ho Han
  • Patent number: 11842764
    Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
    Type: Grant
    Filed: December 7, 2021
    Date of Patent: December 12, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho Han, Byung-Jo Kim, Ju-Yeob Kim, Hye-Ji Kim, Joo-Hyun Lee, Seong-Min Kim
  • Patent number: 11782835
    Abstract: Disclosed herein is a heterogeneous system based on unified virtual memory. The heterogeneous system based on unified virtual memory may include a host for compiling a kernel program, which is source code of a user application, in a binary form and delivering the compiled kernel program to a heterogeneous system architecture device; the heterogeneous system architecture device for processing operation of the kernel program delivered from the host in parallel using two or more different types of processing elements; and unified virtual memory shared between the host and the heterogeneous system architecture device.
    Type: Grant
    Filed: November 29, 2021
    Date of Patent: October 10, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Joo-Hyun Lee, Young-Su Kwon, Jin-Ho Han
  • Publication number: 20230259581
    Abstract: Disclosed herein is a method for outer-product-based matrix multiplication for a floating-point data type. The method includes receiving first floating-point data and second floating-point data and performing matrix multiplication on the first floating-point data and the second floating-point data, and the result value of the matrix multiplication is calculated based on the suboperation result values of floating-point units.
    Type: Application
    Filed: February 14, 2023
    Publication date: August 17, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Won JEON, Young-Su KWON, Ju-Yeob KIM, Hyun-Mi KIM, Hye-Ji KIM, Chun-Gi LYUH, Mi-Young LEE, Jae-Hoon CHUNG, Yong-Cheol CHO, Jin-Ho HAN
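
Outer-product matrix multiplication, as named in the entry above, computes C = A x B as a sum of rank-1 updates (column k of A times row k of B) rather than as row-by-column dot products. The sketch below shows that reordering in plain Python; it illustrates the general technique only, not the patented floating-point suboperation hardware.

```python
def outer_product_matmul(A, B):
    """Multiply matrices (lists of rows) by accumulating one rank-1
    outer-product update per inner dimension k."""
    m, k_dim, n = len(A), len(A[0]), len(B[0])
    C = [[0.0] * n for _ in range(m)]
    for k in range(k_dim):            # one outer product per k
        for i in range(m):
            for j in range(n):
                C[i][j] += A[i][k] * B[k][j]   # rank-1 update of C
    return C
```

The result matches the usual inner-product formulation; only the loop order (and thus the accumulation pattern across partial results) differs, which is what makes the suboperation-based accumulation in the abstract natural to express this way.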
  • Patent number: 11695411
    Abstract: Disclosed is a transmitter which includes a channel driver that includes a pull-up transistor and a pull-down transistor connected between a power node and a ground node and outputs a voltage between the pull-up transistor and the pull-down transistor as a transmit signal, and a pre-driver that controls the pull-up transistor and the pull-down transistor in response to a driving signal and controls the channel driver such that the transmit signal is overshot at a rising edge of the driving signal and the transmit signal is undershot at a falling edge of the driving signal.
    Type: Grant
    Filed: September 24, 2021
    Date of Patent: July 4, 2023
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Min-Hyung Cho, Young-deuk Jeon, In San Jeon, Jin Ho Han
  • Publication number: 20230168936
    Abstract: A method and apparatus are provided for allocating tasks by relocating kernel data based on the size of the systolic array included in each of a plurality of processors, and relocating input feature map (IFM) data based on the number of the processors.
    Type: Application
    Filed: November 3, 2022
    Publication date: June 1, 2023
    Inventors: Jaehoon CHUNG, Young-Su KWON, Jin Ho HAN
  • Publication number: 20230147293
    Abstract: A method for ZQ calibration for a data transmission driving circuit of each memory die in a memory chip package in which memory dies are stacked, includes generating a reference current through a reference resistor connected between a power terminal supplying a power voltage of the data transmission driving circuit and a ground terminal and a first transistor that is diode-connected; supplying first currents corresponding to the reference current to a pull-up driver of each memory die; performing ZQ calibration of a pull-up driver of a corresponding memory die by comparing a first voltage formed by each first current with a reference voltage formed by the reference current in each of the memory dies; and performing ZQ calibration of a pull-down driver of the corresponding memory die based on an output impedance of the ZQ calibrated pull-up driver in each of the memory dies.
    Type: Application
    Filed: November 8, 2022
    Publication date: May 11, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Min-Hyung CHO, Young-Deuk JEON, Jin Ho HAN
  • Publication number: 20230115549
    Abstract: Disclosed herein is an apparatus for receiving a strobe signal. The apparatus may include an amplifier for amplifying a strobe signal input thereto, an offset generator for controlling the setting of a threshold for detecting a preamble signal by generating an offset for the amplifier, and a preamble detector for detecting a first preamble signal occurring at a point at which the amplified strobe signal is equal to or greater than the threshold and turning off the offset generator when the first preamble signal is detected.
    Type: Application
    Filed: September 6, 2022
    Publication date: April 13, 2023
    Inventors: Yi-Gyeong KIM, Young-deuk JEON, Young-Su KWON, Jin-Ho HAN
  • Publication number: 20220301603
    Abstract: Provided is a memory interface device comprising: a DQS input buffer configured to receive input data strobe signals and output a first intermediate data strobe signal, the DQS input buffer providing a static offset; an offset control circuit configured to receive the first intermediate data strobe signal and output a second intermediate data strobe signal; and a duty adjustment buffer configured to receive the second intermediate data strobe signal and output a clean data strobe signal, wherein the offset control circuit provides a dynamic offset using the clean data strobe signal.
    Type: Application
    Filed: December 30, 2021
    Publication date: September 22, 2022
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Young-Deuk JEON, Min-Hyung CHO, Young-Su KWON, Jin Ho HAN
  • Publication number: 20220180919
    Abstract: Disclosed herein is an Artificial Intelligence (AI) processor. The AI processor includes multiple NVM AI cores for respectively performing basic unit operations required for a deep-learning operation based on data stored in NVM; SRAM for storing at least some of the results of the basic unit operations; and an AI core for performing an accumulation operation on the results of the basic unit operations.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 9, 2022
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho HAN, Byung-Jo KIM, Ju-Yeob KIM, Hye-Ji KIM, Joo-Hyun LEE, Seong-Min KIM
  • Publication number: 20220180162
    Abstract: Disclosed herein is an AI accelerator. The AI accelerator includes processors, each performing a deep-learning operation using multiple threads; and a cache memory including an L0 instruction cache for providing instructions to the processors and an L1 cache mapped to multiple areas of memory.
    Type: Application
    Filed: December 7, 2021
    Publication date: June 9, 2022
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Jin-Ho HAN, Young-Su KWON, Mi-Young LEE, Joo-Hyun LEE, Yong-Cheol CHO