Patents by Inventor Hoi Jun Yoo

Hoi Jun Yoo has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240362848
    Abstract: Provided is a 3D rendering accelerator based on a DNN whose weights are trained using a plurality of 2D photos obtained by imaging the same object from several directions, and which is then configured to perform 3D rendering using the trained DNN, the 3D rendering accelerator including a VPC configured to create an image plane for a 3D rendering target from a position and a direction of an observer, divide the image plane into a plurality of tile units, and then perform brain-imitation visual recognition on the divided tile-unit images to determine a reduced DNN inference range; an HNE including a plurality of NEs having different operational efficiencies and configured to accelerate DNN inference by dividing and allocating tasks; and a DNNA core configured to generate selection information for allocating each task to one of the plurality of NEs based on a sparsity ratio.
    Type: Application
    Filed: April 8, 2024
    Publication date: October 31, 2024
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong hyeon HAN
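    A hypothetical Python sketch of the sparsity-based task allocation described in the abstract above: each tile-level DNN task is routed to one of several neural engines (NEs) depending on how sparse its activations are. The function names and threshold values are illustrative assumptions, not taken from the patent.

      def sparsity_ratio(activations):
          """Fraction of zero elements in a flat list of activation values."""
          if not activations:
              return 0.0
          return sum(1 for a in activations if a == 0) / len(activations)

      def select_engine(activations, dense_threshold=0.1, sparse_threshold=0.6):
          """Pick an engine label based on the measured sparsity ratio."""
          s = sparsity_ratio(activations)
          if s >= sparse_threshold:
              return "sparse_ne"   # engine optimized for mostly-zero inputs
          if s <= dense_threshold:
              return "dense_ne"    # engine optimized for dense arithmetic
          return "hybrid_ne"       # middle ground

      if __name__ == "__main__":
          tile = [0.0, 0.0, 1.2, 0.0, 0.4, 0.0, 0.0, 0.0]
          print(sparsity_ratio(tile), select_engine(tile))  # 0.75 sparse_ne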
  • Publication number: 20240330664
    Abstract: A signed bit slice generator includes a divider configured to divide input data, which is 2's complement data having N-bit precision (where N is a natural number), by splitting the remaining bits excluding the sign bit of the input data into a predetermined number of bit slices; a sign bit adder configured to add a sign bit to each of the bit slices; a sign value setter configured to set the sign bit of the MSB slice among the bit slices to the sign value of the input data and to set the sign bits of the remaining bit slices to positive sign values; and a sparse data compressor configured to perform sparse data compression on each of the signed bit slices, thereby generating a predetermined number of signed bit slices having the same number of bits, where each bit slice includes a sign bit.
    Type: Application
    Filed: October 18, 2023
    Publication date: October 3, 2024
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Seok IM
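    A minimal Python sketch of the signed bit-slice generation described above, assuming an 8-bit input and 3-bit slices (both illustrative): the sign bit of a 2's-complement word is stripped, the remaining bits are split into equal-width slices, and each slice is prepended with a sign bit, the original sign for the MSB slice and a positive sign for the rest. The sparse-data compression stage is omitted.

      def signed_bit_slices(value, n_bits=8, slice_width=3):
          """Return slices (MSB slice first), each as a (sign_bit, bits) pair."""
          assert -(1 << (n_bits - 1)) <= value < (1 << (n_bits - 1))
          sign_bit = 1 if value < 0 else 0
          # Remaining bits of the 2's-complement representation, excluding the sign bit.
          magnitude = format(value & ((1 << (n_bits - 1)) - 1), f"0{n_bits - 1}b")
          # Pad on the left so the bits split evenly into slice_width-sized groups.
          magnitude = "0" * ((-len(magnitude)) % slice_width) + magnitude
          slices = [magnitude[i:i + slice_width]
                    for i in range(0, len(magnitude), slice_width)]
          # MSB slice carries the input's sign; all lower slices are marked positive.
          return [(sign_bit if i == 0 else 0, s) for i, s in enumerate(slices)]

      if __name__ == "__main__":
          print(signed_bit_slices(-42))  # [(1, '001'), (0, '010'), (0, '110')]
          print(signed_bit_slices(42))   # [(0, '000'), (0, '101'), (0, '010')]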
  • Publication number: 20240320472
    Abstract: A low-power artificial intelligence (AI) processing system combining an artificial neural network (ANN) and a spiking neural network (SNN) includes an ANN including an artificial layer, an SNN configured to produce the same operation result as an artificial layer of the ANN, a main controller configured to calculate an ANN computational cost and an SNN computational cost for each artificial layer, an operation domain selector configured to select the operation domain having the lower computational cost by comparing the ANN computational cost and the SNN computational cost, and an equivalent converter configured to form a combined neural network by converting the artificial layer of the ANN into a spiking layer of the SNN when the operation domain selector selects the SNN operation domain. Therefore, there is no loss of accuracy when compared to the ANN.
    Type: Application
    Filed: March 22, 2024
    Publication date: September 26, 2024
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Seong Yon HONG
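    An illustrative Python sketch of the per-layer operation-domain selection described above: an ANN cost and an SNN cost are estimated for each layer and the cheaper domain is chosen. The cost models (MACs for the ANN, spike-driven accumulates for the SNN) and the spike-rate and timestep parameters are simplifying assumptions, not taken from the patent.

      def ann_cost(n_in, n_out):
          """Dense layer cost: one multiply-accumulate per input-output pair."""
          return n_in * n_out

      def snn_cost(n_in, n_out, spike_rate, timesteps):
          """Spiking cost: accumulates happen only when an input neuron spikes."""
          return n_in * n_out * spike_rate * timesteps

      def select_domains(layers, spike_rate=0.05, timesteps=8):
          """Return 'ANN' or 'SNN' per layer, whichever has the lower estimated cost."""
          choices = []
          for n_in, n_out in layers:
              a = ann_cost(n_in, n_out)
              s = snn_cost(n_in, n_out, spike_rate, timesteps)
              choices.append("SNN" if s < a else "ANN")
          return choices

      if __name__ == "__main__":
          print(select_domains([(784, 256), (256, 256), (256, 10)]))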
  • Patent number: 12079592
    Abstract: A deep neural network accelerator includes a feature loader that stores input features, a weight memory that stores a weight, and a processing element. The processing element applies 1-bit weight values to the input features to generate results according to the 1-bit weight values, receives a target weight corresponding to the input features from the weight memory, and selects a target result corresponding to the received target weight from among the results to generate output features.
    Type: Grant
    Filed: November 20, 2019
    Date of Patent: September 3, 2024
    Assignee: SAMSUNG ELECTRONICS CO., LTD.
    Inventors: Hoi-Jun Yoo, Jin Mook Lee
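    A minimal Python sketch of the compute-both-then-select idea in the abstract above: for a 1-bit weight, the only possible partial results of weight times feature are +feature and -feature, so both are formed up front and the fetched weight bit merely selects one of them. The {+1, -1} weight encoding is an assumption.

      def precompute_candidates(feature):
          """Results of applying each possible 1-bit weight value to the feature."""
          return {+1: feature, -1: -feature}

      def processing_element(features, weight_bits):
          """Accumulate selected candidates; a weight bit of 1 means +1, 0 means -1."""
          acc = 0
          for feature, bit in zip(features, weight_bits):
              candidates = precompute_candidates(feature)
              target_weight = +1 if bit == 1 else -1
              acc += candidates[target_weight]  # selection replaces a multiplication
          return acc

      if __name__ == "__main__":
          print(processing_element([3, -1, 4, 1, -5], [1, 0, 1, 1, 0]))  # 3 + 1 + 4 + 1 + 5 = 14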
  • Publication number: 20240078282
    Abstract: Disclosed is a conjugate gradient acceleration apparatus using band matrix compression in depth fusion technology including a band matrix conversion unit configured to convert an adjacency matrix for correcting depth data acquired from data of an image sensor through deep learning based on depth information acquired from a depth sensor into a band matrix using rows of the adjacency matrix as addresses of query points and columns of the adjacency matrix as the nearest neighbors at the query points, a band matrix compression unit configured to mark an index on each band in order to compress the band matrix and to compress data, a memory unit configured to store tile data of the band matrix, and a band matrix calculation unit configured to perform computation of the band matrix and a transposed band matrix or computation of a symmetric band matrix with respect to the band matrix and a vector.
    Type: Application
    Filed: April 27, 2023
    Publication date: March 7, 2024
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Seok IM
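    A rough NumPy sketch, in the spirit of the band-matrix compression described above, that stores only the non-empty diagonals of a matrix (each band tagged with its offset index) and multiplies the compressed representation with a vector. The diagonal-wise storage format is an illustrative assumption, not the patent's exact tile layout.

      import numpy as np

      def to_bands(dense):
          """Keep only the non-zero diagonals of a square matrix, keyed by offset."""
          n = dense.shape[0]
          bands = {}
          for offset in range(-(n - 1), n):
              diag = np.diag(dense, k=offset)
              if np.any(diag != 0):
                  bands[offset] = diag
          return bands

      def band_matvec(bands, x):
          """y = A @ x computed diagonal by diagonal from the compressed bands."""
          n = x.shape[0]
          y = np.zeros(n)
          for offset, diag in bands.items():
              if offset >= 0:   # superdiagonal: rows 0..n-1-offset, cols offset..n-1
                  y[:n - offset] += diag * x[offset:]
              else:             # subdiagonal: rows -offset..n-1, cols 0..n-1+offset
                  y[-offset:] += diag * x[:n + offset]
          return y

      if __name__ == "__main__":
          A = np.array([[2., 1., 0., 0.],
                        [1., 2., 1., 0.],
                        [0., 1., 2., 1.],
                        [0., 0., 1., 2.]])
          x = np.array([1., 2., 3., 4.])
          print(band_matvec(to_bands(A), x), A @ x)  # both should match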
  • Patent number: 11915141
    Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes an error propagation processing unit configured to, when an error occurs in an N-th layer in response to initiation of training of the deep neural network, determine an error propagation value for an arbitrary layer based on the error occurring in the N-th layer and directly propagate the error propagation value to the arbitrary layer, a weight gradient update processing unit configured to update a forward weight for the arbitrary layer based on a feed-forward value input to the arbitrary layer and the error propagation value in response to the error propagation value, and a feed-forward processing unit configured to, when update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the forward weight.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: February 27, 2024
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun Yoo, Dong Hyeon Han
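    A compact NumPy sketch of the training flow in the abstract above: the output-layer error is projected directly to a hidden layer rather than propagated layer by layer, and that layer's forward weight is updated from its feed-forward input and the propagated error value. Using a fixed random projection matrix, as in direct feedback alignment, is an illustrative assumption.

      import numpy as np

      rng = np.random.default_rng(0)
      W1, W2 = rng.normal(size=(4, 8)), rng.normal(size=(8, 3))  # forward weights
      B1 = rng.normal(size=(3, 8))   # fixed direct error-propagation matrix
      lr = 0.01

      def relu(z):
          return np.maximum(z, 0.0)

      def train_step(x, target):
          global W1, W2
          # Feed-forward pass through both layers.
          h = relu(x @ W1)
          y = h @ W2
          # Error occurring in the last (N-th) layer.
          e = y - target
          # Directly propagate the error to the hidden layer, then update its
          # forward weight from the feed-forward input and the propagated error.
          e_h = (e @ B1) * (h > 0)
          W2 -= lr * np.outer(h, e)
          W1 -= lr * np.outer(x, e_h)
          return float((e ** 2).mean())

      if __name__ == "__main__":
          x, t = rng.normal(size=4), np.array([1.0, 0.0, 0.0])
          for _ in range(5):
              print(train_step(x, t))  # mean squared error per step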
  • Publication number: 20230410864
    Abstract: An MRAM cell includes a switch unit configured to determine opening and closing thereof by a word line voltage and to activate a current path between a bit line and a bit line bar in an opened state thereof, first and second MTJs having opposite states, respectively, and connected in series between the bit line and the bit line bar, to constitute a storage node, and a sensing line configured to be activated in a reading mode of the MRAM cell, thereby creating data reading information based on a voltage between the first and second MTJs, wherein the first and second MTJs have different ones of a low resistance state and a high resistance state, respectively, in accordance with a voltage drop direction between the bit line and the bit line bar, thereby storing data of 0 or 1.
    Type: Application
    Filed: December 23, 2022
    Publication date: December 21, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Wenao Xie
  • Publication number: 20230376756
    Abstract: Disclosed is a 3D point cloud-based deep learning neural network acceleration apparatus including a depth image input unit configured to receive a depth image, a depth data storage unit configured to store depth data derived from the depth image, a sampling unit configured to sample the depth image in units of a sampling window having a predetermined first size, a grouping unit configured to generate a grouping window having a predetermined second size and to group inner 3D point data by grouping window, and a convolution computation unit configured to separate point feature data and group feature data, among channel-direction data of 3D point data constituting the depth image, to perform convolution computation with respect to the point feature data and the group feature data, to sum the results of convolution computation by group grouped by the grouping unit, and to derive the final result.
    Type: Application
    Filed: May 22, 2023
    Publication date: November 23, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Seok IM
  • Publication number: 20230195420
    Abstract: Disclosed herein are a floating-point computation apparatus and method using Computing-in-Memory (CIM).
    Type: Application
    Filed: May 11, 2022
    Publication date: June 22, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Ju Hyoung LEE
  • Publication number: 20230098672
    Abstract: Disclosed is an energy-efficient retraining method of a generative neural network for domain-specific optimization, including (a) retraining, by a mobile device, a pretrained generative neural network model with respect to some data of a new user dataset, (b) comparing, by the mobile device, the pretrained generative neural network model and the retrained generative neural network model, layer by layer, in terms of the relative change rate of weights, (c) selecting, by the mobile device, specific layers having a high relative change rate of weights, among the layers of the pretrained generative neural network model, as the layers to be retrained, and (d) performing, by the mobile device, weight updates for only the layers selected in step (c), wherein only some of all the layers are selected and trained in the retraining process, which requires a large amount of operation, whereby rapid retraining is performed on the mobile device.
    Type: Application
    Filed: January 12, 2022
    Publication date: March 30, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, So Yeon KIM
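    An illustrative NumPy sketch of steps (b) and (c) above: after a brief retraining pass, the relative change rate of each layer's weights is computed against the pretrained model, and only the layers with the highest change rates are selected for the full weight update. The exact change-rate metric and the top-k selection rule are assumptions.

      import numpy as np

      def relative_change_rate(pretrained, retrained):
          """Mean relative weight change per layer, keyed by layer name."""
          rates = {}
          for name, w_old in pretrained.items():
              w_new = retrained[name]
              rates[name] = float(np.abs(w_new - w_old).sum() / (np.abs(w_old).sum() + 1e-12))
          return rates

      def select_layers_to_retrain(rates, k=2):
          """Pick the k layers whose weights moved the most."""
          return sorted(rates, key=rates.get, reverse=True)[:k]

      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pre = {f"layer{i}": rng.normal(size=(16, 16)) for i in range(4)}
          post = {n: w + rng.normal(scale=0.01 * (i + 1), size=w.shape)
                  for i, (n, w) in enumerate(pre.items())}
          print(select_layers_to_retrain(relative_change_rate(pre, post)))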
  • Publication number: 20230072432
    Abstract: Provided is a deep neural network (DNN) learning accelerating apparatus for deep reinforcement learning, the apparatus including: a DNN operation core configured to perform DNN learning for the deep reinforcement learning; and a weight training unit configured to train a weight parameter to accelerate the DNN learning and transmit it to the DNN operation core, the weight training unit including: a neural network weight memory storing the weight parameter; a neural network pruning unit configured to store a sparse weight pattern generated as a result of performing weight pruning based on the weight parameter; and a weight prefetcher configured to select and align only the weight data whose values are not zero from the neural network weight memory, using the sparse weight pattern, and to transmit the non-zero weight data to the DNN operation core.
    Type: Application
    Filed: August 30, 2022
    Publication date: March 9, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Juhyoung LEE
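    A small Python sketch of the weight-prefetch idea in the abstract above: a sparse weight pattern (a zero/non-zero mask produced by pruning) is used to forward only the non-zero weight values, together with their positions, to the compute core. The (index, value) packing is an illustrative assumption.

      def build_sparse_pattern(weights):
          """1 where the pruned weight is non-zero, 0 elsewhere."""
          return [1 if w != 0 else 0 for w in weights]

      def prefetch_nonzero(weights, pattern):
          """Select and align only the non-zero weights as (index, value) pairs."""
          return [(i, w) for i, (w, keep) in enumerate(zip(weights, pattern)) if keep]

      if __name__ == "__main__":
          pruned = [0.0, 0.8, 0.0, 0.0, -0.3, 0.0, 0.5, 0.0]
          pattern = build_sparse_pattern(pruned)
          print(prefetch_nonzero(pruned, pattern))  # [(1, 0.8), (4, -0.3), (6, 0.5)]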
  • Publication number: 20220222523
    Abstract: Disclosed herein are an apparatus and method for training a low-bit-precision deep neural network. The apparatus includes an input unit configured to receive training data to train the deep neural network, and a training unit configured to train the deep neural network using training data, wherein the training unit includes a training module configured to perform training using first precision, a representation form determination module configured to determine a representation form for internal data generated during an operation procedure for the training and determine a position of a decimal point of the internal data so that a permissible overflow bit in a dynamic fixed-point system varies randomly, and a layer-wise precision determination module configured to determine precision of each layer during an operation in each of a feed-forward stage and an error propagation stage and automatically change the precision of a corresponding layer based on the result of determination.
    Type: Application
    Filed: March 19, 2021
    Publication date: July 14, 2022
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Hyeon HAN
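    A hedged NumPy sketch of the dynamic fixed-point idea described above: the binary point of a tensor is placed based on its observed magnitude, with a randomly varying number of permissible overflow bits reserved above it, and the layer's precision is passed in as the total bit width. The exact policy is not spelled out in the abstract; this is one illustrative interpretation.

      import numpy as np

      def dynamic_fixed_point_quantize(x, total_bits=8, rng=np.random.default_rng()):
          """Quantize x to signed fixed point with a randomized overflow margin."""
          # Bits needed for the integer part of the largest magnitude value.
          int_bits = max(0, int(np.ceil(np.log2(np.abs(x).max() + 1e-12))))
          # Randomly reserve 0 or 1 extra bit as the permissible overflow margin.
          overflow_bits = int(rng.integers(0, 2))
          frac_bits = total_bits - 1 - int_bits - overflow_bits  # 1 bit for the sign
          scale = 2.0 ** frac_bits
          qmin, qmax = -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1
          q = np.clip(np.round(x * scale), qmin, qmax)
          return q / scale, frac_bits

      if __name__ == "__main__":
          acts = np.random.default_rng(0).normal(scale=3.0, size=10)
          deq, frac_bits = dynamic_fixed_point_quantize(acts)
          print(frac_bits, np.abs(acts - deq).max())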
  • Publication number: 20220222533
    Abstract: A method of accelerating training of a low-power, high-performance artificial neural network (ANN) includes (a) performing fine-grained pruning and coarse-grained pruning to generate sparsity in weights by a pruning unit in a convolution core of a cluster in a low-power, high-performance ANN trainer; (b) selecting and performing dual zero skipping according to input sparsity, output sparsity, and the sparsity of weights by the convolution core; and (c) restricting access to a weight memory during training by allowing a deep neural network (DNN) computation core and a weight pruning core to share weights retrieved from a memory by the convolution core.
    Type: Application
    Filed: May 12, 2021
    Publication date: July 14, 2022
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Sang Yeob KIM
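    A minimal Python sketch of the dual zero skipping in step (b) above: a multiply-accumulate is issued only when both the input activation and the (possibly pruned) weight are non-zero. The loop-level formulation is a software analogue of the hardware behaviour, written for illustration only.

      def sparse_dot(inputs, weights):
          """Dot product that skips every position where either operand is zero."""
          acc = 0.0
          skipped = 0
          for a, w in zip(inputs, weights):
              if a == 0.0 or w == 0.0:
                  skipped += 1   # zero-skipped: no MAC issued
                  continue
              acc += a * w
          return acc, skipped

      if __name__ == "__main__":
          x = [0.0, 1.5, 0.0, 2.0, 0.0, 1.0]
          w = [0.3, 0.0, 0.7, 0.5, 0.0, 0.2]
          print(sparse_dot(x, w))  # (1.2, 4): two MACs issued, four positions skipped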
  • Publication number: 20220044090
    Abstract: A computing device includes a first computing core that generates sparsity data based on a first sign bit and first exponent bits of first data and a second sign bit and second exponent bits of second data, and a second computing core that outputs a result value of a floating point calculation of the first data and the second data as output data or skips the floating point calculation and outputs the output data having a given value, based on the sparsity data.
    Type: Application
    Filed: October 19, 2020
    Publication date: February 10, 2022
    Applicant: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hoi Jun YOO, Sanghoon KANG
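    A rough Python sketch of the sparsity-gating idea in the abstract above: the sign and exponent fields of two float32 operands are inspected, and if either exponent field is all zeros (the value is zero or denormal) the floating point calculation is skipped and a given value is output instead. Treating denormals as zero is an illustrative assumption.

      import struct

      def sign_and_exponent(x):
          """Return (sign bit, 8-bit exponent field) of a float32 value."""
          bits = struct.unpack("<I", struct.pack("<f", x))[0]
          return bits >> 31, (bits >> 23) & 0xFF

      def gated_multiply(a, b, skipped_value=0.0):
          """Skip the floating-point multiply when either operand is (effectively) zero."""
          _, exp_a = sign_and_exponent(a)
          _, exp_b = sign_and_exponent(b)
          if exp_a == 0 or exp_b == 0:
              return skipped_value   # calculation skipped, given value output
          return a * b

      if __name__ == "__main__":
          print(gated_multiply(1.5, 0.0))   # 0.0, multiply skipped
          print(gated_multiply(1.5, -2.0))  # -3.0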
  • Publication number: 20210056427
    Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes an error propagation processing unit configured to, when an error occurs in an N-th layer in response to initiation of training of the deep neural network, determine an error propagation value for an arbitrary layer based on the error occurring in the N-th layer and directly propagate the error propagation value to the arbitrary layer, a weight gradient update processing unit configured to update a forward weight for the arbitrary layer based on a feed-forward value input to the arbitrary layer and the error propagation value in response to the error propagation value, and a feed-forward processing unit configured to, when update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the forward weight.
    Type: Application
    Filed: August 10, 2020
    Publication date: February 25, 2021
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Hyeon HAN
  • Publication number: 20200160161
    Abstract: A deep neural network accelerator includes a feature loader that stores input features, a weight memory that stores a weight, and a processing element. The processing element applies 1-bit weight values to the input features to generate results according to the 1-bit weight values, receives a target weight corresponding to the input features from the weight memory, and selects a target result corresponding to the received target weight from among the results to generate output features.
    Type: Application
    Filed: November 20, 2019
    Publication date: May 21, 2020
    Applicant: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY
    Inventors: Hoi-Jun YOO, Jin Mook Lee
  • Publication number: 20180004287
    Abstract: A method for providing a user interface through a head mounted display using eye recognition and bio-signals comprises the steps of: (a) when the user gazes at a particular location on an output screen, moving a cursor to the particular location by referencing eye information obtained through a camera module from a first eyeball of the user; and (b) when a certain entity exists at the particular location, providing detailed selection items corresponding to the entity by referencing movement information obtained through a bio-signal acquisition module from the eyelid corresponding to a second eyeball of the user.
    Type: Application
    Filed: December 4, 2015
    Publication date: January 4, 2018
    Applicants: KOREA ADVANCED INSTITUTE OF SCIENCE AND TECHNOLOGY, UX FACTORY CO., LTD.
    Inventors: Hoi Jun YOO, In Joon Hong, Kyeong Ryeol BONG, Jun Young PARK
  • Publication number: 20170095660
    Abstract: An iontophoretic drug delivery apparatus comprises an electrode unit comprising a plurality of iontophoresis electrodes and a plurality of tissue resistivity measurement electrodes; a programmable current unit configured to control a current that is supplied to the iontophoresis electrodes to thereby control the amount of drug delivered; an impedance detection unit comprising a detection mode that selectively measures a load resistance value between the iontophoresis electrodes or a tissue resistivity value between the tissue resistivity measurement electrodes so as to monitor the amount of drug delivered; and a control unit configured to control the programmable current unit by determining the amount of drug delivered or whether the drug is to be delivered, based on the load resistance or tissue resistivity value measured in the impedance detection unit.
    Type: Application
    Filed: April 29, 2014
    Publication date: April 6, 2017
    Applicant: K-HEALTHWEAR CO., Ltd.
    Inventors: Hoi Jun YOO, Ki Seok SONG
  • Publication number: 20160296135
    Abstract: Disclosed is an electrical impedance photographing patch comprising a plurality of electrodes that contact the skin, are supplied with an electrical signal, and are arranged on a flexible substrate, wherein the photographing patch measures the impedance, i.e., an electrical signal, of the skin of a measurement target placed between the plurality of electrodes and transmits the electrical signal to an external device, which restores the electrical signal into a three-dimensional tomographic image and displays the tomographic image.
    Type: Application
    Filed: November 19, 2014
    Publication date: October 13, 2016
    Applicant: K-HEALTHWEAR CO., LTD.
    Inventors: Hoi Jun YOO, Seon Ju HONG
  • Patent number: 9413432
    Abstract: Provided is a near field wireless communication apparatus that uses magnetic coupling and a method for operation of the apparatus, in which the magnetic coupling is used to transmit data or clock information with low power and high efficiency. A pulse generator of the near field wireless communication apparatus generates a pulse signal corresponding to transmission digital data to be transmitted. When the transmission digital data is “1,” the near field wireless communication apparatus modulates the data into the pulse signal and transmits the pulse signal. When the data is “0,” the near field wireless communication apparatus does not output the pulse.
    Type: Grant
    Filed: August 16, 2013
    Date of Patent: August 9, 2016
    Assignees: Samsung Electronics Co., Ltd., Korea Advanced Institute of Science and Technology
    Inventors: Jae-Young Huh, Hoi-Jun Yoo, Dong-Churl Kim, Kyu-Sub Kwak, Jea-Hyuck Lee, Hyun-Woo Cho, Tae-Hwan Roh, Un-Soo Ha
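    A simple Python sketch of the on-off keying described above: a data bit of "1" is modulated into a short pulse and transmitted, while a "0" bit produces no pulse at all. The pulse shape (a fixed-length burst of ones followed by silence within each bit slot) is an illustrative assumption.

      def generate_pulse_train(bits, pulse_samples=4, slot_samples=16):
          """Map each data bit to one time slot; only '1' bits contain a pulse."""
          waveform = []
          for bit in bits:
              pulse = [1] * pulse_samples if bit == 1 else [0] * pulse_samples
              waveform += pulse + [0] * (slot_samples - pulse_samples)
          return waveform

      if __name__ == "__main__":
          print(generate_pulse_train([1, 0, 1, 1], pulse_samples=2, slot_samples=4))
          # [1, 1, 0, 0,  0, 0, 0, 0,  1, 1, 0, 0,  1, 1, 0, 0]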