Patents by Inventor Dong Hyeon HAN

Dong Hyeon HAN has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240362848
    Abstract: Provided is a 3D rendering accelerator based on a DNN whose weights are trained using a plurality of 2D photos obtained by imaging the same object from several directions, the accelerator then performing 3D rendering using the trained DNN. The 3D rendering accelerator includes a VPC configured to create an image plane for the 3D rendering target from the position and direction of an observer, divide the image plane into a plurality of tiles, and perform brain-imitation visual recognition on the tile-unit images to reduce the DNN inference range; an HNE including a plurality of NEs having different operational efficiencies and configured to accelerate DNN inference by dividing and allocating tasks among them; and a DNNA core configured to generate selection information for allocating each task to one of the plurality of NEs based on a sparsity ratio.
    Type: Application
    Filed: April 8, 2024
    Publication date: October 31, 2024
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong hyeon HAN
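The allocation policy named in the abstract above (a DNNA core generating selection information that routes each task to one of several NEs based on a sparsity ratio) can be sketched as follows. The threshold value, engine names, and routing rule are hypothetical illustrations; the abstract states only that selection is based on a sparsity ratio.

```python
import numpy as np

def sparsity_ratio(tile: np.ndarray) -> float:
    """Fraction of zero elements in a tile's weight/activation data."""
    return float(np.mean(tile == 0))

def select_engine(tile: np.ndarray, threshold: float = 0.5) -> str:
    """Generate selection information for one tile-unit task: route
    highly sparse tiles to a sparsity-optimized engine and dense tiles
    to a dense engine (hypothetical policy and engine names)."""
    return "sparse_NE" if sparsity_ratio(tile) >= threshold else "dense_NE"

# Example: a mostly-zero tile is routed to the sparse engine.
dense_tile = np.ones((8, 8))
sparse_tile = np.zeros((8, 8))
sparse_tile[0, 0] = 1.0
print(select_engine(dense_tile), select_engine(sparse_tile))
```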
  • Patent number: 11915141
    Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes: an error propagation processing unit configured to, when an error occurs in an N-th layer in response to initiation of training, determine an error propagation value for an arbitrary layer based on the error occurring in the N-th layer and directly propagate the error propagation value to the arbitrary layer; a weight gradient update processing unit configured to, in response to the error propagation value, update a forward weight for the arbitrary layer based on the feed-forward value input to that layer and the error propagation value; and a feed-forward processing unit configured to, when the update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the updated forward weight.
    Type: Grant
    Filed: August 10, 2020
    Date of Patent: February 27, 2024
    Assignee: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun Yoo, Dong Hyeon Han
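The training scheme in the abstract above (propagating the output-layer error directly to an arbitrary layer, updating that layer's forward weight, then running feed-forward with the updated weight) resembles direct feedback alignment. Below is a minimal NumPy sketch under that assumption; the toy 3-layer ReLU network, the fixed random direct-propagation matrices B, and the learning rate are illustrative choices, not details from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: input -> two hidden layers -> output (the "N-th" layer).
dims = [8, 16, 16, 4]
W = [rng.normal(0.0, 0.1, (dims[i + 1], dims[i])) for i in range(3)]
# Fixed matrices that carry the output error DIRECTLY to each hidden
# layer, instead of propagating it backward layer by layer.
B = [rng.normal(0.0, 0.1, (dims[i + 1], dims[-1])) for i in range(2)]

def train_step(x, target, lr=0.05):
    # Feed-forward, keeping each layer's input for the weight update.
    a = [x]
    for i in range(3):
        z = W[i] @ a[-1]
        a.append(np.maximum(z, 0.0) if i < 2 else z)
    err = a[-1] - target  # error occurring in the N-th (output) layer
    # Direct error propagation and weight gradient update per layer.
    for i in range(2):
        delta = (B[i] @ err) * (a[i + 1] > 0)  # error propagation value
        W[i] -= lr * np.outer(delta, a[i])
    W[2] -= lr * np.outer(err, a[2])
    return float(np.sum(err ** 2))

x = rng.normal(size=8)
t = rng.normal(size=4)
losses = [train_step(x, t) for _ in range(500)]
print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Because each hidden layer receives the output error through its own fixed matrix, no layer waits on the layer above it for a backward pass, which is what makes the direct propagation attractive for hardware.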
  • Publication number: 20220222523
    Abstract: Disclosed herein are an apparatus and method for training a low-bit-precision deep neural network. The apparatus includes an input unit configured to receive training data, and a training unit configured to train the deep neural network using that data. The training unit includes: a training module configured to perform training using a first precision; a representation form determination module configured to determine a representation form for internal data generated during the training operation and to set the position of the decimal point of the internal data so that the permissible overflow bit in a dynamic fixed-point system varies randomly; and a layer-wise precision determination module configured to determine the precision of each layer during both the feed-forward stage and the error propagation stage and to automatically change the precision of the corresponding layer based on the result of that determination.
    Type: Application
    Filed: March 19, 2021
    Publication date: July 14, 2022
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Hyeon HAN
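The dynamic fixed-point representation described above (a shared decimal-point position chosen per tensor, with a randomly varying permissible overflow bit) can be sketched as follows. The bit widths, the headroom rule, and the function name are illustrative assumptions, not the patented method.

```python
import numpy as np

def dynamic_fixed_point(x, total_bits=8, overflow_bits=0):
    """Quantize a tensor to a dynamic fixed-point representation.
    The decimal-point position is derived from the tensor's maximum
    magnitude; `overflow_bits` leaves extra integer headroom (the
    abstract varies this 'permissible overflow bit' randomly)."""
    max_mag = float(np.max(np.abs(x))) + 1e-12
    int_bits = int(np.ceil(np.log2(max_mag))) + overflow_bits
    frac_bits = total_bits - 1 - int_bits  # 1 bit reserved for sign
    scale = 2.0 ** frac_bits
    q = np.clip(np.round(x * scale),
                -(2 ** (total_bits - 1)), 2 ** (total_bits - 1) - 1)
    return q / scale

rng = np.random.default_rng(42)
x = rng.normal(0.0, 0.25, size=64)
# Randomly vary the permissible overflow bit, as the abstract describes.
xq = dynamic_fixed_point(x, total_bits=8,
                         overflow_bits=int(rng.integers(0, 2)))
print("max quantization error:", float(np.max(np.abs(x - xq))))
```

Moving the decimal point per tensor keeps 8 bits usable for whichever magnitude range the data actually occupies, which is the point of a dynamic (rather than static) fixed-point format.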
  • Publication number: 20210056427
    Abstract: Disclosed herein are an apparatus and method for training a deep neural network. An apparatus for training a deep neural network including N layers, each having multiple neurons, includes: an error propagation processing unit configured to, when an error occurs in an N-th layer in response to initiation of training, determine an error propagation value for an arbitrary layer based on the error occurring in the N-th layer and directly propagate the error propagation value to the arbitrary layer; a weight gradient update processing unit configured to, in response to the error propagation value, update a forward weight for the arbitrary layer based on the feed-forward value input to that layer and the error propagation value; and a feed-forward processing unit configured to, when the update of the forward weight is completed, perform a feed-forward operation in the arbitrary layer using the updated forward weight.
    Type: Application
    Filed: August 10, 2020
    Publication date: February 25, 2021
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Hoi Jun YOO, Dong Hyeon HAN