Patents by Inventor Bongnam Kang

Bongnam Kang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11254331
    Abstract: A method for updating an object detector of an autonomous vehicle to adapt the object detector to a driving circumstance is provided.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: February 22, 2022
    Assignee: STRADVISION, INC.
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
  • Patent number: 11250298
    Abstract: A method for training a perception network includes (a) perceiving first image-level data obtained from a first imaging device through the perception network to generate first prediction results, and training the perception network based on the first prediction results, (b) augmenting the first and second image-level data, respectively obtained from the first and a second imaging device, through a transfer network to generate first and second feature-level data, perceiving the first and the second feature-level data through the perception network to generate second prediction results, and training the transfer network based on the second prediction results, and (c) augmenting the first and the second image-level data through the transfer network to generate third feature-level data, perceiving the third feature-level data through the perception network to generate third prediction results, and retraining the perception network based on the third prediction results.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: February 15, 2022
    Assignee: Stradvision, Inc.
    Inventors: Wooju Ryu, Bongnam Kang
  • Publication number: 20210357763
    Abstract: A method for predicting behavior using explainable self-focused attention is provided. The method includes steps of: a behavior prediction device, (a) inputting test images and sensing information acquired from a moving subject into a metadata recognition module to apply a learning operation to output metadata, and inputting the metadata into a feature encoding module to output features; (b) inputting the test images, the metadata, and the features into an explaining module to generate explanation information on affecting factors affecting behavior predictions, inputting the test images and the metadata into a self-focused attention module to output attention maps, and inputting the features and the attention maps into a behavior prediction module to generate the behavior predictions; and (c) allowing an outputting module to output behavior results and allowing a visualization module to visualize and output the affecting factors by referring to the explanation information and the behavior results.
    Type: Application
    Filed: December 28, 2020
    Publication date: November 18, 2021
    Inventors: Hongmo JE, Dongkyu YU, Bongnam KANG, Yongjoong KIM
  • Publication number: 20210354721
    Abstract: A method for updating an object detector of an autonomous vehicle to adapt the object detector to a driving circumstance is provided.
    Type: Application
    Filed: April 13, 2021
    Publication date: November 18, 2021
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
  • Publication number: 20210334652
    Abstract: A method of on-vehicle active learning for training a perception network of an autonomous vehicle is provided. The method includes steps of: an on-vehicle active learning device, (a) if a driving video and sensing information are acquired from a camera and sensors on an autonomous vehicle, inputting frames of the driving video and the sensing information into a scene code assigning module to generate scene codes including information on scenes in the frames and on driving events; and (b) at least one of selecting a part of the frames, whose object detection information satisfies a condition, as specific frames by using the scene codes and the object detection information and selecting a part of the frames, matching a training policy, as the specific frames by using the scene codes and the object detection information, and storing the specific frames and specific scene codes in a frame storing part.
    Type: Application
    Filed: March 17, 2021
    Publication date: October 28, 2021
    Inventors: Hongmo Je, Bongnam Kang, Yongjoong Kim, Sung An Gweon
  • Patent number: 11157813
    Abstract: A method of on-vehicle active learning for training a perception network of an autonomous vehicle is provided. The method includes steps of: an on-vehicle active learning device, (a) if a driving video and sensing information are acquired from a camera and sensors on an autonomous vehicle, inputting frames of the driving video and the sensing information into a scene code assigning module to generate scene codes including information on scenes in the frames and on driving events; and (b) at least one of selecting a part of the frames, whose object detection information satisfies a condition, as specific frames by using the scene codes and the object detection information and selecting a part of the frames, matching a training policy, as the specific frames by using the scene codes and the object detection information, and storing the specific frames and specific scene codes in a frame storing part.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: October 26, 2021
    Assignee: Stradvision, Inc.
    Inventors: Hongmo Je, Bongnam Kang, Yongjoong Kim, Sung An Gweon
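The frame-selection logic described in the two active-learning entries above can be illustrated with a short sketch. All names, the confidence rule, and the policy format below are invented for illustration; the patent does not disclose this code.

```python
# Illustrative sketch of scene-code-based frame selection: keep a frame when
# its object-detection output satisfies a condition (here: low confidence) or
# when its scene code matches the training policy.
from dataclasses import dataclass, field

@dataclass
class Frame:
    scene_code: str        # e.g. "highway/rain" (format is an assumption)
    min_confidence: float  # weakest detection score in the frame

@dataclass
class FrameStore:
    frames: list = field(default_factory=list)

    def select(self, frames, policy_codes, confidence_threshold=0.5):
        for f in frames:
            uncertain = f.min_confidence < confidence_threshold
            on_policy = f.scene_code in policy_codes
            if uncertain or on_policy:
                self.frames.append(f)  # store the frame with its scene code

store = FrameStore()
store.select(
    [Frame("highway/rain", 0.3), Frame("urban/clear", 0.9)],
    policy_codes={"urban/clear"},
)
print(len(store.frames))  # → 2 (one uncertain frame, one on-policy frame)
```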
  • Patent number: 11113574
    Abstract: A method of self-supervised learning for a detection network using a deep Q-network includes steps of: performing object detection on a first unlabeled image through the detection network trained with a training database to generate first object detection information and performing a learning operation on a first state set corresponding to the first object detection information to generate a Q-value; if an action of the Q-value accepts the first unlabeled image, testing the detection network, retrained with the training database additionally containing a labeled image of the first unlabeled image, to generate a first accuracy, and if the action rejects the first unlabeled image, testing the detection network without retraining to generate a second accuracy; and storing the first state set, the action, a reward of the first or the second accuracy, and a second state set of a second unlabeled image as a transition vector, and training the deep Q-network by using the transition vector.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: September 7, 2021
    Assignee: Stradvision, Inc.
    Inventors: Wooju Ryu, Bongnam Kang, Hongmo Je
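The accept/reject loop in the abstract above can be sketched as follows. The Q-value function here is a random placeholder and the accuracy numbers are made up; only the shape of the transition vector (state, action, reward, next state) follows the abstract.

```python
# Hedged sketch: a Q-value decides whether an unlabeled image is accepted for
# labeling; the detector's test accuracy (with or without retraining) becomes
# the reward stored in the transition vector.
import random

random.seed(0)  # fixed seed so the placeholder Q-values are reproducible

def q_values(state):
    """Placeholder for the deep Q-network's per-action value estimates."""
    return {"accept": random.random(), "reject": random.random()}

def make_transition(state, next_state, acc_if_retrained, acc_if_not):
    q = q_values(state)
    action = max(q, key=q.get)  # greedy action selection
    # Reward: accuracy after retraining if accepted, otherwise accuracy as-is
    reward = acc_if_retrained if action == "accept" else acc_if_not
    return (state, action, reward, next_state)  # the transition vector

replay_buffer = [make_transition("s1", "s2", 0.82, 0.80)]
print(len(replay_buffer[0]))  # → 4
```

The deep Q-network would then be trained on transitions sampled from `replay_buffer`, as in standard experience replay.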
  • Patent number: 11080544
    Abstract: A method for calibrating a pitch of a camera on a vehicle is provided. The method includes steps of: a calibration device (a) inputting driving images from the camera into an object detection network to detect objects and generate object detection information and into a lane detection network to detect lanes and generate lane detection information; (b) profiling the object and the lane detection information to generate object profiling information and lane profiling information, inputting the object profiling information into an object-based pitch estimation module to select a first target object and a second target object to generate a first pitch and a second pitch, and inputting vanishing point detection information and the lane profiling information into a lane-based pitch estimation module to generate a third pitch and a fourth pitch; and (c) inputting the first to the fourth pitches into a pitch-deciding module to generate a decided pitch.
    Type: Grant
    Filed: December 17, 2020
    Date of Patent: August 3, 2021
    Inventors: Yongjoong Kim, Wooju Ryu, Bongnam Kang, Sung An Gweon
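The lane-based part of the abstract above rests on a standard pinhole-camera relation: the vertical offset of the vanishing point from the principal point encodes camera pitch. The sign convention and the median fusion rule below are assumptions for illustration, not the patented pitch-deciding module.

```python
# Minimal sketch: pitch from the vanishing-point row under a pinhole model,
# plus a stand-in "pitch-deciding" step that takes the median of candidates.
import math

def pitch_from_vanishing_point(v_y, c_y, f_y):
    """Pitch (radians) from vanishing-point row v_y, principal-point row c_y,
    and vertical focal length f_y (pixels). Positive = camera tilted up."""
    return math.atan2(c_y - v_y, f_y)

def decide_pitch(pitches):
    """Stand-in for the pitch-deciding module: a simple median."""
    s = sorted(pitches)
    return s[len(s) // 2]

# Vanishing point 20 px above the principal point, f_y = 1000 px:
p = pitch_from_vanishing_point(v_y=480.0, c_y=500.0, f_y=1000.0)
print(round(math.degrees(p), 2))  # → 1.15 (degrees of upward pitch)
```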
  • Patent number: 10970598
    Abstract: A method for training an object detection network by using attention maps is provided. The method includes steps of: (a) an on-device learning device inputting the training images into a feature extraction network, inputting outputs of the feature extraction network into an attention network and a concatenation layer, and inputting outputs of the attention network into the concatenation layer; (b) the on-device learning device inputting outputs of the concatenation layer into an RPN and an ROI pooling layer, inputting outputs of the RPN into a binary converter and the ROI pooling layer, and inputting outputs of the ROI pooling layer into a detection network to output object detection data; and (c) the on-device learning device training at least one of the feature extraction network, the detection network, the RPN, and the attention network through backpropagation using object detection losses, RPN losses, and cross-entropy losses.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 6, 2021
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
  • Patent number: 10970633
    Abstract: A method for optimizing an on-device neural network model by using a Sub-kernel Searching Module is provided. The method includes steps of a learning device: (a) if a Big Neural Network Model, having a capacity capable of performing a targeted task by using the maximal computing power of an edge device, has been trained to generate a first inference result on input data, allowing the Sub-kernel Searching Module to identify a constraint and a state vector corresponding to the training data, to generate architecture information on a specific sub-kernel suitable for performing the targeted task on the training data; (b) optimizing the Big Neural Network Model according to the architecture information to generate a specific Small Neural Network Model for generating a second inference result on the training data; and (c) training the Sub-kernel Searching Module by using the first and the second inference results.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: April 6, 2021
    Assignee: STRADVISION, INC.
    Inventors: Sung An Gweon, Yongjoong Kim, Bongnam Kang, Hongmo Je
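The core idea above, shrinking a big model to a sub-kernel that fits an edge device's budget, can be sketched with an invented candidate table. The candidates, the FLOPs cost model, and the selection rule are all illustrative assumptions, not the patented search.

```python
# Hedged sketch: pick the most accurate sub-network whose compute cost fits
# within the edge device's budget (relative FLOPs and accuracies are made up).
CANDIDATES = [  # (name, relative FLOPs, relative accuracy)
    ("full", 1.00, 0.95),
    ("3/4-width", 0.56, 0.93),
    ("1/2-width", 0.25, 0.90),
    ("1/4-width", 0.06, 0.82),
]

def search_sub_kernel(flops_budget):
    """Return the best-accuracy candidate that fits the FLOPs budget."""
    feasible = [c for c in CANDIDATES if c[1] <= flops_budget]
    return max(feasible, key=lambda c: c[2])

print(search_sub_kernel(0.30)[0])  # → 1/2-width
```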
  • Patent number: 10970645
    Abstract: Processes of explainable active learning, for an object detector, by using a Bayesian dual encoder are provided. The processes include: (a) inputting test images into the object detector to generate cropped images, resizing the test images and the cropped images, and inputting the resized images into a data encoder to output data codes; (b) (b1) one of (i) inputting the test images into the object detector, applying Bayesian output embedding and resizing the activation entropy maps and the cropped activation entropy maps, and (ii) inputting resized object images and applying the Bayesian output embedding and (b2) inputting the resized activation entropy maps into a model encoder to output model codes; and (c) (i) confirming reference data codes, selecting specific test images as rare samples, and updating the data codebook, and (ii) confirming reference model codes and selecting specific test images as hard samples.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: April 6, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Sung An Gweon, Yongjoong Kim, Bongnam Kang
  • Patent number: 10963792
    Abstract: A method for training a deep learning network based on artificial intelligence is provided. The method includes steps of: a learning device (a) inputting unlabeled data into an active learning network to acquire sub unlabeled data and inputting the sub unlabeled data into an auto labeling network to generate new labeled data; (b) allowing a continual learning network to sample the new labeled data and existing labeled data to generate a mini-batch, and train the existing learning network using the mini-batch to acquire a trained learning network, wherein part of the mini-batch is selected by referring to specific existing losses; and (c) (i) allowing an explainable analysis network to generate insightful results on validation data and transmit the insightful results to a human engineer for analysis of the trained learning network and (ii) modifying at least one of the active learning network and the continual learning network.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: March 30, 2021
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Hongmo Je, Bongnam Kang, Wooju Ryu
  • Patent number: 10922788
    Abstract: A method for performing continual learning on a classifier, in a client, capable of classifying images by using a continual learning server is provided. The method includes steps of: a continual learning server (a) inputting first hard images from a first classifier of a client into an Adversarial Autoencoder, to allow an encoder to output latent vectors from the first hard images, allow a decoder to output reconstructed images from the latent vectors, and allow a discriminator and a second classifier to output attribute and classification information to determine second hard images to be stored in a first training data set, and generating augmented images to be stored in a second training data set by adjusting the latent vectors of the reconstructed images determined not as the second hard images; (b) continual learning a third classifier corresponding to the first classifier; and (c) transmitting updated parameters to the client.
    Type: Grant
    Filed: November 3, 2020
    Date of Patent: February 16, 2021
    Assignee: Stradvision, Inc.
    Inventors: Dongkyu Yu, Hongmo Je, Bongnam Kang, Wooju Ryu
  • Patent number: 9208375
    Abstract: The present disclosure relates to a face recognition method, an apparatus, and a computer-readable recording medium for executing the method. According to some aspects of the present disclosure, the face recognition method includes: (a) a key point setting step of setting key points at designated positions on an input face image; (b) a key point descriptor extracting step of extracting each descriptor for each key point; and (c) a matching step of determining whether the input face image matches pre-stored face images using descriptors for key points within a designated region including each descriptor for each first key point obtained from the input face image, and second key points of pre-stored face images which correspond to first key points obtained from the input face image.
    Type: Grant
    Filed: September 27, 2012
    Date of Patent: December 8, 2015
    Assignee: Intel Corporation
    Inventors: Hyungsoo Lee, Hongmo Je, Bongnam Kang
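The matching step in the face-recognition abstract above compares key-point descriptors between the input face and stored faces. The distance metric (L2), the aggregation rule, and the threshold below are illustrative assumptions, not the patented matching criterion.

```python
# Loose sketch: declare a match when the mean distance between corresponding
# key-point descriptors falls below a threshold.
import math

def l2(a, b):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def face_matches(input_descriptors, stored_descriptors, threshold=0.5):
    """Compare descriptors of corresponding key points (illustrative rule)."""
    dists = [l2(a, b) for a, b in zip(input_descriptors, stored_descriptors)]
    return sum(dists) / len(dists) < threshold

probe = [(0.10, 0.20), (0.30, 0.40)]   # descriptors from the input face
stored = [(0.12, 0.22), (0.28, 0.41)]  # descriptors from a stored face
print(face_matches(probe, stored))  # → True
```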
  • Publication number: 20140147023
    Abstract: The present disclosure relates to a face recognition method, an apparatus, and a computer-readable recording medium for executing the method. According to some aspects of the present disclosure, the face recognition method includes: (a) a key point setting step of setting key points at designated positions on an input face image; (b) a key point descriptor extracting step of extracting each descriptor for each key point; and (c) a matching step of determining whether the input face image matches pre-stored face images using descriptors for key points within a designated region including each descriptor for each first key point obtained from the input face image, and second key points of pre-stored face images which correspond to first key points obtained from the input face image.
    Type: Application
    Filed: September 27, 2012
    Publication date: May 29, 2014
    Applicant: Intel Corporation
    Inventors: Hyung Soo Lee, Hongmo Je, Bongnam Kang