Patents by Inventor Hongmo Je

Hongmo Je has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11461653
    Abstract: A method for learning parameters of a CNN using a 1×K convolution operation or a K×1 convolution operation is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a reshaping layer to two-dimensionally concatenate features in each group comprised of corresponding K channels of a training image or its processed feature map, to thereby generate a reshaped feature map, and instructing a subsequent convolutional layer to apply the 1×K or the K×1 convolution operation to the reshaped feature map, to thereby generate an adjusted feature map; and (b) instructing an output layer to refer to features on the adjusted feature map or its processed feature map, and instructing a loss layer to calculate losses by referring to an output from the output layer and its corresponding GT.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: October 4, 2022
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
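As a rough, non-authoritative illustration of the abstract above, the PyTorch sketch below shows one plausible way to stack each group of K channels along the height axis and mix them with a single K×1 convolution of stride K; the module name `ReshapeKx1Conv`, the layer sizes, and the exact reshaping order are assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class ReshapeKx1Conv(nn.Module):
    """Hypothetical sketch: stack each group of K channels along the height
    axis, then mix them with a single Kx1 convolution of stride K."""
    def __init__(self, in_channels, out_channels, K):
        super().__init__()
        assert in_channels % K == 0, "channel count must be divisible by K"
        self.K = K
        self.conv = nn.Conv2d(in_channels // K, out_channels,
                              kernel_size=(K, 1), stride=(K, 1))

    def forward(self, x):                       # x: (N, C, H, W)
        n, c, h, w = x.shape
        k = self.K
        x = x.view(n, c // k, k, h, w)          # group channels in sets of K
        x = x.permute(0, 1, 3, 2, 4)            # (N, C//K, H, K, W)
        x = x.reshape(n, c // k, h * k, w)      # the "reshaped feature map"
        return self.conv(x)                     # (N, out_channels, H, W)

# Toy usage with made-up sizes.
out = ReshapeKx1Conv(in_channels=8, out_channels=16, K=4)(torch.randn(2, 8, 32, 32))
print(out.shape)                                # torch.Size([2, 16, 32, 32])
```

A 1×K variant would stack the K channels along the width axis instead and use a 1×K kernel with stride (1, K).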
  • Patent number: 11315021
    Abstract: A method for on-device continual learning of a neural network which analyzes input data is provided to be used for smartphones, drones, vessels, or a military purpose. The method includes steps of: a learning device, (a) sampling new data to have a preset first volume, instructing an original data generator network, which has been learned, to repeat outputting synthetic previous data corresponding to a k-dimension random vector and previous data having been used for learning the original data generator network, such that the synthetic previous data has a second volume, and generating a batch for a current-learning; and (b) instructing the neural network to generate output information corresponding to the batch. The method can be performed by generative adversarial networks (GANs), online learning, and the like. Also, the present disclosure has effects of saving resources such as storage, preventing catastrophic forgetting, and securing privacy.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: April 26, 2022
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
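A minimal sketch of the batch-building step described in the abstract above, assuming tensor-shaped data and a frozen, already-trained generator; `build_replay_batch`, the inner batch size of 32, and the volume arguments are illustrative placeholders rather than the patented procedure.

```python
import torch

def build_replay_batch(new_data, generator, first_volume, second_volume, k_dim):
    """Hypothetical sketch of generative replay: mix freshly sampled new data with
    synthetic 'previous' data produced by a frozen, already-trained generator."""
    # Sample the new data down to the preset first volume.
    keep = torch.randperm(len(new_data))[:first_volume]
    sampled_new = new_data[keep]

    # Repeatedly decode k-dimensional random vectors into synthetic previous data
    # until the preset second volume is reached.
    synthetic, total = [], 0
    with torch.no_grad():
        while total < second_volume:
            z = torch.randn(32, k_dim)              # k-dimension random vectors
            batch = generator(z)
            synthetic.append(batch)
            total += batch.shape[0]
    synthetic = torch.cat(synthetic)[:second_volume]

    # The batch for the current learning step combines both sources.
    return torch.cat([sampled_new, synthetic])
```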
  • Patent number: 11254331
    Abstract: A method for updating an object detector of an autonomous vehicle to adapt the object detector to a driving circumstance is provided.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: February 22, 2022
    Assignee: STRADVISION, INC.
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
  • Patent number: 11203361
    Abstract: A method for performing on-device learning of an embedded machine learning network of an autonomous vehicle by using multi-stage learning with adaptive hyper-parameter sets is provided. The processes include: (a) dividing the current learning into a 1-st stage learning to an n-th stage learning, assigning 1-st stage training data to n-th stage training data, generating a 1_1-st hyper-parameter set candidate to a 1_h-th hyper-parameter set candidate, training the embedded machine learning network in the 1-st stage learning, and determining a 1-st adaptive hyper-parameter set; (b) generating a k_1-st hyper-parameter set candidate to a k_h-th hyper-parameter set candidate, training the (k-1)-th stage-completed machine learning network in the k-th stage learning, and determining a k-th adaptive hyper-parameter set; and (c) generating an n-th adaptive hyper-parameter set, and executing the n-th stage learning, to thereby complete the current learning.
    Type: Grant
    Filed: April 13, 2021
    Date of Patent: December 21, 2021
    Assignee: Stradvision, Inc.
    Inventors: Hongmo Je, Yongjoong Kim, Dongkyu Yu, Sung An Gweon
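The control flow in the abstract above roughly amounts to a per-stage hyper-parameter search; a hedged sketch follows, in which `train_fn`, `eval_fn`, and `sample_hparams` are hypothetical callables standing in for the unspecified training, scoring, and candidate-generation steps.

```python
import copy

def multi_stage_learning(model, stage_data, n_stages, h_candidates,
                         train_fn, eval_fn, sample_hparams):
    """Hypothetical sketch of multi-stage learning with adaptive hyper-parameter
    sets; the helper callables are placeholders, not the patented procedure."""
    for k in range(n_stages):
        # Generate the k_1-st to k_h-th hyper-parameter set candidates.
        candidates = [sample_hparams(stage=k) for _ in range(h_candidates)]
        best_model, best_score, best_hparams = None, float("-inf"), None
        for hp in candidates:
            trial = copy.deepcopy(model)            # start from (k-1)-th stage weights
            train_fn(trial, stage_data[k], hp)      # k-th stage learning
            score = eval_fn(trial, stage_data[k])
            if score > best_score:
                best_model, best_score, best_hparams = trial, score, hp
        model = best_model                          # keep the k-th adaptive set's result
        print(f"stage {k}: adaptive hyper-parameter set = {best_hparams}")
    return model
```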
  • Publication number: 20210354721
    Abstract: A method for updating an object detector of an autonomous vehicle to adapt the object detector to a driving circumstance is provided.
    Type: Application
    Filed: April 13, 2021
    Publication date: November 18, 2021
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
  • Publication number: 20210357763
    Abstract: A method for predicting behavior using explainable self-focused attention is provided. The method includes steps of: a behavior prediction device, (a) inputting test images and the sensing information acquired from a moving subject into a metadata recognition module to apply learning operation to output metadata, and inputting the metadata into a feature encoding module to output features; (b) inputting the test images, the metadata, and the features into an explaining module to generate explanation information on affecting factors affecting behavior predictions, inputting the test images and the metadata into a self-focused attention module to output attention maps, and inputting the features and the attention maps into a behavior prediction module to generate the behavior predictions; and (c) allowing an outputting module to output behavior results and allowing a visualization module to visualize and output the affecting factors by referring to the explanation information and the behavior results.
    Type: Application
    Filed: December 28, 2020
    Publication date: November 18, 2021
    Inventors: Hongmo Je, Dongkyu Yu, Bongnam Kang, Yongjoong Kim
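Purely as a dataflow illustration of the module wiring in the abstract above, the sketch below chains placeholder callables; every key in the `modules` dictionary is a made-up stand-in, not an API from the publication.

```python
def predict_behavior(images, sensing_info, modules):
    """Hypothetical sketch of the module wiring in the abstract above; every key
    in 'modules' is a made-up name for a placeholder callable."""
    metadata = modules["metadata_recognition"](images, sensing_info)
    features = modules["feature_encoding"](metadata)

    explanation = modules["explaining"](images, metadata, features)   # affecting factors
    attention_maps = modules["self_focused_attention"](images, metadata)
    predictions = modules["behavior_prediction"](features, attention_maps)

    results = modules["outputting"](predictions)                      # behavior results
    modules["visualization"](explanation, results)                    # visualize factors
    return results
```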
  • Publication number: 20210347379
    Abstract: A method for performing on-device learning of an embedded machine learning network of an autonomous vehicle by using multi-stage learning with adaptive hyper-parameter sets is provided. The processes include: (a) dividing the current learning into a 1-st stage learning to an n-th stage learning, assigning 1-st stage training data to n-th stage training data, generating a 1_1-st hyper-parameter set candidate to a 1_h-th hyper-parameter set candidate, training the embedded machine learning network in the 1-st stage learning, and determining a 1-st adaptive hyper-parameter set; (b) generating a k_1-st hyper-parameter set candidate to a k_h-th hyper-parameter set candidate, training the (k-1)-th stage-completed machine learning network in the k-th stage learning, and determining a k-th adaptive hyper-parameter set; and (c) generating an n-th adaptive hyper-parameter set, and executing the n-th stage learning, to thereby complete the current learning.
    Type: Application
    Filed: April 13, 2021
    Publication date: November 11, 2021
    Inventors: Hongmo Je, Yongjoong Kim, Dongkyu Yu, Sung An Gweon
  • Publication number: 20210334652
    Abstract: A method of on-vehicle active learning for training a perception network of an autonomous vehicle is provided. The method includes steps of: an on-vehicle active learning device, (a) if a driving video and sensing information are acquired from a camera and sensors on an autonomous vehicle, inputting frames of the driving video and the sensing information into a scene code assigning module to generate scene codes including information on scenes in the frames and on driving events; and (b) at least one of selecting a part of the frames, whose object detection information satisfies a condition, as specific frames by using the scene codes and the object detection information and selecting a part of the frames, matching a training policy, as the specific frames by using the scene codes and the object detection information, and storing the specific frames and specific scene codes in a frame storing part.
    Type: Application
    Filed: March 17, 2021
    Publication date: October 28, 2021
    Inventors: Hongmo Je, Bongnam Kang, Yongjoong Kim, Sung An Gweon
  • Patent number: 11157813
    Abstract: A method of on-vehicle active learning for training a perception network of an autonomous vehicle is provided. The method includes steps of: an on-vehicle active learning device, (a) if a driving video and sensing information are acquired from a camera and sensors on an autonomous vehicle, inputting frames of the driving video and the sensing information into a scene code assigning module to generate scene codes including information on scenes in the frames and on driving events; and (b) at least one of selecting a part of the frames, whose object detection information satisfies a condition, as specific frames by using the scene codes and the object detection information and selecting a part of the frames, matching a training policy, as the specific frames by using the scene codes and the object detection information, and storing the specific frames and specific scene codes in a frame storing part.
    Type: Grant
    Filed: March 17, 2021
    Date of Patent: October 26, 2021
    Assignee: Stradvision, Inc.
    Inventors: Hongmo Je, Bongnam Kang, Yongjoong Kim, Sung An Gweon
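The two entries above (publication 20210334652 and patent 11157813) share the same scene-code-driven frame selection; the sketch below is a hedged illustration of that selection loop, with `scene_coder`, `condition`, and `policy` as hypothetical placeholders.

```python
def select_training_frames(frames, sensing_info, scene_coder, detector,
                           condition, policy, frame_store):
    """Hypothetical sketch of scene-code-driven frame selection for on-vehicle
    active learning; condition and policy are placeholder predicates."""
    for frame, sense in zip(frames, sensing_info):
        scene_code = scene_coder(frame, sense)        # scene + driving-event info
        detections = detector(frame)
        # Keep frames whose detections satisfy the condition, or which match the
        # current training policy, together with their specific scene codes.
        if condition(scene_code, detections) or policy(scene_code, detections):
            frame_store.append((frame, scene_code))
```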
  • Patent number: 11132607
    Abstract: A method for explainable active learning, to be used for an object detector, by using a deep autoencoder is provided. The method includes steps of an active learning device (a) (i) inputting acquired test images into the object detector to detect objects and output bounding boxes, (ii) cropping regions, corresponding to the bounding boxes, in the test images, (iii) resizing the test images and the cropped images into a same size, and (iv) inputting the resized images into a data encoder of the deep autoencoder to output data codes, and (b) (i) confirming reference data codes corresponding to the number of the resized images less than a counter threshold by referring to a data codebook, (ii) extracting specific data codes from the data codes, (iii) selecting specific test images as rare samples, and (iv) updating the data codebook by referring to the specific data codes.
    Type: Grant
    Filed: March 24, 2021
    Date of Patent: September 28, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Hongmo Je, Yongjoong Kim, Wooju Ryu
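A hedged sketch of the rare-sample selection described above, assuming the data encoder returns hashable codes and treating the data codebook as a simple code-to-count dictionary; the names and threshold logic are illustrative, not the patent's exact procedure.

```python
def pick_rare_samples(test_images, detector, crop, resize, data_encoder,
                      codebook, counter_threshold):
    """Hypothetical sketch: test images whose appearance codes occur fewer times
    than the counter threshold in the codebook are kept as rare samples.
    data_encoder is assumed to return hashable codes; codebook maps code -> count."""
    rare_samples = []
    for image in test_images:
        boxes = detector(image)                               # bounding boxes
        crops = [resize(crop(image, box)) for box in boxes] + [resize(image)]
        codes = [data_encoder(c) for c in crops]              # data codes
        if any(codebook.get(code, 0) < counter_threshold for code in codes):
            rare_samples.append(image)                        # rarely-seen appearance
        for code in codes:                                    # update the data codebook
            codebook[code] = codebook.get(code, 0) + 1
    return rare_samples
```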
  • Patent number: 11113574
    Abstract: A method of self-supervised learning for a detection network using a deep Q-network includes steps of: performing object detection on a first unlabeled image through the detection network trained with a training database to generate first object detection information and performing a learning operation on a first state set corresponding to the first object detection information to generate a Q-value, if an action of the Q-value accepts the first unlabeled image, testing the detection network, retrained with the training database additionally containing a labeled image of the first unlabeled image, to generate a first accuracy, and if the action rejects the first unlabeled image, testing the detection network without retraining, to generate a second accuracy, and storing the first state set, the action, a reward of the first or the second accuracy, and a second state set of a second unlabeled image as a transition vector, and training the deep Q-network by using the transition vector.
    Type: Grant
    Filed: March 26, 2021
    Date of Patent: September 7, 2021
    Assignee: Stradvision, Inc.
    Inventors: Wooju Ryu, Bongnam Kang, Hongmo Je
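One way to picture the accept/reject loop above is the transition-collection sketch below; `state_fn`, `label_fn`, `retrain_and_eval`, and `eval_only` are hypothetical helpers, the Q-network is assumed to return an array-like of two Q-values, and the reward simply reuses the accuracy described in the abstract.

```python
def collect_transition(detector, dqn, state_fn, unlabeled, next_unlabeled,
                       label_fn, retrain_and_eval, eval_only, replay_buffer):
    """Hypothetical sketch of the accept/reject step: a deep Q-network decides
    whether labeling an image justifies a retraining pass. All helpers are
    placeholders; dqn is assumed to return an array-like of two Q-values."""
    state = state_fn(detector(unlabeled))              # first state set
    action = int(dqn(state).argmax())                  # 0 = reject, 1 = accept

    if action == 1:
        # Accept: label the image, retrain with it, and use the resulting accuracy.
        reward = retrain_and_eval(detector, label_fn(unlabeled))
    else:
        # Reject: evaluate the detector without retraining.
        reward = eval_only(detector)

    next_state = state_fn(detector(next_unlabeled))    # second state set
    replay_buffer.append((state, action, reward, next_state))   # transition vector
```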
  • Patent number: 11087175
    Abstract: A method for learning a recurrent neural network to check autonomous driving safety to be used for switching a driving mode of an autonomous vehicle is provided. The method includes steps of: a learning device (a) if training images corresponding to the front and rear cameras of the autonomous vehicle are acquired, inputting each pair of the training images into corresponding CNNs, to concatenate the training images and generate feature maps for training, (b) inputting the feature maps for training into long short-term memory models corresponding to sequences of a forward RNN, and into those corresponding to the sequences of a backward RNN, to generate updated feature maps for training and inputting feature vectors for training into an attention layer, to generate an autonomous-driving mode value for training, and (c) allowing a loss layer to calculate losses and to learn the long short-term memory models.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 10, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
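As a hedged PyTorch sketch of the architecture outlined above (per-frame front/rear CNN features, a bidirectional LSTM, and an attention layer producing a single driving-mode value), with all dimensions invented for illustration:

```python
import torch
import torch.nn as nn

class DrivingModeChecker(nn.Module):
    """Hypothetical sketch: per-frame CNN features from the front and rear cameras
    are concatenated, passed through a bidirectional LSTM, and pooled by a simple
    attention layer into a single driving-mode score. All sizes are made up."""
    def __init__(self, feat_dim=256, hidden=128):
        super().__init__()
        self.rnn = nn.LSTM(2 * feat_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, front_feats, rear_feats):      # each: (N, T, feat_dim)
        x = torch.cat([front_feats, rear_feats], dim=-1)
        h, _ = self.rnn(x)                           # forward + backward sequences
        w = torch.softmax(self.attn(h), dim=1)       # attention over time steps
        pooled = (w * h).sum(dim=1)
        return torch.sigmoid(self.head(pooled))      # autonomous-driving mode value
```

Patent 11042780 later in this listing carries the same abstract, so the sketch is not repeated there.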
  • Patent number: 11074480
    Abstract: A learning method for acquiring at least one personalized reward function, used for performing a Reinforcement Learning (RL) algorithm, corresponding to a personalized optimal policy for a subject driver is provided. The method includes steps of: (a) a learning device performing a process of instructing an adjustment reward network to generate first adjustment rewards, by referring to the information on actual actions and actual circumstance vectors in driving trajectories, a process of instructing a common reward module to generate first common rewards by referring to the actual actions and the actual circumstance vectors, and a process of instructing an estimation network to generate actual prospective values by referring to the actual circumstance vectors; and (b) the learning device instructing a first loss layer to generate an adjustment reward and to perform backpropagation to learn parameters of the adjustment reward network.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: July 27, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
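The abstract above leaves the exact loss unspecified; the sketch below shows one hedged, Bellman-style reading in which the personalized reward is the common reward plus a learned adjustment, checked against prospective values from the estimation network. The function names and the squared-error form are assumptions, not the patent's loss.

```python
def adjustment_reward_loss(trajectories, adjust_net, common_reward, estimate_net):
    """Hypothetical, Bellman-style reading only: personalized reward = common
    reward + learned adjustment, checked against the estimation network's
    prospective values. The squared-error form is an assumption."""
    loss = 0.0
    for circumstance, action, next_circumstance in trajectories:
        r_adjust = adjust_net(circumstance, action)        # adjustment reward
        r_common = common_reward(circumstance, action)     # common reward
        v_now = estimate_net(circumstance)                 # actual prospective value
        v_next = estimate_net(next_circumstance)
        loss = loss + (r_common + r_adjust + v_next - v_now) ** 2
    return loss
```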
  • Patent number: 11042780
    Abstract: A method for learning a recurrent neural network to check an autonomous driving safety to be used for switching a driving mode of an autonomous vehicle is provided. The method includes steps of: a learning device (a) if training images corresponding to a front and a rear cameras of the autonomous vehicle are acquired, inputting each pair of the training images into corresponding CNNs, to concatenate the training images and generate feature maps for training, (b) inputting the feature maps for training into long short-term memory models corresponding to sequences of a forward RNN, and into those corresponding to the sequences of a backward RNN, to generate updated feature maps for training and inputting feature vectors for training into an attention layer, to generate an autonomous-driving mode value for training, and (c) allowing a loss layer to calculate losses and to learn the long short-term memory models.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: June 22, 2021
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 11017673
    Abstract: A method for generating a lane departure warning (LDW) alarm by referring to information on a driving situation is provided to be used for ADAS, V2X or driver safety which are required to satisfy level 4 and level 5 of autonomous vehicles. The method includes steps of: a computing device instructing a LDW system (i) to collect information on the driving situation including information on whether a specific spot corresponding to a side mirror on a side of a lane, into which the driver desires to change, belongs to a virtual viewing frustum of the driver and (ii) to generate risk information on lane change by referring to the information on the driving situation; and instructing the LDW system to generate the LDW alarm by referring to the risk information. Thus, the LDW alarm can be provided to neighboring autonomous vehicles of level 4 and level 5.
    Type: Grant
    Filed: December 26, 2019
    Date of Patent: May 25, 2021
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
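A toy sketch of the decision described above, assuming a `viewing_frustum` object with a `contains()` test and an entirely made-up risk score; it only illustrates the visibility-then-risk ordering, not the patented logic.

```python
def lane_departure_warning(mirror_spot, viewing_frustum, extra_risk=0.0,
                           risk_threshold=0.5):
    """Toy sketch: the lane change is riskier when the side-mirror spot on the
    target lane's side lies outside the driver's virtual viewing frustum."""
    visible = viewing_frustum.contains(mirror_spot)   # part of the driving situation
    risk = (0.0 if visible else 0.7) + extra_risk     # made-up risk scoring
    return risk >= risk_threshold                     # True -> emit the LDW alarm
```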
  • Patent number: 11010668
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores representing a credibility of an object detection which is generated in parallel with an object detection process is provided.
    Type: Grant
    Filed: January 9, 2020
    Date of Patent: May 18, 2021
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10984262
    Abstract: A learning method of a CNN (Convolutional Neural Network) for monitoring one or more blind spots of a monitoring vehicle is provided. The learning method includes steps of: a learning device, if training data corresponding to output from a detector on the monitoring vehicle is inputted, instructing a cue information extracting layer to use class information and location information on a monitored vehicle included in the training data, thereby outputting cue information on the monitored vehicle; instructing an FC layer for monitoring the blind spots to perform neural network operations by using the cue information, thereby outputting a result of determining whether the monitored vehicle is located on one of the blind spots; and instructing a loss layer to generate loss values by referring to the result and its corresponding GT, thereby learning parameters of the FC layer for monitoring the blind spots by backpropagating the loss values.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: April 20, 2021
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
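A hedged sketch of the blind-spot head described above: cue features built from the detector's class and box outputs feed a small fully connected network trained with a binary loss against the GT. The cue dimension, hidden size, and helper names are invented for illustration.

```python
import torch.nn as nn

class BlindSpotFC(nn.Module):
    """Hypothetical sketch: cue features derived from a detector's class and box
    outputs feed a small FC head deciding whether the monitored vehicle sits in
    a blind spot. The cue dimension and hidden size are made up."""
    def __init__(self, cue_dim=16):
        super().__init__()
        self.fc = nn.Sequential(nn.Linear(cue_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, cues):                          # cues: (N, cue_dim)
        return self.fc(cues)                          # blind-spot logit

def training_step(model, cues, gt, optimizer):
    """One backpropagation step against the GT labels (1 = in blind spot)."""
    loss = nn.functional.binary_cross_entropy_with_logits(model(cues), gt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```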
  • Patent number: 10970633
    Abstract: A method for optimizing an on-device neural network model by using a Sub-kernel Searching Module is provided. The method includes steps of a learning device (a) if a Big Neural Network Model having a capacity capable of performing a targeted task by using a maximal computing power of an edge device has been trained to generate a first inference result on input data, allowing the Sub-kernel Searching Module to identify a constraint and a state vector corresponding to the training data, to generate architecture information on a specific sub-kernel suitable for performing the targeted task on the training data, (b) optimizing the Big Neural Network Model according to the architecture information to generate a specific Small Neural Network Model for generating a second inference result on the training data, and (c) training the Sub-kernel Searching Module by using the first and the second inference results.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: April 6, 2021
    Assignee: STRADVISION, INC.
    Inventors: Sung An Gweon, Yongjoong Kim, Bongnam Kang, Hongmo Je
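The training loop in the abstract above can be pictured roughly as below; `shrink` and `train_searcher` are hypothetical placeholders for the unspecified model-slimming and searcher-update steps.

```python
def optimize_for_edge(big_model, searcher, shrink, train_searcher, sample, constraint):
    """Hypothetical sketch of the sub-kernel search loop; shrink() stands in for
    whatever pruning/slicing turns architecture information into a small model."""
    first_result = big_model(sample)               # full-capacity inference
    arch_info = searcher(constraint, sample)       # constraint + state vector -> architecture
    small_model = shrink(big_model, arch_info)     # specific Small Neural Network Model
    second_result = small_model(sample)
    # Train the searcher so the small model's output tracks the big model's.
    train_searcher(searcher, first_result, second_result)
    return small_model
```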
  • Patent number: 10970598
    Abstract: A method for training an object detection network by using attention maps is provided. The method includes steps of: (a) an on-device learning device inputting training images into a feature extraction network, inputting outputs of the feature extraction network into an attention network and a concatenation layer, and inputting outputs of the attention network into the concatenation layer; (b) the on-device learning device inputting outputs of the concatenation layer into an RPN and an ROI pooling layer, inputting outputs of the RPN into a binary convertor and the ROI pooling layer, and inputting outputs of the ROI pooling layer into a detection network to output object detection data; and (c) the on-device learning device training at least one of the feature extraction network, the detection network, the RPN, and the attention network through backpropagation using object detection losses, RPN losses, and cross-entropy losses.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: April 6, 2021
    Inventors: Wooju Ryu, Hongmo Je, Bongnam Kang, Yongjoong Kim
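A minimal sketch of the forward wiring in steps (a) and (b) above, assuming PyTorch tensors and placeholder networks; the binary converter and the three-loss backpropagation of step (c) are omitted.

```python
import torch

def forward_with_attention(image, feature_net, attention_net, rpn, roi_pool, det_head):
    """Hypothetical sketch of steps (a)-(b): attention maps are concatenated with
    the backbone features before region proposal and detection. All networks are
    placeholder callables; the binary converter and losses of step (c) are omitted."""
    feats = feature_net(image)                     # backbone feature maps
    attn = attention_net(feats)                    # attention maps
    fused = torch.cat([feats, attn], dim=1)        # channel-wise concatenation
    proposals = rpn(fused)                         # region proposals
    rois = roi_pool(fused, proposals)              # pooled region features
    return det_head(rois), proposals, attn         # inputs to the three losses
```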
  • Patent number: 10963792
    Abstract: A method for training a deep learning network based on artificial intelligence is provided. The method includes steps of: a learning device (a) inputting unlabeled data into an active learning network to acquire sub unlabeled data and inputting the sub unlabeled data into an auto labeling network to generate new labeled data; (b) allowing a continual learning network to sample the new labeled data and existing labeled data to generate a mini-batch, and train the existing learning network using the mini-batch to acquire a trained learning network, wherein part of the mini-batch are selected by referring to specific existing losses; and (c) (i) allowing an explainable analysis network to generate insightful results on validation data and transmit the insightful results to a human engineer to transmit an analysis of the trained learning network and (ii) modifying at least one of the active learning network and the continual learning network.
    Type: Grant
    Filed: December 4, 2020
    Date of Patent: March 30, 2021
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Hongmo Je, Bongnam Kang, Wooju Ryu
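A hedged sketch of one round of the pipeline in the last abstract: active selection of unlabeled data, auto-labeling, then continual training on a mixed mini-batch where part of the existing samples are chosen by their stored losses. The `loss` field, the 50/50 mix, and all callables are assumptions; the explainable-analysis step involving the human engineer is omitted.

```python
import random

def training_round(unlabeled, labeled, active_net, auto_labeler, learner,
                   batch_size=64, new_fraction=0.5):
    """Hypothetical sketch of one round of the pipeline: active selection,
    auto-labeling, then continual training on a mixed mini-batch. Each existing
    sample is assumed to carry its previous loss under the 'loss' key."""
    sub_unlabeled = active_net(unlabeled)                    # samples worth labeling
    new_labeled = [auto_labeler(x) for x in sub_unlabeled]   # auto-generated labels

    # Part of the mini-batch comes from the new labels, the rest from existing
    # labels chosen by their previous (high) losses.
    k_new = min(int(batch_size * new_fraction), len(new_labeled))
    hardest_existing = sorted(labeled, key=lambda s: s["loss"], reverse=True)
    minibatch = random.sample(new_labeled, k_new) + hardest_existing[:batch_size - k_new]

    learner.train_on(minibatch)        # placeholder for the continual-learning update
    return new_labeled
```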