Patents Assigned to StradVision, Inc.
  • Patent number: 10565476
    Abstract: A method for generating at least one data set for learning to be used for detecting at least one obstruction in autonomous driving circumstances is provided. The method includes steps of: a computing device (a) obtaining a first original image indicating a driving situation, and a first segmentation ground truth (GT) image corresponding to the first original image; (b) obtaining a second original image including a specific object, and a second segmentation GT image which includes segmentation information for the specific object and corresponds to the second original image; (c) obtaining a third original image by cutting a portion corresponding to the specific object, and a third segmentation GT image by cutting pixels corresponding to an area where the specific object is located; and (d) creating the data set for learning which includes a fourth original image and a fourth segmentation GT image corresponding to the fourth original image.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: February 18, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
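The cut-and-paste idea in steps (c)-(d) — compositing a cropped object and its segmentation labels into a driving scene to synthesize a fourth image/GT pair — can be sketched as follows. This is an illustrative toy (nested lists stand in for real images; all names are hypothetical), not the patented implementation:

```python
def paste_object(base_img, base_gt, obj_patch, obj_mask, obj_label, top, left):
    """Composite an object patch (step c) into a driving image and its
    segmentation GT, producing the fourth image/GT pair (step d).
    Images are lists of rows; obj_mask marks object pixels with 1."""
    out_img = [row[:] for row in base_img]   # leave originals untouched
    out_gt = [row[:] for row in base_gt]
    for i, mask_row in enumerate(obj_mask):
        for j, m in enumerate(mask_row):
            if m:  # copy only pixels that belong to the object
                out_img[top + i][left + j] = obj_patch[i][j]
                out_gt[top + i][left + j] = obj_label
    return out_img, out_gt

# toy 4x4 road scene (GT label 0 = background/road)
img = [[10] * 4 for _ in range(4)]
gt = [[0] * 4 for _ in range(4)]
patch = [[99, 99], [99, 99]]
mask = [[1, 0], [1, 1]]
new_img, new_gt = paste_object(img, gt, patch, mask, obj_label=7, top=1, left=1)
```

Copying only masked pixels is what distinguishes this from a plain rectangular paste: background pixels inside the object's bounding box stay intact in both the image and the GT.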
  • Patent number: 10565863
    Abstract: A method for providing an Advanced Pedestrian Assistance System to protect a pedestrian preoccupied with a smartphone is provided. The method includes steps of: the smartphone instructing a locating unit to acquire 1-st information including location and velocity information of the pedestrian and location and velocity information of the smartphone; instructing a detecting unit to acquire 2-nd information including hazard statuses of hazardous areas near the pedestrian and location information and velocity information of hazardous objects, by referring to images acquired by phone cameras linked with the smartphone and the 1-st information; and instructing a control unit to calculate a degree of pedestrian safety of the pedestrian by referring to the 1-st and the 2-nd information, and to transmit a hazard alert to the pedestrian via the smartphone. Further, the method can be used for surveillance or a military purpose.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: February 18, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10551845
    Abstract: A method for generating at least one image data set for training to be used for a CNN capable of detecting objects in an input image is provided for improving hazard detection while driving. The method includes steps of: a computing device (a) acquiring a first label image in which edge parts are set on boundaries between the objects and a background and different label values are assigned corresponding to the objects and the background; (b) generating an edge image by extracting edge parts from the first label image; (c) generating a second label image by merging the first label image with a reinforced edge image, generated by assigning weights to the extracted edge parts; and (d) storing the input image and the second label image as the image data set. Further, the method allows detection rates for traffic signs, landmarks, road markers, and the like to be increased.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: February 4, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
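Steps (b)-(c) — extracting edge parts from a label image and merging a weighted version back in — can be sketched as below. The edge rule (a pixel is an edge if its label differs from a 4-neighbour) and the additive merge are illustrative assumptions, not the claimed method:

```python
def reinforce_edges(label_img, edge_weight=3):
    """Build an edge image by marking pixels whose label differs from a
    right or lower neighbour (step b), then merge a weighted version of
    those edges back into the label image (step c)."""
    h, w = len(label_img), len(label_img[0])
    edge = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            for di, dj in ((1, 0), (0, 1)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w and label_img[i][j] != label_img[ni][nj]:
                    edge[i][j] = 1
                    edge[ni][nj] = 1
    # second label image: original labels plus reinforced (weighted) edges
    merged = [[label_img[i][j] + edge_weight * edge[i][j] for j in range(w)]
              for i in range(h)]
    return edge, merged

lbl = [[0, 0, 1],
       [0, 0, 1],
       [0, 0, 1]]
edge, merged = reinforce_edges(lbl)
```

Up-weighting boundary pixels this way makes a segmentation loss pay more attention to thin structures such as sign posts and road markings, which is the stated goal of the reinforced edge image.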
  • Patent number: 10553118
    Abstract: A method for generating a lane departure warning (LDW) alarm by referring to information on a driving situation is provided to be used for ADAS, V2X or driver safety which are required to satisfy level 4 and level 5 of autonomous vehicles. The method includes steps of: a computing device instructing a LDW system (i) to collect information on the driving situation including information on whether a specific spot corresponding to a side mirror on a side of a lane, into which the driver desires to change, belongs to a virtual viewing frustum of the driver and (ii) to generate risk information on lane change by referring to the information on the driving situation; and instructing the LDW system to generate the LDW alarm by referring to the risk information. Thus, the LDW alarm can be provided to neighboring autonomous vehicles of level 4 and level 5.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: February 4, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10551846
    Abstract: A learning method for improving segmentation performance to be used for detecting road user events including pedestrian events and vehicle events using double embedding configuration in a multi-camera system is provided.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: February 4, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10540572
    Abstract: A method for auto-labeling a training image to be used for learning a neural network is provided for achieving high precision. The method includes steps of: an auto-labeling device (a) instructing a meta ROI detection network to generate a feature map and to acquire n current meta ROIs, on the specific training image, grouped according to each of locations of each of the objects; and (b) generating n manipulated images by cropping regions, corresponding to the n current meta ROIs, on the specific training image, instructing an object detection network to output each of n labeled manipulated images having each of bounding boxes for each of the n manipulated images, and generating a labeled specific training image by merging the n labeled manipulated images. The method can be performed by using an online learning, a continual learning, a hyperparameter learning, and a reinforcement learning with policy gradient algorithms.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: January 21, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
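The crop-detect-merge loop of step (b) — run a detector on each meta-ROI crop, then shift the resulting boxes back into full-image coordinates — can be sketched as follows. The detector is a stand-in callable and all names are hypothetical:

```python
def auto_label(meta_rois, detector):
    """For each meta ROI (rx, ry, rw, rh), run the detector on the cropped
    region and shift its boxes back into full-image coordinates, then
    collect them as the merged label set. `detector` returns boxes as
    (x, y, w, h) relative to the crop it was given."""
    merged = []
    for (rx, ry, rw, rh) in meta_rois:
        for (x, y, w, h) in detector((rx, ry, rw, rh)):
            merged.append((rx + x, ry + y, w, h))  # crop -> image coords
    return merged

# stand-in detector: one fixed box per crop, for demonstration only
boxes = auto_label(meta_rois=[(0, 0, 100, 100), (200, 150, 100, 100)],
                   detector=lambda roi: [(10, 20, 30, 30)])
```

Running detection on small, object-centred crops rather than the full frame is what lets the auto-labeler keep high precision on small objects; the coordinate shift is the whole "merge back" step in this simplified view (a real pipeline would also de-duplicate boxes where ROIs overlap).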
  • Patent number: 10528867
    Abstract: A method for learning a neural network by adjusting a learning rate each time the accumulated number of iterations reaches one of a first to an n-th specific values is provided. The method includes steps of: a learning device, while increasing k from 1 to (n-1), (b1) performing a k-th learning process of repeating the learning of the neural network at a k-th learning rate by using a part of the training data while the accumulated number of iterations is greater than a (k-1)-th specific value and is equal to or less than a k-th specific value, and (b2) (i) changing a k-th gamma to a (k+1)-th gamma by referring to k-th losses of the neural network which are obtained by the k-th learning process and (ii) changing the k-th learning rate to a (k+1)-th learning rate by using the (k+1)-th gamma.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: January 7, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
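The schedule update in steps (b1)-(b2) — derive a new gamma from the latest losses, then scale the learning rate by it — can be sketched as below. The concrete rule for adapting gamma (shrink it when the loss plateaus) is an illustrative assumption, not the patented one:

```python
def adjust_schedule(lr, gamma, recent_losses):
    """One update at the k-th specific iteration count: derive the
    (k+1)-th gamma from the k-th losses (here: halve gamma when the loss
    has stopped improving -- an illustrative rule), then obtain the
    (k+1)-th learning rate by scaling with the new gamma."""
    improving = recent_losses[-1] < recent_losses[0]
    new_gamma = gamma if improving else gamma * 0.5
    return lr * new_gamma, new_gamma

lr, gamma = 0.1, 0.9
lr, gamma = adjust_schedule(lr, gamma, [1.0, 0.8, 0.6])      # still improving
lr2, gamma2 = adjust_schedule(lr, gamma, [0.6, 0.61, 0.62])  # plateau
```

Compared with a fixed step schedule (constant gamma), letting gamma itself respond to the observed losses decays the learning rate faster exactly when training stalls.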
  • Patent number: 10509987
    Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using a target object estimating network and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing convolutional layers to generate a k-th feature map by applying convolution operations to a k-th manipulated image which corresponds to the (k-1)-th target region on an image; and instructing the target object merging network to merge a first to an n-th object detection information, outputted from an FC layer, and backpropagating losses generated by referring to merged object detection information and its corresponding GT. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: December 17, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
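The merging step — combining the first to n-th object detection information from the per-region passes into one set of boxes — can be sketched with a greedy IoU-based merge. This is a standard NMS-style approximation under stated assumptions, not necessarily how the target object merging network works:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_detections(det_lists, thresh=0.5):
    """Greedy merge of (box, score) detections from the n manipulated
    images: keep the highest-scoring box among mutually overlapping ones."""
    boxes = sorted((b for dets in det_lists for b in dets),
                   key=lambda b: -b[1])  # by score, descending
    kept = []
    for box, score in boxes:
        if all(iou(box, k) < thresh for k, _ in kept):
            kept.append((box, score))
    return kept

d1 = [((0, 0, 10, 10), 0.9)]
d2 = [((1, 1, 11, 11), 0.8), ((50, 50, 60, 60), 0.7)]
merged = merge_detections([d1, d2])
```

The two near-identical boxes (IoU ≈ 0.68) collapse into the higher-scoring one, while the distant box survives — the duplicate-suppression behaviour any merge over overlapping target regions needs.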
  • Patent number: 10503174
    Abstract: A method for efficient resource allocation in autonomous driving by reinforcement learning is provided for reducing computation via a heterogeneous sensor fusion. This attention-based method includes steps of: a computing device instructing an attention network to perform a neural network operation by referring to attention sensor data, to calculate attention scores; instructing a detection network to acquire video data by referring to the attention scores and to generate decision data for the autonomous driving; instructing a drive network to operate the autonomous vehicle by referring to the decision data, to acquire circumstance data, and to generate a reward by referring to the circumstance data; and instructing the attention network to adjust parameters used for the neural network operation by referring to the reward. Thus, a virtual space where the autonomous vehicle optimizes the resource allocation can be provided by the method.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: December 10, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10504027
    Abstract: A convolutional neural network (CNN)-based learning method for selecting useful training data is provided. The CNN-based learning method includes steps of: a learning device (a) instructing a first CNN module (i) to generate a first feature map, and instructing a second CNN module to generate a second feature map; (ii) to generate a first output indicating identification information or location information of an object by using the first feature map, and calculate a first loss by referring to the first output and its corresponding GT; (b) instructing the second CNN module (i) to change a size of the first feature map and integrate the first feature map with the second feature map, to generate a third feature map; (ii) to generate a fourth feature map and to calculate a second loss; and (c) backpropagating the auto-screener's loss generated by referring to the first loss and the second loss.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: December 10, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10496899
    Abstract: A CNN-based method for meta learning, i.e., learning to learn, by using a learning device including convolutional layers capable of applying convolution operations to an image or its corresponding input feature maps to generate output feature maps, and residual networks capable of feed-forwarding the image or its corresponding input feature maps to the next convolutional layer through bypassing the convolutional layers or its sub-convolutional layers is provided. The CNN-based method includes steps of: the learning device (a) selecting a specific residual network to be dropped out among the residual networks; (b) feeding the image into a transformed CNN where the specific residual network is dropped out, and outputting a CNN output; and (c) calculating losses by using the CNN output and its corresponding GT, and adjusting parameters of the transformed CNN. Further, the CNN-based method can also be applied to layer-wise dropout, stochastic ensemble, virtual driving, and the like.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: December 3, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
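Steps (a)-(b) — dropping one residual network so the input bypasses it through the identity path — resemble stochastic-depth training. A minimal sketch with scalar stand-ins for feature maps (the real method operates on CNN tensors; all names are illustrative):

```python
import random

def forward(x, residual_blocks, drop_index=None):
    """Forward pass of the transformed CNN where one residual network is
    dropped out: the dropped block contributes nothing, so the signal
    bypasses it unchanged through the identity (skip) path."""
    for i, block in enumerate(residual_blocks):
        if i == drop_index:
            continue  # bypass: x flows unchanged to the next block
        x = x + block(x)  # standard residual connection
    return x

def train_step(x, blocks, p_drop=0.5):
    """At training time, pick the residual network to drop at random
    (layer-wise dropout); at test time call forward() with no drop."""
    idx = random.randrange(len(blocks)) if random.random() < p_drop else None
    return forward(x, blocks, drop_index=idx)

blocks = [lambda v: v * 0.1, lambda v: v * 0.2]
full = forward(1.0, blocks)                   # no dropout: 1.1 -> 1.32
dropped = forward(1.0, blocks, drop_index=0)  # first residual dropped
```

Because each training pass samples a different transformed CNN, averaging over passes behaves like the stochastic ensemble the abstract mentions.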
  • Patent number: 10482584
    Abstract: A method for detecting jittering in videos generated by a shaken camera to remove the jittering on the videos using neural networks is provided for fault tolerance and fluctuation robustness in extreme situations. The method includes steps of: a computing device, generating each of t-th masks corresponding to each of objects in a t-th image; generating each of t-th object motion vectors of each of object pixels, included in the t-th image by applying at least one 2-nd neural network operation to each of the t-th masks, each of t-th cropped images, each of (t-1)-th masks, and each of (t-1)-th cropped images; and generating each of t-th jittering vectors corresponding to each of reference pixels among pixels in the t-th image by referring to each of the t-th object motion vectors. Thus, the method is used for video stabilization, object tracking with high precision, behavior estimation, motion decomposition, etc.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: November 19, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10474713
    Abstract: A method for learning a convolutional neural network (CNN) by using a plurality of labeled databases having different label sets is provided. The method includes steps of: a learning device (a) establishing databases for training, respectively including image data sets by categories, and GT label sets by the categories, in which, if each of the objects corresponds to a class belonging to its corresponding category, each piece of information is annotated as its corresponding class to the object, wherein the GT label sets correspond to the image data sets; (b) receiving, as an input image, a specific image belonging to a specific image data set corresponding to a specific class among the databases for training, and generating a feature map, and then generating classification results, by the categories, corresponding to a specific object included in the input image based on the feature map; and (c) learning parameters of the CNN by using losses by the categories.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: November 12, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10474930
    Abstract: A learning method of a CNN (Convolutional Neural Network) for monitoring one or more blind spots of a monitoring vehicle is provided. The learning method includes steps of: a learning device instructing a detector to output class information and location information on a monitored vehicle in a training image; instructing a cue information extracting layer to output cue information on the monitored vehicle by using the outputted information, and instructing an FC layer to determine whether the monitored vehicle is located on the blind spots by neural-network operations with the cue information or its processed values; and learning parameters of the FC layer and parameters of the detector, by backpropagating loss values for the blind spots by referring to the determination and its corresponding GT and backpropagating loss values for the vehicle detection by referring to the class information and the location information and their corresponding GT, respectively.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: November 12, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10467503
    Abstract: A method of generating at least one training data set is provided. The method includes steps of: (a) a computing device acquiring (i) an original image and (ii) an initial synthesized label generated by using an original label and a bounding box corresponding to an arbitrary specific object; and (b) the computing device supporting a CNN module to generate a first synthesized image and a first synthesized label by using the original image and the initial synthesized label, wherein the first synthesized label is created by adding a specific label to the original label at a location in the original label corresponding to a location of the bounding box in the initial synthesized label, and wherein the first synthesized image is created by adding a specific image to the original image at a location in the original image corresponding to the location of the bounding box in the initial synthesized label.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: November 5, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10460210
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: (a) a computing device, if a test image is acquired, instructing a non-object detector to acquire non-object location information for testing and class information of the non-objects for testing by detecting the non-objects for testing on the test image; (b) the computing device instructing the grid generator to generate section information by referring to the non-object location information for testing; (c) the computing device instructing a neural network to determine parameters for testing; (d) the computing device instructing the neural network to apply the neural network operations to the test image by using each of the parameters for testing, to thereby generate one or more neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 29, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10452980
    Abstract: A learning method for extracting features from an input image by hardware optimization using n blocks in a convolutional neural network (CNN) is provided. The method includes steps of: a learning device instructing a first convolutional layer of a k-th block to elementwise add a (1_1)-st to a (k_1)-st feature maps or their processed feature maps, and instructing a second convolutional layer of the k-th block to generate a (k_2)-nd feature map; and feeding a pooled feature map, generated by pooling an ROI area on an (n_2)-nd feature map or its processed feature map, into a feature classifier; and instructing a loss layer to calculate losses by referring to outputs of the feature classifier and their corresponding GT. By optimizing hardware, CNN throughput can be improved, and the method becomes more appropriate for compact networks, mobile devices, and the like. Further, the method allows key performance index to be satisfied.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: October 22, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
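The block structure above — the k-th block elementwise-adds the first-convolution outputs of the (1_1)-st through (k_1)-st layers before its second convolution runs — can be sketched with scalar "feature maps" standing in for tensors. An illustrative toy, not the claimed hardware-optimized implementation:

```python
def run_blocks(x, n, conv1s, conv2s):
    """k-th block: the (k_1)-st output is elementwise-added with the
    first-convolution outputs of all earlier blocks, and the block's
    second convolution turns that sum into the (k_2)-nd feature map."""
    first_outs = []
    for k in range(n):
        first_outs.append(conv1s[k](x))  # (k_1)-st feature map
        summed = sum(first_outs)         # elementwise add of (1_1)..(k_1)
        x = conv2s[k](summed)            # (k_2)-nd feature map
    return x

# toy "convolutions": add-one for the first layer, doubling for the second
conv1s = [lambda v: v + 1, lambda v: v + 1]
conv2s = [lambda v: v * 2, lambda v: v * 2]
out = run_blocks(1.0, 2, conv1s, conv2s)
```

Reusing earlier first-convolution outputs through addition (rather than channel concatenation, as in DenseNet-style designs) keeps feature-map widths fixed, which is what makes the scheme hardware-friendly for compact and mobile networks.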
  • Patent number: 10445611
    Abstract: A method for detecting at least one pseudo-3D bounding box based on a CNN capable of converting modes according to conditions of objects in an image is provided. The method includes steps of: a learning device (a) instructing a pooling layer to generate a pooled feature map corresponding to a 2D bounding box, and instructing a type-classifying layer to determine whether objects in the pooled feature map are truncated or non-truncated; (b) instructing FC layers to generate box pattern information corresponding to the pseudo-3D bounding box; (c) instructing classification layers to generate orientation class information on the objects, and regression layers to generate regression information on coordinates of the pseudo-3D bounding box; and (d) backpropagating class losses and regression losses generated from FC loss layers. Through the method, rendering of truncated objects can be performed while virtual driving, and this is useful for mobile devices and also for military purpose.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: October 15, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10438082
    Abstract: A method for learning parameters of a CNN capable of detecting ROIs determined based on bottom lines of nearest obstacles in an input image is provided. The method includes steps of: a learning device instructing a first to an n-th convolutional layers to generate a first to an n-th encoded feature maps from the input image; instructing an n-th to a first deconvolutional layers to generate an n-th to a first decoded feature maps from the n-th encoded feature map; if a specific decoded feature map is divided into directions of rows and columns, generating an obstacle segmentation result by referring to a feature of the n-th to the first decoded feature maps; instructing an RPN to generate an ROI bounding box by referring to each anchor box, and losses by referring to the ROI bounding box and its corresponding GT; and backpropagating the losses, to learn the parameters.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: October 8, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
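The column-wise reading that underlies "bottom lines of nearest obstacles" can be sketched on a binary obstacle mask: scan each column from the image bottom upward and report the first obstacle row. A simplified interpretation of the decoded-feature-map step, with hypothetical names:

```python
def obstacle_bottom_lines(mask):
    """Per-column search for the bottom line of the nearest obstacle:
    scan each column from the image bottom upward and return the first
    row containing an obstacle pixel, or None if the column is free.
    `mask` is a list of rows, 1 = obstacle."""
    h, w = len(mask), len(mask[0])
    bottoms = []
    for col in range(w):
        hit = None
        for row in range(h - 1, -1, -1):  # bottom of image upward
            if mask[row][col]:
                hit = row
                break
        bottoms.append(hit)
    return bottoms

mask = [[0, 1, 0],
        [0, 1, 1],
        [0, 0, 0]]
bottoms = obstacle_bottom_lines(mask)
```

Because the camera looks along the road, the bottom-most obstacle pixel in a column corresponds to the nearest obstacle in that direction, so these per-column rows are a natural basis for placing ROIs.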
  • Patent number: 10430691
    Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using a target object merging network and a target region estimating network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device (i) instructing the target region estimating network to search for k-th estimated target regions, (ii) instructing an RPN to generate (k_1)-st to (k_n)-th object proposals, corresponding to an object on a (k_1)-st to a (k_n)-th manipulated images, and (iii) instructing the target object merging network to merge the object proposals and merge (k_1)-st to (k_n)-th object detection information, outputted from an FC layer. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 1, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho