Patents Assigned to StradVision, Inc.
  • Patent number: 10373026
    Abstract: A method of learning for deriving virtual feature maps from virtual images, whose characteristics are the same as or similar to those of real feature maps derived from real images, by using a GAN including a generating network and a discriminating network capable of being applied to domain adaptation is provided to be used in virtual driving environments. The method includes steps of: (a) a learning device instructing the generating network to apply convolutional operations to an input image, to thereby generate an output feature map whose characteristics are the same as or similar to those of the real feature maps; and (b) instructing a loss unit to generate losses by referring to an evaluation score, corresponding to the output feature map, generated by the discriminating network. By the method, which uses a runtime input transformation, a gap between virtuality and reality can be reduced, and annotation costs can be reduced.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
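The generator/discriminator interplay described in the abstract above can be sketched as a toy training loop. The 1-D "feature map", the sigmoid discriminator, and the non-saturating generator loss are illustrative assumptions, not the patented architecture:

```python
import math

# Toy sketch of steps (a)-(b): a "generating network" maps a virtual input to
# a feature value, a "discriminating network" scores how real it looks, and a
# loss unit turns that evaluation score into a generator loss.

def generator(x, w):
    # stand-in for the convolutional operations producing an output feature map
    return w * x

def discriminator(f):
    # evaluation score in (0, 1): how "real" the feature map appears
    return 1.0 / (1.0 + math.exp(-f))

def generator_loss(score):
    # non-saturating GAN loss: the generator wants the score near 1
    return -math.log(max(score, 1e-12))

w, lr = 0.0, 0.5
for _ in range(200):
    x = 1.0  # a fixed "virtual image" feature
    eps = 1e-5
    loss = generator_loss(discriminator(generator(x, w)))
    loss_plus = generator_loss(discriminator(generator(x, w + eps)))
    w -= lr * (loss_plus - loss) / eps   # numerical gradient step

final_score = discriminator(generator(1.0, w))  # near 1 after training
```

In a full system the discriminator would be trained in alternation against real feature maps; here it is fixed so the generator update is visible in isolation.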
  • Patent number: 10373317
    Abstract: A method for an attention-driven image segmentation by using at least one adaptive loss weight map is provided to be used for updating HD maps required to satisfy level 4 of autonomous vehicles. By this method, vague objects such as lanes and road markers at a distance may be detected more accurately. Also, this method can be usefully performed in military applications, where identification of friend or foe is important, by distinguishing aircraft marks or military uniforms at a distance. The method includes steps of: a learning device instructing a softmax layer to generate softmax scores; instructing a loss weight layer to generate loss weight values by applying loss weight operations to predicted error values generated therefrom; and instructing a softmax loss layer to generate adjusted softmax loss values by referring to initial softmax loss values, generated by referring to the softmax scores and their corresponding GTs, and the loss weight values.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
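The loss-weighting idea above can be sketched per pixel: pixels whose predicted error is high get a larger loss weight, so vague objects at a distance contribute more to training. The linear weight rule below is an assumed form, not the patented loss weight operation:

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def initial_softmax_loss(scores, gt_index):
    # standard cross-entropy against the ground-truth class
    return -math.log(scores[gt_index])

def loss_weight(predicted_error, alpha=2.0):
    # assumed loss weight operation: larger predicted error -> larger weight
    return 1.0 + alpha * predicted_error

# two pixels: one confident, one vague (near-uniform scores); GT class is 0
pixels = [([4.0, 0.1, 0.2], 0), ([0.6, 0.5, 0.4], 0)]
adjusted_losses = []
for logits, gt in pixels:
    scores = softmax(logits)
    predicted_error = 1.0 - scores[gt]
    adjusted_losses.append(loss_weight(predicted_error)
                           * initial_softmax_loss(scores, gt))
```

The vague pixel ends up with a much larger adjusted loss, which is the intended attention effect.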
  • Patent number: 10373025
    Abstract: A method for verifying an integrity of one or more parameters of a convolutional neural network (CNN) by using at least one test pattern to be added to at least one original input is provided for fault tolerance, fluctuation robustness in extreme situations, functional safety on the CNN, and annotation cost reduction. The method includes steps of: (a) a computing device instructing at least one adding unit to generate at least one extended input by adding the test pattern to the original input; (b) the computing device instructing the CNN to generate at least one output for verification by applying one or more convolution operations to the extended input; and (c) the computing device instructing at least one comparing unit to verify the integrity of the parameters of the CNN by determining a validity of the output for verification with reference to at least one output for reference.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
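The three steps above map directly onto a small check: (a) append a fixed test pattern to the original input, (b) run the convolution over the extended input, (c) compare the pattern's output region against a precomputed reference. The 1-D convolution and all values are illustrative, not the patented scheme:

```python
def conv1d(signal, kernel):
    # valid-mode 1-D convolution (correlation form)
    n = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(n))
            for i in range(len(signal) - n + 1)]

TEST_PATTERN = [1.0, 2.0, 3.0, 4.0]

def verify_integrity(original_input, kernel, output_for_reference, tol=1e-9):
    extended_input = original_input + TEST_PATTERN            # step (a)
    output_for_verification = conv1d(extended_input, kernel)  # step (b)
    tail = output_for_verification[-len(output_for_reference):]
    return all(abs(a - b) <= tol                              # step (c)
               for a, b in zip(tail, output_for_reference))

good_kernel = [0.5, -1.0, 0.25]
reference = conv1d(TEST_PATTERN, good_kernel)  # computed once, offline

ok = verify_integrity([9.0, 8.0, 7.0], good_kernel, reference)
corrupted_kernel = [0.5, -1.0, 0.75]  # one corrupted parameter
bad = verify_integrity([9.0, 8.0, 7.0], corrupted_kernel, reference)
```

Because the pattern is appended after the real input, the check piggybacks on the same forward pass rather than requiring a separate inference.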
  • Patent number: 10373323
    Abstract: A method for merging object detection information detected by object detectors, each of which corresponds to each of cameras located nearby, by using V2X-based auto labeling and evaluation, wherein the object detectors detect objects in each of images generated from each of the cameras by image analysis based on deep learning, is provided. The method includes steps of: if first to n-th object detection information are respectively acquired from the first to the n-th object detectors in a descending order of degrees of detection reliabilities, a merging device generating (k-1)-th object merging information by merging (k-2)-th objects and k-th objects through matching operations, and re-projecting the (k-1)-th object merging information onto an image, by increasing k from 3 to n. The method can be used for collaborative driving or an HD map update through V2X-enabled applications, sensor fusion via multiple vehicles, and the like.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10373004
    Abstract: A method for detecting lane elements, which are unit regions including pixels of lanes in an input image, to plan the drive path of an autonomous vehicle by using a horizontal filter mask is provided. The method includes steps of: a computing device acquiring a segmentation score map from a CNN using the input image; instructing a post-processing module, capable of performing data processing at an output end of the CNN, to generate a magnitude map by using the segmentation score map and the horizontal filter mask; instructing the post-processing module to determine each of lane element candidates per each of rows of the segmentation score map by referring to values of the magnitude map; and instructing the post-processing module to apply estimation operations to each of the lane element candidates per each of the rows, to thereby detect each of the lane elements.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10346693
    Abstract: A method of attention-based lane detection without post-processing by using a lane mask is provided. The method includes steps of: a learning device instructing a CNN to acquire a final feature map which has been generated by applying convolution operations to an image, a segmentation score map, and an embedded feature map which have been generated by using the final feature map; instructing a lane masking layer to recognize lane candidates, generate the lane mask, and generate a masked feature map; instructing a convolutional layer to generate a lane feature map; instructing a first FC layer to generate a softmax score map and a second FC layer to generate lane parameters; and backpropagating loss values outputted from a multinomial logistic loss layer and a line fitting loss layer, to thereby learn parameters of the FC layers, and the convolutional layer. Thus, lanes at distance can be detected more accurately.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 9, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10339424
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: a computing device (a) instructing a detector to acquire object location information for testing and class information; (b) instructing the grid generator to generate section information by referring to the object location information for testing; (c) instructing a neural network to determine parameters for testing, to be used for applying the neural network operations to either (i) the subsections including each of the objects for testing and each of non-objects for testing, or (ii) each of sub-regions, in each of the subsections, where said each of the non-objects for testing is located; and (d) instructing the neural network to apply the neural network operations to a test image, to thereby generate neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 2, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325371
    Abstract: A method for segmenting an image by using each of a plurality of weighted convolution filters for each of grid cells to be used for converting modes according to classes of areas is provided to satisfy level 4 of an autonomous vehicle. The method includes steps of: a learning device (a) instructing (i) an encoding layer to generate an encoded feature map and (ii) a decoding layer to generate a decoded feature map; (b) if a specific decoded feature map is divided into the grid cells, instructing a weight convolution layer to set weighted convolution filters therein to correspond to the grid cells, and to apply a weight convolution operation to the specific decoded feature map; and (c) backpropagating a loss. The method is applicable to CCTV for surveillance as the neural network may have respective optimum parameters to be applied to respective regions with respective distances.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 18, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325352
    Abstract: There is provided a method for transforming convolutional layers of a CNN including m convolutional blocks to optimize CNN parameter quantization, to be used for mobile devices, compact networks, and the like with high precision via hardware optimization. The method includes steps of: a computing device (a) generating k-th quantization loss values by referring to k-th initial weights of a k-th initial convolutional layer included in a k-th convolutional block, a (k-1)-th feature map outputted from the (k-1)-th convolutional block, and each of k-th scaling parameters; (b) determining each of k-th optimized scaling parameters by referring to the k-th quantization loss values; (c) generating a k-th scaling layer and a k-th inverse scaling layer by referring to the k-th optimized scaling parameters; and (d) transforming the k-th initial convolutional layer into a k-th integrated convolutional layer by using the k-th scaling layer and the (k-1)-th inverse scaling layer.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
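The scaling idea above can be sketched in miniature: choose a scaling parameter that minimizes the quantization loss of a layer's weights, then fold it into an "integrated" convolutional layer (a following inverse-scaling layer would undo it on the activations). The uniform quantizer, the candidate grid, and the toy weights are all assumptions:

```python
def quantize(values, step):
    # uniform quantizer with the given step size
    return [round(v / step) * step for v in values]

def quantization_loss(weights, scale, step=1.0 / 16):
    # quantization loss: error after scaling, quantizing, and de-scaling
    scaled = [w * scale for w in weights]
    dequantized = [q / scale for q in quantize(scaled, step)]
    return sum((a - b) ** 2 for a, b in zip(weights, dequantized))

weights = [0.011, -0.007, 0.02, 0.004]  # small weights quantize badly unscaled
candidates = [1.0, 4.0, 16.0, 64.0]     # candidate scaling parameters
best_scale = min(candidates, key=lambda s: quantization_loss(weights, s))

# fold the chosen scaling layer into the convolutional layer's weights
integrated_weights = [w * best_scale for w in weights]
```

At scale 1.0 every weight rounds to zero, so any of the larger candidates dramatically reduces the loss; the folding step keeps inference cost unchanged.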
  • Patent number: 10325185
    Abstract: A method of online batch normalization, on-device learning, or continual learning, applicable to mobile devices, IoT devices, and the like, is provided. The method includes steps of: (a) a computing device instructing a convolutional layer to acquire a k-th batch, and to generate feature maps for the k-th batch by applying convolution operations to input images included in the k-th batch respectively; and (b) the computing device instructing a batch normalization layer to calculate adjusted averages and adjusted variations of the feature maps by referring to the feature maps in case k is 1, or the feature maps and previous feature maps, included in at least part of previous batches among the previously generated first to (k-1)-th batches, in case k is an integer from 2 to m, and to apply batch normalization operations to the feature maps. Further, the method may be performed for military purposes, or for other devices such as drones and robots.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
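The online variant described above can be sketched with scalar features: for the k-th batch, the adjusted average and adjusted variation are computed over the current features together with features retained from previous batches, then used to normalize. Retaining every previous value, as done here, is a simplification of "at least part of previous batches":

```python
class OnlineBatchNorm:
    def __init__(self, eps=1e-5):
        self.eps = eps
        self.history = []  # feature values retained from previous batches

    def __call__(self, batch_features):
        pooled = self.history + batch_features  # for k == 1 this is the batch alone
        mean = sum(pooled) / len(pooled)                          # adjusted average
        var = sum((v - mean) ** 2 for v in pooled) / len(pooled)  # adjusted variation
        self.history.extend(batch_features)
        return [(v - mean) / (var + self.eps) ** 0.5 for v in batch_features]

bn = OnlineBatchNorm()
out1 = bn([1.0, 3.0])        # k = 1: statistics from this batch only
out2 = bn([2.0, 4.0, 6.0])   # k = 2: statistics over both batches
```

Pooling statistics across batches stabilizes normalization when on-device batches are tiny, which is the motivating setting in the abstract.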
  • Patent number: 10325201
    Abstract: A method for generating a deceivable composite image by using a GAN (Generative Adversarial Network) including a generating and a discriminating neural network to allow a surveillance system to recognize surroundings and detect a rare event, such as hazardous situations, more accurately by using a heterogeneous sensor fusion is provided. The method includes steps of: a computing device, generating location candidates of a rare object on a background image, and selecting a specific location candidate among the location candidates as an optimal location of the rare object by referring to candidate scores; inserting a rare object image into the optimal location, generating an initial composite image; and adjusting color values corresponding to each of pixels in the initial composite image, generating the deceivable composite image. Further, the method may be applicable to a pedestrian assistant system and a route planning by using 3D maps, GPS, smartphones, V2X communications, etc.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: June 18, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325179
    Abstract: A method for pooling at least one ROI by using one or more masking parameters is provided. The method is applicable to mobile devices, compact networks, and the like via hardware optimization. The method includes steps of: (a) a computing device, if an input image is acquired, instructing a convolutional layer of a CNN to generate a feature map corresponding to the input image; (b) the computing device instructing an RPN of the CNN to determine the ROI corresponding to at least one object included in the input image by using the feature map; (c) the computing device instructing an ROI pooling layer of the CNN to apply each of pooling operations correspondingly to each of sub-regions in the ROI by referring to each of the masking parameters corresponding to each of the pooling operations, to thereby generate a masked pooled feature map.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
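Step (c) above can be sketched in one dimension: each pooling operation over an ROI sub-region is paired with a masking parameter (binary here), and masked-out sub-regions contribute zero to the masked pooled feature map. The 1-D layout, max pooling, and 0/1 masks are illustrative assumptions:

```python
def masked_roi_pool(feature_map, roi, masking_params):
    # slice the ROI out of the feature map and split it into equal sub-regions
    start, end = roi
    region = feature_map[start:end]
    n = len(masking_params)
    size = len(region) // n
    sub_regions = [region[i * size:(i + 1) * size] for i in range(n)]
    # max-pool each sub-region, zeroed where its masking parameter is 0
    return [m * max(sub) for m, sub in zip(masking_params, sub_regions)]

feature_map = [0.1, 0.9, 0.3, 0.7, 0.2, 0.8, 0.5, 0.4]
pooled = masked_roi_pool(feature_map, roi=(0, 8), masking_params=[1, 0, 1, 0])
```

Skipping the masked sub-regions entirely (rather than multiplying by zero) is what makes the approach attractive for hardware-constrained mobile targets.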
  • Patent number: 10318842
    Abstract: A learning method for learning parameters of a convolutional neural network (CNN) by using multiple video frames is provided. The learning method includes steps of: (a) a learning device applying at least one convolutional operation to a (t-k)-th input image corresponding to a (t-k)-th frame and applying at least one convolutional operation to a t-th input image corresponding to a t-th frame following the (t-k)-th frame, to thereby obtain a (t-k)-th feature map corresponding to the (t-k)-th frame and a t-th feature map corresponding to the t-th frame; (b) the learning device calculating a first loss by referring to each of at least one distance value between each of pixels in the (t-k)-th feature map and each of pixels in the t-th feature map; and (c) the learning device backpropagating the first loss to thereby optimize at least one parameter of the CNN.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: June 11, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
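Step (b) above reduces to a simple computation: the first loss sums per-pixel distance values between the (t-k)-th and t-th feature maps, and minimizing it (by backpropagation, per step (c)) pushes the CNN toward temporally consistent features. The squared distance and the toy feature maps are illustrative assumptions:

```python
def distance_loss(fmap_a, fmap_b):
    # squared distance per pixel, summed over the feature map
    return sum((a - b) ** 2 for a, b in zip(fmap_a, fmap_b))

fmap_prev = [0.2, 0.8, 0.5]    # (t-k)-th feature map
fmap_curr = [0.25, 0.75, 0.5]  # t-th feature map of nearly the same scene
fmap_far = [0.9, 0.1, 0.0]     # feature map of an unrelated scene

consistency = distance_loss(fmap_prev, fmap_curr)   # small: stable features
inconsistency = distance_loss(fmap_prev, fmap_far)  # large: gets penalized
```

Consecutive frames of the same scene should yield a small loss, while a drifting feature extractor produces a large one.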
  • Patent number: 10311336
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: (a) a computing device instructing a pair detector to acquire information on locations and classes of pairs for testing by detecting the pairs for testing; (b) the computing device instructing the grid generator to generate section information by referring to the information on the locations of the pairs for testing; (c) the computing device instructing a neural network to determine parameters for testing by referring to parameters for training which have been learned by using information on pairs for training; and (d) the computing device instructing the neural network to apply the neural network operations to a test image by using each of the parameters for testing to thereby generate one or more neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311337
    Abstract: A method for providing an integrated feature map by using an ensemble of a plurality of outputs from a convolutional neural network (CNN) is provided. The method includes steps of: a CNN device (a) receiving an input image and applying a plurality of modification functions to the input image to thereby generate a plurality of modified input images; (b) applying convolution operations to each of the modified input images to thereby obtain each of modified feature maps corresponding to each of the modified input images; (c) applying each of reverse transform functions, corresponding to each of the modification functions, to each of the corresponding modified feature maps, to thereby generate each of reverse transform feature maps corresponding to each of the modified feature maps; and (d) integrating at least part of the reverse transform feature maps to thereby obtain an integrated feature map.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
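Steps (a) through (d) above can be sketched with 1-D "images": apply modification functions to the input (identity and horizontal flip here), run each copy through the network, reverse-transform each result back to the original frame, and integrate. Averaging as the integration step and the 2-tap filter are assumptions, not the patented operations:

```python
def flip(xs):
    # horizontal flip: a modification function that is its own reverse
    return list(reversed(xs))

def identity(xs):
    return list(xs)

def cnn(xs):
    # stand-in for convolution: a 2-tap filter with same-length output
    return [0.7 * xs[i] + 0.3 * xs[min(i + 1, len(xs) - 1)]
            for i in range(len(xs))]

# each modification function paired with its reverse transform function
modifications = [(identity, identity), (flip, flip)]

def integrated_feature_map(image):
    maps = [reverse(cnn(modify(image)))  # steps (a)-(c)
            for modify, reverse in modifications]
    return [sum(vals) / len(maps) for vals in zip(*maps)]  # step (d)

out = integrated_feature_map([1.0, 2.0, 3.0, 4.0])
```

Because every modified feature map is reverse-transformed first, all ensemble members are aligned pixel-for-pixel before integration.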
  • Patent number: 10311324
    Abstract: A method for learning parameters of CNNs capable of identifying objectnesses by detecting bottom lines and top lines of nearest obstacles in an input image is provided. The method includes steps of: a learning device, (a) instructing a first CNN to generate first encoded feature maps and first decoded feature maps, and instructing a second CNN to generate second encoded feature maps and second decoded feature maps; (b) generating first and second obstacle segmentation results respectively representing where the bottom lines and the top lines are estimated as being located per each column, by referring to the first and the second decoded feature maps respectively; (c) estimating the objectnesses by referring to the first and the second obstacle segmentation results; (d) generating losses by referring to the objectnesses and their corresponding GTs; and (e) backpropagating the losses, to thereby learn the parameters of the CNNs.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311335
    Abstract: A method of generating at least one image data set to be used for learning a CNN capable of detecting at least one obstruction in one or more autonomous driving circumstances, comprising steps of: (a) a learning device acquiring (i) an original image representing a road driving circumstance and (ii) a synthesized label obtained by using an original label corresponding to the original image and an additional label corresponding to an arbitrary specific object, wherein the arbitrary specific object does not relate to the original image; and (b) the learning device supporting a first CNN module to generate a synthesized image using the original image and the synthesized label, wherein the synthesized image is created by combining (i) an image of the arbitrary specific object corresponding to the additional label and (ii) the original image.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311578
    Abstract: A learning method for segmenting an image having one or more lanes is provided to be used for supporting collaboration with HD maps required to satisfy level 4 of autonomous vehicles. The learning method includes steps of: a learning device instructing a CNN module (a) to apply convolution operations to the image, thereby generating a feature map, and apply deconvolution operations thereto, thereby generating segmentation scores of each of pixels on the image; (b) to apply Softmax operations to the segmentation scores, thereby generating Softmax scores; and (c) to (I) apply multinomial logistic loss operations and pixel embedding operations to the Softmax scores, thereby generating Softmax losses and embedding losses, where the embedding losses are used to increase inter-lane differences among averages of the segmentation scores and decrease intra-lane variances among the segmentation scores, in learning parameters of the CNN module, and (II) backpropagate the Softmax and the embedding losses.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
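The embedding-loss idea in step (c) above can be sketched directly: pull segmentation scores within a lane toward their lane's mean (small intra-lane variance) and push lane means apart (large inter-lane difference). The margin-hinge form below is an assumed discriminative-loss variant, not the patented pixel embedding operation:

```python
def embedding_loss(lane_scores, margin=1.0):
    means = [sum(s) / len(s) for s in lane_scores]
    # intra-lane variance: scores should cluster around their lane's mean
    intra = sum(sum((v - m) ** 2 for v in s) / len(s)
                for s, m in zip(lane_scores, means)) / len(lane_scores)
    # inter-lane term: hinge that penalizes means closer than the margin
    inter, pairs = 0.0, 0
    for i in range(len(means)):
        for j in range(i + 1, len(means)):
            inter += max(0.0, margin - abs(means[i] - means[j])) ** 2
            pairs += 1
    return intra + inter / pairs

tight = embedding_loss([[0.1, 0.12], [1.5, 1.48]])  # well-separated lanes
loose = embedding_loss([[0.1, 0.9], [0.5, 1.1]])    # overlapping lanes
```

Well-separated, tightly clustered lanes incur almost no loss, while overlapping lanes are penalized on both terms.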
  • Patent number: 10311321
    Abstract: A method for learning parameters of a CNN based on regression losses is provided. The method includes steps of: a learning device instructing first to n-th convolutional layers to generate first to n-th encoded feature maps; instructing n-th to first deconvolutional layers to generate n-th to first decoded feature maps from the n-th encoded feature map; generating an obstacle segmentation result by referring to a feature of the decoded feature maps; generating the regression losses by referring to differences of distances between each location of the specific rows, where bottom lines of nearest obstacles are estimated as being located per each of columns of a specific decoded feature map, and each location of exact rows, where the bottom lines are truly located per each of the columns on a GT; and backpropagating the regression losses, to thereby learn the parameters.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311338
    Abstract: A learning method of a CNN capable of detecting one or more lanes is provided. The learning method includes steps of: a learning device (a) applying convolution operations to an image, to generate a feature map, and generating lane candidate information; (b) generating a first pixel data map including information on pixels in the image and their corresponding pieces of first data, wherein main subsets from the first data include distance values from the pixels to their nearest first lane candidates by using a direct regression, and generating a second pixel data map including information on the pixels and their corresponding pieces of second data, wherein main subsets from the second data include distance values from the pixels to their nearest second lane candidates by using the direct regression; and (c) detecting the lanes by inference from the first pixel data map and the second pixel data map.
    Type: Grant
    Filed: September 15, 2018
    Date of Patent: June 4, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho