Patents by Inventor Kyungjoong Jeong

Kyungjoong Jeong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10387753
    Abstract: A method for learning parameters of a CNN for image recognition is provided, to be used for hardware optimization that satisfies KPI. The method includes steps of: a learning device (1) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating each of the pixels, per each of the ROIs, in corresponding locations on pooled ROI feature maps; and (2) (i) instructing a second transposing layer or a classifying layer to divide an adjusted feature map, whose volume is adjusted from the integrated feature map, by each of the pixels, and instructing the classifying layer to generate object information on the ROIs, and (ii) backpropagating object losses. The size of a chip can be decreased because the convolution operations and the fully connected layer operations are performed by the same processor; accordingly, no additional lines need to be built in a semiconductor manufacturing process.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
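The transpose-and-concatenate step above can be sketched in a few lines. This is a minimal NumPy illustration of one plausible reading of the abstract (reshaping pooled ROI features so that a 1x1 convolution can emulate a fully connected layer, letting one processor serve both operation types); the function name and shapes are assumptions, not the patented implementation.

```python
import numpy as np

def integrate_pooled_rois(pooled):
    """Concatenate, per ROI, the pixels of its pooled feature map into a
    single channel vector, so each ROI becomes one 1x1 'pixel'.

    pooled: (N, C, H, W) pooled ROI feature maps for N ROIs.
    Returns: (N, C*H*W, 1, 1) integrated feature map on which a 1x1
    convolution can stand in for a fully connected layer.
    """
    n, c, h, w = pooled.shape
    return pooled.reshape(n, c * h * w, 1, 1)
```

Under this reading, a subsequent (C*H*W → num_classes) 1x1 convolution would play the role of the classifying layer's fully connected operation.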
  • Patent number: 10380724
    Abstract: A method for learning reduction of distortion occurring in a warped image by using a GAN is provided, for enhancing fault tolerance and fluctuation robustness in extreme situations. The method includes steps of: (a) if an initial image is acquired, instructing an adjusting layer included in the generating network to adjust at least part of initial feature values, to thereby transform the initial image into an adjusted image; and (b) if at least part of (i) a naturality score, (ii) a maintenance score, and (iii) a similarity score is acquired, instructing a loss layer included in the generating network to generate a generating network loss by referring to said at least part of the naturality score, the maintenance score, and the similarity score, and to learn parameters of the generating network. Further, the method can be used for estimating behaviors, and for detecting or tracking objects with high precision.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 13, 2019
    Assignee: Stradvision, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
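Step (b)'s score combination can be sketched as a single scalar loss. The weights and the convention that higher scores are better are assumptions for illustration, not taken from the patent.

```python
def generating_network_loss(naturality, maintenance, similarity,
                            w_nat=1.0, w_mnt=1.0, w_sim=1.0):
    """Combine the naturality, maintenance, and similarity scores into
    one generating-network loss. Assuming higher scores are better, the
    loss is their negated weighted sum, so minimizing it raises all
    three scores."""
    return -(w_nat * naturality + w_mnt * maintenance + w_sim * similarity)
```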
  • Patent number: 10373026
    Abstract: A method of learning for deriving virtual feature maps from virtual images, whose characteristics are the same as or similar to those of real feature maps derived from real images, by using a GAN including a generating network and a discriminating network capable of being applied to domain adaptation is provided, to be used in virtual driving environments. The method includes steps of: (a) a learning device instructing the generating network to apply convolutional operations to an input image, to thereby generate an output feature map whose characteristics are the same as or similar to those of the real feature maps; and (b) instructing a loss unit to generate losses by referring to an evaluation score, corresponding to the output feature map, generated by the discriminating network. By this method, which uses a runtime input transformation, the gap between virtuality and reality can be reduced, and annotation costs can be reduced.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: Stradvision, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10373023
    Abstract: A method for learning a runtime input transformation of real images into virtual images by using a cycle GAN capable of being applied to domain adaptation is provided. The method can also be performed in virtual driving environments. The method includes steps of: (a) (i) instructing a first transformer to transform a first image into a second image, (ii-1) instructing a first discriminator to generate a 1_1-st result, and (ii-2) instructing a second transformer to transform the second image into a third image, whose characteristics are the same as or similar to those of the real images; (b) (i) instructing the second transformer to transform a fourth image into a fifth image, (ii-1) instructing the second discriminator to generate a 2_1-st result, and (ii-2) instructing the first transformer to transform the fifth image into a sixth image; and (c) calculating losses. By this method, the gap between virtuality and reality can be reduced, and annotation costs can be reduced.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: Stradvision, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
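The losses of step (c) typically include a cycle-consistency term (the third image should reconstruct the first, and the sixth the fourth) plus adversarial terms from the two discriminators. A minimal NumPy sketch under those standard cycle-GAN assumptions, with illustrative names:

```python
import numpy as np

def cycle_gan_loss(first, third, fourth, sixth, d1_score, d2_score, eps=1e-8):
    """Cycle-consistency: the first->second->third and
    fourth->fifth->sixth round trips should reproduce the originals.
    Adversarial terms reward fooling the two discriminators (scores
    assumed to lie in (0, 1])."""
    cycle = np.abs(first - third).mean() + np.abs(fourth - sixth).mean()
    adversarial = -(np.log(d1_score + eps) + np.log(d2_score + eps))
    return cycle + adversarial
```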
  • Patent number: 10373323
    Abstract: A method for merging object detection information detected by object detectors, each of which corresponds to one of the cameras located nearby, by using V2X-based auto labeling and evaluation is provided, wherein the object detectors detect objects in each of the images generated by the cameras through deep learning-based image analysis. The method includes steps of: if first to n-th object detection information are respectively acquired from first to n-th object detectors in descending order of detection reliability, a merging device generating (k-1)-th object merging information by merging (k-2)-th objects and k-th objects through matching operations, and re-projecting the (k-1)-th object merging information onto an image, while increasing k from 3 to n. The method can be used for collaborative driving or HD map updates through V2X-enabled applications, sensor fusion across multiple vehicles, and the like.
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: August 6, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
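The matching operations of the merging step are not specified in the abstract; a common choice is greedy IoU matching. The sketch below merges detector outputs in descending reliability order under that assumption, fusing matched boxes by coordinate averaging (the threshold and the fusion rule are both illustrative).

```python
import numpy as np

def iou(a, b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def merge_detections(detector_outputs, iou_thresh=0.5):
    """detector_outputs: list of box lists, ordered by descending
    detection reliability. A box matching an already-merged box (IoU
    above threshold) is fused by averaging; unmatched boxes are kept."""
    merged = [list(b) for b in detector_outputs[0]]
    for boxes in detector_outputs[1:]:
        for box in boxes:
            for i, m in enumerate(merged):
                if iou(box, m) > iou_thresh:
                    merged[i] = [(p + q) / 2 for p, q in zip(m, box)]
                    break
            else:
                merged.append(list(box))
    return merged
```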
  • Patent number: 10373317
    Abstract: A method for attention-driven image segmentation by using at least one adaptive loss weight map is provided, to be used for updating HD maps required to satisfy level 4 of autonomous vehicles. By this method, vague objects such as lanes and road markers at a distance may be detected more accurately. The method can also be useful in military applications, where identification of friend or foe is important, by distinguishing aircraft marks or military uniforms at a distance. The method includes steps of: a learning device instructing a softmax layer to generate softmax scores; instructing a loss weight layer to generate loss weight values by applying loss weight operations to the predicted error values generated therefrom; and instructing a softmax loss layer to generate adjusted softmax loss values by referring to initial softmax loss values, generated by referring to the softmax scores and their corresponding GTs, and to the loss weight values.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: August 6, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
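The adjusted softmax loss can be sketched as a per-pixel weighted cross-entropy, where the loss weight values (derived from predicted error values in the patent) up-weight hard pixels such as distant lanes. The NumPy sketch below assumes the scores are already softmax probabilities; the names and shapes are illustrative.

```python
import numpy as np

def adjusted_softmax_loss(scores, gt, loss_weights):
    """scores: (P, K) softmax probabilities per pixel; gt: (P,) class
    indices; loss_weights: (P,) values emphasizing error-prone pixels.
    Each pixel's initial cross-entropy is scaled by its loss weight."""
    p = scores[np.arange(len(gt)), gt]
    initial = -np.log(p + 1e-12)
    return (loss_weights * initial).mean()
```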
  • Patent number: 10373025
    Abstract: A method for verifying an integrity of one or more parameters of a convolutional neural network (CNN) by using at least one test pattern to be added to at least one original input is provided for fault tolerance, fluctuation robustness in extreme situations, functional safety on the CNN, and annotation cost reduction. The method includes steps of: (a) a computing device instructing at least one adding unit to generate at least one extended input by adding the test pattern to the original input; (b) the computing device instructing the CNN to generate at least one output for verification by applying one or more convolution operations to the extended input; and (c) the computing device instructing at least one comparing unit to verify the integrity of the parameters of the CNN by determining a validity of the output for verification with reference to at least one output for reference.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: Stradvision, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
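Steps (a) to (c) can be illustrated with a toy single-channel convolution: append the test pattern to the original input, convolve the extended input, and check that the pattern region of the output still matches a precomputed reference. The concatenation layout, tolerance, and helper names are assumptions for illustration, not the patented design.

```python
import numpy as np

def conv2d_valid(x, k):
    """Naive valid-mode 2-D correlation, standing in for the CNN's
    convolution operation."""
    kh, kw = k.shape
    h, w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = (x[i:i + kh, j:j + kw] * k).sum()
    return out

def verify_integrity(original, pattern, kernel, reference, tol=1e-6):
    """Append the test pattern beside the original input, convolve the
    extended input, and compare the pattern-only region of the output
    against the precomputed reference output. A mismatch signals that
    the parameters have been corrupted."""
    extended = np.concatenate([original, pattern], axis=1)
    out = conv2d_valid(extended, kernel)
    pattern_out = out[:, -(pattern.shape[1] - kernel.shape[1] + 1):]
    return bool(np.allclose(pattern_out, reference, atol=tol))
```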
  • Patent number: 10372573
    Abstract: A method for generating one or more test patterns and selecting optimized test patterns among the test patterns to verify an integrity of convolution operations is provided for fault tolerance, fluctuation robustness in extreme situations, functional safety of the convolution operations, and annotation cost reduction. The method includes: a computing device (a) instructing a pattern generating unit to generate the test patterns by using a certain function such that saturation does not occur while at least one original CNN applies the convolution operations to the test patterns; (b) instructing a pattern evaluation unit to generate each of evaluation scores of each of the test patterns by referring to each of the test patterns and one or more parameters of the original CNN; and (c) instructing a pattern selection unit to select the optimized test patterns among the test patterns by referring to the evaluation scores.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: Stradvision, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
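The abstract's "certain function" for avoiding saturation and its evaluation scores are not specified; the sketch below substitutes a simple amplitude bound derived from the absolute sum of the weights, and scores patterns by the variance of their convolution response. Both choices are hypothetical stand-ins for illustration.

```python
import numpy as np

def generate_patterns(num, shape, max_weight_sum, seed=0):
    """Draw candidate test patterns small enough that convolving them
    with weights whose absolute sum is max_weight_sum cannot push the
    response beyond a unit range (a stand-in anti-saturation rule)."""
    rng = np.random.default_rng(seed)
    scale = 1.0 / max_weight_sum
    return [rng.uniform(-scale, scale, shape) for _ in range(num)]

def select_patterns(patterns, kernel, top_k=2):
    """Score each pattern by the variance of its (flattened) convolution
    response — a hypothetical evaluation score — and keep the top_k."""
    scores = [np.var(np.convolve(p.ravel(), kernel.ravel(), mode='valid'))
              for p in patterns]
    order = np.argsort(scores)[::-1]
    return [patterns[i] for i in order[:top_k]]
```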
  • Patent number: 10373004
    Abstract: A method for detecting lane elements, which are unit regions including pixels of lanes in an input image, to plan the drive path of an autonomous vehicle by using a horizontal filter mask is provided. The method includes steps of: a computing device acquiring a segmentation score map from a CNN using the input image; instructing a post-processing module, capable of performing data processing at an output end of the CNN, to generate a magnitude map by using the segmentation score map and the horizontal filter mask; instructing the post-processing module to determine each of lane element candidates per each of rows of the segmentation score map by referring to values of the magnitude map; and instructing the post-processing module to apply estimation operations to each of the lane element candidates per each of the rows, to thereby detect each of the lane elements.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
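The magnitude map and per-row candidate steps can be sketched with a simple horizontal difference mask. The specific mask [-1, 0, 1], the threshold, and treating high-magnitude columns as lane-element candidates are illustrative assumptions about the post-processing module.

```python
import numpy as np

def lane_candidates_per_row(score_map, threshold=0.5):
    """score_map: (H, W) lane-segmentation scores. Correlate each row
    with a horizontal mask [-1, 0, 1] to build a magnitude map, then
    take columns whose absolute magnitude exceeds the threshold as the
    lane-element candidates for that row."""
    mask = np.array([-1.0, 0.0, 1.0])
    candidates = []
    for row in score_map:
        # np.convolve flips its kernel, so reversing the mask yields
        # correlation: mag[i] = row[i+1] - row[i-1] (zero-padded edges).
        mag = np.convolve(row, mask[::-1], mode='same')
        cols = np.where(np.abs(mag) > threshold)[0]
        candidates.append(cols.tolist())
    return candidates
```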
  • Patent number: 10373027
    Abstract: A method for acquiring a sample image for label-inspecting among auto-labeled images for learning a deep learning network, optimizing sampling processes for manual labeling, and reducing annotation costs is provided. The method includes steps of: a sample image acquiring device generating a first and a second image, instructing convolutional layers to generate a first and a second feature map, instructing pooling layers to generate a first and a second pooled feature map, and generating concatenated feature maps; instructing a deep learning classifier to acquire the concatenated feature maps, to thereby generate class information; and calculating probabilities of abnormal class elements in an abnormal class group, determining whether the auto-labeled image is a difficult image, and selecting the auto-labeled image as the sample image for label-inspecting. Further, the method can be performed by using a robust algorithm with multiple transform pairs.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: August 6, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10346693
    Abstract: A method of attention-based lane detection without post-processing by using a lane mask is provided. The method includes steps of: a learning device instructing a CNN to acquire a final feature map which has been generated by applying convolution operations to an image, a segmentation score map, and an embedded feature map which have been generated by using the final feature map; instructing a lane masking layer to recognize lane candidates, generate the lane mask, and generate a masked feature map; instructing a convolutional layer to generate a lane feature map; instructing a first FC layer to generate a softmax score map and a second FC layer to generate lane parameters; and backpropagating loss values outputted from a multinomial logistic loss layer and a line fitting loss layer, to thereby learn parameters of the FC layers, and the convolutional layer. Thus, lanes at distance can be detected more accurately.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 9, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10339424
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: a computing device (a) instructing a detector to acquire object location information for testing and class information; (b) instructing the grid generator to generate section information by referring to the object location information for testing; (c) instructing a neural network to determine parameters for testing, to be used for applying the neural network operations to either (i) the subsections including each of the objects for testing and each of non-objects for testing, or (ii) each of sub-regions, in each of the subsections, where said each of the non-objects for testing is located; and (d) instructing the neural network to apply the neural network operations to the test image for testing to thereby generate neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: July 2, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325179
    Abstract: A method for pooling at least one ROI by using one or more masking parameters is provided. The method is applicable to mobile devices, compact networks, and the like via hardware optimization. The method includes steps of: (a) a computing device, if an input image is acquired, instructing a convolutional layer of a CNN to generate a feature map corresponding to the input image; (b) the computing device instructing an RPN of the CNN to determine the ROI corresponding to at least one object included in the input image by using the feature map; (c) the computing device instructing an ROI pooling layer of the CNN to apply each of pooling operations correspondingly to each of sub-regions in the ROI by referring to each of the masking parameters corresponding to each of the pooling operations, to thereby generate a masked pooled feature map.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
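Step (c)'s masked pooling can be sketched as ordinary sub-region max pooling followed by elementwise masking. The binary (pool x pool) mask and the choice of max pooling are assumptions; the patent's masking parameters may weight sub-regions differently per pooling operation.

```python
import numpy as np

def masked_roi_pool(feature_map, roi, mask, pool=2):
    """feature_map: (H, W) single-channel map; roi: (y1, x1, y2, x2)
    with side lengths divisible by `pool` for simplicity; mask:
    (pool, pool) binary masking parameters selecting which pooled
    sub-regions survive (1) or are zeroed (0)."""
    y1, x1, y2, x2 = roi
    region = feature_map[y1:y2, x1:x2]
    sh, sw = region.shape[0] // pool, region.shape[1] // pool
    pooled = region.reshape(pool, sh, pool, sw).max(axis=(1, 3))
    return pooled * mask
```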
  • Patent number: 10325352
    Abstract: There is provided a method for transforming convolutional layers of a CNN including m convolutional blocks to optimize CNN parameter quantization to be used for mobile devices, compact networks, and the like with high precision via hardware optimization. The method includes steps of: a computing device (a) generating k-th quantization loss values by referring to k-th initial weights of a k-th initial convolutional layer included in a k-th convolutional block, a (k-1)-th feature map outputted from the (k-1)-th convolutional block, and each of k-th scaling parameters; (b) determining each of k-th optimized scaling parameters by referring to the k-th quantization loss values; (c) generating a k-th scaling layer and a k-th inverse scaling layer by referring to the k-th optimized scaling parameters; and (d) transforming the k-th initial convolutional layer into a k-th integrated convolutional layer by using the k-th scaling layer and the (k-1)-th inverse scaling layer.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
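Step (d)'s transformation can be illustrated for a per-channel 1x1 case: absorbing the preceding block's inverse-scaling layer and the current scaling layer directly into the convolution weights leaves the network function equivalent while the activations flow at a better-quantizable scale. The shapes and names below are illustrative assumptions.

```python
import numpy as np

def fold_scaling(weights, scale_in, scale_out):
    """Fold a preceding inverse-scaling layer (divide by scale_in per
    input channel) and a following scaling layer (multiply by scale_out
    per output channel) into a (C_out, C_in) weight matrix, producing
    the integrated layer's weights."""
    return weights * scale_out[:, None] / scale_in[None, :]
```

For any input x, the integrated layer applied to the scaled features (scale_in * x) reproduces scale_out * (W @ x), so the transformation changes dynamic range without changing what the network computes.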
  • Patent number: 10325201
    Abstract: A method for generating a deceivable composite image by using a GAN (Generative Adversarial Network) including a generating and a discriminating neural network to allow a surveillance system to recognize surroundings and detect a rare event, such as hazardous situations, more accurately by using a heterogeneous sensor fusion is provided. The method includes steps of: a computing device, generating location candidates of a rare object on a background image, and selecting a specific location candidate among the location candidates as an optimal location of the rare object by referring to candidate scores; inserting a rare object image into the optimal location, generating an initial composite image; and adjusting color values corresponding to each of pixels in the initial composite image, generating the deceivable composite image. Further, the method may be applicable to a pedestrian assistant system and a route planning by using 3D maps, GPS, smartphones, V2X communications, etc.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10325185
    Abstract: A method of online batch normalization, on-device learning, or continual learning, applicable to mobile devices, IoT devices, and the like, is provided. The method includes steps of: (a) a computing device instructing a convolutional layer to acquire a k-th batch, and to generate feature maps for the k-th batch by applying convolution operations to the input images included in the k-th batch respectively; and (b) the computing device instructing a batch normalization layer to calculate adjusted averages and adjusted variations of the feature maps by referring to the feature maps in case k is 1, or to the feature maps and previous feature maps, included in at least part of the previous batches among the previously generated first to (k-1)-th batches, in case k is an integer from 2 to m, and to apply batch normalization operations to the feature maps. Further, the method may be performed for military purposes, or for other devices such as drones and robots.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
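The batch-statistics rule of step (b) can be sketched as follows: normalize the k-th batch using statistics pooled over the current batch and (at least part of) the retained previous batches. The fixed-size history window and the absence of learned scale/shift parameters are simplifications for illustration.

```python
import numpy as np

class OnlineBatchNorm:
    """Retains features from up to `keep` recent batches and normalizes
    the incoming batch with statistics pooled over all retained ones
    (for k = 1, only the current batch contributes)."""

    def __init__(self, keep=4, eps=1e-5):
        self.keep, self.eps = keep, eps
        self.history = []

    def __call__(self, feats):
        self.history.append(feats)
        self.history = self.history[-self.keep:]
        pool = np.concatenate(self.history, axis=0)
        mean, var = pool.mean(axis=0), pool.var(axis=0)
        return (feats - mean) / np.sqrt(var + self.eps)
```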
  • Patent number: 10325371
    Abstract: A method for segmenting an image by using each of a plurality of weighted convolution filters for each of grid cells to be used for converting modes according to classes of areas is provided to satisfy level 4 of an autonomous vehicle. The method includes steps of: a learning device (a) instructing (i) an encoding layer to generate an encoded feature map and (ii) a decoding layer to generate a decoded feature map; (b) if a specific decoded feature map is divided into the grid cells, instructing a weight convolution layer to set weighted convolution filters therein to correspond to the grid cells, and to apply a weight convolution operation to the specific decoded feature map; and (c) backpropagating a loss. The method is applicable to CCTV for surveillance as the neural network may have respective optimum parameters to be applied to respective regions with respective distances.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: June 18, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10318842
    Abstract: A learning method for learning parameters of convolutional neural network (CNN) by using multiple video frames is provided. The learning method includes steps of: (a) a learning device applying at least one convolutional operation to a (t-k)-th input image corresponding to a (t-k)-th frame and applying at least one convolutional operation to a t-th input image corresponding to a t-th frame following the (t-k)-th frame, to thereby obtain a (t-k)-th feature map corresponding to the (t-k)-th frame and a t-th feature map corresponding to the t-th frame; (b) the learning device calculating a first loss by referring to each of at least one distance value between each of pixels in the (t-k)-th feature map and each of pixels in the t-th feature map; and (c) the learning device backpropagating the first loss to thereby optimize at least one parameter of the CNN.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: June 11, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
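The first loss of step (b) can be sketched as the mean, over pixel positions, of the distance between corresponding pixels of the two feature maps. Euclidean distance over channels is an assumption here; the abstract only requires some distance value per pixel pair.

```python
import numpy as np

def temporal_consistency_loss(feat_prev, feat_curr):
    """feat_prev, feat_curr: (C, H, W) feature maps for the (t-k)-th
    and t-th frames. Returns the mean per-pixel channel-wise Euclidean
    distance, which backpropagation drives toward temporally stable
    features."""
    return np.sqrt(((feat_prev - feat_curr) ** 2).sum(axis=0)).mean()
```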
  • Patent number: 10311335
    Abstract: A method of generating at least one image data set to be used for learning CNN capable of detecting at least one obstruction in one or more autonomous driving circumstances, comprising steps of: (a) a learning device acquiring (i) an original image representing a road driving circumstance and (ii) a synthesized label obtained by using an original label corresponding to the original image and an additional label corresponding to an arbitrary specific object, wherein the arbitrary specific object does not relate to the original image; and (b) the learning device supporting a first CNN module to generate a synthesized image using the original image and the synthesized label, wherein the synthesized image is created by combining (i) an image of the arbitrary specific object corresponding to the additional label and (ii) the original image.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: June 4, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10311337
    Abstract: A method for providing an integrated feature map by using an ensemble of a plurality of outputs from a convolutional neural network (CNN) is provided. The method includes steps of: a CNN device (a) receiving an input image and applying a plurality of modification functions to the input image to thereby generate a plurality of modified input images; (b) applying convolution operations to each of the modified input images to thereby obtain each of modified feature maps corresponding to each of the modified input images; (c) applying each of reverse transform functions, corresponding to each of the modification functions, to each of the corresponding modified feature maps, to thereby generate each of reverse transform feature maps corresponding to each of the modified feature maps; and (d) integrating at least part of the reverse transform feature maps to thereby obtain an integrated feature map.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: June 4, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
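Steps (a) to (d) can be sketched generically: apply each modification function, run the network, reverse-transform each result, and integrate. Averaging as the integration and the stand-in `conv` callable are assumptions; the patent only requires that at least part of the reverse transform feature maps be integrated.

```python
import numpy as np

def ensemble_feature_map(image, conv, transforms, inverses):
    """image: (H, W) input; conv: callable standing in for the CNN's
    convolution operations; transforms/inverses: paired modification
    and reverse-transform callables. Each modified input is processed,
    reverse-transformed, and the results are averaged into the
    integrated feature map."""
    maps = [inv(conv(t(image))) for t, inv in zip(transforms, inverses)]
    return np.mean(maps, axis=0)
```

When the network commutes with a transform (as an elementwise op does with a flip), each reverse-transformed map agrees with the unmodified one, and averaging suppresses transform-dependent noise.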