Patents by Inventor Kyungjoong Jeong

Kyungjoong Jeong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10460210
    Abstract: A method of neural network operations by using a grid generator is provided for converting modes according to classes of areas to satisfy level 4 of autonomous vehicles. The method includes steps of: (a) a computing device, if a test image is acquired, instructing a non-object detector to acquire non-object location information for testing and class information of the non-objects for testing by detecting the non-objects for testing on the test image; (b) the computing device instructing the grid generator to generate section information by referring to the non-object location information for testing; (c) the computing device instructing a neural network to determine parameters for testing; (d) the computing device instructing the neural network to apply the neural network operations to the test image by using each of the parameters for testing, to thereby generate one or more neural network outputs.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 29, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
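The per-section "mode conversion" this abstract describes amounts to a lookup: each grid section receives the parameter set associated with the class of non-object (road, sky, and so on) detected there. A minimal sketch, where the dict-based lookup, the class names, and the `default` fallback are all hypothetical illustrations rather than the patent's actual mechanism:

```python
def section_parameters(section_classes, params_by_class, default):
    # One parameter set per grid section, chosen by the class of the
    # non-object detected in that section; unknown classes fall back
    # to a default parameter set.
    return [params_by_class.get(c, default) for c in section_classes]

# Hypothetical class names and parameter handles, for illustration only.
params = {"road": "p_road", "sky": "p_sky"}
chosen = section_parameters(["road", "sky", "building"], params, "p_default")
```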
  • Patent number: 10452980
    Abstract: A learning method for extracting features from an input image by hardware optimization using n blocks in a convolutional neural network (CNN) is provided. The method includes steps of: a learning device instructing a first convolutional layer of a k-th block to elementwise add a (1_1)-st to a (k_1)-st feature maps or their processed feature maps, and instructing a second convolutional layer of the k-th block to generate a (k_2)-nd feature map; and feeding a pooled feature map, generated by pooling an ROI area on an (n_2)-nd feature map or its processed feature map, into a feature classifier; and instructing a loss layer to calculate losses by referring to outputs of the feature classifier and their corresponding GT. By optimizing hardware, CNN throughput can be improved, and the method becomes more appropriate for compact networks, mobile devices, and the like. Further, the method allows key performance index to be satisfied.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: October 22, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
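The core operation in this abstract is the elementwise addition of the first-convolution outputs of all earlier blocks before the k-th block's second convolution. A minimal sketch of that aggregation, assuming the maps have already been processed to a common shape (the abstract's "or their processed feature maps"):

```python
import numpy as np

def second_conv_input(first_conv_maps):
    # Elementwise-add the (1_1)-st through (k_1)-st feature maps to form
    # the input of the k-th block's second convolution. Equal shapes are
    # assumed here.
    return np.sum(np.stack(first_conv_maps), axis=0)

# Three hypothetical (channels, height, width) feature maps.
maps = [np.full((4, 8, 8), float(i)) for i in (1, 2, 3)]
agg = second_conv_input(maps)
```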
  • Patent number: 10445611
Abstract: A method for detecting at least one pseudo-3D bounding box based on a CNN capable of converting modes according to conditions of objects in an image is provided. The method includes steps of: a learning device (a) instructing a pooling layer to generate a pooled feature map corresponding to a 2D bounding box, and instructing a type-classifying layer to determine whether objects in the pooled feature map are truncated or non-truncated; (b) instructing FC layers to generate box pattern information corresponding to the pseudo-3D bounding box; (c) instructing classification layers to generate orientation class information on the objects, and regression layers to generate regression information on coordinates of the pseudo-3D bounding box; and (d) backpropagating class losses and regression losses generated from FC loss layers. Through the method, rendering of truncated objects can be performed during virtual driving, and this is useful for mobile devices and also for military purposes.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: October 15, 2019
    Assignee: Stradvision, Inc.
Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10438082
    Abstract: A method for learning parameters of a CNN capable of detecting ROIs determined based on bottom lines of nearest obstacles in an input image is provided. The method includes steps of: a learning device instructing a first to an n-th convolutional layers to generate a first to an n-th encoded feature maps from the input image; instructing an n-th to a first deconvolutional layers to generate an n-th to a first decoded feature maps from the n-th encoded feature map; if a specific decoded feature map is divided into directions of rows and columns, generating an obstacle segmentation result by referring to a feature of the n-th to the first decoded feature maps; instructing an RPN to generate an ROI bounding box by referring to each anchor box, and losses by referring to the ROI bounding box and its corresponding GT; and backpropagating the losses, to learn the parameters.
    Type: Grant
    Filed: October 26, 2018
    Date of Patent: October 8, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
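Reading the decoded feature map "in directions of rows and columns" is what ties this segmentation to the bottom lines of nearest obstacles: each column contributes one row index. A simplified sketch, where taking the per-column argmax is an assumption (the patent does not specify the decision rule):

```python
import numpy as np

def obstacle_bottom_lines(decoded_map):
    # decoded_map: (rows, cols) obstacle scores from a decoded feature map.
    # Per column, take the row with the highest score as the bottom line
    # of the nearest obstacle in that column.
    return np.argmax(decoded_map, axis=0)

score = np.zeros((6, 4))
score[3, :] = 1.0            # hypothetical obstacle response along row 3
rows = obstacle_bottom_lines(score)
```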
  • Patent number: 10430691
    Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using a target object merging network and a target region estimating network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device (i) instructing the target region estimating network to search for k-th estimated target regions, (ii) instructing an RPN to generate (k_1)-st to (k_n)-th object proposals, corresponding to an object on a (k_1)-st to a (k_n)-th manipulated images, and (iii) instructing the target object merging network to merge the object proposals and merge (k_1)-st to (k_n)-th object detection information, outputted from an FC layer. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: October 1, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
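The target object merging network must reconcile duplicate proposals found in different estimated target regions. Greedy non-maximum suppression is one plausible merging rule; the patent does not fix the rule, so everything below is an illustrative sketch:

```python
import numpy as np

def iou(a, b):
    # Intersection-over-union of two [x1, y1, x2, y2] boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

def merge_proposals(boxes, scores, thr=0.5):
    # Greedy NMS: keep the highest-scoring proposal, drop proposals that
    # overlap it above the threshold, repeat on the remainder.
    order = list(np.argsort(scores)[::-1])
    keep = []
    while order:
        i = order.pop(0)
        keep.append(int(i))
        order = [j for j in order if iou(boxes[i], boxes[j]) < thr]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
kept = merge_proposals(boxes, scores)
```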
  • Patent number: 10423840
    Abstract: A post-processing method for detecting lanes to plan the drive path of an autonomous vehicle by using a segmentation score map and a clustering map is provided. The method includes steps of: a computing device acquiring the segmentation score map and the clustering map from a CNN; instructing a post-processing module to detect lane elements including pixels forming the lanes referring to the segmentation score map and generate seed information referring to the lane elements, the segmentation score map, and the clustering map; instructing the post-processing module to generate base models referring to the seed information and generate lane anchors referring to the base models; instructing the post-processing module to generate lane blobs referring to the lane anchors; and instructing the post-processing module to detect lane candidates referring to the lane blobs and generate a lane model by line-fitting operations on the lane candidates.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 24, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
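The final step above, generating a lane model by line-fitting operations on the lane candidates, is commonly done by fitting a low-degree polynomial x = f(y) to the candidate pixels. The quadratic model and the sample points below are illustrative assumptions; the patent does not name the exact model:

```python
import numpy as np

def fit_lane_model(lane_pixels, degree=2):
    # Fit x = f(y) with a least-squares polynomial over lane-candidate
    # pixel coordinates (x, y).
    ys = np.array([p[1] for p in lane_pixels], float)
    xs = np.array([p[0] for p in lane_pixels], float)
    return np.polyfit(ys, xs, degree)

pixels = [(10, 0), (12, 5), (16, 10), (22, 15)]  # hypothetical lane-element centers
coeffs = fit_lane_model(pixels)
x_at_20 = float(np.polyval(coeffs, 20))          # extrapolate the lane forward
```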
  • Patent number: 10423860
Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements such as KPI by using an image concatenation and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing an image-manipulating network to generate n manipulated images; instructing an RPN to generate first to n-th object proposals respectively in the manipulated images, and instructing an FC layer to generate first to n-th object detection information; and instructing the target object merging network to merge the object proposals and merge the object detection information. In this method, the object proposals can be generated by using lidar. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: September 24, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10410352
Abstract: A learning method for improving a segmentation performance to be used for detecting events including a pedestrian event, a vehicle event, a falling event, and a fallen event using a learning device is provided. The method includes steps of: the learning device (a) instructing k convolutional layers to generate k encoded feature maps; (b) instructing k-1 deconvolutional layers to sequentially generate k-1 decoded feature maps, wherein the learning device instructs h mask layers to refer to h original decoded feature maps outputted from h deconvolutional layers corresponding thereto and h edge feature maps generated by extracting edge parts from the h original decoded feature maps; and (c) instructing h edge loss layers to generate h edge losses by referring to the edge parts and their corresponding GTs. Further, the method allows a degree of detecting traffic sign, landmark, road marker, and the like to be increased.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 10, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
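The edge feature maps in this abstract are produced by extracting edge parts from the original decoded feature maps; the edge losses then compare those parts against edge ground truths. The patent leaves the edge operator unspecified, so the finite-difference gradient below is only one simple choice:

```python
import numpy as np

def edge_feature_map(decoded):
    # Extract edge parts from a decoded feature map as the sum of absolute
    # finite differences along both axes (a simple gradient-magnitude proxy).
    gy = np.abs(np.diff(decoded, axis=0, prepend=decoded[:1]))
    gx = np.abs(np.diff(decoded, axis=1, prepend=decoded[:, :1]))
    return gx + gy

m = np.zeros((4, 4))
m[:, 2:] = 1.0               # a vertical step edge between columns 1 and 2
e = edge_feature_map(m)
```

An edge loss layer could then score `e` against a ground-truth edge map with, for example, an L2 difference.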
  • Patent number: 10408939
    Abstract: A method for integrating, at each convolution stage in a neural network, an image generated by a camera and its corresponding point-cloud map generated by a radar, a LiDAR, or a heterogeneous sensor fusion is provided to be used for an HD map update. The method includes steps of: a computing device instructing an initial operation layer to integrate the image and its corresponding original point-cloud map, to generate a first fused feature map and a first fused point-cloud map; instructing a transformation layer to apply a first transformation operation to the first fused feature map, and to apply a second transformation operation to the first fused point-cloud map; and instructing an integration layer to integrate feature maps outputted from the transformation layer, to generate a second fused point-cloud map. By the method, an object detection and a segmentation can be performed more efficiently with a distance estimation.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 10, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
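The initial operation layer above integrates the camera image with its corresponding point-cloud map before any convolution stage. Channel concatenation of a projected depth map is one plausible reading of that integration; the patent does not fix the operation, so this sketch is illustrative:

```python
import numpy as np

def initial_fusion(image_feat, pointcloud_depth):
    # Append the point-cloud depth (projected into the image plane) as an
    # extra channel of the camera feature map, so later stages can use
    # distance alongside appearance.
    return np.concatenate([image_feat, pointcloud_depth[None]], axis=0)

feat = np.zeros((8, 16, 16))   # hypothetical 8-channel image features
depth = np.ones((16, 16))      # hypothetical projected depth map
fused = initial_fusion(feat, depth)
```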
  • Patent number: 10410120
Abstract: A method for learning an object detector based on a region-based convolutional neural network (R-CNN) capable of converting modes according to aspect ratios or scales of objects is provided. The aspect ratio and the scale of the objects including traffic lights may be determined according to characteristics, such as distance from the object detector, shapes, and the like, of the object. The method includes steps of: a learning device instructing an RPN to generate ROI candidates; instructing pooling layers to output feature vectors; and learning the FC layers and the convolutional layer through backpropagation. In this method, pooling processes may be performed depending on real ratios and real sizes of the objects by using distance information and object information obtained through a radar, a lidar or other sensors. Also, the method can be used for surveillance as humans at a specific location have similar sizes.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 10, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402686
    Abstract: A method for an object detector to be used for surveillance based on a convolutional neural network capable of converting modes according to scales of objects is provided. The method includes steps of: a learning device (a) instructing convolutional layers to output a feature map by applying convolution operations to an image and instructing an RPN to output ROIs in the image; (b) instructing pooling layers to output first feature vectors by pooling each of ROI areas on the feature map per each of their scales, instructing first FC layers to output second feature vectors, and instructing second FC layers to output class information and regression information; and (c) instructing loss layers to generate class losses and regression losses by referring to the class information, the regression information, and their corresponding GTs.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402695
    Abstract: A method for learning parameters of a CNN for image recognition is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating pixels, per each ROI, on pooled ROI feature maps; (b) instructing a 1×H1 convolutional layer to generate a first adjusted feature map using a first reshaped feature map, generated by concatenating features in H1 channels of the integrated feature map, and instructing a 1×H2 convolutional layer to generate a second adjusted feature map using a second reshaped feature map, generated by concatenating features in H2 channels of the first adjusted feature map; and (c) instructing a second transposing layer or a classifying layer to divide the second adjusted feature map by each pixel, to thereby generate pixel-wise feature maps.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
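The reason the 1×H1 and 1×H2 convolutions appear here is that, once features from H channels are concatenated into a reshaped map, a 1×H convolution acts per pixel as a plain matrix multiplication, which conv-oriented hardware executes natively. A sketch of that equivalence (the shapes and kernel values are hypothetical):

```python
import numpy as np

def conv_1xH(x, w):
    # A 1xH convolution across the H concatenated channels of a reshaped
    # feature map is, per pixel, a matrix multiplication.
    # w: (out_channels, H), x: (H, height, width).
    return np.einsum('oc,chw->ohw', w, x)

x = np.arange(2 * 3 * 3, dtype=float).reshape(2, 3, 3)  # H1 = 2 channels
w = np.array([[1.0, 1.0]])                              # hypothetical 1x2 kernel
y = conv_1xH(x, w)
```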
  • Patent number: 10402977
    Abstract: A learning method for improving a segmentation performance in detecting edges of road obstacles and traffic signs, etc. required to satisfy level 4 and level 5 of autonomous vehicles using a learning device is provided. The traffic signs, as well as landmarks and road markers may be detected more accurately by reinforcing text parts as edge parts in an image. The method includes steps of: the learning device (a) instructing k convolutional layers to generate k encoded feature maps, including h encoded feature maps corresponding to h mask layers; (b) instructing k deconvolutional layers to generate k decoded feature maps (i) by using h bandpass feature maps and h decoded feature maps corresponding to the h mask layers and (ii) by using feature maps to be inputted respectively to k-h deconvolutional layers; and (c) adjusting parameters of the deconvolutional and convolutional layers.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402978
Abstract: A method for detecting a pseudo-3D bounding box based on a CNN capable of converting modes according to poses of detected objects using an instance segmentation is provided to be used for realistic rendering in virtual driving. Shade information of each of surfaces of the pseudo-3D bounding box can be reflected on the learning according to this method. The pseudo-3D bounding box may be obtained through a lidar or a radar, and the surface may be segmented by using a camera. The method includes steps of: a learning device instructing a pooling layer to apply pooling operations to a 2D bounding box region, thereby generating a pooled feature map, and instructing an FC layer to apply neural network operations thereto; instructing a convolutional layer to apply convolution operations to surface regions; and instructing an FC loss layer to generate class losses and regression losses.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402724
    Abstract: A method for acquiring a pseudo-3D box from a 2D bounding box in a training image is provided. The method includes steps of: (a) a computing device acquiring the training image including an object bounded by the 2D bounding box; (b) the computing device performing (i) a process of classifying a pseudo-3D orientation of the object, by referring to information on probabilities corresponding to respective patterns of pseudo-3D orientation and (ii) a process of acquiring 2D coordinates of vertices of the pseudo-3D box by using regression analysis; and (c) the computing device adjusting parameters thereof by backpropagating loss information determined by referring to at least one of (i) differences between the acquired 2D coordinates of the vertices of the pseudo-3D box and 2D coordinates of ground truth corresponding to the pseudo-3D box, and (ii) differences between the classified pseudo-3D orientation and ground truth corresponding to the pseudo-3D orientation.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: September 3, 2019
    Assignee: STRADVISION, INC.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
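Step (c) above backpropagates a loss built from two difference terms: one between the classified pseudo-3D orientation and its ground truth, and one between the regressed vertex coordinates and theirs. A sketch of such a combined loss, where the cross-entropy/L2 pairing and the equal weighting are assumptions (the patent only says both differences drive backpropagation):

```python
import numpy as np

def pseudo3d_loss(orient_probs, orient_gt, vertices, vertices_gt):
    # Cross-entropy on the orientation pattern plus mean squared error on
    # the 2D coordinates of the pseudo-3D box vertices.
    cls = -np.log(orient_probs[orient_gt] + 1e-9)
    reg = np.mean((np.asarray(vertices) - np.asarray(vertices_gt)) ** 2)
    return cls + reg

probs = np.array([0.1, 0.8, 0.1])          # hypothetical orientation-pattern scores
v = np.zeros((8, 2))                       # eight predicted vertices, (x, y) each
v_gt = np.zeros((8, 2))                    # matching ground truth
loss = pseudo3d_loss(probs, 1, v, v_gt)
```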
  • Patent number: 10402692
Abstract: A method for learning parameters of an object detector by using a target object estimating network adaptable to customers' requirements such as KPI is provided. When a focal length or a resolution changes depending on the KPI, scales of objects also change. In this method for customer optimizable design, unsecure objects such as falling or fallen objects may be detected more accurately, and also fluctuations of the objects may be detected. Therefore, the method can be useful for military purposes or for detecting objects at a distance. The method includes steps of: a learning device instructing an RPN to generate k-th object proposals on k-th manipulated images which correspond to a (k-1)-th target region on an image; instructing an FC layer to generate object detection information corresponding to k-th objects; and instructing an FC loss layer to generate FC losses, by increasing k from 1 to n.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10395140
Abstract: A method for learning parameters of an object detector based on a CNN is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating pixels per each proposal; and instructing a second transposing layer or a classifying layer to divide a volume-adjusted feature map, generated by using the integrated feature map, by pixel, and instructing the classifying layer to generate object class information. By this method, the size of a chip can be decreased as convolution operations and fully connected layer operations can be performed by a same processor. Accordingly, there are advantages such as no need to build additional lines in a semiconductor manufacturing process, power saving, more space to place other modules instead of an FC module in a die, and the like.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 27, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
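The claim that convolution and fully connected operations can share one processor rests on a standard identity: a fully connected layer applied to a 1×1 spatial map is exactly a 1×1 convolution. The check below illustrates that identity and is not the patent's specific layer layout:

```python
import numpy as np

def fc(x_vec, w):
    # Fully connected layer: plain matrix-vector product.
    return w @ x_vec

def conv1x1(x_map, w):
    # The same weights applied as a 1x1 convolution over a 1x1 spatial map.
    # w: (out_channels, in_channels), x_map: (in_channels, 1, 1).
    return np.einsum('oc,chw->ohw', w, x_map)

x = np.arange(4.0)       # hypothetical 4-channel feature vector
w = np.ones((2, 4))      # hypothetical weights, 2 output channels
same = np.allclose(fc(x, w), conv1x1(x.reshape(4, 1, 1), w).ravel())
```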
  • Patent number: 10395392
Abstract: A method for learning transformation of an annotated RGB image into an annotated non-RGB image, in a target color space, by using a cycle GAN and for domain adaptation capable of reducing annotation cost and optimizing customer requirements is provided. The method includes steps of: a learning device transforming a first image in an RGB format to a second image in a non-RGB format, determining whether the second image has a primary or a secondary non-RGB format, and transforming the second image to a third image in the RGB format; transforming a fourth image in the non-RGB format to a fifth image in the RGB format, determining whether the fifth image has a primary RGB format or a secondary RGB format, and transforming the fifth image to a sixth image in the non-RGB format. Further, by the method, training data can be generated even with virtual driving environments.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: August 27, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
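The first-to-third and fourth-to-sixth image chains above are the two halves of a cycle: mapping into the other color space and back should reproduce the input, which standard cycle GAN training enforces with an L1 cycle-consistency loss. The stand-in generators below are placeholders; the abstract's primary/secondary-format discriminators are the adversarial terms and are omitted here:

```python
import numpy as np

def cycle_consistency_loss(x, g, f):
    # || F(G(x)) - x ||_1 : an image sent RGB -> non-RGB -> RGB should
    # return to itself (and symmetrically for the other direction).
    return np.mean(np.abs(f(g(x)) - x))

g = lambda img: img * 2.0       # stand-in RGB -> non-RGB generator
f = lambda img: img / 2.0       # stand-in non-RGB -> RGB generator
x = np.random.rand(3, 4, 4)     # hypothetical RGB image
loss = cycle_consistency_loss(x, g, f)
```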
  • Patent number: 10387754
    Abstract: A method for learning parameters of an object detector based on a CNN is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a first transposing layer or a pooling layer to concatenate pixels, per each proposal, on pooled feature maps per each proposal; (b) instructing a 1×H1 and a 1×H2 convolutional layers to apply a 1×H1 and a 1×H2 convolution operations to reshaped feature maps generated by concatenating each feature in each of corresponding channels among all channels of the concatenated pooled feature map, to thereby generate an adjusted feature map; and (c) instructing a second transposing layer or a classifying layer to generate pixel-wise feature maps per each proposal by dividing the adjusted feature map by each pixel, and backpropagating object detection losses calculated by referring to object detection information and its corresponding GT.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10387752
Abstract: A method for learning parameters of an object detector with hardware optimization based on a CNN for detection at a distance or for military purposes using an image concatenation is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: (a) concatenating n manipulated images which correspond to n target regions; (b) instructing an RPN to generate first to n-th object proposals in the n manipulated images by using an integrated feature map, and instructing a pooling layer to apply pooling operations to regions, corresponding to the first to the n-th object proposals, on the integrated feature map; and (c) instructing an FC loss layer to generate first to n-th FC losses by referring to the object detection information, outputted from an FC layer.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho