Patents Assigned to StradVision, Inc.
  • Patent number: 10423860
    Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customers' requirements, such as KPI, by using an image concatenation and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing an image-manipulating network to generate n manipulated images; instructing an RPN to generate first to n-th object proposals respectively in the manipulated images, and instructing an FC layer to generate first to n-th object detection information; and instructing the target object merging network to merge the object proposals and merge the object detection information. In this method, the object proposals can be generated by using LiDAR. The method can be useful for multi-camera setups, an SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: September 24, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
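A minimal sketch of the merging step the abstract describes: proposals generated on each of the n manipulated images are mapped back to original-image coordinates and merged with non-maximum suppression. The array layout, crop offsets, and IoU threshold are illustrative assumptions, not the patented procedure.

```python
import numpy as np

def merge_proposals(proposals_per_image, offsets, iou_thresh=0.5):
    """Map per-crop box proposals back to original-image coordinates,
    then merge overlapping boxes with non-maximum suppression (NMS).

    proposals_per_image: list of (N_i, 5) arrays [x1, y1, x2, y2, score]
    offsets: list of (dx, dy) top-left offsets of each manipulated image
    """
    boxes = []
    for props, (dx, dy) in zip(proposals_per_image, offsets):
        shifted = props.copy()
        shifted[:, [0, 2]] += dx   # shift x coordinates into the original frame
        shifted[:, [1, 3]] += dy   # shift y coordinates into the original frame
        boxes.append(shifted)
    boxes = np.vstack(boxes)
    order = boxes[:, 4].argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the top-scoring box against the remaining boxes
        xx1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        yy1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        xx2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        yy2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.maximum(0, xx2 - xx1) * np.maximum(0, yy2 - yy1)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[order[1:], 2] - boxes[order[1:], 0]) * \
                 (boxes[order[1:], 3] - boxes[order[1:], 1])
        iou = inter / (area_i + area_r - inter)
        order = order[1:][iou <= iou_thresh]
    return boxes[keep]
```

Duplicated detections of one object found in overlapping crops collapse to the single highest-scoring box, while distinct objects survive.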
  • Patent number: 10423840
    Abstract: A post-processing method for detecting lanes to plan the drive path of an autonomous vehicle by using a segmentation score map and a clustering map is provided. The method includes steps of: a computing device acquiring the segmentation score map and the clustering map from a CNN; instructing a post-processing module to detect lane elements including pixels forming the lanes referring to the segmentation score map and generate seed information referring to the lane elements, the segmentation score map, and the clustering map; instructing the post-processing module to generate base models referring to the seed information and generate lane anchors referring to the base models; instructing the post-processing module to generate lane blobs referring to the lane anchors; and instructing the post-processing module to detect lane candidates referring to the lane blobs and generate a lane model by line-fitting operations on the lane candidates.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 24, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
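The final step above, generating a lane model by line-fitting the lane candidates, can be sketched with a simple polynomial fit; fitting x as a function of y (rather than the reverse) is a common choice for near-vertical lanes in a forward camera view, and the degree-2 default here is an assumption, not the patent's model.

```python
import numpy as np

def fit_lane_model(lane_pixels, degree=2):
    """Fit an x = f(y) polynomial lane model to detected lane-element pixels.

    lane_pixels: (N, 2) array of (x, y) pixel coordinates of one lane blob.
    Returns polynomial coefficients, highest power first.
    """
    x, y = lane_pixels[:, 0], lane_pixels[:, 1]
    return np.polyfit(y, x, degree)

def sample_lane(coeffs, ys):
    """Evaluate the fitted lane model at the given image rows."""
    return np.polyval(coeffs, ys)
```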
  • Patent number: 10410120
    Abstract: A method for learning an object detector based on a region-based convolutional neural network (R-CNN) capable of converting modes according to aspect ratios or scales of objects is provided. The aspect ratio and the scale of the objects, including traffic lights, may be determined according to characteristics of the objects, such as their distance from the object detector, their shapes, and the like. The method includes steps of: a learning device instructing an RPN to generate ROI candidates; instructing pooling layers to output feature vectors; and learning the FC layers and the convolutional layer through backpropagation. In this method, pooling processes may be performed depending on real ratios and real sizes of the objects by using distance information and object information obtained through a radar, a LiDAR, or other sensors. Also, the method can be used for surveillance, as humans at a specific location have similar sizes.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 10, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
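One way the "mode conversion according to aspect ratios" could look in practice is selecting an ROI-pooling grid from the box's width-to-height ratio; the thresholds and grid sizes below are illustrative assumptions, not values from the patent.

```python
def select_pooling_mode(box, thresholds=(0.5, 2.0)):
    """Choose an ROI-pooling output shape from a box's aspect ratio.

    Wide boxes (e.g. a car seen from the side) get a wide pooling grid,
    tall boxes (e.g. pedestrians, traffic lights) a tall one, and roughly
    square boxes a square grid, so pooling matches the object's real shape.
    Returns (pooled_height, pooled_width).
    """
    x1, y1, x2, y2 = box
    aspect = (x2 - x1) / max(y2 - y1, 1e-6)  # width / height
    low, high = thresholds
    if aspect < low:       # tall object
        return (14, 7)
    elif aspect > high:    # wide object
        return (7, 14)
    return (7, 7)          # near-square object
```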
  • Patent number: 10408939
    Abstract: A method for integrating, at each convolution stage in a neural network, an image generated by a camera and its corresponding point-cloud map generated by a radar, a LiDAR, or a heterogeneous sensor fusion is provided to be used for an HD map update. The method includes steps of: a computing device instructing an initial operation layer to integrate the image and its corresponding original point-cloud map, to generate a first fused feature map and a first fused point-cloud map; instructing a transformation layer to apply a first transformation operation to the first fused feature map, and to apply a second transformation operation to the first fused point-cloud map; and instructing an integration layer to integrate feature maps outputted from the transformation layer, to generate a second fused point-cloud map. By the method, an object detection and a segmentation can be performed more efficiently with a distance estimation.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: September 10, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
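A toy sketch of one fusion stage: the camera feature map is concatenated with a depth channel rasterized from the projected point cloud, and the point-cloud side picks up pooled image context in return. The tensor layout and the mean-pooled "context" channel are assumptions for illustration; the patent's transformation operations are more elaborate.

```python
import numpy as np

def fuse_stage(feature_map, point_cloud_map):
    """One camera/point-cloud fusion stage.

    feature_map:     (C, H, W) image features
    point_cloud_map: (H, W) per-pixel depth from projected LiDAR/radar points
    Returns a fused feature map and a fused point-cloud map.
    """
    depth = point_cloud_map[None, :, :]                  # (1, H, W)
    # Image branch gains a depth channel.
    fused_features = np.concatenate([feature_map, depth], axis=0)
    # Point-cloud branch gains a pooled summary of the image features.
    context = feature_map.mean(axis=0, keepdims=True)    # (1, H, W)
    fused_point_cloud = np.concatenate([depth, context], axis=0)
    return fused_features, fused_point_cloud
```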
  • Patent number: 10410352
    Abstract: A learning method for improving a segmentation performance to be used for detecting events including a pedestrian event, a vehicle event, a falling event, and a fallen event using a learning device is provided. The method includes steps of: the learning device (a) instructing k convolutional layers to generate k encoded feature maps; (b) instructing k−1 deconvolutional layers to sequentially generate k−1 decoded feature maps, wherein the learning device instructs h mask layers to refer to h original decoded feature maps outputted from h deconvolutional layers corresponding thereto and h edge feature maps generated by extracting edge parts from the h original decoded feature maps; and (c) instructing h edge loss layers to generate h edge losses by referring to the edge parts and their corresponding GTs. Further, the method allows detection of traffic signs, landmarks, road markers, and the like to be improved.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 10, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
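A minimal sketch of extracting edge parts from a decoded feature map and scoring them against a ground truth, assuming a simple high-pass (map minus box-blurred map) as the edge extractor and an L1 edge loss; the patent does not specify these particular operators.

```python
import numpy as np

def edge_feature_map(decoded, k=3):
    """Extract edge parts of a 2-D decoded feature map as the difference
    between the map and a k-by-k box-blurred copy (a simple high-pass)."""
    h, w = decoded.shape
    pad = k // 2
    padded = np.pad(decoded, pad, mode="edge")
    blurred = np.zeros_like(decoded)
    for i in range(h):
        for j in range(w):
            blurred[i, j] = padded[i:i + k, j:j + k].mean()
    return decoded - blurred

def edge_loss(edge_map, edge_gt):
    """L1 edge loss between extracted edges and their ground truth."""
    return np.abs(edge_map - edge_gt).mean()
```

A constant feature map contains no edges, so its edge map is all zeros and the loss against an all-zero ground truth vanishes.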
  • Patent number: 10402978
    Abstract: A method for detecting a pseudo-3D bounding box based on a CNN capable of converting modes according to poses of detected objects using an instance segmentation is provided, to be used for realistic rendering in virtual driving. Shade information of each of the surfaces of the pseudo-3D bounding box can be reflected in the learning according to this method. The pseudo-3D bounding box may be obtained through a LiDAR or a radar, and the surfaces may be segmented by using a camera. The method includes steps of: a learning device instructing a pooling layer to apply pooling operations to a 2D bounding box region, thereby generating a pooled feature map, and instructing an FC layer to apply neural network operations thereto; instructing a convolutional layer to apply convolution operations to surface regions; and instructing an FC loss layer to generate class losses and regression losses.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402695
    Abstract: A method for learning parameters of a CNN for image recognition is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating pixels, per each ROI, on pooled ROI feature maps; (b) instructing a 1×H1 convolutional layer to generate a first adjusted feature map using a first reshaped feature map, generated by concatenating features in H1 channels of the integrated feature map, and instructing a 1×H2 convolutional layer to generate a second adjusted feature map using a second reshaped feature map, generated by concatenating features in H2 channels of the first adjusted feature map; and (c) instructing a second transposing layer or a classifying layer to divide the second adjusted feature map by each pixel, to thereby generate pixel-wise feature maps.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402686
    Abstract: A method for an object detector to be used for surveillance based on a convolutional neural network capable of converting modes according to scales of objects is provided. The method includes steps of: a learning device (a) instructing convolutional layers to output a feature map by applying convolution operations to an image and instructing an RPN to output ROIs in the image; (b) instructing pooling layers to output first feature vectors by pooling each of ROI areas on the feature map per each of their scales, instructing first FC layers to output second feature vectors, and instructing second FC layers to output class information and regression information; and (c) instructing loss layers to generate class losses and regression losses by referring to the class information, the regression information, and their corresponding GTs.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402977
    Abstract: A learning method for improving a segmentation performance in detecting edges of road obstacles, traffic signs, and the like, required to satisfy levels 4 and 5 of autonomous driving, using a learning device is provided. The traffic signs, as well as landmarks and road markers, may be detected more accurately by reinforcing text parts as edge parts in an image. The method includes steps of: the learning device (a) instructing k convolutional layers to generate k encoded feature maps, including h encoded feature maps corresponding to h mask layers; (b) instructing k deconvolutional layers to generate k decoded feature maps (i) by using h bandpass feature maps and h decoded feature maps corresponding to the h mask layers and (ii) by using feature maps to be inputted respectively to k−h deconvolutional layers; and (c) adjusting parameters of the deconvolutional and convolutional layers.
    Type: Grant
    Filed: January 25, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402724
    Abstract: A method for acquiring a pseudo-3D box from a 2D bounding box in a training image is provided. The method includes steps of: (a) a computing device acquiring the training image including an object bounded by the 2D bounding box; (b) the computing device performing (i) a process of classifying a pseudo-3D orientation of the object, by referring to information on probabilities corresponding to respective patterns of pseudo-3D orientation and (ii) a process of acquiring 2D coordinates of vertices of the pseudo-3D box by using regression analysis; and (c) the computing device adjusting parameters thereof by backpropagating loss information determined by referring to at least one of (i) differences between the acquired 2D coordinates of the vertices of the pseudo-3D box and 2D coordinates of ground truth corresponding to the pseudo-3D box, and (ii) differences between the classified pseudo-3D orientation and ground truth corresponding to the pseudo-3D orientation.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10402692
    Abstract: A method for learning parameters of an object detector by using a target object estimating network adaptable to customers' requirements such as KPI is provided. When a focal length or a resolution changes depending on the KPI, scales of objects also change. In this method for customer-optimizable design, unsecured objects such as falling or fallen objects may be detected more accurately, and fluctuations of the objects may also be detected. Therefore, the method can be usefully performed for military purposes or for detection of objects at a distance. The method includes steps of: a learning device instructing an RPN to generate k-th object proposals on k-th manipulated images which correspond to a (k−1)-th target region on an image; instructing an FC layer to generate object detection information corresponding to k-th objects; and instructing an FC loss layer to generate FC losses, by increasing k from 1 to n.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: September 3, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10395140
    Abstract: A method for learning parameters of an object detector based on a CNN is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating pixels per each proposal; and instructing a second transposing layer or a classifying layer to divide a volume-adjusted feature map, generated by using the integrated feature map, by pixel, and instructing the classifying layer to generate object class information. By this method, the size of a chip can be decreased, as convolution operations and fully connected layer operations can be performed by the same processor. Accordingly, there are advantages such as no need to build additional lines in a semiconductor manufacturing process, power saving, more space to place other modules instead of an FC module in a die, and the like.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 27, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
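The hardware claim above, running FC layers on the convolution processor, rests on the standard equivalence between a fully connected layer and a 1×1 convolution over a 1×1 spatial map. A sketch of that equivalence (the array shapes are illustrative):

```python
import numpy as np

def fc_layer(x, weights, bias):
    """Ordinary fully connected layer: x is (C_in,), weights (C_out, C_in)."""
    return weights @ x + bias

def fc_as_conv(feature_map, weights, bias):
    """The same operation expressed as a 1x1 convolution over a 1x1
    spatial map, so convolution hardware can execute FC layers.

    feature_map: (C_in, 1, 1); weights: (C_out, C_in); bias: (C_out,)
    """
    c_in = feature_map.shape[0]
    x = feature_map.reshape(c_in)   # flatten the 1x1 spatial dims
    out = weights @ x + bias        # a 1x1 conv is a per-pixel matrix multiply
    return out.reshape(-1, 1, 1)    # back to (C_out, 1, 1)
```

Both paths produce identical outputs, which is what lets one processor serve both layer types.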
  • Patent number: 10395392
    Abstract: A method for learning transformation of an annotated RGB image into an annotated non-RGB image, in a target color space, by using a cycle GAN, and for domain adaptation capable of reducing annotation cost and optimizing customer requirements, is provided. The method includes steps of: a learning device transforming a first image in an RGB format to a second image in a non-RGB format, determining whether the second image has a primary or a secondary non-RGB format, and transforming the second image to a third image in the RGB format; transforming a fourth image in the non-RGB format to a fifth image in the RGB format, determining whether the fifth image has a primary RGB format or a secondary RGB format, and transforming the fifth image to a sixth image in the non-RGB format. Further, by the method, training data can be generated even with virtual driving environments.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: August 27, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
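The cycle-GAN training signal implied by the abstract's first-through-sixth images can be sketched as cycle-consistency terms (the first image should survive the RGB → non-RGB → RGB round trip, and likewise the fourth) plus adversarial terms. The least-squares adversarial form and equal weighting below are assumptions, not the patent's exact losses.

```python
import numpy as np

def cycle_gan_losses(first, third, fourth, sixth,
                     d_second_real_score, d_fifth_real_score):
    """Sketch of cycle-GAN generator losses for RGB <-> non-RGB learning.

    first/third:  original RGB image and its round-trip reconstruction
    fourth/sixth: original non-RGB image and its round-trip reconstruction
    d_*_real_score: discriminator 'looks real' scores in [0, 1] for the
    transformed second and fifth images.
    """
    cycle_rgb = np.abs(first - third).mean()       # RGB image recovered?
    cycle_non_rgb = np.abs(fourth - sixth).mean()  # non-RGB image recovered?
    # The generators want the discriminators to score their outputs as real.
    adv = (1 - d_second_real_score) ** 2 + (1 - d_fifth_real_score) ** 2
    return cycle_rgb + cycle_non_rgb + adv
```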
  • Patent number: 10387754
    Abstract: A method for learning parameters of an object detector based on a CNN is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (a) instructing a first transposing layer or a pooling layer to concatenate pixels, per each proposal, on pooled feature maps per each proposal; (b) instructing 1×H1 and 1×H2 convolutional layers to apply 1×H1 and 1×H2 convolution operations to reshaped feature maps generated by concatenating each feature in each of corresponding channels among all channels of the concatenated pooled feature map, to thereby generate an adjusted feature map; and (c) instructing a second transposing layer or a classifying layer to generate pixel-wise feature maps per each proposal by dividing the adjusted feature map by each pixel, and backpropagating object detection losses calculated by referring to object detection information and its corresponding GT.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10387753
    Abstract: A method for learning parameters of a CNN for image recognition is provided to be used for hardware optimization which satisfies KPI. The method includes steps of: a learning device (1) instructing a first transposing layer or a pooling layer to generate an integrated feature map by concatenating each of pixels, per each of ROIs, in corresponding locations on pooled ROI feature maps; and (2) (i) instructing a second transposing layer or a classifying layer to divide an adjusted feature map, whose volume is adjusted from the integrated feature map, by each of the pixels, and instructing the classifying layer to generate object information on the ROIs, and (ii) backpropagating object losses. The size of a chip can be decreased, as convolution operations and fully connected layer operations are performed by the same processor. Accordingly, there are advantages such as no need to build additional lines in a semiconductor manufacturing process.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10387752
    Abstract: A method for learning parameters of an object detector with hardware optimization, based on a CNN, for detection at a distance or for military purposes using an image concatenation is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: (a) concatenating n manipulated images which correspond to n target regions; (b) instructing an RPN to generate first to n-th object proposals in the n manipulated images by using an integrated feature map, and instructing a pooling layer to apply pooling operations to regions, corresponding to the first to the n-th object proposals, on the integrated feature map; and (c) instructing an FC loss layer to generate first to n-th FC losses by referring to the object detection information outputted from an FC layer.
    Type: Grant
    Filed: January 22, 2019
    Date of Patent: August 20, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10380724
    Abstract: A method for learning to reduce distortion occurring in a warped image by using a GAN is provided for enhancing fault tolerance and fluctuation robustness in extreme situations. The method includes steps of: (a) if an initial image is acquired, instructing an adjusting layer included in the generating network to adjust at least part of initial feature values, to thereby transform the initial image into an adjusted image; and (b) if at least part of (i) a naturality score, (ii) a maintenance score, and (iii) a similarity score are acquired, instructing a loss layer included in the generating network to generate a generating network loss by referring to said at least part of the naturality score, the maintenance score, and the similarity score, and to learn parameters of the generating network. Further, the method can be used for estimating behaviors, and for detecting or tracking objects with high precision.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 13, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
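A sketch of how the three scores might combine into the generating network loss, assuming each score lies in [0, 1] with 1 meaning "fully natural / content-preserving / similar" so that the loss penalizes each shortfall; the weighting scheme is an illustrative assumption.

```python
def generating_network_loss(naturality, maintenance, similarity,
                            weights=(1.0, 1.0, 1.0)):
    """Combine naturality, maintenance, and similarity scores into one
    loss for the generating network: a weighted sum of how far each
    score falls short of a perfect 1.0."""
    w_n, w_m, w_s = weights
    return (w_n * (1 - naturality)
            + w_m * (1 - maintenance)
            + w_s * (1 - similarity))
```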
  • Patent number: 10372573
    Abstract: A method for generating one or more test patterns and selecting optimized test patterns among the test patterns to verify an integrity of convolution operations is provided for fault tolerance, fluctuation robustness in extreme situations, functional safety of the convolution operations, and annotation cost reduction. The method includes: a computing device (a) instructing a pattern generating unit to generate the test patterns by using a certain function such that saturation does not occur while at least one original CNN applies the convolution operations to the test patterns; (b) instructing a pattern evaluation unit to generate each of evaluation scores of each of the test patterns by referring to each of the test patterns and one or more parameters of the original CNN; and (c) instructing a pattern selection unit to select the optimized test patterns among the test patterns by referring to the evaluation scores.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
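One simple way to generate test patterns that cannot saturate the convolutions, as the abstract requires, is to bound each pixel so that even a worst-case kernel with all weights at magnitude 1 sums below the saturation limit. The bounding rule and the uniform-noise pattern below are illustrative assumptions.

```python
import numpy as np

def generate_test_pattern(shape, kernel_size, saturation_limit, seed=0):
    """Generate a test pattern whose k-by-k convolution responses cannot
    exceed saturation_limit, even for a kernel of all +/-1 weights."""
    k_elems = kernel_size * kernel_size
    bound = saturation_limit / k_elems   # per-pixel magnitude bound
    rng = np.random.default_rng(seed)
    return rng.uniform(-bound, bound, size=shape)

def conv2d_valid(pattern, kernel):
    """Plain 'valid' 2-D cross-correlation, to check responses."""
    kh, kw = kernel.shape
    h, w = pattern.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (pattern[i:i + kh, j:j + kw] * kernel).sum()
    return out
```

With a 3×3 kernel and limit 9.0, every pixel is bounded by 1, so no 3×3 window can sum past the limit.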
  • Patent number: 10373023
    Abstract: A method for learning a runtime input transformation of real images into virtual images by using a cycle GAN capable of being applied to domain adaptation is provided. The method can also be performed in virtual driving environments. The method includes steps of: (a) (i) instructing a first transformer to transform a first image to a second image, (ii-1) instructing a first discriminator to generate a 1_1-st result, and (ii-2) instructing a second transformer to transform the second image to a third image, whose characteristics are the same as or similar to those of the real images; (b) (i) instructing the second transformer to transform a fourth image to a fifth image, (ii-1) instructing a second discriminator to generate a 2_1-st result, and (ii-2) instructing the first transformer to transform the fifth image to a sixth image; and (c) calculating losses. By the method, the gap between virtuality and reality can be reduced, and annotation costs can be reduced.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: August 6, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10373027
    Abstract: A method for acquiring a sample image for label-inspecting among auto-labeled images for learning a deep learning network, optimizing sampling processes for manual labeling, and reducing annotation costs is provided. The method includes steps of: a sample image acquiring device generating first and second images, instructing convolutional layers to generate first and second feature maps, instructing pooling layers to generate first and second pooled feature maps, and generating concatenated feature maps; instructing a deep learning classifier to acquire the concatenated feature maps, to thereby generate class information; and calculating probabilities of abnormal class elements in an abnormal class group, determining whether the auto-labeled image is a difficult image, and selecting the auto-labeled image as the sample image for label-inspecting. Further, the method can be performed by using a robust algorithm with multiple transform pairs.
    Type: Grant
    Filed: January 30, 2019
    Date of Patent: August 6, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
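The selection step above, flagging an auto-labeled image as "difficult" when the classifier puts enough probability on the abnormal class group, can be sketched as a simple threshold on the abnormal probability mass; the threshold value and class indices are illustrative assumptions.

```python
import numpy as np

def select_difficult_images(class_probs, abnormal_classes, threshold=0.3):
    """Pick auto-labeled images for manual label inspection.

    class_probs: (N, C) per-image softmax outputs from the classifier
    abnormal_classes: column indices of the abnormal class group
    Returns indices of images whose abnormal probability mass exceeds
    the threshold, i.e. 'difficult' images worth a human look.
    """
    abnormal_mass = class_probs[:, abnormal_classes].sum(axis=1)
    return np.nonzero(abnormal_mass > threshold)[0]
```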