Patents Assigned to StradVision, Inc.
  • Patent number: 10303981
    Abstract: A method for learning parameters of an object detector based on R-CNN is provided. The method includes steps of: a learning device (a) if a training image is acquired, instructing (i) convolutional layers to generate feature maps by applying convolution operations to the training image, (ii) an RPN to output ROI regression information and matching information, (iii) a proposal layer to output ROI candidates as ROI proposals by referring to the ROI regression information and the matching information, and (iv) a proposal-selecting layer to output the ROI proposals by referring to the training image; (b) instructing pooling layers to generate feature vectors by pooling regions in the feature maps, and instructing FC layers to generate object regression information and object class information; and (c) instructing first loss layers to calculate and backpropagate an object class loss and an object regression loss, to thereby learn parameters of the FC layers and the convolutional layers.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: May 28, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
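
Illustrative sketch (not part of the patent record): a minimal PyTorch-style rendering of step (c) above, in which an object class loss and an object regression loss are computed from FC-layer outputs and backpropagated so that both the FC and convolutional parameters are updated. The module sizes, class count, and dummy data are assumptions, not the patented architecture.

```python
# Minimal sketch (assumption-laden): two-loss backprop over FC + conv layers,
# loosely following step (c) of the abstract above. Shapes and modules are illustrative.
import torch
import torch.nn as nn

conv_layers = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                            nn.AdaptiveAvgPool2d((7, 7)))      # stand-in backbone
fc_layers = nn.Sequential(nn.Flatten(), nn.Linear(16 * 7 * 7, 128), nn.ReLU())
cls_head = nn.Linear(128, 21)      # object class information (20 classes + background, assumed)
reg_head = nn.Linear(128, 4)       # object regression information (box deltas)

optimizer = torch.optim.SGD(
    list(conv_layers.parameters()) + list(fc_layers.parameters()) +
    list(cls_head.parameters()) + list(reg_head.parameters()), lr=1e-3)

rois = torch.randn(8, 3, 64, 64)               # pooled ROI regions (dummy data)
gt_classes = torch.randint(0, 21, (8,))        # GT class per ROI
gt_deltas = torch.randn(8, 4)                  # GT box regression targets

features = fc_layers(conv_layers(rois))        # feature vectors per ROI
class_loss = nn.functional.cross_entropy(cls_head(features), gt_classes)
reg_loss = nn.functional.smooth_l1_loss(reg_head(features), gt_deltas)

optimizer.zero_grad()
(class_loss + reg_loss).backward()             # backpropagate both losses
optimizer.step()                               # updates FC and convolutional parameters
```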
  • Patent number: 10300851
    Abstract: A method for warning a vehicle of a risk of lane change is provided. The method includes steps of: (a) an alarm device, if at least one rear image captured by a running vehicle is acquired, segmenting the rear image by using a learned convolutional neural network (CNN) to thereby obtain a segmentation image corresponding to the rear image; (b) the alarm device checking at least one free space ratio in at least one blind spot by referring to the segmentation image, wherein the free space ratio is determined as a ratio of a road area without an object in the blind spot to a whole area of the blind spot; and (c) the alarm device, if the free space ratio is less than or equal to at least one predetermined threshold value, warning a driver of the vehicle of the risk of lane change.
    Type: Grant
    Filed: October 4, 2018
    Date of Patent: May 28, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
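
Illustrative sketch (not part of the patent record): a minimal NumPy version of steps (b) and (c) above. The free space ratio is the fraction of blind-spot pixels labeled as road in the segmentation mask, and a ratio at or below the threshold triggers the warning. The road class label, threshold value, and toy masks are assumptions.

```python
# Minimal sketch (not from the patent): free-space ratio in a blind-spot region of a
# segmentation mask, with a lane-change warning when the ratio drops below a threshold.
import numpy as np

ROAD_CLASS = 0                                   # assumed label for drivable road pixels

def lane_change_risky(seg_mask, blind_spot_mask, threshold=0.7):
    """seg_mask: (H, W) class labels; blind_spot_mask: (H, W) bool for the blind spot."""
    spot_area = blind_spot_mask.sum()
    if spot_area == 0:
        return False
    road_pixels = np.logical_and(seg_mask == ROAD_CLASS, blind_spot_mask).sum()
    free_space_ratio = road_pixels / spot_area   # road area without objects / whole blind spot
    return free_space_ratio <= threshold         # warn the driver if True

seg = np.zeros((120, 160), dtype=np.int64)
seg[60:, 100:140] = 1                            # an "object" occupying part of the blind spot
blind_spot = np.zeros_like(seg, dtype=bool)
blind_spot[60:, 90:160] = True
print(lane_change_risky(seg, blind_spot))        # True: too little free road remains
```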
  • Patent number: 10303980
    Abstract: A method for learning parameters of a CNN capable of detecting obstacles in a training image is provided. The method includes steps of: a learning device (a) receiving the training image and instructing convolutional layers to generate encoded feature maps from the training image; (b) instructing deconvolutional layers to generate decoded feature maps; (c) supposing that each cell of a grid with rows and columns is generated by dividing the decoded feature map with respect to a direction of the rows and the columns, concatenating features of the rows per column in a direction of a channel, to generate a reshaped feature map; (d) calculating losses by referring to the reshaped feature map and its GT image, in which each column indicates the GT position of the row where a nearest obstacle is located, checked from its corresponding lowest cell upward along the column; and (e) backpropagating the losses.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: May 28, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
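
Illustrative sketch (not part of the patent record): one plausible NumPy reading of step (c) above, in which the decoded feature map is divided into grid rows, the row cells of every column are stacked along the channel direction, and a per-column score over those rows yields the nearest-obstacle row. The cell pooling, shapes, and random stand-in classifier are assumptions.

```python
# Minimal sketch (interpretive, not the patent's exact operation): reshape a decoded
# feature map so that all row cells of each column are stacked along the channel axis,
# then read off a per-column "nearest obstacle row" as an argmax over those rows.
import numpy as np

C, H, W = 4, 24, 8
n_rows = 6                                  # grid rows; each cell spans H // n_rows pixels
decoded = np.random.rand(C, H, W)

# average-pool each grid cell, giving one feature vector per (row, column) cell
cells = decoded.reshape(C, n_rows, H // n_rows, W).mean(axis=2)   # (C, n_rows, W)

# concatenate the row cells of every column along the channel direction
reshaped = cells.reshape(C * n_rows, W)                           # (C * n_rows, W)

# a 1x1 "classifier" over the stacked channels would score each row per column;
# here a random projection stands in for the learned layer
scores = np.random.rand(n_rows, C * n_rows) @ reshaped            # (n_rows, W)
nearest_obstacle_row = scores.argmax(axis=0)                      # per-column row index
print(nearest_obstacle_row)
```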
  • Patent number: 10304009
    Abstract: A method for learning an object detector based on an R-CNN by using a first to an n-th filter blocks respectively generating a first to an n-th feature maps through convolution operations in sequence, and a k-th to a first upsampling blocks respectively coupled with the first to the n-th filter blocks is provided. The method includes steps of: a learning device instructing the k-th upsampling block to the first upsampling block to generate a (k-1)-st pyramidic feature map to the first pyramidic feature map respectively; instructing an RPN to generate each ROI corresponding to each candidate region, and instructing a pooling layer to generate a feature vector; and learning parameters of the FC layer, the k-th to the first upsampling blocks, and the first to the n-th filter blocks by backpropagating a first loss generated by referring to object class information, object regression information, and their corresponding GTs.
    Type: Grant
    Filed: October 8, 2018
    Date of Patent: May 28, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
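
Illustrative sketch (not part of the patent record): a minimal NumPy sketch of the upsampling blocks producing pyramidic feature maps, where each coarser map is upsampled and combined element-wise with the lateral map from its corresponding filter block. Nearest-neighbour upsampling, equal channel counts, and addition as the combination are assumptions.

```python
# Minimal sketch (assumed shapes, not the patent's exact blocks): building pyramidic
# feature maps by upsampling the coarser map and combining it with the lateral map
# from the corresponding filter block, from the coarsest upsampling block down to the first.
import numpy as np

def upsample2x(x):
    """Nearest-neighbour 2x upsampling of a (C, H, W) map."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

# lateral maps from filter blocks, finest to coarsest (same channel count assumed)
laterals = [np.random.rand(8, 32, 32), np.random.rand(8, 16, 16), np.random.rand(8, 8, 8)]

pyramid = [laterals[-1]]                          # start from the coarsest feature map
for lateral in reversed(laterals[:-1]):
    upsampled = upsample2x(pyramid[-1])           # rescale to the lateral map's size
    pyramid.append(upsampled + lateral)           # element-wise combination
pyramid = pyramid[::-1]                           # pyramidic feature maps, finest first
print([p.shape for p in pyramid])
```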
  • Patent number: 10282864
    Abstract: A method for encoding an image based on a convolutional neural network is provided. The method includes steps of: a learning device including a first to an n-th convolutional layers, (a) obtaining at least one input image; (b) instructing each of at least one of the convolutional layers to (i) apply one or more transposed convolution operations to the input image or an input feature map received from its corresponding previous convolutional layer, to thereby generate one or more transposed feature maps which have different sizes respectively, and (ii) apply one or more convolution operations, with a different stride and a different kernel size, to their corresponding transposed feature maps, to thereby generate their corresponding one or more inception feature maps as a first group; and (c) concatenating or element-wise adding the inception feature maps included in the first group to thereby generate its corresponding output feature map.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: May 7, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
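
Illustrative sketch (not part of the patent record): a small PyTorch sketch of step (b) above, where transposed convolutions produce transposed feature maps of different sizes, convolutions with different strides and kernel sizes map them back to a common size, and the resulting inception feature maps are concatenated. The channel counts, kernel sizes, and two-branch layout are assumptions.

```python
# Minimal sketch (illustrative channel/kernel choices): generate transposed feature maps
# of different sizes, bring each back to a common size with convolutions of different
# stride and kernel size, and concatenate the resulting inception feature maps.
import torch
import torch.nn as nn

x = torch.randn(1, 8, 16, 16)                    # input feature map

# branch 1: transposed conv keeps the size, then a 3x3 conv with stride 1
branch1 = nn.Sequential(nn.ConvTranspose2d(8, 8, kernel_size=3, stride=1, padding=1),
                        nn.Conv2d(8, 16, kernel_size=3, stride=1, padding=1))

# branch 2: transposed conv doubles the size, then a 5x5 conv with stride 2 restores it
branch2 = nn.Sequential(nn.ConvTranspose2d(8, 8, kernel_size=2, stride=2),
                        nn.Conv2d(8, 16, kernel_size=5, stride=2, padding=2))

out1, out2 = branch1(x), branch2(x)              # both end up as (1, 16, 16, 16)
inception = torch.cat([out1, out2], dim=1)       # concatenated output feature map
print(inception.shape)                           # torch.Size([1, 32, 16, 16])
```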
  • Patent number: 10275667
    Abstract: A learning method of a CNN capable of detecting one or more lanes using a lane model is provided. The method includes steps of: a learning device (a) acquiring information on the lanes from at least one image data set, wherein the information on the lanes is represented by respective sets of coordinates of pixels on the lanes; (b) calculating one or more function parameters of a lane modeling function of each of the lanes by using the coordinates of the pixels on the lanes; and (c) performing processes of classifying the function parameters into K cluster groups by using a clustering algorithm, assigning each of one or more cluster IDs to each of the cluster groups, and generating a cluster ID GT vector representing GT information on probabilities of being the cluster IDs corresponding to types of the lanes.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: April 30, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
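
Illustrative sketch (not part of the patent record): a NumPy sketch of steps (b) and (c) above. A quadratic lane modeling function is fitted to each lane's pixel coordinates, and the resulting function parameters are clustered into K groups so each lane receives a cluster ID. The quadratic model, the tiny k-means routine, and the toy lanes are assumptions.

```python
# Minimal sketch (simplified): fit a lane-modeling function per lane, then cluster the
# function parameters and assign a cluster ID to each lane.
import numpy as np

rng = np.random.default_rng(0)

def lane_parameters(xs, ys):
    """Function parameters of x = a*y^2 + b*y + c fitted to one lane's pixel coordinates."""
    return np.polyfit(ys, xs, deg=2)

def kmeans(points, k, iters=20):
    """A tiny k-means; stands in for 'a clustering algorithm' over function parameters."""
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(iters):
        ids = np.argmin(((points[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        centers = np.array([points[ids == j].mean(axis=0) if np.any(ids == j) else centers[j]
                            for j in range(k)])
    return ids

# toy lanes: each lane is a set of (x, y) pixel coordinates from "an image data set"
ys = np.arange(100, dtype=float)
left_like = [np.stack([50 + 0.1 * ys + rng.normal(0, 1, 100), ys], axis=1) for _ in range(3)]
right_like = [np.stack([200 - 0.2 * ys + rng.normal(0, 1, 100), ys], axis=1) for _ in range(3)]
lanes = left_like + right_like

params = np.array([lane_parameters(lane[:, 0], lane[:, 1]) for lane in lanes])
cluster_ids = kmeans(params, k=2)               # one cluster ID per lane
print(cluster_ids)                              # e.g. [0 0 0 1 1 1] (label order may differ)
```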
  • Patent number: 10269125
    Abstract: A method for tracking an object by using a CNN including a tracking network is provided. The method includes steps of: a testing device (a) generating a feature map by using a current video frame, and instructing an RPN to generate information on proposal boxes; (b) (i) generating an estimated state vector by using a Kalman filter algorithm, generating an estimated bounding box, and determining a specific proposal box as a seed box, and (ii) instructing an FCN to apply full convolution operations to the feature map, to thereby output a position sensitive score map; (c) generating a current bounding box by referring to a regression delta and a seed box which are generated by instructing a pooling layer to pool a region, corresponding to the seed box, on the position sensitive score map, and adjusting the current bounding box by using the Kalman filter algorithm.
    Type: Grant
    Filed: October 5, 2018
    Date of Patent: April 23, 2019
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
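
Illustrative sketch (not part of the patent record): a NumPy sketch of the Kalman-filter side of steps (b) and (c) above. It predicts an estimated state vector and box center, chooses the nearest RPN proposal as the seed box, and adjusts the current bounding box with a standard update. The constant-velocity model, noise covariances, and center-only measurement are assumptions.

```python
# Minimal sketch (standard constant-velocity Kalman filter, not the patent's full pipeline):
# predict an estimated bounding-box center, pick the closest proposal as the seed box,
# then correct the state with the measured seed-box center.
import numpy as np

dt = 1.0
F = np.array([[1, 0, dt, 0], [0, 1, 0, dt], [0, 0, 1, 0], [0, 0, 0, 1]], float)  # motion model
H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)                                # observe (cx, cy)
Q, R = np.eye(4) * 0.01, np.eye(2) * 1.0

x = np.array([50.0, 40.0, 2.0, 1.0])       # state: cx, cy, vx, vy from previous frames
P = np.eye(4)

# predict: estimated state vector and estimated box center for the current frame
x_pred = F @ x
P_pred = F @ P @ F.T + Q

# choose the proposal whose center is nearest to the estimated box as the seed box
proposal_centers = np.array([[48.0, 39.0], [60.0, 70.0], [53.0, 42.0]])
seed_idx = np.argmin(np.linalg.norm(proposal_centers - x_pred[:2], axis=1))

# update: adjust the current bounding-box estimate with the measured seed-box center
z = proposal_centers[seed_idx]
S = H @ P_pred @ H.T + R
K = P_pred @ H.T @ np.linalg.inv(S)
x = x_pred + K @ (z - H @ x_pred)
P = (np.eye(4) - K @ H) @ P_pred
print(seed_idx, x[:2])                      # seed proposal index and smoothed box center
```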
  • Patent number: 10262214
    Abstract: A learning method of a CNN for detecting lanes is provided. The method includes steps of: a learning device (a) instructing convolutional layers to generate feature maps by applying convolution operations to an input image from an image data set; (b) instructing an FC layer to generate an estimated result vector of cluster ID classifications of the lanes by feeding a specific feature map among the feature maps into the FC layer; and (c) instructing a loss layer to generate a classification loss by referring to the estimated result vector and a cluster ID GT vector, and backpropagate the classification loss, to optimize device parameters of the CNN; wherein the cluster ID GT vector is GT information on probabilities of being cluster IDs per each of cluster groups assigned to function parameters of a lane modeling function by clustering the function parameters based on information on the lanes.
    Type: Grant
    Filed: September 5, 2018
    Date of Patent: April 16, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10229346
    Abstract: A learning method for detecting a specific object based on convolutional neural network (CNN) is provided. The learning method includes steps of: (a) a learning device, if an input image is obtained, performing (i) a process of applying one or more convolution operations to the input image to thereby obtain at least one specific feature map and (ii) a process of obtaining an edge image by extracting at least one edge part from the input image, and obtaining at least one guide map including information on at least one specific edge part having a specific shape similar to that of the specific object from the obtained edge image; and (b) the learning device reflecting the guide map on the specific feature map to thereby obtain a segmentation result for detecting the specific object in the input image.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: March 12, 2019
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
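
Illustrative sketch (not part of the patent record): a NumPy sketch of the guide-map idea above. An edge image is extracted from the input, edge parts above a strength threshold form a guide map, and the guide map is reflected on a feature map element-wise before segmentation. The gradient-based edge extractor, threshold, and multiplicative reflection are assumptions.

```python
# Minimal sketch (not the patent's extraction method): build an edge image with simple
# gradients, keep strong edge parts as a guide map, and reflect it on a feature map.
import numpy as np

image = np.zeros((32, 32))
image[8:24, 8:24] = 1.0                       # a bright square whose outline we want to favour

gy, gx = np.gradient(image)
edge_image = np.hypot(gx, gy)                 # edge parts extracted from the input image
guide_map = (edge_image > 0.2).astype(float)  # keep only sufficiently strong edges

feature_map = np.random.rand(16, 32, 32)      # a specific feature map from the conv layers
guided = feature_map * (1.0 + guide_map)      # reflect the guide map on the feature map
print(guided.shape)
```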
  • Patent number: 10223614
    Abstract: A learning method for detecting at least one lane based on a convolutional neural network (CNN) is provided. The learning method includes steps of: (a) a learning device obtaining encoded feature maps and information on lane candidate pixels in an input image; (b) the learning device classifying first parts of the lane candidate pixels, whose probability scores are not smaller than a predetermined threshold, as strong line pixels, and classifying second parts of the lane candidate pixels, whose probability scores are less than the threshold but not less than another predetermined threshold, as weak line pixels; and (c) the learning device, if distances between the weak line pixels and the strong line pixels are less than a predetermined distance, classifying the weak line pixels as pixels of additional strong lines, and determining that the pixels of the strong lines and the additional strong lines correspond to pixels of the lane.
    Type: Grant
    Filed: September 4, 2018
    Date of Patent: March 5, 2019
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
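
Illustrative sketch (not part of the patent record): a hysteresis-style NumPy reading of steps (b) and (c) above. Candidate pixels are split into strong and weak line pixels by two probability thresholds, weak pixels close enough to a strong pixel are promoted to additional strong pixels, and the union is taken as lane pixels. The coordinates, thresholds, and distance cutoff are assumptions.

```python
# Minimal sketch (hysteresis-style interpretation): classify lane candidate pixels into
# strong and weak by two thresholds, then promote weak pixels lying close enough to a
# strong pixel and treat the union as lane pixels.
import numpy as np

coords = np.array([[10, 5], [11, 5], [12, 6], [40, 30], [41, 30]])   # candidate pixels (row, col)
scores = np.array([0.95, 0.60, 0.92, 0.55, 0.20])                    # probability scores
HIGH, LOW, MAX_DIST = 0.9, 0.5, 3.0

strong = coords[scores >= HIGH]
weak = coords[(scores >= LOW) & (scores < HIGH)]

# promote weak pixels whose distance to any strong pixel is below the threshold
dists = np.linalg.norm(weak[:, None, :] - strong[None, :, :], axis=2)  # (n_weak, n_strong)
promoted = weak[dists.min(axis=1) < MAX_DIST]

lane_pixels = np.vstack([strong, promoted])     # strong + additional strong pixels
print(lane_pixels)
```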
  • Patent number: 10169679
    Abstract: A learning method for adjusting parameters of a CNN using loss augmentation is provided. The method includes steps of: a learning device acquiring (a) a feature map from a training image; (b) (i) proposal ROIs corresponding to an object using an RPN, and a first pooled feature map by pooling areas, on the feature map, corresponding to the proposal ROIs, and (ii) a GT ROI, on the training image, corresponding to the object, and a second pooled feature map by pooling an area, on the feature map, corresponding to the GT ROI; and (c) (i) information on pixel data of a first bounding box when the first and second pooled feature maps are inputted into an FC layer, (ii) comparative data between the information on the pixel data of the first bounding box and a GT bounding box, and backpropagating information on the comparative data to adjust the parameters.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: January 1, 2019
    Assignee: STRADVISION, INC.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10163022
    Abstract: A method for learning parameters used to recognize characters included in a text in a scene text image of a training set is provided. The method includes steps of: (a) a training apparatus generating each feature vector corresponding to each of the segmented character images; (b) the training apparatus processing feature vectors c_(i+j) of neighboring character images to thereby generate a support vector to be used for a recognition of a specific character image; (c) the training apparatus obtaining a merged vector by executing a computation with the support vector and a feature vector c_i of the specific character image; and (d) the training apparatus (i) performing a classification of the specific character image as a letter included in a predetermined set of letters by referring to the merged vector; and (ii) adjusting the parameters by referring to a result of the classification.
    Type: Grant
    Filed: June 22, 2017
    Date of Patent: December 25, 2018
    Assignee: StradVision, Inc.
    Inventor: Hojin Cho
  • Patent number: 10095977
    Abstract: A learning method for improving image segmentation including steps of: (a) acquiring a (1-1)-th to a (1-K)-th feature maps through an encoding layer if a training image is obtained; (b) acquiring a (3-1)-th to a (3-H)-th feature maps by respectively inputting each output of the H encoding filters to a (3-1)-th to a (3-H)-th filters; (c) performing a process of sequentially acquiring a (2-K)-th to a (2-1)-th feature maps either by (i) allowing the respective H decoding filters to respectively use both the (3-1)-th to the (3-H)-th feature maps and feature maps obtained from respective previous decoding filters of the respective H decoding filters or by (ii) allowing respective K-H decoding filters that are not associated with the (3-1)-th to the (3-H)-th filters to use feature maps gained from respective previous decoding filters of the respective K-H decoding filters; and (d) adjusting parameters of CNN.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: October 9, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10089743
    Abstract: A method for segmenting an image using a CNN including steps of: a segmentation device acquiring (i) a first segmented image for a t-th frame by a CNN_PREVIOUS, having at least one first weight learned at a (t-(i+1))-th frame, segmenting the image, (ii) optical flow images corresponding to the (t-1)-th to the (t-i)-th frames, including information on optical flows from pixels of the first segmented image to corresponding pixels of segmented images of the (t-1)-th to the (t-i)-th frames, and (iii) warped images for the t-th frame by replacing pixels in the first segmented image with pixels in the segmented images referring to the optical flow images, (iv) losses by comparing the first segmented image with the warped images, (v) a CNN_CURRENT with at least one second weight obtained by adjusting the first weight to segment an image of the t-th frame and frames thereafter by using the CNN_CURRENT.
    Type: Grant
    Filed: October 5, 2017
    Date of Patent: October 2, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10083375
    Abstract: A method for configuring a CNN with learned parameters that performs an activation operation of an activation module and a convolution operation of one or more convolutional layers at the same time is provided. The method includes steps of: (a) allowing a comparator to compare an input value corresponding to each of pixel values of an input image as a test image with a predetermined reference value and then output a comparison result; (b) allowing a selector to output a specific parameter corresponding to the comparison result among multiple parameters of the convolutional layer; and (c) allowing a multiplier to output a multiplication value calculated by multiplying the specific parameter by the input value and allowing the multiplication value to be determined as a result value acquired by applying the convolutional layer to an output of the activation module.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: September 25, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
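
Illustrative sketch (not part of the patent record): a scalar NumPy rendering of the comparator, selector, and multiplier idea above. The convolution weight is pre-multiplied into one parameter per comparison outcome, so selecting a parameter and multiplying it by the input reproduces weight times activation(input) in a single pass. The leaky-ReLU-style activation and single 1x1 weight are assumptions.

```python
# Minimal sketch (single-weight interpretation): fuse a piecewise activation and a
# following convolution weight into compare-select-multiply, so that
# weight * activation(x) is produced without materializing the activation output.
import numpy as np

REFERENCE = 0.0
W_CONV = 0.8                       # a (scalar) learned convolution parameter
SLOPE_POS, SLOPE_NEG = 1.0, 0.1    # a leaky-ReLU-like activation, as an assumed example

# pre-multiplied parameters: one per comparison outcome
PARAM_IF_GREATER = W_CONV * SLOPE_POS
PARAM_OTHERWISE = W_CONV * SLOPE_NEG

def fused(x):
    selected = np.where(x > REFERENCE, PARAM_IF_GREATER, PARAM_OTHERWISE)  # comparator + selector
    return selected * x                                                    # multiplier

def unfused(x):
    activated = np.where(x > REFERENCE, SLOPE_POS * x, SLOPE_NEG * x)      # activation module
    return W_CONV * activated                                              # convolutional layer

x = np.array([-2.0, -0.5, 0.3, 4.0])
print(np.allclose(fused(x), unfused(x)))        # True: same result, one pass
```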
  • Patent number: 10049323
    Abstract: A method of learning parameters of a CNN is provided. The method includes steps of: (a) allowing an input value to be delivered to individual multiple element bias layers; (b) allowing the scale layer connected to a specific element bias layer to multiply a predetermined scale value by an output value of the specific element bias layer; (c) (i) allowing a specific element activation layer connected to the scale layer to apply activation function, and (ii) allowing the other individual element activation layers to apply activation functions to output values of the individual element bias layers; (d) allowing a concatenation layer to concatenate an output value of the specific element activation layer and output values of the other element activation layers; (e) allowing the convolutional layer to apply the convolution operation to the concatenated output; and (f) allowing a loss layer to acquire a loss during a backpropagation process.
    Type: Grant
    Filed: October 13, 2017
    Date of Patent: August 14, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10043113
    Abstract: A method for generating feature maps by using a device adopting a CNN including feature up-sampling networks (FPN) is provided. The method includes steps of: (a) allowing, if an input image is obtained, a down-sampling block to acquire a down-sampling image by applying a predetermined operation to the input image; (b) allowing, if the down-sampling image is obtained, each of a (1-1)-th to a (1-k)-th filter blocks to acquire each of a (1-1)-th to a (1-k)-th feature maps by applying one or more convolution operations to the down-sampling image; and (c) allowing each of up-sampling blocks to receive a feature map from its corresponding filter block, to receive a feature map from its previous up-sampling block, and then to rescale one feature map to be identical with the other feature map in size, and to apply a certain operation to both feature maps, thereby generating a (2-k)-th to a (2-1)-th feature maps.
    Type: Grant
    Filed: October 4, 2017
    Date of Patent: August 7, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10037610
    Abstract: A method for tracking a target object in frames of video data using an Absorbing Markov Chain (AMC), including steps of: (a) acquiring a bounding box containing the target object in a current frame and a segmentation result for the target object in a previous frame; (b) obtaining a region of interest (ROI) in the current frame by enlarging the bounding box to contain a portion of background information surrounding the target object; (c) acquiring information on local regions within the ROI in the current frame; (d) constructing an AMC graph using at least part of the local regions within the region of interest (ROI) in the current frame and local regions within a region of interest (ROI) in the previous frame; and (e) acquiring a segmentation result for the target object within the current frame by thresholding individual nodes in the AMC graph using absorption times thereof.
    Type: Grant
    Filed: October 3, 2017
    Date of Patent: July 31, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
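
Illustrative sketch (not part of the patent record): the textbook absorbing-Markov-chain computation behind step (e) above. Absorption times for transient nodes follow from the fundamental matrix, and thresholding them separates target regions from background. The toy transition matrix, the choice of absorbing nodes, and the threshold are assumptions; the patent's graph construction from local regions is not shown.

```python
# Minimal sketch (textbook AMC math, not the patent's graph construction): compute
# absorption times for transient nodes and threshold them to pick target-object regions.
import numpy as np

# transition matrix P over 5 nodes; nodes 3 and 4 are absorbing (background anchors)
P = np.array([
    [0.1, 0.6, 0.1, 0.1, 0.1],
    [0.3, 0.2, 0.3, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.2, 0.5],
    [0.0, 0.0, 0.0, 1.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, 1.0],
])

n_transient = 3
Q = P[:n_transient, :n_transient]                # transitions among transient nodes
N = np.linalg.inv(np.eye(n_transient) - Q)       # fundamental matrix
absorption_times = N @ np.ones(n_transient)      # expected steps until absorption per node

THRESHOLD = 2.5
is_target = absorption_times > THRESHOLD         # slow-to-absorb nodes -> target object regions
print(absorption_times, is_target)
```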
  • Patent number: 10023204
    Abstract: A driving assisting method is provided. The driving assisting method includes steps of: (a) a driving assisting device performing processes of (i) determining a gazing direction of a driver of a vehicle and (ii) identifying a location of a specific object and determining a distance between the specific object and the vehicle; and (b) the driving assisting device (i) maintaining or increasing a threshold level of a triggering condition for providing an alarm or (ii) providing the alarm, if the location of the specific object is detected as being outside a virtual viewing frustum corresponding to the gazing direction of the driver and if the distance between the specific object and the vehicle is determined as being less than at least one predetermined distance.
    Type: Grant
    Filed: December 1, 2017
    Date of Patent: July 17, 2018
    Assignee: StradVision, Inc.
    Inventors: Hak-Kyoung Kim, Hongmo Je
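
Illustrative sketch (not part of the patent record): a NumPy sketch of the alarm condition in step (b) above, approximating the virtual viewing frustum by a cone around the gaze direction. The alarm triggers only when the object falls outside that cone and is nearer than the distance threshold. The field-of-view angle, coordinate frame, and example distances are assumptions.

```python
# Minimal sketch (cone approximation of the viewing frustum): alarm only when the object
# lies outside the driver's gaze cone and is closer than the distance threshold.
import numpy as np

def should_alarm(gaze_dir, object_pos, driver_pos, fov_deg=60.0, dist_threshold=10.0):
    """gaze_dir: 3D gaze direction; positions in metres, vehicle frame (assumed)."""
    to_object = object_pos - driver_pos
    distance = np.linalg.norm(to_object)
    cos_angle = np.dot(to_object / distance, gaze_dir / np.linalg.norm(gaze_dir))
    inside_frustum = cos_angle >= np.cos(np.radians(fov_deg / 2.0))
    return (not inside_frustum) and (distance < dist_threshold)

gaze = np.array([1.0, 0.0, 0.0])                                    # driver looking straight ahead
print(should_alarm(gaze, np.array([2.0, 6.0, 0.0]), np.zeros(3)))   # off to the side, close -> True
print(should_alarm(gaze, np.array([8.0, 1.0, 0.0]), np.zeros(3)))   # within gaze cone -> False
```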
  • Patent number: 10007865
    Abstract: A learning method for acquiring a bounding box corresponding to an object in a training image from multi-scaled feature maps by using a CNN is provided. The learning method includes steps of: (a) allowing an N-way RPN to acquire at least two specific feature maps and allowing the N-way RPN to apply certain operations to the at least two specific feature maps; (b) allowing an N-way pooling layer to generate multiple pooled feature maps by applying pooling operations to respective areas on the at least two specific feature maps; and (c) (i) allowing a FC layer to acquire information on pixel data of the bounding box, and (ii) allowing a loss layer to acquire first comparative data, thereby adjusting at least one of parameters of the CNN by using the first comparative data during a backpropagation process.
    Type: Grant
    Filed: October 16, 2017
    Date of Patent: June 26, 2018
    Assignee: StradVision, Inc.
    Inventors: Yongjoong Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho