Patents Assigned to StradVision, Inc.
  • Patent number: 10692002
    Abstract: A method for learning a pedestrian detector to be used for robust surveillance or military purposes based on image analysis is provided as a solution to a lack of labeled images and as a way to reduce annotation costs. The method can also be performed by using generative adversarial networks (GANs). The method includes steps of: a learning device generating an image patch by cropping each of regions on a training image, and instructing an adversarial style transformer to generate a transformed image patch by converting each of pedestrians into transformed pedestrians capable of impeding a detection; and generating a transformed training image by replacing each of the regions with the transformed image patch, instructing the pedestrian detector to detect the transformed pedestrians, and learning parameters of the pedestrian detector to minimize losses. This learning, as a self-evolving system, is robust to adversarial patterns by generating training data including hard examples.
    Type: Grant
    Filed: January 28, 2019
    Date of Patent: June 23, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10657396
    Abstract: A method for detecting passenger statuses by analyzing a 2D interior image of a vehicle is provided. The method includes steps of: a passenger status-detecting device (a) inputting the 2D interior image taken with a fisheye lens into a pose estimation network to acquire pose points corresponding to passengers; and (b) (i) calculating location information on the pose points relative to a preset reference point by referring to a predetermined pixel-angle table, wherein, with a grid board placed in the vehicle, the pixel-angle table has been created such that vertical angles and horizontal angles, formed by a first line and second lines, correspond to pixels of grid corners, in which the first line connects a camera and a top center of the grid board and the second lines connect the corners and the camera, and (ii) detecting the passenger statuses by referring to the location information.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 19, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
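The pixel-angle table above can be pictured as a calibration map from grid-corner pixels to angle pairs. The sketch below is a minimal illustration, not the patented calibration procedure: the table contents, the nearest-corner lookup, and all numbers are assumptions, chosen only to show how a detected pose-point pixel could be resolved to angles relative to the camera.

```python
# Hedged sketch: look up the (vertical, horizontal) angles of the grid
# corner nearest to a detected pose-point pixel.  The table entries are
# made-up placeholders, not data from the patent.

def nearest_corner_angles(pixel, pixel_angle_table):
    """Return the angle pair of the grid corner closest to `pixel`."""
    px, py = pixel
    best_corner = min(
        pixel_angle_table,
        key=lambda c: (c[0] - px) ** 2 + (c[1] - py) ** 2,
    )
    return pixel_angle_table[best_corner]

# Hypothetical calibration: corner pixel -> (vertical_deg, horizontal_deg)
table = {
    (100, 100): (-10.0, -15.0),
    (320, 100): (-10.0, 0.0),
    (540, 100): (-10.0, 15.0),
    (320, 240): (0.0, 0.0),
}

angles = nearest_corner_angles((330, 110), table)
```

A denser calibration grid (or bilinear interpolation between corners) would give smoother angle estimates; the nearest-corner rule is used here only for brevity.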
  • Patent number: 10657584
    Abstract: A method for generating safe clothing patterns for a human-like figure is provided. The method includes steps of: a safe clothing-pattern generating device, (a) after acquiring an image of the human-like figure, generating a specific clothing pattern having an initial value, inputting the specific clothing pattern and the image of the human-like figure into a clothing composition network, combining the specific clothing pattern with a clothing of the human-like figure to generate a composite image; (b) inputting the composite image into an image translation network, translating surrounding environment on the composite image to generate a translated image, and inputting the translated image into an object detector to output detection information on the human-like figure; and (c) instructing a 1-st loss layer to calculate losses by referring to the detection information and a GT corresponding to the image of the human-like figure, and updating the initial value by using the losses.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: May 19, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10650279
    Abstract: A learning method for generating integrated object detection information of an integrated image by integrating first object detection information and second object detection information is provided. The method includes steps of: (a) a learning device, if the first object detection information and the second object detection information are acquired, instructing a concatenating network included in a DNN to generate pair feature vectors including information on pairs of first original ROIs and second original ROIs; (b) the learning device instructing a determining network included in the DNN to apply FC operations to the pair feature vectors, to thereby generate (i) determination vectors and (ii) box regression vectors; (c) the learning device instructing a loss unit to generate an integrated loss, and performing backpropagation processes by using the integrated loss, to thereby learn at least part of parameters included in the DNN.
    Type: Grant
    Filed: December 22, 2019
    Date of Patent: May 12, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
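The pairing step above admits a simple sketch: every (first ROI, second ROI) feature pair is concatenated, and an FC unit scores the pair. The weights below are illustrative, not learned parameters from the patent, and a single sigmoid unit stands in for the determining network.

```python
import math

def pair_feature_vectors(first_rois, second_rois):
    """Concatenate the feature vector of every (first ROI, second ROI)
    pair, as the concatenating network would."""
    return [f1 + f2 for f1 in first_rois for f2 in second_rois]

def fc_determination(pair_vec, weights, bias):
    """One FC unit producing a same-object determination score in (0, 1).
    Weights/bias here are placeholders, not trained values."""
    z = sum(w * x for w, x in zip(weights, pair_vec)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

In the patent the determining network also emits box regression vectors per pair; this sketch shows only the determination path.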
  • Patent number: 10650548
    Abstract: A method for detecting a location of a subject vehicle capable of autonomous driving by using landmark detection is provided. The method includes steps of: (a) a computing device, if a live feature map is acquired, detecting each of feature map coordinates on the live feature map per each of reference objects included in a subject data region corresponding to a location and a posture of the subject vehicle, by referring to (i) reference feature maps corresponding to the reference objects, and (ii) the live feature map; (b) the computing device detecting image coordinates of the reference objects on a live image by referring to the feature map coordinates; and (c) the computing device detecting an optimized subject coordinate of the subject vehicle by referring to 3-dimensional coordinates of the reference objects in a real world.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 12, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10643085
    Abstract: A method for detecting body information on passengers of a vehicle based on humans' status recognition is provided. The method includes steps of: a passenger body information-detecting device, (a) inputting an interior image of the vehicle into a face recognition network, to detect faces of the passengers and output passenger feature information, and inputting the interior image into a body recognition network, to detect bodies and output body-part length information; and (b) retrieving specific height mapping information by referring to a height mapping table of ratios of segment body portions of human groups to heights per the human groups, acquiring a specific height of the specific passenger, retrieving specific weight mapping information from a weight mapping table of correlations between the heights and weights per the human groups, and acquiring a weight of the specific passenger by referring to the specific height.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: May 5, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
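The two table lookups in the abstract above (segment-length-to-height ratios, then height-to-weight correlations, both per human group) can be sketched directly. All ratios, slopes, and group names below are made-up placeholders for illustration, not data from the patent's mapping tables.

```python
# Hypothetical mapping tables keyed by human group.
HEIGHT_RATIO = {            # group -> torso-length / height ratio (assumed)
    "adult_male": 0.30,
    "adult_female": 0.29,
    "child": 0.33,
}
WEIGHT_PER_HEIGHT = {       # group -> (slope kg/cm, intercept kg) (assumed)
    "adult_male": (0.9, -80.0),
    "adult_female": (0.8, -70.0),
    "child": (0.6, -50.0),
}

def estimate_height(torso_len_cm, group):
    """Height from a body-part length via the height mapping table."""
    return torso_len_cm / HEIGHT_RATIO[group]

def estimate_weight(height_cm, group):
    """Weight from the estimated height via the weight mapping table."""
    slope, intercept = WEIGHT_PER_HEIGHT[group]
    return slope * height_cm + intercept

h = estimate_height(52.5, "adult_male")   # 175.0 cm
w = estimate_weight(h, "adult_male")      # 77.5 kg
```

The human group itself would come from the face recognition network's passenger feature information (age band, sex, etc.); it is passed in as a plain string here.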
  • Patent number: 10636295
    Abstract: A method for creating a traffic scenario in a virtual driving environment is provided. The method includes steps of: a traffic scenario-generating device, (a) on condition that driving data have been acquired which are created using previous traffic data corresponding to discrete traffic data extracted by a vision-based ADAS from a past driving video and detailed traffic data corresponding to sequential traffic data from sensors of data-collecting vehicles in a real driving environment, inputting the driving data into a scene analyzer to extract driving environment information and into a vehicle information extractor to extract vehicle status information on an ego vehicle, and generating sequential traffic logs according to a driving sequence; and (b) inputting the sequential traffic logs into a scenario augmentation network to augment the sequential traffic logs using critical events, and generate the traffic scenario, verifying the traffic scenario, and mapping the traffic scenario onto a traffic simulator.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10633007
    Abstract: A method for providing safe-driving information via eyeglasses of a driver of a vehicle is provided.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635938
    Abstract: A method for training a main CNN by using a virtual image and a style-transformed real image is provided. The method includes steps of: (a) a learning device acquiring first training images; and (b) the learning device performing a process of instructing the main CNN to generate first estimated autonomous driving source information, instructing the main CNN to generate first main losses and perform backpropagation by using the first main losses, to thereby learn parameters of the main CNN, and a process of instructing a supporting CNN to generate second training images, instructing the main CNN to generate second estimated autonomous driving source information, instructing the main CNN to generate second main losses and perform backpropagation by using the second main losses, to thereby learn parameters of the main CNN.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10635918
    Abstract: A method for managing a smart database which stores facial images for face recognition is provided. The method includes steps of: a managing device (a) counting specific facial images corresponding to a specific person in the smart database where new facial images are continuously stored, and determining whether a first counted value, representing a count of the specific facial images, satisfies a first set value; and (b) if the first counted value satisfies the first set value, inputting the specific facial images into a neural aggregation network, to generate quality scores of the specific facial images by aggregation of the specific facial images, and, if a second counted value, representing a count of the quality scores counted from the highest downward, satisfies a second set value, deleting the specific facial images corresponding to the uncounted (lower) quality scores from the smart database.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: April 28, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
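The pruning rule described above (trigger at a first count, keep the top-scoring images up to a second count, delete the rest) reduces to a few lines. The quality scores would come from the neural aggregation network; here they are passed in as plain numbers, and the set values are illustrative.

```python
def prune_person_images(images, scores, first_set, second_set):
    """If a person's image count reaches `first_set`, keep only the
    `second_set` highest-scoring images; return (kept, deleted)."""
    if len(images) < first_set:
        return images, []          # below the trigger count: keep all
    ranked = sorted(zip(scores, images), reverse=True)
    keep = [img for _, img in ranked[:second_set]]
    delete = [img for _, img in ranked[second_set:]]
    return keep, delete

keep, delete = prune_person_images(
    ["a", "b", "c", "d", "e"],
    [0.9, 0.2, 0.8, 0.5, 0.1],     # stand-in quality scores
    first_set=5, second_set=3,
)
```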
  • Patent number: 10635941
    Abstract: A method for on-device continual learning of a neural network which analyzes input data is provided for smartphones, drones, vessels, or military purposes. The method includes steps of: a learning device, (a) uniform-sampling new data to have a first volume, instructing a boosting network to convert a k-dimension random vector into a k-dimension modified vector, instructing an original data generator network to repeat outputting synthetic previous data of a second volume corresponding to the k-dimension modified vector and previous data having been used for learning, and generating a batch for a current-learning; and (b) instructing the neural network to generate output information corresponding to the batch. The method can be used for preventing catastrophic forgetting and an invasion of privacy, and for optimizing resources such as storage and sampling processes for training images. Further, the method can be performed through learning of generative adversarial networks (GANs).
    Type: Grant
    Filed: January 29, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
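The batch construction in step (a) above can be sketched as generative replay: uniformly sample new data to the first volume, have a generator synthesize stand-ins for the previous data to the second volume, and concatenate. The lambda below is a placeholder for the original data generator network driven by the boosting network's modified vector; it is not the patented model.

```python
import random

def build_replay_batch(new_data, generator, first_volume, second_volume,
                       seed=0):
    """Continual-learning batch: uniformly sampled new data plus
    generator-synthesized substitutes for the previous data."""
    rng = random.Random(seed)
    sampled_new = rng.sample(new_data, first_volume)   # uniform, no repeats
    synthetic_prev = [generator() for _ in range(second_volume)]
    return sampled_new + synthetic_prev

# Stand-in generator; a real one would map a modified random vector
# to a synthetic training sample.
batch = build_replay_batch(list(range(100)), lambda: "synthetic", 8, 4)
```

Training on such mixed batches is what lets the network revisit (synthetic) old data without storing it, which is the privacy and storage argument in the abstract.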
  • Patent number: 10635917
    Abstract: A method for detecting a vehicle occupancy by using passenger keypoints based on analyzing an interior image of a vehicle is provided. The method includes steps of: (a) if the interior image is acquired, a vehicle occupancy detecting device (i) inputting the interior image into a feature extractor network, to generate feature tensors by applying convolution operations to the interior image, (ii) inputting the feature tensors into a keypoint heatmap & part affinity field (PAF) extractor, to generate keypoint heatmaps and PAFs, (iii) inputting the keypoint heatmaps and the PAFs into a keypoint detecting device, to extract keypoints from the keypoint heatmaps, and (iv) grouping the keypoints based on the PAFs, to detect keypoints per passenger; and (b) inputting the keypoints into a seat occupation matcher, to match the passengers with seats by referring to the inputted keypoints and preset ROIs for the seats and to detect the vehicle occupancy.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 28, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
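Step (b) above, the seat occupation matcher, can be sketched as a containment test: the centroid of each passenger's grouped keypoints is matched against the preset seat ROIs. The centroid rule and the ROI coordinates are assumptions for illustration; the patent only specifies matching keypoints to seat ROIs.

```python
def match_seats(passenger_keypoints, seat_rois):
    """Assign each passenger (a list of (x, y) keypoints) to the seat
    ROI containing the centroid of its keypoints; return occupancy."""
    occupancy = {seat: False for seat in seat_rois}
    for kps in passenger_keypoints:
        cx = sum(x for x, _ in kps) / len(kps)
        cy = sum(y for _, y in kps) / len(kps)
        for seat, (x0, y0, x1, y1) in seat_rois.items():
            if x0 <= cx <= x1 and y0 <= cy <= y1:
                occupancy[seat] = True
                break
    return occupancy

# Hypothetical seat ROIs in image coordinates.
seats = {"front_left": (0, 0, 100, 100), "front_right": (100, 0, 200, 100)}
occ = match_seats([[(10, 10), (30, 50)]], seats)
```

The vehicle occupancy count is then simply the number of `True` entries.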
  • Patent number: 10635915
    Abstract: A method for giving a warning on a blind spot of a vehicle based on V2V communication is provided. The method includes steps of: (a) if a rear video of a first vehicle is acquired from a rear camera, a first blind-spot warning device transmitting the rear video to a blind-spot monitor, to determine whether nearby vehicles are in the rear video using a CNN, and output first blind-spot monitoring information of determining whether the nearby vehicles are in a blind spot; and (b) if second blind-spot monitoring information of determining whether a second vehicle is in the blind spot, is acquired from a second blind-spot warning device of the second vehicle, over the V2V communication, the first blind-spot warning device warning that one of the second vehicle and the nearby vehicles is in the blind spot by referring to the first and the second blind-spot monitoring information.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 28, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
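The warning decision in step (b) above is a disjunction over the two information sources: the ego vehicle's own blind-spot monitor and the V2V-received report. A minimal sketch, with the dictionary key as an assumed field name rather than anything from the patent:

```python
def blind_spot_warning(first_info, second_info):
    """Warn if either the ego vehicle's CNN-based blind-spot monitor
    (first_info) or a V2V-received report from the second vehicle
    (second_info) places a vehicle in the blind spot."""
    return (first_info.get("vehicle_in_blind_spot", False)
            or second_info.get("vehicle_in_blind_spot", False))
```

Using `.get` with a `False` default means a missing or not-yet-received V2V message simply contributes no warning, which seems the safe interpretation when one source is absent.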
  • Patent number: 10627823
    Abstract: A method for learning a sensor fusion network for sensor fusion of an autonomous vehicle performing a cooperative driving is provided.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 21, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10621473
    Abstract: A method for updating an object detecting system to detect objects with untrained classes in real-time is provided. The method includes steps of: (a) the object detecting system, if at least one input image is acquired, instructing a recognizer included therein to generate a specific feature map, and to generate a specific query vector; (b) the object detecting system instructing a similarity determining unit (i) to compare the specific query vector to data vectors, to thereby calculate each of first similarity scores between the specific query vector and each of the data vectors, and (ii) to add a specific partial image to an unknown image DB, if a specific first similarity score is smaller than a first threshold value; (c) the object detecting system, if specific class information is acquired, instructing a short-term update unit to generate a specific short-term update vector, and update the feature fingerprint DB.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: April 14, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
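Step (b) of the abstract above, routing a query vector to a known class or to the unknown-image DB, can be sketched with cosine similarity. The choice of cosine as the first similarity score is an assumption; the patent only says similarity scores against the data vectors are compared to a first threshold.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def route_query(query_vec, fingerprint_db, unknown_db, threshold):
    """Compare the query vector against every data vector; if no score
    reaches the threshold, file it in the unknown DB and return None,
    else return the index of the best-matching data vector."""
    scores = [cosine(query_vec, v) for v in fingerprint_db]
    if not scores or max(scores) < threshold:
        unknown_db.append(query_vec)
        return None
    return scores.index(max(scores))

db = [(1.0, 0.0), (0.0, 1.0)]       # toy feature fingerprint DB
unknown = []
idx1 = route_query((1.0, 0.1), db, unknown, 0.9)   # confident match
idx2 = route_query((0.7, 0.7), db, unknown, 0.9)   # ambiguous -> unknown
```

Once class information for the unknown entries arrives, the short-term update unit described in step (c) would fold them back into the fingerprint DB.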
  • Patent number: 10621476
    Abstract: A method for learning parameters of an object detector based on a CNN adaptable to customer's requirements such as KPI by using a target object estimating network and a target object merging network is provided. The CNN can be redesigned when scales of objects change as a focal length or a resolution changes depending on the KPI. The method includes steps of: a learning device instructing convolutional layers to generate a k-th feature map by applying convolution operations to a k-th manipulated image which corresponds to the (k-1)-th target region on an image; and instructing the target object merging network to merge a first to an n-th object detection information, outputted from an FC layer, and backpropagating losses generated by referring to merged object detection information and its corresponding GT. The method can be useful for multi-camera, SVM (surround view monitor), and the like, as accuracy of 2D bounding boxes improves.
    Type: Grant
    Filed: October 25, 2019
    Date of Patent: April 14, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200090047
    Abstract: A learning method for a CNN (Convolutional Neural Network) capable of encoding at least one training image with multiple feeding layers, wherein the CNN includes a 1st to an n-th convolutional layers, which respectively generate a 1st to an n-th main feature maps by applying convolution operations to the training image, and a 1st to an h-th feeding layers respectively corresponding to h convolutional layers (1≤h≤(n-1)) is provided. The learning method includes steps of: a learning device instructing the convolutional layers to generate the 1st to the n-th main feature maps, wherein the learning device instructs a k-th convolutional layer to acquire a (k-1)-th main feature map and an m-th sub feature map, and to generate a k-th main feature map by applying the convolution operations to the (k-1)-th integrated feature map generated by integrating the (k-1)-th main feature map and the m-th sub feature map.
    Type: Application
    Filed: September 17, 2018
    Publication date: March 19, 2020
    Applicant: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
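The feeding-layer step above (integrate the (k-1)-th main feature map with the m-th sub feature map, then convolve) can be sketched on toy 2D arrays. Element-wise addition is one plausible integration and a plain "valid" cross-correlation stands in for the k-th convolutional layer; neither choice is specified by the abstract.

```python
def integrate(main_fm, sub_fm):
    """Element-wise integration of the (k-1)-th main feature map and
    the m-th sub feature map (addition assumed here)."""
    return [[m + s for m, s in zip(mr, sr)]
            for mr, sr in zip(main_fm, sub_fm)]

def conv2d_valid(fm, kernel):
    """Plain 'valid' 2D cross-correlation, standing in for the k-th
    convolutional layer's operation on the integrated map."""
    kh, kw = len(kernel), len(kernel[0])
    return [[sum(fm[i + a][j + b] * kernel[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(fm[0]) - kw + 1)]
            for i in range(len(fm) - kh + 1)]

main_fm = [[1, 2], [3, 4]]      # toy (k-1)-th main feature map
sub_fm = [[1, 1], [1, 1]]       # toy m-th sub feature map
k_th_main = conv2d_valid(integrate(main_fm, sub_fm), [[1, 0], [0, 1]])
```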
  • Patent number: 10592799
    Abstract: There is provided a method for determining an FL value to be used for optimizing hardware applicable to mobile devices, compact networks, and the like with high precision. The method includes steps of: a computing device (a) applying quantization operations to original values included in an original vector by referring to a BW value and each of FL candidate values, to thereby generate each of quantized vectors, including the quantized values, corresponding to each of the FL candidate values; (b) generating each of weighted quantization loss values, corresponding to each of the FL candidate values, by applying weighted quantization loss operations to information on each of differences between the original values and the quantized values included in each of the quantized vectors; and (c) determining the FL value among the FL candidate values by referring to the weighted quantization loss values. A device using the same is also provided.
    Type: Grant
    Filed: January 23, 2019
    Date of Patent: March 17, 2020
    Assignee: Stradvision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
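The FL (fractional length) selection above is concrete enough to sketch: for each FL candidate, fixed-point-quantize the original vector at bit width BW, score the quantization error with a weighted loss, and keep the candidate with the smallest loss. Weighting each difference by |x| is an assumption here; the patent only calls it a weighted quantization loss over the differences.

```python
def quantize(x, bw, fl):
    """Fixed-point quantization at bit width `bw` and fractional
    length `fl` (two's-complement range assumed)."""
    step = 2.0 ** -fl
    q = round(x / step) * step
    lo = -(2 ** (bw - 1)) * step
    hi = (2 ** (bw - 1) - 1) * step
    return min(max(q, lo), hi)

def select_fl(values, bw, fl_candidates):
    """Return the FL candidate minimizing the weighted quantization
    loss over the original vector."""
    def loss(fl):
        # |x| chosen as the weight: larger originals cost more (assumed).
        return sum(abs(x) * abs(x - quantize(x, bw, fl)) for x in values)
    return min(fl_candidates, key=loss)
```

With values that are exact multiples of 2^-3, an FL of 3 drives the loss to zero, which is why the selection below picks it.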
  • Patent number: 10579907
    Abstract: A method for evaluating a reliability of labeling training images to be used for learning a deep learning network is provided. The method includes steps of: a reliability-evaluating device instructing a similar-image selection network to select validation image candidates with their own true labels having shooting environments similar to those of acquired original images, which are unlabeled images, and instructing an auto-labeling network to auto-label the validation image candidates with their own true labels and the original images; and (i) evaluating a reliability of the auto-labeling network by referring to true labels and auto labels of easy-validation images, and (ii) evaluating a reliability of a manual-labeling device by referring to true labels and manual labels of difficult-validation images. This method can be used to recognize surroundings by applying a bag-of-words model, to optimize sampling processes for selecting a valid image among similar images, and to reduce annotation costs.
    Type: Grant
    Filed: January 31, 2019
    Date of Patent: March 3, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
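The two evaluations in the abstract above, auto labels against true labels on easy-validation images and manual labels against true labels on difficult-validation images, share one primitive: label-agreement rate. A minimal sketch, with the acceptance threshold as an illustrative assumption:

```python
def label_reliability(true_labels, predicted_labels):
    """Fraction of validation images whose auto (or manual) label
    matches the true label."""
    matches = sum(t == p for t, p in zip(true_labels, predicted_labels))
    return matches / len(true_labels)

def labeler_accepted(true_labels, predicted_labels, threshold=0.95):
    """Accept the labeling network/device if its reliability meets a
    set threshold (0.95 here is a placeholder, not from the patent)."""
    return label_reliability(true_labels, predicted_labels) >= threshold
```

Running this once on easy-validation images scores the auto-labeling network, and once on difficult-validation images scores the manual-labeling device, matching the split in the abstract.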
  • Patent number: 10579924
    Abstract: A learning method for a CNN (Convolutional Neural Network) capable of encoding at least one training image with multiple feeding layers, wherein the CNN includes a 1st to an n-th convolutional layers, which respectively generate a 1st to an n-th main feature maps by applying convolution operations to the training image, and a 1st to an h-th feeding layers respectively corresponding to h convolutional layers (1≤h≤(n-1)) is provided. The learning method includes steps of: a learning device instructing the convolutional layers to generate the 1st to the n-th main feature maps, wherein the learning device instructs a k-th convolutional layer to acquire a (k-1)-th main feature map and an m-th sub feature map, and to generate a k-th main feature map by applying the convolution operations to the (k-1)-th integrated feature map generated by integrating the (k-1)-th main feature map and the m-th sub feature map.
    Type: Grant
    Filed: September 17, 2018
    Date of Patent: March 3, 2020
    Assignee: STRADVISION, INC.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho