Patents by Inventor Woonhyun Nam

Woonhyun Nam has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10803333
    Abstract: A method for calculating the exact location of a subject vehicle by using information on relative distances is provided. The method includes steps of: (a) a computing device, if a reference image is acquired through a camera on the subject vehicle, detecting reference objects in the reference image; (b) the computing device calculating image-based reference distances between the reference objects and the subject vehicle, by referring to information on reference bounding boxes, corresponding to the reference objects, on the reference image; and (c) the computing device (i) generating a distance error value by referring to the image-based reference distances and coordinate-based reference distances, and (ii) calibrating subject location information of the subject vehicle by referring to the distance error value.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: October 13, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
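
    The distance-error step of the abstract above can be sketched minimally as follows. This is an illustration only: the pinhole-style bounding-box distance model, the mean-absolute error metric, and all function names are assumptions, not details disclosed by the patent.

```python
import math

def image_based_distance(box_height_px, real_height_m, focal_px):
    # Pinhole-style estimate: distance grows as the reference bounding box shrinks.
    return real_height_m * focal_px / box_height_px

def coord_based_distance(subject_xy, ref_xy):
    # Euclidean distance between map coordinates of the subject and a reference object.
    return math.dist(subject_xy, ref_xy)

def distance_error_value(image_dists, coord_dists):
    # Mean absolute gap between image-based and coordinate-based reference distances.
    return sum(abs(a - b) for a, b in zip(image_dists, coord_dists)) / len(image_dists)
```

    A large error value would then drive the calibration of the subject location information in step (c).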
  • Patent number: 10796571
    Abstract: A method for detecting emergency vehicles in real time, and managing subject vehicles to support the emergency vehicles in driving without interference from the subject vehicles by referring to detected information on the emergency vehicles, is provided. The method includes steps of: (a) a management server generating metadata on the specific emergency vehicle by referring to emergency circumstance information; (b) the management server generating a circumstance scenario vector by referring to the emergency circumstance information and the metadata, comparing the circumstance scenario vector with reference scenario vectors to find a specific scenario vector whose similarity score with the circumstance scenario vector is larger than a threshold, and acquiring an emergency reaction command by referring to the specific scenario vector; and (c) the management server transmitting the emergency reaction command to each of the subject vehicles.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
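
    The scenario-matching step (b) can be illustrated with a similarity search over reference scenario vectors. Cosine similarity and the best-above-threshold selection rule are assumptions here; the patent only requires a similarity score larger than a threshold.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length, nonzero vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def match_scenario(circumstance_vec, reference_vecs, threshold=0.9):
    # Return the index of the best reference scenario whose similarity exceeds
    # the threshold, or None when no scenario qualifies.
    best_index, best_score = None, threshold
    for i, ref in enumerate(reference_vecs):
        score = cosine_similarity(circumstance_vec, ref)
        if score > best_score:
            best_index, best_score = i, score
    return best_index
```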
  • Patent number: 10796434
    Abstract: A method for learning an automatic parking device of a vehicle for detecting an available parking area is provided. The method includes steps of: a learning device, (a) if a parking lot image of an area near the vehicle is acquired, (i) inputting the parking lot image into a segmentation network to output a convolution feature map via an encoder, output a deconvolution feature map by deconvoluting the convolution feature map via a decoder, and output segmentation information by masking the deconvolution feature map via a masking layer; (b) inputting the deconvolution feature map into a regressor to generate relative coordinates of vertices of a specific available parking region, and generate regression location information by regressing the relative coordinates; and (c) instructing a loss layer to calculate 1-st losses by referring to the regression location information and an ROI GT, and learning the regressor via backpropagation using the 1-st losses.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
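
    Step (c)'s 1-st losses over the regressed vertex coordinates might be as simple as an L1 term; the choice of L1 is an assumption, since the abstract does not name the loss function.

```python
def l1_loss(pred_coords, gt_coords):
    # Sum of absolute errors between regressed vertex coordinates and the ROI GT.
    return sum(abs(p - g) for p, g in zip(pred_coords, gt_coords))
```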
  • Patent number: 10796206
    Abstract: A method for integrating images from vehicles performing a cooperative driving is provided. The method includes steps of: a main driving image integrating device on one main vehicle (a) inputting one main driving image into a main object detector to (1) generate one main feature map by applying convolution operation via a main convolutional layer, (2) generate main ROIs via a main region proposal network, (3) generate main pooled feature maps by applying pooling operation via a main pooling layer, and (4) generate main object detection information on the main objects by applying fully-connected operation via a main fully connected layer; (b) inputting the main pooled feature maps into a main confidence network to generate main confidences; and (c) acquiring sub-object detection information and sub-confidences from sub-vehicles, and integrating the main object detection information and the sub-object detection information using the main and sub confidences to generate an object detection result.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: October 6, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
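
    Step (c)'s integration of main and sub detections weighted by their confidences can be sketched as a per-object weighted average. The dict layout {object_id: (score, confidence)} is a hypothetical simplification of the patent's ROI-level integration.

```python
def fuse_detections(main_dets, sub_dets):
    # Confidence-weighted average of per-object detection scores from the
    # main vehicle and one sub-vehicle.
    fused = {}
    for dets in (main_dets, sub_dets):
        for obj_id, (score, conf) in dets.items():
            s, w = fused.get(obj_id, (0.0, 0.0))
            fused[obj_id] = (s + score * conf, w + conf)
    return {obj_id: s / w for obj_id, (s, w) in fused.items()}
```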
  • Patent number: 10783438
    Abstract: A method for on-device continual learning of a neural network which analyzes input data is provided, for use in smartphones, drones, vessels, or military applications. The method includes steps of: a learning device, (a) sampling new data to have a preset first volume, instructing an original data generator network, which has been learned, to repeatedly output synthetic previous data corresponding to a k-dimension random vector and to previous data having been used for learning the original data generator network, such that the synthetic previous data has a second volume, and generating a batch for a current learning; and (b) instructing the neural network to generate output information corresponding to the batch. The method can be performed with generative adversarial networks (GANs), online learning, and the like. The present disclosure has the effects of saving resources such as storage, preventing catastrophic forgetting, and securing privacy.
    Type: Grant
    Filed: June 17, 2019
    Date of Patent: September 22, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
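
    Step (a)'s batch construction, combining a preset first volume of sampled new data with a second volume of generator-produced replay data, can be sketched as below. The zero-argument generator callable is a stand-in for the trained GAN generator.

```python
import random

def build_continual_batch(new_data, generator, first_volume, second_volume, seed=0):
    # One training batch for continual learning: fresh samples plus synthetic
    # "previous" examples drawn from a replay generator.
    rng = random.Random(seed)
    sampled_new = rng.sample(new_data, first_volume)
    synthetic_prev = [generator() for _ in range(second_volume)]
    return sampled_new + synthetic_prev
```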
  • Patent number: 10780897
    Abstract: A method for signaling a driving intention of an autonomous vehicle is provided. The method includes steps of: a driving intention signaling device (a) detecting a pedestrian ahead of the autonomous vehicle using surroundings video images, and determining whether the pedestrian crosses a roadway using a virtual crosswalk; (b) if the pedestrian crosses the roadway, estimating a crosswalking trajectory, corresponding to an expected path of the pedestrian, by referring to a moving trajectory of the pedestrian, setting a driving plan of the autonomous vehicle by referring to driving information and the crosswalking trajectory, and allowing the autonomous vehicle to self-drive by the driving plan; and (c) determining whether the pedestrian pays attention to the autonomous vehicle by referring to gaze patterns and, if not, allowing delivery of the driving intention to the pedestrian and/or a nearby driver, via an external display and/or an external speaker.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 22, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
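
    A constant-velocity extrapolation is one simple way to turn the pedestrian's observed moving trajectory into an expected crosswalking path. The patent does not disclose its trajectory model, so this sketch is purely illustrative.

```python
def extrapolate_trajectory(points, steps):
    # Extend a path linearly from its last two observed (x, y) positions,
    # assuming constant velocity per time step.
    (x0, y0), (x1, y1) = points[-2], points[-1]
    vx, vy = x1 - x0, y1 - y0
    return [(x1 + vx * s, y1 + vy * s) for s in range(1, steps + 1)]
```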
  • Patent number: 10776673
    Abstract: A method for training a CNN by using a camera and a radar together, so that the CNN can perform properly even when the object depiction ratio of a photographed image acquired through the camera is low due to bad photographing conditions, is provided. The method includes steps of: (a) a learning device instructing a convolutional layer to apply a convolutional operation to a multichannel integrated image, to thereby generate a feature map; (b) the learning device instructing an output layer to apply an output operation to the feature map, to thereby generate estimated object information; and (c) the learning device instructing a loss layer to generate a loss by using the estimated object information and GT object information corresponding thereto, and to perform backpropagation by using the loss, to thereby learn at least part of the parameters in the CNN.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
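
    The multichannel integrated image in step (a) can be pictured as camera planes stacked with a radar-derived plane. Representing planes as nested lists of equal shape is an assumption made here for illustration; the patent does not specify the data layout.

```python
def build_multichannel_image(camera_channels, radar_channel):
    # Stack camera planes (e.g. RGB) with one radar-derived plane into a single
    # multichannel input; every plane must share the same 2-D shape.
    h, w = len(radar_channel), len(radar_channel[0])
    for plane in camera_channels:
        assert len(plane) == h and len(plane[0]) == w, "all planes must share a shape"
    return list(camera_channels) + [radar_channel]
```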
  • Patent number: 10776542
    Abstract: A method for calibrating a physics engine of a virtual world simulator for learning of a deep learning-based device is provided.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10776647
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores representing the credibility of an object detection, which are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one panorama view sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
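
    One minimal reading of step (c) is to keep only those detections whose parallel confidence score clears a threshold. The patent leaves the exact merge rule open, so the simple threshold rule below is an assumption.

```python
def final_object_info(initial_objects, initial_confidences, threshold=0.5):
    # Filter initial detections by their parallel confidence scores.
    return [obj for obj, conf in zip(initial_objects, initial_confidences)
            if conf >= threshold]
```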
  • Patent number: 10779139
    Abstract: A method for V2V communication by using a radar module used for detecting nearby objects is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing the radar module to transmit 1-st transmitting signals by referring to at least one 1-st schedule and (ii) a process of generating RVA information by using (1-1)-st receiving signals corresponding to the 1-st transmitting signals; and (b) the computing device performing a process of instructing the radar module to transmit 2-nd transmitting signals by referring to at least one 2-nd schedule.
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10768638
    Abstract: A method for switching driving modes of a subject vehicle to support the subject vehicle in performing platoon driving by using platoon driving information is provided. The method includes steps of: (a) a basement server, which interworks with the subject vehicle driving in a first mode, acquiring first to N-th platoon driving information by referring to a real-time platoon driving information DB; (b) the basement server (i) calculating first to N-th platoon driving suitability scores by referring to first to N-th platoon driving parameters and (ii) selecting a target platoon driving group to include the subject vehicle; and (c) the basement server instructing the subject vehicle to drive in a second mode.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: September 8, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
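
    Step (b)'s selection of the target platoon driving group amounts to scoring each group's driving parameters and taking the best one. The mean-of-parameters default below is a stand-in for the patent's undisclosed suitability function.

```python
def select_target_platoon(parameter_sets, score_fn=None):
    # Score each platoon's parameter dict and return the index of the best group.
    if score_fn is None:
        score_fn = lambda p: sum(p.values()) / len(p)  # placeholder suitability
    scores = [score_fn(p) for p in parameter_sets]
    return max(range(len(scores)), key=scores.__getitem__)
```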
  • Patent number: 10762393
    Abstract: A method for learning an automatic labeling device for auto-labeling a base image of a base vehicle using sub-images of nearby vehicles is provided. The method includes steps of: a learning device inputting the base image and the sub-images into previously trained dense correspondence networks to generate dense correspondences, and into encoders to output convolution feature maps; inputting the convolution feature maps into decoders to output deconvolution feature maps; with an integer k from 1 to n, generating a k-th adjusted deconvolution feature map by translating coordinates of a (k+1)-th deconvolution feature map using a k-th dense correspondence; generating a concatenated feature map by concatenating the 1-st deconvolution feature map and the adjusted deconvolution feature maps; and inputting the concatenated feature map into a masking layer to output a semantic segmentation image, instructing a 1-st loss layer to calculate 1-st losses, and updating decoder weights and encoder weights.
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: September 1, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
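
    The coordinate translation of a deconvolution feature map by a dense correspondence can be sketched on a small 2-D grid. The sparse dict of (y, x) -> (dy, dx) offsets is a toy stand-in for a dense correspondence field; unmatched cells simply stay zero here.

```python
def translate_feature_map(feature_map, correspondence):
    # Move each matched cell of a 2-D feature map by its correspondence offset;
    # offsets that land outside the map are dropped.
    h, w = len(feature_map), len(feature_map[0])
    out = [[0] * w for _ in range(h)]
    for (y, x), (dy, dx) in correspondence.items():
        ny, nx = y + dy, x + dx
        if 0 <= ny < h and 0 <= nx < w:
            out[ny][nx] = feature_map[y][x]
    return out
```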
  • Patent number: 10748032
    Abstract: A method for enhancing the accuracy of object distance estimation based on a subject camera, by performing pitch calibration of the subject camera more precisely with additional information acquired through V2V communication, is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing an initial pitch calibration module to apply a pitch calculation operation to the reference image, to thereby generate an initial estimated pitch, and (ii) a process of instructing an object detection network to apply a neural network operation to the reference image, to thereby generate reference object detection information; and (b) the computing device instructing an adjusting pitch calibration module to (i) select a target object, (ii) calculate an estimated target height of the target object, (iii) calculate an error corresponding to the initial estimated pitch, and (iv) determine an adjusted estimated pitch of the subject camera by using the error.
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: August 18, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
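
    Steps (b)(iii)-(iv), deriving an error from the estimated versus true target height and adjusting the pitch, can be reduced to a single proportional update. The gain and the linear error-to-pitch mapping are assumptions; the patent's actual error term comes from the V2V-reported target height.

```python
def adjusted_pitch(initial_pitch, estimated_height, true_height, gain=0.1):
    # One proportional correction: if the pitch estimate makes the target look
    # too tall, tilt the estimate down, and vice versa.
    error = estimated_height - true_height
    return initial_pitch - gain * error
```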
  • Patent number: 10740593
    Abstract: A method for face recognition by using a multiple patch combination based on a deep neural network is provided. The method includes steps of: a face-recognizing device, (a) if a face image with a 1-st size is acquired, inputting the face image into a feature extraction network, to allow the feature extraction network to generate a feature map by applying a convolution operation to the face image with the 1-st size, and to generate multiple features by applying a sliding-pooling operation to the feature map, wherein the feature extraction network has been learned to extract features using a face image for training having a 2-nd size, and wherein the 2-nd size is smaller than the 1-st size; and (b) inputting the multiple features into a learned neural aggregation network, to allow the neural aggregation network to aggregate the multiple features and to output an optimal feature for the face recognition.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 11, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
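
    The sliding-pooling operation in step (a) can be illustrated with average pooling over sliding windows of a 2-D feature map. Window size, stride, and the choice of average pooling are assumptions, since the abstract names only "sliding-pooling".

```python
def sliding_windows(length, window, stride):
    # Start offsets of sliding windows along one axis.
    return list(range(0, length - window + 1, stride))

def sliding_pool_features(feature_map, window, stride):
    # Average-pool every window x window patch of a 2-D feature map, emulating
    # the multiple per-patch features fed to the aggregation network.
    feats = []
    for y in sliding_windows(len(feature_map), window, stride):
        for x in sliding_windows(len(feature_map[0]), window, stride):
            patch = [feature_map[y + dy][x + dx]
                     for dy in range(window) for dx in range(window)]
            feats.append(sum(patch) / len(patch))
    return feats
```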
  • Publication number: 20200252770
    Abstract: A method for V2V communication by using a radar module used for detecting nearby objects is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing the radar module to transmit 1-st transmitting signals by referring to at least one 1-st schedule and (ii) a process of generating RVA information by using (1-1)-st receiving signals corresponding to the 1-st transmitting signals; and (b) the computing device performing a process of instructing the radar module to transmit 2-nd transmitting signals by referring to at least one 2-nd schedule.
    Type: Application
    Filed: December 31, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200250526
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores representing the credibility of an object detection, which are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one image sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information, with the support of a Reinforcement Learning (RL) agent and through V2X communications with at least part of the surrounding objects.
    Type: Application
    Filed: January 9, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200250986
    Abstract: A method for generating a lane departure warning (LDW) alarm by referring to information on a driving situation is provided, for use in ADAS, V2X, or driver-safety systems required to satisfy levels 4 and 5 of autonomous driving. The method includes steps of: a computing device instructing an LDW system (i) to collect information on the driving situation, including information on whether a specific spot corresponding to a side mirror, on the side of the lane into which the driver desires to change, belongs to a virtual viewing frustum of the driver, and (ii) to generate risk information on the lane change by referring to the information on the driving situation; and instructing the LDW system to generate the LDW alarm by referring to the risk information. Thus, the LDW alarm can be provided to neighboring autonomous vehicles of levels 4 and 5.
    Type: Application
    Filed: December 26, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Insu Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Donghun Yeo, Wooju Ryu, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
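
    The virtual viewing frustum test, i.e. whether the side-mirror spot falls inside what the driver can see, can be reduced in two dimensions to an angular field-of-view check. The planar geometry and the field-of-view parameterization are illustrative assumptions, not the patent's construction.

```python
import math

def in_viewing_frustum(driver_xy, heading_rad, point_xy, half_fov_rad):
    # True when point_xy lies within half_fov_rad of the driver's heading,
    # a 2-D stand-in for the virtual viewing frustum.
    dx, dy = point_xy[0] - driver_xy[0], point_xy[1] - driver_xy[1]
    angle = math.atan2(dy, dx)
    # Wrap the angular difference into [-pi, pi] before comparing.
    diff = abs((angle - heading_rad + math.pi) % (2 * math.pi) - math.pi)
    return diff <= half_fov_rad
```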
  • Publication number: 20200250470
    Abstract: A method for enhancing the accuracy of object distance estimation based on a subject camera, by performing pitch calibration of the subject camera more precisely with additional information acquired through V2V communication, is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing an initial pitch calibration module to apply a pitch calculation operation to the reference image, to thereby generate an initial estimated pitch, and (ii) a process of instructing an object detection network to apply a neural network operation to the reference image, to thereby generate reference object detection information; and (b) the computing device instructing an adjusting pitch calibration module to (i) select a target object, (ii) calculate an estimated target height of the target object, (iii) calculate an error corresponding to the initial estimated pitch, and (iv) determine an adjusted estimated pitch of the subject camera by using the error.
    Type: Application
    Filed: December 23, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200250982
    Abstract: A method for warning by detecting an abnormal state of a driver of a vehicle based on deep learning is provided. The method includes steps of: a driver state detecting device (a) inputting an interior image of the vehicle into a drowsiness detecting network, to detect a facial part of the driver, detect an eye part from the facial part, and detect the blinking state of an eye to determine a drowsiness state, and inputting the interior image into a pose matching network, to detect body keypoints of the driver and determine whether the body keypoints match one of the preset driving postures, to determine the abnormal state; and (b) if the driver is in a hazardous state, by referring to at least part of the drowsiness state and the abnormal state, transmitting information on the hazardous state to nearby vehicles over vehicle-to-vehicle communication so that nearby drivers can perceive the hazardous state.
    Type: Application
    Filed: January 9, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
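
    The blinking-state check feeding the drowsiness decision can be sketched as counting long blinks in a window of observations. The 0.4 s duration threshold and the count rule are illustrative values, not parameters from the patent.

```python
def drowsiness_state(blink_durations_s, long_blink_s=0.4, max_long_blinks=3):
    # Flag drowsiness when too many blinks last longer than a duration threshold.
    long_blinks = sum(1 for d in blink_durations_s if d >= long_blink_s)
    return long_blinks >= max_long_blinks
```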
  • Publication number: 20200250450
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores representing the credibility of an object detection, which are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one panorama view sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information.
    Type: Application
    Filed: December 31, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho