Patents by Inventor Myeong-Chun Lee

Myeong-Chun Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 10780897
    Abstract: A method for signaling a driving intention of an autonomous vehicle is provided. The method includes steps of: a driving intention signaling device (a) detecting a pedestrian ahead of the autonomous vehicle using surrounding video images, and determining whether the pedestrian is crossing a roadway using a virtual crosswalk; (b) if the pedestrian is crossing the roadway, estimating a crosswalking trajectory, corresponding to an expected path of the pedestrian, by referring to the pedestrian's moving trajectory, setting a driving plan of the autonomous vehicle by referring to driving information and the crosswalking trajectory, and allowing the autonomous vehicle to self-drive according to the driving plan; and (c) determining whether the pedestrian is paying attention to the autonomous vehicle by referring to gaze patterns and, if not, delivering the driving intention to the pedestrian and/or a nearby driver via an external display and/or an external speaker. (A minimal sketch of step (c) follows this entry.)
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 22, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
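    A minimal Python sketch of the gating logic in step (c), assuming hypothetical Pedestrian fields and channel names that do not come from the patent; it illustrates only when the intention is pushed to the external display and speaker, not StradVision's actual implementation.
    ```python
    from dataclasses import dataclass

    # Hypothetical data holders; the patent does not define these structures.
    @dataclass
    class Pedestrian:
        crossing: bool          # result of the virtual-crosswalk check in step (a)
        gaze_on_vehicle: bool   # result of the gaze-pattern check in step (c)

    def signal_driving_intention(ped: Pedestrian, intention: str) -> list[str]:
        """Return the output channels used to deliver the driving intention.

        Mirrors step (c) of the abstract: the intention is pushed to the
        external display/speaker only when the crossing pedestrian is not
        paying attention to the autonomous vehicle.
        """
        channels: list[str] = []
        if ped.crossing and not ped.gaze_on_vehicle:
            channels = ["external_display", "external_speaker"]
            for channel in channels:
                print(f"[{channel}] {intention}")
        return channels

    # Example: a crossing pedestrian who is looking away triggers both channels.
    signal_driving_intention(Pedestrian(crossing=True, gaze_on_vehicle=False),
                             "Yielding: vehicle will stop at the crosswalk")
    ```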
  • Patent number: 10776673
    Abstract: A method for training a CNN by using a camera and a radar together, so that the CNN can perform properly even when the object depiction ratio of a photographed image acquired through the camera is low due to poor photographing conditions, is provided. The method includes steps of: (a) a learning device instructing a convolutional layer to apply a convolutional operation to a multichannel integrated image, to thereby generate a feature map; (b) the learning device instructing an output layer to apply an output operation to the feature map, to thereby generate estimated object information; and (c) the learning device instructing a loss layer to generate a loss by using the estimated object information and its corresponding GT object information, and to perform backpropagation by using the loss, to thereby learn at least part of the parameters of the CNN. (A minimal training-step sketch follows this entry.)
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
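    A rough PyTorch sketch of steps (a)-(c), assuming an arbitrary 4-channel integrated image (e.g. 3 camera channels plus 1 radar channel) and toy layer sizes; none of these shapes are taken from the patent.
    ```python
    import torch
    import torch.nn as nn

    # Assumed shapes: 3 camera channels + 1 radar channel stacked into one
    # "multichannel integrated image"; the real channel layout is not given.
    multichannel_image = torch.randn(8, 4, 64, 64)        # batch of integrated images
    gt_object_info = torch.randint(0, 10, (8,))           # placeholder GT labels

    conv_layer = nn.Sequential(                           # step (a): convolution -> feature map
        nn.Conv2d(4, 16, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten())
    output_layer = nn.Linear(16, 10)                      # step (b): estimated object information
    loss_layer = nn.CrossEntropyLoss()                    # step (c): loss from estimate vs. GT

    optimizer = torch.optim.SGD(
        list(conv_layer.parameters()) + list(output_layer.parameters()), lr=1e-3)

    feature_map = conv_layer(multichannel_image)
    estimated_object_info = output_layer(feature_map)
    loss = loss_layer(estimated_object_info, gt_object_info)
    loss.backward()                                       # backpropagation learns the CNN parameters
    optimizer.step()
    ```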
  • Patent number: 10776647
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores that represent the credibility of an object detection and are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one panorama view sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information. (A minimal sketch of step (c) follows this entry.)
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
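    A small sketch of step (c), under the assumption that the confidence information is a per-box score and that the fusion is a simple threshold; the actual fusion rule is not specified in the abstract.
    ```python
    import numpy as np

    # Hypothetical outputs of the CNN in step (b): one row per candidate box
    # (x1, y1, x2, y2, class_id) plus a parallel confidence score per box.
    initial_object_info = np.array([[10, 10, 50, 60, 1],
                                    [30, 40, 80, 90, 2],
                                    [ 5,  5, 20, 25, 1]], dtype=float)
    initial_confidence_info = np.array([0.92, 0.35, 0.71])

    def fuse(objects: np.ndarray, confidences: np.ndarray, threshold: float = 0.5):
        """Step (c): final object information keeps only credible detections."""
        keep = confidences >= threshold
        return objects[keep], confidences[keep]

    final_object_info, final_conf = fuse(initial_object_info, initial_confidence_info)
    print(final_object_info)   # the low-confidence middle detection is dropped
    ```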
  • Patent number: 10779139
    Abstract: A method for V2V communication using a radar module that also detects nearby objects is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing the radar module to transmit 1-st transmitting signals by referring to at least one 1-st schedule and (ii) a process of generating RVA information by using (1-1)-st receiving signals corresponding to the 1-st transmitting signals; and (b) the computing device performing a process of instructing the radar module to transmit 2-nd transmitting signals by referring to at least one 2-nd schedule. (A minimal scheduling sketch follows this entry.)
    Type: Grant
    Filed: December 31, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
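    An illustrative sketch of the two-schedule idea, assuming a toy time-slot layout and reading RVA as range/velocity/angle; both are assumptions, since the abstract defines neither.
    ```python
    # Hypothetical time-division schedules: even slots carry ordinary radar
    # ranging (1-st transmitting signals) whose echoes yield RVA information,
    # while odd slots are reused to carry V2V payload bits (2-nd transmitting
    # signals). The slot layout is invented for illustration.
    FIRST_SCHEDULE = {t for t in range(10) if t % 2 == 0}
    SECOND_SCHEDULE = {t for t in range(10) if t % 2 == 1}

    def run_radar_cycle(payload_bits: str) -> list[tuple[int, str]]:
        log = []
        for t in range(10):
            if t in FIRST_SCHEDULE:
                log.append((t, "1-st signal -> echo -> RVA information"))
            elif t in SECOND_SCHEDULE and payload_bits:
                bit, payload_bits = payload_bits[0], payload_bits[1:]
                log.append((t, f"2-nd signal carrying V2V bit {bit}"))
        return log

    for slot in run_radar_cycle("1011"):
        print(slot)
    ```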
  • Patent number: 10776542
    Abstract: A method for calibrating a physics engine of a virtual world simulator used for training a deep learning-based device is provided.
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: September 15, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Patent number: 10768638
    Abstract: A method for switching driving modes of a subject vehicle, in order to support the subject vehicle in performing platoon driving by using platoon driving information, is provided. The method includes steps of: (a) a basement server, which interworks with the subject vehicle driving in a first mode, acquiring first platoon driving information to N-th platoon driving information by referring to a real-time platoon driving information DB; (b) the basement server (i) calculating a first platoon driving suitability score to an N-th platoon driving suitability score by referring to first platoon driving parameters to N-th platoon driving parameters and (ii) selecting a target platoon driving group to include the subject vehicle; and (c) the basement server instructing the subject vehicle to drive in a second mode. (A minimal scoring sketch follows this entry.)
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: September 8, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
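    A toy sketch of step (b): score each candidate platoon group and pick the best one. The parameters and weights are invented for illustration; the patent does not disclose the scoring formula.
    ```python
    # Hypothetical platoon-driving parameters per candidate group.
    platoon_groups = {
        "group_1": {"speed_gap": 5.0, "route_overlap": 0.9, "headway": 1.2},
        "group_2": {"speed_gap": 12.0, "route_overlap": 0.6, "headway": 0.8},
        "group_3": {"speed_gap": 2.0, "route_overlap": 0.95, "headway": 1.5},
    }

    def suitability_score(params: dict) -> float:
        # Reward route overlap and safe headway, penalise speed mismatch
        # (weights are assumptions, not taken from the patent).
        return params["route_overlap"] * 10 + params["headway"] - params["speed_gap"] * 0.5

    scores = {name: suitability_score(p) for name, p in platoon_groups.items()}
    target_group = max(scores, key=scores.get)    # step (b)(ii): select the target group
    print(scores, "->", target_group)             # the server would now switch the
                                                  # subject vehicle to the second mode
    ```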
  • Patent number: 10762393
    Abstract: A method for training an automatic labeling device that auto-labels a base image of a base vehicle using sub-images of nearby vehicles is provided. The method includes steps of: a learning device inputting the base image and the sub-images into previously trained dense correspondence networks to generate dense correspondences, and into encoders to output convolution feature maps; inputting the convolution feature maps into decoders to output deconvolution feature maps; with an integer k from 1 to n, generating a k-th adjusted deconvolution feature map by translating coordinates of a (k+1)-th deconvolution feature map using a k-th dense correspondence; generating a concatenated feature map by concatenating the 1-st deconvolution feature map and the adjusted deconvolution feature maps; and inputting the concatenated feature map into a masking layer to output a semantic segmentation image, instructing a 1-st loss layer to calculate 1-st losses, and updating decoder weights and encoder weights. (A minimal warp-and-concatenate sketch follows this entry.)
    Type: Grant
    Filed: January 10, 2020
    Date of Patent: September 1, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
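    A NumPy sketch of the coordinate-translation and concatenation steps, assuming the dense correspondence is a per-pixel integer offset field and using nearest-neighbour gathering; the patent does not fix the warp.
    ```python
    import numpy as np

    def adjust_feature_map(feature_map: np.ndarray, correspondence: np.ndarray) -> np.ndarray:
        """Translate a (C, H, W) deconvolution feature map by a per-pixel offset
        field (2, H, W) so it lines up with the base image's feature map.

        Mirrors the "k-th adjusted deconvolution feature map" step; the
        nearest-neighbour gather is an assumption for illustration.
        """
        C, H, W = feature_map.shape
        ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
        src_y = np.clip(ys + correspondence[0].round().astype(int), 0, H - 1)
        src_x = np.clip(xs + correspondence[1].round().astype(int), 0, W - 1)
        return feature_map[:, src_y, src_x]

    base_map = np.random.rand(8, 16, 16)                 # 1-st deconvolution feature map
    sub_map = np.random.rand(8, 16, 16)                  # (k+1)-th deconvolution feature map
    dense_corr = np.random.randint(-2, 3, (2, 16, 16))   # k-th dense correspondence

    adjusted = adjust_feature_map(sub_map, dense_corr)
    concatenated = np.concatenate([base_map, adjusted], axis=0)   # concatenated feature map
    print(concatenated.shape)                                     # (16, 16, 16)
    ```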
  • Patent number: 10748032
    Abstract: A method for enhancing the accuracy of object distance estimation based on a subject camera, by performing pitch calibration of the subject camera more precisely with additional information acquired through V2V communication, is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing an initial pitch calibration module to apply a pitch calculation operation to a reference image, to thereby generate an initial estimated pitch, and (ii) a process of instructing an object detection network to apply a neural network operation to the reference image, to thereby generate reference object detection information; and (b) the computing device instructing an adjusting pitch calibration module to (i) select a target object, (ii) calculate an estimated target height of the target object, (iii) calculate an error corresponding to the initial estimated pitch, and (iv) determine an adjusted estimated pitch of the subject camera by using the error. (A minimal numeric sketch follows this entry.)
    Type: Grant
    Filed: December 23, 2019
    Date of Patent: August 18, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
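    A numeric sketch of steps (b)(ii)-(iv) under an assumed small-angle pinhole model with made-up numbers; the real geometry and update rule are not given in the abstract.
    ```python
    # Assumed pinhole model: the image row (measured from the principal row) of
    # a road point at distance d is roughly f * (pitch + h_cam / d) for small
    # angles. Every number below is illustrative.
    f, h_cam = 1000.0, 1.5                 # focal length (px), camera height (m)
    row_bottom, row_top = 110.0, 13.3      # detected box of the target object (px)
    height_v2v = 1.45                      # target height received over V2V (m)
    initial_pitch = 0.020                  # initial estimated pitch from step (a) (rad)

    def estimated_height(pitch: float) -> float:
        d = f * h_cam / (row_bottom - f * pitch)   # (ii) distance from the box bottom
        return (row_bottom - row_top) * d / f      #      then height from the box extent

    error = estimated_height(initial_pitch) - height_v2v                  # (iii)
    # (iv) one Newton-style correction of the pitch so the heights agree
    # (the correction rule is an assumption, not the patent's).
    adjusted_pitch = initial_pitch - error * (row_bottom - f * initial_pitch) / (
        f * estimated_height(initial_pitch))
    print(f"height error {error:+.3f} m -> adjusted pitch {adjusted_pitch:.4f} rad")
    ```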
  • Patent number: 10740593
    Abstract: A method for face recognition using a multiple-patch combination based on a deep neural network is provided. The method includes steps of: a face-recognizing device (a) if a face image with a 1-st size is acquired, inputting the face image into a feature extraction network, to allow the feature extraction network to generate a feature map by applying a convolution operation to the face image with the 1-st size and to generate multiple features by applying a sliding-pooling operation to the feature map, wherein the feature extraction network has been trained to extract features using training face images having a 2-nd size, and wherein the 2-nd size is smaller than the 1-st size; and (b) inputting the multiple features into a trained neural aggregation network, to allow the neural aggregation network to aggregate the multiple features and to output an optimal feature for the face recognition. (A minimal sliding-pooling sketch follows this entry.)
    Type: Grant
    Filed: December 20, 2019
    Date of Patent: August 11, 2020
    Assignee: StradVision, Inc.
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
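    A PyTorch sketch of the sliding-pooling idea, with assumed image and window sizes and a plain mean standing in for the neural aggregation network.
    ```python
    import torch
    import torch.nn as nn

    # The feature extractor was trained on small (2-nd size) faces; at test time
    # a larger (1-st size) face yields a bigger feature map, over which fixed-size
    # windows are slid and pooled into multiple features. All sizes, and the use
    # of a plain mean as the aggregation step, are assumptions for illustration.
    feature_extractor = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU())

    large_face = torch.randn(1, 3, 160, 160)           # 1-st size input face image
    feature_map = feature_extractor(large_face)        # (1, 32, 160, 160)

    window = 112                                       # spatial extent matching the 2-nd size
    patches = feature_map.unfold(2, window, 24).unfold(3, window, 24)   # sliding windows
    patches = patches.contiguous().view(32, -1, window, window).permute(1, 0, 2, 3)
    multiple_features = patches.mean(dim=(2, 3))       # sliding-pooling: one vector per window

    optimal_feature = multiple_features.mean(dim=0)    # stand-in for the neural aggregation network
    print(multiple_features.shape, optimal_feature.shape)
    ```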
  • Publication number: 20200252770
    Abstract: A method for V2V communication using a radar module that also detects nearby objects is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing the radar module to transmit 1-st transmitting signals by referring to at least one 1-st schedule and (ii) a process of generating RVA information by using (1-1)-st receiving signals corresponding to the 1-st transmitting signals; and (b) the computing device performing a process of instructing the radar module to transmit 2-nd transmitting signals by referring to at least one 2-nd schedule.
    Type: Application
    Filed: December 31, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200250982
    Abstract: A method for warning about an abnormal state of a driver of a vehicle, detected based on deep learning, is provided. The method includes steps of: a driver state detecting device (a) inputting an interior image of the vehicle into a drowsiness detecting network to detect the facial part of the driver, detect an eye part from the facial part, and detect a blinking state of the eye to determine a drowsiness state, and inputting the interior image into a pose matching network to detect body keypoints of the driver and determine whether the body keypoints match one of the preset driving postures, to thereby determine the abnormal state; and (b) if the driver is in a hazardous state, determined by referring to at least part of the drowsiness state and the abnormal state, transmitting information on the hazardous state to nearby vehicles over vehicle-to-vehicle communication to allow nearby drivers to perceive the hazardous state. (A minimal decision sketch follows this entry.)
    Type: Application
    Filed: January 9, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
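    A minimal sketch of the hazardous-state decision in steps (a)-(b), with invented network outputs, thresholds, and fusion rule.
    ```python
    from dataclasses import dataclass

    # Hypothetical per-frame outputs of the two networks in the abstract; the
    # fusion rule and the threshold below are assumptions for illustration.
    @dataclass
    class DriverFrame:
        eyes_closed_ratio: float        # from the drowsiness detecting network
        pose_matches_driving: bool      # from the pose matching network

    def hazardous_state(frame: DriverFrame, blink_threshold: float = 0.7) -> bool:
        drowsy = frame.eyes_closed_ratio > blink_threshold
        abnormal = not frame.pose_matches_driving
        return drowsy or abnormal

    def broadcast_v2v(message: str) -> None:
        # Placeholder for the vehicle-to-vehicle transmission in step (b).
        print(f"V2V broadcast: {message}")

    frame = DriverFrame(eyes_closed_ratio=0.85, pose_matches_driving=True)
    if hazardous_state(frame):
        broadcast_v2v("driver in hazardous state - keep distance")
    ```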
  • Publication number: 20200250450
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores that represent the credibility of an object detection and are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one panorama view sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information.
    Type: Application
    Filed: December 31, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200252550
    Abstract: A method for correcting an incorrect angle of a camera is provided. The method includes steps of: (a) a computing device generating first reference data or second reference data, according to circumstance information, by referring to a reference image; (b) the computing device generating a first angle error or a second angle error by referring to the first reference data or the second reference data together with vehicle coordinate data; and (c) the computing device instructing a physical rotation module to adjust the incorrect angle by referring to the first angle error or the second angle error. (A minimal correction-loop sketch follows this entry.)
    Type: Application
    Filed: January 10, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
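    A toy sketch of the angle-error-and-rotate loop in steps (b)-(c); the reference data is stood in for by a single landmark heading, which is an assumption, not the patent's construction.
    ```python
    # The "reference data" is taken here to be the image heading of a known
    # landmark, and the "vehicle coordinate data" its heading in the vehicle
    # frame; both are made-up stand-ins for illustration.
    def angle_error(reference_heading_deg: float, vehicle_heading_deg: float) -> float:
        return (reference_heading_deg - vehicle_heading_deg + 180) % 360 - 180

    class PhysicalRotationModule:
        def __init__(self, yaw_deg: float = 0.0):
            self.yaw_deg = yaw_deg
        def adjust(self, error_deg: float) -> None:
            self.yaw_deg -= error_deg      # rotate the camera to cancel the error

    error = angle_error(reference_heading_deg=2.5, vehicle_heading_deg=0.0)   # step (b)
    mount = PhysicalRotationModule()
    mount.adjust(error)                    # step (c): correct the incorrect angle
    print(f"error {error:+.1f} deg -> new yaw {mount.yaw_deg:+.1f} deg")
    ```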
  • Publication number: 20200247434
    Abstract: A method for signaling a driving intention of an autonomous vehicle is provided. The method includes steps of: a driving intention signaling device (a) detecting a pedestrian ahead of the autonomous vehicle using surrounding video images, and determining whether the pedestrian is crossing a roadway using a virtual crosswalk; (b) if the pedestrian is crossing the roadway, estimating a crosswalking trajectory, corresponding to an expected path of the pedestrian, by referring to the pedestrian's moving trajectory, setting a driving plan of the autonomous vehicle by referring to driving information and the crosswalking trajectory, and allowing the autonomous vehicle to self-drive according to the driving plan; and (c) determining whether the pedestrian is paying attention to the autonomous vehicle by referring to gaze patterns and, if not, delivering the driving intention to the pedestrian and/or a nearby driver via an external display and/or an external speaker.
    Type: Application
    Filed: December 31, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200250853
    Abstract: A method for supporting at least one administrator in evaluating the detecting processes of object detectors, in order to provide logical grounds for autonomous driving, is provided. The method includes steps of: (a) a computing device instructing convolutional layers, included in an object detecting CNN which has been trained beforehand, to generate reference convolutional feature maps by applying convolutional operations to reference images inputted thereto, and instructing ROI pooling layers included therein to generate reference ROI-pooled feature maps by pooling at least part of the values corresponding to ROIs on the reference convolutional feature maps; and (b) the computing device instructing a representative selection unit to classify the reference ROI-pooled feature maps by referring to information on the classes of the objects included in their corresponding ROIs on the reference images, and to generate at least one representative feature map per class. (A minimal per-class aggregation sketch follows this entry.)
    Type: Application
    Filed: December 23, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
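    A short sketch of step (b): group ROI-pooled features by class and take the per-class mean as the representative feature map. The mean is an assumed stand-in for whatever the representative selection unit actually computes.
    ```python
    import numpy as np
    from collections import defaultdict

    # Hypothetical ROI-pooled feature maps (one 256-d pooled vector per ROI) and
    # the class of the object inside each ROI.
    roi_pooled = [np.random.rand(256) for _ in range(6)]
    roi_classes = ["car", "car", "pedestrian", "car", "pedestrian", "cyclist"]

    grouped = defaultdict(list)
    for feat, cls in zip(roi_pooled, roi_classes):
        grouped[cls].append(feat)                        # classify by object class

    representative = {cls: np.mean(feats, axis=0)        # one representative map per class
                      for cls, feats in grouped.items()}
    print({cls: v.shape for cls, v in representative.items()})
    ```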
  • Publication number: 20200250470
    Abstract: A method for enhancing the accuracy of object distance estimation based on a subject camera, by performing pitch calibration of the subject camera more precisely with additional information acquired through V2V communication, is provided. The method includes steps of: (a) a computing device performing (i) a process of instructing an initial pitch calibration module to apply a pitch calculation operation to a reference image, to thereby generate an initial estimated pitch, and (ii) a process of instructing an object detection network to apply a neural network operation to the reference image, to thereby generate reference object detection information; and (b) the computing device instructing an adjusting pitch calibration module to (i) select a target object, (ii) calculate an estimated target height of the target object, (iii) calculate an error corresponding to the initial estimated pitch, and (iv) determine an adjusted estimated pitch of the subject camera by using the error.
    Type: Application
    Filed: December 23, 2019
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
  • Publication number: 20200249675
    Abstract: A method for providing a dynamic, adaptive deep learning model rather than a fixed deep learning model, to thereby support at least one specific autonomous vehicle in performing proper autonomous driving according to surrounding circumstances, is provided. The method includes steps of: (a) a managing device, which interworks with autonomous vehicles, instructing a fine-tuning system to acquire a specific deep learning model to be updated; (b) the managing device inputting video data and its corresponding labeled data into the fine-tuning system as training data, to thereby update the specific deep learning model; and (c) the managing device instructing an automatic updating system to transmit the updated specific deep learning model to the specific autonomous vehicle, to thereby support the specific autonomous vehicle in performing the autonomous driving by using the updated specific deep learning model rather than a legacy deep learning model. (A minimal update-flow sketch follows this entry.)
    Type: Application
    Filed: January 9, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
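    A skeleton of the three steps in the abstract with every component mocked; the class and method names are invented, not taken from the patent.
    ```python
    class FineTuningSystem:
        def fine_tune(self, model: dict, video_data, labeled_data) -> dict:
            updated = dict(model)
            updated["version"] += 1        # stand-in for the actual fine-tuning
            return updated

    class AutomaticUpdatingSystem:
        def transmit(self, vehicle_id: str, model: dict) -> None:
            print(f"pushing model v{model['version']} to {vehicle_id}")

    def manage_update(vehicle_id: str, legacy_model: dict, video_data, labeled_data):
        tuner, updater = FineTuningSystem(), AutomaticUpdatingSystem()
        specific_model = legacy_model                                         # (a) model to update
        updated = tuner.fine_tune(specific_model, video_data, labeled_data)   # (b) fine-tune
        updater.transmit(vehicle_id, updated)                                 # (c) deploy to the vehicle
        return updated

    manage_update("AV-042", {"name": "detector", "version": 3}, video_data=[], labeled_data=[])
    ```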
  • Publication number: 20200250541
    Abstract: A learning method for supporting safer autonomous driving through a fusion of information acquired from images and communications is provided. The method includes steps of: (a) a learning device instructing a first neural network and a second neural network to generate an image-based feature map and a communication-based feature map by using a circumstance image and circumstance communication information; (b) the learning device instructing a third neural network to apply a third neural network operation to the image-based feature map and the communication-based feature map to generate an integrated feature map; (c) the learning device instructing a fourth neural network to apply a fourth neural network operation to the integrated feature map to generate estimated surrounding motion information; and (d) the learning device instructing a first loss layer to train the parameters of the first to the fourth neural networks. (A minimal fusion sketch follows this entry.)
    Type: Application
    Filed: January 9, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
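    A PyTorch sketch of steps (a)-(d) with tiny stand-in networks and assumed input shapes; none of the sizes come from the patent.
    ```python
    import torch
    import torch.nn as nn

    # Assumed shapes only: a 3x64x64 circumstance image and a 10-d vector of
    # circumstance communication information.
    first_nn = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                             nn.AdaptiveAvgPool2d(1), nn.Flatten())   # image-based feature map
    second_nn = nn.Sequential(nn.Linear(10, 8), nn.ReLU())            # communication-based feature map
    third_nn = nn.Sequential(nn.Linear(16, 16), nn.ReLU())            # integrated feature map
    fourth_nn = nn.Linear(16, 4)                                      # estimated surrounding motion
    loss_layer = nn.MSELoss()

    image = torch.randn(2, 3, 64, 64)
    comm_info = torch.randn(2, 10)
    gt_motion = torch.randn(2, 4)

    integrated = third_nn(torch.cat([first_nn(image), second_nn(comm_info)], dim=1))
    estimated_motion = fourth_nn(integrated)
    loss = loss_layer(estimated_motion, gt_motion)
    loss.backward()          # (d): the loss trains the parameters of all four networks
    ```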
  • Publication number: 20200250974
    Abstract: A method for detecting emergency vehicles in real time and managing subject vehicles, so that the emergency vehicles can drive without interference from the subject vehicles, by referring to detected information on the emergency vehicles is provided. The method includes steps of: (a) a management server generating metadata on a specific emergency vehicle by referring to emergency circumstance information; (b) the management server generating a circumstance scenario vector by referring to the emergency circumstance information and the metadata, comparing the circumstance scenario vector with reference scenario vectors to find a specific scenario vector whose similarity score with the circumstance scenario vector is larger than a threshold, and acquiring an emergency reaction command by referring to the specific scenario vector; and (c) the management server transmitting the emergency reaction command to each of the subject vehicles. (A minimal scenario-matching sketch follows this entry.)
    Type: Application
    Filed: January 10, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, Sukhoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho
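    A sketch of step (b), assuming hand-made scenario vectors, cosine similarity, and a fixed command table; all three are illustrative assumptions.
    ```python
    import numpy as np

    # Hypothetical scenario encoding: each vector packs, e.g., emergency type,
    # speed and lane occupancy into a fixed-length vector.
    reference_scenarios = {
        "pull_over_right": np.array([1.0, 0.2, 0.9]),
        "clear_left_lane": np.array([0.1, 0.9, 0.3]),
    }
    reaction_commands = {
        "pull_over_right": "reduce speed, move to the rightmost lane",
        "clear_left_lane": "vacate the left lane immediately",
    }

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def emergency_reaction(circumstance_vector: np.ndarray, threshold: float = 0.8):
        scores = {k: cosine(circumstance_vector, v) for k, v in reference_scenarios.items()}
        best = max(scores, key=scores.get)
        return reaction_commands[best] if scores[best] > threshold else None

    print(emergency_reaction(np.array([0.9, 0.3, 0.8])))   # -> pull_over_right command
    ```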
  • Publication number: 20200250442
    Abstract: A method for achieving better performance in autonomous driving while saving computing power, by using confidence scores that represent the credibility of an object detection and are generated in parallel with the object detection process, is provided. The method includes steps of: (a) a computing device acquiring at least one circumstance image of the surroundings of a subject vehicle through at least one panorama view sensor installed on the subject vehicle; (b) the computing device instructing a Convolutional Neural Network (CNN) to apply at least one CNN operation to the circumstance image, to thereby generate initial object information and initial confidence information on the circumstance image; and (c) the computing device generating final object information on the circumstance image by referring to the initial object information and the initial confidence information, with the support of an RL agent.
    Type: Application
    Filed: January 10, 2020
    Publication date: August 6, 2020
    Inventors: Kye-Hyeon Kim, Yongjoong Kim, Hak-Kyoung Kim, Woonhyun Nam, SukHoon Boo, Myungchul Sung, Dongsoo Shin, Donghun Yeo, Wooju Ryu, Myeong-Chun Lee, Hyungsoo Lee, Taewoong Jang, Kyungjoong Jeong, Hongmo Je, Hojin Cho