Patents by Inventor Choong Sang Cho

Choong Sang Cho has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240062522
    Abstract: There is provided a self-directed visual intelligence system. The system according to an embodiment prepares the data necessary for training a visual intelligence model when a change in the visual context of the real world is recognized, configures a visual intelligence model and its training data based on the changed visual context, trains the configured model with the training data, and evaluates the performance of the trained model. Accordingly, the visual intelligence model corrects and improves itself in a self-directed way in response to changes in the visual context of the real world, growing and advancing on its own, so that its performance is maintained at its best under any change in real-world context.
    Type: Application
    Filed: October 19, 2022
    Publication date: February 22, 2024
    Applicant: Korea Electronics Technology Institute
    Inventors: Choong Sang CHO, Ju Hong YOON, Young Han LEE
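The self-directed cycle described above (recognize a context change, rebuild training data, retrain, evaluate) can be sketched as follows. This is a toy illustration only: the drift check, the statistics-only "model", and the threshold are invented stand-ins, not the patented system.

```python
import numpy as np

def context_changed(model_stats, new_batch, threshold=0.5):
    # Hypothetical drift check: compare the mean of incoming data against
    # the statistics the current model was trained on.
    return abs(float(np.mean(new_batch)) - model_stats["mean"]) > threshold

def self_directed_update(model_stats, new_batch):
    """One cycle of the self-directed loop: recognize a visual-context
    change, rebuild training data from the changed context, retrain,
    and keep the existing model when the context is stable."""
    if not context_changed(model_stats, new_batch):
        return model_stats, False                  # context stable: keep model
    retrained = {"mean": float(np.mean(new_batch))}  # "retraining" stand-in
    return retrained, True

stats = {"mean": 0.0}
stats, retrained = self_directed_update(stats, np.full(10, 2.0))
```

After the update the "model" tracks the new context, so a second batch drawn from it no longer triggers retraining.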
  • Patent number: 11605167
    Abstract: An image region segmentation method and system using self-spatial adaptive normalization are provided. The image region segmentation system includes: an encoder configured to encode an image for segmenting a region by using a plurality of encoding blocks; and a decoder configured to decode the image encoded by the encoder and to generate a region-segmented image by using a plurality of decoding blocks, wherein each of the encoding blocks passes an input image through a convolution layer, performs spatial adaptive normalization, and then reduces the image and delivers it to the next encoding block. Accordingly, spatial characteristics of the image are considered in both the encoding and decoding processes, so that region segmentation can be performed exactly with respect to various images.
    Type: Grant
    Filed: December 18, 2020
    Date of Patent: March 14, 2023
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Choong Sang Cho, Charles Hyok Song, Young Han Lee
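A minimal numeric sketch of one encoding block as described above: convolve (replaced by identity here for brevity), apply a spatially adaptive normalization whose per-pixel scale and shift come from a guide map, then reduce the feature map for the next block. The guide-derived gamma/beta maps are illustrative; in the patented block they would be learned.

```python
import numpy as np

def spatial_adaptive_norm(feat, guide, eps=1e-5):
    """Normalize each channel of a (C, H, W) feature map, then re-modulate
    it with per-pixel scale/shift maps derived from a guide image so that
    spatial characteristics survive the normalization."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    var = feat.var(axis=(1, 2), keepdims=True)
    normed = (feat - mu) / np.sqrt(var + eps)
    gamma, beta = 1.0 + guide, -guide      # illustrative spatial maps
    return normed * gamma + beta

def encoding_block(feat, guide):
    # Convolution step omitted (identity); normalize, then reduce 2x
    # and deliver the smaller map to the next encoding block.
    out = spatial_adaptive_norm(feat, guide)
    return out[:, ::2, ::2]
```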
  • Publication number: 20230004866
    Abstract: There are provided an AI model learning method and system based on self-learning for focusing on specific areas. According to an embodiment, a network learning system includes: a detection module configured to detect a specific area from unlabeled images and to generate unlabeled area images; a configuration module configured to configure self-learning data by using the generated area images; and a learning module configured to cause a backbone network to perform self-learning by using the configured self-learning data. Accordingly, an AI model may be trained based on self-learning for focusing on a desired specific area according to a desired purpose, and high-performance analysis specialized for various purposes and for the characteristics of various types of specific areas is possible.
    Type: Application
    Filed: June 29, 2022
    Publication date: January 5, 2023
    Applicant: Korea Electronics Technology Institute
    Inventors: Choong Sang CHO, Young Han LEE
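The detection and configuration modules above can be mimicked in a few lines. Picking the highest-variance patch as the "specific area" is a made-up detector used purely to make the data flow concrete.

```python
import numpy as np

def detect_area(image, size=2):
    # Stand-in detection module: the size x size patch with the highest
    # variance plays the role of the detected "specific area".
    best, best_var = (0, 0), -1.0
    h, w = image.shape
    for y in range(h - size + 1):
        for x in range(w - size + 1):
            v = image[y:y + size, x:x + size].var()
            if v > best_var:
                best, best_var = (y, x), v
    y, x = best
    return image[y:y + size, x:x + size]

def build_self_learning_data(images):
    # Configuration module: unlabeled area crops become the self-learning
    # data that a backbone network would then train on.
    return [detect_area(img) for img in images]
```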
  • Publication number: 20220383104
    Abstract: There are provided a method and a system for image segmentation utilizing a GAN architecture. A method for training an image segmentation network according to an embodiment includes: inputting an image to a first network, which is trained to output a region segmentation result for an input image, and generating a region segmentation result; inputting the generated region segmentation result and a ground truth (GT) to a second network, which is trained to discriminate whether an inputted region segmentation result was generated by the first network or is a GT, and acquiring a discrimination result; and training the first network and the second network by using the discrimination result. Accordingly, the region segmentation performance of a semantic segmentation network on various images can be enhanced, and even a very small image region can be segmented exactly.
    Type: Application
    Filed: October 27, 2021
    Publication date: December 1, 2022
    Inventors: Choong Sang CHO, Young Han LEE
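One evaluation of the adversarial signal described above, with the second network shrunk to a logistic scorer. The weights and toy masks are hypothetical; real training would backpropagate these losses through both networks.

```python
import numpy as np

def discriminate(mask, w):
    # Tiny stand-in for the second network: a logistic score in (0, 1),
    # where 1 means "looks like a ground truth".
    return 1.0 / (1.0 + np.exp(-float(w @ mask)))

def adversarial_losses(gen_mask, gt_mask, w):
    """Feed the first network's segmentation and the GT to the second
    network, then form the discriminator loss (tell them apart) and the
    generator loss (fool the discriminator)."""
    d_fake = discriminate(gen_mask, w)
    d_real = discriminate(gt_mask, w)
    d_loss = -np.log(d_real + 1e-9) - np.log(1.0 - d_fake + 1e-9)
    g_loss = -np.log(d_fake + 1e-9)
    return d_loss, g_loss
```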
  • Publication number: 20220028084
    Abstract: An image region segmentation method and system using self-spatial adaptive normalization are provided. The image region segmentation system includes: an encoder configured to encode an image for segmenting a region by using a plurality of encoding blocks; and a decoder configured to decode the image encoded by the encoder and to generate a region-segmented image by using a plurality of decoding blocks, wherein each of the encoding blocks passes an input image through a convolution layer, performs spatial adaptive normalization, and then reduces the image and delivers it to the next encoding block. Accordingly, spatial characteristics of the image are considered in both the encoding and decoding processes, so that region segmentation can be performed exactly with respect to various images.
    Type: Application
    Filed: December 18, 2020
    Publication date: January 27, 2022
    Inventors: Choong Sang CHO, Charles Hyok SONG, Young Han LEE
  • Patent number: 10978049
    Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an “inputted text” and an “audio spectrum feature vector for generating an audio signal”, the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: April 13, 2021
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Young Han Lee, Jong Yeol Yang, Choong Sang Cho, Hye Dong Jung
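The segmentation step can be made concrete with a toy attention matrix: assign each audio frame to the text token that attends to it most, and cut wherever the assignment changes. The matrix below is fabricated; a real one would come from the synthesis model's attention mechanism.

```python
import numpy as np

def segment_from_attention(attn):
    """attn: (num_text_tokens, num_audio_frames) attention weights.
    Each frame is owned by its argmax token; segment boundaries fall
    where the owning token changes."""
    owner = attn.argmax(axis=0)
    cuts = [0] + [f for f in range(1, len(owner)) if owner[f] != owner[f - 1]]
    return list(zip(cuts, cuts[1:] + [len(owner)]))
```

With attn = [[0.9, 0.8, 0.1, 0.1], [0.1, 0.2, 0.9, 0.9]], the four frames split into spans (0, 2) and (2, 4), one per token.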
  • Patent number: 10923106
    Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
    Type: Grant
    Filed: January 24, 2019
    Date of Patent: February 16, 2021
    Assignee: Korea Electronics Technology Institute
    Inventors: Jong Yeol Yang, Young Han Lee, Choong Sang Cho, Hye Dong Jung
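A single generation step from the abstract, sketched with softmax attention: correlate S_(t-1) with each time-series video characteristic, attend over the frames, and fuse the attended context with the phoneme characteristic p. The fusion rule and shapes are invented for illustration.

```python
import numpy as np

def next_spectrum(s_prev, video_feats, phoneme_feat):
    """video_feats: (T, d) per-frame characteristics x; s_prev: (d,) is
    the audio spectrum characteristic S_(t-1); returns a toy S_t."""
    scores = video_feats @ s_prev                 # correlation per frame
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                      # softmax attention
    context = weights @ video_feats               # attended video context
    return 0.5 * (context + phoneme_feat)         # illustrative fusion
```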
  • Patent number: 10846568
    Abstract: Deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed, without being influenced by an environment and a condition even while using less training data.
    Type: Grant
    Filed: October 1, 2018
    Date of Patent: November 24, 2020
    Assignee: Korea Electronics Technology Institute
    Inventors: Sang Ki Ko, Choong Sang Cho, Hye Dong Jung, Young Han Lee
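The normalization step above is what buys the robustness; a minimal sketch, assuming contours are already extracted as (x, y) point arrays:

```python
import numpy as np

def normalize_contour(points):
    """Normalize one contour into training data: translate to the
    centroid and scale to unit maximum radius, so the representation
    does not depend on where or how large the gesture was."""
    pts = points - points.mean(axis=0)
    r = np.abs(pts).max()
    return pts / r if r > 0 else pts
```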
  • Patent number: 10819301
    Abstract: The present disclosure relates to a method and system for controlling loudness of an audio based on signal analysis and deep learning. The method includes analyzing an audio characteristic in a frame level based on signal analysis, analyzing the audio characteristic in the frame level based on learning, and controlling loudness of the audio in the frame level, by combining the analysis results. Accordingly, reliability of audio characteristic analysis can be enhanced and audio loudness can be optimally controlled.
    Type: Grant
    Filed: October 18, 2018
    Date of Patent: October 27, 2020
    Assignee: KOREA ELECTRONICS TECHNOLOGY INSTITUTE
    Inventors: Choong Sang Cho, Young Han Lee
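Frame-level loudness control can be sketched as a per-frame RMS gain. The target level and frame size are assumed parameters, and the learned half of the combined scheme is omitted here.

```python
import numpy as np

def frame_gains(audio, frame=4, target_rms=0.1):
    """Signal-analysis half of the scheme: measure loudness (RMS) per
    frame and compute the gain that drives each frame toward a target
    level; the learned analysis would refine these per-frame decisions."""
    gains = []
    for i in range(len(audio) // frame):
        f = audio[i * frame:(i + 1) * frame]
        rms = float(np.sqrt(np.mean(f ** 2)))
        gains.append(target_rms / rms if rms > 0 else 1.0)
    return gains
```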
  • Patent number: 10726289
    Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption well indicating a feature of a given image is automatically generated, such that an image can be more exactly explained and a difference from other images can be clearly distinguished.
    Type: Grant
    Filed: July 24, 2018
    Date of Patent: July 28, 2020
    Assignee: Korea Electronics Technology Institute
    Inventors: Bo Eun Kim, Choong Sang Cho, Hye Dong Jung, Young Han Lee
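The "distinctive attribute" extraction can be imitated with a TF-IDF-style score: words frequent in this image's example captions but rare across the corpus. The scoring rule and the toy corpus counts are illustrative, not the patented formulation.

```python
from collections import Counter

def distinctive_attributes(captions, corpus_counts, k=2):
    """Score each word in the example captions by local frequency divided
    by corpus frequency, and keep the top-k as distinctive attributes."""
    tf = Counter(w for c in captions for w in c.split())
    scored = {w: n / (1 + corpus_counts.get(w, 0)) for w, n in tf.items()}
    return sorted(scored, key=scored.get, reverse=True)[:k]
```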
  • Publication number: 20200043473
    Abstract: An audio segmentation method based on an attention mechanism is provided. The audio segmentation method according to an embodiment obtains a mapping relationship between an “inputted text” and an “audio spectrum feature vector for generating an audio signal”, the audio spectrum feature vector being automatically synthesized by using the inputted text, and segments an inputted audio signal by using the mapping relationship. Accordingly, high quality can be guaranteed and the effort, time, and cost can be noticeably reduced through audio segmentation utilizing the attention mechanism.
    Type: Application
    Filed: January 24, 2019
    Publication date: February 6, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Young Han LEE, Jong Yeol YANG, Choong Sang CHO, Hye Dong JUNG
  • Publication number: 20200043465
    Abstract: An audio synthesis method adapted to video characteristics is provided. The audio synthesis method according to an embodiment includes: extracting characteristics x from a video in a time-series way; extracting characteristics p of phonemes from a text; and generating an audio spectrum characteristic S_t, used to generate an audio to be synthesized with a video at a time t, based on correlations between an audio spectrum characteristic S_(t-1), which is used to generate an audio to be synthesized with a video at a time t-1, and the characteristics x. Accordingly, an audio can be synthesized according to video characteristics, and speech according to a video can be easily added.
    Type: Application
    Filed: January 24, 2019
    Publication date: February 6, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Jong Yeol YANG, Young Han LEE, Choong Sang CHO, Hye Dong JUNG
  • Publication number: 20200005086
    Abstract: Deep learning-based automatic gesture recognition method and system are provided. The training method according to an embodiment includes: extracting a plurality of contours from an input image; generating training data by normalizing pieces of contour information forming each of the contours; and training an AI model for gesture recognition by using the generated training data. Accordingly, robust and high-performance automatic gesture recognition can be performed, without being influenced by an environment and a condition even while using less training data.
    Type: Application
    Filed: October 1, 2018
    Publication date: January 2, 2020
    Applicant: Korea Electronics Technology Institute
    Inventors: Sang Ki KO, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
  • Patent number: 10489675
    Abstract: Provided herein are a robust region segmentation method and a system using the same, the method including: receiving a setting of a region in an input image; calculating representative values for each of an internal portion and an external portion of the region; and calculating a cost by substituting the representative values and a pixel value of the image into a cost function, and updating the region based on the calculated cost, wherein the cost function includes a term reflecting the difference between the updated pixel value and the original pixel value, thereby enabling accurate region segmentation even in ambiguous images that are complicated and where the division of regions is unclear.
    Type: Grant
    Filed: September 8, 2016
    Date of Patent: November 26, 2019
    Assignee: Korea Electronics Technology Institute
    Inventors: Choong Sang Cho, Hwa Seon Shin, Young Han Lee, Joo Hyung Kang
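The cost function described above can be written out for a 1-D toy image: a data term measuring the distance of each pixel to its region's representative value (here the region mean), plus the fidelity term penalizing drift between updated and original pixel values. The weight lam is an assumed parameter.

```python
import numpy as np

def region_cost(pixels, mask, updated, lam=0.5):
    """mask marks the internal portion of the region; representative
    values are region means, and the extra term ties the updated pixel
    values back to the originals."""
    inside, outside = pixels[mask], pixels[~mask]
    c_in = inside.mean() if inside.size else 0.0
    c_out = outside.mean() if outside.size else 0.0
    data = ((inside - c_in) ** 2).sum() + ((outside - c_out) ** 2).sum()
    fidelity = lam * ((updated - pixels) ** 2).sum()
    return data + fidelity
```

A mask that matches the image's two intensity groups costs less than a mask that mixes them, which is what drives the region update.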
  • Publication number: 20190286931
    Abstract: A method and a system for automatic image caption generation are provided. The automatic image caption generation method according to an embodiment of the present disclosure includes: extracting a distinctive attribute from example captions of a learning image; training a first neural network for predicting a distinctive attribute from an image, by using a pair of the extracted distinctive attribute and the learning image; inferring a distinctive attribute by inputting the learning image to the trained first neural network; and training a second neural network for generating a caption of an image by using a pair of the inferred distinctive attribute and the learning image. Accordingly, a caption well indicating a feature of a given image is automatically generated, such that an image can be more exactly explained and a difference from other images can be clearly distinguished.
    Type: Application
    Filed: July 24, 2018
    Publication date: September 19, 2019
    Applicant: Korea Electronics Technology Institute
    Inventors: Bo Eun KIM, Choong Sang CHO, Hye Dong JUNG, Young Han LEE
  • Publication number: 20190131948
    Abstract: The present disclosure relates to a method and system for controlling loudness of an audio based on signal analysis and deep learning. The method includes analyzing an audio characteristic in a frame level based on signal analysis, analyzing the audio characteristic in the frame level based on learning, and controlling loudness of the audio in the frame level, by combining the analysis results. Accordingly, reliability of audio characteristic analysis can be enhanced and audio loudness can be optimally controlled.
    Type: Application
    Filed: October 18, 2018
    Publication date: May 2, 2019
    Inventors: Choong Sang CHO, Young Han LEE
  • Patent number: 10176583
    Abstract: Provided herein is a topological derivatives (TDs)-based image segmentation method and system using heterogeneous image features data. The image segmentation method according to an embodiment of the present disclosure involves calculating TDs having each of the heterogeneous image features data as an input value, and segmenting an image into a plurality of regions using the calculated TDs. Accordingly, performance may be improved, and robustness against noise may be further improved.
    Type: Grant
    Filed: November 3, 2016
    Date of Patent: January 8, 2019
    Assignee: Korea Electronics Technology Institute
    Inventors: Choong Sang Cho, Hwa Seon Shin, Young Han Lee, Joo Hyung Kang
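A toy version of combining TDs across heterogeneous feature channels: per channel, the change in a two-region cost if a pixel flipped into the inside region, summed so that all features vote together. The per-channel formula is a standard piecewise-constant stand-in, not the patent's exact derivative.

```python
import numpy as np

def combined_td(features, c_in, c_out):
    """features: (num_channels, H, W) heterogeneous image features;
    c_in/c_out: per-channel representative values. Negative entries in
    the returned map favor assigning that pixel to the inside region."""
    td = np.zeros(features.shape[1:])
    for f, ci, co in zip(features, c_in, c_out):
        td += (f - ci) ** 2 - (f - co) ** 2
    return td
```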
  • Publication number: 20170124720
    Abstract: Provided herein is a topological derivatives (TDs)-based image segmentation method and system using heterogeneous image features data. The image segmentation method according to an embodiment of the present disclosure involves calculating TDs having each of the heterogeneous image features data as an input value, and segmenting an image into a plurality of regions using the calculated TDs. Accordingly, performance may be improved, and robustness against noise may be further improved.
    Type: Application
    Filed: November 3, 2016
    Publication date: May 4, 2017
    Inventors: Choong Sang Cho, Hwa Seon Shin, Young Han Lee, Joo Hyung Kang
  • Publication number: 20170076462
    Abstract: Provided herein are a robust region segmentation method and a system using the same, the method including: receiving a setting of a region in an input image; calculating representative values for each of an internal portion and an external portion of the region; and calculating a cost by substituting the representative values and a pixel value of the image into a cost function, and updating the region based on the calculated cost, wherein the cost function includes a term reflecting the difference between the updated pixel value and the original pixel value, thereby enabling accurate region segmentation even in ambiguous images that are complicated and where the division of regions is unclear.
    Type: Application
    Filed: September 8, 2016
    Publication date: March 16, 2017
    Inventors: Choong Sang Cho, Hwa Seon Shin, Young Han Lee, Joo Hyung Kang
  • Patent number: 9570110
    Abstract: The present invention relates to a multimedia-data-processing method which enables a media graph to always be constructed in a "connection without negotiation" manner, on the basis of an already known media graph construction. It thus provides a media framework in which the procedures for connecting components are minimized, thereby improving system performance and satisfying the requests of OS platform builders and media application developers. The multimedia-data-processing method of the present invention is performed by a multimedia framework and comprises: (a) a step of receiving, from a media application, the component information and component connection information required for constructing the media graph; and (b) a step of ensuring that the media graph is constructed from the content received in step (a) and waits for a rendering command, thereby eliminating the need for the media application to check the construction of the media graph.
    Type: Grant
    Filed: December 24, 2010
    Date of Patent: February 14, 2017
    Assignee: Korea Electronics Technology Institute
    Inventors: Byeong Ho Choi, Yong Hwan Kim, Hwa Seon Shin, Choong Sang Cho, Min Seok Park
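"Connection without negotiation" can be sketched as a framework that wires components exactly as the application declares and then simply waits for a rendering command, with no capability handshake in between. The class and its traversal are invented for illustration.

```python
class MediaGraph:
    """Build the media graph directly from the component information and
    connection information supplied by the media application; no
    negotiation between components takes place."""
    def __init__(self, components, connections):
        self.links = {name: [] for name in components}
        for src, dst in connections:      # wire exactly as declared
            self.links[src].append(dst)
        self.ready = True                 # constructed; awaiting render

    def render(self):
        # Walk the graph from its first component on the render command.
        order = []
        def visit(node):
            order.append(node)
            for nxt in self.links[node]:
                visit(nxt)
        visit(next(iter(self.links)))
        return order
```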