Patents by Inventor Bohyung Han
Bohyung Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20230282216
Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
Type: Application
Filed: May 15, 2023
Publication date: September 7, 2023
Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
Inventors: Seungju HAN, Jaejoon HAN, Minsu KO, Chang Kyu CHOI, Bohyung HAN
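The abstract describes migrating an enrolled feature from an old extractor's feature space into a new one so that authentication can compare features in the same space. The following is only a minimal sketch of that flow; the module sizes, the two-layer transformation network, and the cosine-similarity threshold are assumptions for illustration, not the patented implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

old_dim, new_dim = 128, 256
first_extractor = nn.Linear(1024, old_dim)        # feature extractor used at enrollment time
second_extractor = nn.Linear(1024, new_dim)       # updated extractor used at authentication time
transform_model = nn.Sequential(                  # maps old-space features into the new space
    nn.Linear(old_dim, new_dim), nn.ReLU(), nn.Linear(new_dim, new_dim))

enroll_input = torch.randn(1, 1024)               # data captured when the user enrolled
query_input = torch.randn(1, 1024)                # data presented for authentication

with torch.no_grad():
    first_enrolled = first_extractor(enroll_input)      # stored first enrolled feature
    second_enrolled = transform_model(first_enrolled)   # transformed (second) enrolled feature
    input_feature = second_extractor(query_input)       # input feature from the new extractor
    similarity = F.cosine_similarity(second_enrolled, input_feature)

accepted = similarity.item() > 0.7                # threshold chosen only for illustration
print("authentication result:", accepted)
```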
-
Publication number: 20230215169
Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
Type: Application
Filed: March 10, 2023
Publication date: July 6, 2023
Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
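A small sketch of the pieces named in the abstract: attention-weighted temporal pooling, a loss made of a classification term plus a sparsity term, and a temporal class activation map thresholded into proposals. Feature dimensions, the attention head, and the loss weight are assumptions; this is not the patented model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

T, D, num_classes = 50, 1024, 20           # frames, per-frame feature dim, action classes
features = torch.randn(1, T, D)            # per-frame features of one untrimmed video

attention = nn.Sequential(nn.Linear(D, 256), nn.ReLU(), nn.Linear(256, 1))
classifier = nn.Linear(D, num_classes)

scores = torch.sigmoid(attention(features))                    # (1, T, 1) keyframe attention
pooled = (scores * features).sum(dim=1) / scores.sum(dim=1)    # adaptive temporal pooling
video_logits = classifier(pooled)

video_label = torch.tensor([3])                                # video-level class label only
cls_loss = F.cross_entropy(video_logits, video_label)          # classification error
sparsity_loss = scores.abs().mean()                            # encourages a sparse keyframe set
loss = cls_loss + 1e-4 * sparsity_loss                         # weight is illustrative

# Temporal class activation map for the predicted class, thresholded into a proposal mask.
tcam = classifier(features)[0, :, video_logits.argmax()]       # (T,)
proposal_mask = (tcam > tcam.mean()).tolist()
```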
-
Patent number: 11688403
Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
Type: Grant
Filed: March 6, 2020
Date of Patent: June 27, 2023
Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
Inventors: Seungju Han, Jaejoon Han, Minsu Ko, Chang Kyu Choi, Bohyung Han
-
Publication number: 20230153961
Abstract: An image deblurring method and apparatus are provided. The image deblurring method includes generating a primary feature representation on a first blur point in an input image and offset information on similar points of the first blur point by encoding the input image by implementing an encoding model, generating secondary feature representations on the similar points by applying the offset information to the primary feature representation, and generating an output image, based on the secondary feature representations and the offset information, by implementing an implicit function model.
Type: Application
Filed: October 26, 2022
Publication date: May 18, 2023
Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
Inventors: Huijin LEE, Dong-Hwan JANG, Bohyung HAN, Nahyup KANG
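A rough sketch of the encoder/offset/implicit-function pipeline the abstract outlines, for a single blur point: the encoder predicts features plus offsets to similar points, features are gathered at those offsets, and a small MLP plays the role of the implicit function. All shapes, the rounding-based gathering, and the channel split are assumptions made for illustration.

```python
import torch
import torch.nn as nn

C, H, W, K = 16, 32, 32, 4                       # feature channels, image size, number of similar points

encoder = nn.Conv2d(3, C + 2 * K, kernel_size=3, padding=1)    # primary features + K (dy, dx) offsets
implicit_fn = nn.Sequential(nn.Linear(K * C + 2 * K, 64), nn.ReLU(), nn.Linear(64, 3))

blurry = torch.randn(1, 3, H, W)
out = encoder(blurry)
primary = out[:, :C]                              # primary feature representation
offsets = out[:, C:].reshape(1, K, 2, H, W)       # offset information on similar points

# For one blur point (y, x): gather secondary features at the offset locations.
y, x = 10, 12
secondary, flat_offsets = [], []
for k in range(K):
    dy, dx = offsets[0, k, :, y, x]
    yy = int((y + dy).round().clamp(0, H - 1))
    xx = int((x + dx).round().clamp(0, W - 1))
    secondary.append(primary[0, :, yy, xx])       # secondary feature representation
    flat_offsets.append(offsets[0, k, :, y, x])

query = torch.cat(secondary + flat_offsets)       # secondary features + offset information
rgb = implicit_fn(query)                          # output value at this point
```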
-
Publication number: 20230132630
Abstract: A method includes: generating, based on a student network result of an implemented student network provided with an input, a sample corresponding to a distribution of an energy-based model based on the student network result and a teacher network result of an implemented teacher network provided with the input; training model parameters of the energy-based model to decrease a value of the energy-based model, based on the teacher network result and the student network result; and training the implemented student network to increase the value of the energy-based model, based on the sample and the student network result.
Type: Application
Filed: July 12, 2022
Publication date: May 4, 2023
Applicants: SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
Inventors: Eunhee KANG, Minsoo KANG, Bohyung HAN, Sehwan KI, HYONG EUK LEE
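A very rough sketch of the alternating scheme as the abstract states it: an energy model scores (teacher output, student output) pairs, a sample is drawn with respect to the energy model, and the energy model and student are updated so as to push the energy value in opposite directions. The architectures, the gradient-descent sampler, and the exact loss signs here are all assumptions, not the patented training procedure.

```python
import torch
import torch.nn as nn

num_classes = 10
teacher = nn.Linear(32, num_classes)
student = nn.Linear(32, num_classes)
energy = nn.Sequential(nn.Linear(2 * num_classes, 64), nn.ReLU(), nn.Linear(64, 1))

opt_energy = torch.optim.Adam(energy.parameters(), lr=1e-4)
opt_student = torch.optim.Adam(student.parameters(), lr=1e-4)

x = torch.randn(8, 32)                                     # a training batch
with torch.no_grad():
    t_out = teacher(x)
s_out = student(x)

# Draw a sample "corresponding to the distribution of the energy-based model":
# here, a few gradient steps on the energy starting from the student output.
sample = s_out.detach().clone().requires_grad_(True)
for _ in range(5):
    e = energy(torch.cat([t_out, sample], dim=1)).sum()
    grad, = torch.autograd.grad(e, sample)
    sample = (sample - 0.1 * grad).detach().requires_grad_(True)
sample = sample.detach()

# Energy model update: decrease its value on the observed (teacher, student) pair.
e_real = energy(torch.cat([t_out, s_out.detach()], dim=1)).mean()
e_fake = energy(torch.cat([t_out, sample], dim=1)).mean()
opt_energy.zero_grad()
(e_real - e_fake).backward()
opt_energy.step()

# Student update: increase the energy value, based on the sample and its own output.
e_student = energy(torch.cat([t_out, student(x)], dim=1)).mean()
opt_student.zero_grad()
(-e_student).backward()
opt_student.step()
```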
-
Patent number: 11640710
Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
Type: Grant
Filed: November 5, 2018
Date of Patent: May 2, 2023
Assignee: GOOGLE LLC
Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
-
Publication number: 20230119509
Abstract: A method includes generating, by a neural network having a plurality of layers, final feature vectors of one or more frames of a plurality of frames of an input video, while sequentially processing each of the plurality of frames, and generating image information corresponding to the input video based on the generated final feature vectors. For each of the plurality of frames, the generating of the final feature vectors comprises determining whether to proceed with or stop a corresponding sequenced operation through layers of the neural network for generating a final feature vector of a corresponding frame, and generating the final feature vector of the corresponding frame in response to the corresponding sequenced operation completing a final stage of the corresponding sequenced operation.
Type: Application
Filed: July 15, 2022
Publication date: April 20, 2023
Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
Inventors: Bohyung Han, Jonghyeon Seon, Jaedong Hwang
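A compact sketch of per-frame early stopping through a layered network, as described above: each frame either completes all stages and contributes a final feature vector, or stops early and is skipped. The gating rule, layer sizes, and the mean-pooled video descriptor are illustrative assumptions.

```python
import torch
import torch.nn as nn

layers = nn.ModuleList([nn.Linear(64, 64) for _ in range(4)])   # stages of the neural network
gates = nn.ModuleList([nn.Linear(64, 1) for _ in range(4)])     # "proceed or stop" decision per stage

frames = torch.randn(10, 64)                                     # features of 10 video frames
final_feature_vectors = []

for frame in frames:
    h = frame
    completed = True
    for layer, gate in zip(layers, gates):
        h = torch.relu(layer(h))
        if torch.sigmoid(gate(h)) < 0.5:      # stop the sequenced operation for this frame
            completed = False
            break
    if completed:                             # final stage reached: keep the final feature vector
        final_feature_vectors.append(h)

# Image information for the video is derived only from frames that completed all stages.
if final_feature_vectors:
    video_descriptor = torch.stack(final_feature_vectors).mean(dim=0)
```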
-
Publication number: 20210365790
Abstract: A processor-implemented neural network data processing method includes: receiving input data; determining a portion of channels to be used for calculation among channels of a neural network based on importance values respectively corresponding to the channels of the neural network; and performing a calculation based on the input data using the determined portion of channels of the neural network.
Type: Application
Filed: January 14, 2021
Publication date: November 25, 2021
Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
Inventors: Changyong SON, Minsoo KANG, Bohyung HAN
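A minimal sketch of the idea of selecting a portion of channels by importance and computing with only those channels. The weight-magnitude importance score and the 50% keep ratio are assumptions, not the scoring rule of the filing.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

conv = nn.Conv2d(3, 32, kernel_size=3, padding=1)
importance = conv.weight.detach().abs().mean(dim=(1, 2, 3))   # one importance value per output channel
keep = torch.topk(importance, k=16).indices                   # portion of channels to be used

x = torch.randn(1, 3, 64, 64)                                 # input data
reduced = F.conv2d(x, conv.weight[keep], conv.bias[keep], padding=1)   # calculation with selected channels only
print(reduced.shape)                                          # torch.Size([1, 16, 64, 64])
```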
-
Patent number: 11055854
Abstract: The invention disclosed here relates to a method and system for real-time target tracking based on deep learning. The method for real-time target tracking according to an embodiment is performed by a computing device including a processor, and includes pre-training a target tracking model for detecting a tracking target from an image using pre-inputted training data, receiving an image with a plurality of frames, and detecting the tracking target for each of the plurality of frames by applying the target tracking model to the image. According to an embodiment, there is a remarkable reduction in the time required to detect the target from the image, thereby allowing real-time visual tracking, and improvement of the hierarchical structure and introduction of a new loss function make it possible to achieve more precise localization and distinguish different targets of similar shapes.
Type: Grant
Filed: August 22, 2019
Date of Patent: July 6, 2021
Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
Inventors: Bohyung Han, Ilchae Jung, Hyeonseob Nam
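A toy sketch of tracking-by-detection with a pretrained scoring network, in the spirit of the abstract: candidate boxes are sampled around the previous target location in each frame and the best-scoring candidate is kept. The network, the Gaussian candidate sampler, and the box format are all assumptions for illustration.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained model that scores whether a crop contains the target.
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1))

def track_frame(frame, prev_box, num_candidates=32):
    """Sample candidate boxes around the previous location and keep the best-scoring one."""
    x, y, w, h = prev_box
    H, W = frame.shape[1:]
    best_score, best_box = float("-inf"), prev_box
    for _ in range(num_candidates):
        cx = int(max(0, min(W - w, x + torch.randn(1).item() * 5)))
        cy = int(max(0, min(H - h, y + torch.randn(1).item() * 5)))
        crop = frame[:, cy:cy + h, cx:cx + w].unsqueeze(0)
        score = backbone(crop).item()
        if score > best_score:
            best_score, best_box = score, (cx, cy, w, h)
    return best_box

video = [torch.randn(3, 120, 160) for _ in range(5)]   # frames of the input image sequence
box = (60, 40, 32, 32)                                  # (x, y, w, h) of the target in the first frame
for frame in video:
    box = track_frame(frame, box)                       # detect the tracking target per frame
```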
-
Publication number: 20200293886
Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
Type: Application
Filed: March 6, 2020
Publication date: September 17, 2020
Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
Inventors: Seungju HAN, Jaejoon HAN, Minsu KO, Chang Kyu CHOI, Bohyung HAN
-
Publication number: 20200272823
Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
Type: Application
Filed: November 5, 2018
Publication date: August 27, 2020
Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
-
Patent number: 10650042
Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
Type: Grant
Filed: September 3, 2019
Date of Patent: May 12, 2020
Assignee: Google LLC
Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
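A sketch of the indexing and nearest-neighbour matching steps only; random vectors stand in for the learned keypoint descriptors, the flat brute-force index stands in for a large-scale approximate index, and the geometric verification step is omitted. None of these choices come from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
db_descriptors, db_image_ids = [], []
for image_id in range(100):                              # descriptors extracted from database images
    descs = rng.normal(size=(50, 40)).astype(np.float32)
    db_descriptors.append(descs)
    db_image_ids.extend([image_id] * len(descs))
index = np.concatenate(db_descriptors)                   # index of keypoint descriptors
db_image_ids = np.array(db_image_ids)

query = rng.normal(size=(50, 40)).astype(np.float32)     # descriptors of the query image
dists = ((query[:, None, :] - index[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
nearest = dists.argmin(axis=1)                           # nearest database descriptor per query descriptor

votes = np.bincount(db_image_ids[nearest], minlength=100)
top_matches = votes.argsort()[::-1][:5]                  # candidate images for geometric verification
```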
-
Publication number: 20200065976
Abstract: The invention disclosed here relates to a method and system for real-time target tracking based on deep learning. The method for real-time target tracking according to an embodiment is performed by a computing device including a processor, and includes pre-training a target tracking model for detecting a tracking target from an image using pre-inputted training data, receiving an image with a plurality of frames, and detecting the tracking target for each of the plurality of frames by applying the target tracking model to the image. According to an embodiment, there is a remarkable reduction in the time required to detect the target from the image, thereby allowing real-time visual tracking, and improvement of the hierarchical structure and introduction of a new loss function make it possible to achieve more precise localization and distinguish different targets of similar shapes.
Type: Application
Filed: August 22, 2019
Publication date: February 27, 2020
Inventors: Bohyung HAN, Ilchae JUNG, Hyeonseob NAM
-
Publication number: 20200004777
Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
Type: Application
Filed: September 3, 2019
Publication date: January 2, 2020
Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
-
Patent number: 10402448
Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
Type: Grant
Filed: June 28, 2017
Date of Patent: September 3, 2019
Assignee: Google LLC
Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
-
Publication number: 20190005069
Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
Type: Application
Filed: June 28, 2017
Publication date: January 3, 2019
Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
-
Patent number: 9940539
Abstract: An object recognition apparatus and method thereof are disclosed. An exemplary apparatus may determine an image feature vector of a first image by applying a convolution network to the first image. The convolution network may extract features from image learning sets that include the first image and a sample segmentation map of the first image. The exemplary apparatus may determine a segmentation map of the first image by applying a deconvolution network to the determined image feature vector. The exemplary apparatus may determine a weight of the convolution network and a weight of the deconvolution network based on the sample segmentation map and the first segmentation map. The exemplary apparatus may determine a second segmentation map of a second image through the convolution network using the determined weight of the convolution network and through the deconvolution network using the determined weight of the deconvolution network.
Type: Grant
Filed: May 5, 2016
Date of Patent: April 10, 2018
Assignees: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
Inventors: Bohyung Han, Seunghoon Hong, Hyeonwoo Noh
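A minimal convolution/deconvolution pair of the kind the abstract describes: a convolution network maps the image to features, a deconvolution network maps the features to a segmentation map, and the weights of both are updated from a sample segmentation map. Layer sizes and the single training step are illustrative assumptions.

```python
import torch
import torch.nn as nn

num_classes = 21
conv_net = nn.Sequential(                          # convolution network -> image feature vector
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
)
deconv_net = nn.Sequential(                        # deconvolution network -> segmentation map
    nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
    nn.ConvTranspose2d(16, num_classes, 2, stride=2),
)

image = torch.randn(1, 3, 64, 64)                            # first image
label_map = torch.randint(0, num_classes, (1, 64, 64))       # sample segmentation map

features = conv_net(image)
logits = deconv_net(features)                                # predicted segmentation map (1, C, 64, 64)
loss = nn.functional.cross_entropy(logits, label_map)        # drives the weights of both networks
loss.backward()
```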
-
Patent number: 9922262
Abstract: A method by which a tracking apparatus tracks a target object includes: acquiring a first tree structure indicating a tracking processing order of frames, each frame including a tracking area in which the target object is located; acquiring a plurality of frame groups, each frame group consisting of two frames, and acquiring distance evaluation values of the respective frame groups; acquiring a second tree structure based on the first tree structure and the distance evaluation values; and tracking the target object based on the acquired second tree structure, wherein the distance evaluation value is determined based on at least one of locations of tracking areas included in two frames belonging to the frame group and pixel values included in the tracking areas.
Type: Grant
Filed: December 9, 2015
Date of Patent: March 20, 2018
Assignees: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
Inventors: Taegyu Lim, Bohyung Han, Seunghoon Hong
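A sketch of one way to realize the ingredients named in the abstract: a pairwise distance evaluation value built from tracking-area locations and pixel statistics, and a tree over frames built from those values to define a tracking processing order. The distance definition and the Prim-style spanning-tree construction are assumptions, not the patented procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
num_frames = 6
centers = rng.uniform(0, 100, size=(num_frames, 2))   # tracking-area locations per frame
patches = rng.normal(size=(num_frames, 16))            # pixel statistics of the tracking areas

def distance(i, j):
    # Distance evaluation value of a frame group (two frames): location term + appearance term.
    return np.linalg.norm(centers[i] - centers[j]) + np.linalg.norm(patches[i] - patches[j])

# Build a spanning tree over frames from the distance values; it plays the role of the second tree structure.
in_tree, edges = {0}, []
while len(in_tree) < num_frames:
    i, j = min(((a, b) for a in in_tree for b in range(num_frames) if b not in in_tree),
               key=lambda p: distance(*p))
    edges.append((i, j))
    in_tree.add(j)

print("tracking processing order (parent -> child):", edges)
```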
-
Publication number: 20160328630
Abstract: An object recognition apparatus and method thereof are disclosed. An exemplary apparatus may determine an image feature vector of a first image by applying a convolution network to the first image. The convolution network may extract features from image learning sets that include the first image and a sample segmentation map of the first image. The exemplary apparatus may determine a segmentation map of the first image by applying a deconvolution network to the determined image feature vector. The exemplary apparatus may determine a weight of the convolution network and a weight of the deconvolution network based on the sample segmentation map and the first segmentation map. The exemplary apparatus may determine a second segmentation map of a second image through the convolution network using the determined weight of the convolution network and through the deconvolution network using the determined weight of the deconvolution network.
Type: Application
Filed: May 5, 2016
Publication date: November 10, 2016
Applicants: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
Inventors: Bohyung HAN, Seunghoon HONG, Hyeonwoo NOH
-
Publication number: 20160171301
Abstract: A method by which a tracking apparatus tracks a target object includes: acquiring a first tree structure indicating a tracking processing order of frames, each frame including a tracking area in which the target object is located; acquiring a plurality of frame groups, each frame group consisting of two frames, and acquiring distance evaluation values of the respective frame groups; acquiring a second tree structure based on the first tree structure and the distance evaluation values; and tracking the target object based on the acquired second tree structure, wherein the distance evaluation value is determined based on at least one of locations of tracking areas included in two frames belonging to the frame group and pixel values included in the tracking areas.
Type: Application
Filed: December 9, 2015
Publication date: June 16, 2016
Applicants: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
Inventors: Taegyu LIM, Bohyung HAN, Seunghoon HONG