Patents by Inventor Bohyung Han

Bohyung Han has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240104749
    Abstract: Devices, systems, methods, and instructions for deep-learning-based object tracking are provided, including pre-training an object tracking model based on pre-input learning data; receiving a target image, at least one area of which contains an image corresponding to an object to be tracked, and a search image, at least one area of which contains an image corresponding to that object; and obtaining tracking-area information for the area corresponding to the object in the search image by applying the object tracking model, wherein the area corresponding to the object is defined by a Gaussian distribution model, and the tracking-area information includes parameter values of a plurality of Gaussian distribution parameters corresponding to that area.
    Type: Application
    Filed: September 20, 2023
    Publication date: March 28, 2024
    Inventors: Bohyung HAN, Minji KIM
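As a rough illustration of the Gaussian-parameterized tracking area described in the abstract above, the sketch below represents a bounding region by the mean and standard deviation of a Gaussian and compares two such regions. The function names and the Bhattacharyya similarity measure are illustrative assumptions, not taken from the filing.

```python
import math

def box_to_gaussian(x, y, w, h):
    """Represent an axis-aligned box as 2-D Gaussian parameters:
    mean at the box centre, std set to half the extent per axis."""
    return (x + w / 2.0, y + h / 2.0, w / 2.0, h / 2.0)

def gaussian_to_box(mx, my, sx, sy):
    """Inverse mapping: recover the box from the Gaussian parameters."""
    return (mx - sx, my - sy, 2.0 * sx, 2.0 * sy)

def bhattacharyya_coeff_1d(m1, s1, m2, s2):
    """Similarity between two 1-D Gaussians (1.0 = identical),
    usable per-axis to compare a predicted and a reference region."""
    return math.sqrt(2.0 * s1 * s2 / (s1 ** 2 + s2 ** 2)) * \
        math.exp(-((m1 - m2) ** 2) / (4.0 * (s1 ** 2 + s2 ** 2)))
```

Parameterizing the region this way lets a tracker regress a small set of continuous values (mean and spread) instead of discrete box corners.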
  • Patent number: 11881022
    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
    Type: Grant
    Filed: March 10, 2023
    Date of Patent: January 23, 2024
    Assignee: GOOGLE LLC
    Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
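The sparsity-regularized loss and the extraction of temporal proposals from class activations described in the abstract above can be sketched in miniature. The squared-error classification term, the L1 sparsity term, and the fixed threshold are simplifying assumptions, not the patented formulation.

```python
def extract_intervals(frame_scores, threshold=0.5):
    """Group consecutive frames whose class activation exceeds the
    threshold into temporal proposals [(start, end), ...]."""
    intervals, start = [], None
    for t, s in enumerate(frame_scores):
        if s >= threshold and start is None:
            start = t                      # interval opens
        elif s < threshold and start is not None:
            intervals.append((start, t - 1))
            start = None                   # interval closes
    if start is not None:
        intervals.append((start, len(frame_scores) - 1))
    return intervals

def sparse_pooling_loss(attention, video_score, label, sparsity_weight=0.1):
    """Toy loss: classification error plus L1 sparsity of the
    per-frame attention weights, mirroring the two terms in the abstract."""
    return (video_score - label) ** 2 + \
        sparsity_weight * sum(abs(a) for a in attention)
```

The sparsity term pushes the attention toward a few keyframes, so only video-level labels are needed during training.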
  • Publication number: 20230282216
    Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
    Type: Application
    Filed: May 15, 2023
    Publication date: September 7, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
    Inventors: Seungju HAN, Jaejoon HAN, Minsu KO, Chang Kyu CHOI, Bohyung HAN
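A toy version of the cross-extractor authentication flow in the abstract above: the learned transformation model is stood in for by a plain linear map, and the cosine-similarity threshold is an arbitrary assumption.

```python
import math

def transform(feature, matrix):
    """Map an enrolled feature into the new extractor's space via a
    (hypothetical) learned linear transformation."""
    return [sum(m * f for m, f in zip(row, feature)) for row in matrix]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def authenticate(enrolled_old, transform_matrix, input_feature, threshold=0.8):
    """Transform the first enrolled feature into the second extractor's
    space, then compare it against the input feature."""
    enrolled_new = transform(enrolled_old, transform_matrix)
    return cosine(enrolled_new, input_feature) >= threshold
```

The point of the transformation is that users need not re-enroll when the feature extractor is upgraded: old enrollments are mapped into the new feature space instead.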
  • Publication number: 20230215169
    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
    Type: Application
    Filed: March 10, 2023
    Publication date: July 6, 2023
    Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
  • Patent number: 11688403
    Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
    Type: Grant
    Filed: March 6, 2020
    Date of Patent: June 27, 2023
    Assignees: Samsung Electronics Co., Ltd., SNU R&DB FOUNDATION
    Inventors: Seungju Han, Jaejoon Han, Minsu Ko, Chang Kyu Choi, Bohyung Han
  • Publication number: 20230153961
    Abstract: An image deblurring method and apparatus are provided. The image deblurring method includes generating a primary feature representation on a first blur point in an input image and offset information on similar points of the first blur point by encoding the input image by implementing an encoding model, generating secondary feature representations on the similar points by applying the offset information to the primary feature representation, and generating an output image, based on the secondary feature representations and the offset information, by implementing an implicit function model.
    Type: Application
    Filed: October 26, 2022
    Publication date: May 18, 2023
    Applicants: Samsung Electronics Co., Ltd., Seoul National University R&DB Foundation
    Inventors: Huijin LEE, Dong-Hwan JANG, Bohyung HAN, Nahyup KANG
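The offset-then-decode idea in the abstract above can be caricatured in one dimension: predicted offsets select "similar points" around an anchor, and their features are fused. The clamping, the 1-D feature map, and the distance-weighted average standing in for the implicit function model are all assumptions for illustration.

```python
def gather_secondary_features(feature_map, anchor, offsets):
    """Collect features of 'similar points' by applying predicted
    offsets to the anchor position (clamped to the map bounds)."""
    n = len(feature_map)
    return [feature_map[min(max(anchor + o, 0), n - 1)] for o in offsets]

def decode(secondary_feats, offsets):
    """Stand-in for the implicit function model: an offset-weighted
    average in which nearer points contribute more."""
    weights = [1.0 / (1 + abs(o)) for o in offsets]
    total = sum(weights)
    return sum(w * f for w, f in zip(weights, secondary_feats)) / total
```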
  • Publication number: 20230132630
    Abstract: A method includes: generating, based on a student network result of an implemented student network provided with an input, a sample corresponding to a distribution of an energy-based model based on the student network result and a teacher network result of an implemented teacher network provided with the input; training model parameters of the energy-based model to decrease a value of the energy-based model, based on the teacher network result and the student network result; and training the implemented student network to increase the value of the energy-based model, based on the sample and the student network result.
    Type: Application
    Filed: July 12, 2022
    Publication date: May 4, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., Seoul National University R&DB Foundation
    Inventors: Eunhee KANG, Minsoo KANG, Bohyung HAN, Sehwan KI, HYONG EUK LEE
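The opposing update directions the abstract above describes (energy-model parameters trained to decrease the energy value, the student trained to increase it) can be shown with a scalar toy energy. The quadratic form and the learning rate are invented for illustration and are not the patented model.

```python
def energy(w, teacher_out, student_out):
    """Toy scalar energy: weighted squared gap between teacher and
    student outputs, a stand-in for the learned energy-based model."""
    return w * (teacher_out - student_out) ** 2

def train_step(w, student_out, teacher_out, lr=0.1):
    """One alternating update mirroring the two training phases
    in the abstract."""
    # EBM phase: move w in the direction that DECREASES the energy value
    grad_w = (teacher_out - student_out) ** 2
    w = w - lr * grad_w
    # student phase: move its output to INCREASE the energy value
    grad_s = -2.0 * w * (teacher_out - student_out)
    student_out = student_out + lr * grad_s
    return w, student_out
```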
  • Patent number: 11640710
    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
    Type: Grant
    Filed: November 5, 2018
    Date of Patent: May 2, 2023
    Assignee: GOOGLE LLC
    Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
  • Publication number: 20230119509
    Abstract: A method includes generating, by a neural network having a plurality of layers, final feature vectors of one or more frames of a plurality of frames of an input video while sequentially processing each of the plurality of frames, and generating image information corresponding to the input video based on the generated final feature vectors. For each of the plurality of frames, the generating of the final feature vectors comprises determining whether to proceed with or stop a corresponding sequenced operation through layers of the neural network for generating a final feature vector of the corresponding frame, and generating the final feature vector of the corresponding frame in response to the corresponding sequenced operation completing its final stage.
    Type: Application
    Filed: July 15, 2022
    Publication date: April 20, 2023
    Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
    Inventors: Bohyung Han, Jonghyeon Seon, Jaedong Hwang
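The per-frame early-exit decision described in the abstract above reduces, in miniature, to a gated forward pass: after each layer, a gate decides whether the sequenced operation proceeds or stops. The confidence function and threshold below are placeholders for whatever learned criterion the application actually uses.

```python
def forward_with_early_exit(frame, layers, confidence_fn, keep_threshold):
    """Run a frame through the layer stack; after each layer a gate
    decides whether to continue. Returns the final feature only when
    every layer completes, else None (the frame is skipped)."""
    x = frame
    for layer in layers:
        x = layer(x)
        if confidence_fn(x) < keep_threshold:
            return None          # stop early: frame deemed uninformative
    return x
```

Skipping uninformative frames early saves the cost of the remaining layers, which is the efficiency argument behind processing only "one or more frames of a plurality of frames".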
  • Publication number: 20210365790
    Abstract: A processor-implemented neural network data processing method includes: receiving input data; determining a portion of channels to be used for calculation among channels of a neural network based on importance values respectively corresponding to the channels of the neural network; and performing a calculation based on the input data using the determined portion of channels of the neural network.
    Type: Application
    Filed: January 14, 2021
    Publication date: November 25, 2021
    Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
    Inventors: Changyong SON, Minsoo KANG, Bohyung HAN
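A minimal sketch of the importance-based channel selection described in the abstract above; representing a "calculation" as a weighted sum over only the kept channels is a deliberate oversimplification.

```python
def select_channels(importances, budget):
    """Indices of the most important channels, keeping `budget` of them."""
    order = sorted(range(len(importances)),
                   key=lambda i: importances[i], reverse=True)
    return sorted(order[:budget])

def pruned_conv_sum(inputs, weights, kept):
    """Toy 'calculation': only the kept channels contribute, so the
    cost scales with the budget rather than the full channel count."""
    return sum(inputs[c] * weights[c] for c in kept)
```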
  • Patent number: 11055854
    Abstract: The invention disclosed here relates to a method and system for real-time target tracking based on deep learning. The method for real-time target tracking according to an embodiment is performed by a computing device including a processor, and includes pre-training a target tracking model for detecting a tracking target from an image using pre-input training data, receiving an image with a plurality of frames, and detecting the tracking target in each of the plurality of frames by applying the target tracking model to the image. According to an embodiment, the time required to detect the target in the image is greatly reduced, enabling real-time visual tracking, while improvements to the hierarchical structure and the introduction of a new loss function enable more precise localization and make it possible to distinguish different targets of similar shapes.
    Type: Grant
    Filed: August 22, 2019
    Date of Patent: July 6, 2021
    Assignee: SEOUL NATIONAL UNIVERSITY R&DB FOUNDATION
    Inventors: Bohyung Han, Ilchae Jung, Hyeonseob Nam
  • Publication number: 20200293886
    Abstract: An authentication method and apparatus using a transformation model are disclosed. The authentication method includes generating, at a first apparatus, a first enrolled feature based on a first feature extractor, obtaining a second enrolled feature to which the first enrolled feature is transformed, determining an input feature by extracting a feature from input data with a second feature extractor different from the first feature extractor, and performing an authentication based on the second enrolled feature and the input feature.
    Type: Application
    Filed: March 6, 2020
    Publication date: September 17, 2020
    Applicants: SAMSUNG ELECTRONICS CO., LTD., SNU R&DB FOUNDATION
    Inventors: Seungju HAN, Jaejoon HAN, Minsu KO, Chang Kyu CHOI, Bohyung HAN
  • Publication number: 20200272823
    Abstract: Systems and methods for a weakly supervised action localization model are provided. Example models according to example aspects of the present disclosure can localize and/or classify actions in untrimmed videos using machine-learned models, such as convolutional neural networks. The example models can predict temporal intervals of human actions given video-level class labels with no requirement of temporal localization information of actions. The example models can recognize actions and identify a sparse set of keyframes associated with actions through adaptive temporal pooling of video frames, wherein the loss function of the model is composed of a classification error and a sparsity of frame selection. Following action recognition with sparse keyframe attention, temporal proposals for action can be extracted using temporal class activation mappings, and final time intervals can be estimated corresponding to target actions.
    Type: Application
    Filed: November 5, 2018
    Publication date: August 27, 2020
    Inventors: Ting Liu, Gautam Prasad, Phuc Xuan Nguyen, Bohyung Han
  • Patent number: 10650042
    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
    Type: Grant
    Filed: September 3, 2019
    Date of Patent: May 12, 2020
    Assignee: Google LLC
    Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
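The retrieval pipeline in the abstract above (index keypoint descriptors, match a query's descriptors by nearest-neighbor search, then verify) can be sketched with brute-force search. The voting step and `min_matches` cutoff below stand in for the geometric verification stage and are not taken from the patent.

```python
def nearest(desc, index):
    """Brute-force nearest neighbour over (image_id, descriptor) pairs;
    a real system would use an approximate index at this scale."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(index, key=lambda item: dist(desc, item[1]))[0]

def retrieve(query_descs, index, min_matches=2):
    """Match each query descriptor against the index and keep database
    images that accumulate enough matching descriptors."""
    votes = {}
    for d in query_descs:
        img = nearest(d, index)
        votes[img] = votes.get(img, 0) + 1
    # images with enough matches survive; geometric verification
    # would follow here in the full pipeline
    return [img for img, v in votes.items() if v >= min_matches]
```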
  • Publication number: 20200065976
    Abstract: The invention disclosed here relates to a method and system for real-time target tracking based on deep learning. The method for real-time target tracking according to an embodiment is performed by a computing device including a processor, and includes pre-training a target tracking model for detecting a tracking target from an image using pre-input training data, receiving an image with a plurality of frames, and detecting the tracking target in each of the plurality of frames by applying the target tracking model to the image. According to an embodiment, the time required to detect the target in the image is greatly reduced, enabling real-time visual tracking, while improvements to the hierarchical structure and the introduction of a new loss function enable more precise localization and make it possible to distinguish different targets of similar shapes.
    Type: Application
    Filed: August 22, 2019
    Publication date: February 27, 2020
    Inventors: Bohyung HAN, Ilchae JUNG, Hyeonseob NAM
  • Publication number: 20200004777
    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
    Type: Application
    Filed: September 3, 2019
    Publication date: January 2, 2020
    Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
  • Patent number: 10402448
    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
    Type: Grant
    Filed: June 28, 2017
    Date of Patent: September 3, 2019
    Assignee: Google LLC
    Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
  • Publication number: 20190005069
    Abstract: Systems and methods of the present disclosure can use machine-learned image descriptor models for image retrieval applications and other applications. A trained image descriptor model can be used to analyze a plurality of database images to create a large-scale index of keypoint descriptors associated with the database images. An image retrieval application can provide a query image as input to the trained image descriptor model, resulting in receipt of a set of keypoint descriptors associated with the query image. Keypoint descriptors associated with the query image can be analyzed relative to the index to determine matching descriptors (e.g., by implementing a nearest neighbor search). Matching descriptors can then be geometrically verified and used to identify one or more matching images from the plurality of database images to retrieve and provide as output (e.g., by providing for display) within the image retrieval application.
    Type: Application
    Filed: June 28, 2017
    Publication date: January 3, 2019
    Inventors: Andre Filgueiras de Araujo, Jiwoong Sim, Bohyung Han, Hyeonwoo Noh
  • Patent number: 9940539
    Abstract: An object recognition apparatus and method thereof are disclosed. An exemplary apparatus may determine an image feature vector of a first image by applying a convolution network to the first image. The convolution network may extract features from image learning sets that include the first image and a sample segmentation map of the first image. The exemplary apparatus may determine a segmentation map of the first image by applying a deconvolution network to the determined image feature vector. The exemplary apparatus may determine a weight of the convolution network and a weight of the deconvolution network based on the sample segmentation map and the first segmentation map. The exemplary apparatus may determine a second segmentation map of a second image through the convolution network using the determined weight of the convolution network and through the deconvolution network using the determined weight of the deconvolution network.
    Type: Grant
    Filed: May 5, 2016
    Date of Patent: April 10, 2018
    Assignees: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
    Inventors: Bohyung Han, Seunghoon Hong, Hyeonwoo Noh
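A one-dimensional caricature of the convolution/deconvolution encoder-decoder in the abstract above, with max-pooling standing in for the convolution network and nearest-neighbour unpooling for the deconvolution network; both stand-ins are assumptions, not the patented architecture.

```python
def downsample(signal, factor=2):
    """Stand-in for the convolution network: max-pool the input."""
    return [max(signal[i:i + factor]) for i in range(0, len(signal), factor)]

def upsample(feat, factor=2):
    """Stand-in for the deconvolution network: nearest-neighbour
    unpooling back to the input resolution."""
    out = []
    for v in feat:
        out.extend([v] * factor)
    return out

def segment(signal, threshold=0.5):
    """Encoder-decoder round trip producing a per-position label map."""
    feat = downsample(signal)    # encode to a coarse feature vector
    scores = upsample(feat)      # decode to a dense score map
    return [1 if s >= threshold else 0 for s in scores]
```

The essential point mirrored here is that the deconvolution path restores the spatial resolution lost by the encoder, so a dense segmentation map can be predicted rather than a single image-level label.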
  • Patent number: 9922262
    Abstract: A method by which a tracking apparatus tracks a target object includes: acquiring a first tree structure indicating a tracking processing order of frames, each frame including a tracking area in which the target object is located; acquiring a plurality of frame groups, each frame group consisting of two frames, and acquiring distance evaluation values of the respective frame groups; acquiring a second tree structure based on the first tree structure and the distance evaluation values; and tracking the target object based on the acquired second tree structure, wherein the distance evaluation value is determined based on at least one of locations of tracking areas included in two frames belonging to the frame group and pixel values included in the tracking areas.
    Type: Grant
    Filed: December 9, 2015
    Date of Patent: March 20, 2018
    Assignees: SAMSUNG ELECTRONICS CO., LTD., POSTECH ACADEMY-INDUSTRY FOUNDATION
    Inventors: Taegyu Lim, Bohyung Han, Seunghoon Hong
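The "second tree structure" built from pairwise distance evaluation values in the abstract above is reminiscent of a minimum spanning tree over frames; the sketch below uses Kruskal's algorithm as an assumed stand-in and is not the patented construction.

```python
def minimum_spanning_edges(n_frames, pair_costs):
    """Kruskal's algorithm over frame pairs: low-cost pairs (similar
    tracking areas) are linked first, and the resulting tree plays the
    role of the reordered tracking structure."""
    parent = list(range(n_frames))

    def find(a):
        # union-find root lookup with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a

    edges = []
    for (u, v), cost in sorted(pair_costs.items(), key=lambda kv: kv[1]):
        ru, rv = find(u), find(v)
        if ru != rv:              # joining keeps the structure a tree
            parent[ru] = rv
            edges.append((u, v))
    return edges
```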