Patents by Inventor Heeyeon NAH

Heeyeon NAH has filed for patents to protect the following inventions. This listing includes pending patent applications as well as patents already granted by the United States Patent and Trademark Office (USPTO). Brief illustrative sketches of the federated-learning workflow described in these abstracts appear after the listing.

  • Patent number: 11348329
    Abstract: Provided is a method of recognizing a business card by a terminal through federated learning, including receiving an image of the business card; extracting a feature value from the image including text related to a field of an address book set in the terminal; inputting the feature value into a first common prediction model and determining first text information from an output of the first common prediction model; analyzing a pattern of the first text information and inputting the first text information into the field; caching the first text information and second text information received for error correction of the first text information from a user; and training the first common prediction model using the image, the first text information, and the second text information, whereby each terminal may train and share the first common prediction model.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: May 31, 2022
    Assignee: LG ELECTRONICS INC.
    Inventors: Daesung Kim, Heeyeon Nah, Jaewoong Yun
  • Patent number: 10936904
    Abstract: Provided is a method for recognizing handwritten characters in a terminal through federated learning. In the method, a first common prediction model for recognizing text from handwritten characters input by a user is applied, the handwritten characters are received from the user, feature values are extracted from an image including the handwritten characters, the feature values are input to the first common prediction model, first text information is determined from an output of the first common prediction model, the first text information and second text information received from the user for error correction of the first text information are cached, and the first common prediction model is trained using the image including the handwritten characters, the first text information, and the second text information. In this way, the terminal can determine the text from the handwritten characters input by the user, and can train the first common prediction model through a feedback operation of the user.
    Type: Grant
    Filed: September 9, 2019
    Date of Patent: March 2, 2021
    Assignee: LG ELECTRONICS INC.
    Inventors: Heeyeon Nah, Daesung Kim, Jaewoong Yun
  • Publication number: 20200005071
    Abstract: Provided is a method of recognizing a business card by a terminal through federated learning, including receiving an image of the business card; extracting a feature value from the image including text related to a field of an address book set in the terminal; inputting the feature value into a first common prediction model and determining first text information from an output of the first common prediction model; analyzing a pattern of the first text information and inputting the first text information into the field; caching the first text information and second text information received for error correction of the first text information from a user; and training the first common prediction model using the image, the first text information, and the second text information, whereby each terminal may train and share the first common prediction model.
    Type: Application
    Filed: September 9, 2019
    Publication date: January 2, 2020
    Applicant: LG ELECTRONICS INC.
    Inventors: Daesung KIM, Heeyeon NAH, Jaewoong YUN
  • Publication number: 20200005081
    Abstract: Provided is a method for recognizing handwritten characters in a terminal through federated learning. In the method, a first common prediction model for recognizing text from handwritten characters input by a user is applied, the handwritten characters are received from the user, feature values are extracted from an image including the handwritten characters, the feature values are input to the first common prediction model, first text information is determined from an output of the first common prediction model, the first text information and second text information received from the user for error correction of the first text information are cached, and the first common prediction model is trained using the image including the handwritten characters, the first text information, and the second text information. In this way, the terminal can determine the text from the handwritten characters input by the user, and can train the first common prediction model through a feedback operation of the user.
    Type: Application
    Filed: September 9, 2019
    Publication date: January 2, 2020
    Applicant: LG ELECTRONICS INC.
    Inventors: Heeyeon NAH, Daesung KIM, Jaewoong YUN
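
The abstracts above describe the same on-device workflow: a shared ("first common") prediction model turns an image (a business card or handwritten characters) into first text information, the user's correction is cached as second text information, and the cached pairs are later used to train the terminal's local copy of the model. The following is a minimal Python sketch of that loop; the class and method names (CommonPredictionModel, Terminal, fine_tune, and so on) are illustrative assumptions, not the patented implementation.

# Minimal sketch of the on-device loop described in the abstracts above.
# All names here are hypothetical illustrations, not the patented code.

from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class CommonPredictionModel:
    """Stand-in for the shared OCR / handwriting model (hypothetical)."""
    version: int = 0

    def extract_features(self, image: bytes) -> List[float]:
        # Placeholder feature extraction; a real model would run a neural net here.
        return [float(b) / 255.0 for b in image[:16]]

    def predict_text(self, features: List[float]) -> str:
        # Placeholder inference returning the "first text information".
        return "predicted text"

    def fine_tune(self, samples: List[Tuple[List[float], str]]) -> None:
        # Placeholder local training step on cached (features, corrected text) pairs.
        self.version += 1


@dataclass
class Terminal:
    model: CommonPredictionModel
    correction_cache: List[Tuple[List[float], str]] = field(default_factory=list)

    def recognize(self, image: bytes) -> str:
        # Extract features and determine the first text information.
        features = self.model.extract_features(image)
        return self.model.predict_text(features)

    def accept_correction(self, image: bytes, second_text: str) -> None:
        # Cache the user's correction ("second text information") for later training.
        features = self.model.extract_features(image)
        self.correction_cache.append((features, second_text))

    def train_locally(self) -> CommonPredictionModel:
        # Train the local copy of the common model on the cached corrections.
        if self.correction_cache:
            self.model.fine_tune(self.correction_cache)
            self.correction_cache.clear()
        return self.model  # shared back for aggregation into the common model


if __name__ == "__main__":
    terminal = Terminal(model=CommonPredictionModel())
    card_image = b"\x10\x20\x30" * 16
    first_text = terminal.recognize(card_image)
    terminal.accept_correction(card_image, "Jane Doe, LG Electronics")
    updated = terminal.train_locally()
    print(first_text, updated.version)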
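
The abstracts also state that "each terminal may train and share the first common prediction model." One common way to realize such sharing is federated averaging, in which a server averages the weights reported by participating terminals; the sketch below assumes that aggregation style and a simple per-layer weight layout, neither of which is specified in the abstracts.

# Minimal sketch, assuming a FedAvg-style aggregation of locally trained weights.
# The weight layout and function name below are illustrative assumptions.

from typing import Dict, List

Weights = Dict[str, List[float]]


def average_weights(updates: List[Weights]) -> Weights:
    """Average per-parameter weights reported by the participating terminals."""
    if not updates:
        raise ValueError("no terminal updates to aggregate")
    averaged: Weights = {}
    for name in updates[0]:
        columns = zip(*(update[name] for update in updates))
        averaged[name] = [sum(column) / len(updates) for column in columns]
    return averaged


if __name__ == "__main__":
    # Two terminals report locally fine-tuned weights for the same two layers.
    terminal_a = {"encoder": [0.2, 0.4], "decoder": [1.0, 0.0]}
    terminal_b = {"encoder": [0.4, 0.6], "decoder": [0.0, 1.0]}
    new_common_model = average_weights([terminal_a, terminal_b])
    print(new_common_model)  # {'encoder': [0.3, 0.5], 'decoder': [0.5, 0.5]}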