Patents by Inventor Seohyung LEE

Seohyung LEE has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11093734
    Abstract: A method and apparatus with emotion recognition acquires a plurality of pieces of data corresponding to a plurality of inputs for each modality and corresponding to a plurality of modalities; determines a dynamics representation vector corresponding to each of the plurality of modalities based on a plurality of features for each modality extracted from the plurality of pieces of data; determines a fused representation vector based on the plurality of dynamics representation vectors corresponding to the plurality of modalities; and recognizes an emotion of a user based on the fused representation vector.
    Type: Grant
    Filed: November 7, 2018
    Date of Patent: August 17, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Youngsung Kim, Youngjun Kwak, Byung In Yoo, Seohyung Lee
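    The multimodal pipeline this abstract outlines can be sketched in a few lines. Everything below is an illustrative assumption, not the patented method: the learned dynamics representation is approximated by mean frame-to-frame feature differences, fusion by concatenation, and recognition by a linear scorer with made-up weights and labels.

    ```python
    import numpy as np

    def dynamics_vector(frames):
        """Summarize one modality's feature sequence as the mean
        frame-to-frame difference (a stand-in for the learned
        dynamics representation vector in the abstract)."""
        frames = np.asarray(frames, dtype=float)
        return (frames[1:] - frames[:-1]).mean(axis=0)

    def fuse(vectors):
        """Fuse per-modality dynamics vectors by concatenation."""
        return np.concatenate(vectors)

    def recognize(fused, weights, labels):
        """Pick the emotion whose (hypothetical) weight row scores highest."""
        return labels[int(np.argmax(weights @ fused))]
    ```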
  • Publication number: 20210201447
    Abstract: A system includes: an image sensor configured to acquire an image; an image processor configured to generate a quantized image based on the acquired image using a trained quantization filter; and an output interface configured to output the quantized image.
    Type: Application
    Filed: September 14, 2020
    Publication date: July 1, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Sangil JUNG, Dongwook LEE, Jinwoo SON, Changyong SON, Jaehyoung YOO, Seohyung LEE, Changin CHOI, Jaejoon HAN
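    The abstract leaves the trained quantization filter unspecified; as a hedged sketch, a set of thresholds (standing in for the learned filter's parameters) can map each pixel to one of a few discrete levels:

    ```python
    import numpy as np

    def quantize_image(image, thresholds):
        """Quantize pixel values with a set of (nominally learned)
        thresholds: each pixel maps to the number of thresholds it
        exceeds, giving len(thresholds) + 1 discrete levels."""
        image = np.asarray(image, dtype=float)
        levels = np.zeros(image.shape, dtype=np.int64)
        for t in thresholds:
            levels += (image > t).astype(np.int64)
        return levels
    ```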
  • Publication number: 20210192315
    Abstract: A processor-implemented neural network method includes: generating a first output line of an output feature map by performing a convolution operation between a first input line group of an input feature map and weight kernels; generating a first output of an operation block including the convolution operation based on the first output line; and storing the first output in a memory in which the input feature map is stored by overwriting the first output to a memory space of the memory.
    Type: Application
    Filed: October 15, 2020
    Publication date: June 24, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo SON, Changyong SON, Jaehyoung YOO, Seohyung LEE, Sangil JUNG, Changin CHOI
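    The memory-reuse idea in this abstract can be illustrated with a simplified row-wise convolution (elementwise multiply and sum over rows, rather than full sliding-window kernels, which are assumptions for brevity): output row i depends only on input rows i..i+K-1, so once produced it may safely overwrite input row i in the same buffer.

    ```python
    import numpy as np

    def conv_rows_inplace(buf, kernel):
        """Row-wise 'valid' convolution that reuses the input buffer.

        buf    : (H, W) input feature map; its first H-K+1 rows are
                 overwritten with the output feature map.
        kernel : (K, W) per-row weights (a simplified stand-in for the
                 weight kernels in the abstract).
        """
        h, w = buf.shape
        k = kernel.shape[0]
        for i in range(h - k + 1):
            out_row = (buf[i:i + k] * kernel).sum(axis=0)
            buf[i] = out_row  # row i of the input is no longer needed
        return buf[:h - k + 1]
    ```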
  • Publication number: 20210142041
    Abstract: Disclosed is a method and apparatus for face detection using an adaptive threshold. The method includes determining a detection box in an input image, calculating a confidence score indicating whether an object in the detection box corresponds to a face, setting an adaptive threshold based on a size of the detection box, and determining whether the object in the detection box corresponds to a face based on comparing the confidence score to the adaptive threshold.
    Type: Application
    Filed: October 20, 2020
    Publication date: May 13, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Seohyung LEE, Jaehyoung YOO, Jinwoo SON, Changyong SON, Sangil JUNG, Changin CHOI
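    The adaptive-threshold decision rule reduces to a few lines. The specific schedule below (a relaxed threshold for small boxes, a strict one for large boxes, with made-up constants) is a hypothetical example; the patent does not disclose particular values here:

    ```python
    def adaptive_threshold(box_size, base=0.9, small_size=32, relaxed=0.7):
        """Hypothetical schedule: small detection boxes (low-resolution
        faces) get a relaxed confidence threshold, large ones a strict
        one."""
        return relaxed if box_size < small_size else base

    def is_face(confidence, box_size):
        """Compare the detector's confidence score against the
        size-dependent threshold."""
        return confidence >= adaptive_threshold(box_size)
    ```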
  • Publication number: 20210049474
    Abstract: A processor-implemented data processing method and apparatus for a neural network is provided. The data processing method includes generating cumulative data by accumulating results of multiplication operations between at least a portion of input elements in an input plane and at least a portion of weight elements in a weight plane, and generating an output plane corresponding to an output channel among output planes of an output feature map respectively corresponding to output channels based on the generated cumulative data.
    Type: Application
    Filed: April 23, 2020
    Publication date: February 18, 2021
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo SON, Changyong SON, Jaehyoung YOO, Seohyung LEE, Sangil JUNG, Changin CHOI, Jaejoon HAN
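    The accumulation scheme in this abstract can be sketched per output channel. The 1x1 spatial kernel (each weight plane reduced to a scalar) is an assumption to keep the example short:

    ```python
    import numpy as np

    def output_plane(input_planes, weight_planes):
        """Build one output-channel plane by accumulating, over input
        channels, the elementwise products of each input plane with its
        weight (kernel spatial size 1x1 for brevity)."""
        acc = np.zeros_like(np.asarray(input_planes[0], dtype=float))
        for x, w in zip(input_planes, weight_planes):
            acc += np.asarray(x, dtype=float) * w
        return acc
    ```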
  • Patent number: 10885317
    Abstract: A facial expression recognition apparatus and method and a facial expression training apparatus and method are provided. The facial expression recognition apparatus generates a speech map indicating a correlation between a speech and each portion of an object based on a speech model, extracts a facial expression feature associated with a facial expression based on a facial expression model, and recognizes a facial expression of the object based on the speech map and the facial expression feature. The facial expression training apparatus trains the speech model and the facial expression model.
    Type: Grant
    Filed: December 19, 2018
    Date of Patent: January 5, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Byung In Yoo, Youngjun Kwak, Youngsung Kim, Seohyung Lee
  • Publication number: 20200202199
    Abstract: A neural network processing method and apparatus based on nested bit representation is provided. The processing method includes obtaining first weights for a first layer of a source model corresponding to a first layer of a neural network, determining a bit-width for the first layer of the neural network, obtaining second weights for the first layer of the neural network by extracting at least one bit corresponding to the determined bit-width from each of the first weights for the first layer of the source model corresponding to the first layer of the neural network, and processing input data of the first layer of the neural network by executing the first layer of the neural network based on the obtained second weights.
    Type: Application
    Filed: August 12, 2019
    Publication date: June 25, 2020
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Seohyung LEE, Youngjun KWAK, Jinwoo SON, Changyong SON, Sangil JUNG, Chang Kyu CHOI, Jaejoon HAN
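    The nesting idea is that a lower-bit-width weight is a prefix of the full-precision one, so a smaller model can be extracted by keeping the most significant bits. A minimal sketch, assuming unsigned fixed-point weights (the patent covers more general representations):

    ```python
    def extract_bits(weight, total_bits=8, keep_bits=4):
        """Take the top `keep_bits` most significant bits of an unsigned
        `total_bits`-bit source weight: lower bit-width weights are
        prefixes of the full-precision ones."""
        return weight >> (total_bits - keep_bits)
    ```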
  • Publication number: 20200202200
    Abstract: A processor-implemented neural network processing method includes: obtaining a kernel bit-serial block corresponding to first data of a weight kernel of a layer in a neural network; generating a feature map bit-serial block based on second data of one or more input feature maps of the layer; and generating at least a portion of an output feature map by performing a convolution operation of the layer using a bitwise operation between the kernel bit-serial block and the feature map bit-serial block.
    Type: Application
    Filed: August 16, 2019
    Publication date: June 25, 2020
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo SON, Changyong SON, Seohyung LEE, Sangil JUNG, Chang Kyu CHOI, Jaejoon HAN
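    For the fully binarized case, the bitwise convolution core reduces to an XNOR-and-popcount dot product; this is a standard sketch of that special case, not necessarily the exact operation claimed. Vectors over {-1, +1} are packed as integers with bit 1 meaning +1:

    ```python
    def binary_dot(a_bits, b_bits, n):
        """Dot product of two {-1,+1} vectors of length n, each packed
        as an n-bit integer. Under XNOR, matching bits contribute +1 and
        differing bits -1, so dot = 2 * popcount(XNOR) - n."""
        mask = (1 << n) - 1
        xnor = ~(a_bits ^ b_bits) & mask
        return 2 * bin(xnor).count("1") - n
    ```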
  • Publication number: 20200074058
    Abstract: Disclosed is a method and apparatus for training a user terminal. A user terminal may authenticate a user input using an authentication model of the user terminal, generate a gradient to train the authentication model from the user input, accumulate the generated gradient into a set of positive gradients in response to a success in the authentication, and train the authentication model based on the accumulated positive gradients.
    Type: Application
    Filed: July 31, 2019
    Publication date: March 5, 2020
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Jinwoo SON, Changyong SON, Jaejoon HAN, Sangil JUNG, Seohyung LEE
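    The accumulate-then-train loop can be sketched as follows. The plain SGD step, the fixed batch size, and all parameter names are illustrative assumptions, not details from the filing:

    ```python
    import numpy as np

    def update_on_success(weights, grad, grad_buffer, authenticated,
                          lr=0.1, batch=3):
        """Accumulate gradients only from successfully authenticated
        inputs; once `batch` positive gradients are buffered, apply one
        averaged training step and clear the buffer."""
        if authenticated:
            grad_buffer.append(grad)
        if len(grad_buffer) >= batch:
            weights = weights - lr * np.mean(grad_buffer, axis=0)
            grad_buffer.clear()
        return weights
    ```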
  • Publication number: 20190370940
    Abstract: A processor-implemented method of generating feature data includes: receiving an input image; generating, based on a pixel value of the input image, at least one low-bit image having a number of bits per pixel lower than a number of bits per pixel of the input image; and generating, using at least one neural network, feature data corresponding to the input image from the at least one low-bit image.
    Type: Application
    Filed: May 8, 2019
    Publication date: December 5, 2019
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Chang Kyu CHOI, Youngjun KWAK, Seohyung LEE
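    One simple way to generate such a low-bit image is to keep only the most significant bits of each pixel; the abstract leaves the exact mapping open, so this is one possible reduction, not the claimed one:

    ```python
    import numpy as np

    def to_low_bit(image, bits):
        """Reduce an 8-bit image to `bits` per pixel by keeping the
        most significant bits of each pixel value."""
        img = np.asarray(image, dtype=np.uint8)
        return img >> (8 - bits)
    ```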
  • Publication number: 20190347550
    Abstract: Disclosed is a processor-implemented data processing method in a neural network. A data processing apparatus includes at least one processor, and at least one memory configured to store instructions to be executed by the processor and a neural network, wherein the processor is configured to, based on the instructions, input an input activation map into a current layer included in the neural network, output an output activation map by performing a convolution operation between the input activation map and a weight quantized with a first representation bit number of the current layer, and output a quantized activation map by quantizing the output activation map with a second representation bit number based on an activation quantization parameter.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 14, 2019
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Sangil JUNG, Changyong SON, Seohyung LEE, Jinwoo SON, Chang Kyu CHOI
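    The two-stage scheme (weights quantized at a first representation bit number, output activations at a second) can be sketched with uniform quantization and a plain dot product in place of the convolution; the scale parameters here stand in for the activation quantization parameter and are assumptions:

    ```python
    import numpy as np

    def quantize(x, bits, scale):
        """Uniformly quantize non-negative values to a `bits`-bit grid:
        round(x / scale), clipped to [0, 2**bits - 1], returned on the
        original scale."""
        q = np.clip(np.round(np.asarray(x, dtype=float) / scale),
                    0, 2 ** bits - 1)
        return q * scale

    def quantized_layer(act_in, weights, w_bits, a_bits, w_scale, a_scale):
        """Apply weights quantized at a first bit number (here a dot
        product stands in for the convolution), then quantize the output
        activation at a second bit number."""
        w_q = quantize(weights, w_bits, w_scale)
        out = act_in @ w_q
        return quantize(out, a_bits, a_scale)
    ```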
  • Publication number: 20190213400
    Abstract: A method and apparatus with emotion recognition acquires a plurality of pieces of data corresponding to a plurality of inputs for each modality and corresponding to a plurality of modalities; determines a dynamics representation vector corresponding to each of the plurality of modalities based on a plurality of features for each modality extracted from the plurality of pieces of data; determines a fused representation vector based on the plurality of dynamics representation vectors corresponding to the plurality of modalities; and recognizes an emotion of a user based on the fused representation vector.
    Type: Application
    Filed: November 7, 2018
    Publication date: July 11, 2019
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Youngsung KIM, Youngjun KWAK, Byung In YOO, Seohyung LEE
  • Publication number: 20190213399
    Abstract: A facial expression recognition apparatus and method and a facial expression training apparatus and method are provided. The facial expression recognition apparatus generates a speech map indicating a correlation between a speech and each portion of an object based on a speech model, extracts a facial expression feature associated with a facial expression based on a facial expression model, and recognizes a facial expression of the object based on the speech map and the facial expression feature. The facial expression training apparatus trains the speech model and the facial expression model.
    Type: Application
    Filed: December 19, 2018
    Publication date: July 11, 2019
    Applicant: Samsung Electronics Co., Ltd.
    Inventors: Byung In YOO, Youngjun KWAK, Youngsung KIM, Seohyung LEE