Patents by Inventor Chan Su Lee

Chan Su Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11983248
    Abstract: Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
    Type: Grant
    Filed: October 7, 2021
    Date of Patent: May 14, 2024
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Chan-Kyu Park, Do-Hyung Kim, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
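
The two-stage pipeline described in the abstract above (a whole-image classifier, a stacked body-part mask tensor, a masked classifier, and a fusion step) can be illustrated with a minimal sketch. This assumes PyTorch; the layer sizes, the sigmoid part-mask head, and the fusion-by-averaging rule are illustrative assumptions, not details taken from the patent.

```python
# Illustrative sketch only (assumed PyTorch); backbone, layer sizes, and the
# averaging fusion rule are assumptions, not details from the patent.
import torch
import torch.nn as nn


class ClothingAttributeClassifier(nn.Module):
    def __init__(self, num_attributes: int, num_parts: int = 6):
        super().__init__()
        # First classification unit: attributes from the whole input image.
        self.first_classifier = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_attributes),
        )
        # Mask generation unit: one mask layer per principal body-part region,
        # stacked along the channel dimension to form the mask tensor.
        self.mask_generator = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, num_parts, 1), nn.Sigmoid(),
        )
        # Second classification unit: attributes from the part-masked image.
        self.second_classifier = nn.Sequential(
            nn.Conv2d(3 * num_parts, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, num_attributes),
        )

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        first = self.first_classifier(image)                  # (B, A)
        masks = self.mask_generator(image)                    # (B, P, H, W)
        # Apply each part mask to the image and stack the masked copies.
        masked = image.unsqueeze(1) * masks.unsqueeze(2)      # (B, P, 3, H, W)
        masked = masked.flatten(1, 2)                         # (B, P*3, H, W)
        second = self.second_classifier(masked)               # (B, A)
        # Final classification unit: here simply the mean of both results.
        return (first + second) / 2
```
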
  • Patent number: 10068131
    Abstract: An apparatus for recognizing expression using an expression-gesture dictionary, includes a learning image acquisitor to obtain data from a learning expression, perform a normalization based on the data, track a change of a dense motion from a reference frame, and generate expression learning data, an expression-gesture dictionary and expression-gesture dictionary learner to represent and store a numerical value for expression recognition for each expression using a local support map in an image coordinate space for a motion flow with respect to a set of changes of the dense motion, an expression classifier learner to learn an expression classification for each expression based on a weight of data on the expression-gesture dictionary, a recognition image acquisitor to obtain data from a recognition target, and generate recognition data, and an expression recognizer to analyze an expression weight on the recognition data, and recognize an expression by the expression classifier learner.
    Type: Grant
    Filed: January 29, 2014
    Date of Patent: September 4, 2018
    Assignee: INDUSTRY-ACADEMIC COOPERATION FOUNDATION, YEUNGNAM UNIVERSITY
    Inventors: Chan Su Lee, Ja Soon Jang
  • Publication number: 20160342828
    Abstract: An apparatus for recognizing expression using an expression-gesture dictionary, includes a learning image acquisitor to obtain data from a learning expression, perform a normalization based on the data, track a change of a dense motion from a reference frame, and generate expression learning data, an expression-gesture dictionary and expression-gesture dictionary learner to represent and store a numerical value for expression recognition for each expression using a local support map in an image coordinate space for a motion flow with respect to a set of changes of the dense motion, an expression classifier learner to learn an expression classification for each expression based on a weight of data on the expression-gesture dictionary, a recognition image acquisitor to obtain data from a recognition target, and generate recognition data, and an expression recognizer to analyze an expression weight on the recognition data, and recognize an expression by the expression classifier learner.
    Type: Application
    Filed: January 29, 2014
    Publication date: November 24, 2016
    Inventors: Chan Su LEE, Ja Soon JANG
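
The two entries above (the granted patent and the related publication) share one abstract, and its pipeline, dense motion tracked from a reference frame, local support maps in image coordinates, an expression-gesture dictionary, and a classifier learned on dictionary weights, can be sketched roughly as follows. This assumes OpenCV, NumPy, and scikit-learn; the grid pooling, mean-template dictionary, least-squares weights, and logistic-regression classifier are stand-in assumptions, not the patent's exact formulation.

```python
# Illustrative sketch only; grid pooling, mean templates, least-squares weights,
# and logistic regression are assumptions standing in for the patent's local
# support maps, expression-gesture dictionary, and classifier learner.
import numpy as np
import cv2
from sklearn.linear_model import LogisticRegression


def dense_motion_features(reference: np.ndarray, frame: np.ndarray,
                          grid: int = 8) -> np.ndarray:
    """Track dense motion from a reference frame (grayscale images) and pool it
    over a grid of local regions in image coordinates."""
    flow = cv2.calcOpticalFlowFarneback(reference, frame, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = reference.shape
    cells = []
    for i in range(grid):
        for j in range(grid):
            cell = flow[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            cells.append(cell.reshape(-1, 2).mean(axis=0))
    return np.concatenate(cells)          # one motion feature vector per frame


def learn_dictionary(features: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Expression-gesture dictionary: one mean motion template per expression."""
    return np.stack([features[labels == c].mean(axis=0)
                     for c in np.unique(labels)])


def dictionary_weights(dictionary: np.ndarray, feature: np.ndarray) -> np.ndarray:
    """Expression weights: least-squares coefficients of a feature vector
    expressed on the dictionary templates."""
    w, *_ = np.linalg.lstsq(dictionary.T, feature, rcond=None)
    return w


def train_classifier(dictionary, features, labels):
    """Classifier learner: learn an expression classification from the weights."""
    weights = np.stack([dictionary_weights(dictionary, f) for f in features])
    return LogisticRegression(max_iter=1000).fit(weights, labels)
```
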
  • Patent number: 6445387
    Abstract: A virtual space search interface is provided for searching a three-dimensional virtual space image from the visual angle of a virtual user immersed in it. The interface engine maps mouse input to the movement of the virtual user so that it matches the movement of the body's line of sight, displays an icon on the main image being searched through which a three-dimensional motion can be adjusted with the mouse, and lets the user search the three-dimensional virtual space image through that icon by selecting among changes of gaze, changes of horizontal movement and its speed, and two-dimensional movement of the neck. Movement is therefore freer than with a set of separate icons, and because the icon represents the body, the user already knows which manipulation is appropriate and when it can be executed, so the virtual space can be searched easily.
    Type: Grant
    Filed: October 14, 1999
    Date of Patent: September 3, 2002
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Jeong Dan Choi, Chan Su Lee, Jin Seong Choi, Chan Jong Park
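
A rough sketch of the interaction the abstract above describes: mouse drags on a single body-shaped icon select among gaze changes, horizontal movement and its speed, and neck movement. The icon regions, scaling factors, and field names below are assumptions made for illustration, not taken from the patent.

```python
# Illustrative sketch only; icon regions, drag-to-motion scaling, and field
# names are assumptions used to show steering a viewpoint through one icon.
import math
from dataclasses import dataclass


@dataclass
class Viewpoint:
    x: float = 0.0          # position in the virtual space (horizontal plane)
    z: float = 0.0
    heading: float = 0.0    # body heading, degrees
    gaze_yaw: float = 0.0   # neck/gaze direction relative to the body, degrees
    gaze_pitch: float = 0.0
    speed: float = 0.0      # forward movement speed


def drag_on_icon(view: Viewpoint, region: str, dx: float, dy: float) -> Viewpoint:
    """Update the viewpoint from a mouse drag on one region of the body icon."""
    if region == "head":        # two-dimensional neck movement: change the gaze
        view.gaze_yaw += 0.2 * dx
        view.gaze_pitch -= 0.2 * dy
    elif region == "torso":     # horizontal movement and its speed
        view.heading += 0.1 * dx
        view.speed = max(0.0, view.speed - 0.05 * dy)
    return view


def step(view: Viewpoint, dt: float) -> Viewpoint:
    """Advance the virtual user along the current heading at the current speed."""
    view.x += view.speed * dt * math.sin(math.radians(view.heading))
    view.z += view.speed * dt * math.cos(math.radians(view.heading))
    return view
```
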
  • Patent number: 6149435
    Abstract: The present invention relates to a system that is portable and attaches to the body of a trainee so that the trainee can practice navigating a model airplane at an arbitrary location, unlike a system for a large-sized airplane. The system lets the trainee navigate a model airplane virtually by simulating it with a computer, combining a three-dimensional virtual image of the model airplane with the actual image of the training site.
    Type: Grant
    Filed: December 24, 1998
    Date of Patent: November 21, 2000
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Chan Jong Park, Jin Sung Choi, Man Kyu Sung, Ji Hyung Lee, Sang Won Kim, Dong Hyun Kim, Jung Kak Kim, Chan Su Lee
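
A rough sketch of the image combination the abstract above describes, overlaying the three-dimensional virtual image of the model airplane on the actual image of the training site. Plain alpha blending and the camera/renderer/display interfaces are assumptions for illustration; the patent does not specify them here.

```python
# Illustrative sketch only; alpha blending and the camera/renderer/display
# interfaces are assumptions, not details from the patent.
import numpy as np


def composite(site_frame: np.ndarray, airplane_rgba: np.ndarray) -> np.ndarray:
    """Blend a rendered RGBA image of the virtual model airplane onto the
    camera image of the actual training site."""
    alpha = airplane_rgba[..., 3:4].astype(np.float32) / 255.0
    virtual = airplane_rgba[..., :3].astype(np.float32)
    real = site_frame.astype(np.float32)
    return (alpha * virtual + (1.0 - alpha) * real).astype(np.uint8)


def run(camera, renderer, display):
    """Per-frame loop: render the simulated airplane, then overlay it on the
    captured view of the training site (camera, renderer, and display are
    assumed interfaces)."""
    while True:
        site = camera.read()               # actual image of the training site
        airplane = renderer.render_rgba()  # 3-D virtual image of the airplane
        display.show(composite(site, airplane))
```
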