Patents by Inventor Min Su Jang

Min Su Jang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11897412
    Abstract: The present disclosure relates to a passenger airbag. The passenger airbag according to an embodiment of the present disclosure includes a main panel, and side panels arranged on both sides of the main panel, wherein the main panel includes a contact part to be in contact with a head of a passenger when the passenger airbag is deployed, and a folded portion partitioning the contact part such that both sides of the contact part protrude as the contact part is deployed.
    Type: Grant
    Filed: December 20, 2022
    Date of Patent: February 13, 2024
    Assignee: HYUNDAI MOBIS CO., LTD.
    Inventors: Kwang Soo Cho, Min Su Jang
  • Publication number: 20230382344
Abstract: The present disclosure relates to a passenger airbag. The passenger airbag according to an embodiment of the present disclosure includes a main panel, and side panels arranged on both sides of the main panel, wherein the main panel includes a contact part to be in contact with a head of a passenger when the passenger airbag is deployed, and a folded portion partitioning the contact part such that both sides of the contact part protrude as the contact part is deployed toward both sides.
    Type: Application
    Filed: December 20, 2022
    Publication date: November 30, 2023
    Applicant: HYUNDAI MOBIS CO., LTD.
    Inventors: Kwang Soo CHO, Min Su JANG
  • Patent number: 11691291
Abstract: Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, that is, a sequence of next joint positions of the robot, which are estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
    Type: Grant
    Filed: November 27, 2020
    Date of Patent: July 4, 2023
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Woo-Ri Ko, Do-Hyung Kim, Jae-Hong Kim, Young-Woo Yoon, Jae-Yeon Lee, Min-Su Jang
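The final-behavior step described in the abstract above can be sketched roughly as follows. This is an illustrative toy, not the patented implementation: the pose predictor is a stand-in for the pre-trained neural network, and the names (`estimate_next_poses`, `generate_final_behavior`) are assumptions.

```python
# Toy sketch of the behavior-selection flow: a co-speech gesture and a
# model-estimated nonverbal behavior are combined into a final behavior.
from typing import List, Optional

Pose = List[float]  # flattened joint positions

def estimate_next_poses(user_poses: List[Pose], robot_pose: Pose,
                        horizon: int = 3) -> List[Pose]:
    """Stand-in for the pose-estimation network: nudge each robot joint
    halfway toward the user's most recent corresponding joint."""
    sequence = []
    current = list(robot_pose)
    target = user_poses[-1]
    for _ in range(horizon):
        current = [r + 0.5 * (u - r) for r, u in zip(current, target)]
        sequence.append(list(current))
    return sequence

def generate_final_behavior(co_speech: Optional[List[Pose]],
                            nonverbal: List[Pose]) -> List[Pose]:
    """Prefer the co-speech gesture while the robot is speaking;
    otherwise fall back to the estimated nonverbal behavior."""
    return co_speech if co_speech else nonverbal

user_history = [[0.0, 0.0], [1.0, 1.0]]
nonverbal = estimate_next_poses(user_history, [0.0, 0.0])
final = generate_final_behavior(None, nonverbal)
print(final[0])  # → [0.5, 0.5]
```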
  • Publication number: 20230142797
Abstract: Disclosed herein are a method and apparatus for learning a locally-adaptive local device task based on cloud simulation. According to an embodiment of the present disclosure, there is provided a method for learning a locally-adaptive local device task. The method comprises: receiving observation data about a surrounding environment recognized by a local device; performing a domain randomization based on the observation data and a failure type of a task assigned to the local device and relearning a policy network of the assigned task based on the domain randomization; and updating a policy network of the local device for the assigned task by transmitting the relearned policy network to the local device.
    Type: Application
    Filed: September 9, 2022
    Publication date: May 11, 2023
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Tae Woo KIM, Jae Hong KIM, Chan Kyu PARK, Woo Han YUN, Ho Sub YOON, Min Su JANG
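The cloud-side loop in the abstract above can be sketched as below. This is a hedged illustration under stated assumptions, not the patented method: `randomize_domain`, `relearn_policy`, and the idea of perturbing the failure-implicated parameter more aggressively are all illustrative choices.

```python
# Sketch: given observation data and a reported failure type, randomize
# simulation domains and "relearn" a policy over them.
import random
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Policy:
    weights: Dict[str, float] = field(default_factory=dict)

def randomize_domain(observation: Dict[str, float], failure_type: str,
                     n: int = 4, seed: int = 0) -> List[Dict[str, float]]:
    """Perturb observed environment parameters, with a wider spread for
    the parameter implicated by the failure type."""
    rng = random.Random(seed)
    domains = []
    for _ in range(n):
        domain = {}
        for key, value in observation.items():
            spread = 0.5 if key == failure_type else 0.1
            domain[key] = value * (1.0 + rng.uniform(-spread, spread))
        domains.append(domain)
    return domains

def relearn_policy(policy: Policy, domains: List[Dict[str, float]]) -> Policy:
    """Stand-in for RL relearning: summarize each parameter across the
    randomized domains and store it as a policy weight."""
    for key in domains[0]:
        policy.weights[key] = sum(d[key] for d in domains) / len(domains)
    return policy

observation = {"friction": 0.8, "lighting": 1.0}
updated = relearn_policy(Policy(), randomize_domain(observation, "friction"))
print(sorted(updated.weights))  # → ['friction', 'lighting']
```

The relearned `Policy` would then be transmitted back to the local device to replace its task policy, per the abstract's final step.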
  • Publication number: 20230147274
Abstract: Disclosed herein are a method and apparatus for recommending a table service based on image recognition. According to an embodiment of the present disclosure, there is provided a method for recommending a table service, including: receiving a table image that is captured in real time; acquiring, by using a pre-trained artificial-intelligence learning model, table information that includes object information and food information of at least one table in the table image; and recommending, based on the table information, a service for each of the at least one table.
    Type: Application
    Filed: September 6, 2022
    Publication date: May 11, 2023
    Inventors: Woo Han YUN, Do Hyung KIM, Jae Hong KIM, Tae Woo KIM, Chan Kyu PARK, Ho Sub YOON, Jae Yeon LEE, Min Su JANG
  • Publication number: 20230084229
    Abstract: A stealth antenna includes an electromagnetic wave absorbing structure and an antenna patch embedded in the electromagnetic wave absorbing structure. The electromagnetic wave absorbing structure includes an upper dielectric layer, a lower dielectric layer and a spacer disposed between the upper dielectric layer and the lower dielectric layer. The upper dielectric layer includes a dielectric fabric and a conductive coating layer combined with at least a portion of the dielectric fabric. The lower dielectric layer includes a dielectric fabric and has a dielectric constant lower than that of the upper dielectric layer. The antenna patch is disposed between the spacer and the lower dielectric layer.
    Type: Application
    Filed: September 14, 2022
    Publication date: March 16, 2023
    Applicant: Korea Advanced Institute of Science and Technology
    Inventors: Chun-Gon KIM, Woo-Hyeok JANG, Min-Su JANG, Do-Hyeon JIN
  • Publication number: 20230077103
    Abstract: Disclosed herein are a cloud server, an edge server, and a method for generating an intelligence model using the same. The method for generating an intelligence model includes receiving, by the edge server, an intelligence model generation request from a user terminal, generating an intelligence model corresponding to the intelligence model generation request, and adjusting the generated intelligence model.
    Type: Application
    Filed: June 9, 2022
    Publication date: March 9, 2023
    Inventors: Min-Su JANG, Do-Hyung KIM, Jae-Hong KIM, Woo-Han YUN
  • Publication number: 20230053151
    Abstract: Disclosed herein are an apparatus and method for classifying clothing attributes based on deep learning. The apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program includes a first classification unit for outputting a first classification result for one or more attributes of clothing worn by a person included in an input image, a mask generation unit for outputting a mask tensor in which multiple mask layers respectively corresponding to principal part regions obtained by segmenting a body of the person included in the input image are stacked, a second classification unit for outputting a second classification result for the one or more attributes of the clothing by applying the mask tensor, and a final classification unit for determining and outputting a final classification result for the input image based on the first classification result and the second classification result.
    Type: Application
    Filed: October 7, 2021
    Publication date: February 16, 2023
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Chan-Kyu PARK, Do-Hyung KIM, Jae-Hong KIM, Jae-Yeon LEE, Min-Su JANG
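The two-stage fusion described in the abstract above can be sketched as follows. This is an illustrative simplification, not the patented implementation: the "mask tensor" is reduced to stacked binary part masks, and the final fusion is assumed here to be a per-attribute mean of the two classifiers' scores.

```python
# Sketch: a whole-image classification result and a body-part-masked
# result are fused into a final per-attribute score.
from typing import Dict, List

def apply_masks(image: List[List[float]],
                masks: List[List[List[int]]]) -> List[List[List[float]]]:
    """Stack part-masked copies of the image (a toy 'mask tensor')."""
    return [[[pixel * m for pixel, m in zip(row, mrow)]
             for row, mrow in zip(image, mask)] for mask in masks]

def fuse(first: Dict[str, float], second: Dict[str, float]) -> Dict[str, float]:
    """Final classification: mean of the two per-attribute scores."""
    return {attr: (first[attr] + second[attr]) / 2 for attr in first}

# Tiny 2x2 "image" and two part masks (e.g. upper body / lower body).
image = [[1.0, 2.0], [3.0, 4.0]]
masks = [[[1, 0], [1, 0]], [[0, 1], [0, 1]]]
masked = apply_masks(image, masks)

first = {"sleeve_long": 0.75, "color_red": 0.5}   # whole-image result
second = {"sleeve_long": 0.5, "color_red": 0.25}  # masked result
final = fuse(first, second)
print(final["sleeve_long"])  # → 0.625
```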
  • Patent number: 11486777
Abstract: Embodiments relate to a torsion sensor device which measures a degree of torsion of a measurement object by using a fiber Bragg grating (FBG) sensor, the sensor device comprising: an FBG sensor including a sensing unit formed in one section of an elongated optical fiber; and a fixing device for fixing and supporting the FBG sensor to cause displacement of the FBG sensor according to motion of the measurement object, wherein the fixing device includes a bending prevention member to enable the sensing unit to have torsion displacement without bending displacement, according to the motion of the measurement object.
    Type: Grant
    Filed: July 10, 2019
    Date of Patent: November 1, 2022
    Assignee: Korea Institute of Science and Technology
    Inventors: Jinseok Kim, Sungwook Yang, Min Su Jang, Jun Sik Kim, Kyumin Kang, Bum-Jae You
  • Publication number: 20220055221
Abstract: Disclosed herein are an apparatus and method for generating robot interaction behavior. The method for generating robot interaction behavior includes generating a co-speech gesture of a robot corresponding to utterance input of a user, generating a nonverbal behavior of the robot, that is, a sequence of next joint positions of the robot, which are estimated from joint positions of the user and current joint positions of the robot based on a pre-trained neural network model for robot pose estimation, and generating a final behavior using at least one of the co-speech gesture and the nonverbal behavior.
    Type: Application
    Filed: November 27, 2020
    Publication date: February 24, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Woo-Ri KO, Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Jae-Yeon LEE, Min-Su JANG
  • Publication number: 20220024046
    Abstract: Disclosed herein are an apparatus and method for determining a modality of interaction between a user and a robot. The apparatus includes memory in which at least one program is recorded and a processor for executing the program. The program may perform recognizing a user state and an environment state by sensing circumstances around a robot, determining an interaction capability state associated with interaction with a user based on the recognized user state and environment state, and determining the interaction behavior of the robot for the interaction with the user based on the user state, the environment state, and the interaction capability state.
    Type: Application
    Filed: October 28, 2020
    Publication date: January 27, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Min-Su JANG, Do-Hyung KIM, Jae-Hong KIM, Jae-Yeon LEE
  • Publication number: 20220019916
    Abstract: Disclosed herein are an apparatus and method for recommending federated learning based on recognition model tendency analysis. The method for recommending federated learning based on recognition model tendency analysis in a server device may include analyzing the tendency of a recognition model trained using reinforcement learning by each of multiple user terminals, grouping the multiple user terminals according to the tendency of the recognition model, and transmitting federated-learning group information including information about other user terminals grouped together with at least one of the multiple user terminals.
    Type: Application
    Filed: December 2, 2020
    Publication date: January 20, 2022
    Applicant: Electronics and Telecommunications Research Institute
    Inventors: Jin-Hyeok JANG, Do-Hyung KIM, Jae-Hong KIM, Jae-Yeon LEE, Min-Su JANG, Jeong-Dan CHOI
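The grouping step in the abstract above can be sketched as below. This is a hedged illustration: summarizing each terminal's recognition-model tendency as a vector and grouping greedily by cosine similarity is an assumption for the example, not the patented analysis.

```python
# Sketch: terminals whose recognition-model tendency vectors are similar
# are placed in the same federated-learning group.
import math
from typing import Dict, List

def cosine(a: List[float], b: List[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def group_terminals(tendencies: Dict[str, List[float]],
                    threshold: float = 0.95) -> List[List[str]]:
    """Greedily assign each terminal to the first group whose
    representative tendency vector is sufficiently similar."""
    groups: List[List[str]] = []
    reps: List[List[float]] = []
    for terminal, vec in tendencies.items():
        for group, rep in zip(groups, reps):
            if cosine(vec, rep) >= threshold:
                group.append(terminal)
                break
        else:
            groups.append([terminal])
            reps.append(vec)
    return groups

tendencies = {"t1": [1.0, 0.0], "t2": [0.99, 0.05], "t3": [0.0, 1.0]}
print(group_terminals(tendencies))  # → [['t1', 't2'], ['t3']]
```

The server would then transmit each group's membership list as the federated-learning group information mentioned in the abstract.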
  • Publication number: 20210394021
    Abstract: Disclosed herein are an apparatus and method for evaluating a human motion using a mobile robot. The method may include identifying the exercise motion of a user by analyzing an image of the entire body of the user captured using a camera installed in the mobile robot, evaluating the pose of the user by comparing the standard pose of the identified exercise motion with images of the entire body of the user captured by the camera of the mobile robot from two or more target locations, and comprehensively evaluating the exercise motion of the user based on the pose evaluation information of the user from each of the two or more target locations.
    Type: Application
    Filed: November 30, 2020
    Publication date: December 23, 2021
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do-Hyung KIM, Jae-Hong KIM, Young-Woo YOON, Jae-Yeon LEE, Min-Su JANG, Jeong-Dan CHOI
  • Publication number: 20210293636
Abstract: Embodiments relate to a torsion sensor device which measures a degree of torsion of a measurement object by using a fiber Bragg grating (FBG) sensor, the sensor device comprising: an FBG sensor including a sensing unit formed in one section of an elongated optical fiber; and a fixing device for fixing and supporting the FBG sensor to cause displacement of the FBG sensor according to motion of the measurement object, wherein the fixing device includes a bending prevention member to enable the sensing unit to have torsion displacement without bending displacement, according to the motion of the measurement object.
    Type: Application
    Filed: July 10, 2019
    Publication date: September 23, 2021
    Applicants: KOREA INSTITUTE OF SCIENCE AND TECHNOLOGY, CENTER OF HUMAN-CENTERED INTERACTION FOR COEXISTENCE
    Inventors: Jinseok KIM, Sungwook YANG, Min Su JANG, Jun Sik KIM, Kyumin KANG, Bum-Jae YOU
  • Patent number: 11113988
    Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
    Type: Grant
    Filed: July 1, 2020
    Date of Patent: September 7, 2021
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do-Hyung Kim, Min-Su Jang, Jae-Hong Kim, Young-Woo Yoon, Jae-Il Cho
  • Publication number: 20200390371
    Abstract: Disclosed herein are an apparatus and method for evaluating a physical activity ability. The apparatus includes one or more processors and executable memory for storing at least one program executed by the one or more processors. The at least one program recognizes the position of a human by analyzing an image input through a camera, identifies the motion of the human by analyzing the sequence of the image, and evaluates the physical activity ability of the human from the motion of the human based on the body segment of the human.
    Type: Application
    Filed: December 11, 2019
    Publication date: December 17, 2020
    Inventors: Do-Hyung KIM, Jae-Hong KIM, Jae-Yeon LEE, Min-Su JANG, Sung-Woong SHIN
  • Publication number: 20200335007
    Abstract: Disclosed herein are an apparatus for writing a motion script and an apparatus and method for self-teaching of a motion. The method for self-teaching of a motion, in which the apparatus for writing a motion script and the apparatus for self-teaching of a motion are used, includes creating, by the apparatus for writing a motion script, a motion script based on expert motion of a first user; analyzing, by the apparatus for self-teaching of a motion, a motion of a second user, who learns the expert motion, based on the motion script; and outputting, by the apparatus for self-teaching of a motion, a result of analysis of the motion of the second user.
    Type: Application
    Filed: July 1, 2020
    Publication date: October 22, 2020
    Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do-Hyung KIM, Min-Su JANG, Jae-Hong KIM, Young-Woo YOON, Jae-Il CHO
  • Patent number: 10800043
    Abstract: Disclosed herein are an interaction apparatus and method. The interaction apparatus includes an input unit for receiving multimodal information including an image and a voice of a target to allow the interaction apparatus to interact with the target, a recognition unit for recognizing turn-taking behavior of the target using the multimodal information, and an execution unit for taking an activity for interacting with the target based on results of recognition of the turn-taking behavior.
    Type: Grant
    Filed: November 30, 2018
    Date of Patent: October 13, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Cheon-Shu Park, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang
  • Patent number: 10789458
    Abstract: Disclosed herein are a human behavior recognition apparatus and method. The human behavior recognition apparatus includes a multimodal sensor unit for generating at least one of image information, sound information, location information, and Internet-of-Things (IoT) information of a person using a multimodal sensor, a contextual information extraction unit for extracting contextual information for recognizing actions of the person from the at least one piece of generated information, a human behavior recognition unit for generating behavior recognition information by recognizing the actions of the person using the contextual information and recognizing a final action of the person using the behavior recognition information and behavior intention information, and a behavior intention inference unit for generating the behavior intention information based on context of action occurrence related to each of the actions of the person included in the behavior recognition information.
    Type: Grant
    Filed: December 7, 2018
    Date of Patent: September 29, 2020
    Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
    Inventors: Do-Hyung Kim, Jin-Hyeok Jang, Jae-Hong Kim, Sung-Woong Shin, Jae-Yeon Lee, Min-Su Jang
  • Patent number: 10777198
    Abstract: Disclosed herein are an apparatus and method for determining the speech and motion properties of an interactive robot. The method for determining the speech and motion properties of an interactive robot includes receiving interlocutor conversation information including at least one of voice information and image information about an interlocutor that interacts with an interactive robot, extracting at least one of a verbal property and a nonverbal property of the interlocutor by analyzing the interlocutor conversation information, determining at least one of a speech property and a motion property of the interactive robot based on at least one of the verbal property, the nonverbal property, and context information inferred from a conversation between the interactive robot and the interlocutor, and controlling the operation of the interactive robot based on at least one of the determined speech property and motion property of the interactive robot.
    Type: Grant
    Filed: August 13, 2018
    Date of Patent: September 15, 2020
    Assignee: Electronics and Telecommunications Research Institute
    Inventors: Young-Woo Yoon, Jae-Hong Kim, Jae-Yeon Lee, Min-Su Jang