Patents by Inventor Woo Tae Jeong
Woo Tae Jeong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20250068855
Abstract: The present invention relates to a context-based QA generation architecture for generating diverse QA pairs from a single context. The context-based QA generation architecture includes a latent variable generating network, an answer generating network and a question generating network. The latent variable generating network comprises multiple Bi-LSTM encoders that encode a first context, a first question and a first answer to generate a first context vector, a first question vector and a first answer vector, respectively; a first Multi-Layer Perceptron (MLP) that generates a first question latent variable based on the first context vector and the first question vector; and a second MLP that generates a first answer latent variable based on the first question latent variable and the first answer vector. The answer generating network and the question generating network are trained based on the first context, the first question latent variable and the first answer latent variable.
Type: Application
Filed: November 11, 2024
Publication date: February 27, 2025
Applicants: 42 Maru Inc., Korea Advanced Institute of Science and Technology
Inventors: Dong Hwan Kim, Sung Ju Hwang, Seanie Lee, Dong Bok Lee, Woo Tae Jeong, Han Su Kim, You Kyung Kwon, Hyun Ok Kim
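The latent variable generating network in this abstract can be pictured with a minimal PyTorch sketch: three Bi-LSTM encoders followed by two MLPs. The dimensions, the mean-pooling of encoder states, and all class and variable names below are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the latent variable generating network described above.
# Sizes, mean-pooling, and names are assumptions, not taken from the patent.
import torch
import torch.nn as nn

class BiLSTMEncoder(nn.Module):
    """Encodes an embedded token sequence into a single pooled vector."""
    def __init__(self, emb_dim=300, hidden=256):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):                      # x: (batch, seq_len, emb_dim)
        out, _ = self.lstm(x)                  # (batch, seq_len, 2*hidden)
        return out.mean(dim=1)                 # pooled sequence vector

class LatentVariableNetwork(nn.Module):
    """Bi-LSTM encoders plus two MLPs producing question/answer latent variables."""
    def __init__(self, emb_dim=300, hidden=256, latent=64):
        super().__init__()
        self.ctx_enc = BiLSTMEncoder(emb_dim, hidden)
        self.q_enc = BiLSTMEncoder(emb_dim, hidden)
        self.a_enc = BiLSTMEncoder(emb_dim, hidden)
        vec = 2 * hidden
        # first MLP: question latent from (context vector, question vector)
        self.q_mlp = nn.Sequential(nn.Linear(2 * vec, vec), nn.Tanh(), nn.Linear(vec, latent))
        # second MLP: answer latent from (question latent, answer vector)
        self.a_mlp = nn.Sequential(nn.Linear(latent + vec, vec), nn.Tanh(), nn.Linear(vec, latent))

    def forward(self, ctx_emb, q_emb, a_emb):
        c, q, a = self.ctx_enc(ctx_emb), self.q_enc(q_emb), self.a_enc(a_emb)
        z_q = self.q_mlp(torch.cat([c, q], dim=-1))    # first question latent variable
        z_a = self.a_mlp(torch.cat([z_q, a], dim=-1))  # first answer latent variable
        return z_q, z_a

net = LatentVariableNetwork()
z_q, z_a = net(torch.randn(2, 40, 300), torch.randn(2, 12, 300), torch.randn(2, 6, 300))
print(z_q.shape, z_a.shape)   # torch.Size([2, 64]) torch.Size([2, 64])
```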
-
Publication number: 20250021590
Abstract: The invention relates to a method and a system for improving performance of text summarization and has an object of improving performance of a technique for generating a summary from a given paragraph. According to the invention, to achieve the object, a method for improving performance of text summarization includes: calculating a first likelihood of each of a plurality of nodes included in a graph corresponding to a natural language-based context; calculating a second likelihood of each of the plurality of nodes by assigning a weight to the first likelihood of a node corresponding to a keyword not present in the context among a plurality of keywords corresponding to each of the plurality of nodes; calculating a third likelihood of each of all paths present in the graph based on the second likelihood of each of the plurality of nodes; and generating a summary for the context based on a path having the highest third likelihood among the paths.
Type: Application
Filed: September 27, 2024
Publication date: January 16, 2025
Applicant: 42Maru Inc.
Inventors: Dong Hwan KIM, Han Su KIM, Woo Tae JEONG, Seung Hyeon LEE, Chang Hyeon LIM
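A toy sketch of the likelihood re-weighting and path selection described in this abstract follows; the penalty factor for keywords absent from the context, the log-sum path score used as the third likelihood, and all names are illustrative assumptions, not the claimed formulation.

```python
# Toy sketch: down-weight nodes whose keyword is absent from the context,
# then pick the path with the best summed log-likelihood.
import math

def second_likelihood(nodes, context, penalty=0.5):
    """nodes maps node id -> (keyword, first likelihood); absent keywords are penalized."""
    return {
        n: p * (1.0 if kw.lower() in context.lower() else penalty)
        for n, (kw, p) in nodes.items()
    }

def best_path(paths, likelihoods):
    """Score each candidate path by the sum of log second-likelihoods (the third likelihood)."""
    def score(path):
        return sum(math.log(likelihoods[n]) for n in path)
    return max(paths, key=score)

nodes = {0: ("model", 0.9), 1: ("graph", 0.8), 2: ("unicorn", 0.7)}
context = "A graph-based model generates a summary from a paragraph."
weights = second_likelihood(nodes, context)
print(best_path([[0, 1], [0, 2]], weights))   # -> [0, 1]
```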
-
Patent number: 12159118
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
Type: Grant
Filed: December 18, 2023
Date of Patent: December 3, 2024
Assignees: 42 Maru Inc., Korea Advanced Institute of Science and Technology
Inventors: Dong Hwan Kim, Sung Ju Hwang, Seanie Lee, Dong Bok Lee, Woo Tae Jeong, Han Su Kim, You Kyung Kwon, Hyun Ok Kim
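The generation side of this architecture decodes latent variables back into text. The sketch below shows one possible shape of that step, a greedy GRU decoder over an answer latent variable; the question generating network, conditioned on the second context and second answer, would follow a similar pattern. Sizes, greedy decoding, and all names are assumptions.

```python
# Sketch of decoding a latent variable into a token sequence (illustrative only).
import torch
import torch.nn as nn

class LatentDecoder(nn.Module):
    """Greedy GRU decoder that turns a latent vector into token ids."""
    def __init__(self, vocab=1000, emb=128, latent=64, hidden=256, max_len=20):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.init_h = nn.Linear(latent, hidden)
        self.gru = nn.GRU(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)
        self.max_len = max_len

    @torch.no_grad()
    def forward(self, z, bos_id=1, eos_id=2):
        h = torch.tanh(self.init_h(z)).unsqueeze(0)          # (1, batch, hidden)
        tok = torch.full((z.size(0), 1), bos_id, dtype=torch.long)
        outputs = []
        for _ in range(self.max_len):
            o, h = self.gru(self.embed(tok), h)
            tok = self.out(o[:, -1]).argmax(-1, keepdim=True)  # greedy next token
            outputs.append(tok)
            if (tok == eos_id).all():
                break
        return torch.cat(outputs, dim=1)                      # generated token ids

# z would come from applying the trained latent variable network to a second context
decoder = LatentDecoder()
print(decoder(torch.randn(1, 64)).shape)
```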
-
Patent number: 12141532
Abstract: Aspects of the subject disclosure may include systems and methods, for example, including receiving user question data in a speech format or a text format, analyzing the user question data, selecting a plurality of documents from a plurality of domains corresponding to the user question data, searching, in the plurality of documents, for a plurality of passages including candidates for an answer value determined as being suitable for the user question data, obtaining candidates by inputting the user question data and the plurality of passages into a plurality of MRC question and answer units, determining the answer value based on whether a reliability value of each of the candidates exceeds a threshold value, and providing the determined answer value to a user. Other embodiments are disclosed.
Type: Grant
Filed: October 5, 2023
Date of Patent: November 12, 2024
Assignee: 42 Maru Inc.
Inventors: Dong Hwan Kim, Hyun Ok Kim, Woo Tae Jeong
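The answer-determination step described here can be pictured as thresholding candidate reliabilities across several MRC units. A minimal sketch follows; the stub MRC unit, the 0.5 threshold, and all names are illustrative assumptions.

```python
# Sketch of reliability-thresholded answer selection across multiple MRC units.
from dataclasses import dataclass

@dataclass
class Candidate:
    answer: str
    reliability: float

def select_answer(question, passages, mrc_units, threshold=0.5):
    """Run every MRC unit over every passage and keep the most reliable passing candidate."""
    candidates = [unit(question, p) for unit in mrc_units for p in passages]
    accepted = [c for c in candidates if c.reliability > threshold]
    return max(accepted, key=lambda c: c.reliability).answer if accepted else None

def toy_mrc(question, passage):
    """Stub standing in for a trained machine-reading-comprehension model."""
    return Candidate(answer=passage.split(".")[0], reliability=0.8)

print(select_answer("What does the device measure?",
                    ["The device measures rail head profiles. It is fixed by a clamp."],
                    [toy_mrc]))
```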
-
Publication number: 20240369442
Abstract: A rail management device includes a measurement unit configured to measure upper and lateral profiles of a head portion of a rail, and a securing unit fixed to the rail and configured to support and fix the measurement unit, in which the securing unit includes a body connected to the measurement unit, an elastic clamp provided at a lower side of the body and provided to be in close contact with an upper surface of the head portion, an invariable clamp provided at one side of the body and provided to be in close contact with one side lower surface of the head portion, and a variable clamp provided at the other side of the body and provided to be in close contact with the other side lower surface of the head portion.
Type: Application
Filed: November 2, 2021
Publication date: November 7, 2024
Inventor: Woo Tae JEONG
-
Patent number: 12130851
Abstract: The invention relates to a method and a system for improving performance of text summarization and has an object of improving performance of a technique for generating a summary from a given paragraph. According to the invention, to achieve the object, a method for improving performance of text summarization includes: a step (a) of generating an embedding vector by vectorizing a natural language-based context; a step (b) of generating a graph using the embedding vector and calculating a first likelihood of each of at least one node included in the graph; a step (c) of generating a second likelihood by assigning a weight to the first likelihood according to a result of comparing at least one node included in the graph with the context; and a step (d) of calculating a third likelihood for all candidate paths present in the graph based on the second likelihood, selecting a path having the highest third likelihood, and generating a summary based on the path.
Type: Grant
Filed: June 14, 2023
Date of Patent: October 29, 2024
Assignee: 42Maru Inc.
Inventors: Dong Hwan Kim, Han Su Kim, Woo Tae Jeong, Seung Hyeon Lee, Chang Hyeon Lim
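Steps (a) and (b), vectorizing the context and building a graph from the embedding vectors, can be sketched with toy bag-of-words vectors and a cosine-similarity cutoff for drawing edges; both choices, and all names below, are illustrative assumptions rather than the patented embedding or graph construction.

```python
# Toy sketch of embedding sentences and connecting similar ones into a graph.
import math
from collections import Counter

def embed(sentence):
    """Toy bag-of-words embedding; a real system would use a learned encoder."""
    return Counter(sentence.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def build_graph(sentences, cutoff=0.3):
    """Edges (i, j) -> similarity, kept only when similarity reaches the cutoff."""
    vecs = [embed(s) for s in sentences]
    return {
        (i, j): cosine(vecs[i], vecs[j])
        for i in range(len(vecs)) for j in range(i + 1, len(vecs))
        if cosine(vecs[i], vecs[j]) >= cutoff
    }

print(build_graph(["the model builds a graph",
                   "a graph guides the summary",
                   "unrelated sentence"]))
```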
-
Publication number: 20240232531
Abstract: The present invention relates to a method for reinforcing a multiple-choice QA model based on adversarial learning techniques, wherein incorrect answers are further generated based on a data set used in the process of training the multiple-choice QA model to enrich data which are learnable by the multiple-choice QA model.
Type: Application
Filed: March 20, 2024
Publication date: July 11, 2024
Applicant: 42Maru Inc.
Inventors: Dong Hwan KIM, Han Su KIM, Woo Tae JEONG, Ki Bong SUNG, Hyeon Dey KIM
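One way to read the data-enrichment idea in this abstract is as hard-negative mining: candidate incorrect answers that the current model finds most confusable are added back to the training set. The sketch below shows that reading with a stub scorer; the scoring function, the value of k, and all names are assumptions rather than the patented technique.

```python
# Toy sketch of mining confusing incorrect answers for a multiple-choice QA model.
def mine_distractors(question, correct, candidates, score_fn, k=2):
    """Return the k wrong answers the model scores highest against the question."""
    wrong = [c for c in candidates if c != correct]
    return sorted(wrong, key=lambda c: score_fn(question, c), reverse=True)[:k]

def toy_score(question, answer):
    """Stub scorer standing in for the trained multiple-choice QA model."""
    return len(set(question.lower().split()) & set(answer.lower().split()))

q = "which unit measures the rail head profile"
options = ["the measurement unit", "the securing unit of the rail",
           "a question generating network", "an embedding vector"]
print(mine_distractors(q, "the measurement unit", options, toy_score))
```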
-
Publication number: 20240143940
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
Type: Application
Filed: December 18, 2023
Publication date: May 2, 2024
Inventors: Dong Hwan KIM, Sung Ju HWANG, Seanie LEE, Dong Bok LEE, Woo Tae JEONG, Han Su KIM, You Kyung KWON, Hyun Ok KIM
-
Patent number: 11960838
Abstract: The present invention relates to a method for reinforcing a multiple-choice QA model based on adversarial learning techniques, wherein incorrect answers are further generated based on a data set used in the process of training the multiple-choice QA model to enrich data which are learnable by the multiple-choice QA model.
Type: Grant
Filed: December 11, 2020
Date of Patent: April 16, 2024
Assignee: 42Maru Inc.
Inventors: Dong Hwan Kim, Han Su Kim, Woo Tae Jeong, Ki Bong Sung, Hyeon Dey Kim
-
Patent number: 11886233
Abstract: The present invention relates to a context-based QA generation architecture, and an object of the present invention is to generate diverse QA pairs from a single context. To achieve the object, the present invention includes a latent variable generating network including at least one encoder and an artificial neural network (Multi-Layer Perceptron: MLP) and configured to train the artificial neural network using a first context, a first question, and a first answer, and generate a second question latent variable and a second answer latent variable by applying the trained artificial neural network to a second context, an answer generating network configured to generate a second answer by decoding the second answer latent variable, and a question generating network configured to generate a second question based on a second context and the second answer.
Type: Grant
Filed: November 12, 2020
Date of Patent: January 30, 2024
Inventors: Dong Hwan Kim, Sung Ju Hwang, Seanie Lee, Dong Bok Lee, Woo Tae Jeong, Han Su Kim, You Kyung Kwon, Hyun Ok Kim
-
Publication number: 20240028837
Abstract: Aspects of the subject disclosure may include systems and methods, for example, including receiving user question data in a speech format or a text format, analyzing the user question data, selecting a plurality of documents from a plurality of domains corresponding to the user question data, searching, in the plurality of documents, for a plurality of passages including candidates for an answer value determined as being suitable for the user question data, obtaining candidates by inputting the user question data and the plurality of passages into a plurality of MRC question and answer units, determining the answer value based on whether a reliability value of each of the candidates exceeds a threshold value, and providing the determined answer value to a user. Other embodiments are disclosed.
Type: Application
Filed: October 5, 2023
Publication date: January 25, 2024
Applicant: 42 Maru Inc.
Inventors: Dong Hwan KIM, Hyun Ok KIM, Woo Tae JEONG
-
Patent number: 11816441
Abstract: A machine reading comprehension (MRC) question and answer providing method includes receiving a user question; analyzing the user question; selecting at least one document from at least one domain corresponding to an analyzed user question and searching for a passage, which is a candidate answer determined as being suitable for the user question, in the selected at least one document; obtaining at least one correct answer candidate value by inputting the user question and a corresponding passage into each of at least one MRC question and answer unit; and determining whether the at least one correct answer candidate value is a best answer.
Type: Grant
Filed: November 18, 2022
Date of Patent: November 14, 2023
Assignee: 42 MARU INC.
Inventors: Dong Hwan Kim, Hyun Ok Kim, Woo Tae Jeong
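The passage-search step in this method can be pictured with a toy keyword-overlap ranker that returns the passages most similar to the user question from the selected documents, which would then feed the MRC question and answer units. A real retriever would be far more sophisticated; the scoring, the top_k value, and all names below are illustrative assumptions.

```python
# Toy sketch of ranking candidate passages against the analyzed user question.
def search_passages(question, documents, top_k=3):
    """Return up to top_k passages (across all documents) ranked by keyword overlap."""
    q_terms = set(question.lower().split())
    scored = [
        (len(q_terms & set(p.lower().split())), p)
        for doc in documents for p in doc
    ]
    ranked = sorted(scored, key=lambda t: t[0], reverse=True)
    return [p for score, p in ranked[:top_k] if score > 0]

docs = [[
    "The securing unit fixes the measurement unit to the rail.",
    "A graph guides the generated summary.",
]]
print(search_passages("how is the measurement unit fixed to the rail", docs))
```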
-
Publication number: 20230342620
Abstract: A method of generating a question-answer learning model through adversarial learning may include: sampling a latent variable based on constraints in an input passage; generating an answer based on the latent variable; generating a question based on the answer; and machine-learning the question-answer learning model using a dataset of the generated question and answer, wherein the constraints are controlled so that the latent variable is present in a data manifold while increasing a loss of the question-answer learning model.
Type: Application
Filed: June 26, 2023
Publication date: October 26, 2023
Applicant: 42Maru Inc.
Inventors: Dong Hwan KIM, Woo Tae JEONG, Seanie LEE, Gilje SEONG
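The constraint described here, keeping the latent variable near the data manifold while increasing the model's loss, resembles a projected adversarial perturbation. The sketch below shows that reading in PyTorch with a toy surrogate loss; the step size, the L2-ball radius, and all names are assumptions rather than the patent's exact constraint mechanism.

```python
# Sketch of a constrained, loss-increasing perturbation of a latent variable.
import torch

def adversarial_latent(z, loss_fn, step=0.1, radius=0.5):
    """One loss-increasing step on z, kept inside an L2 ball around the original z."""
    z_adv = z.clone().detach().requires_grad_(True)
    loss_fn(z_adv).backward()                          # gradient of the model loss w.r.t. z
    with torch.no_grad():
        delta = step * z_adv.grad.sign()               # move in the loss-increasing direction
        delta = delta * (radius / delta.norm().clamp(min=radius))  # constrain the perturbation
    return (z + delta).detach()

# toy surrogate loss standing in for the question-answer learning model's loss
z0 = torch.randn(1, 64)
z1 = adversarial_latent(z0, lambda z: (z ** 2).sum())
print((z1 - z0).norm())   # perturbation norm stays within the radius
```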
-
Publication number: 20230325423
Abstract: The invention relates to a method and a system for improving performance of text summarization and has an object of improving performance of a technique for generating a summary from a given paragraph. According to the invention, to achieve the object, a method for improving performance of text summarization includes: a step (a) of generating an embedding vector by vectorizing a natural language-based context; a step (b) of generating a graph using the embedding vector and calculating a first likelihood of each of at least one node included in the graph; a step (c) of generating a second likelihood by assigning a weight to the first likelihood according to a result of comparing at least one node included in the graph with the context; and a step (d) of calculating a third likelihood for all candidate paths present in the graph based on the second likelihood, selecting a path having the highest third likelihood, and generating a summary based on the path.
Type: Application
Filed: June 14, 2023
Publication date: October 12, 2023
Applicant: 42Maru Inc.
Inventors: Dong Hwan KIM, Han Su Kim, Woo Tae Jeong, Seung Hyeon Lee, Chang Hyeon Lim
-
Patent number: 11727041
Abstract: The invention relates to a method and a system for improving performance of text summarization and has an object of improving performance of a technique for generating a summary from a given paragraph. According to the invention, to achieve the object, a method for improving performance of text summarization includes: a step (a) of generating an embedding vector by vectorizing a natural language-based context; a step (b) of generating a graph by using the embedding vector; a step (c) of assigning a weight depending on whether or not a keyword corresponding to at least one node included in the graph is present in the context; and a step (d) of selecting a path having the highest likelihood in the graph and generating a summary based on the path.
Type: Grant
Filed: December 17, 2020
Date of Patent: August 15, 2023
Inventors: Dong Hwan Kim, Han Su Kim, Woo Tae Jeong, Seung Hyeon Lee, Chang Hyeon Lim
-
Patent number: 11710046
Abstract: A method of generating a question-answer learning model through adversarial learning may include: sampling a latent variable based on constraints in an input passage; generating an answer based on the latent variable; generating a question based on the answer; and machine-learning the question-answer learning model using a dataset of the generated question and answer, wherein the constraints are controlled so that the latent variable is present in a data manifold while increasing a loss of the question-answer learning model.
Type: Grant
Filed: November 29, 2019
Date of Patent: July 25, 2023
Inventors: Dong Hwan Kim, Woo Tae Jeong, Seanie Lee, Gilje Seong
-
Publication number: 20230078362
Abstract: A machine reading comprehension (MRC) question and answer providing method includes receiving a user question; analyzing the user question; selecting at least one document from at least one domain corresponding to an analyzed user question and searching for a passage, which is a candidate answer determined as being suitable for the user question, in the selected at least one document; obtaining at least one correct answer candidate value by inputting the user question and a corresponding passage into each of at least one MRC question and answer unit; and determining whether the at least one correct answer candidate value is a best answer.
Type: Application
Filed: November 18, 2022
Publication date: March 16, 2023
Applicant: 42 Maru Inc.
Inventors: Dong Hwan KIM, Hyun Ok KIM, Woo Tae JEONG
-
Patent number: 11531818
Abstract: A machine reading comprehension (MRC) question and answer providing method includes receiving a user question; analyzing the user question; selecting at least one document from at least one domain corresponding to an analyzed user question and searching for a passage, which is a candidate answer determined as being suitable for the user question, in the selected at least one document; obtaining at least one correct answer candidate value by inputting the user question and a corresponding passage into each of at least one MRC question and answer unit; and determining whether the at least one correct answer candidate value is a best answer.
Type: Grant
Filed: November 15, 2019
Date of Patent: December 20, 2022
Assignee: 42 MARU INC.
Inventors: Dong Hwan Kim, Hyun Ok Kim, Woo Tae Jeong
-
Publication number: 20220179893
Abstract: The invention relates to a method and a system for improving performance of text summarization and has an object of improving performance of a technique for generating a summary from a given paragraph. According to the invention, to achieve the object, a method for improving performance of text summarization includes: a step (a) of generating an embedding vector by vectorizing a natural language-based context; a step (b) of generating a graph by using the embedding vector; a step (c) of assigning a weight depending on whether or not a keyword corresponding to at least one node included in the graph is present in the context; and a step (d) of selecting a path having the highest likelihood in the graph and generating a summary based on the path.
Type: Application
Filed: December 17, 2020
Publication date: June 9, 2022
Inventors: Dong Hwan KIM, Han Su KIM, Woo Tae JEONG, Seung Hyeon LEE, Chang Hyeon LIM
-
Publication number: 20220180061
Abstract: The present invention relates to a method for reinforcing a multiple-choice QA model based on adversarial learning techniques, wherein incorrect answers are further generated based on a data set used in the process of training the multiple-choice QA model to enrich data which are learnable by the multiple-choice QA model.
Type: Application
Filed: December 11, 2020
Publication date: June 9, 2022
Inventors: Dong Hwan KIM, Han Su KIM, Woo Tae JEONG, Ki Bong SUNG, Hyeon Dey KIM