Patents by Inventor Jin Young Moon
Jin Young Moon has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12380697
Abstract: Disclosed herein are a method and apparatus for action detection. The method for action detection includes extracting chunk-level features for respective video frame chunks from a streaming video ranging from a past time point to a current time point, based on RGB frames, generating elevated feature information based on a chunk-level feature corresponding to the current time point for each of the video frame chunks, and detecting an action corresponding to the current time point based on the elevated feature information.
Type: Grant
Filed: August 18, 2022
Date of Patent: August 5, 2025
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Young Moon, Sun-Ah Min
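The chunk-level feature extraction the abstract describes can be illustrated with a minimal sketch. This is not code from the patent; the function name, the use of mean pooling, and the drop-trailing-frames policy are all assumptions made purely for illustration of the general idea of grouping per-frame features into per-chunk features.

```python
def chunk_level_features(frame_features, chunk_size):
    """Mean-pool consecutive per-frame feature vectors into one feature
    vector per fixed-size chunk. Trailing frames that do not fill a
    complete chunk are dropped (an illustrative choice, not the patent's)."""
    n_chunks = len(frame_features) // chunk_size
    chunks = []
    for c in range(n_chunks):
        group = frame_features[c * chunk_size:(c + 1) * chunk_size]
        dim = len(group[0])
        # Average each feature dimension across the frames in this chunk.
        chunks.append([sum(f[i] for f in group) / chunk_size for i in range(dim)])
    return chunks
```

For example, four 1-D frame features pooled with `chunk_size=2` yield two chunk-level features, each the mean of its two frames.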
-
Patent number: 12283101
Abstract: Provided are an apparatus and method for recognizing whether an action objective is achieved. The apparatus includes: a video feature extraction module configured to receive a video and output a video feature sequence through an operator, such as a convolution operator; an action case memory module configured to compress and store action case information for each action type and each of the success and failure groups transmitted from the video feature extraction module, and to return the action case information according to a query including a pair of an action type and whether the action is successful; and an action success or failure determination module configured to receive a pair of a video feature sequence and an action type identifier and, in connection with the action case memory module, output whether an action of the given video feature sequence is successful.
Type: Grant
Filed: April 1, 2022
Date of Patent: April 22, 2025
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Dong Jin Sim, Jin Young Moon
-
Publication number: 20250086227
Abstract: Provided is a system for detecting a video semantic interval. The system includes a communication module configured to receive a video and a query sentence, memory in which a program for outputting a semantic interval proposal from the video and the query sentence is stored, and a processor configured to execute the program stored in the memory. By executing the program, the processor outputs a semantic interval proposal having start timing and end timing, which is matched with the query sentence within the video, through a pre-trained video semantic interval detection network based on boundary refinements as the result of the detection of the semantic interval proposal, and outputs a semantic interval proposal having a variable boundary through the refinements of a predetermined semantic interval proposal.
Type: Application
Filed: November 27, 2023
Publication date: March 13, 2025
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Young MOON, Jonghee KIM, Muah SEOL
-
Publication number: 20250037469
Abstract: A method and apparatus for predicting pedestrian safety information based on video are disclosed. Pedestrian trajectory prediction data based on video input is first generated. Then, pedestrian behavior prediction data based on the video input is generated. A potential risk to pedestrian safety is estimated based on the pedestrian trajectory prediction data, the pedestrian behavior prediction data, and surface classification data in the video data.
Type: Application
Filed: July 12, 2024
Publication date: January 30, 2025
Inventors: Sungchan Oh, Daehoe Kim, Je-Seok Ham, Jin Young Moon, Yongjin Kwon, Jonghee Kim
-
Patent number: 12202072
Abstract: The present invention relates to a secondary battery capable of improving the coupling force between a safety vent and a cap-up. For example, disclosed is a secondary battery comprising: an electrode assembly; a case for accommodating the electrode assembly; a cap assembly coupled to the upper part of the case; and a gasket interposed between the cap assembly and the case, wherein the cap assembly includes a cap-up and a safety vent, which is provided at the lower part of the cap-up and has a vent extension part extending to the upper part of the cap-up so as to encompass the edge of the cap-up, a welding region, in which the safety vent and the cap-up are welded and coupled by laser welding, is formed in the vent extension part, and the welding region is formed in a line shape.
Type: Grant
Filed: April 13, 2023
Date of Patent: January 21, 2025
Assignee: Samsung SDI Co., Ltd.
Inventors: Sung Gwi Ko, Dae Kyu Kim, Jin Young Moon
-
Patent number: 12199295
Abstract: The present invention relates to a secondary battery, wherein the length of a second bending portion of a safety plate covering an extension portion of a cap-up can be adjusted to prevent deformation of the safety plate and improve the sealing force of the secondary battery, even when a crimping part having a flat upper structure in which the sealing force can deteriorate due to the low compression rate of an insulating gasket is used. For example, disclosed is a secondary battery comprising: a cylindrical can; an electrode assembly which, together with an electrolyte, is accommodated in the cylindrical can; a cap assembly coupled to an upper portion of the cylindrical can; and an insulating gasket interposed between the cap assembly and the cylindrical can. The cap assembly comprises: a cap-up; and a safety plate which is installed below the cap-up and has a second bending portion that surrounds an extension portion of an edge of the cap-up and covers a portion of the upper surface of the extension portion.
Type: Grant
Filed: June 29, 2020
Date of Patent: January 14, 2025
Assignee: Samsung SDI Co., Ltd.
Inventors: Dae Kyu Kim, Jin Young Moon, Sung Gwi Ko
-
Patent number: 12142028
Abstract: Disclosed herein are an object recognition apparatus and method based on environment matching. The object recognition apparatus includes memory for storing at least one program and a processor for executing the program, wherein the program performs: extracting at least one key frame from a video that is input in real time; determining a similarity between the key frame extracted from the input video and each of the videos used as training data of prestored multiple recognition models, based on a pretrained similarity-matching network; selecting the recognition model pretrained with the video having a maximal similarity to the key frame extracted from the input video; preprocessing the input video such that at least one of the color and size of a video used as training data of an initial model is similar to that of the input video; and recognizing the preprocessed video based on the initial model.
Type: Grant
Filed: December 14, 2021
Date of Patent: November 12, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Ki-Min Yun, Jin-Young Moon, Jong-Won Choi, Joung-Su Youn, Seok-Jun Choi, Woo-Seok Hyung
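The model-selection step in this abstract (pick the recognition model whose training video best matches the key frame) can be sketched as follows. This is an illustration only: the patent uses a pretrained similarity-matching network, for which plain cosine similarity is substituted here, and the function and model names are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb + 1e-8)  # epsilon guards against zero vectors

def select_model(key_frame_feat, model_video_feats):
    """Return the name of the recognition model whose training-video
    feature is most similar to the key-frame feature. Cosine similarity
    stands in for the patent's learned similarity-matching network."""
    return max(model_video_feats,
               key=lambda name: cosine(key_frame_feat, model_video_feats[name]))
```

For instance, a key-frame feature close to an "indoor" training-video feature selects the model trained on that video rather than one trained on dissimilar footage.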
-
Publication number: 20240320963
Abstract: The present invention relates to a visual-linguistic feature fusion method and system. The visual-linguistic feature fusion method includes generating a linguistic feature using a text encoder based on text, generating a visual feature using a video encoder based on a video frame, and generating a fused feature of the linguistic feature and the visual feature using an attention technique based on the linguistic feature and the visual feature.
Type: Application
Filed: March 20, 2024
Publication date: September 26, 2024
Inventors: JONGHEE KIM, Jin Young Moon
-
Patent number: 12067365
Abstract: Disclosed herein are an apparatus for detecting a moment described by a sentence query in a video and a method using the same. A method for detecting a moment described by a sentence query in a video includes dividing an input video into units of chunks and generating a chunk-level feature sequence based on features that are extracted in a form of vectors from respective chunks, dividing an input sentence query into units of words and generating a sentence-level feature sequence based on features that are extracted in a form of vectors from respective words, generating a chunk-sentence relation feature sequence including contextual information of the video by extracting a relation between the chunk-level feature sequence and the sentence-level feature sequence, and estimating a temporal interval corresponding to the sentence query in the video based on the chunk-sentence relation feature sequence.
Type: Grant
Filed: August 10, 2021
Date of Patent: August 20, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin-Young Moon, Jung-Kyoo Shin, Hyung-Il Kim
-
Publication number: 20240274933
Abstract: A cylindrical secondary battery includes: an electrode assembly including a positive electrode plate, a separator, and a negative electrode plate; a cylindrical can accommodating the electrode assembly and being electrically connected to the negative electrode plate, a lower end of the cylindrical can being open; a rivet terminal passing through an upper surface of the cylindrical can and electrically connected to the positive electrode plate; and a cap plate sealing the lower end of the cylindrical can, the cap plate having no electrical polarity.
Type: Application
Filed: April 23, 2024
Publication date: August 15, 2024
Inventors: Jin Young MOON, Gun Gue PARK, Gwan Hyeon YU, Hyun Ki JUNG, Myung Seob KIM, Sung Gwi KO, Woo Hyuk CHOI
-
Publication number: 20240273889
Abstract: Disclosed herein are a system and method for classifying novel class objects. The method of classifying novel class objects includes (a) constructing a novel classifier considering prior knowledge acquired from a base classifier, and (b) learning a parameterized weight coefficient of a novel classifier model during the training of the novel classifier.
Type: Application
Filed: February 7, 2024
Publication date: August 15, 2024
Inventors: Ye-Bin Moon, Yongjin Kwon, Jin Young Moon, Tae-Hyun Oh
-
Patent number: 12056909
Abstract: A method and apparatus for face recognition robust to misalignment of the face, comprising: estimating prior information of a facial shape from an input image cropped from an image including a face, using a first deep neural network (DNN); extracting feature information of facial appearance from the input image by using a second DNN; training the face recognition apparatus by using a face image decoder based on the prior information and the feature information; and extracting, from a test image, facial shape-aware features in the inference step by using the trained second DNN.
Type: Grant
Filed: October 28, 2021
Date of Patent: August 6, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyungil Kim, Kimin Yun, Yongjin Kwon, Jin Young Moon, Jongyoul Park, Kang Min Bae, Sungchan Oh, Youngwan Lee
-
Patent number: 12019679
Abstract: Disclosed herein are a method and apparatus for searching for a video section by using a natural language. The method for searching for a video section includes: extracting keywords from a natural language sentence, when the natural language sentence is input; determining whether or not the extracted keywords are included in predefined context information; and deriving and providing a final search result. In addition, when the extracted keywords are included in the predefined context information, a search result is derived by performing a first method, and when the extracted keywords are not included in the predefined context information, a search result is derived by performing a second method.
Type: Grant
Filed: August 19, 2022
Date of Patent: June 25, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jong Hee Kim, Hyung Il Kim, Jin Young Moon, Je Seok Ham
-
Patent number: 12019678
Abstract: Provided is a method of detecting a semantic section in a video. The method includes extracting all video features by inputting an input video to a pre-trained first deep neural network algorithm, extracting a query sentence feature by inputting an input query sentence to a pre-trained second deep neural network algorithm, generating video-query relation integration feature information, in which all of the video features and the query sentence feature have been integrated, by inputting all of the video features and the query sentence feature to a plurality of scaled dot-product attention layers, and estimating a video segment corresponding to the query sentence in the video based on the video-query relation integration feature information.
Type: Grant
Filed: August 4, 2022
Date of Patent: June 25, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Young Moon, Jung Kyoo Shin
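The scaled dot-product attention layers this abstract relies on follow a standard, well-known formula: softmax(QK&#x1D40;/&#x221A;d)V. Below is a minimal single-query sketch of that textbook operation, not the patent's layer stack; all names are illustrative.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(query, keys, values):
    """Scaled dot-product attention for one query vector over lists of
    key and value vectors: weights = softmax(q . k / sqrt(d)), then a
    weight-averaged value vector is returned."""
    d = len(query)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d) for key in keys]
    weights = softmax(scores)
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

With a query that strongly matches the first key, almost all attention weight falls on the first value vector, which is how such layers let query-sentence features pick out the matching video features.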
-
Publication number: 20240097297
Abstract: A cylindrical secondary battery includes: an electrode assembly including a first electrode plate and a second electrode plate; a cylindrical case having a disk-shaped top portion and a side portion extending from the top portion, the cylindrical case accommodating the electrode assembly; a cathode terminal extending through the top portion and insulated therefrom; a first current collector plate electrically connected to the first electrode plate and the cathode terminal; a second current collector plate electrically connected to the second electrode plate and the side portion of the cylindrical case; a cap plate coupled to the side portion and insulated therefrom; and an insulation tape between the top portion of the cylindrical case and the first current collector plate and covering the first current collector plate.
Type: Application
Filed: August 16, 2023
Publication date: March 21, 2024
Inventors: Hyun Ki JUNG, Myung Seob KIM, Kyung Rok LEE, Jin Young MOON, Ho Jae LEE, Byung Chul PARK
-
Publication number: 20240097251
Abstract: An embodiment of the present invention relates to a cylindrical secondary battery in which a positive electrode terminal is adhered and fixed to a cylindrical can by an insulating sheet, and thus sealing between the cylindrical can and the positive electrode terminal can be facilitated due to an increased contact area.
Type: Application
Filed: March 29, 2022
Publication date: March 21, 2024
Inventors: Hyun Ki JUNG, Byung Chul PARK, Gun Gue PARK, Gwan Hyeon YU, Jin Young MOON, Kyung Rok LEE, Myung Seob KIM, Sung Gwi KO, Woo Hyuk CHOI
-
Patent number: 11935296
Abstract: Provided is an apparatus for online action detection, the apparatus including a feature extraction unit configured to extract a chunk-level feature of a video chunk sequence of a streaming video, a filtering unit configured to perform filtering on the chunk-level feature, and an action classification unit configured to classify an action class using the filtered chunk-level feature.
Type: Grant
Filed: August 25, 2021
Date of Patent: March 19, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jin Young Moon, Hyung Il Kim, Jong Youl Park, Kang Min Bae, Ki Min Yun
-
Publication number: 20230368499
Abstract: The disclosure relates to a vision transformer-based method of extracting image features, in which embedding is performed on an input image in units of patches and visual features are extracted through global attention. An apparatus for extracting an image feature based on a vision transformer according to an embodiment of the disclosure includes a memory configured to store data and a processor configured to control the memory, wherein the processor is configured to perform embedding on multi-patches for an input image, extract feature maps for the embedded multi-patches, perform transformer encoding based on a neural network using the extracted feature maps, and extract a feature of the input image through a final feature map extracted through the transformer encoding, wherein the patches have different sizes.
Type: Application
Filed: May 16, 2023
Publication date: November 16, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Young Wan LEE, Jong Hee KIM, Jin Young MOON, Kang Min BAE, Yu Seok BAE, Je Seok HAM
-
Publication number: 20230325438
Abstract: Disclosed herein are a method and apparatus for searching for a video section by using a natural language. The method for searching for a video section includes: extracting keywords from a natural language sentence, when the natural language sentence is input; determining whether or not the extracted keywords are included in predefined context information; and deriving and providing a final search result. In addition, when the extracted keywords are included in the predefined context information, a search result is derived by performing a first method, and when the extracted keywords are not included in the predefined context information, a search result is derived by performing a second method.
Type: Application
Filed: August 19, 2022
Publication date: October 12, 2023
Inventors: Jong Hee KIM, Hyung Il KIM, Jin Young MOON, Je Seok HAM
-
Publication number: 20230259741
Abstract: The present disclosure relates to a method and apparatus for constructing a network adaptable to consecutive/complex domains. An apparatus for constructing a domain adaptive network according to an embodiment of the present disclosure includes a memory configured to store data; and a processor configured to control the memory, wherein the processor is configured to determine a weight to be applied to one or more neural networks based on input data, construct a final neural network by applying the weight to the one or more neural networks, and output result data of the input data using the final neural network, wherein the one or more neural networks are trained using data for each prototype domain.
Type: Application
Filed: December 28, 2022
Publication date: August 17, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Joong Won HWANG, Yong Jin KWON, Jin Young MOON, Yu Seok BAE, Sung Chan OH