Patents by Inventor Jun-Young Jeong
Jun-Young Jeong has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240287966
Abstract: Provided are a method and device for controlling a wind speed parameter of a wind-based power generation facility, which include a memory and a processor connected to the memory, wherein the processor is configured to: receive a first wind speed corresponding to an average wind speed per a preset time interval from a first real-world wind-based power generation facility installed in a real world, and receive therefrom a first power generation amount as generated per the preset time interval; generate a first dataset representing a power generation amount based on a change in a wind speed, based on the first wind speed and the first power generation amount; and calculate a first power coefficient related to the first power generation amount based on each first wind speed, based on the first dataset.
Type: Application
Filed: February 21, 2024
Publication date: August 29, 2024
Applicant: DXLABZ Co., Ltd.
Inventors: Jong Hyun KIM, Myoung Cheol KANG, Chae Young Park, Jun Young JEONG, Jin Won LEE
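The power coefficient the abstract refers to can be illustrated with the standard wind-power relation Cp = P / (0.5 · ρ · A · v³). The sketch below is not from the patent text; the air density constant and rotor area parameter are assumptions for illustration.

```python
# Illustrative sketch (assumption, not the patented method): deriving a
# power coefficient from paired wind-speed / power-output samples using
# the standard relation Cp = P / (0.5 * rho * A * v^3).
AIR_DENSITY = 1.225  # kg/m^3, sea-level standard atmosphere (assumption)

def power_coefficient(power_w, wind_speed_ms, rotor_area_m2):
    """Fraction of the available wind power converted to electrical power."""
    available = 0.5 * AIR_DENSITY * rotor_area_m2 * wind_speed_ms ** 3
    return power_w / available

def dataset_from_samples(samples, rotor_area_m2):
    """Build (wind speed, power, Cp) triples from (v, P) pairs,
    skipping calm intervals where v == 0."""
    return [(v, p, power_coefficient(p, v, rotor_area_m2))
            for v, p in samples if v > 0.0]
```

In practice the coefficient would be tabulated per wind-speed bin; this sketch only shows the per-sample computation.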
-
Publication number: 20240278437
Abstract: A substrate processing apparatus includes a body which includes an upper face and side faces, and extends in a first direction, a plurality of robot arms which are installed on the upper face of the body, extend in the first direction, are spaced apart from each other in a second direction perpendicular to the upper face of the body, and are able to grip a wafer, and an alignment jig (JIG) which is installed on the upper face and side faces of the body, and senses positions of the plurality of robot arms, wherein the alignment jig includes a horizontal frame disposed on the upper face of the body, a vertical frame disposed on the side faces of the body, and a displacement sensor installed on the horizontal frame and the vertical frame to sense coordinates of upper faces of the plurality of robot arms and side faces of the plurality of robot arms, the displacement sensor includes a first sensor and a second sensor which are spaced apart from side faces of the plurality of robot arms in a third direction perpe
Type: Application
Filed: January 16, 2024
Publication date: August 22, 2024
Inventors: Hyeon Dong SONG, Jun Young MOON, Sang Woo PARK, Un Ki JEONG, Ji Ho UH, Hyun Soo CHUN
-
Publication number: 20240276747
Abstract: An organic light emitting diode (OLED) and an organic light emitting device comprising the OLED (e.g., a display device or a lighting device) are described. The OLED can include a first blue emitting material layer including a first host, and a second blue emitting material layer including a second host, disposed between two electrodes. The OLED includes two blue emitting material layers so that an exciton recombination zone is distributed within the blue emitting material layers irrespective of current density or gradation. As the amount of non-emitting excitons accumulated outside of the emitting material layers is minimized, the driving voltage of the OLED can be lowered, and the luminous efficiency and the luminous lifespan of the OLED can be improved.
Type: Application
Filed: August 30, 2023
Publication date: August 15, 2024
Applicant: LG Display Co., Ltd.
Inventors: Jun-Su HA, Shin-Young JEONG, Yu-Jeong LEE, Hyun-Jin CHO, Eun-Jung PARK, Ju-Hyuk KWON, Jang-Dae YOUN
-
Publication number: 20240270807
Abstract: The present invention relates to: a recombinant protein in which an interferon-beta mutein, in which an amino acid residue is substituted, and an antibody binding to a specific antigen are fused together; and a pharmaceutical composition for treating cancer, the composition comprising, as active ingredients, a polynucleotide, an expression vector and a recombinant microorganism, and the recombinant protein. In the recombinant protein, the physical properties of interferon-beta are improved through site-specific mutation, and thus the recombinant protein may have improved biological activity and purification efficiency, and the productivity thereof can be increased.
Type: Application
Filed: April 28, 2021
Publication date: August 15, 2024
Inventors: Hae Min JEONG, Chan Gyu LEE, Ji Sun LEE, Jun Young CHOI, Na Young KIM, Yong Jin LEE
-
Publication number: 20240276443
Abstract: A method and an apparatus for timing measurement based positioning to minimize the number of scans is provided. The positioning method for a terminal includes estimating a temporary location of the terminal; analyzing characteristics of obstacles between the temporary location of the terminal and peripheral access points (APs) based on a pre-stored map on which the peripheral APs and the obstacles are indicated; calculating distances between the temporary location of the terminal and the peripheral APs based on the map; selecting at least three peripheral APs from among the peripheral APs according to at least one of the distances or the obstacle characteristics; generating timing measurement data with the at least three peripheral APs using identification information of the at least three peripheral APs; and estimating a current location of the terminal using the timing measurement data.
Type: Application
Filed: June 2, 2021
Publication date: August 15, 2024
Applicant: KOREA RAILROAD RESEARCH INSTITUTE
Inventors: Seung Min YU, Jun LEE, So Young YOU, Sang Pil KO, Eun Bi JEONG
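The final step of the abstract, estimating a location from distances to at least three APs, is classic trilateration. The sketch below is an assumption for illustration, not the patented algorithm: it linearizes the three circle equations by subtracting the first from the others and solves the resulting 2×2 system.

```python
# Illustrative sketch (assumption): 2-D trilateration from ranges to
# three access points. Subtracting the first circle equation
# (x-xi)^2 + (y-yi)^2 = di^2 from the other two cancels the quadratic
# terms, leaving a linear 2x2 system A [x, y]^T = b.
def trilaterate(aps, dists):
    """aps: three (x, y) AP positions; dists: measured ranges d0, d1, d2."""
    (x0, y0), (x1, y1), (x2, y2) = aps
    d0, d1, d2 = dists
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21
    if det == 0:
        raise ValueError("access points are collinear; no unique fix")
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With more than three APs one would solve the overdetermined system by least squares instead; the AP selection step in the abstract is what keeps the scan count at three.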
-
Publication number: 20240236339
Abstract: A video encoding method includes classifying a plurality of view images into a basic image and additional images, performing pruning on at least one of the plurality of view images on the basis of the classification result, generating an atlas on the basis of the pruning result, and encoding the atlas and metadata for the atlas. Here, the metadata includes a first flag indicating whether depth estimation needs to be performed on a decoder side.
Type: Application
Filed: October 20, 2023
Publication date: July 11, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jun Young JEONG, Gwang Soon LEE, Dawid Mieloch, Adrian Dziembowski, Marek Domanski
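The classify–prune–pack pipeline this abstract (and several others on this page) describes can be sketched at the pixel level. The code below is an assumption for illustration only, with the views reduced to pre-aligned 2-D grids: a pixel of an additional view is pruned when the basic view already represents it, and the surviving pixels are the raw material for atlas patches.

```python
# Illustrative sketch (assumption, not the patented encoder): per-pixel
# pruning of an additional view against a basic view, both assumed to be
# already reprojected into the same grid.
def prune_view(basic, additional, tol=4):
    """Return a mask: True where the additional view keeps a pixel,
    i.e. where it differs from the basic view by more than tol."""
    return [[abs(a - b) > tol for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(additional, basic)]

def extract_patch_pixels(view, mask):
    """Collect the unpruned pixels (row, col, value) that an atlas
    packer would group into rectangular patches."""
    return [(r, c, view[r][c])
            for r, row in enumerate(mask)
            for c, keep in enumerate(row) if keep]
```

A real encoder prunes via 3-D reprojection with depth maps and packs connected regions, not single pixels; this sketch only shows the redundancy-removal idea.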
-
Publication number: 20240137530
Abstract: A video encoding method includes classifying a plurality of view images into a basic image and additional images, performing pruning on at least one of the plurality of view images on the basis of the classification result, generating an atlas on the basis of the pruning result, and encoding the atlas and metadata for the atlas. Here, the metadata includes a first flag indicating whether depth estimation needs to be performed on a decoder side.
Type: Application
Filed: October 19, 2023
Publication date: April 25, 2024
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jun Young JEONG, Gwang Soon LEE, Dawid Mieloch, Adrian Dziembowski, Marek Domanski
-
Publication number: 20230396803
Abstract: Disclosed herein is a method for encoding/decoding an immersive image, and the method for encoding an immersive image may include extracting an invalid region from an already encoded atlas and encoding a current atlas by referring to the invalid region.
Type: Application
Filed: April 14, 2023
Publication date: December 7, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Kwan Jung OH, Gwang Soon LEE, Hong Chang SHIN, Jun Young JEONG
-
Patent number: 11838485
Abstract: A method of producing an immersive video comprises decoding an atlas, parsing a flag for the atlas, and producing a viewport image using the atlas. The flag may indicate whether the viewport image is capable of being completely produced through the atlas, and, according to a value of the flag, when the viewport image is produced, it may be determined whether an additional atlas is used in addition to the atlas.
Type: Grant
Filed: April 16, 2021
Date of Patent: December 5, 2023
Assignee: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon Lee, Jun Young Jeong, Kug Jin Yun, Hong Chang Shin, Ho Min Eum
-
Publication number: 20230386090
Abstract: An image encoding method according to the present disclosure may include classifying a plurality of view images into a basic image and an additional image; performing pruning for at least one of the plurality of view images based on a result of the classification; generating an atlas based on a result of performing the pruning; and encoding the atlas and metadata for the atlas. In this case, the metadata may include spherical harmonic function information on a point in a three-dimensional space.
Type: Application
Filed: May 25, 2023
Publication date: November 30, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Hong Chang SHIN, Gwang Soon LEE, Kwan Jung OH, Jun Young JEONG
-
Patent number: 11818395
Abstract: Disclosed herein are an immersive video decoding method and an immersive video encoding method. The immersive video decoding method includes partitioning a current picture into multiple main blocks, determining whether each main block is to be partitioned into multiple sub-blocks, and when it is determined that each main block is to be partitioned into multiple sub-blocks, determining a partitioning type of the corresponding main block. Here, the corresponding main block may be partitioned into four or two sub-blocks depending on the partitioning type. When the corresponding main block is partitioned into multiple sub-blocks, whether depth information is to be updated for each of the sub-blocks generated by partitioning the main block may be determined.
Type: Grant
Filed: April 20, 2022
Date of Patent: November 14, 2023
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jun-Young Jeong, Gwang-Soon Lee, Jin-Hwan Lee, Dawid Mieloch, Marek Domanski, Adrian Dziembowski
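The "four or two sub-blocks depending on the partitioning type" choice in this abstract can be made concrete with a small geometry helper. This is an illustrative assumption, not the claimed method; the type names ("quad", "horizontal", "vertical") are hypothetical labels.

```python
# Illustrative sketch (assumption): splitting a main block into sub-block
# rectangles according to a partitioning type, mirroring the abstract's
# four-way vs. two-way choice. A block is (x, y, width, height).
def partition(block, ptype):
    x, y, w, h = block
    if ptype == "quad":          # four equal sub-blocks
        hw, hh = w // 2, h // 2
        return [(x, y, hw, hh), (x + hw, y, hw, hh),
                (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    if ptype == "horizontal":    # two sub-blocks, top / bottom
        hh = h // 2
        return [(x, y, w, hh), (x, y + hh, w, hh)]
    if ptype == "vertical":      # two sub-blocks, left / right
        hw = w // 2
        return [(x, y, hw, h), (x + hw, y, hw, h)]
    return [block]               # no further partitioning
```

Per the abstract, a depth-update decision would then be signaled per resulting sub-block.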
-
Publication number: 20230336789
Abstract: An immersive image encoding method according to the present disclosure includes classifying a plurality of view images into a basic image and an additional image; performing pruning for at least one of the plurality of view images based on the classification result; generating a depth atlas based on a result of performing the pruning; and correcting an occupancy state of pixels in the depth atlas.
Type: Application
Filed: April 18, 2023
Publication date: October 19, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Kwan Jung OH, Gwang Soon LEE, Hong Chang SHIN, Jun Young JEONG, Jeong Il SEO, Jae Gon KIM, Sung Gyun LIM, Hyeon Jong HWANG
-
Publication number: 20230230285
Abstract: A method of encoding an immersive image according to the present disclosure comprises classifying a plurality of view images into a basic image and an additional image, generating a plurality of texture atlases based on the plurality of view images, generating a first depth atlas including depth information of view images included in a first texture atlas among the plurality of texture atlases, and generating a second depth atlas including depth information of view images included in remaining texture atlases other than the first texture atlas.
Type: Application
Filed: January 12, 2023
Publication date: July 20, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Jun Young JEONG, Gwang Soon LEE, Dawid Mieloch, Marek Domanski, Adrian Dziembowski, Błażej Szydełko, Dominika Klóska
-
Publication number: 20230232031
Abstract: A method of processing an immersive video includes classifying each of a plurality of objects included in a view image as one of a first object group and a second object group, acquiring a patch for each of the plurality of objects, and packing patches to generate at least one atlas. In this instance, patches derived from objects belonging to the first object group may be packed in a different region or a different atlas from a region or an atlas of patches derived from objects belonging to the second object group.
Type: Application
Filed: January 13, 2023
Publication date: July 20, 2023
Applicant: Electronics and Telecommunications Research Institute
Inventors: Gwang Soon LEE, Kwan Jung OH, Jun Young JEONG
-
Publication number: 20230222694
Abstract: A method of processing an immersive video according to the present disclosure includes performing pruning for an input image, generating an atlas based on patches generated by the pruning and generating a cropped atlas by removing a background region of the atlas.
Type: Application
Filed: January 12, 2023
Publication date: July 13, 2023
Applicants: Electronics and Telecommunications Research Institute, IUCF-HYU (Industry-University Cooperation Foundation Hanyang University)
Inventors: Kwan Jung OH, Gwang Soon LEE, Jeong Il SEO, Hong Chang SHIN, Jun Young JEONG, Euee Seon JANG, Tian Yu Dong, Xin Li, Jai Young OH
-
Patent number: 11651472
Abstract: A method for processing an immersive video includes: performing pruning for view images; generating an atlas by packing a patch that is extracted as a result of the pruning; deriving an offset for the patch that is included in the atlas; and correcting pixel values in the patch by using the derived offset.
Type: Grant
Filed: October 18, 2021
Date of Patent: May 16, 2023
Assignees: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, POZNAN UNIVERSITY OF TECHNOLOGY
Inventors: Gwang Soon Lee, Jun Young Jeong, Dawid Mieloch, Adrian Dziembowski, Marek Domanski
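One plausible reading of the per-patch offset in this abstract, shown purely as an assumption and not as the claimed method, is a value shift applied at packing time and undone when the patch is read back:

```python
# Illustrative sketch (assumption): a per-patch offset that shifts pixel
# values into a target range when the patch is packed into the atlas,
# with the inverse correction applied when pixel values are restored.
def derive_offset(patch, target_min=0):
    """Offset that moves the patch's minimum pixel value to target_min."""
    return target_min - min(min(row) for row in patch)

def apply_offset(patch, offset):
    """Add the offset to every pixel value in the patch."""
    return [[v + offset for v in row] for row in patch]
```

The correction step in the abstract would correspond to applying the negated offset after decoding.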
-
Publication number: 20230124419
Abstract: Disclosed herein are an immersive video encoding method and an immersive video decoding method. The immersive video encoding method includes setting basic view images among input images corresponding to multiple views, generating an atlas image using the basic view images, performing encoding on the atlas image, and generating metadata about an encoded atlas image, wherein the atlas image includes a texture atlas or a depth atlas.
Type: Application
Filed: October 14, 2022
Publication date: April 20, 2023
Applicant: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Jun-Young JEONG, Gwang-Soon LEE, Dawid Mieloch, Marek Domanski, Blazej Szydelko, Adrian Dziembowski
-
Patent number: 11616938
Abstract: Disclosed herein is an immersive video processing method. The immersive video processing method includes: determining a priority order of pruning for input videos; extracting patches from the input videos based on the priority order of pruning; generating at least one atlas based on the extracted patches; and encoding metadata. Herein, the metadata may include information on a priority order of pruning among input videos.
Type: Grant
Filed: September 25, 2020
Date of Patent: March 28, 2023
Assignee: Electronics and Telecommunications Research Institute
Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun
-
Patent number: 11575935
Abstract: A video encoding method of encoding a multi-view image including one or more basic view images and a plurality of reference view images includes determining a pruning order of the plurality of reference view images, acquiring a plurality of residual reference view images, by pruning the plurality of reference view images based on the one or more basic view images according to the pruning order, encoding the one or more basic view images and the plurality of residual reference view images, and outputting a bitstream including encoding information of the one or more basic view images and the plurality of residual reference view images.
Type: Grant
Filed: June 15, 2020
Date of Patent: February 7, 2023
Assignees: Electronics and Telecommunications Research Institute, IUCF-HYU (INDUSTRY-UNIVERSITY COOPERATION FOUNDATION HANYANG UNIVERSITY)
Inventors: Hong Chang Shin, Gwang Soon Lee, Ho Min Eum, Jun Young Jeong, Kug Jin Yun, Jun Young Yun, Jong Il Park
-
Publication number: 20230014096
Abstract: Disclosed herein are an apparatus for estimating a camera pose using multi-view images of a 2D array structure and a method using the same. The method performed by the apparatus includes acquiring multi-view images from a 2D array camera system, forming a 2D image link structure corresponding to the multi-view images in consideration of the geometric structure of the camera system, estimating an initial camera pose based on an adjacent image extracted from the 2D image link structure and a pair of corresponding feature points, and estimating a final camera pose by reconstructing a 3D structure based on the initial camera pose and performing correction so as to minimize a reprojection error of the reconstructed 3D structure.
Type: Application
Filed: March 23, 2022
Publication date: January 19, 2023
Applicants: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE, SOGANG UNIVERSITY RESEARCH FOUNDATION
Inventors: Joon-Soo KIM, Kug-Jin YUN, Jun-Young JEONG, Suk-Ju KANG, Jung-Hee KIM, Woo-June PARK
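The reprojection error minimized in the final step of this abstract can be written out for a simplified case. The sketch below is an assumption for illustration only: a pinhole camera with the pose reduced to a translation, whereas the patented method also estimates rotation within a full bundle-adjustment-style correction.

```python
# Illustrative sketch (assumption): the reprojection-error objective for
# a pinhole camera whose pose is reduced to a translation. Bundle
# adjustment would minimize this over poses and 3-D points jointly.
def project(point3d, translation, focal=1.0):
    """Project a 3-D point into the image after applying the camera
    translation (camera looks down +z; no rotation modeled)."""
    x, y, z = (p - t for p, t in zip(point3d, translation))
    return (focal * x / z, focal * y / z)

def reprojection_error(points3d, observed2d, translation, focal=1.0):
    """Mean squared distance between projected and observed 2-D points."""
    total = 0.0
    for p3, (u, v) in zip(points3d, observed2d):
        pu, pv = project(p3, translation, focal)
        total += (pu - u) ** 2 + (pv - v) ** 2
    return total / len(points3d)
```

A nonlinear least-squares solver would adjust the pose parameters to drive this error toward zero.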