Patents by Inventor Eun Seok Ryu

Eun Seok Ryu has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12368823
    Abstract: Systems and methods are provided for traversing virtual spaces. The system receives image data of respective views of an environment simultaneously captured by a plurality of cameras. The system estimates a location of a subject based on the image data. The system selects a group of one or more cameras of the plurality of cameras based on the estimated location of the subject. The system generates a bitstream based on additional image data of respective views of the environment captured by the group of one or more cameras.
    Type: Grant
    Filed: January 3, 2023
    Date of Patent: July 22, 2025
    Assignee: Adeia Guides Inc.
    Inventor: Eun-Seok Ryu
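
A minimal Python sketch of the camera-group selection idea in the entry above: rank the rig's cameras by distance to the estimated subject location and keep the nearest ones as the group whose views feed the output bitstream. The Camera class, the select_camera_group name, and the group size are illustrative assumptions, not terms from the patent.

```python
# Illustrative sketch (not the patent's actual method): choose the cameras nearest
# to an estimated subject location so only their views are encoded into the bitstream.
from dataclasses import dataclass
from math import dist

@dataclass
class Camera:
    cam_id: str
    position: tuple  # (x, y, z) in a shared world coordinate frame

def select_camera_group(cameras, subject_xyz, group_size=3):
    """Return the `group_size` cameras closest to the estimated subject location."""
    ranked = sorted(cameras, key=lambda c: dist(c.position, subject_xyz))
    return ranked[:group_size]

if __name__ == "__main__":
    rig = [Camera("cam0", (0, 0, 2)), Camera("cam1", (4, 0, 2)),
           Camera("cam2", (8, 0, 2)), Camera("cam3", (12, 0, 2))]
    subject = (3.5, 1.0, 0.0)          # e.g. output of a person-detection step
    group = select_camera_group(rig, subject, group_size=2)
    print([c.cam_id for c in group])   # views that would feed the generated bitstream
```
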
  • Patent number: 12347147
    Abstract: An image encoding/decoding method and apparatus are provided. An image decoding method according to the present disclosure includes: receiving an image, in which a first atlas for a basic view of a current image and a second atlas for an additional view of the current image are merged; extracting an image divided in a predetermined image unit within the first atlas and the second atlas; dividing the first atlas and the second atlas in the predetermined image unit; and reconstructing the image divided in the predetermined image unit, wherein the dividing of the first atlas and the second atlas in the predetermined image unit may non-uniformly divide the first atlas and the second atlas.
    Type: Grant
    Filed: April 25, 2022
    Date of Patent: July 1, 2025
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Eun Seok Ryu, Jong Beom Jeong, Soon Bin Lee
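
A hedged sketch of only the non-uniform split step from the entry above, assuming the merged frame is a numpy array with the basic-view atlas stacked on top of the additional-view atlas; the explicit per-atlas heights stand in for whatever the bitstream actually signals.

```python
# Minimal sketch: cut a merged frame back into atlases of unequal (non-uniform) height.
import numpy as np

def split_merged_atlases(merged: np.ndarray, atlas_heights: list[int]) -> list[np.ndarray]:
    """Split a merged frame (H x W x C) into vertically stacked atlases of unequal height."""
    if sum(atlas_heights) != merged.shape[0]:
        raise ValueError("atlas heights must cover the merged frame exactly")
    offsets = np.cumsum([0] + atlas_heights)
    return [merged[offsets[i]:offsets[i + 1]] for i in range(len(atlas_heights))]

merged_frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
basic_atlas, additional_atlas = split_merged_atlases(merged_frame, [720, 360])
print(basic_atlas.shape, additional_atlas.shape)  # (720, 1920, 3) (360, 1920, 3)
```
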
  • Publication number: 20250133206
    Abstract: An image encoding/decoding method and apparatus and a method for transmitting a bitstream generated by the image encoding method are provided. The image encoding method according to the present disclosure may include: encoding an image in sub-regions with different sizes and generating one or more bitstreams for the sub-regions; obtaining a user viewport for the image; allocating sub-regions corresponding to the user viewport among the sub-regions to the image, wherein the image includes an inner region located inside the user viewport, a boundary region adjacent to a boundary of the user viewport, and an outer region located outside the user viewport; and generating at least one bitstream corresponding to the allocated sub-regions from bitstreams for the sub-regions, and a sub-region with a relatively large size may be allocated within the inner region, and a sub-region with a relatively small size may be allocated within the boundary region.
    Type: Application
    Filed: July 26, 2024
    Publication date: April 24, 2025
    Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Eun Seok RYU, Jong Beom JEONG, Jun Hyeong PARK
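
The allocation policy in the entry above can be illustrated with a small sketch: sub-regions fully inside the viewport use the coarse (large) partitioning, sub-regions straddling the viewport boundary use the fine (small) one, and sub-regions outside are skipped. The rectangle representation and function names are assumptions for illustration only.

```python
# Sketch of viewport-driven sub-region allocation: large tiles inside the viewport,
# small tiles on its boundary, nothing outside.
def classify_tile(tile, viewport):
    """tile/viewport: (x0, y0, x1, y1). Returns 'inner', 'boundary', or 'outer'."""
    tx0, ty0, tx1, ty1 = tile
    vx0, vy0, vx1, vy1 = viewport
    if tx0 >= vx0 and ty0 >= vy0 and tx1 <= vx1 and ty1 <= vy1:
        return "inner"
    if tx1 <= vx0 or tx0 >= vx1 or ty1 <= vy0 or ty0 >= vy1:
        return "outer"
    return "boundary"

def allocate_subregions(tiles, viewport):
    """Map each tile to the sub-region size class used when extracting its bitstream."""
    size_for = {"inner": "large", "boundary": "small", "outer": None}
    return {tile: size_for[classify_tile(tile, viewport)] for tile in tiles}

viewport = (512, 256, 1536, 768)
tiles = [(512, 256, 1024, 768), (1024, 512, 2048, 1024), (0, 0, 256, 256)]
print(allocate_subregions(tiles, viewport))
```
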
  • Publication number: 20250071396
    Abstract: Systems and methods are provided herein for generating a summary for a piece of content using a thumbnail container. This may be accomplished by a system receiving a thumbnail container related to a piece of content. The system may also receive user information, a device characteristic, and/or content information related to the piece of content and use the received data to select a machine learning model. The selected machine learning model can identify one or more thumbnails of the thumbnail container as a thumbnail of interest to a user. The system can then generate a summary of the piece of content based on the thumbnail identified by the machine learning model and display the generated summary for the user.
    Type: Application
    Filed: November 8, 2024
    Publication date: February 27, 2025
    Inventors: Ghulam Mujtaba, Eun Seok Ryu, Reda Harb
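
A minimal stand-in for the pipeline described in the entry above: choose a "model" from the device characteristics, score the thumbnails in the container, and keep the top-scoring ones as the summary. The scoring lambdas are placeholders for the actual machine learning models, which the abstract does not specify.

```python
# Sketch: device-aware model selection, thumbnail-of-interest scoring, summary assembly.
def select_model(device: dict):
    # e.g. a lightweight model on battery-powered devices, a larger one otherwise
    return (lambda thumb: thumb["motion"]) if device.get("low_power") else \
           (lambda thumb: 0.6 * thumb["motion"] + 0.4 * thumb["faces"])

def summarize(thumbnail_container: list[dict], device: dict, summary_len: int = 3):
    score = select_model(device)
    ranked = sorted(thumbnail_container, key=score, reverse=True)
    # Keep the selected thumbnails in their original playback order.
    return sorted(ranked[:summary_len], key=lambda t: t["time"])

container = [{"time": t, "motion": m, "faces": f}
             for t, m, f in [(0, 0.1, 0), (10, 0.9, 2), (20, 0.4, 1), (30, 0.8, 0)]]
print(summarize(container, {"low_power": True}, summary_len=2))
```
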
  • Patent number: 12177541
    Abstract: Systems and methods are provided herein for generating a summary for a piece of content using a thumbnail container. This may be accomplished by a system receiving a thumbnail container related to a piece of content. The system may also receive user information, a device characteristic, and/or content information related to the piece of content and use the received data to select a machine learning model. The selected machine learning model can identify one or more thumbnails of the thumbnail container as a thumbnail of interest to a user. The system can then generate a summary of the piece of content based on the thumbnail identified by the machine learning model and display the generated summary for the user.
    Type: Grant
    Filed: August 30, 2022
    Date of Patent: December 24, 2024
    Assignee: Adeia Guides Inc.
    Inventors: Ghulam Mujtaba, Eun Seok Ryu, Reda Harb
  • Publication number: 20240320946
    Abstract: Systems and methods are provided for video streaming. The system determines (i) a first partitioning of a portion of video content into a first set of zones having a first zone size and (ii) a second partitioning of the portion of video content into a second set of zones having a second zone size smaller than the first zone size. The system receives a message including a request for the portion of the video content. The system determines a viewport region of interest (ROI) and a plurality of viewport regions based on proximity to the ROI.
    Type: Application
    Filed: March 23, 2023
    Publication date: September 26, 2024
    Inventor: Eun-Seok Ryu
  • Patent number: 12058351
    Abstract: Methods and systems are disclosed for a mobile device to decode video based on available power and/or energy. For example, the mobile device may receive a media description file (MDF) for a video stream from a video server. The MDF may include complexity information associated with a plurality of video segments. The complexity information may be related to the amount of processing power to be utilized for decoding the segment at the mobile device. The mobile device may determine at least one power metric for the mobile device. The mobile device may determine a first complexity level to be requested for a first video segment based on the complexity information from the MDF and the power metric. The mobile device may dynamically alter the decoding process to save energy based on the detected power/energy level.
    Type: Grant
    Filed: November 28, 2022
    Date of Patent: August 6, 2024
    Assignee: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye, Yong He, George W. McClellan, Eun Seok Ryu
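
A hedged sketch of the client-side decision only, for the entry above: given per-segment complexity options parsed from the media description file and a current power metric, request the highest complexity the device can currently afford. The MDF layout and the threshold policy are illustrative assumptions.

```python
# Sketch: power-aware selection of a decode-complexity level for the next segment.
def pick_complexity(segment_options, battery_fraction):
    """segment_options: list of dicts with 'complexity' (relative decode cost, low = cheap)."""
    if battery_fraction > 0.5:
        budget = max(o["complexity"] for o in segment_options)   # plenty of power
    elif battery_fraction > 0.2:
        budget = sorted(o["complexity"] for o in segment_options)[len(segment_options) // 2]
    else:
        budget = min(o["complexity"] for o in segment_options)   # save energy
    return max((o for o in segment_options if o["complexity"] <= budget),
               key=lambda o: o["complexity"])

mdf_segment = [{"complexity": 1, "url": "seg1_low.mp4"},
               {"complexity": 2, "url": "seg1_mid.mp4"},
               {"complexity": 3, "url": "seg1_high.mp4"}]
print(pick_complexity(mdf_segment, battery_fraction=0.35)["url"])  # -> seg1_mid.mp4
```
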
  • Publication number: 20240223731
    Abstract: Systems and methods are provided for traversing virtual spaces. The system receives image data of respective views of an environment simultaneously captured by a plurality of cameras. The system estimates a location of a subject based on the image data. The system selects a group of one or more cameras of the plurality of cameras based on the estimated location of the subject. The system generates a bitstream based on additional image data of respective views of the environment captured by the group of one or more cameras.
    Type: Application
    Filed: January 3, 2023
    Publication date: July 4, 2024
    Inventor: Eun-Seok Ryu
  • Publication number: 20240221187
    Abstract: Systems and methods are provided for traversing virtual spaces. The system receives first and second image data of respective views of an environment simultaneously captured by a first and a second plurality of cameras associated with a first and a second space of the environment, respectively. The system detects, based on at least one of the first or the second image data, that a subject is located within the first space of the environment. In response to detecting that the subject is located within the first space of the environment: the system generates, for transmission at a first bitrate, a first bitstream based on at least a portion of the first image data; and the system generates, for transmission at a second bitrate lower than the first bitrate, a second bitstream based on at least a portion of the second image data.
    Type: Application
    Filed: January 3, 2023
    Publication date: July 4, 2024
    Inventor: Eun-Seok Ryu
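
A small sketch of the bitrate policy described in the entry above: the space containing the detected subject streams at the full rate, and every other space streams at a reduced rate. The space names and rate values are illustrative.

```python
# Sketch: allocate the high bitrate to the space where the subject was detected.
def assign_bitrates(space_ids, subject_space, full_kbps=20000, reduced_kbps=2000):
    return {s: (full_kbps if s == subject_space else reduced_kbps) for s in space_ids}

print(assign_bitrates(["lobby", "hall", "stage"], subject_space="hall"))
```
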
  • Publication number: 20240193816
    Abstract: Disclosed herein are an immersive image encoding/decoding method and apparatus, and a method for transmitting a bitstream generated by the immersive image encoding method. An immersive image encoding method according to the present disclosure, which is performed in an immersive image encoding apparatus, may include: grouping images for a virtual reality space into groups; calculating, based on view information, a view weight of each of the groups; and determining, based on the view weight, a bitstream level of each of the groups.
    Type: Application
    Filed: February 28, 2023
    Publication date: June 13, 2024
    Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Eun Seok RYU, Soon Bin LEE, Jong Beom JEONG
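
An illustrative sketch of the weighting idea in the entry above: weight each group of source views by how closely its centre direction matches the viewer's current view direction, then map the weight to a bitstream level (higher weight, higher-quality bitstream). All names and the cosine weighting are assumptions, not the patent's formulation.

```python
# Sketch: per-group view weights from viewing direction, mapped to bitstream levels.
import math

def view_weight(group_yaw_deg, viewer_yaw_deg):
    diff = abs((group_yaw_deg - viewer_yaw_deg + 180) % 360 - 180)  # smallest angular distance
    return max(0.0, math.cos(math.radians(diff)))

def bitstream_level(weight, levels=("low", "medium", "high")):
    return levels[min(int(weight * len(levels)), len(levels) - 1)]

groups = {"front": 0, "right": 90, "back": 180, "left": 270}   # group centre yaw (degrees)
viewer_yaw = 20
for name, yaw in groups.items():
    w = view_weight(yaw, viewer_yaw)
    print(name, round(w, 2), bitstream_level(w))
```
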
  • Patent number: 11979544
    Abstract: A video transmission method according to embodiments comprises: a pre-processing step for processing video data; a step for encoding the video data; and/or a step for transmitting a bitstream including the video data. A video reception method according to embodiments comprises the steps of: receiving video data; decoding the video data; and/or rendering the video data.
    Type: Grant
    Filed: March 12, 2020
    Date of Patent: May 7, 2024
    Assignee: LG ELECTRONICS INC.
    Inventors: Hyunmook Oh, Sejin Oh, Eun Seok Ryu, Soonbin Lee, Dongmin Jang, Jong Beom Jeong
  • Publication number: 20240135590
    Abstract: A method of learning a neural network-based image compression model according to the disclosed embodiment may include receiving a learning target image as an input; encoding the input image through the neural network-based image compression model configured to include a weight parameter, and decoding the encoded image through the neural network-based image compression model; calculating an entropy estimation value for a network model weight of the neural network-based image compression model; calculating a reconstruction performance value by comparing qualities of the learning target image and the decoded image; and learning the neural network-based image compression model by updating the weight parameter of the neural network-based image compression model based on the entropy estimation value for the network model weight and the reconstruction performance value. Accordingly, it is possible to minimize the size of the weight of the neural network-based image compression model.
    Type: Application
    Filed: June 21, 2023
    Publication date: April 25, 2024
    Applicant: Research & Business Foundation SUNGKYUNKWAN UNIVERSITY
    Inventors: Eun-Seok RYU, Soonbin LEE, Jong-Beom JEONG
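
A hedged sketch of only the training objective described in the entry above: a reconstruction term plus an entropy estimate of the model's own weights, so that smaller, more compressible weights are rewarded. The histogram-based entropy proxy, the lambda value, and all names are illustrative assumptions rather than the patent's exact formulation.

```python
# Sketch: composite loss = reconstruction error + lambda * weight-entropy estimate.
import numpy as np

def weight_entropy_estimate(weights: np.ndarray, n_bins: int = 256) -> float:
    """Rough bits-per-weight estimate from a histogram of the weight values."""
    hist, _ = np.histogram(weights, bins=n_bins)
    p = hist[hist > 0] / hist.sum()
    return float(-(p * np.log2(p)).sum())

def training_loss(original, decoded, weights, lam=0.01):
    reconstruction = float(np.mean((original - decoded) ** 2))       # distortion term
    return reconstruction + lam * weight_entropy_estimate(weights)   # + weight-rate term

rng = np.random.default_rng(0)
img = rng.random((64, 64))
print(training_loss(img, img + 0.01 * rng.standard_normal(img.shape),
                    weights=rng.standard_normal(10_000) * 0.05))
```
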
  • Publication number: 20240073493
    Abstract: Systems and methods are provided herein for generating a summary for a piece of content using a thumbnail container. This may be accomplished by a system receiving a thumbnail container related to a piece of content. The system may also receive user information, a device characteristic, and/or content information related to the piece of content and use the received data to select a machine learning model. The selected machine learning model can identify one or more thumbnails of the thumbnail container as a thumbnail of interest to a user. The system can then generate a summary of the piece of content based on the thumbnail identified by the machine learning model and display the generated summary for the user.
    Type: Application
    Filed: August 30, 2022
    Publication date: February 29, 2024
    Inventors: Ghulam Mujtaba, Eun Seok Ryu, Reda Harb
  • Publication number: 20240048764
    Abstract: Disclosed herein are a multi-view video encoding/decoding method and apparatus, and a method for transmitting a bitstream generated by the multi-view video encoding method. The multi-view video encoding method according to the present disclosure may include generating first bitstreams for each of a plurality of quantization parameters (QPs) different from each other by encoding an atlas for a multi-view video in the quantization parameters; extracting second bitstreams for sub-regions in the atlas from the first bitstreams; and generating a third bitstream by merging, among the second bitstreams, second bitstreams with a predetermined bit rate.
    Type: Application
    Filed: July 19, 2023
    Publication date: February 8, 2024
    Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Eun Seok RYU, Jong Beom JEONG, Soon Bin LEE
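
A sketch of the merging step from the entry above: each atlas sub-region has been encoded at several QPs; start every sub-region at the highest quality (lowest QP) and raise QPs greedily until the merged stream fits the target bit budget. The data layout and the greedy policy are assumptions for illustration.

```python
# Sketch: choose one QP variant per sub-region so the merged bitstream meets a bit budget.
def merge_to_bitrate(subregion_versions, target_bits):
    """subregion_versions: {region: {qp: size_bits}}; returns the chosen QP per region."""
    choice = {r: min(v) for r, v in subregion_versions.items()}       # lowest QP = best quality
    def total(): return sum(subregion_versions[r][choice[r]] for r in choice)
    while total() > target_bits:
        # raise the QP of whichever region's switch saves the most bits
        candidates = [(r, qp) for r, v in subregion_versions.items()
                      for qp in v if qp > choice[r]]
        if not candidates:
            break
        r, qp = max(candidates, key=lambda c: subregion_versions[c[0]][choice[c[0]]]
                                              - subregion_versions[c[0]][c[1]])
        choice[r] = qp
    return choice

versions = {"patch_a": {22: 900, 32: 500, 42: 300},
            "patch_b": {22: 700, 32: 400, 42: 250}}
print(merge_to_bitrate(versions, target_bits=1000))   # -> {'patch_a': 42, 'patch_b': 22}
```
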
  • Publication number: 20230334706
    Abstract: Disclosed herein are an immersive image encoding/decoding method and apparatus, and a method for transmitting a bitstream generated by the immersive image encoding method. An immersive image encoding method according to the present disclosure may include: generating geometry atlases for an immersive image; aligning the geometry atlases to make an alignment height of the geometry atlases correspond to a height of a texture atlas; and generating a geometry bitstream by encoding the aligned geometry atlases.
    Type: Application
    Filed: March 2, 2023
    Publication date: October 19, 2023
    Applicant: RESEARCH & BUSINESS FOUNDATION SUNGKYUNKWAN UNIVERSITY
    Inventors: Eun Seok RYU, Jong Beom JEONG, Soon Bin LEE
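
A minimal sketch of the alignment step in the entry above, assuming atlases are numpy arrays: pad (or crop) the geometry atlas vertically so its height matches the texture atlas before encoding. The padding value and array layout are assumptions.

```python
# Sketch: align a geometry atlas's height to the texture atlas height.
import numpy as np

def align_geometry_to_texture(geometry: np.ndarray, texture_height: int, pad_value=0):
    h = geometry.shape[0]
    if h < texture_height:
        pad = np.full((texture_height - h, *geometry.shape[1:]), pad_value, geometry.dtype)
        return np.concatenate([geometry, pad], axis=0)
    return geometry[:texture_height]

geometry_atlas = np.zeros((540, 1920), dtype=np.uint16)   # depth map at half height
aligned = align_geometry_to_texture(geometry_atlas, texture_height=1080)
print(aligned.shape)  # (1080, 1920)
```
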
  • Patent number: 11627340
    Abstract: Systems, methods, and instrumentalities are provided to implement a video coding system (VCS). The VCS may be configured to receive a video signal, which may include one or more layers (e.g., a base layer (BL) and/or one or more enhancement layers (ELs)). The VCS may be configured to process a BL picture into an inter-layer reference (ILR) picture, e.g., using a picture-level inter-layer prediction process. The VCS may be configured to select one or both of the processed ILR picture or an enhancement layer (EL) reference picture. The selected reference picture(s) may comprise one of the EL reference picture or the ILR picture. The VCS may be configured to predict a current EL picture using one or more of the selected ILR picture or the EL reference picture. The VCS may be configured to store the processed ILR picture in an EL decoded picture buffer (DPB).
    Type: Grant
    Filed: April 30, 2021
    Date of Patent: April 11, 2023
    Assignee: VID SCALE, Inc.
    Inventors: Yan Ye, George W. McClellan, Yong He, Xiaoyu Xiu, Yuwen He, Jie Dong, Can Bal, Eun Seok Ryu
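
An illustrative sketch of the inter-layer reference idea in the entry above, with nearest-neighbour resampling standing in for the actual inter-layer processing: upsample the decoded base-layer picture to the enhancement-layer resolution and place it in the EL reference picture list alongside the normal EL temporal references.

```python
# Sketch: build an EL reference list containing a processed inter-layer reference (ILR) picture.
import numpy as np

def upsample_nearest(picture: np.ndarray, scale: int) -> np.ndarray:
    return picture.repeat(scale, axis=0).repeat(scale, axis=1)

def build_el_reference_list(bl_picture, el_temporal_refs, scale=2):
    ilr_picture = upsample_nearest(bl_picture, scale)   # processed inter-layer reference
    return [ilr_picture] + list(el_temporal_refs)       # kept in the EL DPB for prediction

bl = np.zeros((270, 480), dtype=np.uint8)
el_refs = [np.zeros((540, 960), dtype=np.uint8)]
refs = build_el_reference_list(bl, el_refs)
print([r.shape for r in refs])  # [(540, 960), (540, 960)]
```
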
  • Publication number: 20230086192
    Abstract: Methods and systems are disclosed for a mobile device to decode video based on available power and/or energy. For example, the mobile device may receive a media description file (MDF) for a video stream from a video server. The MDF may include complexity information associated with a plurality of video segments. The complexity information may be related to the amount of processing power to be utilized for decoding the segment at the mobile device. The mobile device may determine at least one power metric for the mobile device. The mobile device may determine a first complexity level to be requested for a first video segment based on the complexity information from the MDF and the power metric. The mobile device may dynamically alter the decoding process to save energy based on the detected power/energy level.
    Type: Application
    Filed: November 28, 2022
    Publication date: March 23, 2023
    Applicant: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye, Yong He, George W. McClellan, Eun Seok Ryu
  • Patent number: 11575871
    Abstract: Provided are a method and an apparatus for streaming a multi-view 360 degree video, and a method for streaming a 360 degree video according to an embodiment of the present disclosure includes: encoding a multi-view video to a bitstream of a base layer and a bitstream of a tile layer constituted by at least one tile; selecting a tile included in a user view video in the encoded bitstream of the tile layer by using user view information received from a 360 degree video rendering apparatus, and video information of the multi-view video; extracting tile data included in the selected user view video from the encoded bitstream of the tile layer, and generating a tile bitstream corresponding to the extracted tile data; and transmitting the encoded bitstream of the base layer and the generated tile bitstream to the 360 degree video rendering apparatus.
    Type: Grant
    Filed: October 13, 2021
    Date of Patent: February 7, 2023
    Assignee: Research & Business Foundation Sungkyunkwan University
    Inventors: Jong Beom Jeong, Soon Bin Lee, Eun Seok Ryu
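
A hedged sketch of the selection step in the entry above: always keep the base-layer bitstream, and from the tile layer keep only the tiles whose yaw range falls inside the viewer's field of view; those are the tiles whose data would be extracted and merged into the transmitted tile bitstream. The tile description format, the FOV sampling, and all names are simplified assumptions.

```python
# Sketch: pick the tile-layer tiles that cover the user's current viewport.
def tiles_in_view(tile_layout, viewer_yaw, fov=110, samples=36):
    """Return tile ids whose yaw range contains any sampled direction inside the FOV."""
    directions = [(viewer_yaw - fov / 2 + fov * i / (samples - 1)) % 360 for i in range(samples)]
    def contains(rng, a):
        lo, hi = rng
        return lo <= a < hi if lo <= hi else (a >= lo or a < hi)
    return sorted({t for t, rng in tile_layout.items()
                   for a in directions if contains(rng, a)})

def select_streams(tile_layout, viewer_yaw):
    return ["base_layer"] + tiles_in_view(tile_layout, viewer_yaw)

layout = {"tile0": (0, 90), "tile1": (90, 180), "tile2": (180, 270), "tile3": (270, 360)}
print(select_streams(layout, viewer_yaw=45))   # base layer plus the tiles facing the user
```
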
  • Patent number: 11516485
    Abstract: Methods and systems are disclosed for a mobile device to decode video based on available power and/or energy. For example, the mobile device may receive a media description file (MDF) for a video stream from a video server. The MDF may include complexity information associated with a plurality of video segments. The complexity information may be related to the amount of processing power to be utilized for decoding the segment at the mobile device. The mobile device may determine at least one power metric for the mobile device. The mobile device may determine a first complexity level to be requested for a first video segment based on the complexity information from the MDF and the power metric. The mobile device may dynamically alter the decoding process to save energy based on the detected power/energy level.
    Type: Grant
    Filed: May 18, 2021
    Date of Patent: November 29, 2022
    Assignee: VID SCALE, Inc.
    Inventors: Yuwen He, Yan Ye, Yong He, George W. McClellan, Eun Seok Ryu
  • Publication number: 20220343545
    Abstract: An image encoding/decoding method and apparatus are provided. An image decoding method according to the present disclosure includes: receiving an image, in which a first atlas for a basic view of a current image and a second atlas for an additional view of the current image are merged; extracting an image divided in a predetermined image unit within the first atlas and the second atlas; dividing the first atlas and the second atlas in the predetermined image unit; and reconstructing the image divided in the predetermined image unit, wherein the dividing of the first atlas and the second atlas in the predetermined image unit may non-uniformly divide the first atlas and the second atlas.
    Type: Application
    Filed: April 25, 2022
    Publication date: October 27, 2022
    Applicant: Research & Business Foundation Sungkyunkwan University
    Inventors: Eun Seok RYU, Jong Beom JEONG, Soon Bin LEE