Patents by Inventor Bum Chul JANG

Bum Chul JANG has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20250139899
    Abstract: Provided are an apparatus and a method for managing a spatial model. The spatial model management apparatus includes a data collector configured to collect dynamic method data including shape and color information using a dynamic space scanner, and to collect static method data including color information using a static space scanner, for an area for which a spatial model is to be produced; a spatial information deriver configured to derive spatial information using the collected dynamic method data or the collected static method data; and a model generator configured to generate the spatial model using at least one of, or a combination of, the spatial information derived from the dynamic method data and the spatial information derived from the static method data. A minimal sketch of this collect-derive-generate flow follows this entry.
    Type: Application
    Filed: September 23, 2024
    Publication date: May 1, 2025
    Applicants: Korea University Research and Business Foundation, TeeLabs Co., Ltd.
    Inventors: Nak Ju DOH, Bum Chul JANG, Bo Kyeon JEONG, Ga Hyeon LIM, Hyung A CHOI
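    Sketch: A minimal, hypothetical Python illustration of the collect-derive-generate flow described in the abstract above. The names (DynamicScan, StaticScan, SpatialModelManager) and the way the two data sources are combined are assumptions for illustration, not taken from the patent.

      from dataclasses import dataclass
      from typing import Optional

      @dataclass
      class DynamicScan:      # hypothetical "dynamic method data": shape + color
          points: list
          colors: list

      @dataclass
      class StaticScan:       # hypothetical "static method data": color
          colors: list

      @dataclass
      class SpatialInfo:
          geometry: list
          appearance: list

      class SpatialModelManager:
          """Sketch of the deriver / generator roles named in the abstract."""

          def derive(self, dynamic: Optional[DynamicScan], static: Optional[StaticScan]) -> SpatialInfo:
              # Spatial information deriver: geometry from dynamic data when present,
              # appearance from static data, falling back to dynamic colors.
              geometry = dynamic.points if dynamic else []
              appearance = static.colors if static else (dynamic.colors if dynamic else [])
              return SpatialInfo(geometry, appearance)

          def generate(self, info: SpatialInfo) -> dict:
              # Model generator: combine the derived spatial information into a spatial model.
              return {"geometry": info.geometry, "appearance": info.appearance}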
  • Patent number: 11915449
    Abstract: The present invention relates to a method and an apparatus for estimating a user pose using a three-dimensional virtual space model. The method of estimating a user pose, which includes position and orientation information of a user in a three-dimensional space, includes: receiving user information including an image acquired in the three-dimensional space; confirming a three-dimensional virtual space model constructed based on spatial information including depth information and image information for the three-dimensional space; generating corresponding information that corresponds to the user information in the three-dimensional virtual space model; calculating a similarity between the corresponding information and the user information; and estimating the user pose based on the similarity. A minimal sketch of this similarity-based estimation follows this entry.
    Type: Grant
    Filed: April 7, 2020
    Date of Patent: February 27, 2024
    Assignee: Korea University Research and Business Foundation
    Inventors: Nak Ju Doh, Ga Hyeon Lim, Jang Hun Hyeon, Dong Woo Kim, Bum Chul Jang, Hyung A Choi
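    Sketch: A minimal, hypothetical Python illustration of the estimation loop described in the abstract above: render "corresponding information" from the virtual space model for each candidate pose, score its similarity to the user image, and keep the best pose. The virtual_model.render() call and the normalized-correlation similarity are assumptions, not the patent's method.

      import numpy as np

      def similarity(a, b):
          # Stand-in similarity measure: normalized cross-correlation of two images.
          a = (a - a.mean()) / (a.std() + 1e-8)
          b = (b - b.mean()) / (b.std() + 1e-8)
          return float((a * b).mean())

      def estimate_pose(user_image, virtual_model, candidate_poses):
          """Pick the candidate pose whose rendered view best matches the user image."""
          best_pose, best_score = None, -np.inf
          for pose in candidate_poses:
              rendered = virtual_model.render(pose)      # corresponding information
              score = similarity(user_image, rendered)   # similarity to user information
              if score > best_score:
                  best_pose, best_score = pose, score
          return best_pose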
  • Publication number: 20230177723
    Abstract: The present invention relates to a method and an apparatus for estimating a user pose using a three-dimensional virtual space model. The method of estimating a user pose, which includes position and orientation information of a user in a three-dimensional space, includes: receiving user information including an image acquired in the three-dimensional space; confirming a three-dimensional virtual space model constructed based on spatial information including depth information and image information for the three-dimensional space; generating corresponding information that corresponds to the user information in the three-dimensional virtual space model; calculating a similarity between the corresponding information and the user information; and estimating the user pose based on the similarity.
    Type: Application
    Filed: April 7, 2020
    Publication date: June 8, 2023
    Applicant: Korea University Research and Business Foundation
    Inventors: Nak Ju DOH, Ga Hyeon LIM, Jang Hun HYEON, Dong Woo KIM, Bum Chul JANG, Hyung A CHOI
  • Patent number: 11516448
    Abstract: The projection image compensating method according to an embodiment of the present disclosure includes: acquiring mesh data, at least one representative image, at least one supplementary image, and position information, which is information about the pose at which each image of an indoor space was obtained; adding an index of each of a plurality of faces that configure the mesh data to a matrix corresponding to a size of the at least one representative image, in accordance with a result of projecting the plurality of faces onto the at least one representative image; detecting at least one occluded face among the plurality of faces using the indexes added to the matrix; and extracting pixel information, which is information of a pixel value corresponding to the at least one occluded face, from the at least one supplementary image. A minimal sketch of this occlusion-detection step follows this entry.
    Type: Grant
    Filed: November 4, 2019
    Date of Patent: November 29, 2022
    Assignee: Korea University Research and Business Foundation
    Inventors: Nak Ju Doh, Hyung A Choi, Bum Chul Jang, Sang Min Ahn
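    Sketch: A minimal, hypothetical Python illustration of the index-matrix idea in the abstract above: write each face's index into a matrix the size of the representative image while projecting, then treat faces whose index never appears as occluded. The project() callback and the depth test are assumptions for illustration.

      import numpy as np

      def build_face_index_matrix(faces, project, image_shape):
          # Index buffer: -1 means no face was projected onto that pixel.
          index = np.full(image_shape, -1, dtype=int)
          depth = np.full(image_shape, np.inf)
          for i, face in enumerate(faces):
              for (u, v, z) in project(face):            # projected pixels with depth z
                  if 0 <= v < image_shape[0] and 0 <= u < image_shape[1] and z < depth[v, u]:
                      depth[v, u] = z
                      index[v, u] = i
          return index

      def occluded_faces(num_faces, index):
          # Faces that never appear in the matrix are hidden in the representative
          # image; their pixel values must come from a supplementary image.
          visible = set(int(i) for i in np.unique(index)) - {-1}
          return [i for i in range(num_faces) if i not in visible]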
  • Publication number: 20200286205
    Abstract: The 360-degree image producing method according to an embodiment of the present invention includes: an information receiving step of receiving 360-degree image producing information including a plurality of camera images, pose information, position information, depth information, a camera model, and a 360-degree model; a target selecting step of selecting, among a plurality of points included in the depth information, a depth information point corresponding to a target pixel included in the 360-degree image, using the position information, the 360-degree model, and the depth information; an image pixel value acquiring step of acquiring a pixel value of a pixel of a camera image, among the plurality of camera images, corresponding to the depth information point; and a target pixel constructing step of constructing a pixel value of the target pixel. A minimal sketch of this per-pixel flow follows this entry.
    Type: Application
    Filed: October 4, 2019
    Publication date: September 10, 2020
    Applicant: Korea University Research and Business Foundation
    Inventors: Nak Ju DOH, Hyung A CHOI, Bum Chul JANG
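    Sketch: A minimal, hypothetical Python illustration of the per-pixel flow in the abstract above: for each pixel of the 360-degree image, select a depth point, find the camera image that observes it, and copy that pixel value. The select_point() and pick_camera() helpers are assumed stand-ins for the geometric steps the abstract describes.

      import numpy as np

      def produce_360_image(width, height, depth_points, cameras, select_point, pick_camera):
          panorama = np.zeros((height, width, 3), dtype=np.uint8)
          for v in range(height):
              for u in range(width):
                  # Target selecting step: depth point corresponding to pixel (u, v).
                  point = select_point(u, v, depth_points)
                  if point is None:
                      continue
                  # Image pixel value acquiring step: camera image and pixel that see the point.
                  image, (px, py) = pick_camera(point, cameras)
                  # Target pixel constructing step.
                  panorama[v, u] = image[py, px]
          return panorama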
  • Publication number: 20200145630
    Abstract: The projection image compensating method according to an embodiment of the present disclosure includes: acquiring mesh data, at least one representative image, at least one supplementary image, and position information, which is information about the pose at which each image of an indoor space was obtained; adding an index of each of a plurality of faces that configure the mesh data to a matrix corresponding to a size of the at least one representative image, in accordance with a result of projecting the plurality of faces onto the at least one representative image; detecting at least one occluded face among the plurality of faces using the indexes added to the matrix; and extracting pixel information, which is information of a pixel value corresponding to the at least one occluded face, from the at least one supplementary image.
    Type: Application
    Filed: November 4, 2019
    Publication date: May 7, 2020
    Applicant: Korea University Research and Business Foundation
    Inventors: Nak Ju DOH, Hyung A CHOI, Bum Chul JANG, Sang Min AHN