Patents by Inventor Gyeongmin Choe

Gyeongmin Choe has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 12200398
    Abstract: An apparatus includes at least one processing device configured to obtain input frames from a video. The at least one processing device is also configured to generate a forward flow from a first input frame to a second input frame and a backward flow from the second input frame to the first input frame. The at least one processing device is further configured to generate an occlusion map at an interpolated frame coordinate using the forward flow and the backward flow. The at least one processing device is also configured to generate a consistency map at the interpolated frame coordinate using the forward flow and the backward flow. In addition, the at least one processing device is configured to perform blending using the occlusion map and the consistency map to generate an interpolated frame at the interpolated frame coordinate.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: January 14, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
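The occlusion/consistency blending described in this abstract can be sketched roughly as follows. This is a minimal illustration, not the patented method: the nearest-neighbor `warp`, the linear scaling of the forward flow to the interpolated coordinate, the exponential consistency weighting, and the threshold `tau` are all simplifying assumptions, and grayscale numpy frames stand in for real video.

```python
import numpy as np

def interpolate_frame(frame0, frame1, fwd_flow, bwd_flow, t=0.5, tau=1.0):
    """Blend two frames at time t in (0, 1) using occlusion and
    consistency maps derived from forward and backward optical flow.

    frame0, frame1: (H, W) grayscale frames.
    fwd_flow:  (H, W, 2) flow from frame0 to frame1 (x, y components).
    bwd_flow:  (H, W, 2) flow from frame1 back to frame0.
    """
    H, W = frame0.shape
    ys, xs = np.mgrid[0:H, 0:W]

    def warp(img, flow):
        # Backward warping with nearest-neighbor sampling, for simplicity.
        x = np.clip(np.rint(xs + flow[..., 0]), 0, W - 1).astype(int)
        y = np.clip(np.rint(ys + flow[..., 1]), 0, H - 1).astype(int)
        return img[y, x]

    # Linearly scale the forward flow to approximate the flows from the
    # interpolated frame coordinate back to each input frame.
    warped0 = warp(frame0, -t * fwd_flow)
    warped1 = warp(frame1, (1.0 - t) * fwd_flow)

    # Consistency map: where motion estimates agree, fwd + bwd ~ 0.
    err = np.linalg.norm(fwd_flow + bwd_flow, axis=-1)
    consistency = np.exp(-err)  # 1 where consistent, toward 0 otherwise

    # Occlusion map: flag pixels whose flow disagreement is too large.
    occluded = err > tau

    # Blend the warped frames where the flows agree; in occluded or
    # inconsistent regions, fall back to the temporally nearer frame.
    nearer = warped0 if t < 0.5 else warped1
    blended = (1.0 - t) * warped0 + t * warped1
    out = consistency * blended + (1.0 - consistency) * nearer
    return np.where(occluded, nearer, out)
```

With zero flow and identical inputs, the interpolated frame reproduces the input exactly, which is a quick sanity check on the blending weights.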
  • Patent number: 12192673
    Abstract: A method includes obtaining multiple video frames. The method also includes determining whether a bi-directional optical flow between the multiple video frames satisfies an image quality criterion for bi-directional consistency. The method further includes identifying a non-linear curve based on pixel coordinate values from at least two of the video frames. The at least two video frames include first and second video frames. The method also includes generating interpolated video frames between the first and second video frames by applying non-linear interpolation based on the non-linear curve. In addition, the method includes outputting the interpolated video frames for presentation.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: January 7, 2025
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
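The non-linear interpolation step above can be illustrated with a small sketch: fit one quadratic per coordinate axis to a tracked pixel's positions across several frames, then sample the curve at intermediate times. The quadratic degree, the use of `np.polyfit`, and the frame index as curve parameter are illustrative choices, not details taken from the patent.

```python
import numpy as np

def interpolate_positions(points, num_interp):
    """Fit one quadratic per coordinate axis to a tracked pixel's
    trajectory, then sample positions strictly between frames 0 and 1.

    points: (N, 2) array of (x, y) coordinates of the pixel in N >= 2
    consecutive frames (the frame index serves as the curve parameter).
    """
    points = np.asarray(points, dtype=float)
    n = len(points)
    ts = np.arange(n, dtype=float)
    deg = min(2, n - 1)  # fall back to linear when only 2 frames exist
    px = np.polyfit(ts, points[:, 0], deg)
    py = np.polyfit(ts, points[:, 1], deg)
    # Sample times strictly between the first and second frames.
    t_new = np.linspace(0.0, 1.0, num_interp + 2)[1:-1]
    return np.stack([np.polyval(px, t_new), np.polyval(py, t_new)], axis=-1)
```

For a point moving along x = t, y = t², three observed frames recover the parabola exactly, so the midpoint sample lands at (0.5, 0.25) rather than the (0.5, 0.5) a linear interpolation would give.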
  • Patent number: 12148175
    Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
    Type: Grant
    Filed: February 2, 2022
    Date of Patent: November 19, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Yingmao Li, Chenchi Luo, Gyeongmin Choe, John Seokjun Lee
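The predict-then-refine recurrence above can be sketched with stand-in models. In the patent both models are trained; here a constant-velocity assumption plays the prediction model and a crude brightness-constancy correction plays the update model, purely to show the data flow of one time step.

```python
import numpy as np

def predict_model(prev_flow):
    # Stand-in for the trained prediction model: a constant-velocity
    # assumption simply carries the previous flow forward.
    return prev_flow.copy()

def update_model(predicted_flow, prev_frame, cur_frame, step=0.5):
    # Stand-in for the trained update model: refine the prediction with a
    # brightness-constancy residual (one descent step on the classical
    # optical-flow data term).
    gy, gx = np.gradient(cur_frame)
    residual = cur_frame - prev_frame
    denom = gx ** 2 + gy ** 2 + 1e-6
    correction = np.stack([-residual * gx / denom,
                           -residual * gy / denom], axis=-1)
    return predicted_flow + step * correction

def flow_step(prev_flow, prev_frame, cur_frame):
    """One recurrent time step: predict the current flow from the previous
    one, then refine it against the actual frame pair."""
    predicted = predict_model(prev_flow)
    return update_model(predicted, prev_frame, cur_frame)
```

Chaining `flow_step` over a frame sequence mirrors the recurrence in the abstract: each step consumes the previous flow plus the previous and current frames, and emits the refined current flow.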
  • Patent number: 11720782
    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: August 8, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
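The concurrent generation described above, one training image per sensor calibration with per-task meta information emitted in the same pass, can be sketched as follows. The gain/noise "sensor simulation" and the placeholder meta labels are assumptions for illustration; a real pipeline would render task-specific ground truth (depth, segmentation, etc.) from the synthetic scene model.

```python
import numpy as np

def generate_training_data(scene_images, calibrations, tasks, seed=0):
    """For each synthetic scene and each sensor calibration, render one
    training image; alongside it, emit one piece of meta information per
    imaging task. Images and meta are produced in the same pass."""
    rng = np.random.default_rng(seed)
    dataset = []
    for scene in scene_images:
        for cal in calibrations:
            # Stand-in sensor simulation: per-sensor gain and read noise.
            image = cal["gain"] * scene + rng.normal(
                0.0, cal["noise"], scene.shape)
            # Placeholder per-task meta; real meta would be rendered
            # ground truth for each imaging task.
            meta = {task: {"source_shape": scene.shape} for task in tasks}
            dataset.append((image, meta))
    return dataset
```

Each scene thus yields one sample per sensor, and every sample carries meta entries keyed by imaging task, matching the correspondence structure the abstract describes.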
  • Publication number: 20230245328
    Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 3, 2023
    Inventors: Yingmao Li, Chenchi Luo, Gyeongmin Choe, John Seokjun Lee
  • Publication number: 20220303495
    Abstract: An apparatus includes at least one processing device configured to obtain input frames from a video. The at least one processing device is also configured to generate a forward flow from a first input frame to a second input frame and a backward flow from the second input frame to the first input frame. The at least one processing device is further configured to generate an occlusion map at an interpolated frame coordinate using the forward flow and the backward flow. The at least one processing device is also configured to generate a consistency map at the interpolated frame coordinate using the forward flow and the backward flow. In addition, the at least one processing device is configured to perform blending using the occlusion map and the consistency map to generate an interpolated frame at the interpolated frame coordinate.
    Type: Application
    Filed: February 2, 2022
    Publication date: September 22, 2022
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
  • Publication number: 20220301184
    Abstract: A method includes obtaining multiple video frames. The method also includes determining whether a bi-directional optical flow between the multiple video frames satisfies an image quality criterion for bi-directional consistency. The method further includes identifying a non-linear curve based on pixel coordinate values from at least two of the video frames. The at least two video frames include first and second video frames. The method also includes generating interpolated video frames between the first and second video frames by applying non-linear interpolation based on the non-linear curve. In addition, the method includes outputting the interpolated video frames for presentation.
    Type: Application
    Filed: February 2, 2022
    Publication date: September 22, 2022
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
  • Patent number: 11297244
    Abstract: A method includes receiving a selection of a selected zoom area on an input image frame displayed on a user interface; determining one or more candidate zoom previews proximate to the selected zoom area using a saliency detecting algorithm; and displaying the one or more candidate zoom previews on the user interface adjacent to the selected zoom area.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: April 5, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
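The candidate-preview step above can be sketched by scoring fixed-size windows around the selected area and keeping the best ones. Local variance here is only a stand-in for the saliency detecting algorithm the claim recites, and the 3x3 offset grid is an arbitrary choice of "proximate" candidates.

```python
import numpy as np

def candidate_zoom_previews(image, sel_x, sel_y, size=32, k=3):
    """Rank candidate zoom windows near a selected point, using local
    variance as a stand-in saliency measure, and return the top-k
    window origins as (x0, y0) tuples."""
    H, W = image.shape
    candidates = []
    for dy in (-size, 0, size):
        for dx in (-size, 0, size):
            x0, y0 = sel_x + dx, sel_y + dy
            # Keep only windows that fit entirely inside the frame.
            if 0 <= x0 and x0 + size <= W and 0 <= y0 and y0 + size <= H:
                patch = image[y0:y0 + size, x0:x0 + size]
                candidates.append((float(patch.var()), (x0, y0)))
    candidates.sort(key=lambda c: -c[0])  # most salient first
    return [origin for _, origin in candidates[:k]]
```

A UI layer would then render crops of the returned windows adjacent to the selected zoom area, as the claim describes.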
  • Publication number: 20210390375
    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
    Type: Application
    Filed: December 28, 2020
    Publication date: December 16, 2021
    Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Patent number: 11132772
    Abstract: A method includes obtaining a first image of a scene using a first image sensor of an electronic device and a second image of the scene using a second image sensor of the electronic device. The method also includes generating a first feature map from the first image and a second feature map from the second image. The method further includes generating a third feature map based on the first feature map, the second feature map, and an asymmetric search window. The method additionally includes generating a depth map by restoring spatial resolution to the third feature map.
    Type: Grant
    Filed: December 12, 2019
    Date of Patent: September 28, 2021
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo, George Q. Chen, Kaimo Lin, David D. Liu, Gyeongmin Choe
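The asymmetric-search idea above can be illustrated with a toy stereo matcher. The patent operates on learned feature maps and restores spatial resolution afterward; this sketch skips both and treats the images themselves as the feature maps, keeping only the one-sided (asymmetric) horizontal search window and the per-pixel best-match selection.

```python
import numpy as np

def disparity_from_stereo(left, right, max_disp=8):
    """Toy stereo matching: build a cost volume over an asymmetric
    (one-sided) horizontal search window and pick the lowest-cost
    disparity per pixel. Depth for a calibrated rig is then proportional
    to 1 / disparity."""
    H, W = left.shape
    costs = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Asymmetric search: shift the right image one way only, since a
        # horizontally offset camera displaces the scene in one direction.
        shifted = np.roll(right, d, axis=1)
        cost = np.abs(left - shifted)
        cost[:, :d] = np.inf  # columns wrapped around by np.roll are invalid
        costs[d] = cost
    return np.argmin(costs, axis=0)
```

Because the search window covers only non-negative shifts, the cost volume is half the size a symmetric search would need, which is the practical appeal of the asymmetric window.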
  • Publication number: 20210250510
    Abstract: A method includes receiving a selection of a selected zoom area on an input image frame displayed on a user interface; determining one or more candidate zoom previews proximate to the selected zoom area using a saliency detecting algorithm; and displaying the one or more candidate zoom previews on the user interface adjacent to the selected zoom area.
    Type: Application
    Filed: July 27, 2020
    Publication date: August 12, 2021
    Inventors: Gyeongmin Choe, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Publication number: 20200394759
    Abstract: A method includes obtaining a first image of a scene using a first image sensor of an electronic device and a second image of the scene using a second image sensor of the electronic device. The method also includes generating a first feature map from the first image and a second feature map from the second image. The method further includes generating a third feature map based on the first feature map, the second feature map, and an asymmetric search window. The method additionally includes generating a depth map by restoring spatial resolution to the third feature map.
    Type: Application
    Filed: December 12, 2019
    Publication date: December 17, 2020
    Inventors: Chenchi Luo, Yingmao Li, Youngjun Yoo, George Q. Chen, Kaimo Lin, David D. Liu, Gyeongmin Choe