Patents by Inventor John Seokjun Lee

John Seokjun Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240135673
    Abstract: A method includes obtaining an under-display camera (UDC) image captured using a camera located under a display. The method also includes processing, using at least one processing device of an electronic device, the UDC image based on a machine learning model to restore the UDC image. The method further includes displaying or storing the restored image corresponding to the UDC image. The machine learning model is trained using (i) a ground truth image and (ii) a synthetic image generated using the ground truth image and a point spread function that is based on an optical transmission model of the display.
    Type: Application
    Filed: October 23, 2022
    Publication date: April 25, 2024
    Inventors: Yibo Xu, Weidi Liu, Hamid R. Sheikh, John Seokjun Lee
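
The restoration step described in this first entry can be pictured with a short, hedged sketch. The Wiener-style FFT deconvolution below is only a classical stand-in for the trained machine learning model, and the Gaussian point spread function is an assumed placeholder for the display-derived PSF mentioned in the abstract; none of it is taken from the patent filing itself.

```python
import numpy as np

def gaussian_psf(size=15, sigma=2.0):
    """Placeholder PSF; the filing derives its PSF from an optical
    transmission model of the display, which is not reproduced here."""
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def restore(udc_image, psf, eps=1e-2):
    """Wiener-like PSF inversion in the frequency domain, standing in for the
    learned restoration network (the half-kernel circular shift of the output
    is ignored in this sketch)."""
    H = np.fft.fft2(psf, s=udc_image.shape)
    G = np.fft.fft2(udc_image)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + eps)))

udc_frame = np.random.rand(256, 256)            # stand-in for a captured UDC image
restored = restore(udc_frame, gaussian_psf())   # display or store `restored`
```
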
  • Publication number: 20240119570
    Abstract: A method includes identifying, using at least one processing device of an electronic device, a spatially-variant point spread function associated with an under-display camera. The spatially-variant point spread function is based on an optical transmission model and a layout of a display associated with the under-display camera. The method also includes generating, using the at least one processing device, a ground truth image. The method further includes performing, using the at least one processing device, a convolution of the ground truth image based on the spatially-variant point spread function in order to generate a synthetic sensor image. The synthetic sensor image represents a simulated image captured by the under-display camera. In addition, the method includes providing, using the at least one processing device, the synthetic sensor image and the ground truth image as an image pair to train a machine learning model to perform under-display camera point spread function inversion.
    Type: Application
    Filed: October 11, 2022
    Publication date: April 11, 2024
    Inventors: Yibo Xu, Weidi Liu, Hamid R. Sheikh, John Seokjun Lee
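
A minimal sketch of the synthetic-data step in this entry is given below. It approximates the spatially-variant point spread function with a per-tile Gaussian whose width grows toward the image corners; the tile size, the Gaussian form, and the `local_psf` helper are illustrative assumptions, since the actual PSFs come from an optical transmission model and display layout not reproduced in the listing.

```python
import numpy as np
from scipy.ndimage import convolve

def local_psf(cy, cx, h, w, size=9):
    """Placeholder spatially-variant PSF: a Gaussian whose width grows with
    distance from the image center."""
    sigma = 1.0 + 2.0 * np.hypot(cy / h - 0.5, cx / w - 0.5)
    ax = np.arange(size) - size // 2
    k = np.exp(-(ax[:, None] ** 2 + ax[None, :] ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def synth_sensor_image(ground_truth, tile=64):
    """Approximate a spatially-variant convolution by convolving each tile
    with the PSF sampled at that tile's center."""
    h, w = ground_truth.shape
    out = np.zeros_like(ground_truth)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            k = local_psf(y + tile / 2, x + tile / 2, h, w)
            out[y:y + tile, x:x + tile] = convolve(ground_truth, k)[y:y + tile, x:x + tile]
    return out

ground_truth = np.random.rand(256, 256)                        # stand-in ground truth image
image_pair = (synth_sensor_image(ground_truth), ground_truth)  # trains PSF inversion
```
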
  • Publication number: 20240062342
    Abstract: A method includes obtaining an input image that contains blur. The method also includes providing the input image to a trained machine learning model, where the trained machine learning model includes (i) a shallow feature extractor configured to extract one or more feature maps from the input image and (ii) a deep feature extractor configured to extract deep features from the one or more feature maps. The method further includes using the trained machine learning model to generate a sharpened output image. The trained machine learning model is trained using ground truth training images and input training images, where the input training images include versions of the ground truth training images with blur created using demosaic and noise filtering operations.
    Type: Application
    Filed: August 18, 2022
    Publication date: February 22, 2024
    Inventors: Devendra K. Jangid, John Seokjun Lee, Hamid R. Sheikh
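
The training pairs in this entry are ground-truth images alongside copies softened by demosaic and noise-filtering operations. The sketch below builds such a pair under simplifying assumptions: an RGGB Bayer sampling, a bilinear demosaic by normalized convolution, and a Gaussian filter as the stand-in noise-filtering step. The shallow and deep feature extractors of the network itself are not sketched.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def rggb_masks(h, w):
    """Binary sampling masks for an RGGB Bayer pattern."""
    r = np.zeros((h, w)); r[0::2, 0::2] = 1
    b = np.zeros((h, w)); b[1::2, 1::2] = 1
    return r, 1 - r - b, b

def mosaic(rgb):
    """Collapse a full-color image onto a single-plane Bayer mosaic."""
    h, w, _ = rgb.shape
    r, g, b = rggb_masks(h, w)
    return rgb[..., 0] * r + rgb[..., 1] * g + rgb[..., 2] * b

def bilinear_demosaic(bayer):
    """Interpolate each missing color sample by normalized convolution; the
    interpolation is what softens fine detail (the blur in the listing)."""
    h, w = bayer.shape
    k = np.array([[0.25, 0.5, 0.25], [0.5, 1.0, 0.5], [0.25, 0.5, 0.25]])
    planes = [convolve(bayer * m, k) / np.maximum(convolve(m, k), 1e-8)
              for m in rggb_masks(h, w)]
    return np.stack(planes, axis=-1)

gt = np.random.rand(128, 128, 3)                                # ground truth training image
demosaiced = bilinear_demosaic(mosaic(gt))                      # demosaic-induced softening
train_input = gaussian_filter(demosaiced, sigma=(0.8, 0.8, 0))  # stand-in noise filtering
train_pair = (train_input, gt)                                  # (blurred input, ground truth)
```
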
  • Patent number: 11869118
    Abstract: An apparatus includes at least one memory configured to store an AI network and at least one processor. The at least one processor is configured to generate a dead leaves model. The at least one processor is also configured to capture a ground truth frame from the dead leaves model. The at least one processor is further configured to apply a mathematical noise model to the ground truth frame to produce a noisy frame. In addition, the at least one processor is configured to train the AI network using the ground truth frame and the noisy frame.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: January 9, 2024
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Pavan Chennagiri, John Seokjun Lee, Hamid R. Sheikh
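
The data-generation loop in this granted patent lends itself to a compact illustration. The sketch below renders a dead leaves image from randomly ordered, overlapping disks and applies a signal-dependent noise model to obtain the noisy counterpart; the heavy-tailed radius distribution and the shot-plus-read noise parameters are common choices assumed here, not values from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def dead_leaves(h=256, w=256, n_disks=2000, r_min=2, r_max=40):
    """Stamp disks back-to-front; later disks occlude earlier ones."""
    img = np.zeros((h, w))
    yy, xx = np.mgrid[0:h, 0:w]
    for _ in range(n_disks):
        r = 1.0 / rng.uniform(1.0 / r_max, 1.0 / r_min)   # heavy-tailed radii
        cy, cx = rng.uniform(0, h), rng.uniform(0, w)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r * r] = rng.uniform()
    return img

def add_noise(frame, read_sigma=0.01, shot_gain=0.02):
    """Signal-dependent Gaussian approximation of shot + read noise."""
    sigma = np.sqrt(read_sigma ** 2 + shot_gain * frame)
    return frame + rng.normal(0.0, 1.0, frame.shape) * sigma

ground_truth = dead_leaves()
noisy = add_noise(ground_truth)   # (noisy, ground_truth) trains the AI network
```
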
  • Publication number: 20230267702
    Abstract: An electronic device is provided. The electronic device includes a display, a camera module disposed under the display, and a processor electrically connected to the display and the camera module. The processor is configured to acquire a sample frame using the camera module, identify whether a light source object is included in the sample frame, determine an imaging parameter for acquiring multiple frames when the light source object is identified in the sample frame, acquire the multiple frames based on the imaging parameter, composite the multiple frames to generate a composite frame, identify an attribute of the light source object included in the composite frame, and perform frame correction of the composite frame based on the identified attribute.
    Type: Application
    Filed: April 21, 2023
    Publication date: August 24, 2023
    Inventors: Woojhon Choi, Wonjoon Do, Jaesung Choi, Alok Shankarlal Shukla, Manoj Kumar Marramreddy, Saketh Sharma, Hamid Rahim Sheikh, John Seokjun Lee, Akira Osamoto, Yibo Xu
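
The capture flow of this application can be outlined loosely as below. The saturation threshold, the `capture_frame` stub, the choice of a shorter exposure as the imaging parameter, and the median-based correction are all placeholders standing in for the detection, compositing, and correction steps named in the abstract.

```python
import numpy as np

def capture_frame(exposure):
    """Stub standing in for the camera module located under the display."""
    return np.clip(np.random.rand(240, 320) * exposure, 0.0, 1.0)

def find_light_source(frame, sat=0.98, min_pixels=20):
    """Flag a light source object when enough pixels are near saturation."""
    mask = frame >= sat
    return mask if mask.sum() >= min_pixels else None

sample = capture_frame(exposure=1.0)                  # sample frame
source_mask = find_light_source(sample)
exposure = 0.25 if source_mask is not None else 1.0   # imaging parameter

frames = [capture_frame(exposure) for _ in range(4)]  # acquire multiple frames
composite = np.mean(frames, axis=0)                   # composite frame

if source_mask is not None:
    # Placeholder correction: pull flagged light-source pixels toward the
    # global median to suppress blooming around the source.
    composite[source_mask] = np.median(composite)
```
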
  • Publication number: 20230252608
    Abstract: A method includes obtaining, using a stationary sensor of an electronic device, multiple image frames including first and second image frames. The method also includes generating, using multiple previously generated motion vectors, a first motion-distorted image frame using the first image frame and a second motion-distorted image frame using the second image frame. The method further includes adding noise to the motion-distorted image frames to generate first and second noisy motion-distorted image frames. The method also includes performing (i) a first multi-frame processing (MFP) operation to generate a ground truth image using the motion-distorted image frames and (ii) a second MFP operation to generate an input image using the noisy motion-distorted image frames.
    Type: Application
    Filed: February 7, 2022
    Publication date: August 10, 2023
    Inventors: Yingmao Li, Hamid R. Sheikh, John Seokjun Lee, Youngmin Kim, Jun Ki Cho, Seung-Chul Jeon
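
A hedged sketch of this pair-generation recipe is shown below: static frames are warped with previously generated motion vectors, noise is added to one copy of the stack, and the same multi-frame processing (MFP) step is run on both stacks. Global translation vectors, Gaussian noise, and a temporal mean as the MFP stand-in are assumptions made for brevity.

```python
import numpy as np
from scipy.ndimage import shift as translate

rng = np.random.default_rng(1)
frames = [np.random.rand(128, 128) for _ in range(2)]           # stationary-sensor frames
motion_vectors = [(1.5, -0.5), (-2.0, 1.0)]                     # previously generated motion

distorted = [translate(f, mv, order=1, mode="nearest")          # motion-distorted frames
             for f, mv in zip(frames, motion_vectors)]
noisy = [d + rng.normal(0, 0.02, d.shape) for d in distorted]   # noisy motion-distorted frames

def mfp(stack):
    """Stand-in multi-frame processing operation (simple temporal mean)."""
    return np.mean(stack, axis=0)

ground_truth = mfp(distorted)   # first MFP pass
train_input = mfp(noisy)        # second MFP pass -> (train_input, ground_truth) pair
```
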
  • Publication number: 20230252770
    Abstract: A method for training data generation includes obtaining a first set of image frames of a scene and a second set of image frames of the scene using multiple exposure settings. The method also includes generating an alignment map, a blending map, and an input image using the first set of image frames. The method further includes generating a ground truth image using the alignment map, the blending map, and the second set of image frames. In addition, the method includes using the ground truth image and the input image as an image pair in a training dataset when training a machine learning model to reduce image distortion and noise.
    Type: Application
    Filed: October 11, 2022
    Publication date: August 10, 2023
    Inventors: Tyler Luu, John W. Glotzbach, Hamid R. Sheikh, John Seokjun Lee, Youngmin Kim, Jun Ki Cho, Seung-Chul Jeon
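
The key point of this entry, reusing alignment and blending decisions computed from one exposure set to merge the other, can be sketched as follows. A global FFT cross-correlation shift stands in for the alignment map and a brightness-based weight stands in for the blending map; neither is specified in the listing.

```python
import numpy as np
from scipy.ndimage import shift as translate

def align_offset(ref, frame):
    """Estimate a global integer shift by FFT cross-correlation."""
    corr = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(frame)))
    dy, dx = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    h, w = ref.shape
    return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

def merge(frames, offsets, blend):
    """Align every frame, then mix the reference with the rest using `blend`."""
    aligned = [translate(f, o, order=1, mode="nearest") for f, o in zip(frames, offsets)]
    return blend * aligned[0] + (1 - blend) * np.mean(aligned[1:], axis=0)

set_a = [np.random.rand(128, 128) for _ in range(3)]   # first set of image frames
set_b = [np.random.rand(128, 128) for _ in range(3)]   # second set (different exposures)

offsets = [align_offset(set_a[0], f) for f in set_a]   # stand-in alignment map
blend_map = np.clip(set_a[0], 0.2, 0.8)                # stand-in blending map
input_image = merge(set_a, offsets, blend_map)
ground_truth = merge(set_b, offsets, blend_map)        # same maps, second set
```
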
  • Patent number: 11720782
    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
    Type: Grant
    Filed: December 28, 2020
    Date of Patent: August 8, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Publication number: 20230245328
    Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
    Type: Application
    Filed: February 2, 2022
    Publication date: August 3, 2023
    Inventors: Yingmao Li, Chenchi Luo, Gyeongmin Choe, John Seokjun Lee
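
The predict-then-update structure of this application is mimicked below with classical stand-ins: a constant-velocity extrapolation plays the role of the trained prediction model, and one global Lucas-Kanade step plays the role of the trained update model. Both substitutions, and the dense-flow layout of shape (2, H, W), are assumptions of the sketch.

```python
import numpy as np
from scipy.ndimage import sobel, shift as translate

def predict(prev_flow):
    """Stand-in prediction model: assume the motion repeats (constant velocity)."""
    return prev_flow.copy()

def update(pred_flow, frame_prev, frame_cur):
    """Stand-in update model: warp with the predicted flow, then solve one
    global Lucas-Kanade step for the residual motion."""
    dy, dx = pred_flow.mean(axis=(1, 2))
    warped = translate(frame_prev, (dy, dx), order=1, mode="nearest")
    ix = sobel(warped, axis=1) / 8.0            # unit-step image gradients
    iy = sobel(warped, axis=0) / 8.0
    it = warped - frame_cur                     # residual to be explained by motion
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = np.array([np.sum(ix * it), np.sum(iy * it)])
    ddx, ddy = np.linalg.solve(A, b)
    return pred_flow + np.array([ddy, ddx])[:, None, None]

prev_flow = np.zeros((2, 64, 64))               # flow from the previous time step, (dy, dx)
f_prev, f_cur = np.random.rand(64, 64), np.random.rand(64, 64)
flow_now = update(predict(prev_flow), f_prev, f_cur)
```
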
  • Publication number: 20230091909
    Abstract: An apparatus includes at least one memory configured to store an AI network and at least one processor. The at least one processor is configured to generate a dead leaves model. The at least one processor is also configured to capture a ground truth frame from the dead leaves model. The at least one processor is further configured to apply a mathematical noise model to the ground truth frame to produce a noisy frame. In addition, the at least one processor is configured to train the AI network using the ground truth frame and the noisy frame.
    Type: Application
    Filed: January 28, 2022
    Publication date: March 23, 2023
    Inventors: Pavan Chennagiri, John Seokjun Lee, Hamid R. Sheikh
  • Patent number: 11593637
    Abstract: A method, an electronic device, and computer readable medium are provided. The method includes receiving an input into a neural network that includes a kernel. The method also includes generating, during a convolution operation of the neural network, multiple panel matrices based on different portions of the input. The method additionally includes successively combining each of the multiple panel matrices with the kernel to generate an output. Generating the multiple panel matrices can include mapping elements within a moving window of the input onto columns of an indexing matrix, where a size of the window corresponds to the size of the kernel.
    Type: Grant
    Filed: April 30, 2019
    Date of Patent: February 28, 2023
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Chenchi Luo, Yuming Zhu, Hyejung Kim, John Seokjun Lee, Manish Goel
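
The panel-matrix convolution in this granted patent is closely related to the well-known im2col mapping, which the sketch below uses as an approximation: elements inside a moving window are mapped onto columns of an indexing matrix, and the convolution becomes a matrix product with the flattened kernel, processed panel by panel. The panel size and the row-band partitioning are assumptions, not the patented memory layout.

```python
import numpy as np
from scipy.signal import correlate2d

def im2col(x, kh, kw):
    """Map each kh x kw window of x onto a column of the output matrix."""
    h, w = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    cols = np.empty((kh * kw, oh * ow))
    for i in range(kh):
        for j in range(kw):
            cols[i * kw + j] = x[i:i + oh, j:j + ow].ravel()
    return cols

def conv2d_panels(x, kernel, panel_rows=8):
    """Combine successive panel matrices with the flattened kernel instead of
    materializing one large indexing matrix (deep-learning convention: no
    kernel flip, i.e. cross-correlation)."""
    kh, kw = kernel.shape
    oh, ow = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for r0 in range(0, oh, panel_rows):
        r1 = min(r0 + panel_rows, oh)
        panel = im2col(x[r0:r1 + kh - 1], kh, kw)        # panel matrix
        out[r0:r1] = (kernel.ravel() @ panel).reshape(r1 - r0, ow)
    return out

x, k = np.random.rand(32, 32), np.random.rand(3, 3)
assert np.allclose(conv2d_panels(x, k), correlate2d(x, k, mode="valid"))
```
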
  • Publication number: 20220337747
    Abstract: A method for operating an electronic device includes capturing, by an image sensor module, a stream from a pixel array. The method also includes processing, by the image sensor module, the stream to generate a preview stream and a full frame stream. The method further includes compressing, by the image sensor module, the preview stream using a first compression and the full frame stream using a second compression. In addition, the method includes outputting, by the image sensor module, the compressed preview stream and the compressed full frame stream.
    Type: Application
    Filed: June 30, 2022
    Publication date: October 20, 2022
    Inventors: Hamid R. Sheikh, Youngjun Yoo, John Seokjun Lee, Michael O. Polley
  • Publication number: 20220303495
    Abstract: An apparatus includes at least one processing device configured to obtain input frames from a video. The at least one processing device is also configured to generate a forward flow from a first input frame to a second input frame and a backward flow from the second input frame to the first input frame. The at least one processing device is further configured to generate an occlusion map at an interpolated frame coordinate using the forward flow and the backward flow. The at least one processing device is also configured to generate a consistency map at the interpolated frame coordinate using the forward flow and the backward flow. In addition, the at least one processing device is configured to perform blending using the occlusion map and the consistency map to generate an interpolated frame at the interpolated frame coordinate.
    Type: Application
    Filed: February 2, 2022
    Publication date: September 22, 2022
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
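
A simplified version of the blending stage in this entry is sketched below, assuming the forward and backward flows are already given (here, synthetic constant flows). Occlusion is approximated by a forward-backward disagreement threshold, the consistency map is a soft version of the same measure, and the interpolated frame is a consistency-weighted blend with an occlusion fallback; the thresholds and weights are illustrative choices only.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def backward_warp(frame, disp):
    """Sample `frame` at each pixel displaced by `disp` (shape 2 x H x W, (dy, dx))."""
    h, w = frame.shape
    yy, xx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(frame, [yy + disp[0], xx + disp[1]], order=1, mode="nearest")

f0, f1 = np.random.rand(96, 96), np.random.rand(96, 96)            # first and second frames
fwd = np.stack([np.full((96, 96), 1.0), np.full((96, 96), 2.0)])   # flow 0 -> 1
bwd = -fwd                                                         # flow 1 -> 0

t = 0.5                                                  # interpolated frame coordinate
err = np.linalg.norm(fwd + bwd, axis=0)                  # forward-backward disagreement
occlusion = (err > 1.0).astype(float)                    # occlusion map
consistency = np.exp(-err)                               # consistency map

warped0 = backward_warp(f0, -t * fwd)                    # frame 0 pulled to time t
warped1 = backward_warp(f1, -(1 - t) * bwd)              # frame 1 pulled to time t

w0, w1 = (1 - t) * consistency + 1e-6, t * consistency + 1e-6
blend = (w0 * warped0 + w1 * warped1) / (w0 + w1)
nearer = warped0 if t <= 0.5 else warped1
interpolated = np.where(occlusion > 0, nearer, blend)    # interpolated frame at t
```
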
  • Publication number: 20220301184
    Abstract: A method includes obtaining multiple video frames. The method also includes determining whether a bi-directional optical flow between the multiple video frames satisfies an image quality criterion for bi-directional consistency. The method further includes identifying a non-linear curve based on pixel coordinate values from at least two of the video frames. The at least two video frames include first and second video frames. The method also includes generating interpolated video frames between the first and second video frames by applying non-linear interpolation based on the non-linear curve. In addition, the method includes outputting the interpolated video frames for presentation.
    Type: Application
    Filed: February 2, 2022
    Publication date: September 22, 2022
    Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
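
The non-linear interpolation in this application can be made concrete with a tiny example: fit a curve through a pixel's positions in consecutive frames and sample intermediate positions from that curve instead of a straight line. The quadratic model and the three observation times below are assumed for illustration; the listing does not fix the curve's form.

```python
import numpy as np

# Observed (x, y) positions of one tracked pixel at times t = -1, 0, 1
times = np.array([-1.0, 0.0, 1.0])
xs = np.array([10.0, 14.0, 20.0])
ys = np.array([5.0, 5.5, 7.0])

cx = np.polyfit(times, xs, deg=2)     # quadratic coefficients for x(t)
cy = np.polyfit(times, ys, deg=2)     # quadratic coefficients for y(t)

# Interpolated positions between the first video frame (t=0) and the second (t=1)
for t in (0.25, 0.5, 0.75):
    print(t, np.polyval(cx, t), np.polyval(cy, t))
```
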
  • Patent number: 11412136
    Abstract: A method includes, in a first mode, positioning first and second tiltable image sensor modules of an image sensor array of an electronic device so that a first optical axis of the first tiltable image sensor module and a second optical axis of the second tiltable image sensor module are substantially perpendicular to a surface of the electronic device, and the first and second tiltable image sensor modules are within a thickness profile of the electronic device. The method also includes, in a second mode, tilting the first and second tiltable image sensor modules so that the first optical axis of the first tiltable image sensor module and the second optical axis of the second tiltable image sensor module are not perpendicular to the surface of the electronic device, and at least part of the first and second tiltable image sensor modules are no longer within the thickness profile of the electronic device.
    Type: Grant
    Filed: December 5, 2019
    Date of Patent: August 9, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Hamid R. Sheikh, Youngjun Yoo, John Seokjun Lee, Michael O. Polley
  • Patent number: 11297244
    Abstract: A method includes receiving a selection of a selected zoom area on an input image frame displayed on a user interface; determining one or more candidate zoom previews proximate to the selected zoom area using a saliency detecting algorithm; and displaying the one or more candidate zoom previews on the user interface adjacent to the selected zoom area.
    Type: Grant
    Filed: July 27, 2020
    Date of Patent: April 5, 2022
    Assignee: Samsung Electronics Co., Ltd.
    Inventors: Gyeongmin Choe, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
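
The zoom-preview flow shared by this patent and its application publication below can be sketched as follows: compute a simple saliency measure, restrict it to a neighborhood of the user's selected zoom area, and return the most salient window centers as candidate previews. Gradient-energy saliency, the fixed `search` radius, and the `candidate_zoom_previews` helper are stand-ins for the saliency detecting algorithm referenced in the abstract.

```python
import numpy as np
from scipy.ndimage import sobel, uniform_filter

def candidate_zoom_previews(image, selected_yx, zoom=64, search=96, top_k=3):
    """Return centers of the most salient zoom-sized windows near the selection."""
    gy, gx = sobel(image, axis=0), sobel(image, axis=1)
    saliency = uniform_filter(np.hypot(gy, gx), size=zoom)   # mean gradient energy per window
    cy, cx = selected_yx
    y0, y1 = max(cy - search, 0), min(cy + search, image.shape[0])
    x0, x1 = max(cx - search, 0), min(cx + search, image.shape[1])
    local = saliency[y0:y1, x0:x1]
    flat = np.argsort(local, axis=None)[::-1][:top_k]        # most salient centers nearby
    ys, xs = np.unravel_index(flat, local.shape)
    return [(int(y + y0), int(x + x0)) for y, x in zip(ys, xs)]

frame = np.random.rand(480, 640)
previews = candidate_zoom_previews(frame, selected_yx=(240, 320))
# Each returned center would be cropped to a zoom x zoom preview and shown
# on the user interface adjacent to the selected zoom area.
```
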
  • Publication number: 20210390375
    Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
    Type: Application
    Filed: December 28, 2020
    Publication date: December 16, 2021
    Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Publication number: 20210250510
    Abstract: A method includes receiving a selection of a selected zoom area on an input image frame displayed on a user interface; determining one or more candidate zoom previews proximate to the selected zoom area using a saliency detecting algorithm; and displaying the one or more candidate zoom previews on the user interface adjacent to the selected zoom area.
    Type: Application
    Filed: July 27, 2020
    Publication date: August 12, 2021
    Inventors: Gyeongmin Choe, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo
  • Publication number: 20200349426
    Abstract: A method, an electronic device, and computer readable medium are provided. The method includes receiving an input into a neural network that includes a kernel. The method also includes generating, during a convolution operation of the neural network, multiple panel matrices based on different portions of the input. The method additionally includes successively combining each of the multiple panel matrices with the kernel to generate an output. Generating the multiple panel matrices can include mapping elements within a moving window of the input onto columns of an indexing matrix, where a size of the window corresponds to the size of the kernel.
    Type: Application
    Filed: April 30, 2019
    Publication date: November 5, 2020
    Inventors: Chenchi Luo, Yuming Zhu, Hyejung Kim, John Seokjun Lee, Manish Goel