Patents by Inventor John Seokjun Lee
John Seokjun Lee has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12272032
Abstract: A method includes obtaining an input image that contains blur. The method also includes providing the input image to a trained machine learning model, where the trained machine learning model includes (i) a shallow feature extractor configured to extract one or more feature maps from the input image and (ii) a deep feature extractor configured to extract deep features from the one or more feature maps. The method further includes using the trained machine learning model to generate a sharpened output image. The trained machine learning model is trained using ground truth training images and input training images, where the input training images include versions of the ground truth training images with blur created using demosaic and noise filtering operations.
Type: Grant
Filed: August 18, 2022
Date of Patent: April 8, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Devendra K. Jangid, John Seokjun Lee, Hamid R. Sheikh
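The shallow-then-deep extraction pipeline above can be sketched with plain convolutions. This is a minimal illustration, not the patented model: the kernels stand in for learned filter banks, and the residual reconstruction step is an assumption about how the sharpened output is formed.

```python
import numpy as np

def conv2d_same(x, k):
    """Naive 'same'-padded 2-D convolution for a single-channel image."""
    kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = np.sum(xp[i:i + kh, j:j + kw] * k)
    return out

def sharpen(img, k_shallow, k_deep, k_recon):
    # Shallow feature extractor: one filter pass over the input image.
    shallow = conv2d_same(img, k_shallow)
    # Deep feature extractor: further filtering of the shallow feature map.
    deep = conv2d_same(shallow, k_deep)
    # Reconstruction: add the predicted detail back to the input (residual).
    return img + conv2d_same(deep, k_recon)
```

In the real model each stage would be a trained network producing many feature channels; the single-kernel chain here only shows the data flow.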
-
Publication number: 20250113097
Abstract: A method includes obtaining, using at least one under display camera, one or more first image frames associated with a first diffraction pattern and one or more second image frames associated with a second diffraction pattern. The first diffraction pattern and the second diffraction pattern are related through a transformation. The method also includes generating a first deblurred image using the one or more first image frames and a second deblurred image using the one or more second image frames. The method further includes combining the first and second deblurred images while exploiting complementary types of image artifacts created by the first and second diffraction patterns to generate an image of a scene.
Type: Application
Filed: September 28, 2023
Publication date: April 3, 2025
Inventors: Jinhan Hu, Jing Li, Chengyu Wang, Pavan C. Madhusudanarao, Hamid R. Sheikh, John Seokjun Lee
-
Publication number: 20250106416
Abstract: A method includes obtaining a raw image and mapping, using a raw image encoder, the raw image to a compressed domain. The raw image is represented using latent variables in the compressed domain. The method also includes performing one or more image signal processing operations on the latent variables, where (i) each of the one or more image signal processing operations is configured to operate in the compressed domain and (ii) the one or more image signal processing operations generate processed latent variables. The method further includes mapping, using an output image decoder, the processed latent variables to an output image in an output color space.
Type: Application
Filed: August 27, 2024
Publication date: March 27, 2025
Inventors: Molin Zhang, Soumendu Majee, Chengyu Wang, John Seokjun Lee, Hamid Rahim Sheikh
-
Publication number: 20250045867
Abstract: A method includes obtaining a ground truth image and generating multiple image frames using the ground truth image, a modeled optical blur, and a modeled global motion. The method also includes generating multiple mosaic image frames using the image frames and a color filter array and generating multiple raw input image frames using the mosaic image frames and a noise model associated with at least one imaging sensor. The method further includes providing the raw input image frames to a multi-frame processing pipeline in order to generate synthetic training data. In addition, the method includes training a machine learning-based image processing engine using the ground truth image and the synthetic training data.
Type: Application
Filed: August 1, 2023
Publication date: February 6, 2025
Inventors: Abhiram Gnanasambandam, John W. Glotzbach, John Seokjun Lee, Hamid R. Sheikh
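Two of the steps above, mosaicing with a color filter array and applying a sensor noise model, can be sketched directly. The RGGB layout and the Poisson-Gaussian noise model are common choices and are assumptions here; the abstract does not specify either.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample an RGGB Bayer color filter array from a full-color frame."""
    h, w, _ = rgb.shape
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R
    raw[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G
    raw[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G
    raw[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B
    return raw

def add_sensor_noise(raw, gain=0.01, read_sigma=0.002, rng=None):
    """Poisson (shot) plus Gaussian (read) noise, a standard sensor model."""
    rng = np.random.default_rng(0) if rng is None else rng
    shot = rng.poisson(raw / gain) * gain
    return shot + rng.normal(0.0, read_sigma, raw.shape)
```

Repeating this per frame, with the modeled blur and global motion applied first, yields the raw input frames the abstract feeds into the multi-frame pipeline.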
-
Publication number: 20250037237
Abstract: A method includes extracting multiple shallow features from a low-resolution image using a shallow feature extractor that includes a quaternion convolutional network. The method also includes extracting multiple deep features from the multiple shallow features using a deep feature extractor that includes multiple quaternion residual distillation blocks (QRDBs), where each QRDB includes a quaternion self-attention module. The method further includes reconstructing the multiple deep features into a high-resolution image. Each QRDB may further include a quaternion gated deconvolutional feed forward network (QGDFN) configured to suppress one or more of the multiple deep features.
Type: Application
Filed: July 27, 2023
Publication date: January 30, 2025
Inventors: Devendra Kumar Jangid, Abhiram Gnanasambandam, John W. Glotzbach, John Seokjun Lee, Hamid R. Sheikh
-
Patent number: 12200398
Abstract: An apparatus includes at least one processing device configured to obtain input frames from a video. The at least one processing device is also configured to generate a forward flow from a first input frame to a second input frame and a backward flow from the second input frame to the first input frame. The at least one processing device is further configured to generate an occlusion map at an interpolated frame coordinate using the forward flow and the backward flow. The at least one processing device is also configured to generate a consistency map at the interpolated frame coordinate using the forward flow and the backward flow. In addition, the at least one processing device is configured to perform blending using the occlusion map and the consistency map to generate an interpolated frame at the interpolated frame coordinate.
Type: Grant
Filed: February 2, 2022
Date of Patent: January 14, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
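The consistency check and the map-weighted blending can be sketched as follows. This is a simplified stand-in: a real implementation warps the backward flow to the forward flow's coordinates before comparing, and the occlusion map would come from flow divergence or z-buffering, neither of which is shown here.

```python
import numpy as np

def consistency_map(fwd, bwd, thresh=1.0):
    """Forward-backward check: flows are consistent where fwd ~= -bwd.
    (Simplified: compares at the same pixel instead of warping first.)"""
    err = np.linalg.norm(fwd + bwd, axis=-1)
    return (err < thresh).astype(float)

def blend(frame0, frame1, t, map0, map1):
    """Blend two (warped) frames at time t in [0, 1], weighting each pixel
    by its per-frame validity map (e.g. occlusion * consistency)."""
    w0 = (1.0 - t) * map0
    w1 = t * map1
    denom = np.maximum(w0 + w1, 1e-8)  # avoid division by zero
    return (w0[..., None] * frame0 + w1[..., None] * frame1) / denom[..., None]
```

Pixels where one frame's map is zero (occluded or inconsistent) fall back entirely on the other frame.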
-
Patent number: 12192673
Abstract: A method includes obtaining multiple video frames. The method also includes determining whether a bi-directional optical flow between the multiple video frames satisfies an image quality criterion for bi-directional consistency. The method further includes identifying a non-linear curve based on pixel coordinate values from at least two of the video frames. The at least two video frames include first and second video frames. The method also includes generating interpolated video frames between the first and second video frames by applying non-linear interpolation based on the non-linear curve. In addition, the method includes outputting the interpolated video frames for presentation.
Type: Grant
Filed: February 2, 2022
Date of Patent: January 7, 2025
Assignee: Samsung Electronics Co., Ltd.
Inventors: Gyeongmin Choe, Yingmao Li, John Seokjun Lee, Hamid R. Sheikh, Michael O. Polley
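The non-linear curve idea can be illustrated with a per-pixel quadratic fit. Fitting a quadratic is an assumption for the sketch (the abstract only says "non-linear curve"); it captures accelerating motion that linear interpolation would miss.

```python
import numpy as np

def fit_motion_curve(times, positions):
    """Fit x(t) = a*t^2 + b*t + c through tracked pixel positions
    (needs at least three frames for a quadratic)."""
    return np.polyfit(times, positions, deg=2)

def interpolate_position(coeffs, t):
    """Evaluate the fitted curve at an intermediate time t."""
    return np.polyval(coeffs, t)
```

For a pixel accelerating along x(t) = t^2, observed at t = 0, 1, 2, the curve places it at 2.25 at t = 1.5, whereas linear interpolation between t = 1 and t = 2 would give 2.5.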
-
Patent number: 12148175
Abstract: A method includes obtaining a first optical flow vector representing motion between consecutive video frames during a previous time step. The method also includes generating a first predicted optical flow vector from the first optical flow vector using a trained prediction model, where the first predicted optical flow vector represents predicted motion during a current time step. The method further includes refining the first predicted optical flow vector using a trained update model to generate a second optical flow vector representing motion during the current time step. The trained update model uses the first predicted optical flow vector, a video frame of the previous time step, and a video frame of the current time step to generate the second optical flow vector.
Type: Grant
Filed: February 2, 2022
Date of Patent: November 19, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingmao Li, Chenchi Luo, Gyeongmin Choe, John Seokjun Lee
-
Patent number: 12079971
Abstract: A method includes obtaining, using a stationary sensor of an electronic device, multiple image frames including first and second image frames. The method also includes generating, using multiple previously generated motion vectors, a first motion-distorted image frame using the first image frame and a second motion-distorted image frame using the second image frame. The method further includes adding noise to the motion-distorted image frames to generate first and second noisy motion-distorted image frames. The method also includes performing (i) a first multi-frame processing (MFP) operation to generate a ground truth image using the motion-distorted image frames and (ii) a second MFP operation to generate an input image using the noisy motion-distorted image frames.
Type: Grant
Filed: February 7, 2022
Date of Patent: September 3, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yingmao Li, Hamid R. Sheikh, John Seokjun Lee, Youngmin Kim, Jun Ki Cho, Seung-Chul Jeon
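The frame-level mechanics can be sketched in a few lines. The whole-frame shift and the averaging merge are deliberate simplifications: real motion vectors are per-region, and real MFP pipelines align and weight frames rather than averaging.

```python
import numpy as np

def motion_distort(frame, shift):
    """Apply a previously generated motion vector as a whole-frame shift
    (stand-in for per-region motion distortion)."""
    return np.roll(frame, shift, axis=(0, 1))

def mfp_merge(frames):
    """Toy multi-frame processing: merge frames by averaging."""
    return np.mean(np.stack(frames), axis=0)
```

Running `mfp_merge` once on clean motion-distorted frames and once on their noisy counterparts yields the ground-truth/input pair described above.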
-
Publication number: 20240257325
Abstract: A method includes obtaining multiple image frames captured using at least one imaging sensor. The method also includes generating a local tone map, a global tone map look-up table (LUT), and one or more contrast enhancement LUTs based on at least one of the image frames and one or more parameters of the at least one imaging sensor. The method further includes generating a blended and demosaiced image based on the image frames and generating a local tone mapped image based on the blended and demosaiced image and the local tone map. The method also includes adjusting color saturation based on the local tone mapped image to generate a corrected image. In addition, the method includes generating an output image based on the corrected image, the global tone map LUT, and the one or more contrast enhancement LUTs.
Type: Application
Filed: October 12, 2023
Publication date: August 1, 2024
Inventors: Abhiram Gnanasambandam, John W. Glotzbach, Zeeshan Nadir, Gunawath Dilshan Godaliyadda, John Seokjun Lee, Hamid R. Sheikh
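The global tone map LUT step can be sketched as a table lookup. The gamma curve here is a placeholder assumption; the patent derives its LUT from frame statistics and sensor parameters.

```python
import numpy as np

def global_tone_map_lut(gamma=1 / 2.2, size=256):
    """Build a global tone-map LUT (a simple gamma curve as a stand-in)."""
    x = np.linspace(0.0, 1.0, size)
    return np.clip(x ** gamma, 0.0, 1.0)

def apply_lut(img, lut):
    """Map normalized [0, 1] pixel values through the LUT."""
    idx = np.clip((img * (len(lut) - 1)).astype(int), 0, len(lut) - 1)
    return lut[idx]
```

The contrast-enhancement LUTs in the abstract would be applied the same way, chained after the global tone map.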
-
Publication number: 20240233319
Abstract: A method includes obtaining an under-display camera (UDC) image captured using a camera located under a display. The method also includes processing, using at least one processing device of an electronic device, the UDC image based on a machine learning model to restore the UDC image. The method further includes displaying or storing the restored image corresponding to the UDC image. The machine learning model is trained using (i) a ground truth image and (ii) a synthetic image generated using the ground truth image and a point spread function that is based on an optical transmission model of the display.
Type: Application
Filed: October 24, 2022
Publication date: July 11, 2024
Inventors: Yibo Xu, Weidi Liu, Hamid R. Sheikh, John Seokjun Lee
-
Publication number: 20240185431
Abstract: A method includes obtaining a reference frame from among multiple image frames of a scene. The method also includes generating a segmentation mask using the reference frame, where the segmentation mask contains information for separation of foreground and background in the scene. The method further includes applying the segmentation mask to each of the multiple image frames to generate foreground image frames and background image frames. The method also includes performing multi-frame registration on each of the foreground image frames to generate registered foreground image frames. The method further includes performing multi-frame registration on each of the background image frames to generate registered background image frames. In addition, the method includes combining the registered foreground image frames and the registered background image frames to generate a combined registered multi-frame image of the scene.
Type: Application
Filed: December 1, 2022
Publication date: June 6, 2024
Inventors: Yufang Sun, Akira Osamoto, John Seokjun Lee, Hamid R. Sheikh
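The mask-split and recombine steps can be sketched as below. The registration itself (the interesting part) is omitted; the sketch only shows how a binary mask routes pixels into the two separately processed stacks and back.

```python
import numpy as np

def split_by_mask(frames, mask):
    """Split each frame into foreground and background using a binary mask."""
    fg = [f * mask[..., None] for f in frames]
    bg = [f * (1.0 - mask)[..., None] for f in frames]
    return fg, bg

def combine(fg_reg, bg_reg, mask):
    """Recombine separately registered foreground and background frames."""
    return fg_reg * mask[..., None] + bg_reg * (1.0 - mask)[..., None]
```

Registering the foreground and background stacks independently lets each use its own motion model (e.g. a moving subject versus a static backdrop) before the final combine.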
-
Publication number: 20240135673
Abstract: A method includes obtaining an under-display camera (UDC) image captured using a camera located under a display. The method also includes processing, using at least one processing device of an electronic device, the UDC image based on a machine learning model to restore the UDC image. The method further includes displaying or storing the restored image corresponding to the UDC image. The machine learning model is trained using (i) a ground truth image and (ii) a synthetic image generated using the ground truth image and a point spread function that is based on an optical transmission model of the display.
Type: Application
Filed: October 23, 2022
Publication date: April 25, 2024
Inventors: Yibo Xu, Weidi Liu, Hamid R. Sheikh, John Seokjun Lee
-
Publication number: 20240119570
Abstract: A method includes identifying, using at least one processing device of an electronic device, a spatially-variant point spread function associated with an under-display camera. The spatially-variant point spread function is based on an optical transmission model and a layout of a display associated with the under-display camera. The method also includes generating, using the at least one processing device, a ground truth image. The method further includes performing, using the at least one processing device, a convolution of the ground truth image based on the spatially-variant point spread function in order to generate a synthetic sensor image. The synthetic sensor image represents a simulated image captured by the under-display camera. In addition, the method includes providing, using the at least one processing device, the synthetic sensor image and the ground truth image as an image pair to train a machine learning model to perform under-display camera point spread function inversion.
Type: Application
Filed: October 11, 2022
Publication date: April 11, 2024
Inventors: Yibo Xu, Weidi Liu, Hamid R. Sheikh, John Seokjun Lee
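A spatially-variant PSF is often approximated by tiling the image and convolving each tile with its local PSF. The patch-wise scheme and the circular (FFT) convolution are simplifying assumptions for this sketch; real pipelines overlap and blend the tiles to hide seams.

```python
import numpy as np

def render_synthetic_sensor_image(gt, psfs, patch):
    """Approximate spatially-variant blur: convolve each patch of the
    ground truth with its local PSF. `psfs` maps (row, col) patch origins
    to small PSF kernels."""
    out = np.zeros_like(gt, dtype=float)
    for (i0, j0), psf in psfs.items():
        tile = gt[i0:i0 + patch, j0:j0 + patch]
        # Embed the PSF in a tile-sized kernel, centered at the origin,
        # then convolve via FFT (circular boundary).
        k = np.zeros_like(tile, dtype=float)
        kh, kw = psf.shape
        k[:kh, :kw] = psf
        k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
        out[i0:i0 + patch, j0:j0 + patch] = np.real(
            np.fft.ifft2(np.fft.fft2(tile) * np.fft.fft2(k)))
    return out
```

Pairing each rendered sensor image with its ground truth yields the training pairs the abstract describes.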
-
Publication number: 20240062342
Abstract: A method includes obtaining an input image that contains blur. The method also includes providing the input image to a trained machine learning model, where the trained machine learning model includes (i) a shallow feature extractor configured to extract one or more feature maps from the input image and (ii) a deep feature extractor configured to extract deep features from the one or more feature maps. The method further includes using the trained machine learning model to generate a sharpened output image. The trained machine learning model is trained using ground truth training images and input training images, where the input training images include versions of the ground truth training images with blur created using demosaic and noise filtering operations.
Type: Application
Filed: August 18, 2022
Publication date: February 22, 2024
Inventors: Devendra K. Jangid, John Seokjun Lee, Hamid R. Sheikh
-
Patent number: 11869118
Abstract: An apparatus includes at least one memory configured to store an AI network and at least one processor. The at least one processor is configured to generate a dead leaves model. The at least one processor is also configured to capture a ground truth frame from the dead leaves model. The at least one processor is further configured to apply a mathematical noise model to the ground truth frame to produce a noisy frame. In addition, the at least one processor is configured to train the AI network using the ground truth frame and the noisy frame.
Type: Grant
Filed: January 28, 2022
Date of Patent: January 9, 2024
Assignee: Samsung Electronics Co., Ltd.
Inventors: Pavan Chennagiri, John Seokjun Lee, Hamid R. Sheikh
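A dead-leaves model renders random occluding disks, producing edge and texture statistics close to natural images. The sketch below pairs such a frame with a noisy version; the disk-size distribution and the additive Gaussian noise are assumptions, since the abstract does not specify the mathematical models used.

```python
import numpy as np

def dead_leaves(size=64, n_disks=200, rng=None):
    """Render a dead-leaves image: random gray disks drawn back-to-front,
    with later disks occluding earlier ones."""
    rng = np.random.default_rng(0) if rng is None else rng
    img = np.zeros((size, size))
    yy, xx = np.mgrid[0:size, 0:size]
    for _ in range(n_disks):
        cy, cx = rng.uniform(0, size, 2)
        r = rng.uniform(2, size / 8)
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2] = rng.uniform(0, 1)
    return img

def noisy_frame(gt, sigma=0.02, rng=None):
    """Additive Gaussian noise as a simple stand-in for the noise model."""
    rng = np.random.default_rng(1) if rng is None else rng
    return gt + rng.normal(0.0, sigma, gt.shape)
```

Each `(dead_leaves(), noisy_frame(...))` pair is one ground-truth/noisy training example for the denoising network.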
-
Publication number: 20230267702
Abstract: An electronic device is provided. The electronic device includes a display, a camera module disposed under the display, and a processor electrically connected to the display and the camera module. The processor is configured to acquire a sample frame by using the camera module, identify whether a light source object is included in the sample frame, determine an imaging parameter for acquisition of first multiple frames when the light source object is identified to be included in the sample frame, acquire multiple frames, based on the imaging parameter, composite the multiple frames to generate a composite frame, identify an attribute of the light source object included in the composite frame, and perform frame correction of the composite frame, based on the identified attribute.
Type: Application
Filed: April 21, 2023
Publication date: August 24, 2023
Inventors: Woojhon CHOI, Wonjoon DO, Jaesung CHOI, Alok Shankarlal SHUKLA, Manoj Kumar MARRAMREDDY, Saketh SHARMA, Hamid Rahim SHEIKH, John Seokjun LEE, Akira OSAMOTO, Yibo XU
-
Publication number: 20230252608
Abstract: A method includes obtaining, using a stationary sensor of an electronic device, multiple image frames including first and second image frames. The method also includes generating, using multiple previously generated motion vectors, a first motion-distorted image frame using the first image frame and a second motion-distorted image frame using the second image frame. The method further includes adding noise to the motion-distorted image frames to generate first and second noisy motion-distorted image frames. The method also includes performing (i) a first multi-frame processing (MFP) operation to generate a ground truth image using the motion-distorted image frames and (ii) a second MFP operation to generate an input image using the noisy motion-distorted image frames.
Type: Application
Filed: February 7, 2022
Publication date: August 10, 2023
Inventors: Yingmao Li, Hamid R. Sheikh, John Seokjun Lee, Youngmin Kim, Jun Ki Cho, Seung-Chul Jeon
-
Publication number: 20230252770
Abstract: A method for training data generation includes obtaining a first set of image frames of a scene and a second set of image frames of the scene using multiple exposure settings. The method also includes generating an alignment map, a blending map, and an input image using the first set of image frames. The method further includes generating a ground truth image using the alignment map, the blending map, and the second set of image frames. In addition, the method includes using the ground truth image and the input image as an image pair in a training dataset when training a machine learning model to reduce image distortion and noise.
Type: Application
Filed: October 11, 2022
Publication date: August 10, 2023
Inventors: Tyler Luu, John W. Glotzbach, Hamid R. Sheikh, John Seokjun Lee, Youngmin Kim, Jun Ki Cho, Seung-Chul Jeon
-
Patent number: 11720782
Abstract: A method includes obtaining, using at least one processor of an electronic device, multiple calibration parameters associated with multiple sensors of a selected mobile device. The method also includes obtaining, using the at least one processor, an identification of multiple imaging tasks. The method further includes obtaining, using the at least one processor, multiple synthetically-generated scene images. In addition, the method includes generating, using the at least one processor, multiple training images and corresponding meta information based on the calibration parameters, the identification of the imaging tasks, and the scene images. The training images and corresponding meta information are generated concurrently, different ones of the training images correspond to different ones of the sensors, and different pieces of the meta information correspond to different ones of the imaging tasks.
Type: Grant
Filed: December 28, 2020
Date of Patent: August 8, 2023
Assignee: Samsung Electronics Co., Ltd.
Inventors: Chenchi Luo, Gyeongmin Choe, Yingmao Li, Zeeshan Nadir, Hamid R. Sheikh, John Seokjun Lee, Youngjun Yoo