Patents by Inventor Kamal Jnawali

Kamal Jnawali has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240119706
    Abstract: One embodiment provides a method comprising detecting at least one object displayed within at least one input frame of an input video. The method further comprises cropping, from the at least one input frame, at least one cropped image including the at least one object. The method further comprises generating at least one training image by overlaying simulated text on the at least one cropped image. The method further comprises providing the at least one training image to a pruned convolutional neural network (CNN). The pruned CNN learns, from the at least one training image, to reconstruct objects and textual regions during image super-resolution.
    Type: Application
    Filed: September 6, 2023
    Publication date: April 11, 2024
    Inventors: Kamal Jnawali, Tien Cheng Bau, Joonsoo Kim
  • Publication number: 20230360595
    Abstract: One embodiment provides a computer-implemented method that includes receiving region information from a stationary region detection process for a video. A processor performs a flat region ghosting artifact removal process that updates the region information with a flat region indicator utilizing the region information and the video. The processor further performs a region based luminance reduction process utilizing the updated region information with the flat region indicator for display ghosting artifact removal and burn-in protection.
    Type: Application
    Filed: May 4, 2023
    Publication date: November 9, 2023
    Inventors: Kamal Jnawali, Joonsoo Kim, Chenguang Liu, Chang Su
  • Publication number: 20230095237
    Abstract: One embodiment provides a method comprising receiving an input video comprising low-resolution (LR) frames and corresponding super-resolution (SR) frames, and generating a motion-compensated previous SR frame based on a current LR frame of the video and a motion-compensated previous residual frame of the video. The previous SR frame aligns with a current SR frame corresponding to the current LR frame. The method further comprises, in response to determining there is a mismatch between the previous SR frame and the current SR frame, correcting in the current SR frame errors that result from motion compensation based on the motion-compensated previous SR frame. The method further comprises restoring details to the current SR frame that were lost as a result of the correcting, and suppressing flicker of the current SR frame in the frequency domain, resulting in a flicker-suppressed current SR frame for presentation on a display.
    Type: Application
    Filed: August 16, 2022
    Publication date: March 30, 2023
    Inventors: Joonsoo Kim, Tien Cheng Bau, Hrishikesh Deepak Garud, Kamal Jnawali
  • Publication number: 20230047673
    Abstract: One embodiment provides a computer-implemented method that includes adaptively adjusting a detection time interval based on the stationary region type of one or more stationary regions and a scene length in a video. The method further includes tracking pixels of the one or more stationary regions from a number of previous frames to a current frame in the video in real-time. A minimum and a maximum of max-Red-Green-Blue (MaxRGB) pixel values are extracted from each frame in a scene of the video as minimum and maximum temporal feature maps for representing pixel variance over time. Segmentation and block matching are applied on the minimum and maximum temporal feature maps to detect the stationary region type.
    Type: Application
    Filed: August 9, 2022
    Publication date: February 16, 2023
    Inventors: Joonsoo Kim, Kamal Jnawali, Chenguang Liu
  • Publication number: 20230050664
    Abstract: One embodiment provides a computer-implemented method that includes providing a dynamic list structure that stores one or more detected object bounding boxes. Temporal analysis is applied that updates the dynamic list structure with object validation to reduce temporal artifacts. A two-dimensional (2D) buffer is utilized to store a luminance reduction ratio of a whole video frame. The luminance reduction ratio is applied to each pixel in the whole video frame based on the 2D buffer. One or more spatial smoothing filters are applied to the 2D buffer to reduce a likelihood of one or more spatial artifacts occurring in a luminance reduced region.
    Type: Application
    Filed: August 9, 2022
    Publication date: February 16, 2023
    Inventors: Kamal Jnawali, Joonsoo Kim, Chenguang Liu, Chang Su
  • Publication number: 20230041888
    Abstract: One embodiment provides a method comprising generating a first image crop and a second image crop randomly extracted from a low-quality image and a high-quality image, respectively. The method further comprises comparing the first image crop and the second image crop using a plurality of loss functions including pixel-wise loss to calculate losses, and optimizing a model trained to estimate a realistic scale-independent blur kernel of a low-resolution (LR) blurred image by minimizing the losses.
    Type: Application
    Filed: July 19, 2022
    Publication date: February 9, 2023
    Inventors: Kamal Jnawali, Tien Cheng Bau
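The pipeline of publication 20240119706 — crop each detected object from an input frame, then overlay simulated text on the crop to build a training image for a super-resolution network — can be sketched as below. This is a minimal NumPy illustration; the bounding-box format, the `make_training_image` helper, and the pixel-level text overlay are assumptions, not the patented implementation.

```python
import numpy as np

def make_training_image(frame, bbox, text_pixels, value=255):
    # Hypothetical helper: crop the detected object's bounding box out of
    # the frame, then "overlay" simulated text by writing bright pixels
    # at the given (row, col) coordinates inside the crop.
    y0, y1, x0, x1 = bbox
    crop = frame[y0:y1, x0:x1].copy()
    for r, c in text_pixels:
        crop[r, c] = value
    return crop

# Toy grayscale frame with a stand-in "detected object" region.
frame = np.zeros((100, 100), dtype=np.uint8)
frame[40:60, 40:60] = 128
train = make_training_image(frame, (40, 60, 40, 60), [(2, 3), (2, 4)])
```

In this sketch the crop is copied before the overlay, so the source frame stays untouched and many text variants can be generated from one detection.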
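Publication 20230360595's two stages — update the region information with a flat-region indicator, then apply region-based luminance reduction for burn-in protection — might look roughly like the sketch below. The variance threshold, the reduction ratio, the decision to reduce only non-flat regions, and the function names are all illustrative assumptions.

```python
import numpy as np

def flag_flat_regions(frame, regions, var_thresh=4.0):
    # Mark each stationary region as "flat" when its pixel variance is low;
    # a stand-in for the flat-region ghosting-artifact removal step.
    out = []
    for y0, y1, x0, x1 in regions:
        patch = frame[y0:y1, x0:x1].astype(np.float32)
        out.append({"bbox": (y0, y1, x0, x1),
                    "flat": float(patch.var()) < var_thresh})
    return out

def reduce_luminance(frame, regions, ratio=0.9):
    # Region-based luminance reduction: scale pixels only inside regions
    # NOT flagged as flat, so flat areas avoid visible ghosting.
    out = frame.astype(np.float32)
    for r in regions:
        if not r["flat"]:
            y0, y1, x0, x1 = r["bbox"]
            out[y0:y1, x0:x1] *= ratio
    return out.astype(np.uint8)

frame = np.full((8, 8), 200, dtype=np.uint8)
frame[0:4, 0:4] = np.tile(np.array([0, 255], dtype=np.uint8), (4, 2))  # textured patch
regions = flag_flat_regions(frame, [(0, 4, 0, 4), (4, 8, 4, 8)])
dimmed = reduce_luminance(frame, regions)
```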
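For publication 20230095237, the mismatch-gated combination of a motion-compensated previous SR frame with the current SR frame could be sketched as follows. The simple spatial-domain blend shown here stands in for the patent's error correction, detail restoration, and frequency-domain flicker suppression; the threshold and blend weight are assumptions.

```python
import numpy as np

def suppress_flicker(curr_sr, mc_prev_sr, mismatch_thresh=10.0, blend=0.5):
    # Where the motion-compensated previous SR frame agrees with the current
    # SR frame, blend the two to damp frame-to-frame flicker; where they
    # disagree (a motion-compensation error), keep the current frame so the
    # error is not propagated.
    curr = curr_sr.astype(np.float32)
    prev = mc_prev_sr.astype(np.float32)
    agree = np.abs(curr - prev) < mismatch_thresh
    out = curr.copy()
    out[agree] = blend * curr[agree] + (1.0 - blend) * prev[agree]
    return out.astype(np.uint8)

curr = np.full((4, 4), 100, dtype=np.uint8)
mc_prev = np.full((4, 4), 104, dtype=np.uint8)  # small flicker everywhere...
mc_prev[0, 0] = 200                             # ...plus one large mismatch
out = suppress_flicker(curr, mc_prev)
```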
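Publication 20230047673's minimum and maximum MaxRGB temporal feature maps reduce each pixel to its maximum over the color channels, then take the minimum and maximum of that value over the frames of a scene; a small NumPy sketch (the function name is an assumption, and the segmentation/block-matching stages are omitted):

```python
import numpy as np

def temporal_feature_maps(frames):
    # Per-pixel MaxRGB (max over the color channels), then the per-pixel
    # minimum and maximum of that value across all frames in the scene.
    # A small (max - min) spread over time suggests a stationary region.
    maxrgb = np.stack([f.max(axis=-1) for f in frames])  # shape (T, H, W)
    return maxrgb.min(axis=0), maxrgb.max(axis=0)

frames = [np.zeros((2, 2, 3), dtype=np.uint8) for _ in range(5)]
for i, f in enumerate(frames):
    f[0, 0] = (200, 10, 10)   # stationary, logo-like pixel
    f[1, 1] = (40 * i, 0, 0)  # pixel that varies over time
mn, mx = temporal_feature_maps(frames)
```

The stationary pixel yields identical minimum and maximum maps, while the time-varying pixel shows a wide spread — exactly the variance signal the feature maps are meant to expose.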
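Publication 20230050664's 2D buffer of per-pixel luminance reduction ratios, smoothed to avoid spatial artifacts at region borders, might be sketched as below. The box filter stands in for the unspecified spatial smoothing filters; the ratio value and function names are assumptions.

```python
import numpy as np

def build_ratio_buffer(shape, boxes, ratio=0.8):
    # 2D buffer holding a per-pixel luminance reduction ratio: 1.0 (no
    # reduction) everywhere except inside validated object bounding boxes.
    buf = np.ones(shape, dtype=np.float32)
    for y0, y1, x0, x1 in boxes:
        buf[y0:y1, x0:x1] = ratio
    return buf

def smooth_buffer(buf, k=3):
    # Box-filter smoothing of the ratio buffer so the reduction ramps up
    # gradually at region borders instead of producing a hard edge.
    pad = k // 2
    padded = np.pad(buf, pad, mode="edge")
    out = np.zeros_like(buf)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + buf.shape[0], dx:dx + buf.shape[1]]
    return out / (k * k)

buf = build_ratio_buffer((10, 10), [(3, 7, 3, 7)])
smoothed = smooth_buffer(buf)
```

Applying the smoothed buffer is then a single elementwise multiply with the frame's luminance, which is why storing the ratios for the whole frame in one 2D buffer is convenient.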
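Publication 20230041888 pairs random crops from a low-quality and a high-quality image and scores them with pixel-wise loss to optimize a blur-kernel estimation model; a minimal sketch of those two pieces follows (the crop logic and the L1 loss are illustrative assumptions, and the model optimization loop is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_pair_crop(lq, hq, size):
    # Independently locate one random crop in the low-quality image and one
    # in the high-quality image (crop locations are not assumed to match).
    def crop(img):
        y = int(rng.integers(0, img.shape[0] - size + 1))
        x = int(rng.integers(0, img.shape[1] - size + 1))
        return img[y:y + size, x:x + size]
    return crop(lq), crop(hq)

def pixel_loss(a, b):
    # Pixel-wise L1 loss, one of the plurality of losses to be minimized.
    return float(np.abs(a.astype(np.float32) - b.astype(np.float32)).mean())

lq = np.arange(64, dtype=np.uint8).reshape(8, 8)
hq = lq[::-1].copy()
a, b = random_pair_crop(lq, hq, 4)
```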