Patents by Inventor Kim C. Ng

Kim C. Ng has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20230334637
    Abstract: This application is directed to fusion of two images (e.g., an RGB image and a near infrared (NIR) image) that are captured simultaneously in a scene. A computer system extracts a first luminance component and a first color component from the first image, and extracts a second luminance component from the second image. An infrared emission strength is determined based on the first and second luminance components. The computer system combines the first and second luminance components based on the infrared emission strength to obtain a combined luminance component. The combined luminance component is combined with the first color component to obtain a fused image.
    Type: Application
    Filed: May 9, 2023
    Publication date: October 19, 2023
    Applicant: INNOPEAK TECHNOLOGY, INC.
    Inventors: Kim C. Ng, Jinglin Shen, Chiu Man Ho
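The abstract above outlines a luminance-guided fusion pipeline. The sketch below shows one way the steps could fit together in numpy; the IR-strength formula and the blending weights are assumptions, since the abstract does not give them.

```python
import numpy as np

def rgb_to_luma_chroma(rgb):
    """Split an RGB image (H, W, 3, floats in [0, 1]) into luminance and color."""
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    chroma = rgb - y[..., None]          # color residual, added back after fusion
    return y, chroma

def fuse_nir_rgb(rgb, nir, eps=1e-6):
    """Blend RGB and NIR luminance, weighting NIR by an assumed IR-strength proxy."""
    y_rgb, chroma = rgb_to_luma_chroma(rgb)
    # Assumption: IR strength ~ how much brighter NIR is than visible luminance.
    ir_strength = np.clip((nir - y_rgb) / (nir + y_rgb + eps), 0.0, 1.0)
    y_fused = (1.0 - ir_strength) * y_rgb + ir_strength * nir
    return np.clip(y_fused[..., None] + chroma, 0.0, 1.0)
```

Where NIR is no brighter than the visible luminance, the weight collapses to zero and the RGB image passes through unchanged.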
  • Publication number: 20230281839
    Abstract: This application is directed to image registration. A computer system aligns two images of a scene globally to generate a third image and a fourth image that correspond to the two images and are aligned with each other. The computer system divides each of the third and fourth images into multiple grid cells including a respective first grid cell. The respective first grid cells of the third and fourth images are aligned with each other. For the respective first grid cells, the computer system identifies one or more first feature points, divides each respective first grid cell into a set of sub-cells and updates the first feature points in accordance with a determination that a grid ghosting level is greater than a grid ghosting threshold, and aligns the third and fourth images based on the updated first feature points of the respective first grid cells.
    Type: Application
    Filed: May 10, 2023
    Publication date: September 7, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
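A minimal sketch of the cell-flagging step described above. Feature detection and the actual re-alignment are elided, and the ghosting metric (mean absolute difference per cell) is an assumption; the filing only says cells whose ghosting exceeds a threshold are subdivided and refined.

```python
import numpy as np

def ghosting_level(cell_a, cell_b):
    """Assumed ghosting metric: mean absolute difference between aligned cells."""
    return float(np.mean(np.abs(cell_a.astype(float) - cell_b.astype(float))))

def cells_to_refine(img_a, img_b, grid=4, threshold=0.1):
    """Return (row, col) indices of grid cells whose ghosting exceeds the threshold."""
    h, w = img_a.shape
    ch, cw = h // grid, w // grid
    flagged = []
    for r in range(grid):
        for c in range(grid):
            a = img_a[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
            b = img_b[r*ch:(r+1)*ch, c*cw:(c+1)*cw]
            if ghosting_level(a, b) > threshold:
                flagged.append((r, c))   # would be subdivided and re-aligned
    return flagged
```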
  • Publication number: 20230274403
    Abstract: This disclosure is directed to image fusion. A computer system obtains a near infrared (NIR) image and an RGB image of a scene. A first NIR image layer is generated from the NIR image. A first RGB image layer and a second RGB image layer are generated from the RGB image. The first NIR image layer and first RGB image layer have a first resolution. A depth map is also generated and has the first resolution. Each pixel of the first NIR image layer and a corresponding pixel of the first RGB image layer are combined based on a respective weight to generate a first combined image layer used to reconstruct a fused image. For each pair of pixels of the first NIR and RGB layers, the respective weight is determined based on a depth value of a respective pixel of the depth map and a predefined cutoff depth.
    Type: Application
    Filed: May 10, 2023
    Publication date: August 31, 2023
    Inventors: Jinglin SHEN, Kim C. NG, Jinsong LIAO, Chiuman HO
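The per-pixel weighting described above can be sketched as follows. The abstract says only that each weight depends on the depth value and a predefined cutoff depth; the linear ramp used here is an assumption.

```python
import numpy as np

def depth_weights(depth, cutoff):
    """Assumed weighting: NIR weight ramps from 0 at depth 0 to 1 at the cutoff."""
    return np.clip(depth / cutoff, 0.0, 1.0)

def fuse_by_depth(nir_layer, rgb_layer, depth, cutoff=10.0):
    """Per-pixel blend of same-resolution NIR and RGB layers guided by a depth map."""
    w = depth_weights(depth, cutoff)
    return w * nir_layer + (1.0 - w) * rgb_layer
```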
  • Publication number: 20230267587
    Abstract: A computer system obtains a first image and a second image of a scene, and generates a fused image that combines the first and second images. The fused image is decomposed into a fusion base component and a fusion detail component. The first image is decomposed into a first base component and a first detail component. The computer system combines the first base component of the first image and the fusion detail component of the fused image into a final image.
    Type: Application
    Filed: March 28, 2023
    Publication date: August 24, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
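The base/detail recombination above can be sketched as below. The filing does not name a decomposition filter, so the box blur used as the base-layer extractor is an assumption; any low-pass filter would play the same role.

```python
import numpy as np

def box_blur(img, k=5):
    """Separable-free box filter used here as an assumed base-layer extractor."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def base_detail(img, k=5):
    """Decompose an image into a low-frequency base and a residual detail layer."""
    base = box_blur(img, k)
    return base, img - base

def final_image(first, fused, k=5):
    """Keep the first image's base and graft on the fused image's detail."""
    first_base, _ = base_detail(first, k)
    _, fused_detail = base_detail(fused, k)
    return first_base + fused_detail
```

Because base + detail reconstructs the input exactly, passing the same image as both arguments returns it unchanged.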
  • Publication number: 20230267588
    Abstract: A first image and a second image are captured for a scene and fused to a fused image. The first and fused images correspond to a plurality of color channels in a color space. A first color channel is selected as an anchor channel. An anchor ratio is determined between a first color information item and a second color information item corresponding to the first color channel of the first and fused images, respectively. For each second color channel, a respective corrected color information item is determined based on the anchor ratio and at least a respective third information item of the first image. The second color information item of the first color channel of the fused image is combined with the respective corrected color information item of each second color channel to generate a final image in the color space.
    Type: Application
    Filed: May 1, 2023
    Publication date: August 24, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
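One plausible reading of the anchor-channel correction above, sketched in numpy. The abstract does not give the correction formula; scaling each non-anchor channel of the first image by the per-pixel anchor ratio is an assumption.

```python
import numpy as np

def color_correct(first, fused, anchor=1, eps=1e-6):
    """Keep the fused anchor channel; scale the first image's other channels
    by the per-pixel anchor ratio (an assumed form of the correction)."""
    ratio = (fused[..., anchor] + eps) / (first[..., anchor] + eps)
    out = np.empty_like(fused, dtype=float)
    for ch in range(first.shape[-1]):
        if ch == anchor:
            out[..., ch] = fused[..., ch]           # anchor channel from fused image
        else:
            out[..., ch] = first[..., ch] * ratio   # corrected via anchor ratio
    return out
```

Tying every channel to the same ratio preserves the first image's channel proportions, which is presumably why an anchor channel is used at all.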
  • Publication number: 20230260092
    Abstract: An image is dehazed by using localized white balance adjustment. An input image is obtained and one or more hazy zones are detected in the input image. A predefined portion of pixels having minimum pixel values are identified in each of the one or more hazy zones. The input image is modified to a first image by locally saturating the predefined portion of pixels in each of the one or more hazy zones to a low-end pixel value limit. The input image and the first image are blended to form a target image.
    Type: Application
    Filed: April 4, 2023
    Publication date: August 17, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
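A sketch of the zone-saturation and blending steps described above. Haze detection itself is elided (the caller supplies the zone mask), and the linear stretch used to saturate the darkest pixels is an assumption.

```python
import numpy as np

def dehaze_zone(img, zone_mask, fraction=0.05, low_limit=0.0, blend=0.5):
    """Within the hazy zone, push the darkest `fraction` of pixels to
    `low_limit` by stretching the zone's intensities, then blend with the input."""
    out = img.astype(float).copy()
    vals = out[zone_mask]
    k = max(1, int(fraction * vals.size))
    dark = np.partition(vals, k - 1)[k - 1]        # value of the k-th darkest pixel
    # Linear stretch so the darkest pixels saturate at the low-end limit.
    stretched = np.clip((vals - dark) / (1.0 - dark + 1e-6), low_limit, 1.0)
    out[zone_mask] = stretched
    return blend * img + (1.0 - blend) * out       # blend input and dehazed image
```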
  • Publication number: 20230245290
    Abstract: A method for image fusion includes the following. Two images in an image domain are converted to a first image and a second image in a radiance domain. The first image has a first radiance covering a first dynamic range, and the second image has a second radiance covering a second dynamic range. When the first dynamic range is greater than the second dynamic range, a radiance mapping function is determined between the first and second dynamic ranges to map the second radiance of the second image to the first dynamic range according to the mapping function. The first radiance of the first image is combined with the mapped second radiance of the second image to generate a fused radiance image. The fused radiance image in the radiance domain is converted to a fused pixel image in the image domain.
    Type: Application
    Filed: April 11, 2023
    Publication date: August 3, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
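The radiance-domain workflow above can be sketched as follows. The filing specifies neither the camera response nor the mapping function, so the gamma curve and the linear range mapping here are both assumptions.

```python
import numpy as np

GAMMA = 2.2  # assumed camera response; the filing does not specify one

def to_radiance(img):
    return np.power(img, GAMMA)          # image domain -> radiance domain

def to_image(rad):
    return np.power(rad, 1.0 / GAMMA)    # radiance domain -> image domain

def fuse_radiance(img_a, img_b, w=0.5):
    """Fuse two images in the radiance domain, first mapping the smaller
    dynamic range onto the larger one (linear mapping is an assumption)."""
    ra, rb = to_radiance(img_a), to_radiance(img_b)
    range_a = ra.max() - ra.min()
    range_b = rb.max() - rb.min()
    if range_a >= range_b:
        rb = (rb - rb.min()) / (range_b + 1e-9) * range_a + ra.min()
    else:
        ra = (ra - ra.min()) / (range_a + 1e-9) * range_b + rb.min()
    return to_image(w * ra + (1.0 - w) * rb)
```

Working in radiance rather than pixel values keeps the combination physically meaningful: doubling radiance doubles the contribution, which is not true of gamma-encoded pixels.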
  • Publication number: 20230245289
    Abstract: A method for image fusion includes the following. One or more geometric characteristics of a near infrared (NIR) image and an RGB image are normalized. The normalized NIR image and the normalized RGB image are converted to a first NIR image and a first RGB image in a radiance domain, respectively. The first NIR image is decomposed to an NIR base portion and an NIR detail portion, and the first RGB image is decomposed to an RGB base portion and an RGB detail portion. The NIR base portion, RGB base portion, NIR detail portion and RGB detail portion are combined using a set of weights. A resulting weighted combination of these base and detail portions is converted from the radiance domain to a fused image in an image domain.
    Type: Application
    Filed: March 29, 2023
    Publication date: August 3, 2023
    Inventors: Kim C. NG, Jinglin SHEN, Chiuman HO
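A sketch of the four-way weighted combination at the heart of the abstract above. Normalization and the radiance conversion are elided, and the decomposition filter and the concrete weight values are assumptions.

```python
import numpy as np

def blur(img, k=3):
    """Assumed base-layer extractor: a simple box filter."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
               for dy in range(k) for dx in range(k)) / (k * k)

def fuse_base_detail(nir, rgb_luma, w=(0.3, 0.7, 0.8, 0.2)):
    """w = (NIR base, RGB base, NIR detail, RGB detail) weights, all assumed."""
    nb, rb = blur(nir), blur(rgb_luma)
    nd, rd = nir - nb, rgb_luma - rb
    return w[0] * nb + w[1] * rb + w[2] * nd + w[3] * rd
```

The usual motivation for splitting the weights this way is to take overall brightness mostly from the RGB base while borrowing fine texture from the NIR detail.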
  • Patent number: 7054491
    Abstract: An image processing system that processes, in real time, multiple video images that are different views of the same object, matching features across the images to support three-dimensional motion picture production. The different images are captured by multiple cameras and processed by digital processing equipment to identify features and perform preliminary, two-view feature matching. The image data and matched feature point definitions are communicated to an adjacent camera to support matching across at least two images. The matched feature point data are then transferred to a central computer, which performs a multiple-view correspondence between all of the images.
    Type: Grant
    Filed: November 16, 2001
    Date of Patent: May 30, 2006
    Assignee: STMicroelectronics, Inc.
    Inventors: Peter J. McGuinness, George Q. Chen, Clifford M. Stein, Kim C. Ng
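A toy sketch of the two matching stages the patent describes: pairwise matching between adjacent cameras, then a central step that chains those pairs into multi-view correspondences. The real system runs on per-camera hardware in real time; feature detection is elided here and descriptors are plain vectors, all of which are simplifying assumptions.

```python
import numpy as np

def match_pair(desc_a, desc_b, max_dist=0.5):
    """Two-view matching: nearest neighbour between two descriptor sets."""
    pairs = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:
            pairs.append((i, j))
    return pairs

def chain_views(all_descs):
    """Central step: chain adjacent-view matches into multi-view tracks."""
    tracks = [[i] for i in range(len(all_descs[0]))]
    for a, b in zip(all_descs, all_descs[1:]):
        pairs = dict(match_pair(a, b))
        tracks = [t + [pairs[t[-1]]] for t in tracks if t[-1] in pairs]
    return tracks
```

Each surviving track lists one feature's index in every view, which is exactly the multiple-view correspondence the central computer needs.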
  • Publication number: 20030095711
    Abstract: An image processing system that processes, in real time, multiple video images that are different views of the same object, matching features across the images to support three-dimensional motion picture production. The different images are captured by multiple cameras and processed by digital processing equipment to identify features and perform preliminary, two-view feature matching. The image data and matched feature point definitions are communicated to an adjacent camera to support matching across at least two images. The matched feature point data are then transferred to a central computer, which performs a multiple-view correspondence between all of the images.
    Type: Application
    Filed: November 16, 2001
    Publication date: May 22, 2003
    Applicant: STMICROELECTRONICS, INC.
    Inventors: Peter J. McGuinness, George Q. Chen, Clifford M. Stein, Kim C. Ng