Patents by Inventor Sing Kang

Sing Kang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7408362
    Abstract: An integrated circuit package includes at least two electronic circuits. A first of the at least two electronic circuits includes a digital input, a digital output, and a test mode control line for setting the first electronic circuit into a determined test mode. The digital input includes at least two parallel input paths and the digital output includes at least two parallel output paths. The at least two parallel input paths and at least two parallel output paths provide a corresponding number of internal paths by which the first electronic circuit and a second electronic circuit can be tested essentially simultaneously.
    Type: Grant
    Filed: June 7, 2007
    Date of Patent: August 5, 2008
    Assignee: Infineon Technologies AG
    Inventors: Shakil Ahmad, Poh Sing Kang, Narang Jasmeet Singh
  • Publication number: 20070263119
    Abstract: Foreground object matting uses flash/no-flash image pairs to obtain a flash-only image. A trimap is obtained from the flash-only image. A joint Bayesian algorithm uses the flash-only image, the trimap, and either the image of the scene taken without the flash or the image of the scene taken with the flash to generate a high-quality matte that can be used to extract the foreground from the background.
    Type: Application
    Filed: May 15, 2006
    Publication date: November 15, 2007
    Applicant: Microsoft Corporation
    Inventors: Heung-Yeung Shum, Jian Sun, Sing Kang, Yin Li
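The first step of the abstract above, recovering a flash-only image from a flash/no-flash pair and deriving a coarse trimap from it, can be sketched as follows. This is an illustrative reading of the abstract, not the patented method: the function names, thresholds, and the simple threshold-based trimap are all assumptions (the filing uses a joint Bayesian matting model).

```python
import numpy as np

def flash_only_image(flash, no_flash):
    """Approximate the flash-only image by subtracting the ambient
    (no-flash) exposure from the flash exposure, pixel-wise.
    Inputs are float RGB arrays in [0, 1] of identical shape."""
    diff = flash.astype(np.float64) - no_flash.astype(np.float64)
    return np.clip(diff, 0.0, 1.0)

def coarse_trimap(flash_only, fg_thresh=0.4, bg_thresh=0.05):
    """Build a coarse trimap from the flash-only image: pixels the
    flash brightened strongly are likely foreground, pixels it barely
    reached are likely background, and the rest is unknown.
    Returns 1.0 = foreground, 0.0 = background, 0.5 = unknown."""
    intensity = flash_only.mean(axis=-1)
    trimap = np.full(intensity.shape, 0.5)
    trimap[intensity >= fg_thresh] = 1.0
    trimap[intensity <= bg_thresh] = 0.0
    return trimap
```

In practice the flash mostly illuminates the nearby foreground, so the difference image is bright on the subject and dark on the distant background, which is what makes the thresholding plausible as a starting trimap.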
  • Publication number: 20070177817
    Abstract: An “Image Denoiser” provides a probabilistic process for denoising color images by segmenting an input image into regions, estimating statistics within each region, and then estimating a clean (or denoised) image using a probabilistic model of image formation. In one embodiment, estimated blur between each region is used to reduce artificial sharpening of region boundaries resulting from denoising the input image. In further embodiments, the estimated blur is used for additional purposes, including sharpening edges between one or more regions, and selectively blurring or sharpening one or more specific regions of the image (i.e., “selective focus”) while maintaining the original blurring between the various regions.
    Type: Application
    Filed: January 27, 2006
    Publication date: August 2, 2007
    Applicant: Microsoft Corporation
    Inventors: Richard Szeliski, Sing Kang, Ce Liu, Charles Zitnick
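The "estimate statistics within each region, then estimate a clean image" idea in the abstract above can be illustrated with a toy region-based shrinkage. This is a deliberately simplified stand-in, not the patented probabilistic model: the segmentation is assumed given, and `strength` is an illustrative shrinkage factor.

```python
import numpy as np

def denoise_by_region(img, labels, strength=0.7):
    """Toy region-based denoising: given a segmentation (`labels`,
    same height/width as `img`), estimate each region's mean color
    and pull every pixel toward its region mean. The abstract's full
    model also estimates inter-region blur to avoid artificially
    sharpening region boundaries; that step is omitted here."""
    out = img.astype(np.float64).copy()
    for lab in np.unique(labels):
        mask = labels == lab
        mean = out[mask].mean(axis=0)
        out[mask] = (1.0 - strength) * out[mask] + strength * mean
    return out
```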
  • Publication number: 20070177033
    Abstract: A Bayesian two-color image demosaicer and method for processing a digital color image to demosaic the image in such a way as to reduce image artifacts. The method and system are an improvement on and an enhancement to previous demosaicing techniques. A preliminary demosaicing pass is performed on the image to assign each pixel a fully specified RGB triple color value. The final color value of each pixel in the processed image is restricted to be a linear combination of two colors. The fully specified RGB triple color values for each pixel in an image are used to find two clusters representing the two favored colors. The amount of contribution from these two favored colors to the final color value is then determined. The method and system also can process multiple images to improve the demosaicing results. When using multiple images, sampling can be performed at a finer resolution, known as super-resolution.
    Type: Application
    Filed: January 30, 2006
    Publication date: August 2, 2007
    Applicant: Microsoft Corporation
    Inventors: Eric Bennett, Matthew Uyttendaele, Charles Zitnick, Sing Kang, Richard Szeliski
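The core constraint in the abstract above, restricting a pixel's final color to a linear combination of two favored colors, has a simple closed form: project the observed color onto the line segment between the two cluster colors. The sketch below is an illustrative reading of that one step (the Bayesian clustering that finds the two colors is not shown, and the function name is an assumption).

```python
import numpy as np

def two_color_blend(pixel, c1, c2):
    """Restrict a pixel's final color to a linear combination of two
    favored colors c1 and c2 (each an RGB triple): find the fraction
    `a` minimizing ||pixel - (a*c1 + (1-a)*c2)|| in closed form,
    then clamp it to [0, 1] so the result stays between the colors."""
    pixel, c1, c2 = (np.asarray(v, dtype=np.float64) for v in (pixel, c1, c2))
    d = c1 - c2
    denom = d.dot(d)
    if denom == 0.0:          # the two favored colors coincide
        return c1
    a = np.clip((pixel - c2).dot(d) / denom, 0.0, 1.0)
    return a * c1 + (1.0 - a) * c2
```

Because the output lies on the segment between the two cluster colors, noisy samples that fall off that segment are snapped back onto it, which is what suppresses demosaicing artifacts at edges.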
  • Publication number: 20070153341
    Abstract: An automatic purple fringing removal system and method for automatically eliminating purple-fringed regions from high-resolution images. The technique is based on the observations that purple-fringed regions are often adjacent to near-saturated regions, and that purple-fringed regions are regions in which the blue and red color intensities are substantially greater than the green color intensity. The automatic purple fringing removal system and method implements these two observations by automatically detecting a purple-fringed region in an image and then automatically correcting the region. Automatic detection is achieved by finding near-saturated regions and candidate regions, and then defining a purple-fringed region as a candidate region adjacent to a near-saturated region.
    Type: Application
    Filed: December 30, 2005
    Publication date: July 5, 2007
    Applicant: Microsoft Corporation
    Inventor: Sing Kang
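The two observations in the abstract above translate directly into a per-pixel test. The sketch below is an illustrative implementation under assumed thresholds (`sat_thresh`, `margin`, `radius` are tuning constants not taken from the filing), and the green-substitution correction is one simple fix among the possibilities:

```python
import numpy as np

def detect_purple_fringe(img, sat_thresh=0.95, margin=0.15, radius=3):
    """Flag likely purple-fringed pixels in an RGB image (floats in
    [0, 1]) following the abstract's two observations: (1) blue and
    red intensities substantially exceed the green intensity, and
    (2) the pixel lies near a near-saturated region."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    candidate = (b > g + margin) & (r > g + margin)
    near_sat = img.max(axis=-1) >= sat_thresh
    # crude dilation of the near-saturated mask by shifting it around
    grown = near_sat.copy()
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            grown |= np.roll(np.roll(near_sat, dy, axis=0), dx, axis=1)
    return candidate & grown

def desaturate_fringe(img, mask):
    """Correct flagged pixels by replacing red and blue with the
    green value, i.e. a monochrome substitution in the fringe."""
    out = img.copy()
    out[mask] = img[mask][:, 1:2]  # broadcast green over all channels
    return out
```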
  • Publication number: 20070146506
    Abstract: A system and process for determining the vignetting function of an image and using the function to correct for the vignetting is presented. The image can be any arbitrary image and no other images are required. The system and process is designed to handle both textured and untextured segments in order to maximize the use of available information. To extract vignetting information from an image, segmentation techniques are employed that locate image segments with reliable data for vignetting estimation. Within each image segment, the system and process capitalizes on frequency characteristics and physical properties of vignetting to distinguish it from other sources of intensity variation. The vignetting data acquired from segments are weighted according to a presented reliability measure to promote robustness in estimation.
    Type: Application
    Filed: March 17, 2006
    Publication date: June 28, 2007
    Applicant: Microsoft Corporation
    Inventors: Stephen Lin, Baining Guo, Sing Kang, Yuanjie Zheng
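Single-image vignetting correction of the kind described above is often modeled with an even radial polynomial. The sketch below fits that classic model by weighted least squares over a (presumed flat) image segment; the model choice, the center-level estimate, and the weighting hook are illustrative assumptions, not the filing's estimator, though `weights` plays the role of the abstract's per-segment reliability measure.

```python
import numpy as np

def fit_vignetting(intensity, weights=None):
    """Fit the even-polynomial vignetting model
        V(r) = 1 + a1*r^2 + a2*r^4 + a3*r^6
    to a single-channel image by (optionally weighted) least squares,
    where r is the radius from the image center normalized so the
    corner has r = 1. Returns the coefficients (a1, a2, a3)."""
    h, w = intensity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    r2 = r2 / r2.max()
    center = np.median(intensity[r2 < 0.05])  # unvignetted level estimate
    A = np.stack([r2, r2**2, r2**3], axis=-1).reshape(-1, 3)
    b = (intensity / center - 1.0).ravel()
    if weights is not None:
        sw = np.sqrt(weights.ravel())[:, None]
        A, b = A * sw, b * sw[:, 0]
    coeffs, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coeffs

def correct_vignetting(intensity, coeffs):
    """Divide out the fitted radial falloff to undo the vignetting."""
    h, w = intensity.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r2 = (yy - h / 2) ** 2 + (xx - w / 2) ** 2
    r2 = r2 / r2.max()
    falloff = 1.0 + coeffs[0] * r2 + coeffs[1] * r2**2 + coeffs[2] * r2**3
    return intensity / falloff
```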
  • Publication number: 20070122028
    Abstract: The present symmetric stereo matching technique provides a method for iteratively estimating a minimum energy for occlusion and disparity using belief propagation. The minimum energy is based on an energy minimization framework in which a visibility constraint is embedded. By embedding the visibility constraint, the present symmetric stereo matching technique treats both images equally, instead of treating one as a reference image. The visibility constraint ensures that occlusion in one view and the disparity in another view are consistent.
    Type: Application
    Filed: November 30, 2005
    Publication date: May 31, 2007
    Applicant: Microsoft Corporation
    Inventors: Jian Sun, Yin Li, Sing Kang, Heung-Yeung Shum
  • Publication number: 20070109310
    Abstract: Systems and methods for sketching reality are described. In one aspect, a set of vector primitives is identified from a 2-D sketch. In one implementation, the 2-D sketch is hand-drawn by a user. A 2.5D geometry model is automatically generated from the vector primitives. The 2.5D geometry model is automatically rendered and presented to a user. In one implementation, the user provides 2-D sketch-based user inputs to modify one or more of lighting position, lighting direction, lighting intensity, texture, color, and geometry of the presentation.
    Type: Application
    Filed: October 31, 2006
    Publication date: May 17, 2007
    Applicant: Microsoft Corporation
    Inventors: Ying-Qing Xu, Sing Kang, Heung-Yeung Shum, Xuejin Chen
  • Publication number: 20060228002
    Abstract: A technique for estimating the optical flow between images of a scene and a segmentation of the images is presented. This involves first establishing an initial segmentation of the images and an initial optical flow estimate for each segment of each image and its neighboring image or images. A refined optical flow estimate is computed for each segment of each image from the initial segmentation of that image and the initial optical flow of the segments of that image. Next, the segmentation of each image is refined from the last-computed optical flow estimates for each segment of the image. This process can continue in an iterative manner by further refining the optical flow estimates for the images using their respective last-computed segmentation, followed by further refining the segmentation of each image using their respective last-computed optical flow estimates, until a prescribed number of iterations have been completed.
    Type: Application
    Filed: July 30, 2005
    Publication date: October 12, 2006
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Sing Kang, Nebojsa Jojic
  • Publication number: 20060192785
    Abstract: The illustrated and described embodiments describe techniques for capturing data that describes 3-dimensional (3-D) aspects of a face, transforming facial motion from one individual to another in a realistic manner, and modeling skin reflectance.
    Type: Application
    Filed: April 24, 2006
    Publication date: August 31, 2006
    Applicant: Microsoft Corporation
    Inventors: Stephen Marschner, Brian Guenter, Sashi Raghupathy, Kirk Olynyk, Sing Kang
  • Publication number: 20060133688
    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
    Type: Application
    Filed: January 23, 2006
    Publication date: June 22, 2006
    Applicant: Microsoft Corporation
    Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
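The radiance-map step in the HDR video abstract above, merging registered short and long exposures into per-pixel radiance, can be sketched as a weighted average of per-frame radiance estimates. This is an illustrative simplification: a hat weighting stands in for the abstract's per-pixel trustworthiness test, the camera response is assumed linear, and the function name is an assumption.

```python
import numpy as np

def merge_radiance(frames, exposure_times):
    """Merge registered frames of alternating exposure into a
    radiance map. Each frame (float, [0, 1]) is divided by its
    exposure time to get a per-frame radiance estimate, and the
    estimates are combined with a hat weighting that trusts
    mid-tone pixels most and distrusts clipped or dark ones."""
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for frame, t in zip(frames, exposure_times):
        f = frame.astype(np.float64)
        w = 1.0 - np.abs(2.0 * f - 1.0)   # 0 at extremes, 1 at mid-gray
        num += w * f / t
        den += w
    return num / np.maximum(den, 1e-8)
```

A tone-mapping pass would then compress the resulting radiance map back into an 8-bit displayable frame, as the abstract notes.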
  • Publication number: 20060114253
    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
    Type: Application
    Filed: January 17, 2006
    Publication date: June 1, 2006
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Richard Szeliski, Sing Kang, Matthew Uyttendaele, Simon Winder
  • Publication number: 20060038880
    Abstract: Stereoscopic image display is described. In an embodiment, a location of the eye pupils of a viewer is determined and tracked. An image is displayed within a first focus for viewing with the left eye of the viewer, and the image is displayed within a second focus for viewing with the right eye of the viewer. A positional change of the eye pupils is tracked and a sequential image that corresponds to the positional change of the eye pupils is generated for stereoscopic viewing. In another embodiment, an image is displayed for stereoscopic viewing and a head position of a viewer relative to a center of the displayed image is determined. A positional change of the viewer's head is tracked, and a sequential image that corresponds to the positional change of the viewer's head is generated for stereoscopic viewing.
    Type: Application
    Filed: August 19, 2004
    Publication date: February 23, 2006
    Applicant: Microsoft Corporation
    Inventors: Gary Starkweather, Michael Sinclair, Sing Kang
  • Publication number: 20060038881
    Abstract: Stereoscopic image display is described. In an embodiment, a location of the eye pupils of a viewer is determined and tracked. An image is displayed within a first focus for viewing with the left eye of the viewer, and the image is displayed within a second focus for viewing with the right eye of the viewer. A positional change of the eye pupils is tracked and a sequential image that corresponds to the positional change of the eye pupils is generated for stereoscopic viewing. In another embodiment, an image is displayed for stereoscopic viewing and a head position of a viewer relative to a center of the displayed image is determined. A positional change of the viewer's head is tracked, and a sequential image that corresponds to the positional change of the viewer's head is generated for stereoscopic viewing.
    Type: Application
    Filed: September 23, 2004
    Publication date: February 23, 2006
    Applicant: Microsoft Corporation
    Inventors: Gary Starkweather, Michael Sinclair, Sing Kang
  • Publication number: 20060031915
    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints that form a grid of viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
    Type: Application
    Filed: March 31, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060031917
    Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remaining frames being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique where just one reference frame, and so one chain, is used to predict each non-keyframe, or it can be a multi-directional technique where two or more reference frames, and so chains, are used to predict each non-keyframe.
    Type: Application
    Filed: July 15, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060029134
    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
    Type: Application
    Filed: August 3, 2004
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060028473
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Application
    Filed: August 3, 2004
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060028489
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Application
    Filed: March 31, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060013449
    Abstract: In the described embodiment, methods and systems for processing facial image data for use in animation are described. In one embodiment, a system is provided that illuminates a face with illumination that is sufficient to enable the simultaneous capture of both structure data, e.g. a range or depth map, and reflectance properties, e.g. the diffuse reflectance of a subject's face. This captured information can then be used for various facial animation operations, among which are included expression recognition and expression transformation.
    Type: Application
    Filed: September 1, 2005
    Publication date: January 19, 2006
    Applicant: Microsoft Corporation
    Inventors: Stephen Marschner, Brian Guenter, Sashi Raghupathy, Kirk Olynyk, Sing Kang