Patents by Inventor Sing Bing Kang

Sing Bing Kang has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20210250531
    Abstract: An “Adaptive Exposure Corrector” performs automated real-time exposure correction of individual images or image sequences of arbitrary length. “Exposure correction” is defined herein as automated adjustments or corrections to any combination of shadows, highlights, high-frequency features, and color saturation of images. The Adaptive Exposure Corrector outputs perceptually improved images by applying a variety of noise-aware image processing functions whose exposure corrections account for image ISO, camera ISO capabilities, and camera noise characteristics. An initial calibration process adapts these noise-aware image processing functions to the noise characteristics of particular camera models and types in combination with particular camera ISO settings. More specifically, this calibration process precomputes a Noise Aware Scaling Function (NASF) and a Color Scalar Function (CSF).
    Type: Application
    Filed: July 26, 2016
    Publication date: August 12, 2021
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Lu YUAN, Sing Bing KANG, Chintan A. SHAH
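    A minimal sketch (editor's illustration, not code from the application) of the kind of noise-aware scaling described above, assuming the precomputed NASF and CSF can be reduced to ISO-dependent gain scalars; the function names, attenuation curve, and gain values are invented for illustration:

        import numpy as np

        def noise_aware_gain(iso, base_gain, iso_floor=100, iso_ceiling=6400):
            # Illustrative NASF-style scalar: attenuate tone-adjustment gain as ISO
            # (a proxy for sensor noise) rises, so shadows are lifted less on noisy
            # captures.  The curve shape is an assumption, not the patented function.
            t = np.clip((np.log2(iso) - np.log2(iso_floor)) /
                        (np.log2(iso_ceiling) - np.log2(iso_floor)), 0.0, 1.0)
            return 1.0 + (base_gain - 1.0) * (1.0 - t)

        def correct_exposure(img, iso, shadow_gain=1.8, saturation_gain=1.2):
            # Hypothetical noise-aware shadow lift and color-saturation scaling
            # applied to a float RGB image in [0, 1].
            g = noise_aware_gain(iso, shadow_gain)
            luma = img.mean(axis=-1, keepdims=True)
            lifted = img * (1.0 + (g - 1.0) * (1.0 - luma))   # lift dark regions more
            s = noise_aware_gain(iso, saturation_gain)        # CSF-like color scalar
            mean = lifted.mean(axis=-1, keepdims=True)
            out = mean + s * (lifted - mean)                  # scale chroma about luma
            return np.clip(out, 0.0, 1.0)

        frame = np.random.rand(4, 4, 3).astype(np.float32)
        print(correct_exposure(frame, iso=1600).shape)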
  • Patent number: 10964053
    Abstract: Computing devices and methods for estimating a pose of a user computing device are provided. In one example a 3D map comprising a plurality of 3D points representing a physical environment is obtained. Each 3D point is transformed into a 3D line that passes through the point to generate a 3D line cloud. A query image of the environment captured by a user computing device is received, the query image comprising query features that correspond to the environment. Using the 3D line cloud and the query features, a pose of the user computing device with respect to the environment is estimated.
    Type: Grant
    Filed: July 2, 2018
    Date of Patent: March 30, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Narayan Sinha, Pablo Alejandro Speciale, Sing Bing Kang, Marc Andre Leon Pollefeys
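    A minimal sketch of the lifting step described in the abstract above: each 3D map point is replaced by a 3D line through that point along a random direction, producing the 3D line cloud used for pose estimation. The (origin, unit-direction) representation is an assumption made for illustration:

        import numpy as np

        def lift_points_to_line_cloud(points_3d, rng=None):
            # Replace each 3D point with a line through it along a random unit
            # direction.  Returns (origins, directions); downstream pose estimation
            # would match 2D query features against these lines instead of points.
            rng = np.random.default_rng(rng)
            directions = rng.normal(size=points_3d.shape)
            directions /= np.linalg.norm(directions, axis=1, keepdims=True)
            return points_3d.copy(), directions

        pts = np.random.rand(1000, 3)
        origins, dirs = lift_points_to_line_cloud(pts, rng=0)
        print(origins.shape, dirs.shape)   # (1000, 3) (1000, 3)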
  • Patent number: 10929658
    Abstract: Systems and methods are provided for stereo matching based upon active illumination, in which a patch in a non-actively illuminated image is used to obtain weights for patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations are used to determine the similarity of patches corresponding to the pixels. The adaptive support weights for these computations are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
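    A minimal sketch of the adaptive support weights idea described above, using the standard bilateral-style weight formulation: weights are computed from a patch of the non-actively illuminated ("clean") image and then applied when scoring similarity between patches of the actively illuminated stereo pair. Parameter values are illustrative only:

        import numpy as np

        def support_weights(clean_patch, sigma_c=10.0, sigma_s=5.0):
            # Bilateral-style weights from the clean patch: pixels that are similar
            # in color to the patch center and spatially close to it get high weight.
            h, w = clean_patch.shape[:2]
            cy, cx = h // 2, w // 2
            color_d = np.linalg.norm(clean_patch - clean_patch[cy, cx], axis=-1)
            yy, xx = np.mgrid[0:h, 0:w]
            spatial_d = np.hypot(yy - cy, xx - cx)
            return np.exp(-color_d / sigma_c - spatial_d / sigma_s)

        def weighted_patch_cost(left_ir, right_ir, weights):
            # Weighted absolute-difference cost between actively illuminated patches.
            diff = np.abs(left_ir.astype(np.float32) - right_ir.astype(np.float32))
            return float((weights * diff).sum() / weights.sum())

        clean = np.random.rand(7, 7, 3) * 255
        w = support_weights(clean)
        print(weighted_patch_cost(np.random.rand(7, 7), np.random.rand(7, 7), w))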
  • Patent number: 10928189
    Abstract: The subject disclosure is directed towards projecting light in a pattern that contains components (e.g., spots) having different intensities. The pattern may be based upon a grid of initial points associated with first intensities and points between the initial points with second intensities, and so on. The pattern may be rotated relative to cameras that capture the pattern, with the captured images used for active depth sensing based upon stereo matching of dots in stereo images.
    Type: Grant
    Filed: March 5, 2018
    Date of Patent: February 23, 2021
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Andreas Georgiou, Richard S. Szeliski
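    A minimal sketch of a projected pattern with two intensity levels laid out as the abstract above suggests: bright spots on a coarse grid and dimmer spots at the midpoints between them. Grid spacing and intensity values are arbitrary illustrative choices:

        import numpy as np

        def make_two_level_dot_pattern(height, width, step=8, bright=1.0, dim=0.5):
            # Bright dots on a coarse grid; dimmer dots at the midpoints between them.
            pattern = np.zeros((height, width), dtype=np.float32)
            pattern[::step, ::step] = bright            # initial grid points
            half = step // 2
            pattern[half::step, half::step] = dim       # points in between
            return pattern

        proj = make_two_level_dot_pattern(480, 640)
        print(proj.shape, np.unique(proj))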
  • Patent number: 10839556
    Abstract: A method for estimating a camera pose includes recognizing a three-dimensional (3D) map representing a physical environment, the 3D map including 3D map features defined as 3D points. An obfuscated image representation is received, the representation derived from an original unobfuscated image of the physical environment captured by a camera. The representation includes a plurality of obfuscated features, each including (i) a two-dimensional (2D) line that passes through a 2D point in the original unobfuscated image at which an image feature was detected, and (ii) a feature descriptor that describes the image feature associated with the 2D point that the 2D line of the obfuscated feature passes through. Correspondences are determined between the obfuscated features and the 3D map features of the 3D map of the physical environment. Based on the determined correspondences, a six degree of freedom pose of the camera in the physical environment is estimated.
    Type: Grant
    Filed: October 23, 2018
    Date of Patent: November 17, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Narayan Sinha, Marc Andre Leon Pollefeys, Sing Bing Kang, Pablo Alejandro Speciale
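    A minimal sketch of the obfuscation step described above: each detected 2D keypoint is replaced by a 2D line through it with a random direction, while its feature descriptor is retained so correspondences with the 3D map can still be found. The data layout is an assumption made for illustration:

        import numpy as np

        def obfuscate_keypoints(keypoints_2d, descriptors, rng=None):
            # Replace each 2D keypoint (x, y) with a line through it: a point on the
            # line plus a random unit direction, paired with its original descriptor.
            rng = np.random.default_rng(rng)
            angles = rng.uniform(0.0, np.pi, size=len(keypoints_2d))
            directions = np.stack([np.cos(angles), np.sin(angles)], axis=1)
            lines = [{"point": p, "direction": d, "descriptor": f}
                     for p, d, f in zip(keypoints_2d, directions, descriptors)]
            return lines

        kps = np.random.rand(500, 2) * 640
        desc = np.random.rand(500, 128).astype(np.float32)
        print(len(obfuscate_keypoints(kps, desc, rng=0)))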
  • Patent number: 10816331
    Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. By capturing the moving light pattern over a set of frames, e.g., with a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, depth information may be computed at a sub-pixel level, i.e., at higher resolution than the native camera resolution.
    Type: Grant
    Filed: February 5, 2018
    Date of Patent: October 27, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Shahram Izadi
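    A minimal sketch of one ingredient the abstract above relies on, estimating light intensity at sub-pixel locations of a captured frame, here using plain bilinear interpolation as a stand-in:

        import numpy as np

        def sample_subpixel(frame, xs, ys):
            # Bilinearly interpolate frame intensities at fractional (x, y) positions.
            x0, y0 = np.floor(xs).astype(int), np.floor(ys).astype(int)
            x1 = np.clip(x0 + 1, 0, frame.shape[1] - 1)
            y1 = np.clip(y0 + 1, 0, frame.shape[0] - 1)
            fx, fy = xs - x0, ys - y0
            top = frame[y0, x0] * (1 - fx) + frame[y0, x1] * fx
            bot = frame[y1, x0] * (1 - fx) + frame[y1, x1] * fx
            return top * (1 - fy) + bot * fy

        img = np.random.rand(480, 640).astype(np.float32)
        print(sample_subpixel(img, np.array([10.25, 99.5]), np.array([3.75, 200.0])))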
  • Patent number: 10726255
    Abstract: Systems and methods are provided for stereo matching based upon active illumination, in which a patch in a non-actively illuminated image is used to obtain weights for patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations are used to determine the similarity of patches corresponding to the pixels. The adaptive support weights for these computations are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Grant
    Filed: June 21, 2013
    Date of Patent: July 28, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
  • Patent number: 10671895
    Abstract: A “Best of Burst Selector,” or “BoB Selector,” automatically selects a subjectively best image from a single set of images of a scene captured in a burst or continuous capture mode, captured as a video sequence, or captured as multiple images of the scene over any arbitrary period of time and any arbitrary timing between images. This set of images is referred to as a burst set. Selection of the subjectively best image is achieved in real-time by applying a machine-learned model to the burst set. The machine-learned model of the BoB Selector is trained to select one or more subjectively best images from the burst set in a way that closely emulates human selection based on subjective subtleties of human preferences. Images automatically selected by the BoB Selector are presented to a user or saved for further processing.
    Type: Grant
    Filed: December 12, 2016
    Date of Patent: June 2, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang, Joshua Bryan Weisberg
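    A minimal sketch of the selection flow described above, with a simple sharpness measure (Laplacian variance) standing in for the machine-learned model; only the score-and-argmax structure is meant to be illustrative:

        import numpy as np

        def laplacian_variance(gray):
            # Stand-in quality score: variance of a simple Laplacian (sharpness).
            lap = (-4 * gray[1:-1, 1:-1] + gray[:-2, 1:-1] + gray[2:, 1:-1]
                   + gray[1:-1, :-2] + gray[1:-1, 2:])
            return float(lap.var())

        def select_best_of_burst(burst, score_fn=laplacian_variance):
            # Score every frame in the burst set and return the highest-scoring one.
            # In the patented system, score_fn would be the trained model.
            scores = [score_fn(frame) for frame in burst]
            return int(np.argmax(scores)), scores

        burst = [np.random.rand(64, 64) for _ in range(8)]
        idx, scores = select_best_of_burst(burst)
        print("best frame:", idx)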
  • Publication number: 20200151849
    Abstract: According to implementations of the subject matter, a solution is provided for visual style transfer of images. In this solution, first and second sets of feature maps are extracted for first and second source images, respectively, with a feature map in the first or second set representing at least a part of the visual style of the corresponding source image. A first mapping from the first source image to the second source image is determined based on the first and second sets of feature maps. The first source image is transferred based on the first mapping and the second source image to generate a first target image at least partially having the visual style of the second source image. Through this solution, the visual style of a source image can be effectively applied to a further source image in feature space.
    Type: Application
    Filed: April 6, 2018
    Publication date: May 14, 2020
    Applicant: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Jing LIAO, Lu YUAN, Gang HUA, Sing Bing KANG
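    A minimal sketch of the mapping step described above, assuming feature maps have already been extracted (e.g., by a pretrained network): for each spatial location of the first feature map, the most similar feature vector in the second feature map is found, giving a dense mapping in feature space. The brute-force nearest-neighbor search is a simplification used only for illustration:

        import numpy as np

        def nearest_neighbor_mapping(feat_a, feat_b):
            # feat_a: (Ha, Wa, C), feat_b: (Hb, Wb, C).  For each location in
            # feat_a, return the (y, x) of the most similar vector in feat_b.
            ha, wa, c = feat_a.shape
            hb, wb, _ = feat_b.shape
            a = feat_a.reshape(-1, c)
            b = feat_b.reshape(-1, c)
            a_n = a / (np.linalg.norm(a, axis=1, keepdims=True) + 1e-8)
            b_n = b / (np.linalg.norm(b, axis=1, keepdims=True) + 1e-8)
            idx = np.argmax(a_n @ b_n.T, axis=1)        # cosine similarity argmax
            return np.stack(np.unravel_index(idx, (hb, wb)), axis=1).reshape(ha, wa, 2)

        fa = np.random.rand(8, 8, 64).astype(np.float32)
        fb = np.random.rand(8, 8, 64).astype(np.float32)
        print(nearest_neighbor_mapping(fa, fb).shape)   # (8, 8, 2)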
  • Publication number: 20200126256
    Abstract: A method for estimating a camera pose includes recognizing a three-dimensional (3D) map representing a physical environment, the 3D map including 3D map features defined as 3D points. An obfuscated image representation is received, the representation derived from an original unobfuscated image of the physical environment captured by a camera. The representation includes a plurality of obfuscated features, each including (i) a two-dimensional (2D) line that passes through a 2D point in the original unobfuscated image at which an image feature was detected, and (ii) a feature descriptor that describes the image feature associated with the 2D point that the 2D line of the obfuscated feature passes through. Correspondences are determined between the obfuscated features and the 3D map features of the 3D map of the physical environment. Based on the determined correspondences, a six degree of freedom pose of the camera in the physical environment is estimated.
    Type: Application
    Filed: October 23, 2018
    Publication date: April 23, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Narayan SINHA, Marc Andre Leon POLLEFEYS, Sing Bing KANG, Pablo Alejandro SPECIALE
  • Patent number: 10609284
    Abstract: Hyperlapse results are generated from wide-angled, panoramic video. A set of wide-angled, panoramic video data is obtained. Video stabilization is performed on the obtained set of wide-angled, panoramic video data. Without user intervention, a smoothed camera path is automatically determined using at least one region of interest, which is identified using saliency detection and semantically segmented frames of the stabilized video data resulting from the video stabilization. A set of frames is determined so as to vary the velocity of the wide-angled, panoramic rendered display of the hyperlapse results.
    Type: Grant
    Filed: April 5, 2017
    Date of Patent: March 31, 2020
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Neel Suresh Joshi, Christopher J. Buehler, Wei-Sheng Lai, Yujia Huang
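    A minimal sketch of two ingredients mentioned in the abstract above: smoothing a per-frame camera parameter and varying playback velocity around regions of interest. A 1D Gaussian smoother and a saliency-driven frame selector stand in for the full stabilization and semantic-segmentation pipeline; all parameters are illustrative:

        import numpy as np

        def smooth_path(path, sigma=5.0, radius=15):
            # Gaussian-smooth a per-frame camera parameter (e.g., pan angle).
            k = np.exp(-0.5 * (np.arange(-radius, radius + 1) / sigma) ** 2)
            k /= k.sum()
            return np.convolve(path, k, mode="same")

        def pick_frames(saliency, base_skip=8, roi_skip=2):
            # Advance quickly through low-saliency spans and slowly through
            # high-saliency ones, varying the apparent hyperlapse velocity.
            frames, i = [], 0
            threshold = np.percentile(saliency, 75)
            while i < len(saliency):
                frames.append(i)
                i += roi_skip if saliency[i] >= threshold else base_skip
            return frames

        pan = np.cumsum(np.random.randn(300))
        sal = np.random.rand(300)
        print(len(smooth_path(pan)), len(pick_frames(sal)))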
  • Patent number: 10530991
    Abstract: An “Exposure Controller” provides various techniques for training and applying a deep convolution network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types in a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller to implement this functionality first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (e.g., exposure control and image capture) provided by the Exposure Controller provides real-time performance for predicting and setting camera exposure values to improve overall visual quality of the resulting image over a wide range of image capture scenarios (e.g., back-lit scenes, front lighting, rapid changes to lighting conditions, etc.).
    Type: Grant
    Filed: August 7, 2017
    Date of Patent: January 7, 2020
    Assignee: MICROSOFT TECHNOLOGY LICENSING, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang
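    A minimal sketch of the two-stage training idea outlined above: a small exposure-prediction model is first fitted (supervised) to mimic a baseline metering rule, then refined with a simple reward-driven perturbation search standing in for reinforcement learning. The linear model, features, and reward are placeholders, not the patent's deep network:

        import numpy as np

        rng = np.random.default_rng(0)
        features = rng.normal(size=(256, 4))                  # per-scene features (assumed)
        baseline_ev = features @ np.array([0.5, -0.2, 0.1, 0.3])  # "integral metering" anchor

        w = np.zeros(4)
        for _ in range(200):                                  # stage 1: supervised mimic
            grad = features.T @ (features @ w - baseline_ev) / len(features)
            w -= 0.1 * grad

        def reward(ev_pred, ev_best):
            return -np.abs(ev_pred - ev_best)                 # placeholder quality reward

        ev_best = baseline_ev + 0.3                           # pretend better exposures exist
        for _ in range(200):                                  # stage 2: reward-driven refinement
            noise = rng.normal(scale=0.1, size=4)
            if reward(features @ (w + noise), ev_best).mean() > reward(features @ w, ev_best).mean():
                w = w + noise                                 # keep perturbations that raise reward

        print(np.round(w, 2))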
  • Publication number: 20200005486
    Abstract: Computing devices and methods for estimating a pose of a user computing device are provided. In one example a 3D map comprising a plurality of 3D points representing a physical environment is obtained. Each 3D point is transformed into a 3D line that passes through the point to generate a 3D line cloud. A query image of the environment captured by a user computing device is received, the query image comprising query features that correspond to the environment. Using the 3D line cloud and the query features, a pose of the user computing device with respect to the environment is estimated.
    Type: Application
    Filed: July 2, 2018
    Publication date: January 2, 2020
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Sudipta Narayan SINHA, Pablo Alejandro SPECIALE, Sing Bing KANG, Marc Andre Leon POLLEFEYS
  • Patent number: 10268885
    Abstract: The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating IR components in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration model the correction data needed for online correction of an image.
    Type: Grant
    Filed: June 11, 2013
    Date of Patent: April 23, 2019
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Sing Bing Kang, Adam G. Kirk
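    A minimal sketch of the online correction step implied by the abstract above: given per-channel IR-contamination fractions from an offline calibration (here a single invented value per channel rather than the patent's tables or curves), the estimated IR component is subtracted from the raw R, G, B responses:

        import numpy as np

        # Hypothetical calibration: fraction of the measured IR signal that leaks
        # into each color channel (would come from the offline calibration data).
        IR_LEAK = np.array([0.20, 0.15, 0.25])   # R, G, B

        def correct_rgb_for_ir(rgb, ir):
            # Subtract the calibrated IR contribution from each color channel.
            # rgb: (..., 3) raw responses; ir: (...,) measured IR at the same photosites.
            corrected = rgb - ir[..., None] * IR_LEAK
            return np.clip(corrected, 0.0, None)

        raw_rgb = np.random.rand(4, 4, 3)
        raw_ir = np.random.rand(4, 4) * 0.3
        print(correct_rgb_for_ir(raw_rgb, raw_ir).shape)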
  • Publication number: 20180260623
    Abstract: The subject disclosure is directed towards projecting light in a pattern that contains components (e.g., spots) having different intensities. The pattern may be based upon a grid of initial points associated with first intensities and points between the initial points with second intensities, and so on. The pattern may be rotated relative to cameras that capture the pattern, with the captured images used for active depth sensing based upon stereo matching of dots in stereo images.
    Type: Application
    Filed: March 5, 2018
    Publication date: September 13, 2018
    Inventors: Sing Bing KANG, Andreas GEORGIOU, Richard S. SZELISKI
  • Publication number: 20180220061
    Abstract: An “Exposure Controller” provides various techniques for training and applying a deep convolution network to provide real-time automated camera exposure control, as a real-time function of scene semantic context, in a way that improves image quality for a wide range of image subject types in a wide range of real-world lighting conditions. The deep learning approach applied by the Exposure Controller to implement this functionality first uses supervised learning to achieve a good anchor point that mimics integral exposure control for a particular camera model or type, followed by refinement through reinforcement learning. The end-to-end system (e.g., exposure control and image capture) provided by the Exposure Controller provides real-time performance for predicting and setting camera exposure values to improve overall visual quality of the resulting image over a wide range of image capture scenarios (e.g., back-lit scenes, front lighting, rapid changes to lighting conditions, etc.).
    Type: Application
    Filed: August 7, 2017
    Publication date: August 2, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Baoyuan Wang, Sing Bing Kang
  • Publication number: 20180218210
    Abstract: Aspects of the subject disclosure are directed towards safely projecting a diffracted light pattern, such as in an infrared laser-based projection/illumination system. Non-diffracted (zero-order) light is refracted once to diffuse (defocus) the non-diffracted light to an eye safe level. Diffracted (non-zero-order) light is aberrated twice, e.g., once as part of diffraction by a diffracting optical element encoded with a Fresnel lens (which does not aberrate the non-diffracted light), and another time to cancel out the other aberration; the two aberrations may occur in either order. Various alternatives include upstream and downstream positioning of the diffracting optical element relative to a refractive optical element, and/or refraction via positive and negative lenses.
    Type: Application
    Filed: March 27, 2018
    Publication date: August 2, 2018
    Inventors: Andreas Georgiou, Joel Steven Kollin, Sing Bing Kang
  • Publication number: 20180173947
    Abstract: The subject disclosure is directed towards active depth sensing based upon moving a projector or projector component to project a moving light pattern into a scene. By capturing the moving light pattern over a set of frames, e.g., with a stereo camera system, and estimating light intensity at sub-pixel locations in each stereo frame, depth information may be computed at a sub-pixel level, i.e., at higher resolution than the native camera resolution.
    Type: Application
    Filed: February 5, 2018
    Publication date: June 21, 2018
    Inventors: Sing Bing KANG, Shahram IZADI
  • Publication number: 20180121733
    Abstract: A “Quality Predictor” applies a machine-learned quality model to predict subjective quality of an output video of an image sequence processing algorithm without actually running that algorithm on a temporal sequence of image frames (referred to as “candidate sets”). Candidate sets having sufficiently high predicted quality scores are processed by the image sequence processing algorithm to produce an output video. Therefore, the Quality Predictor reduces computational overhead by eliminating unnecessary processing of candidate sets when the image sequence processing algorithm is not expected to produce acceptable results. The quality model is trained on a combination of human quality scores of output videos generated by the image sequence processing algorithm and image features extracted from frames of image sequences used to generate those output videos.
    Type: Application
    Filed: October 27, 2016
    Publication date: May 3, 2018
    Applicant: Microsoft Technology Licensing, LLC
    Inventors: Neel Suresh Joshi, Baoyuan Wang, Sing Bing Kang
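    A minimal sketch of the gating logic described above: features are extracted from a candidate set, a quality model predicts a score, and the expensive image sequence processing algorithm runs only when the predicted score clears a threshold. The feature extractor, linear model, and threshold are stand-ins for the trained quality model:

        import numpy as np

        def extract_features(frames):
            # Stand-in per-sequence features: mean brightness and mean frame difference.
            stack = np.stack(frames).astype(np.float32)
            motion = np.abs(np.diff(stack, axis=0)).mean()
            return np.array([stack.mean(), motion])

        def predict_quality(features, weights=np.array([0.4, -1.5]), bias=0.6):
            # Stand-in linear quality model (the patent uses a trained model).
            return float(features @ weights + bias)

        def maybe_process(candidate_set, expensive_algorithm, threshold=0.5):
            # Skip the expensive algorithm when the predicted quality is too low.
            score = predict_quality(extract_features(candidate_set))
            return expensive_algorithm(candidate_set) if score >= threshold else None

        frames = [np.random.rand(32, 32) for _ in range(10)]
        print(maybe_process(frames, lambda seq: "output video"))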
  • Patent number: 9959465
    Abstract: Aspects of the subject disclosure are directed towards safely projecting a diffracted light pattern, such as in an infrared laser-based projection/illumination system. Non-diffracted (zero-order) light is refracted once to diffuse (defocus) the non-diffracted light to an eye safe level. Diffracted (non-zero-order) light is aberrated twice, e.g., once as part of diffraction by a diffracting optical element encoded with a Fresnel lens (which does not aberrate the non-diffracted light), and another time to cancel out the other aberration; the two aberrations may occur in either order. Various alternatives include upstream and downstream positioning of the diffracting optical element relative to a refractive optical element, and/or refraction via positive and negative lenses.
    Type: Grant
    Filed: June 20, 2013
    Date of Patent: May 1, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Andreas Georgiou, Joel Steven Kollin, Sing Bing Kang