Patents by Inventor Adam G. Kirk

Adam G. Kirk has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 9098908
    Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
    Type: Grant
    Filed: October 21, 2011
    Date of Patent: August 4, 2015
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
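    The granted claims above walk through dot detection, descriptor matching, disparity computation, and disparity-to-depth conversion. A minimal Python sketch of the matching and conversion steps follows; the function names, the one-pixel epipolar tolerance, and the descriptor format are illustrative assumptions, not details from the patent:

    ```python
    import numpy as np

    def depth_from_disparity(disparity, focal_px, baseline_m):
        """Convert a disparity map (pixels) to depth (meters) for a
        rectified stereo pair: Z = f * B / d."""
        d = np.asarray(disparity, dtype=float)
        depth = np.full_like(d, np.inf)     # zero disparity -> infinitely far
        valid = d > 0
        depth[valid] = focal_px * baseline_m / d[valid]
        return depth

    def match_dots(left_dots, right_dots, left_desc, right_desc):
        """Match detected dots between rectified stereo images by nearest
        feature descriptor on (roughly) the same scanline; returns the
        per-dot disparities."""
        disparities = []
        for (xl, yl), dl in zip(left_dots, left_desc):
            best, best_dist = None, np.inf
            for (xr, yr), dr in zip(right_dots, right_desc):
                if abs(yr - yl) > 1:        # assumed epipolar tolerance
                    continue
                dist = np.linalg.norm(np.asarray(dl) - np.asarray(dr))
                if dist < best_dist:
                    best, best_dist = xr, dist
            if best is not None:
                disparities.append(xl - best)
        return disparities
    ```

    For a rectified pair with focal length 500 px and a 0.1 m baseline, a 10-pixel disparity corresponds to a depth of 5 m.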
  • Publication number: 20140307058
    Abstract: The subject disclosure is directed towards a high resolution, high frame rate, robust stereo depth system. The system provides depth data in varying conditions based upon stereo matching of images, including actively illuminated IR images in some implementations. A clean IR or RGB image may be captured and used with any other captured images in some implementations. Clean IR images may be obtained by using a notch filter to filter out the active illumination pattern. IR stereo cameras, a projector, broad spectrum IR LEDs and one or more other cameras may be incorporated into a single device, which may also include image processing components to internally compute depth data in the device for subsequent output.
    Type: Application
    Filed: June 24, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Oliver A. Whyte, Sing Bing Kang, Charles Lawrence Zitnick, III, Richard S. Szeliski, Shahram Izadi, Christoph Rhemann, Andreas Georgiou, Avronil Bhattacharjee
  • Publication number: 20140307056
    Abstract: The subject disclosure is directed towards a framework that is configured to allow different background-foreground segmentation modalities to contribute towards segmentation. In one aspect, pixels are processed based upon RGB background separation, chroma keying, IR background separation, current depth versus background depth and current depth versus threshold background depth modalities. Each modality may contribute as a factor that the framework combines to determine a probability as to whether a pixel is foreground or background. The probabilities are fed into a global segmentation framework to obtain a segmented image.
    Type: Application
    Filed: June 14, 2013
    Publication date: October 16, 2014
    Inventors: Alvaro Collet Romea, Bao Zhang, Adam G. Kirk
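    Publication 20140307056 says each modality "may contribute as a factor" combined into a per-pixel foreground probability. A weighted log-opinion pool is one plausible combination rule; the pooling rule, weights, and threshold below are assumptions, and the global segmentation framework the abstract mentions is omitted:

    ```python
    import math

    def fuse_modalities(probs, weights=None):
        """Combine per-modality foreground probabilities for one pixel
        into a single probability via a weighted log-opinion pool."""
        if weights is None:
            weights = [1.0] * len(probs)
        eps = 1e-6  # guard against log(0)
        log_fg = sum(w * math.log(max(p, eps)) for p, w in zip(probs, weights))
        log_bg = sum(w * math.log(max(1.0 - p, eps)) for p, w in zip(probs, weights))
        m = max(log_fg, log_bg)             # normalize the two hypotheses
        fg, bg = math.exp(log_fg - m), math.exp(log_bg - m)
        return fg / (fg + bg)

    def segment(prob_map, threshold=0.5):
        """Label each pixel foreground (1) or background (0)."""
        return [[1 if p >= threshold else 0 for p in row] for row in prob_map]
    ```

    Two modalities that independently report 0.9 and 0.8 foreground probability pool to a combined probability above either alone, which is the intended behavior of multiplying evidence.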
  • Publication number: 20140307047
    Abstract: The subject disclosure is directed towards stereo matching based upon active illumination, including using a patch in a non-actively illuminated image to obtain weights that are used in patch similarity determinations in actively illuminated stereo images. To correlate pixels in actively illuminated stereo images, adaptive support weights computations may be used to determine similarity of patches corresponding to the pixels. In order to obtain meaningful adaptive support weights for the adaptive support weights computations, weights are obtained by processing a non-actively illuminated (“clean”) image.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Christoph Rhemann, Oliver A. Whyte, Shahram Izadi, Sing Bing Kang
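    In 20140307047, weights computed from the clean (non-actively-illuminated) image steer a weighted patch dissimilarity over the actively illuminated stereo pair. A minimal sketch of adaptive support weights follows; the Gaussian-style weighting constants and patch radius are illustrative assumptions:

    ```python
    import math

    def support_weight(clean, p, q, gamma_c=10.0, gamma_s=10.0):
        """Adaptive support weight between center pixel p and neighbor q,
        computed from the clean image so the active dot pattern does not
        distort the weights."""
        color_diff = abs(clean[q[0]][q[1]] - clean[p[0]][p[1]])
        spatial = math.hypot(q[0] - p[0], q[1] - p[1])
        return math.exp(-color_diff / gamma_c - spatial / gamma_s)

    def patch_cost(left_ir, right_ir, clean, p, disparity, radius=1):
        """Weighted SAD between patches of the actively illuminated stereo
        pair, with weights taken from the clean image around p."""
        num = den = 0.0
        rows, cols = len(left_ir), len(left_ir[0])
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                y, x = p[0] + dy, p[1] + dx
                xr = x - disparity
                if not (0 <= y < rows and 0 <= x < cols and 0 <= xr < cols):
                    continue
                w = support_weight(clean, p, (y, x))
                num += w * abs(left_ir[y][x] - right_ir[y][xr])
                den += w
        return num / den if den else float("inf")
    ```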
  • Publication number: 20140307098
    Abstract: The subject disclosure is directed towards color correcting for infrared (IR) components that are detected in the R, G, B parts of a sensor photosite. A calibration process determines true R, G, B based upon obtaining or estimating IR components in each photosite, such as by filtering techniques and/or using different IR lighting conditions. A set of tables or curves obtained via offline calibration model the correction data needed for online correction of an image.
    Type: Application
    Filed: June 11, 2013
    Publication date: October 16, 2014
    Inventors: Sing Bing Kang, Adam G. Kirk
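    The offline-calibration/online-correction split in 20140307098 can be illustrated with a deliberately simplified linear leakage model. The patent describes tables or curves modeling the correction data; the single per-channel coefficient below is an assumption for illustration only:

    ```python
    def calibrate_ir_leakage(with_ir, without_ir, ir_level):
        """Offline calibration: estimate per-channel IR leakage
        coefficients k_c from paired captures of the same scene with the
        IR illuminant on and off, assuming raw_on = raw_off + k_c * ir."""
        return tuple((on - off) / ir_level for on, off in zip(with_ir, without_ir))

    def correct_rgb(raw, coeffs, ir_level):
        """Online correction: subtract the modeled IR contribution from
        each of R, G, B, clamping at zero."""
        return tuple(max(0.0, c - k * ir_level) for c, k in zip(raw, coeffs))
    ```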
  • Publication number: 20140307953
    Abstract: The subject disclosure is directed towards communicating image-related data between a base station and/or one or more satellite computing devices, e.g., tablet computers and/or smartphones. A satellite device captures image data and communicates image-related data (such as the images or depth data processed therefrom) to another device, such as a base station. The receiving device uses the image-related data to enhance depth data (e.g., a depth map) based upon the image data captured from the satellite device, which may be physically closer to something in the scene than the base station, for example. To more accurately capture depth data in various conditions, an active illumination pattern may be projected from the base station or another external projector, whereby satellite units may use the other source's active illumination and thereby need not consume internal power to benefit from active illumination.
    Type: Application
    Filed: June 21, 2013
    Publication date: October 16, 2014
    Inventors: Adam G. Kirk, Oliver A. Whyte, Christoph Rhemann, Shahram Izadi
  • Publication number: 20140192158
    Abstract: The description relates to stereo image matching to determine depth of a scene as captured by images. More specifically, the described implementations can involve a two-stage approach where the first stage can compute depth at highly accurate but sparse feature locations. The second stage can compute a dense depth map using the first stage as initialization. This improves accuracy and robustness of the dense depth map.
    Type: Application
    Filed: January 4, 2013
    Publication date: July 10, 2014
    Applicant: Microsoft Corporation
    Inventors: Oliver Whyte, Adam G. Kirk, Shahram Izadi, Carsten Rother, Michael Bleyer, Christoph Rhemann
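    In 20140192158's two-stage approach, the dense second stage is initialized from the sparse, high-accuracy first-stage depths. A toy initialization that seeds every pixel from its nearest sparse measurement; the real second stage computes a dense depth map with an optimization that this stand-in omits:

    ```python
    def dense_from_sparse(shape, seeds):
        """Stage-2 initialization: seed a dense depth map from sparse
        (row, col) -> depth measurements by assigning each pixel the
        depth of its nearest seed."""
        rows, cols = shape
        dense = [[0.0] * cols for _ in range(rows)]
        for y in range(rows):
            for x in range(cols):
                best = min(seeds, key=lambda s: (s[0] - y) ** 2 + (s[1] - x) ** 2)
                dense[y][x] = seeds[best]
        return dense
    ```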
  • Publication number: 20130321590
    Abstract: The glancing angle exclusion technique described herein selectively limits projective texturing near depth map discontinuities. A depth discontinuity is defined by a jump between a near-depth surface and a far-depth surface. The claimed technique can limit projective texturing on near and far surfaces to a different degree—for example, the technique can limit far-depth projective texturing within a certain distance to a depth discontinuity but not near-depth projective texturing.
    Type: Application
    Filed: August 30, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventor: Adam G. Kirk
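    The asymmetry in 20130321590, suppressing far-depth texturing more aggressively than near-depth texturing around a discontinuity, can be sketched in one dimension along a scanline. The jump threshold and margin widths below are made-up parameters, not values from the application:

    ```python
    def texture_mask(depth_row, jump=0.5, far_margin=2, near_margin=0):
        """Per-pixel flag saying whether projective texturing is allowed
        along one scanline. Around each depth discontinuity (a jump larger
        than `jump`), texturing is suppressed within `far_margin` pixels
        on the far-depth side but only `near_margin` pixels on the near
        side."""
        n = len(depth_row)
        allow = [True] * n
        for i in range(n - 1):
            if abs(depth_row[i + 1] - depth_row[i]) > jump:
                near_side, far_side = ((i, i + 1) if depth_row[i] < depth_row[i + 1]
                                       else (i + 1, i))
                # suppress a wider band on the far surface than the near one
                for k in range(far_margin):
                    j = far_side + k if far_side > near_side else far_side - k
                    if 0 <= j < n:
                        allow[j] = False
                for k in range(near_margin):
                    j = near_side - k if far_side > near_side else near_side + k
                    if 0 <= j < n:
                        allow[j] = False
        return allow
    ```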
  • Publication number: 20130321589
    Abstract: The automated camera array calibration technique described herein automates the calibration of camera arrays. The technique can leverage corresponding depth and single- or multi-spectral intensity data (e.g., RGB (Red Green Blue) data) captured by hybrid capture devices to automatically determine camera geometry. In one embodiment, it does this by finding common features in the depth maps of two hybrid capture devices and deriving a rough extrinsic calibration from those shared depth map features. It then refines the rough extrinsic calibration using the features of the intensity (e.g., RGB) data corresponding to the depth maps.
    Type: Application
    Filed: August 3, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Yaron Eshet, David Eraker
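    The "rough extrinsic calibration based on shared depth map features" step of 20130321589 amounts to estimating a rigid transform between matched 3D points. A standard Kabsch/Procrustes solve is one way to do it; the publication does not specify this method, so treat it as an illustrative stand-in, with the RGB-feature refinement step omitted:

    ```python
    import numpy as np

    def rough_extrinsics(pts_a, pts_b):
        """Estimate the rigid transform (R, t) mapping 3D feature points
        seen by one hybrid capture device onto matching points from
        another, via the Kabsch algorithm (SVD of the cross-covariance)."""
        A = np.asarray(pts_a, float)
        B = np.asarray(pts_b, float)
        ca, cb = A.mean(axis=0), B.mean(axis=0)
        H = (A - ca).T @ (B - cb)           # 3x3 cross-covariance
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))
        R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T   # proper rotation only
        t = cb - R @ ca
        return R, t
    ```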
  • Publication number: 20130321593
    Abstract: The view frustum culling technique described herein allows Free Viewpoint Video (FVV) or other 3D spatial video rendering at a client by sending only the 3D geometry and texture (e.g., RGB) data necessary for a specific viewpoint or view frustum from a server to the rendering client. The synthetic viewpoint is then rendered by the client by using the received geometry and texture data for the specific viewpoint or view frustum. In some embodiments of the view frustum culling technique, the client has both some texture data and 3D geometric data stored locally if there is sufficient local processing power. Additionally, in some embodiments, additional spatial and temporal data can be sent to the client to support changes in the view frustum by providing additional geometry and texture data that will likely be immediately used if the viewpoint is changed either spatially or temporally.
    Type: Application
    Filed: August 29, 2012
    Publication date: December 5, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Donald Marcus Gillett, Patrick Sweeney, Neil Fishman, David Eraker
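    Deciding which geometry and texture data to send for a given view frustum (20130321593) reduces to frustum-versus-bounding-volume tests on chunks of the scene. Below is a conservative axis-aligned bounding box test using the standard farthest-corner trick; the plane representation and the idea of chunking the scene into AABBs are assumptions, not details from the publication:

    ```python
    def aabb_in_frustum(box_min, box_max, planes):
        """Conservative view-frustum test: keep a geometry chunk (an
        axis-aligned bounding box) unless it lies fully outside some
        frustum plane. Planes are ((nx, ny, nz), d) with the inside
        defined by dot(n, p) + d >= 0."""
        for (nx, ny, nz), d in planes:
            # pick the box corner farthest along the plane normal
            px = box_max[0] if nx >= 0 else box_min[0]
            py = box_max[1] if ny >= 0 else box_min[1]
            pz = box_max[2] if nz >= 0 else box_min[2]
            if nx * px + ny * py + nz * pz + d < 0:
                return False        # entirely outside this plane
        return True
    ```

    A server would stream geometry and texture only for chunks that pass the test, plus (per the abstract) extra spatial/temporal margin to absorb viewpoint changes.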
  • Publication number: 20130100256
    Abstract: Methods and systems for generating a depth map are provided. The method includes projecting an infrared (IR) dot pattern onto a scene. The method also includes capturing stereo images from each of two or more synchronized IR cameras, detecting a number of dots within the stereo images, computing a number of feature descriptors for the dots in the stereo images, and computing a disparity map between the stereo images. The method further includes generating a depth map for the scene using the disparity map.
    Type: Application
    Filed: October 21, 2011
    Publication date: April 25, 2013
    Applicant: Microsoft Corporation
    Inventors: Adam G. Kirk, Yaron Eshet, Kestutis Patiejunas, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, Simon Winder
  • Publication number: 20130095920
    Abstract: Methods and systems for generating free viewpoint video using an active infrared (IR) stereo module are provided. The method includes computing a depth map for a scene using an active IR stereo module. The depth map may be computed by projecting an IR dot pattern onto the scene, capturing stereo images from each of two or more synchronized IR cameras, detecting dots within the stereo images, computing feature descriptors corresponding to the dots in the stereo images, computing a disparity map between the stereo images, and generating the depth map using the disparity map. The method also includes generating a point cloud for the scene using the depth map, generating a mesh of the point cloud, and generating a projective texture map for the scene from the mesh of the point cloud. The method further includes generating the video for the scene using the projective texture map.
    Type: Application
    Filed: October 13, 2011
    Publication date: April 18, 2013
    Applicant: Microsoft Corporation
    Inventors: Kestutis Patiejunas, Kanchan Mitra, Patrick Sweeney, Yaron Eshet, Adam G. Kirk, Sing Bing Kang, Charles Lawrence Zitnick, III, David Eraker, David Harnett, Amit Mital, Simon Winder
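    One step of the 20130095920 pipeline, generating a point cloud from the depth map, is standard pinhole back-projection. The intrinsics fx, fy, cx, cy are assumed given, and the mesh and projective-texture-map stages the abstract lists are omitted:

    ```python
    def depth_to_points(depth, fx, fy, cx, cy):
        """Back-project a depth map into a 3D point cloud with the
        pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
        Pixels with no depth (Z <= 0) are skipped."""
        points = []
        for v, row in enumerate(depth):
            for u, z in enumerate(row):
                if z > 0:
                    points.append(((u - cx) * z / fx, (v - cy) * z / fy, z))
        return points
    ```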
  • Patent number: 8400447
    Abstract: A method, system, and computer-readable storage medium are disclosed for partitioning a scene with discrete oriented planes. In one embodiment, a scene comprising a plurality of objects may be partitioned into a plurality of sub-regions. The sub-regions may be divided by a plurality of planes having orientations selected from a discrete set of orientations comprising at least one orientation that is not the x axis, y axis, or z axis and is at a nonzero angle with respect to the x, y, or z axes. The partitioned scene may be stored in a binary tree comprising a plurality of nodes. Each node may correspond to a sub-region. In one embodiment, a ray tracing query may be solved for a particular ray. In solving the ray tracing query, the tree may be traversed to identify a first object of the plurality of objects intersected by the ray.
    Type: Grant
    Filed: February 15, 2008
    Date of Patent: March 19, 2013
    Assignee: Adobe Systems Incorporated
    Inventors: Nathan A. Carr, Gavin S. P. Miller, Adam G. Kirk
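    The key claim of 8400447 is that split planes are drawn from a discrete orientation set containing at least one non-axis-aligned direction. A toy tree builder that cycles through such a set follows; the mean-projection split rule and the depth/leaf limits are illustrative, and a real implementation would pick planes by a cost metric and support the ray-traversal queries the patent describes:

    ```python
    import math

    # Discrete orientation set: the three axes plus one diagonal plane
    # normal at a nonzero angle to all of them, as the claims require.
    ORIENTATIONS = [
        (1.0, 0.0, 0.0),
        (0.0, 1.0, 0.0),
        (0.0, 0.0, 1.0),
        (1 / math.sqrt(2), 1 / math.sqrt(2), 0.0),
    ]

    def partition(points, normal, offset):
        """Split point-like objects into the two sub-regions induced by
        the plane dot(n, p) = offset."""
        front = [p for p in points if sum(n * c for n, c in zip(normal, p)) >= offset]
        back = [p for p in points if sum(n * c for n, c in zip(normal, p)) < offset]
        return front, back

    def build_tree(points, depth=0, max_depth=4, leaf_size=2):
        """Recursively partition the scene into a binary tree, cycling
        through the discrete orientation set and splitting at the mean
        projection of the contained points."""
        if len(points) <= leaf_size or depth >= max_depth:
            return {"leaf": points}
        normal = ORIENTATIONS[depth % len(ORIENTATIONS)]
        offset = sum(sum(n * c for n, c in zip(normal, p)) for p in points) / len(points)
        front, back = partition(points, normal, offset)
        if not front or not back:       # degenerate split: stop here
            return {"leaf": points}
        return {"normal": normal, "offset": offset,
                "front": build_tree(front, depth + 1, max_depth, leaf_size),
                "back": build_tree(back, depth + 1, max_depth, leaf_size)}
    ```

    A ray-tracing query would then traverse this tree front-to-back along the ray to find the first intersected object, visiting only sub-regions the ray enters.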