Patents by Inventor Matthew Uyttendaele

Matthew Uyttendaele has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7324687
    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per pixel depth map if the reconstruction application calls for it.
    Type: Grant
    Filed: June 28, 2004
    Date of Patent: January 29, 2008
    Assignee: Microsoft Corporation
    Inventors: Charles Zitnick, III, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
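For illustration only, here is a minimal Python sketch (not the patented implementation) of the segment-level disparity space distribution (DSD) idea summarized in the abstract above: each segment carries a probability distribution over candidate disparities, initialized from matching costs and then mixed with its neighbors' distributions. The function names, the exponential cost-to-probability mapping, and the mixing weight are assumptions made for the example.

```python
# Illustrative sketch (not the patented implementation): per-segment disparity
# space distributions (DSDs), initialized from matching costs and smoothed
# using neighboring segments, loosely following the abstract above.
import numpy as np

def init_dsd(matching_cost):
    """Turn per-disparity matching costs for one segment into a probability
    distribution over disparities (lower cost -> higher probability)."""
    score = np.exp(-np.asarray(matching_cost, dtype=float))
    return score / score.sum()

def refine_dsd(dsds, neighbors, iterations=5, alpha=0.5):
    """Iteratively mix each segment's DSD with the average DSD of its
    neighboring segments, then renormalize."""
    dsds = {s: d.copy() for s, d in dsds.items()}
    for _ in range(iterations):
        updated = {}
        for seg, dsd in dsds.items():
            if neighbors.get(seg):
                nb = np.mean([dsds[n] for n in neighbors[seg]], axis=0)
                mixed = (1 - alpha) * dsd + alpha * nb
            else:
                mixed = dsd
            updated[seg] = mixed / mixed.sum()
        dsds = updated
    return dsds

# Toy usage: two adjacent segments with noisy costs over 4 candidate disparities.
dsds = {0: init_dsd([5.0, 1.0, 4.0, 6.0]), 1: init_dsd([5.0, 2.0, 1.5, 6.0])}
dsds = refine_dsd(dsds, neighbors={0: [1], 1: [0]})
disparity = {seg: int(np.argmax(d)) for seg, d in dsds.items()}
print(disparity)
```
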
  • Patent number: 7292257
    Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
    Type: Grant
    Filed: June 28, 2004
    Date of Patent: November 6, 2007
    Assignee: Microsoft Corporation
    Inventors: Sing Bing Kang, Charles Zitnick, III, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Patent number: 7286143
    Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
    Type: Grant
    Filed: March 31, 2005
    Date of Patent: October 23, 2007
    Assignee: Microsoft Corporation
    Inventors: Sing Bing Kang, Charles Zitnick, III, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20070237420
    Abstract: An “Oblique Image Stitcher” provides a technique for constructing a photorealistic oblique view from a set of input images representing a series of partially overlapping views of a scene. The Oblique Image Stitcher first projects each input image onto a geometric proxy of the scene and renders the images from a desired viewpoint. Once the images have been projected onto the geometric proxy, the rendered images are evaluated to identify optimum seams along which the various images are to be blended. Once the optimum seams are selected, the images are remapped relative to those seams by leaving the mapping unchanged at the seams and interpolating a smooth mapping between the seams. The remapped images are then composited to construct the final mosaiced oblique view of the scene. The result is a mosaic image constructed by warping the input images in a photorealistic manner which agrees at seams between images.
    Type: Application
    Filed: April 10, 2006
    Publication date: October 11, 2007
    Applicant: Microsoft Corporation
    Inventors: Drew Steedly, Richard Szeliski, Matthew Uyttendaele, Michael Cohen
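As a rough illustration of the seam-selection step named in this abstract, and only that step, the sketch below picks a low-cost vertical seam through the overlap of two already-rendered, pre-aligned images using dynamic programming and composites along it. The projection onto a geometric proxy and the smooth remapping between seams are not reproduced; all names and the toy data are illustrative.

```python
# Illustrative sketch (assumed, simplified): choose a low-cost vertical seam
# through the overlap of two pre-aligned images and composite along it.
# This stands in for the "optimum seam" step only; the smooth remapping
# between seams described in the abstract is not reproduced here.
import numpy as np

def find_vertical_seam(cost):
    """Dynamic-programming minimum-cost vertical seam through a HxW cost map."""
    h, w = cost.shape
    acc = cost.copy()
    for y in range(1, h):
        for x in range(w):
            lo, hi = max(0, x - 1), min(w, x + 2)
            acc[y, x] += acc[y - 1, lo:hi].min()
    seam = [int(np.argmin(acc[-1]))]
    for y in range(h - 2, -1, -1):
        x = seam[-1]
        lo, hi = max(0, x - 1), min(w, x + 2)
        seam.append(lo + int(np.argmin(acc[y, lo:hi])))
    return seam[::-1]  # one x position per row

def composite_along_seam(left, right):
    """Take pixels from `left` on/left of the seam and from `right` elsewhere."""
    cost = np.abs(left.astype(float) - right.astype(float)).sum(axis=2)
    seam = find_vertical_seam(cost)
    out = right.copy()
    for y, x in enumerate(seam):
        out[y, :x + 1] = left[y, :x + 1]
    return out

# Toy usage with two random "rendered" overlap regions.
rng = np.random.default_rng(0)
a = rng.integers(0, 255, (8, 10, 3), dtype=np.uint8)
b = rng.integers(0, 255, (8, 10, 3), dtype=np.uint8)
print(composite_along_seam(a, b).shape)
```
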
  • Publication number: 20070177033
    Abstract: A Bayesian two-color image demosaicer and method for processing a digital color image to demosaic the image in such a way as to reduce image artifacts. The method and system are an improvement on and an enhancement to previous demosaicing techniques. A preliminary demosaicing pass is performed on the image to assign each pixel a fully specified RGB triple color value. The final color value of each pixel in the processed image is restricted to be a linear combination of two colors. The fully specified RGB triple color values for each pixel in an image are used to find two clusters representing the two favored colors. The amount of contribution from these two favored colors to the final color value is then determined. The method and system also can process multiple images to improve the demosaicing results. When using multiple images, sampling can be performed at a finer resolution, known as super resolution.
    Type: Application
    Filed: January 30, 2006
    Publication date: August 2, 2007
    Applicant: Microsoft Corporation
    Inventors: Eric Bennett, Matthew Uyttendaele, Charles Zitnick, Sing Kang, Richard Szeliski
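A hedged sketch of the "two favored colors" constraint described above, shown for a single pixel: the preliminary RGB values in a neighborhood are split into two clusters with a simple 2-means loop, and the output color is restricted to a blend of the two cluster means. The clustering method, names, and toy data are assumptions made for the example; the patent itself uses a Bayesian formulation.

```python
# Illustrative sketch (assumed): the "two favored colors" idea from the
# abstract, shown for a single pixel. The neighborhood's preliminary RGB
# values are split into two clusters (simple 2-means), and the output color
# is constrained to a linear blend of the two cluster means.
import numpy as np

def two_color_model(neighborhood_rgb, iterations=10):
    """Return the two favored colors (cluster means) for a set of RGB samples."""
    pts = np.asarray(neighborhood_rgb, dtype=float)
    c0, c1 = pts[0], pts[-1]            # crude initialization
    for _ in range(iterations):
        d0 = np.linalg.norm(pts - c0, axis=1)
        d1 = np.linalg.norm(pts - c1, axis=1)
        mask = d0 <= d1
        if mask.any():
            c0 = pts[mask].mean(axis=0)
        if (~mask).any():
            c1 = pts[~mask].mean(axis=0)
    return c0, c1

def blend_fraction(observed, c0, c1):
    """Fraction t in [0, 1] so that (1 - t) * c0 + t * c1 best matches `observed`."""
    direction = c1 - c0
    denom = float(direction @ direction)
    if denom == 0.0:
        return 0.0
    t = float((np.asarray(observed, dtype=float) - c0) @ direction) / denom
    return min(max(t, 0.0), 1.0)

# Toy usage: a neighborhood straddling a red/green edge.
nb = [[200, 10, 10], [190, 20, 15], [10, 180, 20], [15, 190, 25]]
c0, c1 = two_color_model(nb)
t = blend_fraction([100, 100, 18], c0, c1)
print(np.round((1 - t) * c0 + t * c1, 1))
```
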
  • Patent number: 7239757
    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
    Type: Grant
    Filed: January 23, 2006
    Date of Patent: July 3, 2007
    Assignee: Microsoft Corporation
    Inventors: Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
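The following minimal sketch illustrates the radiance-map and tone-mapping steps from the abstract above under strong simplifying assumptions: a short- and a long-exposure frame are merged into per-pixel radiance, with well-exposed pixels trusted more, and the result is compressed back to 8 bits. The motion-compensated correspondence step is assumed to be done already, and the hat-shaped trust weights and Reinhard-style operator are stand-ins, not the patented procedure.

```python
# Illustrative sketch (assumed, greatly simplified): merging a short- and a
# long-exposure frame into per-pixel radiance, trusting well-exposed pixels
# more. The motion-compensated correspondence step from the abstract is
# assumed to have been done already, so pixels line up.
import numpy as np

def trust_weight(pixel_values):
    """Weight pixels near mid-range highly; near-saturated or near-black ones low."""
    v = pixel_values.astype(float) / 255.0
    return np.clip(1.0 - np.abs(v - 0.5) * 2.0, 0.01, 1.0)

def merge_radiance(short_frame, long_frame, short_exposure, long_exposure):
    """Weighted average of radiance estimates (pixel value / exposure time)."""
    ws = trust_weight(short_frame)
    wl = trust_weight(long_frame)
    rad_s = short_frame.astype(float) / short_exposure
    rad_l = long_frame.astype(float) / long_exposure
    return (ws * rad_s + wl * rad_l) / (ws + wl)

def tone_map(radiance):
    """Simple global operator squeezing radiance back into 8 bits."""
    r = radiance / (1.0 + radiance)          # Reinhard-style compression
    return (255.0 * r / r.max()).astype(np.uint8)

# Toy usage on 2x2 grayscale frames captured at 1/120 s and 1/30 s.
short = np.array([[10, 240], [128, 30]], dtype=np.uint8)
long_ = np.array([[40, 255], [250, 120]], dtype=np.uint8)
print(tone_map(merge_radiance(short, long_, 1 / 120, 1 / 30)))
```
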
  • Patent number: 7221366
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Grant
    Filed: August 3, 2004
    Date of Patent: May 22, 2007
    Assignee: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, III, Richard Szeliski, Sing Bing Kang
  • Patent number: 7206000
    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed-sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
    Type: Grant
    Filed: January 17, 2006
    Date of Patent: April 17, 2007
    Assignee: Microsoft Corporation
    Inventors: Charles Zitnick, III, Richard Szeliski, Sing Bing Kang, Matthew Uyttendaele, Simon Winder
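A minimal container for the two-layer representation named in this abstract is sketched below: a main layer with colors and disparities, and a boundary layer with foreground colors, disparities, and alpha values near depth discontinuities. The field names and the compositing helper are invented for the example; the patent does not prescribe this layout.

```python
# Illustrative sketch (assumed): a minimal container for the two-layer
# representation named in the abstract. Field names are invented for the
# example; the patent does not prescribe this layout.
from dataclasses import dataclass
import numpy as np

@dataclass
class MainLayer:
    color: np.ndarray       # HxWx3: background colors inside depth-discontinuity
                            # areas, original colors elsewhere
    disparity: np.ndarray   # HxW: background / original disparities

@dataclass
class BoundaryLayer:
    color: np.ndarray       # HxWx3: foreground colors near depth discontinuities
    disparity: np.ndarray   # HxW: foreground disparities
    alpha: np.ndarray       # HxW: matting weights (0 outside boundary areas)

@dataclass
class TwoLayerImage:
    main: MainLayer
    boundary: BoundaryLayer

    def composite(self) -> np.ndarray:
        """Recombine the layers: alpha-blend the boundary foreground over the main layer."""
        a = self.boundary.alpha[..., None]
        return a * self.boundary.color + (1.0 - a) * self.main.color

# Toy usage with a 2x2 image.
h = w = 2
img = TwoLayerImage(
    MainLayer(np.zeros((h, w, 3)), np.zeros((h, w))),
    BoundaryLayer(np.ones((h, w, 3)), np.ones((h, w)), np.full((h, w), 0.5)),
)
print(img.composite()[0, 0])
```
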
  • Publication number: 20070025723
    Abstract: A “Panoramic Viewfinder” provides an intuitive interactive viewfinder display which operates on a digital camera display screen. This interactive viewfinder provides real-time assistance in capturing images for constructing panoramic image mosaics. The Panoramic Viewfinder “brushes” a panorama from images captured in any order, while providing visual feedback to the user for ensuring that desired scene elements will appear in the final panorama. This visual feedback presents real-time stitched previews of the panorama while capturing images.
    Type: Application
    Filed: July 28, 2005
    Publication date: February 1, 2007
    Applicant: Microsoft Corporation
    Inventors: Patrick Baudisch, Chris Pal, Eric Rudolph, Drew Steedly, Richard Szeliski, Desney Tan, Matthew Uyttendaele
  • Patent number: 7142209
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Grant
    Filed: March 31, 2005
    Date of Patent: November 28, 2006
    Assignee: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, III, Richard Szeliski, Sing Bing Kang
  • Publication number: 20060195475
    Abstract: An automatic digital image grouping system and method for automatically generating groupings of related images based on criteria that includes image metadata and spatial information. The system and method takes an unordered and unorganized set of digital images and organizes and groups related images into image subsets. The criteria for defining an image subset varies and can be customized depending on the needs of the user. Metadata (such as EXIF tags) already embedded inside the images is used to extract likely image subsets. This metadata may include the temporal proximity of images, focal length, color overlap, and geographical location. The first component of the automatic image grouping system and method is a subset image stage that analyzes the metadata and generates potential image subsets containing related images. The second component is an overlap detection stage, where each potential image subset is analyzed and verified by examining pixels of the related images.
    Type: Application
    Filed: February 28, 2005
    Publication date: August 31, 2006
    Applicant: Microsoft Corporation
    Inventors: Ronald Logan, Richard Szeliski, Matthew Uyttendaele
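For the first, metadata-driven stage described in this abstract, the sketch below groups photos into candidate subsets by temporal proximity of their capture timestamps (for example, parsed from EXIF). The pixel-level overlap verification stage is not shown, and the 5-minute gap, class, and field names are illustrative choices.

```python
# Illustrative sketch (assumed): the metadata-driven stage from the abstract,
# grouping photos into candidate subsets by temporal proximity of their EXIF
# timestamps. The pixel-level overlap verification stage is not shown.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Photo:
    path: str
    taken: datetime          # e.g. parsed from the EXIF DateTimeOriginal tag

def group_by_time(photos, max_gap=timedelta(minutes=5)):
    """Split a photo list into candidate subsets wherever the gap between
    consecutive capture times exceeds `max_gap`."""
    ordered = sorted(photos, key=lambda p: p.taken)
    groups, current = [], []
    for photo in ordered:
        if current and photo.taken - current[-1].taken > max_gap:
            groups.append(current)
            current = []
        current.append(photo)
    if current:
        groups.append(current)
    return groups

# Toy usage: three shots in quick succession, then one an hour later.
base = datetime(2006, 2, 28, 12, 0, 0)
photos = [Photo(f"img_{i}.jpg", base + dt) for i, dt in enumerate(
    [timedelta(0), timedelta(minutes=1), timedelta(minutes=3), timedelta(hours=1)])]
print([[p.path for p in g] for g in group_by_time(photos)])
```
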
  • Publication number: 20060177150
    Abstract: A panoramic high-dynamic range (HDR) image method and system of combining multiple images having different exposures and at least partial spatial overlap wherein each of the images may have scene motion, camera motion, or both. The major part of the panoramic HDR image method and system is a two-pass optimization-based approach that first defines the position of the objects in a scene and then fills in the dynamic range when possible and consistent. Data costs are created to encourage radiance values that are both consistent with object placement (defined by the first pass) and of a higher signal-to-noise ratio. Seam costs are used to ensure that transitions occur in regions of consistent radiances. The result is a high-quality panoramic HDR image having the full available spatial extent of the scene along with the full available exposure range.
    Type: Application
    Filed: February 1, 2005
    Publication date: August 10, 2006
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Richard Szeliski, Ashley Eden
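As a heavily simplified, per-pixel caricature of the "data cost" idea from this abstract, the sketch below labels each pixel with the source image whose radiance both agrees with a reference (a stand-in for the object placement defined by the first pass) and is well exposed. The real method couples such costs with seam costs in a two-pass optimization; the weights and names here are assumptions.

```python
# Illustrative sketch (assumed, heavily simplified): a per-pixel version of the
# "data cost" idea from the abstract, preferring source images whose radiance
# agrees with a reference (object placement) and is well exposed (higher SNR).
# The real system couples these costs with seam costs in a two-pass
# optimization; here each pixel is labeled independently.
import numpy as np

def data_cost(source_radiance, reference_radiance, source_pixels,
              consistency_weight=1.0, snr_weight=0.5):
    """Lower cost = consistent with the reference and near mid-exposure."""
    consistency = np.abs(source_radiance - reference_radiance)
    exposure_penalty = np.abs(source_pixels.astype(float) / 255.0 - 0.5)
    return consistency_weight * consistency + snr_weight * exposure_penalty

def label_pixels(sources, reference):
    """Pick, per pixel, the index of the source image with the lowest data cost."""
    costs = np.stack([data_cost(rad, reference, pix) for rad, pix in sources])
    return np.argmin(costs, axis=0)

# Toy usage: two overlapping exposures of a 2x2 region.
ref = np.array([[1.0, 4.0], [2.0, 8.0]])
short = (np.array([[1.1, 4.2], [2.5, 7.9]]),
         np.array([[30, 120], [60, 240]], dtype=np.uint8))
long_ = (np.array([[0.9, 6.0], [2.1, 8.3]]),
         np.array([[120, 250], [200, 255]], dtype=np.uint8))
print(label_pixels([short, long_], ref))
```
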
  • Publication number: 20060158462
    Abstract: Techniques and tools for displaying/viewing HDR images are described. In one aspect, a background image constructed from HDR image information is displayed along with portions of the HDR image corresponding to one or more regions of interest. The portions have at least one display parameter (e.g., a tone mapping parameter) that differs from a corresponding display parameter for the background image. Regions of interest and display parameters can be determined by a user (e.g., via a GUI). In another aspect, an intermediate image is determined based on image data corresponding to one or more regions of interest of the HDR image. The intermediate image has a narrower dynamic range than the HDR image. The intermediate image or a derived image is then displayed. The techniques and tools can be used to compare, for example, different tone mappings, compression methods, or color spaces in the background and regions of interest.
    Type: Application
    Filed: March 10, 2006
    Publication date: July 20, 2006
    Applicant: Microsoft Corporation
    Inventors: Kentaro Toyama, Matthew Uyttendaele, William Crow
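In the spirit of the display technique described above, the sketch below tone-maps an HDR image with one parameter for the background and a different parameter for a rectangular region of interest. The GUI for choosing regions and parameters is not shown, and the simple exposure-based operator is an illustrative stand-in rather than the patented method.

```python
# Illustrative sketch (assumed): showing a region of interest of an HDR image
# with a different tone-mapping parameter than the background. The GUI for
# picking regions and parameters is not shown; the exposure-based operator is
# an illustrative stand-in.
import numpy as np

def tone_map(hdr, exposure):
    """Map floating-point radiance to 8-bit values with a given exposure gain."""
    x = hdr * exposure
    return np.clip(255.0 * x / (1.0 + x), 0, 255).astype(np.uint8)

def display_with_roi(hdr, roi, background_exposure, roi_exposure):
    """Tone-map the whole image with one parameter, then re-map the region of
    interest (top, bottom, left, right) with its own parameter."""
    out = tone_map(hdr, background_exposure)
    top, bottom, left, right = roi
    out[top:bottom, left:right] = tone_map(hdr[top:bottom, left:right], roi_exposure)
    return out

# Toy usage: brighten a dark corner of a synthetic 4x4 HDR image.
hdr = np.array([[0.05, 0.1, 2.0, 8.0]] * 4)
print(display_with_roi(hdr, roi=(0, 2, 0, 2),
                       background_exposure=1.0, roi_exposure=10.0))
```
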
  • Publication number: 20060133667
    Abstract: A system and process for creating an interactive digital image, which allows a viewer to interact with a displayed image so as to change it with regard to a desired effect, such as exposure, focus or color, among others. An interactive image includes representative images which depict a scene with some image parameter varying between them. The interactive image also includes an index image, whose pixels each identify the representative image that exhibits the desired effect related to the varied image parameter at a corresponding pixel location. For example, a pixel of the index image might identify the representative image having a correspondingly-located pixel that depicts a portion of the scene at the sharpest focus. One primary form of interaction involves selecting a pixel of a displayed image whereupon the representative image identified in the index image at a corresponding pixel location is displayed in lieu of the currently displayed image.
    Type: Application
    Filed: November 9, 2005
    Publication date: June 22, 2006
    Applicant: Microsoft Corporation
    Inventors: Bernhard Schoelkopf, Kentaro Toyama, Matthew Uyttendaele
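The index-image lookup described in this abstract can be sketched very compactly: a click on pixel (x, y) selects the representative image whose index is stored at that location, and that image is displayed instead of the current one. Class and field names below are invented for the example.

```python
# Illustrative sketch (assumed): the index-image lookup described in the
# abstract. A click on pixel (x, y) selects the representative image whose
# index is stored at that location.
from dataclasses import dataclass
import numpy as np

@dataclass
class InteractiveImage:
    representatives: list     # list of HxWx3 arrays, e.g. a focus stack
    index_image: np.ndarray   # HxW of ints, per-pixel "best" representative

    def on_click(self, x, y):
        """Return the representative image that exhibits the desired effect
        (e.g. sharpest focus) at the clicked pixel."""
        return self.representatives[int(self.index_image[y, x])]

# Toy usage: two representative "images" and an index image favoring each half.
near = np.zeros((4, 4, 3), dtype=np.uint8)
far = np.full((4, 4, 3), 255, dtype=np.uint8)
index = np.zeros((4, 4), dtype=int)
index[:, 2:] = 1                         # right half is sharpest in `far`
viewer = InteractiveImage([near, far], index)
print(viewer.on_click(3, 0)[0, 0])       # -> [255 255 255]
```
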
  • Publication number: 20060133688
    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
    Type: Application
    Filed: January 23, 2006
    Publication date: June 22, 2006
    Applicant: Microsoft Corporation
    Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20060114253
    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed-sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
    Type: Application
    Filed: January 17, 2006
    Publication date: June 1, 2006
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Richard Szeliski, Sing Kang, Matthew Uyttendaele, Simon Winder
  • Publication number: 20060101377
    Abstract: A location history is a collection of locations over time for an object. A stay is a single instance of an object spending some time in one place, and a destination is any place where one or more objects have experienced a stay. Location histories are parsed using stays and destinations. In a described implementation, each location of a location history is recorded as a spatial position and a corresponding time at which the spatial position is acquired. Stays are extracted from a location history by analyzing locations thereof with regard to a temporal threshold and a spatial threshold. Specifically, two or more locations are considered a stay if they exceed a minimum stay duration and are within a maximum roaming distance. Each stay includes a location, a starting time, and an ending time. Destinations are produced from the extracted stays using a clustering operation and a predetermined scaling factor.
    Type: Application
    Filed: October 19, 2004
    Publication date: May 11, 2006
    Applicant: Microsoft Corporation
    Inventors: Kentaro Toyama, Ramaswamy Hariharan, Ross Cutler, John Douceur, Nuria Oliver, Eric Ringger, Daniel Robbins, Matthew Uyttendaele
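The stay-extraction rule in this abstract (a run of locations within a maximum roaming distance lasting at least a minimum duration) lends itself to a short sketch. Distances below are plain Euclidean on (x, y) coordinates for simplicity, the threshold values are illustrative, and the destination clustering step is not shown.

```python
# Illustrative sketch (assumed): extracting "stays" from a location history
# using the two thresholds named in the abstract, a minimum stay duration and
# a maximum roaming distance. The destination clustering step is not shown.
from dataclasses import dataclass
from datetime import datetime, timedelta
import math

@dataclass
class Location:
    x: float
    y: float
    time: datetime

@dataclass
class Stay:
    x: float
    y: float
    start: datetime
    end: datetime

def extract_stays(history, min_duration=timedelta(minutes=10), max_roam=100.0):
    """Scan the history in time order; whenever a run of consecutive locations
    stays within `max_roam` of its first point for at least `min_duration`,
    emit a Stay at the run's mean position."""
    stays, i = [], 0
    while i < len(history):
        j = i
        while (j + 1 < len(history)
               and math.dist((history[j + 1].x, history[j + 1].y),
                             (history[i].x, history[i].y)) <= max_roam):
            j += 1
        if history[j].time - history[i].time >= min_duration:
            run = history[i:j + 1]
            stays.append(Stay(sum(p.x for p in run) / len(run),
                              sum(p.y for p in run) / len(run),
                              history[i].time, history[j].time))
            i = j + 1
        else:
            i += 1
    return stays

# Toy usage: 15 minutes near the origin, then a quick pass through a far point.
t0 = datetime(2004, 10, 19, 9, 0)
history = [Location(0, 0, t0), Location(20, 10, t0 + timedelta(minutes=7)),
           Location(5, 5, t0 + timedelta(minutes=15)),
           Location(500, 500, t0 + timedelta(minutes=20))]
print(len(extract_stays(history)))   # -> 1
```
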
  • Publication number: 20060072851
    Abstract: A system and method for deghosting mosaics provides a novel multiperspective plane sweep approach for generating an image mosaic from a sequence of still images, video images, scanned photographic images, computer generated images, etc. This multiperspective plane sweep approach uses virtual camera positions to compute depth maps for columns of overlapping pixels in adjacent images. Object distortions and ghosting caused by image parallax when generating the image mosaics are then minimized by blending pixel colors, or grey values, for each computed depth to create a common composite area for each of the overlapping images. Further, the multiperspective plane sweep approach described herein is computationally efficient, and is applicable both to the case of limited overlap between the images used for creating the image mosaics and to the case of extensive or increased image overlap.
    Type: Application
    Filed: November 22, 2005
    Publication date: April 6, 2006
    Applicant: Microsoft Corporation
    Inventors: Sing Bing Kang, Richard Szeliski, Matthew Uyttendaele
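A one-dimensional, heavily simplified reading of the plane-sweep idea above is sketched below: for each column of the overlap between two images, a disparity (standing in for depth) is chosen that best aligns the images, and the aligned pixels are blended instead of naively averaging misaligned ones. The virtual-camera geometry of the actual method is not modeled, and all names and toy data are assumptions.

```python
# Illustrative sketch (assumed, 1-D and heavily simplified): choose, per
# overlap column, the disparity (depth proxy) that best aligns two images,
# then blend the matched pixels to suppress ghosting.
import numpy as np

def best_disparity(col_a, cols_b, candidates):
    """Return the candidate disparity whose shifted column in image B best
    matches column `col_a` of image A."""
    errors = [np.abs(col_a.astype(float) - cols_b[d].astype(float)).mean()
              for d in candidates]
    return candidates[int(np.argmin(errors))]

def deghost_overlap(image_a, image_b, candidates=(0, 1, 2, 3)):
    """For each column of the overlap, pick a disparity and average the two
    aligned columns instead of averaging misaligned pixels."""
    h, w = image_a.shape
    out = np.zeros((h, w))
    for x in range(w):
        cols_b = {d: image_b[:, min(x + d, w - 1)] for d in candidates}
        d = best_disparity(image_a[:, x], cols_b, candidates)
        out[:, x] = 0.5 * (image_a[:, x].astype(float) + cols_b[d].astype(float))
    return out

# Toy usage: image_b is image_a shifted right by 2 columns (parallax stand-in).
a = np.tile(np.arange(8) * 30, (4, 1)).astype(np.uint8)
b = np.roll(a, 2, axis=1)
print(deghost_overlap(a, b)[0])
```
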
  • Publication number: 20060029134
    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
    Type: Application
    Filed: August 3, 2004
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
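The per-frame-set scheduling described above can be sketched as a plan that records which compression technique applies to each layer of each camera's frame: inter-frame for keyframe main layers, spatial prediction from a nearby keyframe for non-keyframe main layers, and intra-frame for boundary layers. The codecs themselves are out of scope here, and the rule for designating keyframes (every fourth camera) is an invented placeholder.

```python
# Illustrative sketch (assumed): scheduling which compression technique applies
# to each layer in one contemporaneous frame set, following the abstract. The
# keyframe-spacing rule is an invented placeholder; no actual codec is run.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LayerPlan:
    camera: int
    layer: str                      # "main" or "boundary"
    technique: str                  # "inter-frame", "spatial-prediction", or "intra-frame"
    reference: Optional[int] = None # keyframe camera predicted from, if any

def plan_frame_set(num_cameras, keyframe_spacing=4):
    """Build the compression plan for one contemporaneous set of frames."""
    keyframes = set(range(0, num_cameras, keyframe_spacing))
    plan = []
    for cam in range(num_cameras):
        if cam in keyframes:
            plan.append(LayerPlan(cam, "main", "inter-frame"))
        else:
            nearest = min(keyframes, key=lambda k: abs(k - cam))
            plan.append(LayerPlan(cam, "main", "spatial-prediction", nearest))
        plan.append(LayerPlan(cam, "boundary", "intra-frame"))
    return plan

# Toy usage: 8 cameras, keyframes at cameras 0 and 4.
for entry in plan_frame_set(8):
    print(entry)
```
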
  • Publication number: 20060031915
    Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints that form a grid of viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
    Type: Application
    Filed: March 31, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang