Patents by Inventor Matthew Uyttendaele

Matthew Uyttendaele has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
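Several of the abstracts below describe a small set of recurring techniques; short, illustrative code sketches of those ideas appear after the listing.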

  • Publication number: 20060028489
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Application
    Filed: March 31, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060031917
    Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints, is presented. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remaining frames being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique, where just one reference frame, and so one chain, is used to predict each non-keyframe, or a multi-directional technique, where two or more reference frames, and so multiple chains, are used to predict each non-keyframe.
    Type: Application
    Filed: July 15, 2005
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20060028473
    Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
    Type: Application
    Filed: August 3, 2004
    Publication date: February 9, 2006
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
  • Publication number: 20050285874
    Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed-size areas surrounding depth discontinuities found in the image using a disparity map thereof.
    Type: Application
    Filed: June 28, 2004
    Publication date: December 29, 2005
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Richard Szeliski, Sing Kang, Matthew Uyttendaele, Simon Winder
  • Publication number: 20050286757
    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per pixel depth map if the reconstruction application calls for it.
    Type: Application
    Filed: June 28, 2004
    Publication date: December 29, 2005
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20050285875
    Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
    Type: Application
    Filed: June 28, 2004
    Publication date: December 29, 2005
    Applicant: Microsoft Corporation
    Inventors: Sing Kang, Charles Zitnick, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20050286758
    Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per pixel depth map if the reconstruction application calls for it.
    Type: Application
    Filed: March 31, 2005
    Publication date: December 29, 2005
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20050286759
    Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
    Type: Application
    Filed: March 31, 2005
    Publication date: December 29, 2005
    Applicant: Microsoft Corporation
    Inventors: Charles Zitnick, Matthew Uyttendaele, Richard Szeliski, Simon Winder, Sing Kang
  • Publication number: 20050283730
    Abstract: A system and process for providing an interactive video tour of a tour site to a user is presented. In general, the system and process provides an image-based rendering system that enables users to explore remote real world locations, such as a house or a garden. The present approach is based directly on filming an environment, and then using image-based rendering techniques to replay the tour in an interactive manner. As such, the resulting experience is referred to as Interactive Video Tours. The experience is interactive in that the user can move freely along a path, choose between different directions of motion at branch points in the path, and look around in any direction. The user experience is additionally enhanced with multimedia elements such as overview maps, video textures, and sound.
    Type: Application
    Filed: July 19, 2005
    Publication date: December 22, 2005
    Applicant: Microsoft Corporation
    Inventors: Matthew Uyttendaele, Sing Kang, Richard Szeliski, Antonio Criminisi
  • Publication number: 20050243177
    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
    Type: Application
    Filed: January 14, 2005
    Publication date: November 3, 2005
    Applicant: Microsoft Corporation
    Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20050104966
    Abstract: A system and process for creating an interactive digital image, which allows a viewer to interact with a displayed image so as to change it with regard to a desired effect, such as exposure, focus or color, among others. An interactive image includes representative images which depict a scene with some image parameter varying between them. The interactive image also includes an index image, whose pixels each identify the representative image that exhibits the desired effect related to the varied image parameter at a corresponding pixel location. For example, a pixel of the index image might identify the representative image having a correspondingly-located pixel that depicts a portion of the scene at the sharpest focus. One primary form of interaction involves selecting a pixel of a displayed image whereupon the representative image identified in the index image at a corresponding pixel location is displayed in lieu of the currently displayed image.
    Type: Application
    Filed: December 22, 2004
    Publication date: May 19, 2005
    Applicant: Microsoft Corporation
    Inventors: Bernhard Schoelkopf, Kentaro Toyama, Matthew Uyttendaele
  • Publication number: 20050104900
    Abstract: Techniques and tools for displaying/viewing HDR images are described. In one aspect, a background image constructed from HDR image information is displayed along with portions of the HDR image corresponding to one or more regions of interest. The portions have at least one display parameter (e.g., a tone mapping parameter) that differs from a corresponding display parameter for the background image. Regions of interest and display parameters can be determined by a user (e.g., via a GUI). In another aspect, an intermediate image is determined based on image data corresponding to one or more regions of interest of the HDR image. The intermediate image has a narrower dynamic range than the HDR image. The intermediate image or a derived image is then displayed. The techniques and tools can be used to compare, for example, different tone mappings, compression methods, or color spaces in the background and regions of interest.
    Type: Application
    Filed: November 14, 2003
    Publication date: May 19, 2005
    Inventors: Kentaro Toyama, Matthew Uyttendaele, William Crow
  • Publication number: 20050104969
    Abstract: A system and process for creating an interactive digital image, which allows a viewer to interact with a displayed image so as to change it with regard to a desired effect, such as exposure, focus or color, among others. An interactive image includes representative images which depict a scene with some image parameter varying between them. The interactive image also includes an index image, whose pixels each identify the representative image that exhibits the desired effect related to the varied image parameter at a corresponding pixel location. For example, a pixel of the index image might identify the representative image having a correspondingly-located pixel that depicts a portion of the scene at the sharpest focus. One primary form of interaction involves selecting a pixel of a displayed image whereupon the representative image identified in the index image at a corresponding pixel location is displayed in lieu of the currently displayed image.
    Type: Application
    Filed: December 22, 2004
    Publication date: May 19, 2005
    Applicant: Microsoft Corporation
    Inventors: Bernhard Schoelkopf, Kentaro Toyama, Matthew Uyttendaele
  • Publication number: 20050047676
    Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
    Type: Application
    Filed: October 15, 2004
    Publication date: March 3, 2005
    Applicant: Microsoft Corporation
    Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
  • Publication number: 20050013501
    Abstract: A system and process for generating a high dynamic range (HDR) image from a bracketed image sequence, even in the presence of scene or camera motion, is presented. This is accomplished by first selecting one of the images as a reference image. Then, each non-reference image is registered with another one of the images (possibly the reference image itself) whose exposure is both closer to that of the reference image than the exposure of the image under consideration and, among the other images, closest to the exposure of the image under consideration; this registration generates a flow field. The flow fields generated for the non-reference images not already registered with the reference image are concatenated to register each of them with the reference image. Each non-reference image is then warped using its associated flow field. The reference image and the warped images are combined to create a radiance map representing the HDR image.
    Type: Application
    Filed: July 18, 2003
    Publication date: January 20, 2005
    Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
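
The sketches below are loose illustrations of some of the recurring techniques in the abstracts above. They are not the patented methods themselves; all function names, weighting choices, and thresholds are assumptions made for readability.

The HDR abstracts (publications 20050243177, 20050047676, and 20050013501) describe merging differently exposed frames into a radiance map using only "trustworthy" pixels, then tone mapping the result to 8 bits. A minimal sketch, assuming the frames are already registered: a simple hat weighting stands in for the per-pixel trustworthiness test, and a log/gamma operator stands in for the tone-mapping procedure.

```python
import numpy as np

def merge_to_radiance(frames, exposure_times):
    """Merge aligned, exposure-bracketed frames into a relative radiance map.

    `frames` is a list of float32 arrays in [0, 1]; all frames are assumed to
    be registered. Each frame is divided by its exposure time and the results
    are blended with a hat weight that favors well-exposed mid-range pixels.
    """
    acc = np.zeros_like(frames[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(img - 0.5) * 2.0   # hat weight: trust mid-tones
        acc += w * (img / t)                # scale to (relative) radiance
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

def tone_map(radiance, gamma=2.2):
    """Global log/gamma tone mapping of the radiance map to an 8-bit image
    (a generic operator, not the specific one described in the patents)."""
    log_r = np.log1p(radiance)
    norm = log_r / log_r.max()
    return np.uint8(255 * norm ** (1.0 / gamma))
```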
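The interactive-image abstracts (publications 20050104966 and 20050104969) describe an index image whose pixels identify which representative image best exhibits a desired effect at that location, and an interaction that swaps in that image when the user clicks. A minimal sketch, assuming a focus-varying stack and a precomputed per-pixel sharpness score (how the score is computed is not specified here).

```python
import numpy as np

def build_index_image(sharpness):
    """Build an index image for a focus-varying image stack.

    `sharpness` is an (N, H, W) array scoring local focus for each of N
    representative images; the index image simply records, per pixel, which
    representative image scores best.
    """
    return np.argmax(sharpness, axis=0)   # (H, W) array of image indices

def on_click(index_image, stack, y, x):
    """Interaction step: clicking pixel (y, x) returns the representative
    image that the index image identifies at that location, which would then
    be displayed in lieu of the current image."""
    return stack[index_image[y, x]]
```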
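The two-layer representation (publication 20050285874) splits an image into a main layer and a boundary layer built from small areas around depth discontinuities. A rough sketch, assuming a color image and a disparity map; the discontinuity threshold and dilation radius are arbitrary, and the matting step that would produce real per-pixel alpha values is reduced to a binary mask.

```python
import numpy as np
from scipy import ndimage

def split_two_layers(image, disparity, jump=4.0, radius=3):
    """Split an H x W x 3 image plus H x W disparity map into main and
    boundary layers around depth discontinuities.

    Pixels whose disparity differs from a neighbor by more than `jump` are
    treated as discontinuities; dilating them by `radius` forms the boundary
    region that receives foreground color, disparity, and alpha.
    """
    gx = np.abs(np.diff(disparity, axis=1, prepend=disparity[:, :1]))
    gy = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1, :]))
    boundary = ndimage.binary_dilation((gx > jump) | (gy > jump),
                                       iterations=radius)
    main = {"color": image.copy(), "disparity": disparity.copy()}
    bnd = {"color": np.where(boundary[..., None], image, 0),
           "disparity": np.where(boundary, disparity, 0),
           "alpha": boundary.astype(np.float32)}  # a real system would matte here
    return main, bnd
```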
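The segmentation-based stereo abstracts (publications 20050286757 and 20050286758) start from an initial disparity space distribution (DSD) per color segment, computed under the assumption that all pixels in a segment share one disparity. A sketch of that initialization only, assuming rectified grayscale images, integer disparity candidates, and a sum-of-absolute-differences score in place of whatever matching measure the patents actually use; the later refinement and smoothing stages are omitted.

```python
import numpy as np

def initial_dsd(left, right, segments, disparities):
    """Initial disparity space distribution per segment.

    `segments` is an integer label image over `left`; for each segment and
    each candidate disparity, accumulate a matching score as if every pixel
    in the segment had that disparity, then normalize to a distribution.
    """
    n_seg = segments.max() + 1
    dsd = np.zeros((n_seg, len(disparities)))
    for k, d in enumerate(disparities):
        shifted = np.roll(right, d, axis=1)   # shift by candidate disparity
        cost = -np.abs(left - shifted)        # higher is better
        for s in range(n_seg):
            dsd[s, k] = cost[segments == s].mean()
    dsd = np.exp(dsd - dsd.max(axis=1, keepdims=True))
    return dsd / dsd.sum(axis=1, keepdims=True)   # per-segment distribution
```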