Patents by Inventor Simon Winder
Simon Winder has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 7382931
Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
Type: Grant
Filed: January 14, 2005
Date of Patent: June 3, 2008
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
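The radiance-map step described in the abstract can be sketched in code. This is an illustrative simplification, not the patented algorithm: "trustworthiness" is reduced to a simple under/over-exposure test, the tone mapper is a generic Reinhard-style operator, and the thresholds `lo` and `hi` are assumed values.

```python
import numpy as np

def radiance_map(frames, exposures, lo=0.05, hi=0.95):
    """Combine corresponding pixels from frames of different exposure into
    a radiance estimate. frames: float arrays in [0, 1]; exposures: exposure
    times. A pixel is treated as trustworthy if neither under- nor
    over-exposed (a stand-in for the patent's motion-aware test)."""
    rad = np.zeros_like(frames[0])
    weight = np.zeros_like(frames[0])
    for f, t in zip(frames, exposures):
        trust = (f > lo) & (f < hi)
        rad += np.where(trust, f / t, 0.0)   # radiance ~ pixel value / exposure
        weight += trust
    weight = np.maximum(weight, 1)           # avoid divide-by-zero where no pixel is trusted
    return rad / weight

def tone_map(rad):
    """Map the radiance map to an 8-bit image with a simple global operator."""
    m = rad / (1.0 + rad)
    return (m * 255).astype(np.uint8)
```

The two-frame case below mirrors the alternating short/long exposure scheme: a saturated long-exposure pixel is excluded, and only the trustworthy short-exposure sample contributes to the radiance value.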
-
Patent number: 7379583
Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per pixel depth map if the reconstruction application calls for it.
Type: Grant
Filed: March 31, 2005
Date of Patent: May 27, 2008
Assignee: Microsoft Corporation
Inventors: Charles Zitnick, III, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
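The initial-DSD computation lends itself to a small sketch. This is a minimal illustration with assumed names (`initial_dsd`, `refine_dsd`), and the neighbor-blending refinement below is a placeholder for the patent's projection-based refinement across images.

```python
import numpy as np

def initial_dsd(match_costs):
    """match_costs[s][d]: aggregated matching cost of segment s at candidate
    disparity d, assuming every pixel in a segment shares one disparity.
    Convert costs into a per-segment probability distribution (the DSD)."""
    scores = np.exp(-np.asarray(match_costs, float))
    return scores / scores.sum(axis=1, keepdims=True)

def refine_dsd(dsd, neighbors, iters=5, alpha=0.5):
    """Hypothetical stand-in for the refinement stage: pull each segment's
    DSD toward the mean DSD of its neighboring segments, then renormalize."""
    dsd = dsd.copy()
    for _ in range(iters):
        nb = np.stack([dsd[list(n)].mean(axis=0) if n else dsd[i]
                       for i, n in enumerate(neighbors)])
        dsd = (1 - alpha) * dsd + alpha * nb
        dsd /= dsd.sum(axis=1, keepdims=True)
    return dsd
```

Each row of the DSD stays a valid distribution over disparity hypotheses; the final disparity per segment would be the argmax of its refined row before the smoothing stage relaxes the one-disparity-per-segment assumption.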
-
Patent number: 7324687
Abstract: A system and process for computing a 3D reconstruction of a scene from multiple images thereof, which is based on a color segmentation-based approach, is presented. First, each image is independently segmented. Second, an initial disparity space distribution (DSD) is computed for each segment, using the assumption that all pixels within a segment have the same disparity. Next, each segment's DSD is refined using neighboring segments and its projection into other images. The assumption that each segment has a single disparity is then relaxed during a disparity smoothing stage. The result is a disparity map for each image, which in turn can be used to compute a per pixel depth map if the reconstruction application calls for it.
Type: Grant
Filed: June 28, 2004
Date of Patent: January 29, 2008
Assignee: Microsoft Corporation
Inventors: Charles Zitnick, III, Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
-
Patent number: 7292257
Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
Type: Grant
Filed: June 28, 2004
Date of Patent: November 6, 2007
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Charles Zitnick, III, Matthew Uyttendaele, Simon Winder, Richard Szeliski
-
Patent number: 7286143
Abstract: A system and process for generating, and then rendering and displaying, an interactive viewpoint video in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. In general, the interactive viewpoint video is generated using a small number of cameras to capture multiple video streams. A multi-view 3D reconstruction and matting technique is employed to create a layered representation of the video frames that enables both efficient compression and interactive playback of the captured dynamic scene, while at the same time allowing for real-time rendering.
Type: Grant
Filed: March 31, 2005
Date of Patent: October 23, 2007
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Charles Zitnick, III, Matthew Uyttendaele, Simon Winder, Richard Szeliski
-
Publication number: 20070179921
Abstract: A feature symbol triplets object instance recognizer and method for recognizing specific objects in a query image. Generally, the recognizer and method find repeatable features in the image and match those features between a query image and a set of training images. More specifically, the recognizer and method find features in the query image and then group all possible combinations of three features into feature triplets. Small regions or "patches" are extracted from the query image, and an affine transformation is applied to the patches to identify any similarity between patches in the query image and the training images. The affine transformation is computed using the positions of neighboring features in each feature triplet. Next, all similar patches are found, and pairs of images are aligned to determine whether the patches agree on the position of the object. If they do, the object is found and identified.
Type: Application
Filed: January 27, 2006
Publication date: August 2, 2007
Applicant: Microsoft Corporation
Inventors: Charles Zitnick, Jie Sun, Richard Szeliski, Simon Winder
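The triplet step can be illustrated directly: three point correspondences determine a 2x3 affine transform exactly, which is why neighboring-feature positions in a triplet suffice to normalize patches for comparison. A minimal sketch; the function names are assumptions, not from the publication:

```python
import numpy as np

def affine_from_triplet(src, dst):
    """Solve the 2x3 affine transform mapping three source points to three
    destination points (e.g. a feature triplet's positions in the query and
    a training image)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    # Design matrix [x y 1]; solve for the 6 affine parameters.
    A = np.hstack([src, np.ones((3, 1))])
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T  # 2x3 matrix mapping [x, y, 1] -> [x', y']

def apply_affine(M, pts):
    """Apply a 2x3 affine transform to an array of 2D points."""
    pts = np.asarray(pts, float)
    return pts @ M[:, :2].T + M[:, 2]
```

With the transform in hand, a patch around a query feature can be warped into a training image's frame and compared for similarity, as the abstract describes.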
-
Patent number: 7239757
Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
Type: Grant
Filed: January 23, 2006
Date of Patent: July 3, 2007
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
-
Patent number: 7221366
Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
Type: Grant
Filed: August 3, 2004
Date of Patent: May 22, 2007
Assignee: Microsoft Corporation
Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, III, Richard Szeliski, Sing Bing Kang
-
Patent number: 7206000
Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
Type: Grant
Filed: January 17, 2006
Date of Patent: April 17, 2007
Assignee: Microsoft Corporation
Inventors: Charles Zitnick, III, Richard Szeliski, Sing Bing Kang, Matthew Uyttendaele, Simon Winder
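The main/boundary split can be sketched as follows. This is a simplified illustration: depth discontinuities are thresholded disparity differences, the "prescribed sized area" around them is a small box dilation, and the alpha-matting of boundary pixels is omitted entirely; `jump` and `radius` are assumed parameters.

```python
import numpy as np

def split_layers(image, disparity, jump=4.0, radius=1):
    """Split an image into a main layer and a boundary layer around depth
    discontinuities found in its disparity map."""
    # Disparity differences between adjacent pixels, vertically and horizontally.
    dy = np.abs(np.diff(disparity, axis=0, prepend=disparity[:1]))
    dx = np.abs(np.diff(disparity, axis=1, prepend=disparity[:, :1]))
    edges = (dy > jump) | (dx > jump)
    # Grow the discontinuity set by `radius` pixels (simple box dilation).
    # np.roll wraps at the border; acceptable for a sketch.
    b = edges.copy()
    for _ in range(radius):
        b = b | np.roll(b, 1, 0) | np.roll(b, -1, 0) | np.roll(b, 1, 1) | np.roll(b, -1, 1)
    main = np.where(~b[..., None], image, 0)       # pixels away from discontinuities
    boundary = np.where(b[..., None], image, 0)    # foreground strip (alpha omitted)
    return main, boundary, b
```

In the patent's full representation the boundary layer additionally carries foreground disparities and per-pixel alpha, which is what enables clean compositing during rendering.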
-
Patent number: 7142723
Abstract: A system and process for generating a high dynamic range (HDR) image from a bracketed image sequence, even in the presence of scene or camera motion, is presented. This is accomplished by first selecting one of the images as a reference image. Then, each non-reference image is registered with another one of the images, including the reference image, which exhibits an exposure that is both closer to that of the reference image than the image under consideration and closest among the other images to the exposure of the image under consideration, to generate a flow field. The flow fields generated for the non-reference images not already registered with the reference image are concatenated to register each of them with the reference image. Each non-reference image is then warped using its associated flow field. The reference image and the warped images are combined to create a radiance map representing the HDR image.
Type: Grant
Filed: July 18, 2003
Date of Patent: November 28, 2006
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Matthew T. Uyttendaele, Simon Winder, Richard Szeliski
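The registration ordering described above reduces to a chain-selection rule: each non-reference image is paired with the image whose exposure is closer to the reference's than its own, and closest to its own among those candidates. A minimal sketch of just that rule (flow estimation, concatenation, and warping are omitted; the function name is an assumption, and distinct exposures are assumed):

```python
def registration_chain(exposures, ref):
    """For each non-reference image, pick its registration target.
    Returns {image_index: target_index}; following targets repeatedly
    leads every image to the reference, which is why concatenating the
    per-link flow fields registers each image with the reference."""
    ref_e = exposures[ref]
    chain = {}
    for i, e in enumerate(exposures):
        if i == ref:
            continue
        # Candidates whose exposure is strictly closer to the reference's.
        cand = [j for j, ej in enumerate(exposures)
                if j != i and abs(ej - ref_e) < abs(e - ref_e)]
        # Among those, the one with exposure closest to this image's own.
        chain[i] = min(cand, key=lambda j: abs(exposures[j] - e))
    return chain
```

For a bracket of exposures [1, 2, 4, 8] with the 4-unit image as reference, the 1-unit image registers to the 2-unit image, which registers to the reference, so only small exposure gaps are ever matched directly.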
-
Patent number: 7142209
Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
Type: Grant
Filed: March 31, 2005
Date of Patent: November 28, 2006
Assignee: Microsoft Corporation
Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, III, Richard Szeliski, Sing Bing Kang
-
Publication number: 20060133688
Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
Type: Application
Filed: January 23, 2006
Publication date: June 22, 2006
Applicant: Microsoft Corporation
Inventors: Sing Kang, Matthew Uyttendaele, Simon Winder, Richard Szeliski
-
Publication number: 20060114253
Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
Type: Application
Filed: January 17, 2006
Publication date: June 1, 2006
Applicant: Microsoft Corporation
Inventors: Charles Zitnick, Richard Szeliski, Sing Kang, Matthew Uyttendaele, Simon Winder
-
Patent number: 7015926
Abstract: A system and process for generating a two-layer, 3D representation of a digital or digitized image from the image and a pixel disparity map of the image is presented. The two-layer representation includes a main layer having pixels exhibiting background colors and background disparities associated with correspondingly located pixels of depth discontinuity areas in the image, as well as pixels exhibiting colors and disparities associated with correspondingly located pixels of the image not found in these depth discontinuity areas. The other layer is a boundary layer made up of pixels exhibiting foreground colors, foreground disparities and alpha values associated with the correspondingly located pixels of the depth discontinuity areas. The depth discontinuity areas correspond to prescribed sized areas surrounding depth discontinuities found in the image using a disparity map thereof.
Type: Grant
Filed: June 28, 2004
Date of Patent: March 21, 2006
Assignee: Microsoft Corporation
Inventors: Charles Lawrence Zitnick, III, Richard Szeliski, Sing Bing Kang, Matthew T. Uyttendaele, Simon Winder
-
Patent number: 7010174
Abstract: A system and process for generating High Dynamic Range (HDR) video is presented which involves first capturing a video image sequence while varying the exposure so as to alternate between frames having a shorter and longer exposure. The exposure for each frame is set prior to it being captured as a function of the pixel brightness distribution in preceding frames. Next, for each frame of the video, the corresponding pixels between the frame under consideration and both preceding and subsequent frames are identified. For each corresponding pixel set, at least one pixel is identified as representing a trustworthy pixel. The pixel color information associated with the trustworthy pixels is then employed to compute a radiance value for each pixel set to form a radiance map. A tone mapping procedure can then be performed to convert the radiance map into an 8-bit representation of the HDR frame.
Type: Grant
Filed: October 15, 2004
Date of Patent: March 7, 2006
Assignee: Microsoft Corporation
Inventors: Sing Bing Kang, Matthew T. Uyttendaele, Simon Winder, Richard Szeliski
-
Publication number: 20060028473
Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
Type: Application
Filed: August 3, 2004
Publication date: February 9, 2006
Applicant: Microsoft Corporation
Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
-
Publication number: 20060031917
Abstract: A process for compressing and decompressing non-keyframes in sequential sets of contemporaneous video frames making up multiple video streams, where the video frames in a set depict substantially the same scene from different viewpoints. Each set of contemporaneous video frames has a plurality of frames designated as keyframes, with the remaining frames being non-keyframes. In one embodiment, the non-keyframes are compressed using a multi-directional spatial prediction technique. In another embodiment, the non-keyframes of each set of contemporaneous video frames are compressed using a combined chaining and spatial prediction compression technique. The spatial prediction compression technique employed can be a single-direction technique, where just one reference frame, and so one chain, is used to predict each non-keyframe, or a multi-directional technique, where two or more reference frames, and so chains, are used to predict each non-keyframe.
Type: Application
Filed: July 15, 2005
Publication date: February 9, 2006
Applicant: Microsoft Corporation
Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
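A minimal sketch of multi-directional spatial prediction for a non-keyframe. Assumptions are loudly simplified: the per-pixel prediction is just the mean of the reference frames (a stand-in for warping each reference into the target viewpoint), the residual is coarsely quantized, and the `quant` step size is an invented parameter.

```python
import numpy as np

def predict_and_encode(frame, references, quant=8):
    """Predict a non-keyframe from two or more reference frames
    (multi-directional prediction) and quantize the residual."""
    pred = np.mean(references, axis=0)          # stand-in for viewpoint warping
    residual = frame - pred
    return np.round(residual / quant).astype(np.int16)

def decode(q, references, quant=8):
    """Reconstruct the non-keyframe from the same references plus the
    dequantized residual."""
    pred = np.mean(references, axis=0)
    return pred + q.astype(float) * quant
```

The better the references predict the non-keyframe, the closer the residual is to zero and the fewer bits it costs, which is the point of predicting across viewpoints in a set of contemporaneous frames.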
-
Publication number: 20060028489
Abstract: A system and process for rendering and displaying an interactive viewpoint video is presented in which a user can watch a dynamic scene while manipulating (freezing, slowing down, or reversing) time and changing the viewpoint at will. The ability to interactively control viewpoint while watching a video is an exciting new application for image-based rendering. Because any intermediate view can be synthesized at any time, with the potential for space-time manipulation, this type of video has been dubbed interactive viewpoint video.
Type: Application
Filed: March 31, 2005
Publication date: February 9, 2006
Applicant: Microsoft Corporation
Inventors: Matthew Uyttendaele, Simon Winder, Charles Zitnick, Richard Szeliski, Sing Kang
-
Publication number: 20060031915
Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints that form a grid of viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
Type: Application
Filed: March 31, 2005
Publication date: February 9, 2006
Applicant: Microsoft Corporation
Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang
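The per-layer coding assignment above amounts to a simple dispatch over views, sketched here with assumed names; the strings stand in for actual inter-frame, spatial-prediction, and intra-frame codecs.

```python
def compression_plan(num_views, keyframes):
    """For one contemporaneous frame set: keyframe main layers get temporal
    (inter-frame) coding, non-keyframe main layers get spatial prediction
    from a keyframe, and every boundary layer gets intra-frame coding."""
    plan = {}
    for v in range(num_views):
        main = "inter-frame (temporal)" if v in keyframes else "spatial prediction"
        plan[v] = {"main": main, "boundary": "intra-frame"}
    return plan
```

Running the dispatch per frame set in time order reproduces the structure the abstract describes, with decompression simply inverting each codec choice.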
-
Publication number: 20060029134
Abstract: A system and process for compressing and decompressing multiple video streams depicting substantially the same dynamic scene from different viewpoints. Each frame in each contemporaneous set of video frames of the multiple streams is represented by at least two layers: a main layer and a boundary layer. Compression of the main layers involves first designating one or more of these layers in each set of contemporaneous frames as keyframes. For each set of contemporaneous frames in time sequence order, the main layer of each keyframe is compressed using an inter-frame compression technique. In addition, the main layer of each non-keyframe within the frame set under consideration is compressed using a spatial prediction compression technique. Finally, the boundary layers of each frame in the current frame set are each compressed using an intra-frame compression technique. Decompression is generally the reverse of the compression process.
Type: Application
Filed: August 3, 2004
Publication date: February 9, 2006
Applicant: Microsoft Corporation
Inventors: Simon Winder, Matthew Uyttendaele, Charles Zitnick, Richard Szeliski, Sing Kang