Mixing Stereoscopic Image Signals (epo) Patents (Class 348/E13.063)
-
Publication number: 20130215220
Abstract: A method for forming a stereoscopic video of a scene from first and second input digital videos captured using respective first and second digital video cameras, wherein the first and second input digital videos include overlapping scene content and overlapping time durations. The method includes determining camera positions for each frame of the first and second input digital videos, and determining first-eye and second-eye viewpoints for each frame of the stereoscopic video. First-eye and second-eye images are formed for each frame of the stereoscopic video responsive to the corresponding video frames in the first and second input digital videos and the associated camera positions.
Type: Application
Filed: February 21, 2012
Publication date: August 22, 2013
Inventors: Sen Wang, Kevin Edward Spaulding
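A minimal sketch of one step this abstract implies, assuming per-frame camera centers have already been estimated (for example by structure-from-motion): derive left-eye and right-eye viewpoints that straddle the midpoint between the two capture cameras along their baseline. This is illustrative Python/numpy, not the patented method, and every name is an assumption.

```python
# Illustrative only: derive left/right-eye viewpoints for each output frame
# from estimated camera positions of the two capture cameras.
import numpy as np

def eye_viewpoints(cam_a_positions, cam_b_positions, interocular=0.065):
    """cam_*_positions: (N, 3) arrays of per-frame camera centers.
    Returns (N, 3) left-eye and right-eye centers."""
    cam_a = np.asarray(cam_a_positions, dtype=float)
    cam_b = np.asarray(cam_b_positions, dtype=float)
    midpoints = 0.5 * (cam_a + cam_b)        # viewpoint between the two cameras
    baseline = cam_b - cam_a                 # direction joining the cameras
    norm = np.linalg.norm(baseline, axis=1, keepdims=True)
    norm[norm == 0] = 1.0
    unit = baseline / norm
    half = 0.5 * interocular
    return midpoints - half * unit, midpoints + half * unit

left, right = eye_viewpoints(np.zeros((4, 3)), np.tile([0.1, 0.0, 0.0], (4, 1)))
```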
-
Publication number: 20130127988
Abstract: A method for modifying the viewpoint of a main image of a scene captured from a first viewpoint. The method uses one or more complementary images of the scene captured from viewpoints that are different from the first viewpoint. A warped main image is determined corresponding to a target viewpoint by warping the main image responsive to a corresponding range map, wherein the warped main image includes one or more holes corresponding to scene content that was occluded. Warped complementary images are similarly determined by warping the complementary images to the target viewpoint responsive to corresponding range maps. Pixel values to fill the one or more holes in the warped main image are determined using pixel values at corresponding pixel locations in the warped complementary images.
Type: Application
Filed: November 17, 2011
Publication date: May 23, 2013
Inventors: Sen Wang, Lin Zhong
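The warping-and-fill idea lends itself to a compact sketch: forward-warp the main view by per-pixel horizontal disparities (assumed here to have already been derived from the range map), mark unfilled target pixels as holes, and copy pixel values from the warped complementary view at those locations. Illustrative Python/numpy only; the helper names are invented.

```python
# Rough sketch of depth-based warping with hole filling from a second view.
import numpy as np

def warp_horizontal(image, disparity):
    """Forward-warp an (H, W) image by integer horizontal disparities.
    Unfilled target pixels are marked as holes (NaN)."""
    h, w = image.shape
    warped = np.full((h, w), np.nan)
    cols = np.arange(w)
    for y in range(h):
        x_new = cols + disparity[y].astype(int)
        valid = (x_new >= 0) & (x_new < w)
        warped[y, x_new[valid]] = image[y, cols[valid]]
    return warped

def fill_holes(warped_main, warped_complementary):
    """Copy pixels from the warped complementary view wherever the warped
    main view has holes."""
    out = warped_main.copy()
    holes = np.isnan(out)
    out[holes] = warped_complementary[holes]
    return out
```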
-
Publication number: 20130010064
Abstract: A video processing device, which can output stereoscopic video information that enables stereoscopic viewing to a video display device, includes an obtaining unit that obtains the stereoscopic video information, a superimposing unit that superimposes additional video information on the stereoscopic video information, and a transmitting unit that transmits parallax information of the additional video information to the video display device, with the parallax information being associated with the stereoscopic video information on which the additional video information is superimposed.
Type: Application
Filed: March 24, 2011
Publication date: January 10, 2013
Applicant: PANASONIC CORPORATION
Inventor: Tadayoshi Okuda
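A rough illustration of how an overlay and its parallax might be applied: composite the additional video into the left view at its nominal position, shift it horizontally by the parallax value in the right view, and carry the parallax as metadata for the display device. The names and the simple rectangular paste are assumptions, not the device described in the abstract.

```python
# Illustrative overlay placement on a stereo pair with an associated parallax.
import numpy as np

def superimpose(left, right, overlay, x, y, parallax):
    """left/right: (H, W) views; overlay: (h, w) graphic; (x, y): position in
    the left view; parallax: horizontal shift in pixels used in the right view.
    Assumes the shifted overlay stays inside the frame."""
    h, w = overlay.shape
    left, right = left.copy(), right.copy()
    left[y:y + h, x:x + w] = overlay
    xr = x - parallax                       # shifted position for the right eye
    right[y:y + h, xr:xr + w] = overlay
    metadata = {"overlay_parallax": parallax}   # sent alongside the video
    return left, right, metadata
```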
-
Publication number: 20120314029
Abstract: A method for inserting a logo into a stereo video image to generate an overlaid stereo image, the method comprising: detecting presence of stereo pictures in the video image and, when stereo pictures are detected, determining the 3D format of said stereo pictures, said 3D format being a stereo spatially multiplexed format; generating a stereo logo comprising stereo spatially multiplexed logo pictures including a representation of the logo, said stereo spatially multiplexed logo pictures being arranged in said 3D format; and combining the stereo logo and the video image to generate the overlaid stereo image in said 3D format.
Type: Application
Filed: February 19, 2010
Publication date: December 13, 2012
Inventors: Jill MacDonald Boyce, Kumar Ramaswamy, Joan Llach
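For a side-by-side spatially multiplexed picture, the combining step amounts to placing the logo once in each half so both eyes see it at the same position within their view. A hypothetical numpy sketch, not the patented procedure:

```python
# Composite a logo onto a side-by-side stereo frame, once per half.
import numpy as np

def overlay_logo_side_by_side(frame, logo, x, y, alpha=0.7):
    """frame: (H, W, 3) side-by-side stereo picture; logo: (h, w, 3);
    (x, y): logo position within each half-width view."""
    h, w, _ = logo.shape
    half = frame.shape[1] // 2
    out = frame.astype(float).copy()
    for x0 in (x, x + half):                 # left half, then right half
        region = out[y:y + h, x0:x0 + w]
        out[y:y + h, x0:x0 + w] = (1 - alpha) * region + alpha * logo
    return out.astype(frame.dtype)
```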
-
Publication number: 20120314023
Abstract: A method, apparatus and system are provided for the visual inspection of a three-dimensional video stream as it is being re-encoded into a second video format. A portion of a frame of the decoded three-dimensional video stream and a corresponding portion of a frame of the re-encoded three-dimensional video stream are arranged into a combined video frame such that the two frame portions appear together in the combined video frame. A boundary between the frame portions in the combined video frame is manipulated such that a change of disparity on the boundary between the frame portions, and any overlap between them, are not visible.
Type: Application
Filed: October 8, 2010
Publication date: December 13, 2012
Inventors: Jesus Barcons-Palau, Joan Llach
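A toy version of the combined inspection frame: take the left portion from the decoded stream and the right portion from the re-encoded stream, with a caller-chosen boundary column. In practice the boundary would be placed so that the disparity change is not visible, as the abstract notes; this sketch only shows the compositing step and all names are illustrative.

```python
# Split-screen comparison of a decoded frame against its re-encoded version.
import numpy as np

def combined_inspection_frame(decoded, reencoded, boundary_x):
    """decoded/reencoded: (H, W, 3) frames of the same size;
    boundary_x: column where the source switches from decoded to re-encoded."""
    assert decoded.shape == reencoded.shape
    combined = decoded.copy()
    combined[:, boundary_x:] = reencoded[:, boundary_x:]
    return combined
```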
-
Publication number: 20120262547
Abstract: A method of encoding multi-view video using camera parameters and a method of decoding multi-view video using the camera parameters are provided. The encoding method includes detecting the camera parameters from each of a plurality of video data streams input from a multi-view camera in predetermined video units, and adaptively encoding each of the video data streams according to whether it has the camera parameters. Accordingly, it is possible to increase video compression efficiency without degrading video quality.
Type: Application
Filed: June 29, 2012
Publication date: October 18, 2012
Applicants: INDUSTRY-ACADEMIA COOPERATION GROUP OF SEJONG UNIVERSITY, SAMSUNG ELECTRONICS CO., LTD.
Inventor: Yung Lyul LEE
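Schematically, the adaptive choice could look like the dispatch below: views that carry camera parameters are marked for geometry-aware (inter-view) coding, the rest for independent coding. The encoder calls are placeholders, not the actual codec described in the abstract.

```python
# Schematic per-view encoding decision based on available camera parameters.
def encode_multiview(views):
    """views: list of dicts like {"frames": ..., "camera_params": ... or None}."""
    bitstreams = []
    for i, view in enumerate(views):
        if view.get("camera_params") is not None:
            # Inter-view prediction is possible when the geometry is known.
            bitstreams.append(("interview", i, view["camera_params"]))
        else:
            bitstreams.append(("independent", i, None))
    return bitstreams
```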
-
Publication number: 20120200667
Abstract: According to some embodiments, a graphics platform may receive a video signal, including an image of a person, from a video camera. The graphics platform may then add a virtual object to the video signal to create a viewer or broadcast signal. 3D information associated with a spatial relationship between the person and the virtual object is determined. The graphics platform may then create a supplemental signal based on the 3D information, wherein the supplemental signal includes sufficient information to enable the person to interact with the virtual object as if the object were ‘seen’ or sensed from the person's perspective. The supplemental signal may comprise video, audio, and/or pressure, as necessary to enable the person to interact with the virtual object as if he or she were physically present with it.
Type: Application
Filed: November 9, 2011
Publication date: August 9, 2012
Inventors: Michael F. Gay, Frank Golding, Smadar Gefen
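One piece of the 3D relationship mentioned here can be illustrated by expressing the virtual object's position in the person's own coordinate frame, which is the kind of quantity a supplemental video, audio, or pressure cue could be driven by. A purely hypothetical sketch with assumed coordinate conventions:

```python
# Express a virtual object's world position in a person's local frame.
import numpy as np

def object_in_person_frame(person_pos, person_yaw, object_pos):
    """person_pos/object_pos: (3,) world coordinates; person_yaw: heading in
    radians about the vertical (z) axis. Returns the local offset and distance."""
    offset = np.asarray(object_pos, float) - np.asarray(person_pos, float)
    c, s = np.cos(-person_yaw), np.sin(-person_yaw)
    rot = np.array([[c, -s, 0.0],
                    [s,  c, 0.0],
                    [0.0, 0.0, 1.0]])
    local = rot @ offset
    return local, float(np.linalg.norm(local))
```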
-
Publication number: 20120188335
Abstract: A plurality of video input units generate video frames and provide shooting information. A 3D video frame generator creates a 3D video frame by combining the video frames provided by the video input units, and provides 3D video frame composition information indicating the composition type of the video frames included in the 3D video frame, together with resolution control information indicating whether the resolutions of the video frames were adjusted. A 3D video frame encoder outputs an encoded 3D video stream by encoding the 3D video frame provided by the 3D video frame generator. A composition information checker checks 3D video composition information comprising the shooting information, the 3D video frame composition information, and the resolution control information. A 3D video data generator generates 3D video data by combining the 3D video composition information and the encoded 3D video stream.
Type: Application
Filed: January 18, 2012
Publication date: July 26, 2012
Inventors: Gun III LEE, Kwang-Cheol Choi, Jae-Yeon Song, Seo-Young Hwang
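A minimal sketch of producing one side-by-side 3D frame plus the kind of composition metadata the abstract describes, assuming two same-sized input frames and crude column subsampling for the resolution adjustment; the field names are invented:

```python
# Build one side-by-side 3D frame and its composition/resolution metadata.
import numpy as np

def make_3d_frame(left, right):
    """left/right: (H, W, 3) frames from two video input units."""
    half_left = left[:, ::2]                 # crude half-width resample
    half_right = right[:, ::2]
    frame_3d = np.concatenate([half_left, half_right], axis=1)
    composition_info = {
        "composition_type": "side_by_side",
        "resolution_adjusted": True,         # the halves were downscaled
    }
    return frame_3d, composition_info
```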
-
Publication number: 20120007947
Abstract: A system that incorporates teachings of the present disclosure may include, for example, a server having a controller to receive 3D image content with a plurality of left eye frames and a plurality of right eye frames. The controller removes a portion of pixels from each left eye frame and from the corresponding right eye frame, combines the remaining pixels from each left eye frame with the remaining pixels from the corresponding right eye frame to form a plurality of transport frames, where the combined remaining pixels form an alternating pattern based on either alternating rows of pixels or alternating columns of pixels from each left eye frame and the corresponding right eye frame, and encodes the plurality of transport frames for delivery to a media processor. Other embodiments are disclosed.
Type: Application
Filed: July 7, 2010
Publication date: January 12, 2012
Applicant: AT&T INTELLECTUAL PROPERTY I, L.P.
Inventors: PIERRE COSTA, Ahmad Ansari
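The alternating-column variant is easy to sketch: keep the even columns of the left-eye frame and the odd columns of the right-eye frame, interleaved into one transport frame of the original size. Illustrative numpy, not the implementation claimed here:

```python
# Pack left/right eye frames into one column-alternating transport frame.
import numpy as np

def pack_transport_frame(left, right):
    """left/right: (H, W, 3) frames with the same shape."""
    transport = np.empty_like(left)
    transport[:, 0::2] = left[:, 0::2]       # left-eye pixels on even columns
    transport[:, 1::2] = right[:, 1::2]      # right-eye pixels on odd columns
    return transport

def unpack(transport):
    """Rough receiver-side split; the removed columns would be interpolated."""
    return transport[:, 0::2], transport[:, 1::2]
```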
-
Publication number: 20100277468
Abstract: The invention relates to a method and devices for enabling a user to visualise a virtual model in a real environment. According to the invention, a 2D representation of a 3D virtual object is inserted in real time into the video stream of a camera aimed at a real environment in order to form an enriched video stream. A plurality of cameras generating a plurality of video streams can be used simultaneously to visualise the virtual object in the real environment from different viewing angles. A particular video stream is used to dynamically generate the effects of the real environment on the virtual model. The virtual model can be, for example, a digital copy or virtual enrichments of a real copy. A virtual 2D object, for example the representation of a real person, can also be inserted into the enriched video stream.
Type: Application
Filed: August 9, 2006
Publication date: November 4, 2010
Applicant: TOTAL IMMERSION
Inventors: Valentin Lefevre, Jean-Marie Vaidie
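The geometric core of this kind of augmentation is projecting a 3D virtual point into the real camera's image so a 2D representation can be composited at that position. A minimal pinhole-projection sketch with assumed intrinsics and pose, not the invention's pipeline:

```python
# Project a 3D virtual point into the real camera's pixel coordinates.
import numpy as np

def project_point(point_world, rotation, translation, focal, cx, cy):
    """rotation: (3, 3) world-to-camera rotation; translation: (3,) camera
    translation; focal, cx, cy: simple pinhole intrinsics in pixels."""
    p_cam = rotation @ np.asarray(point_world, float) + translation
    if p_cam[2] <= 0:
        return None                          # point is behind the camera
    u = focal * p_cam[0] / p_cam[2] + cx
    v = focal * p_cam[1] / p_cam[2] + cy
    return int(round(u)), int(round(v))
```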