Coding Or Decoding Stereoscopic Image Signals (epo) Patents (Class 348/E13.062)
-
Patent number: 12003719
Abstract: Disclosed herein are a method, an apparatus and a storage medium for image encoding/decoding using a segmentation map. A feature vector for an image may be extracted using a segmentation map. The image may be encoded using the segmentation map and the feature vector. An output stream from an encoding apparatus may include a video stream and a feature stream. An input stream to a decoding apparatus may include a video stream and a feature stream. The image may be reconstructed using a reconstructed segmentation map and a reconstructed feature vector.
Type: Grant
Filed: November 24, 2021
Date of Patent: June 4, 2024
Assignee: ELECTRONICS AND TELECOMMUNICATIONS RESEARCH INSTITUTE
Inventors: Hyon-Gon Choo, Hee-Kyung Lee, Han-Shin Lim, Jeong-Il Seo, Won-Sik Cheong
-
Patent number: 11632532
Abstract: A three-dimensional model distribution method includes: distributing a first model, which is a three-dimensional model of a target space in a target time period, in a first distribution mode; and distributing a second model, which is a three-dimensional model of the target space in the target time period and makes a smaller change per unit time than the first model, in a second distribution mode different from the first distribution mode.
Type: Grant
Filed: December 21, 2021
Date of Patent: April 18, 2023
Assignee: PANASONIC INTELLECTUAL PROPERTY CORPORATION OF AMERICA
Inventors: Toshiyasu Sugio, Toru Matsunobu, Satoshi Yoshikawa, Tatsuya Koyama, Yoichi Sugino
-
Patent number: 10237563
Abstract: A system and method are provided for a 3D modeling system with which an encoded video stream is produced. The system includes a content engine, an encoder, and a fixed function engine. The fixed function engine receives content information from the content engine. The fixed function engine produces encoder information from the content information. The encoder uses the encoder information to produce an encoded video stream having at least one of a higher quality and a lower bandwidth than a video stream encoded without the encoder information.
Type: Grant
Filed: December 11, 2012
Date of Patent: March 19, 2019
Assignee: Nvidia Corporation
Inventors: Hassane S. Azar, Bryan Dudash, Rochelle Pereira, Dawid Pajak
-
Patent number: 9819959
Abstract: A method for three-dimensional or multi-view video coding receives input data associated with a current block of a current picture in a current dependent view, wherein the current block is inter-time coded based on an inter-time reference block located by a motion vector (MV), determines estimated DV (disparity vector) candidates from neighboring DVs, applies an evaluation function to the estimated DV candidates to obtain evaluation results for the estimated DV candidates, and selects a final estimated DV. The method then determines an inter-view reference region in an inter-view reference picture, determines first pseudo residues, wherein the first pseudo residues correspond to first differences between the inter-view reference region and a pseudo reference region in a pseudo reference picture located by the MV, and applies encoding or decoding to the input data associated with residues of the current block utilizing the first pseudo residues.
Type: Grant
Filed: April 6, 2017
Date of Patent: November 14, 2017
Assignee: HFI INNOVATION INC.
Inventors: Kai Zhang, Jicheng An, Jian-Liang Lin
-
Patent number: 9648342
Abstract: A method and apparatus using pseudo residues to predict current residues for three-dimensional or multi-view video coding are disclosed. The method first receives input data associated with a current block of a current picture in a current dependent view and determines an inter-view reference block in a first inter-view reference picture in a reference view according to a DV (disparity vector), where the current picture and the first inter-view reference picture correspond to the same time instance. Pseudo residues are then determined and used for encoding or decoding of the current block, where the pseudo residues correspond to differences between a corresponding region in an inter-time reference picture in the current dependent view and a pseudo reference region in a pseudo reference picture in the reference view, and where the inter-time reference picture and the pseudo reference picture correspond to the same time instance.
Type: Grant
Filed: November 14, 2013
Date of Patent: May 9, 2017
Assignee: HFI INNOVATION INC.
Inventors: Kai Zhang, Jicheng An, Jian-Liang Lin
-
Patent number: 8854428
Abstract: A method for processing a 3D video signal and a digital broadcast receiver for performing the processing method are disclosed. A method for receiving a 3D broadcast signal includes receiving signaling information of at least one stream for a 3 Dimension TeleVision (3DTV) service and a two dimensional (2D) video stream, demultiplexing at least one stream for the 3DTV service and the 2D video stream based on the signaling information, decoding at least one demultiplexed stream for the 3DTV service and the demultiplexed 2D video stream, and outputting a 3D video signal by formatting at least one decoded stream for the 3DTV service and the decoded 2D video stream.
Type: Grant
Filed: December 18, 2009
Date of Patent: October 7, 2014
Assignee: LG Electronics, Inc.
Inventors: Jong Yeul Suh, Jeong Hyu Yang
-
Patent number: 8842903
Abstract: An apparatus includes a storage unit to receive and store an image file, a processor to parse a media data field of the image file including one or more image data samples and to parse a media header field including an image type data field indicating whether each of the one or more image data samples is one of 2 dimensional (2D) image data and 3 dimensional (3D) stereoscopic image data to generate an image corresponding to one of a 2D image and a 3D stereoscopic image based on the image type data field of the image file, and a display unit to display the generated image according to the image type data field of the image file.
Type: Grant
Filed: December 8, 2008
Date of Patent: September 23, 2014
Assignee: Samsung Electronics Co., Ltd.
Inventors: Seo-Young Hwang, Gun-Ill Lee, Jae-Yeon Song, Yong-Tae Kim
-
Publication number: 20140015922
Abstract: In one embodiment, a method comprising receiving at a single encoding engine an input video stream having one or more pictures of a first size; and generating by the single encoding engine, in parallel, plural encoded streams, a first of the encoded streams comprising one or more pictures of the first size and a second of the encoded streams comprising one or more pictures of a second size that is smaller than the first size, the encoding of the second stream based on sharing video coding information used in encoding the first encoded stream.
Type: Application
Filed: July 10, 2012
Publication date: January 16, 2014
Applicant: BROADCOM CORPORATION
Inventor: Lei Zhang
-
Publication number: 20140015923
Abstract: Systems and methods may be provided embodying a novel approach to measuring degradation (or distortion) by analyzing disparity maps from original 3D video and reconstructed 3D video. The disparity maps may be derived using a stereo-matching algorithm exploiting 2-view stereo image disparity. An overall distortion measure may also be determined resulting from the weighted sum of plural measures of distortions, one of the plural distortion measures corresponding to a measure of disparity degradation, and another one corresponding to a measure of geometrical distortion. The measure (or overall distortion measure) is used during real-time encoding to effect various decisions, including mode decision in the coding of each corresponding stereo pair, and in rate control (including stereo pair quantization).
Type: Application
Filed: July 16, 2012
Publication date: January 16, 2014
Applicant: CISCO TECHNOLOGY, INC.
Inventors: James Au, Jaehin In, Arturo A. Rodriguez, Ali Jerbi, Jiawei Huang
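The weighted-sum distortion measure described in this abstract can be written down directly. A minimal sketch, assuming a mean-absolute-difference disparity-degradation measure and arbitrary example weights (none of which are taken from the publication):

```python
import numpy as np

def disparity_degradation(disp_original, disp_reconstructed):
    # Mean absolute difference between the disparity maps derived from the
    # original and the reconstructed stereo pairs (both HxW arrays).
    return float(np.mean(np.abs(disp_original.astype(np.float64)
                                - disp_reconstructed.astype(np.float64))))

def overall_distortion(d_disparity, d_geometric, w_disparity=0.6, w_geometric=0.4):
    # Weighted sum of a disparity-degradation measure and a geometrical-
    # distortion measure; the weights here are placeholders.
    return w_disparity * d_disparity + w_geometric * d_geometric
```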
-
Publication number: 20140002594
Abstract: A depth map image, unlike a texture view, has smooth regions without complex texture and abrupt changes of pixel value at the object edges. While conventional Inter-prediction skip mode is very efficient for coding texture views, it does not include any Intra-prediction capability, which can be very efficient for coding smooth regions. The hybrid prediction skip mode according to the presently claimed invention includes an Inter-prediction Skip mode coupled with various Intra-prediction modes. The selection of the prediction mode is made by computing a Side Match Distortion (SMD) for the prediction modes. Because no additional overhead indicator bit is required and the bitstream syntax is not altered, high coding efficiency is maintained and the coding scheme for coding depth maps in accordance with the presently disclosed invention can be implemented easily as an extension to existing standards.
Type: Application
Filed: June 29, 2012
Publication date: January 2, 2014
Applicant: Hong Kong Applied Science and Technology Research Institute Company Limited
Inventors: Yui-Lam Chan, Sik-Ho Tsang, Wan-Chi Siu, Hoi-Kok Cheung, Wai-Lam Hui, Pak-Kong Lun, Junyan Ren
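As an illustration only, a Side Match Distortion can be computed as the mismatch between a candidate prediction's outer border and the already-reconstructed pixels above and to the left of the block; this border-matching definition and the mode-selection loop below are assumptions of the sketch, not the claimed procedure:

```python
import numpy as np

def side_match_distortion(recon, pred_block, top, left):
    # Sum of absolute differences between the candidate block's first row /
    # column and the reconstructed pixels just above / to the left of it.
    h, w = pred_block.shape
    smd = 0.0
    if top > 0:
        smd += np.abs(recon[top - 1, left:left + w].astype(np.float64)
                      - pred_block[0, :].astype(np.float64)).sum()
    if left > 0:
        smd += np.abs(recon[top:top + h, left - 1].astype(np.float64)
                      - pred_block[:, 0].astype(np.float64)).sum()
    return smd

def select_skip_mode(recon, candidate_predictions, top, left):
    # candidate_predictions maps a mode name (the Inter-prediction skip
    # candidate or one of the Intra-prediction candidates) to its predicted
    # block; the mode with the smallest SMD is chosen, so no indicator bit
    # needs to be transmitted.
    return min(candidate_predictions,
               key=lambda m: side_match_distortion(
                   recon, candidate_predictions[m], top, left))
```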
-
Publication number: 20130321572
Abstract: An image processing method includes: receiving a disparity range setting which defines a target disparity range; receiving 3D image data with original disparity not fully within the target disparity range; receiving auxiliary graphical data with original disparity fully beyond the target disparity range; and generating modified 3D image data, including at least a modified portion with modified disparity fully within the target disparity range, by modifying at least a portion of the received 3D image data according to the obtained disparity range setting. At least the modified portion of the modified 3D image data is derived from at least the portion of the received 3D image data that has disparity overlapped with disparity of the received auxiliary graphical data. With the help of the disparity modification, the playback of the 3D image data may be protected from being obstructed by the display of the auxiliary graphical data.
Type: Application
Filed: May 31, 2012
Publication date: December 5, 2013
Inventors: Cheng-Tsai Ho, Ding-Yun Chen, Chi-Cheng Ju
-
Publication number: 20130321573
Abstract: Methods are disclosed for identifying fields of imaging data for display of respective sets of same time coincident different views in a video stream, such as for identifying sets of left-eye and right-eye perspective views taken at the same time in the stereoscopic imaging of an image subject matter. The video stream includes coded markers in the imaging data which are detected to identify the fields for displaying views of the same set, and for distinguishing the fields of one set from those of another. In one embodiment, the fields of one set are identified by coding each field for display of a same solid primary color last line, and the fields of a next successive set are identified by coding each field for display of a same secondary color last line, using a secondary color that is complementary to the primary color.
Type: Application
Filed: June 1, 2012
Publication date: December 5, 2013
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventors: Nathan A. Buettner, Marshall C. Capps
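A sketch of how such last-line markers might be detected and used to group fields into same-time sets; the specific red/cyan marker pair and the grouping loop are assumptions of this sketch:

```python
import numpy as np

# Hypothetical marker colors: a solid primary (red) last line for one set of
# fields and its complementary secondary (cyan) for the next set.
PRIMARY = np.array([255, 0, 0], dtype=np.uint8)
SECONDARY = np.array([0, 255, 255], dtype=np.uint8)

def field_set_marker(field):
    # field is an HxWx3 array; classify it by the solid color of its last line.
    last_line = field[-1]
    if np.all(last_line == PRIMARY):
        return "A"
    if np.all(last_line == SECONDARY):
        return "B"
    return None

def group_stereo_fields(fields):
    # Consecutive fields carrying the same marker belong to one set of
    # same-time views (e.g. a left-eye / right-eye pair).
    sets, current, marker = [], [], None
    for f in fields:
        m = field_set_marker(f)
        if m != marker and current:
            sets.append(current)
            current = []
        marker = m
        current.append(f)
    if current:
        sets.append(current)
    return sets
```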
-
Publication number: 20130321574
Abstract: The disclosed subject matter relates to providing a view synthesis distortion model (VSDM) for multiview video coding (MVC). The disclosed VSDM can facilitate determining quantization values and rate values based on model parameters for encoding depth information. Further, the VSDM can facilitate compression of depth information based on the determined quantization values and rate values. Compression of depth information can provide for reduced bandwidth consumption for dissemination of encoded multiview content for applications such as 3D video, free viewpoint TV, etc. Further, a feedback element can be employed to update the VSDM based on a comparison of a reconstituted version of the content, from coded depth information, against a reference version of the content, from reference depth information.
Type: Application
Filed: June 4, 2012
Publication date: December 5, 2013
Applicant: CITY UNIVERSITY OF HONG KONG
Inventors: Yun Zhang, Sam Tak Wu Kwong
-
Publication number: 20130286158
Abstract: A dual-channel three-dimension projector is provided. The dual-channel three-dimension projector comprises a video processor, an FPGA, a first driver, a second driver and a digital micromirror device (DMD). The video processor receives a first video data via a first input interface and a second video data via a second input interface to generate a left-eye signal and a right-eye signal. The FPGA receives the left-eye signal and the right-eye signal via two paths respectively, and generates a left-image signal and a right-image signal. The first driver receives the left-image signal to generate a left-image control signal. The second driver receives the right-image signal to generate a right-image control signal. The DMD electrically connected to the first driver and the second driver alternately projects a left-eye image and a right-eye image according to the left-image control signal and the right-image control signal.
Type: Application
Filed: June 28, 2012
Publication date: October 31, 2013
Applicant: DELTA ELECTRONICS, INC.
Inventors: Chui-Fan CHIU, Harold BELLIS, Adam KUNZMAN, Kenneth BELL
-
Publication number: 20130258050
Abstract: A method and device for multiview distributed video coding with adaptive syndrome bit rate control are disclosed. In one embodiment, multiple groups, including video frames coming from associated digital cameras, are formed. Further, video frames coming from a predetermined number of the digital cameras are declared as key video frames. Furthermore, video frames coming from remaining digital cameras are declared as non-key video frames. In addition, the key video frames are encoded to obtain encoded bits. Moreover, the non-key video frames are encoded to obtain syndrome bits. Also, the encoded key video frames are decoded, to obtain decoded bits, and the encoded non-key video frames are decoded, to obtain decoded bits and CRC bits. Further, an optimal number of syndrome bits in each non-key video frame are determined. Furthermore, the encoded bits and determined optimal number of syndrome bits are sent to multiple receivers.
Type: Application
Filed: August 24, 2012
Publication date: October 3, 2013
Inventor: VIJAY KUMAR KODAVALLA
-
Publication number: 20130250055
Abstract: A 3D video encoding rate controlling method and an apparatus using the method are disclosed. An image encoding method includes encoding first and second images at a first encoding ratio, with an encoding rate of the first image different from an encoding rate of the second image, and after encoding the first and second images at the first encoding ratio, encoding the first and second images at a second encoding ratio. Accordingly, it may be possible to provide a viewer with 3D images while minimizing visual fatigue in consideration of human visual characteristics.
Type: Application
Filed: October 12, 2012
Publication date: September 26, 2013
Applicant: Electronics and Telecommunications Research Institute
Inventors: Suk Hee CHO, Se Yoon JEONG, Jong Ho KIM, Jin Soo CHOI, Jin Woong KIM
-
Publication number: 20130242044
Abstract: A shutter 2D to 3D video conversion box, including an interface module, a decoding module, a control module, a sync signal amplifying and transmission module, and a sync signal receiving and processing module. The interface module, sync signal amplifying and transmission module, and control module are connected to the decoding module; and the interface module includes an HDMI signal input module and an HDMI signal output module. The decoding module converts 2D video signals into 3D digital signals. The control module controls the decoding process, and generates and transmits sync signals. The sync signal amplifying and transmission module amplifies and transmits the sync signals. The sync signal receiving and processing module receives and processes the sync signals sent from the control module.
Type: Application
Filed: August 16, 2012
Publication date: September 19, 2013
Applicant: XIAMEN SENHUI ELECTRONICS CO., LTD.
Inventor: Xian LI
-
Publication number: 20130182068
Abstract: A smart 3D HDMI video splitter is disclosed. When a 3D video signal enters the smart splitter, a field-programmable gate array converts the 3D signal so that the smart 3D HDMI video splitter outputs a 3D or 2D signal according to the type of the television, display or AVR amplifier.
Type: Application
Filed: September 14, 2012
Publication date: July 18, 2013
Applicant: DA2 Technologies Corporation
Inventors: Chuan-Hung CHENG, Chin-Shih Chang, Shu-Cheng Liu
-
Publication number: 20130176389
Abstract: In one example, a video coder is configured to code information indicative of whether view synthesis prediction is enabled for video data. When the information indicates that view synthesis prediction is enabled for the video data, the video coder may generate a view synthesis picture using the video data and code at least a portion of a current picture relative to the view synthesis picture. The at least portion of the current picture may comprise, for example, a block (e.g., a PU, a CU, a macroblock, or a partition of a macroblock), a slice, a tile, a wavefront, or the entirety of the current picture. On the other hand, when the information indicates that view synthesis prediction is not enabled for the video data, the video coder may code the current picture using at least one of intra-prediction, temporal inter-prediction, and inter-view prediction without reference to any view synthesis pictures.
Type: Application
Filed: August 17, 2012
Publication date: July 11, 2013
Applicant: QUALCOMM Incorporated
Inventors: Ying Chen, Ye-Kui Wang, Marta Karczewicz
-
Publication number: 20130162769
Abstract: An auto-detect method for detecting a single-frame image format is provided. A single-frame image is divided into a plurality of macro-blocks. Each of the macro-blocks is divided into a plurality of sub-blocks. A meta-block is allocated in each of the sub-blocks. A pixel luminance sum characteristic value for each of the meta-blocks is calculated. A first confidence between a left half and a right half of the single-frame image is calculated according to the pixel luminance sum characteristic values. A second confidence between an upper half and a lower half of the single-frame image is calculated according to the pixel luminance sum characteristic values. A format of the single-frame image is determined according to the pixel luminance sum characteristic values, and the first and second confidences of the single-frame image.
Type: Application
Filed: August 23, 2012
Publication date: June 27, 2013
Applicant: NOVATEK MICROELECTRONICS CORP.
Inventors: Lei ZHOU, Heng Yu, Chun Wang
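A sketch of the half-image comparison this abstract describes, using block luminance sums and a correlation coefficient as the "confidence"; the correlation test, the 0.9 threshold, and the 16-pixel block size are assumptions of the sketch rather than the disclosed computation:

```python
import numpy as np

def block_luma_sums(luma, block=16):
    # Sum of luminance inside each block of a 2D luma array (the frame is
    # cropped to a whole number of blocks).
    h = (luma.shape[0] // block) * block
    w = (luma.shape[1] // block) * block
    l = luma[:h, :w].astype(np.float64)
    return l.reshape(h // block, block, w // block, block).sum(axis=(1, 3))

def detect_frame_packing(luma, block=16, threshold=0.9):
    # Correlate the block sums of the left/right halves and of the
    # upper/lower halves; a strongly similar pair of halves suggests a
    # side-by-side or top-and-bottom packed single frame.
    sums = block_luma_sums(luma, block)
    half_h, half_w = sums.shape[0] // 2, sums.shape[1] // 2
    lr = np.corrcoef(sums[:, :half_w].ravel(),
                     sums[:, half_w:2 * half_w].ravel())[0, 1]
    tb = np.corrcoef(sums[:half_h].ravel(),
                     sums[half_h:2 * half_h].ravel())[0, 1]
    if lr > threshold and lr >= tb:
        return "side-by-side"
    if tb > threshold:
        return "top-and-bottom"
    return "2D"
```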
-
Publication number: 20130147912
Abstract: A 3D video and graphics processing system may include at least one interface to receive an input video stream. The input video stream may include a 2D, a 3D, or a 2D/3D mixed content stream. A decoder may decode the input video stream. A processor may determine a source format of the input video stream from the decoded input video stream, and determine a target format for an external device to use to display content. The processor may further determine whether the source format matches the target format; if yes, the processor may send the input video stream to the at least one interface, and if no, the processor may modify the input video stream to be in the target format and send the modified input video stream to the at least one interface, for transmitting as an output video stream to the external device.
Type: Application
Filed: December 9, 2011
Publication date: June 13, 2013
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Jae Hoon Kim, David M. Baylon
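The match-or-convert decision reduces to a small routing step; in this sketch `convert` stands for a hypothetical caller-supplied format converter rather than anything named in the publication:

```python
def route_stream(input_stream, source_format, target_format, convert):
    # Pass the stream through untouched when the source format already
    # matches what the external display device expects; otherwise convert
    # it to the target format before sending it out.
    if source_format == target_format:
        return input_stream
    return convert(input_stream, source_format, target_format)

# Example: route_stream(stream, "side-by-side-3D", "frame-packed-3D", converter)
```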
-
Patent number: 8462196
Abstract: Provided are methods and apparatuses for generating a stereoscopic image format and reconstructing stereoscopic images from the stereoscopic image format. The method of generating a stereoscopic image format for compression or transmission of stereoscopic images includes receiving a base view image and an additional view image, determining block pixel information for the stereoscopic image format for each block position using first block pixel information of the base view image and second block pixel information of the additional view image based on blocks obtained by dividing the base view image and the additional view image, and disposing the determined block pixel information in each block position, thereby generating a combined image including pixel information of the base view image and pixel information of the additional view image.
Type: Grant
Filed: November 30, 2007
Date of Patent: June 11, 2013
Assignee: Samsung Electronics Co., Ltd.
Inventors: Yong-tae Kim, Jae-seung Kim, Moon-seok Jang
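A minimal sketch of building such a combined image block by block, assuming a checkerboard rule for choosing between base-view and additional-view blocks (the actual per-position selection rule is defined by the patent, not by this sketch):

```python
import numpy as np

def combine_views_blockwise(base, additional, block=8):
    # At each block position take the block either from the base view or
    # from the additional view, here alternating in a checkerboard pattern,
    # so the combined image carries pixel information from both views.
    assert base.shape == additional.shape
    combined = base.copy()
    h, w = base.shape[:2]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            if ((by // block) + (bx // block)) % 2 == 1:
                combined[by:by + block, bx:bx + block] = \
                    additional[by:by + block, bx:bx + block]
    return combined
```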
-
Publication number: 20130141531
Abstract: A compression method and apparatus for a depth map in a 3D video are provided. The compression apparatus includes an edge detection module, a homogenizing module, and a compression encoding module. An edge detection is performed on a depth map of a frame in the 3D video. When at least one macroblock in the frame with no object edge passing through is found, a homogenizing processing is performed on the at least one macroblock. The depth map is then encoded. Therefore, data quantity might be decreased when the depth map is compressed and encoded according to the present disclosure.
Type: Application
Filed: January 17, 2012
Publication date: June 6, 2013
Applicant: Industrial Technology Research Institute
Inventors: Jih-Sheng Tu, Jung-Yang Kao
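A sketch of the edge-detect-then-homogenize idea, assuming a simple gradient-magnitude edge test and replacement of edge-free macroblocks by their mean depth; the threshold and the 16x16 macroblock size are illustrative:

```python
import numpy as np

def homogenize_depth_map(depth, mb=16, edge_threshold=8.0):
    # Flatten every macroblock that no object edge passes through, so the
    # subsequent encoder spends fewer bits on featureless depth regions.
    gy, gx = np.gradient(depth.astype(np.float64))
    edges = np.hypot(gx, gy) > edge_threshold
    out = depth.copy()
    h, w = depth.shape
    for y in range(0, h - mb + 1, mb):
        for x in range(0, w - mb + 1, mb):
            if not edges[y:y + mb, x:x + mb].any():
                out[y:y + mb, x:x + mb] = int(depth[y:y + mb, x:x + mb].mean())
    return out
```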
-
Publication number: 20130127987
Abstract: In one example, a video coder, such as a video encoder or a video decoder, is configured to code a first set of one or more depth range values for a first set of video data, wherein the first set of one or more depth range values have respective first precisions, code a second set of one or more depth range values for a second set of video data, wherein the second set of one or more depth range values have respective second precisions different than the respective first precisions, and code at least a portion of the second set of video data using the second set of one or more depth range values. In this manner, the video coder may update precisions (e.g., numbers of bits) used to represent depth range values for coding multiview plus depth video data.
Type: Application
Filed: August 15, 2012
Publication date: May 23, 2013
Applicant: QUALCOMM INCORPORATED
Inventors: Li Zhang, Ying Chen, Marta Karczewicz
-
Publication number: 20130113880
Abstract: A High Efficiency Video Coding (HEVC) receiver is provided with a method for adaptive loop filtering. The receiver accepts digital information representing an image, and adaptive loop filter (ALF) parameters with no DC coefficient of weighting. The image is reconstructed using the digital information and estimates derived from the digital information. An ALF filter is constructed from the ALF parameters, and is used to correct for distortion in the reconstructed image. Typically, the receiver accepts a flag signal to indicate whether the DC coefficients have been transmitted or not. In other aspects, center luma coefficients are estimated from other coefficients, and the use of k values is simplified.
Type: Application
Filed: November 8, 2011
Publication date: May 9, 2013
Inventors: Jie ZHAO, Christopher A. SEGALL
-
Publication number: 20130106994
Abstract: Depth image compression is described, for example, to enable body-part centers of players of a game to be detected in real time from depth images, or for other applications such as augmented reality and human-computer interaction. In an embodiment, depth images which have associated body-part probabilities are compressed using probability mass, which is related to the depth of an image element and a probability of a body part for the image element. In various examples, compression of the depth images using probability mass enables body part center detection, by clustering output elements, to be speeded up. In some examples, the scale of the compression is selected according to a depth of a foreground region, and in some cases different scales are used for different image regions. In some examples, certainties of the body-part centers are calculated using probability masses of clustered image elements.
Type: Application
Filed: November 1, 2011
Publication date: May 2, 2013
Applicant: MICROSOFT CORPORATION
Inventors: Toby Sharp, Jamie Daniel Joseph Shotton
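A hedged sketch of compression by probability mass: the mass of an image element is taken here as its body-part probability weighted by its depth squared, and compression as a block-sum downsampling whose scale could depend on foreground depth; both choices are assumptions of this sketch, not the claimed definitions:

```python
import numpy as np

def probability_mass(depth, body_part_prob):
    # Per-pixel mass relating a body-part probability to the pixel's depth
    # (depth-squared weighting is an assumption of this sketch).
    return body_part_prob * np.square(depth.astype(np.float64))

def compress_probability_mass(depth, body_part_prob, scale=4):
    # Sum the mass over scale x scale blocks so that later clustering for
    # body-part centers runs on far fewer output elements.
    mass = probability_mass(depth, body_part_prob)
    h = (mass.shape[0] // scale) * scale
    w = (mass.shape[1] // scale) * scale
    m = mass[:h, :w]
    return m.reshape(h // scale, scale, w // scale, scale).sum(axis=(1, 3))
```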
-
Publication number: 20130083161
Abstract: A method for encoding video for streaming includes receiving a plurality of sequential image frames generated by a 3D graphics rendering engine. Graphics rendering contexts are obtained, including pixel depth map, rendering camera parameters, and camera motion from the 3D rendering engine. The method next entails selecting key frames among the plurality of sequential image frames, interpolating non-key frames via 3D image warping, and encoding all key frames and warping residues of non-key frames. The system is implementable on a server linked to a mobile user device for receiving the encoded frame data. The mobile user device is configured to decode the encoded frame data and display a corresponding image to a user of the mobile user device.
Type: Application
Filed: September 30, 2011
Publication date: April 4, 2013
Applicants: UNIVERSITY OF ILLINOIS, DEUTSCHE TELEKOM AG
Inventors: Cheng-Hsin Hsu, Shu Shi, Klara Nahrstedt, Roy H. Campbell
-
Publication number: 20130083162
Abstract: A depth fusion method adapted for a 2D-to-3D conversion image processing apparatus is provided. The depth fusion method includes the following steps. Respective motion-based depths of a plurality of blocks in an image frame are obtained. An original image-based depth of each of the blocks is obtained. The original image-based depth of each of the blocks is converted to obtain a converted image-based depth of each of the blocks. The motion-based depth and the converted image-based depth of each of the blocks are fused block by block to obtain a fusion depth of each of the blocks. Furthermore, a depth fusion apparatus is also provided.
Type: Application
Filed: February 13, 2012
Publication date: April 4, 2013
Applicant: NOVATEK MICROELECTRONICS CORP.
Inventors: Chun Wang, Guang-Zhi Liu, Jian-De Jiang
-
Publication number: 20130076859
Abstract: Provided is a motion vector detecting apparatus capable of detecting a motion vector of a pulldown-converted 3D image signal with high precision. A pulldown detecting unit detects whether a 3D image signal is a pulldown-converted image signal. An LR separating unit outputs an LR separation signal separated into left and right image signals in each of frames having the same image content. A frame delay LR separating unit outputs a frame delay LR separation signal separated into left and right image signals in a frame before one repetition period. A motion vector detector detects motion vectors of the left and right image signals. An LR combination unit combines the motion vectors of the left and right image signals to output the combined motion vectors as a motion vector.
Type: Application
Filed: September 14, 2012
Publication date: March 28, 2013
Applicant: JVC KENWOOD CORPORATION
Inventors: Tomoyuki SHISHIDO, Hideki AIBA
-
Publication number: 20130070051
Abstract: A video encoding method includes: receiving a plurality of video data inputs corresponding to a plurality of video display formats, respectively, wherein the video display formats include a first three-dimensional (3D) anaglyph video; generating a combined video data by combining video contents derived from the video data inputs; and generating an encoded video data by encoding the combined video data. A video decoding method includes: receiving an encoded video data having encoded video contents of a plurality of video data inputs combined therein, wherein the video data inputs correspond to a plurality of video display formats, respectively, and the video display formats include a first three-dimensional (3D) anaglyph video; and generating a decoded video data by decoding the encoded video data.
Type: Application
Filed: May 30, 2012
Publication date: March 21, 2013
Inventors: Cheng-Tsai Ho, Ding-Yun Chen, Chi-Cheng Ju
-
Publication number: 20130063559
Abstract: A projector illuminates an object, within the field of view of a camera, with a sequence of code patterns. The camera captures the illuminated object and provides object images to a decoder to convert the code patterns into code. A transition locator locates discontinuities in the code pattern images. A dequantizer reconstructs a range image from those discontinuities and said code.
Type: Application
Filed: September 6, 2012
Publication date: March 14, 2013
Inventors: Sagi Ben Moshe, Ron Kimmel, Michael Bronstein, Alex Bronstein
-
Publication number: 20130057646
Abstract: In one example, a video coder is configured to code one or more blocks of video data representative of texture information of at least a portion of a frame of video data, process a texture slice for a texture view component of a current view, the texture slice comprising the coded one or more blocks and a texture slice header comprising a set of syntax elements representative of characteristics of the texture slice, code depth information representative of depth values for at least the portion of the frame, and process a depth slice for a depth view component corresponding to the texture view component of the view, the depth slice comprising the coded depth information and a depth slice header comprising a set of syntax elements representative of characteristics of the depth slice, wherein processing the texture slice or the depth slice comprises predicting at least one syntax element.
Type: Application
Filed: July 19, 2012
Publication date: March 7, 2013
Applicant: QUALCOMM INCORPORATED
Inventors: Ying Chen, Ye-Kui Wang, Marta Karczewicz
-
Publication number: 20130050417
Abstract: According to one embodiment, a video processing apparatus includes a viewer position detector, a viewing area information calculator, a control information keeping module, and a viewing area controller. The viewer position detector is configured to detect a position of a viewer using an image taken by a camera. The viewing area information calculator is configured to calculate a first control parameter so as to set a viewing area, in which a plurality of parallax images displayed on a display are viewed as a stereoscopic image, at an area depending on the position of the viewer. The control information keeping module is configured to keep one or a plurality of second control parameters for setting the viewing area at a predetermined area. The viewing area controller is configured to set the viewing area according to the first control parameter or the second control parameter.
Type: Application
Filed: February 23, 2012
Publication date: February 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Tan WANG, Nobuyuki IKEDA, Hiroshi FUJIMOTO, Takashi KUMAGAI, Toshihiro MOROHOSHI
-
Publication number: 20130050416
Abstract: According to one embodiment, a video processing apparatus includes a receiver that decodes an encoded input video signal and generates a baseband video signal, a display manner selector that selects one display manner from plural display manners including a stereo imaging manner and an integral imaging manner, and a parallax image converter that converts, when the stereo imaging manner is selected by the display manner selector, the baseband video signal into two parallax image signals for the left eye and the right eye and converts, when the integral imaging manner is selected by the display manner selector, the baseband video signal into three or more parallax image signals.
Type: Application
Filed: February 22, 2012
Publication date: February 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Masao Iwasaki, Kiyoshi Hoshino, Shinzo Matsubara, Yutaka Irie, Toshihiro Morohoshi
-
Publication number: 20130050418
Abstract: A viewing area adjusting device has an image pickup unit capable of shooting a forward direction of a stereoscopic video display device capable of displaying a stereoscopic video, a viewer information detector configured to detect the number of viewers and positions of the viewers by a video shot by the image pickup unit, a viewing area adjustment policy determining unit configured to select any one of a plurality of viewing area adjustment policies based on the number of viewers detected by the viewer information detector, a viewing area information computing unit configured to compute an adjustment amount for the viewing area, and a viewing area adjusting unit configured to adjust the viewing area.
Type: Application
Filed: February 24, 2012
Publication date: February 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Tatsuhiro Nishioka, Nobuyuki Ikeda, Hiroshi Fujimoto, Toshihiro Morohoshi
-
Publication number: 20130050419
Abstract: According to one embodiment, a video processing apparatus includes a viewer position detector, a viewing area information calculator, and a viewing area controller. The viewer position detector is configured to detect a position of a viewer using an image taken by a camera. The viewing area information calculator is configured to calculate a control parameter so as to set a viewing area, in which a plurality of parallax images displayed on a display are viewed as a stereoscopic image, at an area depending on the position of the viewer. The viewing area controller is configured to set the viewing area according to the control parameter in synchronization with a start of displaying a stereoscopic image, and then, keep the set viewing area until receiving an indication for adjusting the viewing area.
Type: Application
Filed: February 27, 2012
Publication date: February 28, 2013
Applicant: KABUSHIKI KAISHA TOSHIBA
Inventors: Atsushi Nakamura, Shinzo Matsubara, Yutaka Irie
-
Publication number: 20130050424
Abstract: A multiview video decoding apparatus receives and decodes an encoded stream obtained as a result of encoding residual information, 2D images for N views, and depth images for N views, the residual information being the error between synthetic-view images generated using the 2D images for N views and the depth images for N views, and 2D images for (M−N) views at the view synthesis positions of the synthetic-view images. A view synthesizing apparatus generates the synthetic-view images by using the 2D images and depth images for N views decoded by the multiview video decoding apparatus. A residual information compensating apparatus adds the residual information into the generated synthetic-view images. The apparatus may be applied to a system that conducts view synthesis, for example.
Type: Application
Filed: April 25, 2011
Publication date: February 28, 2013
Applicant: SONY CORPORATION
Inventors: Yoshitomo Takahashi, Jun Yonemitsu
-
Publication number: 20130044183
Abstract: Disclosed are a distributed video encoding/decoding method and a distributed video encoding/decoding apparatus, which can improve loss resilience and the quality of service. The distributed video encoding method first involves checking the state of a channel, determining a channel coding rate and the size of video data to be transmitted based on the checked state of the channel, determining the number of motion prediction performance steps based on the determined size of video data to be transmitted, encoding the video data to be transmitted by performing motion predictions according to the determined number of motion prediction performance steps, and channel-coding the encoded video data according to the determined channel coding rate. Accordingly, it is possible to improve loss resilience, even without additionally occupying network resources, thereby being capable of reducing the probability of decoding failure.
Type: Application
Filed: January 11, 2011
Publication date: February 21, 2013
Inventors: Byeungwoo Jeon, Dohyeong Kim, Doug-Young Suh, Chulkeun Kim, Donggyu Sim, Kyungyeon Min, Seanae Park
-
Publication number: 20130038683
Abstract: Service compatibility with respect to a 2D TV is realized when transmitting frame compatible type stereoscopic image data. When the image data is frame compatible type stereoscopic image data, image region information for cropping image data for 2D display from the image data after decoding is inserted into a compressed video stream. In the 2D television receiving device (2D TV) of the receiving side, it is possible to crop image data for 2D display from the image data after decoding and to obtain 2D image data based on the image region information.
Type: Application
Filed: March 2, 2012
Publication date: February 14, 2013
Applicant: SONY CORPORATION
Inventor: Ikuo Tsukagoshi
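Cropping the 2D picture on the receiving side might look like the sketch below; the (x, y, width, height) tuple is only an assumed representation of the signalled image region information, not the actual stream syntax:

```python
def crop_2d_view(decoded_frame, region):
    # Extract the picture a legacy 2D TV should show from a decoded
    # frame-compatible (e.g. side-by-side) frame, using region information
    # carried in the compressed video stream.
    x, y, width, height = region
    return decoded_frame[y:y + height, x:x + width]

# For a 1920x1080 side-by-side frame, the left-eye half might be signalled
# as the 2D region: view_2d = crop_2d_view(frame, (0, 0, 960, 1080))
```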
-
Publication number: 20130021435
Abstract: In performing stereoscopic view, a shift information memory stores, as a number of pixel lengths, an offset indicating how far in a right direction or a left direction to move coordinates of pixels to realize stereoscopic view. When realizing stereoscopic view, a plane shift engine moves the coordinates of image data in a graphics plane in the right direction or the left direction by the number of pixel lengths indicated by the offset. When a scale of video data targeted for stereoscopic view is changed by a basic graphics plane, a shift distance of pixel coordinates by the plane shift engine is based on a number of pixel lengths obtained by multiplying the offset by a changed scaling factor in the horizontal direction.
Type: Application
Filed: September 24, 2012
Publication date: January 24, 2013
Applicant: PANASONIC CORPORATION
Inventor: PANASONIC CORPORATION
-
Publication number: 20130010055
Abstract: In response to a stereoscopic image of first and second views, a maximum positive disparity is computed between the first and second views, and a minimum negative disparity is computed between the first and second views. Within a bit stream, at least the stereoscopic image, the maximum positive disparity, and the minimum negative disparity are encoded.
Type: Application
Filed: June 25, 2012
Publication date: January 10, 2013
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventors: Veeramanikandan Raju, Wei Hong, Minhua Zhou
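Given a per-pixel disparity map between the two views, the two signalled values are simply the extremes of that map; a minimal sketch (the metadata field names in the comment are assumptions):

```python
import numpy as np

def disparity_extremes(disparity_map):
    # Maximum positive and minimum negative disparity between the first and
    # second views of a stereoscopic image.
    d = np.asarray(disparity_map)
    return int(d.max()), int(d.min())

# The two values can then be encoded into the bit stream alongside the
# stereoscopic image, e.g. as metadata fields such as
# {"max_positive_disparity": d_max, "min_negative_disparity": d_min}.
```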
-
Publication number: 20130010069
Abstract: From a bit stream, at least the following are decoded: a stereoscopic image of first and second views; a maximum positive disparity between the first and second views; and a minimum negative disparity between the first and second views. In response to the maximum positive disparity violating a limit on positive disparity, a convergence plane of the stereoscopic image is adjusted to comply with the limit on positive disparity. In response to the minimum negative disparity violating a limit on negative disparity, the convergence plane is adjusted to comply with the limit on negative disparity.
Type: Application
Filed: June 25, 2012
Publication date: January 10, 2013
Applicant: TEXAS INSTRUMENTS INCORPORATED
Inventors: Veeramanikandan Raju, Wei Hong, Minhua Zhou
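A sketch of the convergence-plane adjustment, assuming it is realized as a uniform horizontal shift between the two views (which changes every disparity by the same amount) and that the disparity range itself fits within the limits:

```python
def convergence_shift(max_positive, min_negative, limit_positive, limit_negative):
    # Horizontal shift (in pixels) to apply between the views so that the
    # decoded disparities comply with the signalled limits.
    if max_positive > limit_positive:
        # Scene extends too far behind the screen: pull it forward.
        return limit_positive - max_positive
    if min_negative < limit_negative:
        # Scene pops out too far in front of the screen: push it back.
        return limit_negative - min_negative
    return 0  # both limits already satisfied
```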
-
Publication number: 20130010056
Abstract: The reproduction apparatus includes a remote control receiver that receives an instruction for adjusting an offset amount between the left-eye image data and the right-eye image data, a video signal processor that adjusts an offset amount between the left-eye image data and the right-eye image data so as to be an offset amount based on the instruction, and a CPU that determines whether an absolute value of the offset amount adjusted by the video signal processor is not more than a limit value. When the CPU determines that the adjusted absolute value of the offset amount is more than the limit value, the video signal processor adjusts the offset amount between the left-eye image data and the right-eye image data so that the absolute value of the offset amount between the left-eye image data and the right-eye image data is not more than the limit value.
Type: Application
Filed: March 17, 2011
Publication date: January 10, 2013
Inventor: Kenji Morimoto
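The limit check amounts to clamping the requested offset; a minimal sketch:

```python
def clamp_offset(requested_offset, limit):
    # Keep the left-eye/right-eye offset within +/- limit, mirroring the
    # CPU check and subsequent readjustment described above.
    if abs(requested_offset) <= limit:
        return requested_offset
    return limit if requested_offset > 0 else -limit
```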
-
Publication number: 20130002817
Abstract: An apparatus and method for processing an image are provided. The image processing apparatus, which uses a two-dimensional (2D) video signal and depth information corresponding to the 2D video signal to generate a three-dimensional (3D) video signal, includes: an image receiver which receives a 2D video signal containing a background and an object; and an image processor which adjusts a transition area corresponding to a boundary between the object and the background in the depth information, and renders a 3D image from the 2D video signal through the adjusted transition area.
Type: Application
Filed: June 21, 2012
Publication date: January 3, 2013
Applicant: SAMSUNG ELECTRONICS CO., LTD.
Inventors: Won-seok AHN, Seung-hoon HAN, Oh-jae KWON
-
Publication number: 20130002812
Abstract: There is an encoding of three dimensional (3D) information. The encoding may include receiving a signal including frames in a 3D video sequence, receiving caption information to appear in a caption window associated with the frames, and/or receiving disparity information associated with the frames. The encoding may also include determining frame disparity maps based on the disparity information associated with the frames. The frame disparity maps may be determined by dividing a part of a frame into a plurality of grid cells. The grid cells may define a disparity measure associated with locations in a grid. The grid cells may form a caption window disparity map dividable into equivalent size portions including an equivalent amount of grid cells. The encoding may also include encoding the frames, the caption information and the frame disparity maps. There is also a decoding of the 3D information.
Type: Application
Filed: June 29, 2011
Publication date: January 3, 2013
Applicant: GENERAL INSTRUMENT CORPORATION
Inventors: Dinkar N. Bhat, Yeqing Wang
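A sketch of the per-frame grid of disparity measures and of deriving one value for the caption window from the cells it covers; the per-cell minimum and the negative-disparity-is-in-front sign convention are assumptions of this sketch:

```python
import numpy as np

def frame_disparity_grid(disparity_map, cells_y=4, cells_x=8):
    # Divide (part of) a frame into a grid and keep one disparity measure
    # per cell; here the cell minimum, i.e. the element nearest the viewer
    # under a negative-is-in-front convention.
    d = np.asarray(disparity_map, dtype=np.float64)
    h, w = d.shape
    grid = np.empty((cells_y, cells_x))
    for j in range(cells_y):
        for i in range(cells_x):
            cell = d[j * h // cells_y:(j + 1) * h // cells_y,
                     i * w // cells_x:(i + 1) * w // cells_x]
            grid[j, i] = cell.min()
    return grid

def caption_window_disparity(grid, row_range, col_range):
    # One disparity for the caption window: the extreme measure over the
    # grid cells the window covers, so the caption can be rendered in front
    # of everything beneath it.
    cells = grid[row_range[0]:row_range[1], col_range[0]:col_range[1]]
    return float(cells.min())
```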
-
Publication number: 20130002821
Abstract: A video processing device is a device capable of outputting stereoscopic video information containing a first eye image and a second eye image and enabling stereoscopic viewing to a video display device. The video processing device includes an obtaining unit that obtains the stereoscopic video information which is obtained by coding the first and second eye images in a coding method using different bit rates for the first and second eye images respectively, and a transmitting unit that transmits identification information indicating one of the first and second eye images that is coded with the higher bit rate, to the video display device, with the identification information being associated with the decoded stereoscopic video information.
Type: Application
Filed: March 24, 2011
Publication date: January 3, 2013
Applicant: PANASONIC CORPORATION
Inventor: Tadayoshi Okuda
-
Publication number: 20130002820
Abstract: Transmitting and receiving 3D video content via an Internet protocol (IP) stream are described. 3D video content may be transmitted in a single IP stream and adjusted by a device associated with a display for rendering the 3D video content in a desired manner. 3D content also may be transmitted in a plurality of IP streams, and a device associated with a display for rendering the 3D content may determine which of the plurality of IP streams to decode based upon a mode of operation of the device. A device receiving 3D video content may be configured to adjust the appearance of the content displayed on a display associated with the device. Such adjusting of the appearance may include moving the position of the rendered 3D video content within the display, and positioning in-band and/or out-of-band content in front of, behind, or within the rendered 3D video content.
Type: Application
Filed: September 13, 2012
Publication date: January 3, 2013
Applicant: Comcast Cable Communications, LLC
Inventor: Mark David Francisco
-
Publication number: 20120314023
Abstract: A method, apparatus and system are provided for the visual inspection of a three-dimensional video stream as it is being re-encoded into a second video format. A portion of a frame of a decoded three-dimensional video stream and a corresponding portion of a frame of the three-dimensional video stream having been re-encoded are arranged into a combined video frame such that the video frame portions appear together in the combined video frame. A boundary between the video frame portions in the combined video frame is manipulated such that a change of disparity on the boundary between the video frame portions, and any overlap between the combined video frame portions, are not visible.
Type: Application
Filed: October 8, 2010
Publication date: December 13, 2012
Inventors: Jesus Barcons-Palau, Joan Llach
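A sketch of composing the combined inspection frame with a movable boundary; the seam handling that keeps disparity changes and overlap invisible, which the abstract also addresses, is not modelled here:

```python
import numpy as np

def combined_inspection_frame(decoded, reencoded, boundary_x):
    # Show the left part of the decoded frame and the right part of the
    # re-encoded frame side by side in one frame, split at boundary_x, so
    # the two versions of the 3D stream can be compared visually; moving
    # boundary_x sweeps the comparison across the picture.
    assert decoded.shape == reencoded.shape
    out = decoded.copy()
    out[:, boundary_x:] = reencoded[:, boundary_x:]
    return out
```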
-
Publication number: 20120314027
Abstract: Multiview videos are acquired by overlapping cameras. Side information is used to synthesize multiview videos. A reference picture list is maintained for current frames of the multiview videos; the reference picture list indexes temporal reference pictures and spatial reference pictures of the acquired multiview videos and the synthesized reference pictures of the synthesized multiview video. Each current frame of the multiview videos is predicted according to reference pictures indexed by the associated reference picture list with a skip mode and a direct mode, whereby the side information is inferred from the synthesized reference picture. In addition, the skip and merge modes for single view video coding are modified to support multiview video coding by generating a motion vector prediction list by also considering neighboring blocks that are associated with synthesized reference pictures.
Type: Application
Filed: July 9, 2012
Publication date: December 13, 2012
Inventors: Dong Tian, Feng Zou, Anthony Vetro
-
Publication number: 20120307006
Abstract: Provided is a receiving apparatus including a first decoder which decodes a first image signal, a second decoder which decodes a second image signal corresponding to an image of at least a part of a region of a first image frame of the first image signal, a CPU which acquires object indication information including spatial position information of the region with respect to the first image frame, and an object reconstruction part which has the image of the region overwrite the first image frame to generate a second image frame based on the position information.
Type: Application
Filed: January 14, 2011
Publication date: December 6, 2012
Applicant: SONY CORPORATION
Inventor: Ikuo Tsukagoshi