Abstract: An image processing device for encoding a video signal, including circuitry configured to perform an arithmetic encoding process on a top block of a current block line of the video signal using a context that was used in the arithmetic encoding process for a previous block of a previous block line.
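The context-carryover idea in the abstract above can be sketched as follows. This is a toy illustration, not the patented circuitry: the `BlockLineCoder` class, its probability-count context layout, and the snapshot/restore names are all assumptions made for the sketch, standing in for a real arithmetic coder's adaptive state.

```python
# Toy sketch (not the patented method): carrying arithmetic-coder context
# state from a block of the previous block line into the top block of the
# current block line, in the spirit of wavefront-style entropy coding.

class BlockLineCoder:
    def __init__(self, num_contexts=4):
        # Each context is a toy probability state: counts of 0s and 1s.
        self.contexts = [[1, 1] for _ in range(num_contexts)]

    def snapshot(self):
        # Save the adapted context states after coding a block.
        return [ctx[:] for ctx in self.contexts]

    def restore(self, saved):
        # Initialize the top block of a new block line from the saved
        # states of a block in the previous line.
        self.contexts = [ctx[:] for ctx in saved]

    def code_bit(self, ctx_idx, bit):
        # Adapt the context toward the observed bit (a stand-in for the
        # real arithmetic-coding state transition).
        self.contexts[ctx_idx][bit] += 1


coder = BlockLineCoder()
for bit in (1, 1, 0, 1):
    coder.code_bit(0, bit)
saved = coder.snapshot()          # state after the reference block

next_line = BlockLineCoder()
next_line.restore(saved)          # top block of next line starts adapted
print(next_line.contexts[0])      # [2, 4]
```

The benefit of this pattern is that the top block of each line starts from statistics already adapted to local content, rather than from a cold reset.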
Abstract: A moving picture coding apparatus includes a motion compensation coding unit for deciding a coding mode for coding a current block to be coded and for generating predictive image data based on the coding mode, and includes a direct mode enable/disable judgment unit for judging whether or not scaling processing can be performed when the coding mode decided by the motion compensation coding unit is a temporal direct mode. When it is judged that the scaling processing cannot be performed, the motion compensation coding unit performs motion compensation either by using another coding mode or without the scaling processing.
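The scaling the abstract above refers to can be illustrated with a small sketch of temporal-direct motion-vector derivation, including the fallback when scaling is impossible. The variable names (`tb`, `td`) follow common H.264-style convention; the function and its integer arithmetic are an assumption for illustration, not the patent's exact procedure.

```python
# Illustrative temporal-direct MV scaling with an enable/disable check:
# when the temporal distance td is zero (e.g. the co-located reference
# is unusable), scaling cannot be performed and the caller must fall
# back to another coding mode or to unscaled motion.

def temporal_direct_mv(mv_col, tb, td):
    """Scale the co-located motion vector by the distance ratio tb/td.

    mv_col: (x, y) motion vector of the co-located block
    tb: distance from the current picture to its forward reference
    td: distance between the two pictures spanned by the co-located MV
    Returns (mv_forward, mv_backward), or None when scaling is disabled.
    """
    if td == 0:
        return None  # judgment unit says: scaling cannot be performed
    mv_fwd = (mv_col[0] * tb // td, mv_col[1] * tb // td)
    mv_bwd = (mv_fwd[0] - mv_col[0], mv_fwd[1] - mv_col[1])
    return mv_fwd, mv_bwd

print(temporal_direct_mv((8, -4), tb=1, td=2))  # ((4, -2), (-4, 2))
print(temporal_direct_mv((8, -4), tb=1, td=0))  # None
```

The `None` branch corresponds to the judgment unit in the abstract: the encoder then performs motion compensation by another mode or without scaling.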
Abstract: A method for encoding pictures within a group of pictures using prediction, where a first reference picture from a group of pictures and a second reference picture from the subsequent group of pictures are used in predicting pictures in the group of pictures associated with the first reference picture. A plurality of anchor pictures in the group of pictures associated with the first reference picture may be predicted using both the first and second reference pictures to ensure a smooth transition between different groups of pictures within a video sequence.
Abstract: Methods and apparatuses for encoding and decoding video are provided. The encoding method includes: performing motion estimation on a current block in a first sub-pixel unit and obtaining a motion vector in the first sub-pixel unit with respect to the current block; interpolating a reference picture indicated by the motion vector according to a second sub-pixel unit smaller than the first sub-pixel unit; using the interpolated reference picture to select a second sub-pixel in the second sub-pixel unit adjacent to a first sub-pixel in the first sub-pixel unit of the reference picture; selecting the corresponding region having the smaller error with respect to the current block, from among a first corresponding region of the reference picture obtained with respect to the first sub-pixel and a second corresponding region of the reference picture obtained with respect to the selected second sub-pixel; and encoding information of the selected corresponding region.
Type:
Grant
Filed:
April 5, 2011
Date of Patent:
April 29, 2014
Assignee:
Samsung Electronics Co., Ltd.
Inventors:
Elena Alshina, Alexander Alshin, Min-su Cheon, Woo-jin Han, Tammy Lee
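The coarse-then-fine sub-pixel idea in the abstract above can be sketched on 1-D signals. This is a loose illustration under stated assumptions: `sad`, `sample`, and `refine` are invented names, linear interpolation stands in for the codec's interpolation filters, and half-pel/quarter-pel steps stand in for the first and second sub-pixel units.

```python
# Hedged sketch of two-stage sub-pixel refinement: take a coarse
# (half-pel) position, also evaluate the adjacent finer (quarter-pel)
# positions in the interpolated reference, and keep whichever matches
# the current block with smaller error.

def sad(a, b):
    # Sum of absolute differences between two equal-length blocks.
    return sum(abs(x - y) for x, y in zip(a, b))

def sample(ref, pos):
    # Linearly interpolate the reference at fractional position pos.
    i = int(pos)
    frac = pos - i
    if frac == 0:
        return ref[i]
    return ref[i] * (1 - frac) + ref[i + 1] * frac

def block_at(ref, start, length):
    return [sample(ref, start + k) for k in range(length)]

def refine(ref, cur, coarse_pos, fine_step=0.25):
    # Compare the coarse position against its adjacent finer positions.
    best_pos = coarse_pos
    best_err = sad(block_at(ref, coarse_pos, len(cur)), cur)
    for cand in (coarse_pos - fine_step, coarse_pos + fine_step):
        err = sad(block_at(ref, cand, len(cur)), cur)
        if err < best_err:
            best_pos, best_err = cand, err
    return best_pos, best_err

ref = [0, 4, 8, 12, 16, 20]
cur = [5, 9, 13]                  # true match sits at offset 1.25 in ref
print(refine(ref, cur, coarse_pos=1.5))  # (1.25, 0.0)
```

Only the two finer positions adjacent to the coarse result are tested, which is the cost-saving point of the scheme: the finer unit is searched locally, not exhaustively.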
Abstract: In one embodiment of the present invention, an image display device capable of providing adequate quality of a moving image regardless of an image signal level of an input image signal is disclosed.
Abstract: Complex image data is transformed to a holographic representation of the image. A subset of the holographic representation is modeled, and the model parameters constitute a compressed image representation. A two-dimensional Fourier transform can be applied to obtain the holographic representation. Modeling includes applying the analysis portion of an adaptive analysis/synthesis linear-prediction methodology to a subset of the holographic representation to obtain an autoregressive model. Prior to modeling, a one-dimensional Fourier transform can be performed on the holographic representation, in which case the linear prediction is one-dimensional. The model parameters are preferably quantized. Embodiments include determining the error between the model and the model's input data; the compressed image representation then includes this error, which also can be quantized. The subset of the holographic representation can be less than the whole representation, for example a plurality of complete rows, preferably substantially symmetric about 0 Hz.
Type:
Grant
Filed:
December 20, 2007
Date of Patent:
April 16, 2013
Assignee:
Science Applications International Corporation
Inventors:
Hanna Elizabeth Witzgall, Timothy F. Settle
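The pipeline in the abstract above can be sketched end to end under loose assumptions: a 2-D DFT stands in for the transform to the "holographic" representation, and a first-order least-squares predictor stands in for the adaptive linear-prediction analysis. All function names are invented for the sketch; a real implementation would use higher-order AR models and quantize both the coefficients and the residual.

```python
# Rough sketch: transform image -> frequency-domain ("holographic")
# representation, then model a selected row by linear prediction; the
# predictor coefficient plus the residual error form the compressed
# representation. Pure-Python DFT keeps the sketch self-contained.

import cmath

def dft(row):
    n = len(row)
    return [sum(row[k] * cmath.exp(-2j * cmath.pi * i * k / n)
                for k in range(n)) for i in range(n)]

def dft2(img):
    # 2-D DFT as row transforms followed by column transforms.
    rows = [dft(r) for r in img]
    cols = [dft([rows[r][c] for r in range(len(rows))])
            for c in range(len(rows[0]))]
    return [[cols[c][r] for c in range(len(cols))] for r in range(len(rows))]

def lp_model(row):
    # Fit x[k] ~ a * x[k-1] by least squares; return (a, residuals).
    num = sum(row[k] * row[k - 1].conjugate() for k in range(1, len(row)))
    den = sum(abs(row[k - 1]) ** 2 for k in range(1, len(row)))
    a = num / den if den else 0
    err = [row[k] - a * row[k - 1] for k in range(1, len(row))]
    return a, err

img = [[1, 2], [3, 4]]
holo = dft2(img)                  # frequency-domain representation
a, err = lp_model(holo[0])        # model one selected row
print(round(abs(a), 3))           # 0.2
```

Keeping only `a` and a quantized `err` per selected row is the compression step; rows are a natural subset because the 1-D prediction then runs along complete, contiguous spectra.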
Abstract: A method of filtering an image includes receiving target image data and applying a multiple stage filter to the target image data. Each stage of the filtering includes generating a motion vector sampling pattern, using the target image data and the motion vector sampling pattern to generate a temporal prediction of the target image data, and using the temporal prediction of the target image data to generate a spatial-temporal transformation of the target image data.
Type:
Grant
Filed:
September 16, 2005
Date of Patent:
February 22, 2011
Assignees:
Sony Corporation, Sony Electronics Inc.
Inventors:
Marco Paniconi, James J. Carrig, Zhourong Miao
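The staged structure in the abstract above can be sketched on 1-D frames. This is a schematic toy, not the patented filter: `predict`, `filter_stage`, and `multi_stage` are invented names, integer motion vectors stand in for the sampled motion-vector pattern, and a weighted blend stands in for the spatial-temporal transformation.

```python
# Schematic multi-stage temporal filter: each stage takes a motion-vector
# sampling pattern, forms a temporal prediction of the target from a
# reference frame, and blends target and prediction.

def predict(ref, mvs):
    # Temporal prediction: fetch each target sample from ref shifted by
    # its motion vector, clamping at the frame borders.
    n = len(ref)
    return [ref[min(max(i + mvs[i], 0), n - 1)] for i in range(n)]

def filter_stage(target, ref, mvs, weight=0.5):
    pred = predict(ref, mvs)
    # Spatial-temporal blend of the target and its temporal prediction.
    return [weight * t + (1 - weight) * p for t, p in zip(target, pred)]

def multi_stage(target, ref, patterns):
    out = target
    for mvs in patterns:          # one motion sampling pattern per stage
        out = filter_stage(out, ref, mvs)
    return out

ref = [10, 10, 10, 10]
target = [10, 14, 10, 10]         # noisy sample at index 1
patterns = [[0, 0, 0, 0], [0, -1, 0, 0]]
print(multi_stage(target, ref, patterns))  # [10.0, 11.0, 10.0, 10.0]
```

Each pass pulls the noisy sample closer to its motion-compensated prediction, which is why running several cheap stages can outperform one aggressive one.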
Abstract: A method for padding interlaced texture information on a reference VOP to perform motion estimation detects whether each texture macroblock of the reference VOP is a boundary block. After the undefined texture pixels of a boundary block are extrapolated from its defined texture pixels by applying a horizontal repetitive padding, a transparent row padding, and a transparent field padding in sequence, an undefined adjacent block is expanded based on the extrapolated boundary block.
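The first padding step named in the abstract above, horizontal repetitive padding, can be sketched for one row of a boundary block. This follows the common MPEG-4-style rule and is not the patent's exact procedure; `pad_row` is an invented name and `None` marks pixels outside the object shape.

```python
# Illustrative horizontal repetitive padding: each undefined texture
# pixel takes the value of the nearest defined pixel in its row, or the
# average of the two nearest defined pixels when it lies between them.

def pad_row(row):
    n = len(row)
    out = row[:]
    for i in range(n):
        if out[i] is not None:
            continue
        # Nearest originally-defined pixels to the left and right.
        left = next((row[j] for j in range(i - 1, -1, -1)
                     if row[j] is not None), None)
        right = next((row[j] for j in range(i + 1, n)
                      if row[j] is not None), None)
        if left is not None and right is not None:
            out[i] = (left + right) // 2
        else:
            out[i] = left if left is not None else right
    return out

print(pad_row([None, 8, None, None, 12, None]))  # [8, 8, 10, 10, 12, 12]
```

Note that the search scans the original row, not the partially padded output, so padded values never propagate into later averages; the transparent row and field paddings then fill rows that have no defined pixel at all.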