Naofumi Wada has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
Abstract: According to one embodiment, a video encoding apparatus for subjecting a video image to motion-compensated prediction coding includes an acquisition module to acquire, from encoded blocks adjacent to a to-be-encoded block, the available blocks having motion vectors and the number of those available blocks; a selection module to select one selection block from the available blocks; a selection information encoder to encode selection information specifying the selection block, using a code table corresponding to the number of available blocks; and an image encoder to subject the to-be-encoded block to motion-compensated prediction coding using the motion vector of the selection block.
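The idea above — gathering the neighbor blocks that actually carry motion vectors and signaling a choice among them with a code sized to the candidate count — can be sketched as follows. The truncated-unary code table and the dictionary-based block representation are illustrative assumptions, not the patent's exact tables.

```python
# Sketch: select a motion-vector candidate among available neighbor blocks
# and encode the selection index with a code table whose size depends on
# the number of available blocks (here a truncated-unary code, assumed).

def available_blocks(neighbors):
    """Keep only encoded neighbor blocks that carry a motion vector."""
    return [b for b in neighbors if b.get("mv") is not None]

def encode_selection(index, num_available):
    """Truncated-unary code: with n candidates, index i < n-1 is i ones
    followed by a zero; the last index drops the trailing zero. With one
    or zero candidates nothing needs to be signaled."""
    if num_available <= 1:
        return ""
    if index == num_available - 1:
        return "1" * index
    return "1" * index + "0"

neighbors = [{"mv": (1, 0)}, {"mv": None}, {"mv": (0, 2)}, {"mv": (3, 3)}]
avail = available_blocks(neighbors)
print(len(avail))                       # 3 available blocks
print(encode_selection(0, len(avail)))  # "0"
print(encode_selection(2, len(avail)))  # "11"
```

Because the code table shrinks with the number of available blocks, fewer bits are spent when fewer candidates exist — the point of signaling the count-dependent table.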
Abstract: According to one embodiment, a moving picture encoding method includes deriving a target filter to be used for a decoded image of a target image to be encoded. The method includes setting a correspondence relationship between a target filter coefficient in the target filter and a reference filter coefficient in a reference filter in accordance with the tap length of the target filter and the tap length of the reference filter. The method includes deriving a coefficient difference between the target filter coefficient and the reference filter coefficient in accordance with the correspondence relationship. The method includes encoding target filter information including the tap length of the target filter and the coefficient difference.
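A minimal sketch of the coefficient-difference step above, assuming one plausible correspondence rule: the shorter filter is center-aligned against the longer one, and reference positions with no counterpart default to zero. The alignment rule and the sample coefficients are assumptions for illustration.

```python
# Sketch: map reference filter coefficients onto target filter positions
# (center-aligned, an assumed rule) and derive the coefficient differences
# that would be encoded instead of the raw target coefficients.

def coefficient_differences(target, reference):
    """Return target[i] - reference[j] with center-aligned indices;
    positions outside the reference filter contribute zero."""
    offset = (len(target) - len(reference)) // 2
    diffs = []
    for i, t in enumerate(target):
        j = i - offset
        r = reference[j] if 0 <= j < len(reference) else 0
        diffs.append(t - r)
    return diffs

target = [1, -5, 20, -5, 1]    # 5-tap target filter (illustrative)
reference = [-4, 19, -4]       # 3-tap reference filter (illustrative)
print(coefficient_differences(target, reference))  # [1, -1, 1, -1, 1]
```

When target and reference filters are similar, the differences are small, so encoding them (plus the tap length) costs fewer bits than encoding the coefficients directly.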
Abstract: In one embodiment, a moving-picture decoding apparatus is disclosed. A decoding unit decodes input coded data to generate a quantized transform coefficient and filter information. An inverse-transform/inverse-quantization unit executes inverse-quantization and inverse-transform on the quantized transform coefficient to generate a prediction error picture. A decoded-picture generation unit generates a decoded picture using the prediction error picture and a predicted picture. A luminance filter processing unit applies a luminance filter to the luminance signal of the decoded picture based on luminance filter information to generate the luminance signal of a restored picture. A chrominance filter processing unit applies a chrominance filter to the chrominance signal of the decoded picture based on chrominance filter information to generate the chrominance signal of the restored picture.
Abstract: According to an embodiment, a moving image encoding method includes generating a predicted image of an original image based on a reference image, performing transform and quantization on a prediction error between the original image and the predicted image to obtain a quantized transform coefficient, performing inverse quantization and inverse transform on the quantized transform coefficient to obtain a decoded prediction error, adding the predicted image and the decoded prediction error to generate a local decoded image, setting filter data containing time-space filter coefficients for reconstructing the original image based on the local decoded image and the reference image, performing a time-space filtering process on the local decoded image in accordance with the filter data to generate a reconstructed image, storing the reconstructed image as the reference image, and encoding the filter data and the quantized transform coefficient.
Abstract: According to one embodiment, a moving image encoding method is disclosed. The method can generate a prediction error image based on a difference between an input moving image and a predicted image. The method can execute transform and quantization on the prediction error image to generate quantized transformation coefficients. The method can generate edge information which indicates an attribute of an edge in a local decoded image corresponding to an encoded image. The method can generate, based on the edge information, control information associated with application of a filter to a decoded image at a decoding side. The method can set filter coefficients for the filter based on the control information. In addition, the method can encode the quantized transformation coefficients and filter coefficient information indicating the filter coefficients to output encoded data.
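The edge-driven filter control described above can be sketched roughly as follows: derive an edge map from the local decoded image, then let that map gate where a smoothing filter is applied. The gradient threshold and the 3-tap smoother are illustrative assumptions, not the patent's specific edge attribute or filter.

```python
# Sketch: edge information from a decoded row controls where a smoothing
# filter is applied, so sharp detail is preserved and flat areas are
# denoised (threshold and filter shape are assumptions).

def edge_map(row, threshold):
    """1 where the horizontal gradient exceeds the threshold."""
    return [1 if abs(row[i + 1] - row[i]) > threshold else 0
            for i in range(len(row) - 1)]

def filter_where_flat(row, edges):
    """3-tap smoothing applied only away from detected edges."""
    out = list(row)
    for i in range(1, len(row) - 1):
        if not (edges[i - 1] or edges[i]):
            out[i] = (row[i - 1] + 2 * row[i] + row[i + 1]) // 4
    return out

row = [100, 101, 103, 200, 201, 202]
edges = edge_map(row, threshold=20)
print(edges)                       # [0, 0, 1, 0, 0] -- one edge detected
print(filter_where_flat(row, edges))
```

In the abstract's scheme the control information derived this way, plus the chosen filter coefficients, is what gets encoded so the decoder can reproduce the same gating.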
Abstract: A video encoding method, wherein an encoded image is used as a reference image for prediction of an image to be encoded next, includes generating a restored image by applying a filter to a local decoded image of an encoded image, setting filter coefficient information of the filter, encoding the filter coefficient information, encoding specific information indicating whether the local decoded image or the restored image is used as the reference image, and storing either the local decoded image or the restored image as the reference image in a memory based on the specific information.
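A small sketch of the reference-selection step above: the encoder picks either the local decoded image or the filtered (restored) image to store as the reference and signals that choice with a flag. The selection criterion used here — smaller mean squared error against the original — is an illustrative assumption; the abstract does not specify how the choice is made.

```python
# Sketch: choose which image to store as the reference and emit a flag
# (the "specific information") so the decoder makes the same choice.
# The MSE-based criterion is an assumption for illustration.

def mse(a, b):
    """Mean squared error between two equal-length pixel lists."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def choose_reference(original, local_decoded, restored):
    """Return (flag, image): flag 1 means the restored image is stored."""
    use_restored = mse(original, restored) < mse(original, local_decoded)
    return (1, restored) if use_restored else (0, local_decoded)

orig     = [10, 20, 30, 40]
decoded  = [12, 18, 33, 37]
restored = [11, 20, 31, 39]
flag, ref = choose_reference(orig, decoded, restored)
print(flag)  # 1: the restored image is the better reference here
```

Encoding the flag keeps encoder and decoder reference buffers in sync even when the filter occasionally hurts rather than helps.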
Abstract: Provided are an image processor to produce a local decoded image corresponding to an input image, a region partitioning module to classify the local decoded image into a plurality of regions using a given parameter, a filter designing module to design a filter coefficient for every classified region, a filter processor to filter the local decoded image according to the corresponding filter coefficient for every classified region, a frame memory to store the filtered image, a predictor to produce a prediction image using the stored image, and an encoder to output, as encoded data, the parameter used for region classification and filter coefficient information for every classified region.
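The region-based filtering above can be sketched minimally: classify each sample of the local decoded image with a simple parameter, then apply a per-class filter. The activity-threshold classifier and the reduction of each region's filter to a single gain are illustrative assumptions; the patent's classification parameter and filters are not specified here.

```python
# Sketch: partition a decoded image into regions by a parameter (here,
# deviation from the mean, an assumed "activity" measure) and filter each
# region with its own coefficient (reduced to a per-class gain).

def classify(image, threshold):
    """Class 0 for flat samples, class 1 for active ones."""
    mean = sum(image) / len(image)
    return [1 if abs(x - mean) > threshold else 0 for x in image]

def filter_image(image, classes, gains):
    """Per-region filtering, simplified to one gain per class."""
    return [x * gains[c] for x, c in zip(image, classes)]

image = [100, 102, 200, 99, 210]
classes = classify(image, threshold=50)
print(classes)  # [0, 0, 1, 0, 1]
print(filter_image(image, classes, gains={0: 1.0, 1: 0.9}))
```

The classification parameter and the per-region coefficients are exactly what the encoder in the abstract emits, so the decoder can repartition the image the same way and apply matching filters.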
Abstract: A video encoding apparatus includes a dividing unit 101 to divide an input image signal into to-be-encoded pixel blocks, a reblocking unit 102 to reblock each of the to-be-encoded pixel blocks to generate a first pixel block and a second pixel block, a first prediction unit 108A to perform prediction for the first pixel block using a first local decoded image corresponding to an encoded pixel to generate a first predicted image, a generation unit to generate a second local decoded image corresponding to the first pixel block using a first prediction error representing the difference between the first pixel block and the first predicted image, a second prediction unit 108B to perform prediction for the second pixel block using the first local decoded image and the second local decoded image to generate a second predicted image, and an encoding unit 103-105 to transform and encode the first prediction error and a second prediction error representing the difference between the second pixel block and the second predicted image.
Abstract: A video encoding method includes subjecting an input video image to motion-compensated temporal filtering using a motion-compensated temporal filter to produce a low-pass filtered image, quantizing a transform coefficient of the low-pass filtered image, encoding the quantized transform coefficient, calculating a weight to be given to a low-pass filter coefficient of a low-pass filter of the motion-compensated temporal filter according to the coarseness of quantization and the magnitude of a motion-compensated error, and controlling a high-band stopping characteristic of the low-pass filter according to the low-pass filter coefficient weighted by the weight, wherein the controlling controls the high-band stopping characteristic of the low-pass filter to have a positive correlation with the quantization parameter and a negative correlation with the magnitude of the motion-compensated error.
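The correlation constraint stated above — filter strength rising with the quantization parameter and falling with the motion-compensated error — can be sketched with one plausible weight mapping. The clamped-ratio formula and its parameters are illustrative assumptions; only the sign of the two correlations comes from the abstract.

```python
# Sketch: a weight in [0, 1] controlling high-band suppression of the
# motion-compensated temporal low-pass filter. The mapping is an assumed
# clamped ratio; the abstract only fixes the correlation directions.

def lowpass_weight(qp, mc_error, alpha=1.0, beta=1.0):
    """Larger weight -> stronger high-band suppression. Increases with
    the quantization parameter, decreases with the MC error magnitude."""
    w = (alpha * qp) / (alpha * qp + beta * mc_error + 1e-9)
    return max(0.0, min(1.0, w))

# Coarser quantization (higher QP) -> stronger smoothing:
print(lowpass_weight(40, 10) > lowpass_weight(20, 10))  # True
# Larger motion-compensated error -> weaker smoothing:
print(lowpass_weight(30, 50) < lowpass_weight(30, 5))   # True
```

Intuitively, heavy quantization destroys high-frequency detail anyway, so strong temporal smoothing is cheap; where motion compensation fails, smoothing across frames would blur real content, so the filter backs off.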