Patents by Inventor Kazuto Kamikura
Kazuto Kamikura has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Image generation method and apparatus, program therefor, and storage medium which stores the program
Patent number: 8355596
Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a bit depth higher than that of image A. Image C having the same bit depth as image B is generated by increasing the bit depth of image A by means of tone mapping; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
Type: Grant
Filed: October 9, 2008
Date of Patent: January 15, 2013
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
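The bit-depth conversion step in this abstract can be sketched in Python. This is a minimal illustration only: the function name and the simple linear scaling are assumptions, since the abstract does not specify which tone-mapping curve is used.

```python
def increase_bit_depth(pixels, src_bits, dst_bits):
    """Linearly rescale pixel values from src_bits depth to dst_bits depth.

    One simple tone-mapping choice, for illustration only; the
    abstract above leaves the actual mapping unspecified.
    """
    scale = ((1 << dst_bits) - 1) / ((1 << src_bits) - 1)
    return [round(p * scale) for p in pixels]

# Map an 8-bit row of image A up to the 10-bit depth of image B.
row_c = increase_bit_depth([0, 128, 255], src_bits=8, dst_bits=10)
```

After this step, image C has image B's bit depth, so corresponding-point estimation between C and B can compare pixel values directly.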
Image generation method and apparatus, program therefor, and storage medium which stores the program
Patent number: 8346019
Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point are estimated. To each estimated pixel position in color signal Y of image A, image information of the corresponding position in the second color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
Type: Grant
Filed: October 9, 2008
Date of Patent: January 1, 2013
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
Patent number: 8290289
Abstract: An image encoding method includes determining a corresponding point on a target image for encoding, which corresponds to each pixel on a reference image, based on the distance from a camera used for obtaining the reference image to an imaged object, and a positional relationship between cameras; computing a parallax vector from the position of the pixel to the corresponding point in the pixel space; computing a target predictive vector having the same starting point as the parallax vector and components obtained by rounding off the components of the parallax vector; computing a target reference vector having the same starting point as the parallax vector and the same size and direction as a differential vector between the target predictive vector and the parallax vector; and setting a predicted pixel value of a pixel on the target encoding image, which is indicated by the target predictive vector, to a value of a pixel on the reference image, which is indicated by the target reference vector.
Type: Grant
Filed: September 18, 2007
Date of Patent: October 16, 2012
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Shinya Shimizu, Masaki Kitahara, Kazuto Kamikura, Yoshiyuki Yashima
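The vector decomposition this abstract describes is simple arithmetic, and a short sketch may make it concrete. The function and variable names below are illustrative, not taken from the patent: a fractional parallax vector is split into an integer (rounded) predictive vector and a residual reference vector.

```python
def split_parallax_vector(parallax):
    """Split a (possibly fractional) parallax vector into:
      - a predictive vector: each component rounded to an integer, and
      - a reference vector: the component-wise difference
        parallax - predictive,
    following the decomposition described in the abstract above.
    """
    predictive = tuple(round(c) for c in parallax)
    reference = tuple(p - q for p, q in zip(parallax, predictive))
    return predictive, reference

pred_vec, ref_vec = split_parallax_vector((2.3, -1.6))
```

In this decomposition the predictive vector locates the pixel to predict on the target image, while the small reference vector selects the sub-position on the reference image, so predictive + reference reproduces the original parallax exactly.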
-
Publication number: 20120170651
Abstract: A video encoding method includes selecting a reference vector target frame and a reference frame from among already-encoded frames; encoding information for designating each frame; setting a reference vector for indicating an area in the reference vector target frame with respect to an encoding target area; encoding the reference vector; performing a corresponding area search by using image information of a reference vector target area, which belongs to the reference vector target frame and is indicated by the reference vector, and the reference frame; determining a reference area in the reference frame based on the search result; generating a predicted image by using image information of the reference frame, which corresponds to the reference area; and encoding differential information between image information of the encoding target area and the predicted image.
Type: Application
Filed: March 15, 2012
Publication date: July 5, 2012
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
-
Patent number: 8204118
Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
Type: Grant
Filed: June 23, 2008
Date of Patent: June 19, 2012
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
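The prediction structure in this abstract reduces to two per-pixel sums. A minimal sketch, assuming 1-D pixel lists and hypothetical function names (the patent itself operates on 2-D image areas):

```python
def predict_block(synthetic, predicted_differential):
    """Predicted image for the target area = predicted differential
    image + synthetic image, per pixel, as stated in the abstract."""
    return [s + d for s, d in zip(synthetic, predicted_differential)]

def prediction_residual(target, predicted):
    """Prediction residual to be encoded: target minus prediction."""
    return [t - p for t, p in zip(target, predicted)]

pred = predict_block([100, 101, 99], [2, -1, 0])   # [102, 100, 99]
res = prediction_residual([103, 100, 98], pred)    # [1, 0, -1]
```

Predicting the differential image (rather than the target image directly) lets the encoder exploit correlation left over after disparity-based view synthesis.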
-
Publication number: 20110194599
Abstract: In scalable video encoding, incidence rates of combinations of optimum prediction modes to be selected for spatially corresponding blocks of an upper layer and a lower layer are determined based on an optimum prediction mode that was selected in a conventional encoding, and then a correspondence table that describes relationships therebetween is created. Subsequently, the combinations of the selected optimum prediction modes described in the correspondence table are narrowed down based on the value of the incidence rate so as to create prediction mode correspondence information that describes the narrowed-down combinations of optimum prediction modes.
Type: Application
Filed: October 21, 2009
Publication date: August 11, 2011
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20110188574
Abstract: A direction in which a pixel value changes is detected for each block, represented by an edge that indicates the direction of change in pixel value in the block; a direction in which a deblocking filter is to be applied to a block boundary is determined based on the direction of the edge detected for a block to be processed which includes the block boundary subject to deblocking and on the direction of the edge detected for a block contacting the block to be processed; and the deblocking filter is applied to the block boundary in accordance with the determined direction.
Type: Application
Filed: October 21, 2009
Publication date: August 4, 2011
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shohei Matsuo, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
-
Patent number: 7929605
Abstract: In order to make it possible to obtain the correct decoded image even in the case of not decoding a particular frame of the encoded data and improve the coding efficiency, the predicted image production unit 103 selects the image data from the image data of a plurality of frames in the reference image memory 107 which are encoded in the past, of the i-th (1 ≤ i ≤ j) category, for the current frame which is classified as the j-th category by the image classifying unit 102, and produces the predicted image. The difference encoding unit 104 encodes a difference between the image data of the current frame and the predicted image. Also, the current category encoding unit 106 encodes the category number of the current frame, and the reference image specifying data encoding unit 105 encodes the reference image specifying data which specifies the image data selected from the reference image memory 107.
Type: Grant
Filed: July 22, 2004
Date of Patent: April 19, 2011
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Masaki Kitahara, Kazuto Kamikura
-
Publication number: 20100329344
Abstract: A scalable video encoding method of performing encoding by predicting an upper-layer signal having a relatively high spatial resolution by means of interpolation using an immediately-lower-layer signal having a relatively low spatial resolution. The method computes a first weighting coefficient for each image area of a predetermined unit size in a search for estimating a motion between an encoding target image area in an upper layer and a reference image area, where the first weighting coefficient is computed based on a brightness variation between an image area, which belongs to an immediately-lower layer and has the same spatial position as the encoding target image area, and the reference image area; and performs a motion estimation using a signal which is obtained by correcting a decoded signal of the reference image area by the first weighting coefficient and functions as an estimated signal in the motion estimation, so as to compute a motion vector.
Type: Application
Filed: July 1, 2008
Publication date: December 30, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
-
IMAGE GENERATION METHOD AND APPARATUS, PROGRAM THEREFOR, AND STORAGE MEDIUM WHICH STORES THE PROGRAM
Publication number: 20100290715
Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a bit depth higher than that of image A. Image C having the same bit depth as image B is generated by increasing the bit depth of image A by means of tone mapping; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
Type: Application
Filed: October 9, 2008
Publication date: November 18, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100220784
Abstract: By using parallax compensation which performs prediction by using parallax between video images, the video images are encoded as a single video image. Reference parallax for a target image to be encoded is set, wherein the reference parallax is estimated using a reference image; area division in an image frame is set; parallax displacement for each divided area is set, wherein the parallax displacement is the difference between the reference parallax and parallax for the parallax compensation; data of the area division is encoded; and data for indicating the parallax displacement is encoded. During decoding, reference parallax for a target image to be decoded is set, wherein it is estimated using a reference image; data for indicating area division, which is included in encoded data, is decoded; and data of parallax displacement, which is included in the encoded data, is decoded for each area indicated by the area division data.
Type: Application
Filed: January 4, 2007
Publication date: September 2, 2010
Applicants: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, NATIONAL UNIVERSITY CORPORATION NAGOYA UNIVERSITY
Inventors: Masayuki Tanimoto, Toshiaki Fujii, Kenji Yamamoto, Masaki Kitahara, Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100215095
Abstract: A video scalable encoding method calculates a weight coefficient which includes a proportional coefficient and an offset coefficient and indicates brightness variation between an encoding target image region and a reference image region in an upper layer, calculates a motion vector by applying the weight coefficient to an image signal of a reference image region as a search target and executing motion estimation, and generates a prediction signal by applying the weight coefficient to a decoded signal of a reference image region indicated by the motion vector and executing motion compensation. Based on encoding information of an immediately-lower image region in an immediately-lower layer, which is present at spatially the same position as the encoding target image region, a data structure of the weight coefficient is determined.
Type: Application
Filed: October 20, 2008
Publication date: August 26, 2010
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
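The weight coefficient described here has the familiar weighted-prediction form: a corrected reference pixel is w * p + o, and motion estimation compares the target against the corrected reference. A minimal sketch under assumed names and a toy SAD-based search (the actual search strategy and coefficient derivation are not specified in the abstract):

```python
def weighted(block, w, o):
    """Apply proportional coefficient w and offset coefficient o
    to every pixel of a reference block."""
    return [w * p + o for p in block]

def sad(a, b):
    """Sum of absolute differences between two pixel lists."""
    return sum(abs(x - y) for x, y in zip(a, b))

target = [100, 102, 98]                 # encoding target block
candidates = {(-1, 0): [50, 51, 49],    # candidate reference blocks,
              (0, 0): [90, 92, 88]}     # keyed by motion vector
w, o = 2.0, 0.0                         # brightness roughly doubled

# Motion estimation on the brightness-corrected reference signal.
best_mv = min(candidates,
              key=lambda mv: sad(target, weighted(candidates[mv], w, o)))
```

Without the correction, the dimmer block at (-1, 0) would lose the SAD comparison despite being the true match; applying (w, o) first lets the search find it.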
-
IMAGE GENERATION METHOD AND APPARATUS, PROGRAM THEREFOR, AND STORAGE MEDIUM WHICH STORES THE PROGRAM
Publication number: 20100208991
Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point are estimated. To each estimated pixel position in color signal Y of image A, image information of the corresponding position in the second color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
Type: Application
Filed: October 9, 2008
Publication date: August 19, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100208803
Abstract: A method for encoding an image using an intraframe prediction is provided which includes selecting a gradient of a pixel value that is indicated by an image signal to be predicted among a plurality of gradient candidates, generating a predicted signal by applying a gradient in accordance with the distance from a prediction reference pixel based on the gradient, intraframe-encoding the image signal to be predicted based on the predicted signal, and encoding information indicating the size of the selected gradient. Alternatively, the method includes estimating the gradient of a pixel value that is indicated by an image signal to be predicted based on an image signal which has already been encoded, generating a predicted signal by applying a gradient in accordance with the distance from a prediction reference pixel based on the gradient, and intraframe-encoding the image signal to be predicted based on the predicted signal.
Type: Application
Filed: October 14, 2008
Publication date: August 19, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Shohei Matsuo, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
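The core idea, applying a gradient scaled by the distance from the prediction reference pixel, can be sketched in one line of arithmetic. A hypothetical 1-D illustration (the patent operates on 2-D blocks, and the gradient-candidate selection is omitted here):

```python
def gradient_intra_predict(ref_pixel, gradient, length):
    """Predict a 1-D run of pixels from one reference pixel by adding
    the gradient multiplied by the distance from that reference."""
    return [ref_pixel + gradient * d for d in range(1, length + 1)]

# Reference pixel 100, selected gradient +2, predicting 4 pixels.
pred = gradient_intra_predict(100, 2, 4)   # [102, 104, 106, 108]
```

A flat (DC-style) prediction would repeat 100 four times; the gradient variant instead tracks a smooth ramp, which is the case this scheme targets.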
-
IMAGE GENERATION METHOD AND APPARATUS, PROGRAM THEREFOR, AND STORAGE MEDIUM WHICH STORES THE PROGRAM
Publication number: 20100209016
Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a resolution higher than that of image A. Image C having the same resolution as image B is generated by enlarging image A; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
Type: Application
Filed: October 9, 2008
Publication date: August 19, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100189177
Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
Type: Application
Filed: June 23, 2008
Publication date: July 29, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100118939
Abstract: When video images are processed by applying temporal or spatial interframe prediction encoding to each divided area, and generating a predicted image of a processing target area based on a reference frame of the processing target area and reference information which indicates a predicted target position of the processing target area in the reference frame, predicted reference information is generated as predicted information of the reference information. Reference information used when an area adjacent to the processing target area was processed is determined as predicted reference information prediction data used for predicting the reference information of the processing target area. Reference area reference information is generated using one or more pieces of reference information used when a reference area indicated by the prediction data was processed. The predicted reference information prediction data is updated using the reference area reference information.
Type: Application
Filed: October 23, 2007
Publication date: May 13, 2010
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima, Hideaki Kimata
-
Publication number: 20100086222
Abstract: An image encoding method includes determining and encoding global parallax data which is probably correct parallax data in consideration of the Epipolar geometry constraint between a camera of a standard viewpoint, which is selected from the entire multi-viewpoint images, and images obtained by all the other viewpoints; generating base parallax data for each camera as a viewpoint other than the standard viewpoint, where the base parallax data is probably correct parallax data in consideration of the Epipolar geometry constraint between the image of the relevant camera and the images of all the other cameras based on the global parallax data and the camera parameters; determining and encoding correction parallax data used for correcting the base parallax data, so as to indicate parallax data between the image of the relevant camera and an already-encoded reference viewpoint image used for parallax compensation; and encoding the image of the relevant camera by using parallax data obtained by correcting the base parallax data.
Type: Application
Filed: September 18, 2007
Publication date: April 8, 2010
Applicant: Nippon Telegraph and Telephone Corporation
Inventors: Shinya Shimizu, Masaki Kitahara, Kazuto Kamikura, Yoshiyuki Yashima
-
Publication number: 20100067579
Abstract: A video encoding method, in which a video signal consisting of two or more signal elements is targeted to be encoded, includes a step of setting a downsampling ratio for a specific signal element in a frame, in accordance with the characteristics in the frame; and a step of generating a target video signal to be encoded, by subjecting the specific signal element in the frame to downsampling in accordance with the set downsampling ratio. The frame may be divided into partial areas in accordance with localized characteristics in the frame; and a downsampling ratio for a specific signal element in these partial areas may be set in accordance with the characteristics in each partial area.
Type: Application
Filed: October 5, 2007
Publication date: March 18, 2010
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Yukihiro Bandoh, Kazuto Kamikura, Yoshiyuki Yashima
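The downsampling step this abstract describes can be illustrated with plain decimation. This is a deliberately simplified sketch: a real encoder would typically low-pass filter before discarding samples, and the abstract leaves the downsampling method open.

```python
def downsample(samples, ratio):
    """Keep every `ratio`-th sample of one signal element
    (plain decimation; no anti-aliasing filter, for illustration)."""
    return samples[::ratio]

# Keep the luma element at full rate, halve a chroma-like element.
luma = [16, 18, 20, 22, 24, 26]
chroma = downsample([128, 130, 132, 134, 136, 138], ratio=2)
```

Setting the ratio per signal element (and, per the second sentence, per partial area) lets the encoder spend fewer samples on elements or regions where detail matters less.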
-
Publication number: 20100067584
Abstract: A video processing method includes dividing a processing target image, which forms a video image, into a plurality of divided areas; determining a bandwidth applied to the divided areas; computing a filter coefficient array for implementing frequency characteristics corresponding to a band limitation using the bandwidth; subjecting the image data to a filtering process using the filter coefficient array; deriving a value of error information between the obtained data and the original image data, and computing an allocation coefficient used for determining an optimum bandwidth, based on the derived value; determining, for each divided area, the optimum bandwidth corresponding to the allocation coefficient, and computing an optimum filter coefficient array for implementing the frequency characteristics corresponding to a band limitation using the optimum bandwidth; subjecting the image data of the divided area to a filtering process using the optimum filter coefficient array; and synthesizing the obtained data.
Type: Application
Filed: December 26, 2007
Publication date: March 18, 2010
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Tokinobu Mitasaki, Kazuto Kamikura, Naoki Ono