Patents by Inventor Hideaki Kimata

Hideaki Kimata has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8355596
    Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a bit depth higher than that of image A. Image C having the same bit depth as image B is generated by increasing the bit depth of image A by means of tone mapping; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: January 15, 2013
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
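
The entry above describes generating a higher-bit-depth image C from a lower-bit-depth image A and a higher-bit-depth image B of the same scene. Below is a minimal Python sketch of that flow under stated assumptions: a simple left-shift stands in for the tone mapping, and the correspondence map `corr` is assumed to be estimated elsewhere, since the abstract does not fix either.

```python
import numpy as np

def generate_image_c(image_a, image_b, corr, bits_a=8, bits_b=10):
    """Sketch of the method in patent 8355596.  Assumptions: a linear
    left-shift stands in for the patent's tone mapping, and `corr` is a
    precomputed (H, W, 2) correspondence map into image_b, with (-1, -1)
    where no corresponding point was estimated.
    """
    # Step 1: raise image A to image B's bit depth by tone mapping.
    image_c = image_a.astype(np.uint16) << (bits_b - bits_a)

    h, w = image_c.shape
    assigned = np.zeros((h, w), dtype=bool)

    # Step 2: where a corresponding point exists, take B's value.
    for y in range(h):
        for x in range(w):
            by, bx = corr[y, x]
            if by >= 0:
                image_c[y, x] = image_b[by, bx]
                assigned[y, x] = True

    # Step 3: pixels without a corresponding point are filled from the
    # already-assigned values (here: nearest assigned pixel in the row,
    # a placeholder for the patent's unspecified generation step).
    for y in range(h):
        cols = np.flatnonzero(assigned[y])
        if cols.size:
            for x in np.flatnonzero(~assigned[y]):
                image_c[y, x] = image_c[y, cols[np.abs(cols - x).argmin()]]
    return image_c
```
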
  • Patent number: 8355438
    Abstract: When video images are processed by applying temporal or spatial interframe prediction encoding to each divided area, and generating a predicted image of a processing target area based on a reference frame of the processing target area and reference information which indicates a predicted target position of the processing target area in the reference frame, predicted reference information is generated as predicted information of the reference information. Reference information used when an area adjacent to the processing target area was processed is determined as predicted reference information prediction data used for predicting the reference information of the processing target area. Reference area reference information is generated using one or more pieces of reference information used when a reference area indicated by the prediction data was processed. The predicted reference information prediction data is updated using the reference area reference information.
    Type: Grant
    Filed: October 23, 2007
    Date of Patent: January 15, 2013
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima, Hideaki Kimata
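
As a rough illustration of the predicted-reference-information idea in the entry above, the sketch below takes the vectors of adjacent areas as prediction data and updates each with the vector used when its reference area was processed. The helper `ref_area_vector_lookup`, the vector-accumulation update, and the median combination are assumptions made for the sketch, not details from the patent.

```python
import numpy as np

def predict_reference_info(neighbor_vectors, ref_area_vector_lookup):
    """Sketch of predicted reference information generation (patent 8355438).
    `neighbor_vectors` are the vectors used by already-processed adjacent
    areas; `ref_area_vector_lookup(vec)` is an assumed helper returning the
    vector used when the reference area pointed to by `vec` was itself
    processed (None if unavailable).
    """
    updated = []
    for vec in neighbor_vectors:
        ref_of_ref = ref_area_vector_lookup(tuple(vec))
        if ref_of_ref is not None:
            # Update the prediction data using the reference-area reference
            # information (assumed here to mean vector accumulation).
            updated.append(np.asarray(vec) + np.asarray(ref_of_ref))
        else:
            updated.append(np.asarray(vec))
    # Combine the updated candidates; the component-wise median is a
    # placeholder choice for the final predicted reference information.
    return np.median(np.stack(updated), axis=0)

# Example: three neighbours, only the first reference area has stored info.
lookup = {(4, 0): (2, 1)}.get
print(predict_reference_info([(4, 0), (3, -1), (5, 2)], lookup))
```
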
  • Patent number: 8346019
    Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point are estimated. To each estimated pixel position in color signal Y of image A, image information of the corresponding position in the second color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: January 1, 2013
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
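
A hedged sketch of the assignment step in the entry above: the correspondence between color signal X of image A and color signal X of image B is assumed to be precomputed and passed in as `corr`, and a mean fallback stands in for the patent's unspecified generation of Y where no corresponding point exists.

```python
import numpy as np

def generate_color_signal_y(corr, y_b, fill_value=None):
    """Sketch for patent 8346019.  `corr` is an assumed (H, W, 2) array of
    corresponding positions in image B, with (-1, -1) meaning "no
    corresponding point"; `y_b` is color signal Y of image B.
    """
    h, w, _ = corr.shape
    y_a = np.zeros((h, w), dtype=y_b.dtype)
    has_corr = corr[..., 0] >= 0

    # Assign Y from the corresponding positions in image B.
    ys, xs = np.nonzero(has_corr)
    y_a[ys, xs] = y_b[corr[ys, xs, 0], corr[ys, xs, 1]]

    # Generate Y where no corresponding point was estimated, using the
    # already-assigned values (mean is a placeholder choice).
    fallback = y_a[has_corr].mean() if has_corr.any() else 0
    y_a[~has_corr] = fill_value if fill_value is not None else fallback
    return y_a
```
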
  • Publication number: 20120320986
    Abstract: Efficient multiview video encoding is realized even in a situation in which a processing picture cannot be obtained, by accurately estimating a motion vector and simultaneously using an inter-camera correlation and a temporal correlation in prediction of a video signal. A view synthesized picture at a time when a processing picture has been taken is generated from a reference camera video that has been taken by a camera different from a processing camera that has taken the processing picture included in a multiview video based on the same setting as that of the processing camera. A motion vector is estimated by searching for a corresponding region in a reference picture taken by the processing camera using a picture signal on the view synthesized picture corresponding to a processing region on the processing picture without using the processing picture.
    Type: Application
    Filed: February 18, 2011
    Publication date: December 20, 2012
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Hideaki Kimata, Norihiko Matsuura
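
The key point of the entry above is that motion is estimated without the processing picture itself, by matching the view synthesized picture against a reference picture from the same camera. The sketch below illustrates that substitution; SAD matching over an exhaustive search window is a placeholder choice, not taken from the publication.

```python
import numpy as np

def estimate_motion_from_synthesis(synth, reference, block_tl,
                                   block_size=8, search_range=16):
    """Sketch of publication 20120320986: the motion vector for a processing
    region is found by block matching that uses the view synthesized picture
    `synth` (generated from another camera) in place of the processing
    picture, searching in `reference`, an earlier picture from the
    processing camera.
    """
    y0, x0 = block_tl
    b = synth[y0:y0 + block_size, x0:x0 + block_size].astype(np.int32)
    h, w = reference.shape
    best, best_mv = None, (0, 0)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ry, rx = y0 + dy, x0 + dx
            if ry < 0 or rx < 0 or ry + block_size > h or rx + block_size > w:
                continue
            cand = reference[ry:ry + block_size,
                             rx:rx + block_size].astype(np.int32)
            sad = np.abs(b - cand).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv
```
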
  • Publication number: 20120314776
    Abstract: A highly efficient encoding technique is realized even for a multiview video involved in local mismatches in illumination and color between cameras. A view synthesized picture corresponding to an encoding target frame is synthesized from an already encoded reference view frame taken at a reference view different from an encoding target view simultaneously with the encoding target frame at the encoding target view of a multiview video. For each processing unit region having a predetermined size, a reference region on an already encoded reference frame at the encoding target view corresponding to the view synthesized picture is searched for. A correction parameter for correcting a mismatch between cameras is estimated from the view synthesized picture for the processing unit region and the reference frame for the reference region. The view synthesized picture for the processing unit region is corrected using the estimated correction parameter.
    Type: Application
    Filed: February 21, 2011
    Publication date: December 13, 2012
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Hideaki Kimata, Norihiko Matsuura
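
A minimal sketch of the per-region correction described above, assuming a gain/offset mismatch model (the abstract does not specify the parameter form). Because both the view synthesized region and the matched reference region exist at the decoder, the fitted parameters need not be transmitted.

```python
import numpy as np

def correct_view_synthesis(synth_region, ref_region):
    """Sketch for publication 20120314776: the inter-camera illumination and
    colour mismatch is modelled as a gain/offset per processing unit region
    (an assumption).  The parameters are fitted between the view synthesized
    picture for the region and the matched reference region, then applied to
    the synthesized region.
    """
    s = synth_region.astype(np.float64).ravel()
    r = ref_region.astype(np.float64).ravel()
    # Least-squares fit of r ~ gain * s + offset.
    a = np.stack([s, np.ones_like(s)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(a, r, rcond=None)
    corrected = gain * synth_region.astype(np.float64) + offset
    return np.clip(corrected, 0, 255).astype(synth_region.dtype)
```
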
  • Patent number: 8275048
    Abstract: A video encoding method for assigning a plurality of images to a plurality of GOPs and encoding images belonging to the GOPs as a video image. The method includes determining whether each image belonging to each GOP is to be encoded; encoding GOP encoding/non-encoding data for indicating whether encoded data of the image belonging to the relevant GOP is output; and encoding the image belonging to the relevant GOP when the encoded data of the image is output. Typically, it is determined whether an image generated by using one or more other GOPs without decoding the encoded data of the relevant GOP is closer to an original image of the relevant image in comparison with an image obtained by decoding the encoded data, so as to determine whether the image belonging to the relevant GOP is to be encoded.
    Type: Grant
    Filed: September 30, 2005
    Date of Patent: September 25, 2012
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Masaki Kitahara, Hideaki Kimata
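
The decision rule in the entry above can be summarized in a few lines; the sketch below uses mean squared error as the closeness measure, which is an assumption, and takes the image generated from other GOPs as a given input.

```python
import numpy as np

def decide_gop_encoding(original, decoded_from_this_gop,
                        generated_from_other_gops):
    """Sketch of the decision described for patent 8275048: a GOP's encoded
    data is output only if decoding it gets closer to the original than an
    image generated from other GOPs without decoding this GOP (how that
    image is generated is not shown here).  MSE is a placeholder for the
    patent's unspecified closeness measure.
    """
    def mse(a, b):
        return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

    encode_this_gop = (mse(original, decoded_from_this_gop)
                       < mse(original, generated_from_other_gops))
    # This flag corresponds to the GOP encoding/non-encoding data that the
    # patent writes into the stream.
    return encode_this_gop
```
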
  • Publication number: 20120170651
    Abstract: A video encoding method includes selecting a reference vector target frame and a reference frame from among already-encoded frames; encoding information for designating each frame; setting a reference vector for indicating an area in the reference vector target frame with respect to an encoding target area; encoding the reference vector; performing a corresponding area search by using image information of a reference vector target area, which belongs to the reference vector target frame and is indicated by the reference vector, and the reference frame; determining a reference area in the reference frame based on the search result; generating a predicted image by using image information of the reference frame, which corresponds to the reference area; and encoding differential information between image information of the encoding target area and the predicted image.
    Type: Application
    Filed: March 15, 2012
    Publication date: July 5, 2012
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
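
A sketch of the prediction step in the entry above: the reference vector locates an area in the reference vector target frame, that area's image information is matched against the reference frame to find the reference area, and the reference area's pixels form the predicted image. The SAD search is a placeholder for the corresponding-area search.

```python
import numpy as np

def predict_via_reference_vector(ref_vec_target_frame, reference_frame,
                                 area_tl, area_size, reference_vector,
                                 search_range=8):
    """Sketch of the prediction in publication 20120170651.  Names and the
    SAD-based exhaustive search are illustrative assumptions."""
    (y0, x0), s = area_tl, area_size
    vy, vx = reference_vector
    # Image information of the reference vector target area.
    anchor = ref_vec_target_frame[y0 + vy:y0 + vy + s,
                                  x0 + vx:x0 + vx + s].astype(np.int32)

    h, w = reference_frame.shape
    best, best_pos = None, (y0 + vy, x0 + vx)
    for dy in range(-search_range, search_range + 1):
        for dx in range(-search_range, search_range + 1):
            ry, rx = y0 + vy + dy, x0 + vx + dx
            if ry < 0 or rx < 0 or ry + s > h or rx + s > w:
                continue
            cand = reference_frame[ry:ry + s, rx:rx + s].astype(np.int32)
            sad = np.abs(anchor - cand).sum()
            if best is None or sad < best:
                best, best_pos = sad, (ry, rx)

    ry, rx = best_pos
    predicted = reference_frame[ry:ry + s, rx:rx + s]
    # The encoder would then encode the reference vector and the residual
    # between the encoding target area and `predicted`.
    return predicted
```
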
  • Patent number: 8204118
    Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: June 19, 2012
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
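
The entry above amounts to a two-stage prediction: the synthetic image plus a predicted differential image form the block's predicted image, and only the remaining residual is encoded. A minimal sketch follows, with the view synthesis and the differential prediction assumed to be computed elsewhere and passed in.

```python
import numpy as np

def prediction_residual(target_block, synthetic_block, predicted_diff_block):
    """Sketch of the prediction structure in patent 8204118: the synthetic
    image for the encoding camera (from a reference camera image plus
    disparity information) and a predicted image for the differential signal
    are assumed given; their sum is the block's predicted image, and the
    residual against the target block is what gets encoded.
    """
    predicted_image = (predicted_diff_block.astype(np.int32)
                       + synthetic_block.astype(np.int32))
    residual = target_block.astype(np.int32) - predicted_image
    return residual  # this is what the transform/entropy coder would see
```
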
  • Publication number: 20120027291
    Abstract: The disclosed multi-view image coding/decoding device first obtains depth information for an object photographed in an area subject to processing. Next, a group of pixels in an already-coded (decoded) area which is adjacent to the area subject to processing and in which the same object as in the area subject to processing has been photographed is determined using the depth information and set as a sample pixel group. Then, a view synthesis image is generated for the pixels included in the sample pixel group and the area subject to processing. Next, correction parameters to correct illumination and color mismatches in the sample pixel group are estimated from the view synthesis image and the decoded image. A predicted image is then generated by correcting the view synthesis image relative to the area subject to processing using the estimated correction parameters.
    Type: Application
    Filed: February 23, 2010
    Publication date: February 2, 2012
    Applicants: NATIONAL UNIVERSITY CORPORATION NAGOYA UNIVERSITY, NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Hideaki Kimata, Masayuki Tanimoto
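
A sketch of the sample-pixel-group idea in the entry above. Boolean masks for the processing and adjacent areas, a flat depth tolerance, and a gain/offset correction model are all assumptions made for the illustration; the publication only requires that the samples come from adjacent decoded pixels showing the same object, selected via the depth information.

```python
import numpy as np

def correct_with_depth_samples(depth, synth, decoded, proc_mask, adj_mask,
                               depth_tol=1.0):
    """Sketch for publication 20120027291.  Adjacent, already-decoded pixels
    whose depth matches the object in the processing area form the sample
    pixel group; correction parameters are fitted there between the view
    synthesis image and the decoded image, then applied to the synthesis
    inside the processing area to produce the predicted image.
    """
    object_depth = np.median(depth[proc_mask])
    samples = adj_mask & (np.abs(depth - object_depth) <= depth_tol)
    if not samples.any():            # no usable samples: leave uncorrected
        return synth.copy()

    s = synth[samples].astype(np.float64)
    d = decoded[samples].astype(np.float64)
    a = np.stack([s, np.ones_like(s)], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(a, d, rcond=None)

    predicted = synth.astype(np.float64).copy()
    predicted[proc_mask] = gain * predicted[proc_mask] + offset
    return np.clip(predicted, 0, 255).astype(synth.dtype)
```
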
  • Publication number: 20110286678
    Abstract: In the disclosed multi-view image encoding/decoding method, in which a frame to be encoded/decoded is divided and each region is encoded/decoded, a prediction image is first generated not only for the region to be processed but also for the already encoded/decoded regions neighboring it, using the same prediction method for both kinds of regions. Next, correction parameters for correcting illumination and color mismatches are estimated from the prediction image and the decoded image of the neighboring regions. Because these parameters can also be obtained at the decoding side, encoding them is unnecessary. The estimated correction parameters are then used to correct the prediction image generated for the region to be processed, producing the corrected prediction image that is actually used.
    Type: Application
    Filed: February 5, 2010
    Publication date: November 24, 2011
    Inventors: Shinya Shimizu, Hideaki Kimata
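
The entry above differs from the depth-based variant in that the correction is estimated from neighbouring regions predicted with the same method as the current region. The sketch below uses a simple mean offset as a stand-in correction model; as in the publication, everything needed for the estimation already exists at the decoder, so no parameter is transmitted.

```python
import numpy as np

def correct_from_neighbours(pred_proc, pred_neigh, decoded_neigh):
    """Sketch for publication 20110286678: the same prediction method is run
    for the region being processed (`pred_proc`) and for its already-decoded
    neighbouring regions (`pred_neigh`); a correction is estimated from the
    neighbours' predicted vs. decoded samples and applied to the processed
    region's prediction.  The mean-offset model is an assumption.
    """
    offset = (decoded_neigh.astype(np.float64).mean()
              - pred_neigh.astype(np.float64).mean())
    corrected = pred_proc.astype(np.float64) + offset
    return np.clip(corrected, 0, 255).astype(pred_proc.dtype)
```
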
  • Patent number: 7929605
    Abstract: In order to obtain a correct decoded image even when a particular frame of the encoded data is not decoded, and to improve coding efficiency, the predicted image production unit 103 selects, for the current frame classified into the j-th category by the image classifying unit 102, image data from among the previously encoded frames of the i-th (1 ≤ i ≤ j) category stored in the reference image memory 107, and produces the predicted image. The difference encoding unit 104 encodes the difference between the image data of the current frame and the predicted image. In addition, the current category encoding unit 106 encodes the category number of the current frame, and the reference image specifying data encoding unit 105 encodes the reference image specifying data, which specifies the image data selected from the reference image memory 107.
    Type: Grant
    Filed: July 22, 2004
    Date of Patent: April 19, 2011
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Masaki Kitahara, Kazuto Kamikura
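
The category constraint described above can be captured as a small reference-memory rule: a frame of category j may only reference frames of categories 1 through j, so higher-category frames can be skipped without breaking lower-category decoding. The sketch below is illustrative only; the class and method names are not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class CategoryReferenceMemory:
    """Sketch of the reference constraint in patent 7929605."""
    frames: list = field(default_factory=list)   # stores (category, frame_id)

    def store(self, category, frame_id):
        self.frames.append((category, frame_id))

    def candidates_for(self, current_category):
        # Only frames of category i with 1 <= i <= current category are
        # eligible reference images for the current frame.
        return [f for c, f in self.frames if 1 <= c <= current_category]

# Example: a category-3 frame may reference categories 1..3 only.
mem = CategoryReferenceMemory()
for cat, fid in [(1, "I0"), (2, "P1"), (3, "B2"), (4, "b3")]:
    mem.store(cat, fid)
print(mem.candidates_for(3))   # ['I0', 'P1', 'B2']
```
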
  • Publication number: 20100290715
    Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a bit depth higher than that of image A. Image C having the same bit depth as image B is generated by increasing the bit depth of image A by means of tone mapping; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: November 18, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100220784
    Abstract: By using parallax compensation which performs prediction by using parallax between video images, the video images are encoded as a single video image. Reference parallax for a target image to be encoded is set, wherein the reference parallax is estimated using a reference image; area division in an image frame is set; parallax displacement for each divided area is set, wherein the parallax displacement is the difference between the reference parallax and parallax for the parallax compensation; data of the area division is encoded; and data for indicating the parallax displacement is encoded. During decoding, reference parallax for a target image to be decoded is set, wherein it is estimated using a reference image; data for indicating area division, which is included in encoded data, is decoded; and data of parallax displacement, which is included in the encoded data, is decoded for each area indicated by the area division data.
    Type: Application
    Filed: January 4, 2007
    Publication date: September 2, 2010
    Applicants: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, NATIONAL UNIVERSITY CORPORATION NAGOYA UNIVERSITY
    Inventors: Masayuki Tanimoto, Toshiaki Fujii, Kenji Yamamoto, Masaki Kitahara, Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
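
A sketch of the displacement coding described in the entry above, assuming dense parallax maps and one displacement value per divided area (the abstract only states that a displacement is set per area). The reference parallax is assumed to be re-estimable at the decoder from the reference image.

```python
import numpy as np

def encode_parallax_displacement(true_parallax, reference_parallax, areas):
    """Sketch for publication 20100220784: the parallax used for parallax
    compensation is sent as a per-area displacement from a reference
    parallax that the decoder can re-estimate.  `areas` is a label map
    giving the area division (layout assumed for the sketch).
    """
    displacements = {}
    for label in np.unique(areas):
        mask = areas == label
        # One displacement value per divided area (an assumption).
        displacements[int(label)] = float(
            np.mean(true_parallax[mask] - reference_parallax[mask]))
    return displacements  # encoded together with the area-division data

def decode_parallax(reference_parallax, areas, displacements):
    parallax = reference_parallax.astype(np.float64).copy()
    for label, d in displacements.items():
        parallax[areas == label] += d
    return parallax
```
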
  • Publication number: 20100209016
    Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a resolution higher than that of image A. Image C having the same resolution as image B is generated by enlarging image A; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: August 19, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100208991
    Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point are estimated. To each estimated pixel position in color signal Y of image A, image information of the corresponding position in the second color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: August 19, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100189177
    Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
    Type: Application
    Filed: June 23, 2008
    Publication date: July 29, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100118939
    Abstract: When video images are processed by applying temporal or spatial interframe prediction encoding to each divided area, and generating a predicted image of a processing target area based on a reference frame of the processing target area and reference information which indicates a predicted target position of the processing target area in the reference frame, predicted reference information is generated as predicted information of the reference information. Reference information used when an area adjacent to the processing target area was processed is determined as predicted reference information prediction data used for predicting the reference information of the processing target area. Reference area reference information is generated using one or more pieces of reference information used when a reference area indicated by the prediction data was processed. The predicted reference information prediction data is updated using the reference area reference information.
    Type: Application
    Filed: October 23, 2007
    Publication date: May 13, 2010
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima, Hideaki Kimata
  • Publication number: 20100008422
    Abstract: A video encoding method includes selecting a reference vector target frame and a reference frame from among already-encoded frames; encoding information for designating each frame; setting a reference vector for indicating an area in the reference vector target frame with respect to an encoding target area; encoding the reference vector; performing a corresponding area search by using image information of a reference vector target area, which belongs to the reference vector target frame and is indicated by the reference vector, and the reference frame; determining a reference area in the reference frame based on the search result; generating a predicted image by using image information of the reference frame, which corresponds to the reference area; and encoding differential information between image information of the encoding target area and the predicted image.
    Type: Application
    Filed: October 24, 2007
    Publication date: January 14, 2010
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20090028248
    Abstract: A video encoding method for encoding video images as a single video image by using parallax compensation which performs prediction by using parallax between the video images, and a corresponding decoding method. The number of parameters as parallax data used for the parallax compensation is selected and set for each reference image. Data of the set number of parameters is encoded, and parallax data in accordance with the number of parameters is encoded. During decoding, parallax-parameter number data, which is included in encoded data and designates the number of parameters as parallax data for each reference image, is decoded, and parallax data in accordance with the number of parameters is decoded, where the parallax data is included in the encoded data.
    Type: Application
    Filed: December 29, 2006
    Publication date: January 29, 2009
    Applicants: Nippon Telegraph and Telephone Corporation, National University Corporation Nagoya University
    Inventors: Masaki Kitahara, Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima, Masayuki Tanimoto, Toshiaki Fujii, Kenji Yamamoto
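
The entry above hinges on signalling, per reference image, how many parallax parameters follow. The sketch below shows one possible bitstream layout; the fixed-size count byte and float parameters are purely illustrative, since the abstract does not define the syntax.

```python
import io
import struct

def write_parallax_data(stream, per_reference_params):
    """Sketch of a bitstream layout in the spirit of publication 20090028248
    (the actual syntax is not given in the abstract).  For each reference
    image, the number of parallax parameters is written first, followed by
    that many parameter values."""
    for params in per_reference_params:               # one entry per reference image
        stream.write(struct.pack("<B", len(params)))  # parallax-parameter number data
        for p in params:
            stream.write(struct.pack("<f", p))        # parallax data

def read_parallax_data(stream, num_reference_images):
    out = []
    for _ in range(num_reference_images):
        (count,) = struct.unpack("<B", stream.read(1))
        out.append([struct.unpack("<f", stream.read(4))[0]
                    for _ in range(count)])
    return out

# Example: reference 0 uses a single global parallax, reference 1 a 2-D vector.
buf = io.BytesIO()
write_parallax_data(buf, [[3.5], [1.0, -2.0]])
buf.seek(0)
print(read_parallax_data(buf, 2))   # [[3.5], [1.0, -2.0]]
```
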
  • Publication number: 20080317115
    Abstract: A video encoding method for assigning a plurality of images to a plurality of GOPs and encoding images belonging to the GOPs as a video image. The method includes determining whether each image belonging to each GOP is to be encoded; encoding GOP encoding/non-encoding data for indicating whether encoded data of the image belonging to the relevant GOP is output; and encoding the image belonging to the relevant GOP when the encoded data of the image is output. Typically, it is determined whether an image generated by using one or more other GOPs without decoding the encoded data of the relevant GOP is closer to an original image of the relevant image in comparison with an image obtained by decoding the encoded data, so as to determine whether the image belonging to the relevant GOP is to be encoded.
    Type: Application
    Filed: September 30, 2005
    Publication date: December 25, 2008
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Masaki Kitahara, Hideaki Kimata