Patents by Inventor Yoshiyuki Yashima

Yoshiyuki Yashima has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 8346019
    Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point, are estimated. To each pixel position in color signal Y of image A for which a corresponding point is estimated, the image information of the corresponding position in color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
    Type: Grant
    Filed: October 9, 2008
    Date of Patent: January 1, 2013
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Patent number: 8290289
    Abstract: An image encoding method includes determining a corresponding point on a target image for encoding, which corresponds to each pixel on a reference image, based on the distance from a camera used for obtaining the reference image to an imaged object, and a positional relationship between cameras; computing a parallax vector from the position of the pixel to the corresponding point in the pixel space; computing a target predictive vector having the same starting point as the parallax vector and components obtained by rounding off the components of the parallax vector; computing a target reference vector having the same starting point as the parallax vector and the same size and direction as a differential vector between the target predictive vector and the parallax vector; and setting a predicted pixel value of a pixel on the target encoding image, which is indicated by the target predictive vector, to a value of a pixel on the reference image, which is indicated by the target reference vector.
    Type: Grant
    Filed: September 18, 2007
    Date of Patent: October 16, 2012
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Masaki Kitahara, Kazuto Kamikura, Yoshiyuki Yashima
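The predictive/reference vector split described in the abstract above can be sketched in a few lines. A minimal illustration (function and variable names are ours, not the patent's):

```python
import numpy as np

def split_parallax_vector(parallax):
    # Round each component of the sub-pel parallax vector to obtain the
    # integer "target predictive vector"; the rounding residual plays the
    # role of the "target reference vector" (same starting point, and the
    # same size and direction as the differential vector in the abstract).
    parallax = np.asarray(parallax, dtype=float)
    predictive = np.round(parallax)
    reference = parallax - predictive
    return predictive, reference
```

Summing the two vectors recovers the original parallax vector, which is what lets a decoder reproduce the same prediction.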
  • Publication number: 20120170651
    Abstract: A video encoding method includes selecting a reference vector target frame and a reference frame from among already-encoded frames; encoding information for designating each frame; setting a reference vector for indicating an area in the reference vector target frame with respect to an encoding target area; encoding the reference vector; performing a corresponding area search by using image information of a reference vector target area, which belongs to the reference vector target frame and is indicated by the reference vector, and the reference frame; determining a reference area in the reference frame based on the search result; generating a predicted image by using image information of the reference frame, which corresponds to the reference area; and encoding differential information between image information of the encoding target area and the predicted image.
    Type: Application
    Filed: March 15, 2012
    Publication date: July 5, 2012
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
  • Patent number: 8204118
    Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
    Type: Grant
    Filed: June 23, 2008
    Date of Patent: June 19, 2012
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20110200105
    Abstract: An automatic producing method for a predicted value generation procedure that predicts the value of an encoding target pixel by using previously-decoded pixels. A parent population is generated by randomly producing predicted value generation procedures, each represented by a tree structure, and a plurality of predicted value generation procedures are selected from the parent population as parents. One or more predicted value generation procedures are then generated as children by a predetermined tree-structure development method that evolves the selected parents, in which an existing predicted value generation function can serve as an end node (leaf) of a tree.
    Type: Application
    Filed: October 21, 2009
    Publication date: August 18, 2011
    Applicant: Nippon Telegraph And Telephone Corporation
    Inventors: Seishi Takamura, Masaaki Matsumura, Yoshiyuki Yashima
  • Publication number: 20110194599
    Abstract: In scalable video encoding, incidence rates of the combinations of optimum prediction modes selected for spatially corresponding blocks of an upper layer and a lower layer are determined based on the optimum prediction modes selected in a conventional encoding, and a correspondence table describing the relationships between them is created. Subsequently, the combinations of optimum prediction modes described in the correspondence table are narrowed down based on the incidence rates, so as to create prediction mode correspondence information that describes the narrowed-down combinations.
    Type: Application
    Filed: October 21, 2009
    Publication date: August 11, 2011
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20110188574
    Abstract: For each block, the direction in which the pixel value changes is detected and represented by an edge that indicates that direction of change. The direction in which a deblocking filter is to be applied to a block boundary is determined based on the edge direction detected for the block to be processed, which includes the block boundary subject to deblocking, and on the edge direction detected for a block adjacent to the block to be processed, and the deblocking filter is applied to the block boundary in accordance with the determined direction.
    Type: Application
    Filed: October 21, 2009
    Publication date: August 4, 2011
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shohei Matsuo, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
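As a rough sketch of the direction-adaptive deblocking idea above, one can estimate each block's dominant edge direction from gradient energy and let the two blocks sharing the boundary agree on the filter direction. A toy illustration; the detector and the fallback rule are our assumptions, not the patent's:

```python
import numpy as np

def edge_direction(block):
    # Strong change across rows indicates a horizontal edge; strong change
    # across columns indicates a vertical edge.
    gy = np.abs(np.diff(block.astype(float), axis=0)).sum()
    gx = np.abs(np.diff(block.astype(float), axis=1)).sum()
    return 'horizontal' if gy > gx else 'vertical'

def filter_direction(current, neighbour):
    # Filter along the edge direction when the block being processed and
    # its neighbour across the boundary agree; otherwise fall back to
    # filtering perpendicular to the boundary.
    da, db = edge_direction(current), edge_direction(neighbour)
    return da if da == db else 'perpendicular'
```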
  • Patent number: 7983338
    Abstract: Highly efficient lossless decoding is realized under the condition that codes transmitted as a base part are compatible with the H.264 standard. An orthogonal transformation section (12) performs orthogonal transformation of residual signals (Rorig), acquires transform coefficients (Xorig), and a quantization section (13) quantizes the transform coefficients. An existential space determination section (14) obtains information on upper limits and lower limits of the respective coefficients (an existential space of transform coefficients) from quantization information.
    Type: Grant
    Filed: September 29, 2005
    Date of Patent: July 19, 2011
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Seishi Takamura, Yoshiyuki Yashima
  • Publication number: 20100329344
    Abstract: A scalable video encoding method of performing encoding by predicting an upper-layer signal having a relatively high spatial resolution by means of interpolation using an immediately-lower-layer signal having a relatively low spatial resolution. The method computes a first weighting coefficient for each image area of a predetermined unit size in a search for estimating a motion between an encoding target image area in an upper layer and a reference image area, where the first weighting coefficient is computed based on a brightness variation between an image area, which belongs to an immediately-lower layer and has the same spatial position as the encoding target image area, and the reference image area; and performs a motion estimation using a signal which is obtained by correcting a decoded signal of the reference image area by the first weighting coefficient and functions as an estimated signal in the motion estimation, so as to compute a motion vector.
    Type: Application
    Filed: July 1, 2008
    Publication date: December 30, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100290715
    Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a bit depth higher than that of image A. Image C having the same bit depth as image B is generated by increasing the bit depth of image A by means of tone mapping; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: November 18, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
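The first step of the abstract above, raising image A to image B's bit depth by tone mapping, might look like this with the simplest (linear) tone map. The patent does not fix the mapping, so the left shift here is only a placeholder:

```python
import numpy as np

def tone_map_up(img8, target_bits=10):
    # Linearly stretch an 8-bit image to `target_bits` with a left shift,
    # so that image C ends up with the same bit depth as image B.
    return img8.astype(np.uint16) << (target_bits - 8)
```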
  • Publication number: 20100220784
    Abstract: By using parallax compensation which performs prediction by using parallax between video images, the video images are encoded as a single video image. Reference parallax for a target image to be encoded is set, wherein the reference parallax is estimated using a reference image; area division in an image frame is set; parallax displacement for each divided area is set, wherein the parallax displacement is the difference between the reference parallax and parallax for the parallax compensation; data of the area division is encoded; and data for indicating the parallax displacement is encoded. During decoding, reference parallax for a target image to be decoded is set, wherein it is estimated using a reference image; data for indicating area division, which is included in encoded data, is decoded; and data of parallax displacement, which is included in the encoded data, is decoded for each area indicated by the area division data.
    Type: Application
    Filed: January 4, 2007
    Publication date: September 2, 2010
    Applicants: NIPPON TELEGRAPH AND TELEPHONE CORPORATION, NATIONAL UNIVERSITY CORPORATION NAGOYA UNIVERSITY
    Inventors: Masayuki Tanimoto, Toshiaki Fujii, Kenji Yamamoto, Masaki Kitahara, Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100215102
    Abstract: An image encoding method for encoding a pixel value of an encoding target by using a predicted value generated by means of spatial or temporal prediction using a previously-decoded image. The method performs prediction of the pixel value of the encoding target and obtains the predicted value; computes data of a probability distribution which indicates what value an original pixel value has for the obtained predicted value, by shifting, in accordance with the predicted value, difference distribution data of a difference between the original pixel value and the predicted value in predictive encoding, where the difference distribution data is stored in advance; clips the obtained data of the probability distribution so as to contain the data in a range from a lower limit to an upper limit for possible values of the original pixel value; and encodes the pixel value of the encoding target by using the clipped data of the probability distribution of the original pixel value from the lower limit to the upper limit.
    Type: Application
    Filed: October 23, 2008
    Publication date: August 26, 2010
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Seishi Takamura, Yoshiyuki Yashima
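The shift-and-clip step described above can be sketched as follows, assuming the stored difference distribution is a table `diff_pmf` whose first bin corresponds to the difference `diff_min`. The names and the renormalisation are our reading of the abstract, not the patent's exact procedure:

```python
import numpy as np

def clipped_pixel_distribution(diff_pmf, diff_min, predicted, lo=0, hi=255):
    # diff_pmf[i] approximates P(original - predicted == diff_min + i).
    # Shifting by the predicted value turns it into a distribution over
    # original pixel values; mass outside [lo, hi] is discarded and the
    # remainder renormalised, ready to drive an entropy coder.
    values = np.arange(len(diff_pmf)) + diff_min + predicted
    mask = (values >= lo) & (values <= hi)
    pmf = np.asarray(diff_pmf, dtype=float)[mask]
    return values[mask], pmf / pmf.sum()
```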
  • Publication number: 20100215095
    Abstract: A video scalable encoding method calculates a weight coefficient which includes a proportional coefficient and an offset coefficient and indicates brightness variation between an encoding target image region and a reference image region in an upper layer, calculates a motion vector by applying the weight coefficient to an image signal of a reference image region as a search target and executing motion estimation, and generates a prediction signal by applying the weight coefficient to a decoded signal of a reference image region indicated by the motion vector and executing motion compensation. Based on encoding information of an immediately-lower image region in an immediately-lower layer, which is present at spatially the same position as the encoding target image region, a data structure of the weight coefficient is determined.
    Type: Application
    Filed: October 20, 2008
    Publication date: August 26, 2010
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Kazuya Hayase, Yukihiro Bandoh, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
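A minimal full-search version of the weighted motion estimation above, where every candidate reference block is corrected by the proportional coefficient and offset before the distortion is computed. Block size, search range, and the plain SAD cost are illustrative assumptions:

```python
import numpy as np

def weighted_sad_search(target, reference, scale, offset, search=2):
    # `reference` is padded by `search` pixels on each side of the
    # co-located block. Each candidate block is corrected as
    # scale * ref + offset before the sum of absolute differences
    # is evaluated; the best motion vector and cost are returned.
    h, w = target.shape
    best_cost, best_mv = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = dy + search, dx + search
            cand = reference[y:y + h, x:x + w] * scale + offset
            cost = np.abs(target - cand).sum()
            if best_cost is None or cost < best_cost:
                best_cost, best_mv = cost, (dy, dx)
    return best_mv, best_cost
```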
  • Publication number: 20100208803
    Abstract: A method for encoding an image using intraframe prediction is provided, which includes selecting a gradient of the pixel value indicated by an image signal to be predicted from among a plurality of gradient candidates; generating a predicted signal by applying the selected gradient in accordance with the distance from a prediction reference pixel; intraframe-encoding the image signal to be predicted based on the predicted signal; and encoding information indicating the magnitude of the selected gradient. Alternatively, the method includes estimating the gradient of the pixel value indicated by the image signal to be predicted, based on an image signal which has already been encoded; generating a predicted signal by applying the estimated gradient in accordance with the distance from a prediction reference pixel; and intraframe-encoding the image signal to be predicted based on the predicted signal.
    Type: Application
    Filed: October 14, 2008
    Publication date: August 19, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Shohei Matsuo, Seishi Takamura, Kazuto Kamikura, Yoshiyuki Yashima
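For the distance-dependent gradient prediction described above, a toy vertical-mode predictor might extrapolate the reference row by the chosen gradient. The real method also selects or estimates that gradient, which is omitted here:

```python
import numpy as np

def gradient_intra_prediction(ref_row, gradient, height):
    # Each predicted row equals the reference row above the block plus
    # the gradient times its distance from that row, clipped to 8 bits.
    rows = [ref_row + gradient * (d + 1) for d in range(height)]
    return np.clip(np.stack(rows), 0, 255)
```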
  • Publication number: 20100208991
    Abstract: An image generation method for generating image information of a color signal Y of an image A by using a color signal X of image A, and color signal X and color signal Y of an image B. The presence or absence of a point in color signal X of image B corresponding to each pixel position of color signal X of image A, and the position of the relevant corresponding point, are estimated. To each pixel position in color signal Y of image A for which a corresponding point is estimated, the image information of the corresponding position in color signal Y of image B is assigned. Color signal Y at a pixel position in image A for which it is estimated that there is no corresponding point is generated by using the image information of color signal Y assigned to pixels having a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: August 19, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100209016
    Abstract: An image generation method for generating image information of an image C by using an image A and an image B having a resolution higher than that of image A. Image C having the same resolution as image B is generated by enlarging image A; presence or absence of a point in image B corresponding to each pixel position of image C and the position of the relevant corresponding point are estimated; and to each pixel position in image C for which it is estimated that there is a corresponding point, image information of the corresponding position in image B is assigned. It is possible to generate image information at each pixel position in image C for which it is estimated in the corresponding point estimation that there is no corresponding point, by using the image information assigned according to an estimation result that there is a corresponding point.
    Type: Application
    Filed: October 9, 2008
    Publication date: August 19, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Hideaki Kimata, Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima
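The final fallback in the abstract above, synthesising pixels that received no corresponding point from pixels that did, can be illustrated in one dimension with a nearest-assigned-neighbour fill in scan order. The patent leaves the actual synthesis method open; this is only one simple choice:

```python
import numpy as np

def fill_missing(values, has_corresponding_point):
    # Propagate the most recent value that came from a corresponding
    # point into positions where the estimation found none.
    out = np.asarray(values, dtype=float).copy()
    last = None
    for i in range(len(out)):
        if has_corresponding_point[i]:
            last = out[i]
        elif last is not None:
            out[i] = last
    return out
```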
  • Patent number: 7778474
    Abstract: A scalable encoding method for decomposing an image signal into signals assigned to layers so as to encode the image signal includes encoding an image signal of at least one target layer among the layers based on an amount of codes allocated to the target layer; outputting a residual signal of the target layer corresponding to a difference between the pre-encoded image signal and an image signal decoded from the encoded image signal; determining a target amount of codes allocated for encoding the residual signal based on the residual signal, an amount of codes allocated to an objective layer, and an image signal of the objective layer; encoding the residual signal based on the determined target amount of codes; revising the amount of codes allocated to the objective layer based on the target amount of codes; and encoding the image signal of the objective layer based on the revised amount.
    Type: Grant
    Filed: October 4, 2005
    Date of Patent: August 17, 2010
    Assignee: Nippon Telegraph and Telephone Corporation
    Inventors: Yukihiro Bandou, Seishi Takamura, Yoshiyuki Yashima
  • Publication number: 20100189177
    Abstract: A video encoding apparatus used in encoding of a multi-viewpoint image. The apparatus generates a synthetic image for a camera used for obtaining an encoding target image, by using an already-encoded reference camera image having a viewpoint different from the viewpoint of the camera used for obtaining the encoding target image, and disparity information between the reference camera image and the encoding target image, thereby encoding the encoding target image. A predicted image for a differential image between an input image of an encoding target area to be encoded and the synthetic image generated therefor is generated, and a predicted image for the encoding target area, which is represented by the sum of the predicted differential image and the synthetic image for the encoding target area, is generated. A prediction residual represented by a difference between the predicted image for the encoding target area and the encoding target image of the encoding target area is encoded.
    Type: Application
    Filed: June 23, 2008
    Publication date: July 29, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Hideaki Kimata, Kazuto Kamikura, Yoshiyuki Yashima
  • Publication number: 20100118939
    Abstract: When video images are processed by applying temporal or spatial interframe prediction encoding to each divided area, and generating a predicted image of a processing target area based on a reference frame of the processing target area and reference information which indicates a predicted target position of the processing target area in the reference frame, predicted reference information is generated as predicted information of the reference information. Reference information used when an area adjacent to the processing target area was processed is determined as predicted reference information prediction data used for predicting the reference information of the processing target area. Reference area reference information is generated using one or more pieces of reference information used when a reference area indicated by the prediction data was processed. The predicted reference information prediction data is updated using the reference area reference information.
    Type: Application
    Filed: October 23, 2007
    Publication date: May 13, 2010
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Shinya Shimizu, Kazuto Kamikura, Yoshiyuki Yashima, Hideaki Kimata
  • Publication number: 20100086222
    Abstract: An image encoding method includes determining and encoding global parallax data, which is probably correct parallax data in consideration of the epipolar geometry constraint between a camera of a standard viewpoint, selected from the entire multi-viewpoint images, and the images obtained from all the other viewpoints; generating base parallax data for each camera as a viewpoint other than the standard viewpoint, based on the global parallax data and the camera parameters, where the base parallax data is probably correct parallax data in consideration of the epipolar geometry constraint between the image of the relevant camera and the images of all the other cameras; determining and encoding correction parallax data used for correcting the base parallax data, so as to indicate parallax data between the image of the relevant camera and an already-encoded reference viewpoint image used for parallax compensation; and encoding the image of the relevant camera by using parallax data obtained by correcting the base parallax data.
    Type: Application
    Filed: September 18, 2007
    Publication date: April 8, 2010
    Applicant: Nippon Telegraph and Telephone Corporation
    Inventors: Shinya Shimizu, Masaki Kitahara, Kazuto Kamikura, Yoshiyuki Yashima