Patents by Inventor Tomoko Yasunari

Tomoko Yasunari has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7050502
    Abstract: A method and apparatus perform motion estimation under a motion estimation mode suited to the amount of motion within each E-block, detecting a motion vector with a small amount of computation. A block division section divides a frame to be encoded into E-blocks of a predetermined pixel size. For each target E-block, a motion estimation mode detection section relies on a past motion vector of a predetermined block to predict an amount of motion, and determines, from a plurality of predefined motion estimation modes, the mode whose search area enables detection of the predicted amount of motion. If the predicted amount of motion is small, a mode defining a narrow search area and a fine search resolution is selected; if it is large, a mode defining a broad search area and a coarse search resolution is selected. (A sketch of this mode selection is given after the listing.)
    Type: Grant
    Filed: September 16, 2002
    Date of Patent: May 23, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Tomoko Yasunari, Shinya Kadono, Satoshi Kondo
  • Publication number: 20040105589
    Abstract: In conventional moving-picture compression coding apparatuses, an appropriate motion vector cannot be detected when an image contains large motion, and it is also difficult to set an appropriate motion vector detection area.
    Type: Application
    Filed: January 20, 2004
    Publication date: June 3, 2004
    Inventors: Makoto Kawaharada, Tomoko Yasunari
  • Patent number: 6697430
    Abstract: A coding information memory is provided for storing the coding modes that have been applied to respective compressed blocks. When a current block to be coded is input, coding distortion is estimated for the candidate blocks corresponding to multiple coding modes. At the same time, by reference to the coding information memory, the frequencies of the coding modes that have been applied to a plurality of blocks, including the block located at the same position as the current block in a compressed frame, are counted. The coding mode for the current block is then determined by weighting the estimated coding distortion so that the weighted distortion is inversely proportional to the counted frequencies, and by selecting the candidate block with the smallest weighted coding distortion. (A sketch of this weighting step is given after the listing.)
    Type: Grant
    Filed: May 17, 2000
    Date of Patent: February 24, 2004
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Tomoko Yasunari, Hideki Fukuda
  • Patent number: 6591015
    Abstract: A video coding method according to the present invention is adapted to predictively code each target block within a target frame relative to a reference frame. The method includes the steps of: a) calculating an estimate of the target block based on pixel values within the target block; b) calculating respective correction values for the target block and a predicted block associated with the target block, the predicted block being generated from the reference frame by motion compensation; c) correcting the pixel values within the target and predicted blocks using the respective correction values, and calculating a predicted error based on the difference between each pixel value within the corrected target block and the associated pixel value within the corrected predicted block; d) determining a coding mode based on the estimate of the target block and the predicted error; and e) coding the target block in accordance with the determined coding mode. (A sketch of these steps is given after the listing.)
    Type: Grant
    Filed: July 28, 1999
    Date of Patent: July 8, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Tomoko Yasunari, Hideki Fukuda
  • Publication number: 20030053544
    Abstract: A method and apparatus perform motion estimation under a motion estimation mode suited to the amount of motion within each E-block, detecting a motion vector with a small amount of computation. A block division section 102 divides a frame to be encoded into E-blocks of a predetermined pixel size. For each target E-block, a motion estimation mode detection section 105 relies on a past motion vector of a predetermined block to predict an amount of motion, and determines, from a plurality of predefined motion estimation modes, the mode whose search area enables detection of the predicted amount of motion. If the predicted amount of motion is small, a mode defining a narrow search area and a fine search resolution is selected; if it is large, a mode defining a broad search area and a coarse search resolution is selected.
    Type: Application
    Filed: September 16, 2002
    Publication date: March 20, 2003
    Inventors: Tomoko Yasunari, Shinya Kadono, Satoshi Kondo
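
The sketches below are illustrative only; they are reconstructed from the abstracts above and are not the patented implementations.

For patent 7050502 (and the corresponding publication 20030053544), the key step is choosing a search area and search resolution from a predicted amount of motion before running block matching. A minimal Python sketch, in which the mode table, window sizes, step sizes, and the SAD-based matching are all assumptions for illustration:

    from dataclasses import dataclass

    @dataclass
    class EstimationMode:
        name: str
        search_range: int  # half-width of the square search window, in pixels
        step: int          # sampling step inside the window (1 = fine, larger = coarse)

    # Hypothetical mode table; the patent does not specify concrete values.
    MODES = [
        EstimationMode("narrow_fine", search_range=4, step=1),
        EstimationMode("medium", search_range=16, step=2),
        EstimationMode("broad_coarse", search_range=32, step=4),
    ]

    def select_mode(predicted_motion: float) -> EstimationMode:
        # Smallest window that still covers the predicted displacement;
        # small motion gets a narrow, finely sampled search.
        for mode in MODES:
            if predicted_motion <= mode.search_range:
                return mode
        return MODES[-1]

    def estimate_motion(current, reference, bx, by, size, predicted_motion):
        # Block matching for one E-block at (bx, by); the caller must keep
        # every displaced position inside the (padded) reference frame.
        mode = select_mode(predicted_motion)
        best, best_sad = (0, 0), float("inf")
        for dy in range(-mode.search_range, mode.search_range + 1, mode.step):
            for dx in range(-mode.search_range, mode.search_range + 1, mode.step):
                sad = sum(
                    abs(current[by + y][bx + x] - reference[by + y + dy][bx + x + dx])
                    for y in range(size) for x in range(size)
                )
                if sad < best_sad:
                    best_sad, best = sad, (dx, dy)
        return best, mode.name

Called once per E-block, with predicted_motion derived from a previously detected motion vector, this reproduces the trade-off described in the abstract: small predicted motion gives a narrow, finely sampled search, while large predicted motion gives a broad, coarsely sampled one.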
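
For patent 6697430, the selection step weights each candidate's estimated distortion by how often its coding mode was used for the co-located block and nearby blocks in an already-compressed frame. A minimal sketch, in which the dictionary interface and the +1 smoothing are assumptions (distortion estimation itself is outside the abstract):

    from collections import Counter

    def choose_coding_mode(candidate_distortion, neighbour_modes):
        # candidate_distortion: {mode: estimated coding distortion of its candidate block}
        # neighbour_modes: modes already applied to the co-located block and nearby
        # blocks in a compressed frame (read from the coding information memory)
        freq = Counter(neighbour_modes)
        best_mode, best_score = None, float("inf")
        for mode, distortion in candidate_distortion.items():
            # Weighted distortion is inversely proportional to the counted frequency,
            # so modes common in the neighbourhood are favoured; +1 avoids division by zero.
            score = distortion / (freq[mode] + 1)
            if score < best_score:
                best_mode, best_score = mode, score
        return best_mode

For example, choose_coding_mode({"intra": 120.0, "inter": 90.0, "skip": 95.0}, ["inter", "inter", "skip"]) returns "inter", since that mode combines low estimated distortion with a high local frequency.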
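
For patent 6591015, the abstract leaves the exact measures open; the sketch below uses the block mean as the correction value and sums of absolute differences for both the estimate and the predicted error, which are assumptions for illustration:

    def block_mean(block):
        return sum(sum(row) for row in block) / (len(block) * len(block[0]))

    def coding_mode_decision(target, predicted):
        # (a) estimate of the target block from its own pixel values
        #     (spread around its mean, a proxy for intra-coding cost).
        t_mean = block_mean(target)
        estimate = sum(abs(p - t_mean) for row in target for p in row)

        # (b) correction values for the target block and the motion-compensated
        #     predicted block (here: their means).
        p_mean = block_mean(predicted)

        # (c) predicted error between the mean-corrected blocks.
        error = sum(
            abs((t - t_mean) - (p - p_mean))
            for t_row, p_row in zip(target, predicted)
            for t, p in zip(t_row, p_row)
        )

        # (d) coding mode decision: intra if coding the block on its own looks
        #     cheaper than coding the corrected prediction error, else inter.
        return "intra" if estimate < error else "inter"

One plausible reading of the correction step is that it makes the mode decision insensitive to a uniform brightness change between the target and reference frames, since such a change is removed before the predicted error is measured.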