Patents by Inventor Taro Imagawa

Taro Imagawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20100208104
    Abstract: An image processing apparatus (30) for generating an image with a resolution beyond the diffraction limit includes: an image input unit (101) which receives red image data and green image data, which represent images of an object by red light and green light, respectively, and blue image data, which represents an image of the object by blue light having a wavelength shorter than those of the red and the green light; and an image processing unit (103) which corrects the red and the green image data by adding thereto a spatial high-frequency component contained in the blue image data. The image input unit (101) receives, as the blue image data, image data generated by light-receiving elements provided at intervals shorter than the size of the smallest area on which the red and the green light can converge. (A conceptual sketch of this high-frequency transfer follows this entry.)
    Type: Application
    Filed: June 17, 2009
    Publication date: August 19, 2010
    Applicant: PANASONIC CORPORATION
    Inventors: Taro Imagawa, Takeo Azuma
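The correction described in this entry amounts to injecting the spatial high-frequency detail captured by the finer-pitch blue sensor into the coarser red and green channels. Below is a minimal sketch of that idea, assuming a uniform pitch ratio, bilinear upsampling, and a Gaussian low-pass filter; none of these specific choices come from the publication itself.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, zoom

def transfer_blue_detail(red, green, blue, sigma=1.5):
    """Add the high-frequency component of a finer-pitch blue image to
    coarser red and green images (illustrative sketch only)."""
    # Upsample red/green to the blue sampling grid (assumed same ratio in
    # both dimensions, e.g. blue sampled at half the red/green pitch).
    scale = blue.shape[0] / red.shape[0]
    red_up = zoom(red, scale, order=1)
    green_up = zoom(green, scale, order=1)
    # Spatial high-frequency component of blue: blue minus its low-pass version.
    blue_high = blue - gaussian_filter(blue, sigma)
    # Correct red and green by adding the blue high-frequency detail.
    return red_up + blue_high, green_up + blue_high
```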
  • Publication number: 20100194911
    Abstract: The shooting, recording and playback system 100 of the present invention receives incoming light 101, stores an image shot, and then subjects the image shot to be reproduced to resolution-raising processing, thereby outputting RGB images with high spatial resolution and high temporal resolution (ROUT, GOUT, BOUT) 102. The system 100 includes a shooting section 103, a color separating section 104, an R imaging sensor section 105, a G imaging sensor section 106, a B imaging sensor section 107, an image shot storage section 108, an image shot writing section 109, a memory section 110, an image shot reading section 111, a spatial resolution upconverter section 112, a temporal resolution upconverter section 113, an output section 114, and a line recognition signal generating section 185. The system can obtain image data with high spatial resolution and high temporal resolution without complicating the camera configuration and without decreasing the optical efficiency.
    Type: Application
    Filed: April 8, 2010
    Publication date: August 5, 2010
    Inventors: Hideto Motomura, Takeo Azuma, Kunio Nobori, Taro Imagawa
  • Publication number: 20100157149
    Abstract: An image processing apparatus and method are provided to generate a moving picture with a high resolution, a high frame rate and a high SNR by eliminating the decrease in SNR even when the intensity of the incoming light has been halved by a half mirror. The apparatus generates a multi-color moving picture based on first and second moving pictures, which respectively have first and second frame rates (the second rate being higher than the first) and are composed of pictures representing a first color component and pictures representing a second color component different from the first color component. The resolution of the second moving picture is equal to or lower than that of the first moving picture.
    Type: Application
    Filed: May 13, 2008
    Publication date: June 24, 2010
    Inventors: Kunio Nobori, Takeo Azuma, Hideto Motomura, Taro Imagawa
  • Publication number: 20100149381
    Abstract: The shooting, recording and playback system 100 of the present invention receives incoming light 101, stores an image shot, and then subjects the image shot to be reproduced to resolution-raising processing, thereby outputting RGB images with high spatial resolution and high temporal resolution (ROUT, GOUT, BOUT) 102. The system 100 includes a shooting section 103, a color separating section 104, an R imaging sensor section 105, a G imaging sensor section 106, a B imaging sensor section 107, an image shot storage section 108, an image shot writing section 109, a memory section 110, an image shot reading section 111, a spatial resolution upconverter section 112, a temporal resolution upconverter section 113, an output section 114, and a line recognition signal generating section 185. The system can obtain image data with high spatial resolution and high temporal resolution without complicating the camera configuration and without decreasing the optical efficiency.
    Type: Application
    Filed: May 9, 2008
    Publication date: June 17, 2010
    Inventors: Hideto Motomura, Takeo Azuma, Kunio Nobori, Taro Imagawa
  • Publication number: 20100103297
    Abstract: An image data generator 100 according to the present invention includes a shooting section 103, a color separating section 104, R, G and B imaging sensor sections 105, 106 and 107, an image shot storage section 108, an image shot writing section 109, a spatial frequency calculating section 186, a color channel range distribution calculating section 187, a color channel range distribution information writing section 188, a memory section 110, a shooting information reading section 111, a super-resolution section 240, an output section 114 and a line recognition signal generating section 185. This image data generator can obtain high-spatial-resolution, high-temporal-resolution image data with the same camera configuration as a conventional color camera and without decreasing the optical efficiency.
    Type: Application
    Filed: December 18, 2008
    Publication date: April 29, 2010
    Inventors: Hideto Motomura, Takeo Azuma, Kunio Nobori, Taro Imagawa
  • Patent number: 7702019
    Abstract: A moving object detection device including a spatiotemporal data generation unit 120 generating time series data, which arranges data indicating a moving object along a temporal axis, based on an output from a camera 100, an inter-leg information unit 140 extracting, based on the generated time series data, inter-leg information, which is information regarding a temporal change in an inter-leg area arising from movement of a moving object that has two or more legs, and a periodicity analysis unit 150 analyzing a periodicity within the extracted inter-leg information. Further, the moving object detection device includes a moving object detection unit 160 generating, from the analyzed periodicity, movement information that includes the presence or absence of a moving object. (A sketch of this periodicity analysis follows this entry.)
    Type: Grant
    Filed: March 1, 2006
    Date of Patent: April 20, 2010
    Assignee: Panasonic Corporation
    Inventors: Masahiro Iwasaki, Toshiki Kindo, Taro Imagawa
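The periodicity analysis at the heart of this detector can be sketched as a frequency-domain test on a time series of inter-leg measurements (for example, the width between the legs in each frame). The frame rate, the walking-frequency band, and the energy threshold below are assumptions chosen for illustration; they are not taken from the patent.

```python
import numpy as np

def detect_gait_periodicity(inter_leg_width, fps=30.0, band=(0.5, 3.0), threshold=0.4):
    """Return True when the inter-leg time series shows a dominant periodic
    component in an assumed walking-frequency band (illustrative sketch)."""
    x = np.asarray(inter_leg_width, dtype=float)
    x = x - x.mean()                          # remove DC so the spectrum reflects oscillation
    spectrum = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = spectrum[1:].sum()                # ignore the (near-zero) DC bin
    if not in_band.any() or total == 0:
        return False
    # Report a legged moving object when a large share of the signal
    # energy falls inside the assumed walking-frequency band.
    return spectrum[in_band].sum() / total >= threshold
```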
  • Publication number: 20100013948
    Abstract: A multi-color image processor according to the present invention includes an image capturing section 101 and a signal processing section 104. The image capturing section 101 includes a color separating section 10 for separating visible radiation into at least two light rays with first- and second-color components, respectively, and first and second imagers 12 and 14 that receive the light rays with the first- and second-color components. The image capturing section 101 obtains images with the first- and second-color components by making the first imager 12 read a decimated subset of its pixels while making the second imager 14 read every pixel, on a field-by-field basis over the respective pixel arrangements of the first and second imagers 12 and 14. (A sketch of this readout pattern follows this entry.)
    Type: Application
    Filed: August 29, 2008
    Publication date: January 21, 2010
    Applicant: PANASONIC CORPORATION
    Inventors: Takeo Azuma, Kunio Nobori, Taro Imagawa, Katsuhiro Kanamori
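The readout scheme in this entry, with one imager reading only a decimated subset of its pixels each field while the other reads every pixel, can be sketched as index selection on the two sensor arrays. The alternate-row decimation pattern below is purely an assumption for illustration; the publication summary does not commit to this particular layout.

```python
import numpy as np

def read_field(first_imager, second_imager, field):
    """Illustrative field readout: the first imager is decimated (alternate
    rows, offset by field parity) while the second imager is read in full."""
    rows = np.arange(field % 2, first_imager.shape[0], 2)  # assumed decimation pattern
    decimated = first_imager[rows, :]      # subset of pixels from the first imager
    full = second_imager.copy()            # every pixel from the second imager
    return decimated, full
```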
  • Patent number: 7613325
    Abstract: The present invention provides a human detection device which detects a human contained in a moving picture, and includes the following: a spatiotemporal volume generation unit which generates a three-dimensional spatiotemporal image in which frame images that make up the moving picture in which a human has been filmed are arranged along a temporal axis; a spatiotemporal fragment extraction unit which extracts a real image spatiotemporal fragment, which is an image appearing in a cut plane or cut fragment when the three-dimensional spatiotemporal image is cut, from the generated three-dimensional spatiotemporal image; a human body region movement model spatiotemporal fragment output unit which generates and outputs, based on a human movement model which defines a characteristic of the movement of a human, a human body region movement spatiotemporal fragment, which is a spatiotemporal fragment obtained from a movement by the human movement model; a spatiotemporal fragment verification unit which verifies betwe
    Type: Grant
    Filed: December 29, 2005
    Date of Patent: November 3, 2009
    Assignee: Panasonic Corporation
    Inventors: Masahiro Iwasaki, Taro Imagawa, Kenji Nagao, Etsuko Nagao, legal representative
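The spatiotemporal volume and fragment extraction described in the preceding entry can be illustrated by stacking frames along a temporal axis and slicing the resulting 3-D array. The fixed-row cut below, which yields an x-t image in which walking legs trace a braided pattern, is one simple choice of cut plane; the patent covers more general cut planes and fragments.

```python
import numpy as np

def build_spatiotemporal_volume(frames):
    """Stack 2-D grayscale frames into a (T, H, W) spatiotemporal volume."""
    return np.stack(frames, axis=0)

def extract_fragment(volume, row):
    """Cut the volume with a plane of constant image row, producing a real
    image spatiotemporal fragment of shape (T, W)."""
    return volume[:, row, :]
```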
  • Publication number: 20090263044
    Abstract: An image generation apparatus generates a new video sequence with a high S/N ratio and suppressed motion blurs, from an original video sequence and a still image which are generated by capturing the same dark, moving object.
    Type: Application
    Filed: October 11, 2007
    Publication date: October 22, 2009
    Applicant: MATSUSHITA ELECTRIC INDUSTRIAL CO., LTD.
    Inventors: Taro Imagawa, Kunio Nobori, Takeo Azuma
  • Patent number: 7596177
    Abstract: An image generation apparatus includes an image receiving unit which receives a first video sequence including frames having a first resolution and a second video sequence including frames having a second resolution which is higher than the first resolution. Each frame of the first video sequence is obtained with a first exposure time, and each frame of the second video sequence is obtained with a second exposure time which is longer than the first exposure time. The image generation apparatus also includes an image integration unit which generates, from the first video sequence and the second video sequence, a new video sequence including frames having a resolution which is equal to or higher than the second resolution, at a frame rate which is equal to or higher than a frame rate of the first video sequence.
    Type: Grant
    Filed: April 17, 2007
    Date of Patent: September 29, 2009
    Assignee: Panasonic Corporation
    Inventors: Taro Imagawa, Takeo Azuma
  • Patent number: 7570283
    Abstract: An image capturing apparatus (100) on the image taker's side has an image capturing unit (101) which captures an image using a CCD sensor, a CMOS sensor or the like, an image capturing restriction signal receiving unit (102) which receives an image capturing restriction signal transmitted by a communication terminal (200), and an image capturing restriction unit (103) which restricts the image capturing performed by the image capturing unit (101) according to a request in the image capturing restriction signal. Further, the communication terminal (200) on the subject's side has an image capturing restriction signal generation unit (201) which transmits the image capturing restriction signal requesting that the image capturing be restricted. (A sketch of this restriction protocol follows this entry.)
    Type: Grant
    Filed: October 12, 2005
    Date of Patent: August 4, 2009
    Assignee: Panasonic Corporation
    Inventors: Satoshi Sato, Katsuji Aoki, Kunio Nobori, Jun Ozawa, Taro Imagawa
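The interaction in this entry is essentially a small protocol: the subject's terminal transmits a restriction request, and the apparatus gates its own capture path when such a request is received. The sketch below assumes a simple dictionary-style message and suppresses output entirely when restricted; the actual signal format and restriction behaviour are not specified in this summary.

```python
class ImageCapturingApparatus:
    """Sketch of an image capturing unit whose output is gated by
    restriction signals received from a subject's communication terminal."""

    def __init__(self):
        self.restricted = False

    def receive_restriction_signal(self, signal):
        # Assumed message format: a request field asking for restriction.
        if signal.get("request") == "restrict":
            self.restricted = True

    def capture(self, sensor_frame):
        # One possible restriction: return no image while restricted
        # (masking or degrading the subject region would be alternatives).
        return None if self.restricted else sensor_frame
```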
  • Publication number: 20090167909
    Abstract: Provided is an image generation apparatus that generates a new video sequence from video sequences including image regions for which corresponding point detection and motion estimation cannot be performed correctly.
    Type: Application
    Filed: October 24, 2007
    Publication date: July 2, 2009
    Inventors: Taro Imagawa, Takeo Azuma
  • Patent number: 7397931
    Abstract: A human identification apparatus, which can identify human images even in temporally distant frames or frames shot with different cameras, judges whether or not persons represented by human images respectively included in different image sequences are the same person, and includes: a walking posture detecting unit which detects first and second walking sequences, each sequence being an image sequence indicating a walking state of respective first and second persons respectively included in the different image sequences; a walking state estimating unit which estimates a transition state of a walking posture in the periodic walking movement of the first person at a time or in a position different from a time or a position of the walking sequence of the first person; and a judging unit which verifies whether or not the estimated transition state of the walking posture of the first person matches the transition state of the walking posture of the second person, and judges that the first per
    Type: Grant
    Filed: January 31, 2006
    Date of Patent: July 8, 2008
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Taro Imagawa, Masahiro Iwasaki
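The estimation step described in the preceding entry, predicting the transition state (phase) of a periodic walking posture at a different time and checking it against another observed sequence, can be sketched with a constant-cadence phase model. Treating gait phase as linear in time and the matching tolerance of 0.1 cycle are assumptions for illustration only.

```python
import numpy as np

def estimate_phase(times, phases, target_time):
    """Fit gait phase (in cycles, [0, 1)) as a linear function of time,
    i.e. constant cadence, and extrapolate it to target_time."""
    unwrapped = np.unwrap(np.asarray(phases) * 2 * np.pi) / (2 * np.pi)
    slope, intercept = np.polyfit(times, unwrapped, 1)
    return (slope * target_time + intercept) % 1.0

def same_walker(estimated_phase, observed_phase, tolerance=0.1):
    """Judge the two observations consistent when the extrapolated phase of
    the first person matches the observed phase of the second."""
    diff = abs(estimated_phase - observed_phase)
    return min(diff, 1.0 - diff) <= tolerance
```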
  • Publication number: 20070189386
    Abstract: The image generation apparatus includes: an image receiving unit which receives a first video sequence including frames having a first resolution and a second video sequence including frames having a second resolution which is higher than the first resolution, each frame of the first video sequence being obtained with a first exposure time, and each frame of the second video sequence being obtained with a second exposure time which is longer than the first exposure time; and an image integration unit which generates, from the first video sequence and the second video sequence, a new video sequence including frames having a resolution which is equal to or higher than the second resolution, at a frame rate which is equal to or higher than a frame rate of the first video sequence, by reducing a difference between a value of each frame of the second video sequence and a sum of values of frames of the new video sequence which are included within an exposure period of the frame of the second video sequence. (The exposure-sum constraint is sketched after this entry.)
    Type: Application
    Filed: April 17, 2007
    Publication date: August 16, 2007
    Inventors: Taro Imagawa, Takeo Azuma
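The key constraint stated in this entry, that each frame of the second (long-exposure, high-resolution) sequence should agree with the sum of the new frames falling inside its exposure period, can be written directly as a reconstruction residual. The sketch below only evaluates that residual for a candidate new sequence; the apparatus described here reduces it (together with consistency with the first, high-frame-rate sequence) when generating the output.

```python
import numpy as np

def exposure_residual(new_frames, long_exposure_frame, exposure_indices):
    """Difference between one frame of the second (long-exposure) sequence and
    the sum of the candidate new frames inside its exposure period."""
    accumulated = sum(new_frames[i] for i in exposure_indices)
    return np.linalg.norm(accumulated - long_exposure_frame)
```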
  • Patent number: 7180050
    Abstract: A tag communication section (12) receives tag information transmitted from an information tag (11) attached to a person (P) to be detected. An attribute lookup section (15) looks up an attribute storage section (16) using an ID included in the tag information to obtain attribute information such as the height of the person (P). A target detection section (14) specifies the position, posture and the like of the person (P) in an image obtained from an imaging section (13) using the attribute information. (A sketch of this attribute-assisted detection follows this entry.)
    Type: Grant
    Filed: April 25, 2003
    Date of Patent: February 20, 2007
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Taro Imagawa, Masamichi Nakagawa, Takeo Azuma, Shusaku Okamoto
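The flow in this entry, reading an ID from the information tag, looking up stored attributes such as height, and using them to constrain detection in the camera image, can be sketched as a table lookup feeding a detector. The attribute table, the pixels-per-metre factor, and the +/-20% size window below are illustrative assumptions, not values from the patent.

```python
# Assumed attribute storage keyed by tag ID (illustration only).
ATTRIBUTES = {"tag-001": {"name": "P", "height_m": 1.70}}

def expected_pixel_height(tag_id, pixels_per_metre=100.0):
    """Convert the stored physical height of the tagged person into an
    expected size in the image, narrowing the detector's search."""
    return ATTRIBUTES[tag_id]["height_m"] * pixels_per_metre

def plausible_detection(tag_id, detection_height_px):
    """Accept a candidate detection only if its height falls within an
    assumed +/-20% window around the expected size for this person."""
    expected = expected_pixel_height(tag_id)
    return 0.8 * expected <= detection_height_px <= 1.2 * expected
```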
  • Publication number: 20060262857
    Abstract: The present invention provides a moving object detection device that includes: a spatiotemporal data generation unit 120 which generates time series data which arranges data indicating a moving object along a temporal axis, based on an output from a camera 100; an inter-leg information unit 140 which extracts, based on the generated time series data, inter-leg information, which is information regarding a temporal change in an inter-leg area arising from movement of a moving object that has two or more legs; a periodicity analysis unit 150 which analyzes a periodicity within the extracted inter-leg information; and a moving object detection unit 160 which generates, from the analyzed periodicity, movement information that includes the presence or absence of a moving object.
    Type: Application
    Filed: March 1, 2006
    Publication date: November 23, 2006
    Inventors: Masahiro Iwasaki, Toshiki Kindo, Taro Imagawa
  • Patent number: 7130487
    Abstract: The present invention relates to a retrieval method for searching, for a second character element string, a first character element string obtained by subjecting a character string to character recognition. The first character element string includes a first character element and the second character element string includes a second character element. A distance related to the similarity between the first character element and the second character element is predetermined for the pair. The retrieval method comprises the steps of comparing the distance with a predetermined reference distance, and determining whether the second character element matches the first character element based on a result of that comparison. (A sketch of this distance-based matching follows this entry.)
    Type: Grant
    Filed: December 15, 1999
    Date of Patent: October 31, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Taro Imagawa, Yoshihiko Matsukawa, Kenji Kondo, Tsuyoshi Mekata
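The matching rule in this entry, under which a second character element matches a first one when their predefined similarity distance is within a reference distance, can be sketched with a small distance table. The table entries, the fallback distance for unlisted pairs, and the reference distance of 1 are illustrative values only.

```python
# Assumed pairwise distances between confusable character elements
# (smaller means more similar); unlisted pairs are treated as far apart.
DISTANCES = {("O", "0"): 1, ("l", "1"): 1, ("S", "5"): 2}

def element_distance(a, b):
    if a == b:
        return 0
    return DISTANCES.get((a, b), DISTANCES.get((b, a), 10))

def elements_match(recognized, query, reference_distance=1):
    """A recognized character element matches a query element when their
    predetermined distance does not exceed the reference distance."""
    return element_distance(recognized, query) <= reference_distance

def string_matches(recognized, query, reference_distance=1):
    """Sketch: the recognized string matches the query when every aligned
    character element matches under the distance criterion."""
    return len(recognized) == len(query) and all(
        elements_match(r, q, reference_distance) for r, q in zip(recognized, query)
    )
```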
  • Publication number: 20060195199
    Abstract: A monitoring device includes: a moving object image generation unit which receives, on a frame-by-frame basis, an overall image captured by a camera and performs inter-frame differential processing on the overall image or background differential processing between the overall image and a previously prepared background image; a density calculation unit which transforms the differential-processed image (a moving object image) into one-dimensional information and calculates, through frequency analysis, a density indicating the degree of crowding of moving objects or of a crowd; a model generation unit which calculates a reference density (a model value) of the moving objects or of the crowd based on the density of a predetermined date and time; and a situation determination unit which compares the density at the current time with the reference density, determines whether or not the density at the current time is different from the reference density, generates a determination result, and provides the re
    Type: Application
    Filed: April 17, 2006
    Publication date: August 31, 2006
    Inventors: Masahiro Iwasaki, Taro Imagawa
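The processing chain in the preceding entry, frame differencing, projection to one-dimensional information, frequency analysis to obtain a density measure, and comparison with a reference value for the date and time, can be sketched as follows. The column-wise projection, the spectral-energy density measure, and the 1.5x deviation threshold are assumptions for illustration only.

```python
import numpy as np

def density_from_frames(prev_frame, curr_frame):
    """Inter-frame difference -> one-dimensional column profile -> spectral
    energy, used here as a simple stand-in for the density measure."""
    moving = np.abs(curr_frame.astype(float) - prev_frame.astype(float))
    profile = moving.sum(axis=0)                 # project rows away: 1-D information
    spectrum = np.abs(np.fft.rfft(profile - profile.mean()))
    return spectrum[1:].sum()                    # energy of the spatial variation

def unusual_situation(current_density, reference_density, factor=1.5):
    """Flag the scene when the current density deviates from the reference
    (model) density for this date and time by more than the assumed factor."""
    return (current_density > factor * reference_density or
            current_density < reference_density / factor)
```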
  • Publication number: 20060120564
    Abstract: A human identification apparatus, which can identify human images even in temporally distant frames or frames shot with different cameras, judges whether or not persons represented by human images respectively included in different image sequences are the same person, and includes: a walking posture detecting unit which detects first and second walking sequences, each sequence being an image sequence indicating a walking state of respective first and second persons respectively included in the different image sequences; a walking state estimating unit which estimates a transition state of a walking posture in the periodic walking movement of the first person at a time or in a position different from a time or a position of the walking sequence of the first person; and a judging unit which verifies whether or not the estimated transition state of the walking posture of the first person matches the transition state of the walking posture of the second person, and judges that the first per
    Type: Application
    Filed: January 31, 2006
    Publication date: June 8, 2006
    Inventors: Taro Imagawa, Masahiro Iwasaki
  • Publication number: 20060115116
    Abstract: The present invention provides a human detection device which detects a human contained in a moving picture, and includes the following: a spatiotemporal volume generation unit which generates a three-dimensional spatiotemporal image in which frame images that make up the moving picture in which a human has been filmed are arranged along a temporal axis; a spatiotemporal fragment extraction unit which extracts a real image spatiotemporal fragment, which is an image appearing in a cut plane or cut fragment when the three-dimensional spatiotemporal image is cut, from the generated three-dimensional spatiotemporal image; a human body region movement model spatiotemporal fragment output unit which generates and outputs, based on a human movement model which defines a characteristic of the movement of a human, a human body region movement spatiotemporal fragment, which is a spatiotemporal fragment obtained from a movement by the human movement model; a spatiotemporal fragment verification unit which verifies betwe
    Type: Application
    Filed: December 29, 2005
    Publication date: June 1, 2006
    Inventors: Masahiro Iwasaki, Taro Imagawa, Kenji Nagao, Etsuko Nagao