Patents by Inventor Yuji Takata

Yuji Takata has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 7227996
    Abstract: An image processing method for detecting an object from an input image using a template image, including inputting a specified image with respect to both the template image and the input image, calculating an edge normal direction vector of said specified image, generating an evaluation vector from said edge normal direction vector, subjecting the evaluation vector to orthogonal transformation, performing a product-sum calculation of the corresponding spectral data for the orthogonally transformed evaluation vectors obtained for the template image and the input image, and subjecting the result to inverse orthogonal transformation to generate a similarity value map. The formula of the similarity value, the orthogonal transformation, and the inverse orthogonal transformation each have linearity.
    Type: Grant
    Filed: January 25, 2002
    Date of Patent: June 5, 2007
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Kazuyuki Imagawa, Tetsuya Yoshimura, Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata
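The abstract above (U.S. 7,227,996) leaves the orthogonal transformation unspecified; because the similarity formula is linear, a Fourier transform is a natural choice. The following minimal NumPy sketch illustrates the general idea rather than the patented formulation: it builds unit edge-normal vectors for a template and an input image, multiplies their spectra (the product-sum step), and inverse-transforms to obtain a similarity map. All function and variable names are illustrative.

```python
import numpy as np

def edge_normal_vectors(img, eps=1e-6):
    """Unit vectors along the intensity gradient (edge normal) at each pixel."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    mask = mag > eps
    nx = np.where(mask, gx / np.maximum(mag, eps), 0.0)
    ny = np.where(mask, gy / np.maximum(mag, eps), 0.0)
    return nx, ny

def similarity_map(image, template):
    """Correlate edge-normal vector fields of image and template via the FFT.

    The product-sum of the spectra is equivalent to spatial correlation, so one
    inverse FFT yields a similarity value for every template offset at once.
    """
    ix, iy = edge_normal_vectors(image)
    tx, ty = edge_normal_vectors(template)
    H, W = image.shape
    # Zero-pad the template vector fields to the input-image size.
    px = np.zeros((H, W)); px[:template.shape[0], :template.shape[1]] = tx
    py = np.zeros((H, W)); py[:template.shape[0], :template.shape[1]] = ty
    spec = (np.fft.rfft2(ix) * np.conj(np.fft.rfft2(px)) +
            np.fft.rfft2(iy) * np.conj(np.fft.rfft2(py)))
    return np.fft.irfft2(spec, s=(H, W))   # peak marks the best match position

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    image = rng.random((128, 128))
    template = image[40:72, 50:82].copy()               # known location (40, 50)
    sim = similarity_map(image, template)
    print(np.unravel_index(np.argmax(sim), sim.shape))  # expect near (40, 50)
```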
  • Patent number: 7123754
    Abstract: A face detection device includes a face learning dictionary, which holds learned information for discriminating between facial and non-facial images. An image input unit inputs a subject image. An edge image extraction unit extracts an edge image from the subject image. A partial image extraction unit, based on the edge image, extracts partial images that are candidates to contain facial images from the subject image. A face/non-face identification unit references the learning dictionary to identify whether or not each extracted partial image contains a facial image. High-precision face detection that reflects the learned results is thereby performed.
    Type: Grant
    Filed: May 22, 2002
    Date of Patent: October 17, 2006
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Katsuhiro Iwasa, Toshirou Eshima, Naruatsu Baba
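A rough sketch of the pipeline described in U.S. 7,123,754: edge extraction, candidate partial-image extraction, and a final face/non-face decision. The classifier is passed in as a placeholder callable standing in for the face learning dictionary; the window size, stride, and edge-density threshold are illustrative assumptions, not values from the patent.

```python
import numpy as np

def extract_edge_image(img):
    """Simple gradient-magnitude edge image (stand-in for the edge extraction unit)."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def extract_candidates(edge_img, win=24, stride=8, min_edge_density=0.05):
    """Yield window positions whose edge content makes them face candidates."""
    thresh = edge_img.mean() + edge_img.std()
    for top in range(0, edge_img.shape[0] - win + 1, stride):
        for left in range(0, edge_img.shape[1] - win + 1, stride):
            patch = edge_img[top:top + win, left:left + win]
            if (patch > thresh).mean() > min_edge_density:
                yield top, left, win

def detect_faces(img, dictionary_classifier):
    """Run a face/non-face identifier over each candidate partial image.

    `dictionary_classifier` is any callable returning True for a facial patch;
    it stands in for the learned face dictionary.
    """
    edge_img = extract_edge_image(img)
    detections = []
    for top, left, win in extract_candidates(edge_img):
        patch = img[top:top + win, left:left + win]
        if dictionary_classifier(patch):
            detections.append((top, left, win))
    return detections
```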
  • Patent number: 6961446
    Abstract: The present media editing device makes it easy to generate message-carrying media on a communication terminal such as a mobile terminal. A moving image data storage part stores moving image data recorded by a user. A region extraction part extracts any region including the user from the moving image data. A front determination part detects whether or not the user in the extracted region is facing the front. A sound detection part detects the presence or absence of a sound signal of a predetermined level or higher. A frame selection part determines starting and ending frames based on the results output from the front determination part and the sound detection part. An editing part performs, for example, an image conversion process by clipping out the media based on the starting and ending frames thus determined. A transmission data storage part stores the edited media as transmission data.
    Type: Grant
    Filed: September 12, 2001
    Date of Patent: November 1, 2005
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Kazuyuki Imagawa, Yuji Takata, Hideaki Matsuo, Katsuhiro Iwasa, Tetsuya Yoshimura
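The frame-selection step of U.S. 6,961,446 can be pictured as follows: given per-frame front-facing flags and sound levels, choose starting and ending frames where both conditions hold. This is a simplified sketch under that assumption; the sound threshold and the "longest run" rule are illustrative choices, not the patented logic.

```python
from dataclasses import dataclass

@dataclass
class Clip:
    start: int  # index of the starting frame
    end: int    # index of the ending frame (inclusive)

def select_clip(facing_front, sound_level, sound_threshold=0.1):
    """Pick starting/ending frames where the user faces the camera and is speaking.

    Returns the longest run of frames in which both conditions hold, mirroring
    the roles of the front determination part and the sound detection part.
    """
    usable = [f and s >= sound_threshold for f, s in zip(facing_front, sound_level)]
    best, run_start = None, None
    for i, ok in enumerate(usable + [False]):   # sentinel closes a trailing run
        if ok and run_start is None:
            run_start = i
        elif not ok and run_start is not None:
            if best is None or (i - run_start) > (best.end - best.start + 1):
                best = Clip(run_start, i - 1)
            run_start = None
    return best

# Example: frames 2-5 are front-facing with sound, so they form the clip.
print(select_clip([0, 1, 1, 1, 1, 1, 0], [0.0, 0.0, 0.3, 0.4, 0.2, 0.5, 0.0]))
```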
  • Patent number: 6704433
    Abstract: A human tracking device according to the present invention stably tracks a human, with good perception of the distance to the human and high resistance to disturbance. A camera image is divided into a human region and a background region. It is then judged whether or not the human region can be divided into a plurality of blob models corresponding to parts of a human, preferably the head, trunk and legs. When the result of the judgment is “YES”, a plurality of human blob models are produced based on the human region. When the result of the judgment is “NO”, a single human blob model is produced based on the human region. The human is then tracked based on these blob models. In this way, the human can be stably tracked with good perception of the distance to the human and with high resistance to disturbance.
    Type: Grant
    Filed: December 20, 2000
    Date of Patent: March 9, 2004
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata
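A minimal sketch of the blob-model decision in U.S. 6,704,433: if the segmented human region is tall enough, split it into head, trunk, and leg blobs, otherwise fall back to a single whole-body blob. The band proportions and the minimum part height below are illustrative assumptions, not values from the patent.

```python
import numpy as np

def blob_model(mask):
    """Centroid and bounding box of one blob (a simple stand-in for a blob model)."""
    ys, xs = np.nonzero(mask)
    return {"centroid": (ys.mean(), xs.mean()),
            "bbox": (ys.min(), xs.min(), ys.max(), xs.max())}

def human_blobs(human_mask, min_part_height=20):
    """Split the human region into head/trunk/legs blobs when it is tall enough,
    otherwise produce a single whole-body blob."""
    ys, _ = np.nonzero(human_mask)
    if ys.size == 0:
        return []
    top, bottom = ys.min(), ys.max()
    height = bottom - top + 1
    if height < 3 * min_part_height:            # too small to split reliably
        return [blob_model(human_mask)]
    # Divide the region vertically into head, trunk, and legs bands.
    bands = [(top, top + height // 5),                    # head ~ top fifth
             (top + height // 5, top + 3 * height // 5),  # trunk
             (top + 3 * height // 5, bottom + 1)]         # legs
    blobs = []
    for y0, y1 in bands:
        part = np.zeros_like(human_mask)
        part[y0:y1] = human_mask[y0:y1]
        if part.any():
            blobs.append(blob_model(part))
    return blobs
```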
  • Patent number: 6697503
    Abstract: In a broadly applicable face image extraction device and method, which define a face by position and size in target images of varied type at high speed, an edge extraction part 1 extracts an edge part from a target image and generates an edge image. A template storage part 2 stores in advance a template composed of a plurality of concentric shapes varied in size. A voting result storage part 3 has a voting storage region for each size of the concentric shapes of the template so as to store the results obtained by the voting processing carried out by a voting part 4. The voting part 4 carries out the voting processing utilizing the template at each pixel in the edge image, and stores the result obtained thereby in the corresponding voting storage region. After the voting processing, an analysis part 5 performs cluster evaluation based on the voting results stored in the voting storage regions, and then defines the face in the target image by position and size.
    Type: Grant
    Filed: November 30, 2000
    Date of Patent: February 24, 2004
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Naruatsu Baba, Toshiaki Ejima
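The voting step of U.S. 6,697,503 resembles a circular Hough transform: every edge pixel casts votes for candidate face centers at each template size, and the best (position, size) cell is read off afterwards. The sketch below works on a binary edge mask; the radii, the number of ring samples, and the single-maximum readout (rather than full cluster evaluation) are simplifying assumptions.

```python
import numpy as np

def ring_offsets(radius, n_points=64):
    """Pixel offsets on a circle of the given radius (one concentric template shape)."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_points, endpoint=False)
    pts = np.stack([radius * np.sin(angles), radius * np.cos(angles)], axis=1)
    return np.unique(np.round(pts).astype(int), axis=0)

def vote_for_faces(edge_mask, radii=(8, 12, 16, 24)):
    """Cast votes from every edge pixel for each candidate circle size, then pick
    the (position, size) cell with the most votes."""
    h, w = edge_mask.shape
    votes = np.zeros((len(radii), h, w))
    ys, xs = np.nonzero(edge_mask)
    for k, r in enumerate(radii):
        for dy, dx in ring_offsets(r):
            cy, cx = ys + dy, xs + dx
            ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
            np.add.at(votes[k], (cy[ok], cx[ok]), 1)   # accumulate into this size's region
    k, y, x = np.unravel_index(np.argmax(votes), votes.shape)
    return {"center": (y, x), "radius": radii[k], "votes": votes[k, y, x]}
```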
  • Patent number: 6603876
    Abstract: Two pictures of a subject, obtained through objective lenses from different viewpoint locations, are each rotated 90 degrees clockwise by dove prisms and then merged into a single picture by total reflection mirrors. The merged picture is reduced at a predetermined ratio by a condenser lens and projected onto the pickup plane of a CCD. It is therefore possible to obtain stereoscopic pictures having parallax as a single picture, using a single camera, without narrowing the effective fields of view of the right and left pictures taken from the different viewpoints.
    Type: Grant
    Filed: July 9, 1999
    Date of Patent: August 5, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata
  • Publication number: 20030091239
    Abstract: An image series including one or more images is communicated together with control information that relates to the images of the series and governs their disclosure or nondisclosure. The control information relates to a focused region, a non-focused region, or both, each of which includes part or all of an object within the image, and indicates whether or not the region is to be disclosed. The control information also supports disclosure or nondisclosure to specific or unspecified users.
    Type: Application
    Filed: November 12, 2002
    Publication date: May 15, 2003
    Inventors: Kazuyuki Imagawa, Hideaki Matsuo, Yuji Takata, Katsuhiro Iwasa, Takaaki Nishi, Eiji Fukumiya
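One way to picture the control information of publication 2003/0091239 is as per-region records attached to each image and consulted before the image is shown to a given user. The data layout, field names, and masking-by-zeroing below are illustrative assumptions, not the published format.

```python
from dataclasses import dataclass, field
import numpy as np

@dataclass
class RegionControl:
    """Disclosure control for one focused or non-focused region of an image."""
    top: int
    left: int
    bottom: int
    right: int
    disclose: bool = False
    allowed_users: set = field(default_factory=set)  # users who may see the region anyway

def apply_controls(image, controls, user=None):
    """Blank out every region that is not to be disclosed to this user."""
    out = image.copy()
    for c in controls:
        permitted = c.disclose or (user is not None and user in c.allowed_users)
        if not permitted:
            out[c.top:c.bottom, c.left:c.right] = 0
    return out

if __name__ == "__main__":
    frame = np.full((120, 160), 200, dtype=np.uint8)
    controls = [RegionControl(20, 30, 80, 90, disclose=False, allowed_users={"alice"})]
    masked = apply_controls(frame, controls, user="bob")     # region hidden for bob
    visible = apply_controls(frame, controls, user="alice")  # region shown to alice
```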
  • Patent number: 6559954
    Abstract: A 3D shape measurement method and a device using the method eliminate the harmful influence of periodic inconstancy in the phase shift method. Optical intensity patterns following periodic sine-wave functions are projected onto an object while their phases are shifted. Based on the images picked up of the object, the 3D shape of the object is measured. In this method, a plurality of optical intensity patterns following periodic functions with differing wavelengths are projected onto the object so as not to interfere with each other. The least common multiple of the wavelengths of the periodic functions is made larger than the extent subject to periodic inconstancy within the image pickup area.
    Type: Grant
    Filed: December 1, 2000
    Date of Patent: May 6, 2003
    Assignee: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yuji Takata, Hideaki Matsuo, Kazuyuki Imagawa, Takeshi Ohashi
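A compact sketch of the multi-wavelength idea in U.S. 6,559,954: each pattern wavelength yields a wrapped phase, and combining two wavelengths gives an unambiguous measurement over a synthetic period far longer than either wavelength, analogous to the least-common-multiple condition in the abstract. The four-step phase formula is standard; the specific unwrapping arithmetic is an illustrative reconstruction, not the patented method.

```python
import numpy as np

def wrapped_phase(images):
    """Wrapped phase from N phase-shifted intensity images (standard N-step formula)."""
    n = len(images)
    shifts = 2.0 * np.pi * np.arange(n) / n
    num = sum(I * np.sin(s) for I, s in zip(images, shifts))
    den = sum(I * np.cos(s) for I, s in zip(images, shifts))
    return np.arctan2(-num, den)          # in (-pi, pi]

def absolute_height(phase_a, phase_b, wl_a, wl_b):
    """Resolve the 2*pi ambiguity by combining two wavelengths.

    The difference of the two wrapped phases behaves like a single measurement at
    the synthetic wavelength wl_a*wl_b/|wl_a - wl_b|, which stays unambiguous as
    long as the measured range is below that (least-common-multiple-like) period.
    """
    synthetic = wl_a * wl_b / abs(wl_a - wl_b)
    beat = np.mod(phase_a - phase_b, 2.0 * np.pi)       # coarse, unambiguous phase
    coarse = beat / (2.0 * np.pi) * synthetic            # coarse height estimate
    order = np.round(coarse / wl_a - phase_a / (2.0 * np.pi))   # fringe order
    return (order + phase_a / (2.0 * np.pi)) * wl_a      # fine, unwrapped height

if __name__ == "__main__":
    wl_a, wl_b = 8.0, 10.0            # pattern wavelengths; synthetic period is 40
    h = np.array([3.7, 17.2, 33.5])   # true heights within the unambiguous range
    imgs = lambda wl: [1 + np.cos(2 * np.pi * h / wl + 2 * np.pi * k / 4) for k in range(4)]
    print(absolute_height(wrapped_phase(imgs(wl_a)), wrapped_phase(imgs(wl_b)), wl_a, wl_b))
```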
  • Publication number: 20030059117
    Abstract: An image processing device has an edge extraction unit, which inputs an image and generates an edge image; a voting unit, which uses templates to carry out voting on the edge image and generate voting results; a maxima extraction unit, which extracts the maxima among the voting results and generates extraction results; and an object identifying unit, which identifies the position of an object based on the extraction results. The edge extraction unit has a filter processing unit that uses a filter for performing simultaneous noise elimination and edge extraction of the image.
    Type: Application
    Filed: September 26, 2002
    Publication date: March 27, 2003
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata, Kazuyuki Imagawa, Eiji Fukumiya
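The filter in publication 2003/0059117 performs noise elimination and edge extraction at once. A derivative-of-Gaussian filter is a common example of such a combined filter and is used here purely as an illustration; the publication does not say which filter is actually employed.

```python
import numpy as np

def gaussian_kernels(sigma, radius=None):
    """1-D Gaussian and its derivative; the derivative kernel both smooths and
    differentiates, i.e. noise elimination and edge extraction in one filter."""
    radius = radius or int(3 * sigma)
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g
    return g, dg

def derivative_of_gaussian_edges(img, sigma=1.5):
    """Edge-magnitude image using separable derivative-of-Gaussian filtering."""
    g, dg = gaussian_kernels(sigma)
    conv_rows = lambda a, k: np.apply_along_axis(np.convolve, 1, a, k, mode="same")
    conv_cols = lambda a, k: np.apply_along_axis(np.convolve, 0, a, k, mode="same")
    img = img.astype(float)
    gx = conv_cols(conv_rows(img, dg), g)   # differentiate along x, smooth along y
    gy = conv_rows(conv_cols(img, dg), g)   # differentiate along y, smooth along x
    return np.hypot(gx, gy)
```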
  • Publication number: 20020191818
    Abstract: A face detection device includes a face learning dictionary, which holds learned information for discriminating between facial and non-facial images. An image input unit inputs a subject image. An edge image extraction unit extracts an edge image from the subject image. A partial image extraction unit, based on the edge image, extracts partial images that are candidates to contain facial images from the subject image. A face/non-face identification unit references the learning dictionary to identify whether or not each extracted partial image contains a facial image. High-precision face detection that reflects the learned results is thereby performed.
    Type: Application
    Filed: May 22, 2002
    Publication date: December 19, 2002
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Katsuhiro Iwasa, Toshirou Eshima, Naruatsu Baba
  • Publication number: 20020175997
    Abstract: A surveillance recording device using cameras extracts facial images and whole body images of a person from images shot by the cameras. A height is calculated from the whole body images. Retrieval information, including a facial image (best shot), is associated with images in a recording medium and recorded into a database. The recorded data are utilized as an index for later retrieval from the recording medium. Facial images are displayed in a list of thumbnails to make it easy to retrieve a target person on a thumbnail screen. The images are displayed together with a moving image of a target person.
    Type: Application
    Filed: May 21, 2002
    Publication date: November 28, 2002
    Applicant: Matsushita Electric Industrial Co., Ltd.
    Inventors: Yuji Takata, Shogo Hamasaki, Hideaki Matsuo, Kazuyuki Imagawa, Masafumi Yoshizawa
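The retrieval database of publication 2002/0175997 can be sketched as an index of records tying a best-shot facial thumbnail, an estimated height, and a timestamp back to the recording medium. The class and field names below are illustrative assumptions, not the published schema.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class RetrievalRecord:
    """Index entry linking a detected person to the recorded video."""
    person_id: int
    timestamp: float      # seconds into the recording medium
    height_cm: float      # estimated from the whole-body image
    best_shot: bytes      # encoded facial thumbnail for the list display

class SurveillanceIndex:
    """Database of retrieval records used to jump back into the recording."""
    def __init__(self):
        self.records: List[RetrievalRecord] = []

    def add(self, record: RetrievalRecord):
        self.records.append(record)

    def thumbnails(self):
        """Facial thumbnails shown as a list so an operator can spot a target person."""
        return [(r.person_id, r.best_shot) for r in self.records]

    def locate(self, person_id: int):
        """Timestamps at which the selected person appears in the recording."""
        return [r.timestamp for r in self.records if r.person_id == person_id]
```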
  • Publication number: 20020136459
    Abstract: An image processing method for detecting an object from an input image using a template image, including inputting a specified image with respect to both the template image and the input image, calculating an edge normal direction vector of said specified image, generating an evaluation vector from said edge normal direction vector, subjecting the evaluation vector to orthogonal transformation, performing a product-sum calculation of the corresponding spectral data for the orthogonally transformed evaluation vectors obtained for the template image and the input image, and subjecting the result to inverse orthogonal transformation to generate a similarity value map. The formula of the similarity value, the orthogonal transformation, and the inverse orthogonal transformation each have linearity.
    Type: Application
    Filed: January 25, 2002
    Publication date: September 26, 2002
    Inventors: Kazuyuki Imagawa, Tetsuya Yoshimura, Katsuhiro Iwasa, Hideaki Matsuo, Yuji Takata
  • Publication number: 20020031262
    Abstract: The present media editing device makes it easy to generate message-carrying media on a communication terminal such as a mobile terminal. A moving image data storage part 14 stores moving image data recorded by a user. A region extraction part 17 extracts any region including the user from the moving image data. A front determination part 18 detects whether or not the user in the extracted region is facing the front. A sound detection part 19 detects the presence or absence of a sound signal of a predetermined level or higher. A frame selection part 20 determines starting and ending frames based on the results output from the front determination part 18 and the sound detection part 19. An editing part 21 performs, for example, an image conversion process by clipping out the media based on the starting and ending frames thus determined. A transmission data storage part 15 stores the edited media as transmission data.
    Type: Application
    Filed: September 12, 2001
    Publication date: March 14, 2002
    Inventors: Kazuyuki Imagawa, Yuji Takata, Hideaki Matsuo, Katsuhiro Iwasa, Tetsuya Yoshimura
  • Publication number: 20010052928
    Abstract: An image communication terminal comprises a face extraction part 7 for extracting the position and the size of a face from an image picked up by a camera part 4, a display part 3 for displaying the image toward a user, a communication part 9 for establishing two-way communication of the image to and from an information processor on the side of the other party, and a transmitting data processing part 8 for outputting to the communication part 9 an image in a rectangular transmission region set so as to be movable within the image picked up by the camera part 4. An effective region that moves integrally with the transmission region is set in the image picked up by the camera part 4, and the position of the transmission region is moved in conformity with the position of the face region whenever the face region deviates from the effective region.
    Type: Application
    Filed: May 22, 2001
    Publication date: December 20, 2001
    Inventors: Kazuyuki Imagawa, Hideaki Matsuo, Yuji Takata, Masafumi Yoshizawa, Shogo Hamasaki, Tetsuya Yoshimura, Katsuhiro Iwasa
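The region-following behaviour of publication 2001/0052928 amounts to: keep the transmission region still while the face stays inside the effective region, and shift it just enough to recapture the face when it drifts out. The sketch below assumes a face center point and a fixed margin; both are illustrative simplifications.

```python
def update_transmission_region(face_center, region_origin, region_size,
                               frame_size, margin=20):
    """Move the transmission region only when the face leaves the effective region.

    The effective region is the transmission region shrunk by `margin`; keeping the
    face inside it avoids re-centering the crop on every small head movement.
    """
    fx, fy = face_center
    ox, oy = region_origin
    w, h = region_size
    fw, fh = frame_size
    # Shift horizontally/vertically just enough to bring the face back inside.
    if fx < ox + margin:
        ox = fx - margin
    elif fx > ox + w - margin:
        ox = fx - w + margin
    if fy < oy + margin:
        oy = fy - margin
    elif fy > oy + h - margin:
        oy = fy - h + margin
    # Clamp so the transmission region stays inside the camera frame.
    ox = max(0, min(ox, fw - w))
    oy = max(0, min(oy, fh - h))
    return ox, oy

# Example: the face drifts right of the effective region, so the crop follows it.
print(update_transmission_region((150, 60), (20, 20), (100, 80), (320, 240)))
```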
  • Patent number: 6256400
    Abstract: An object of the present invention is to provide a method of segmenting hand gestures that automatically segments the hand gestures to be detected into words, or into apprehensible units structured by a plurality of words, when recognizing the hand gestures, without the user indicating where to segment. Transition feature data, describing a feature of a transition gesture that is not observed during a gesture representing a word but appears when transiting from one gesture to another, are stored in advance. Thereafter, a motion of the image corresponding to the part of the body in which the transition gesture is observed is detected (step S106), the detected motion is compared with the transition feature data (step S107), and a time position where the transition gesture is observed is determined so as to segment the hand gestures (step S108).
    Type: Grant
    Filed: September 28, 1999
    Date of Patent: July 3, 2001
    Assignees: Matsushita Electric Industrial Co., Ltd., Communications Research Laboratory, Independent Administration Institution
    Inventors: Yuji Takata, Hideaki Matsuo, Seiji Igi, Shan Lu, Yuji Nagashima
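U.S. 6,256,400 segments gestures wherever a stored transition gesture is observed. The sketch below assumes per-frame motion descriptors compared against a single stored transition feature by cosine similarity; the descriptor, the similarity measure, and the threshold are illustrative assumptions rather than the patented comparison.

```python
import numpy as np

def find_transitions(motion_features, transition_feature, threshold=0.5):
    """Frame indices whose motion most resembles the stored transition feature.

    `motion_features` is an (n_frames, d) array of per-frame motion descriptors and
    `transition_feature` is the stored d-dimensional transition-gesture feature.
    """
    f = motion_features / (np.linalg.norm(motion_features, axis=1, keepdims=True) + 1e-9)
    t = transition_feature / (np.linalg.norm(transition_feature) + 1e-9)
    similarity = f @ t                      # cosine similarity per frame
    hits = np.nonzero(similarity > threshold)[0]
    # Keep only the first frame of each contiguous run of transition frames.
    return [int(i) for j, i in enumerate(hits) if j == 0 or i != hits[j - 1] + 1]

def segment_gestures(motion_features, transition_feature):
    """Split the frame sequence into word-level segments at the detected transitions."""
    cuts = find_transitions(motion_features, transition_feature)
    bounds = [0] + cuts + [len(motion_features)]
    return [(s, e) for s, e in zip(bounds[:-1], bounds[1:]) if e > s]
```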
  • Publication number: 20010005219
    Abstract: A human tracking device according to the present invention stably tracks a human, with good perception of the distance to the human and high resistance to disturbance. A camera image is divided into a human region and a background region. It is then judged whether or not the human region can be divided into a plurality of blob models corresponding to parts of a human, preferably the head, trunk and legs. When the result of the judgment is “YES”, a plurality of human blob models are produced based on the human region. When the result of the judgment is “NO”, a single human blob model is produced based on the human region. The human is then tracked based on these blob models. In this way, the human can be stably tracked with good perception of the distance to the human and with high resistance to disturbance.
    Type: Application
    Filed: December 20, 2000
    Publication date: June 28, 2001
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata
  • Publication number: 20010002695
    Abstract: A 3D shape measurement method and a device using the method eliminate the harmful influence of periodic inconstancy in the phase shift method. Optical intensity patterns following periodic sine-wave functions are projected onto an object while their phases are shifted. Based on the images picked up of the object, the 3D shape of the object is measured. In this method, a plurality of optical intensity patterns following periodic functions with differing wavelengths are projected onto the object so as not to interfere with each other. The least common multiple of the wavelengths of the periodic functions is made larger than the extent subject to periodic inconstancy within the image pickup area.
    Type: Application
    Filed: December 1, 2000
    Publication date: June 7, 2001
    Inventors: Yuji Takata, Hideaki Matsuo, Kazuyuki Imagawa, Takeshi Ohashi
  • Publication number: 20010002932
    Abstract: In a broadly applicable face image extraction device and method, which define a face by position and size in target images of varied type at high speed, an edge extraction part 1 extracts an edge part from a target image and generates an edge image. A template storage part 2 stores in advance a template composed of a plurality of concentric shapes varied in size. A voting result storage part 3 has a voting storage region for each size of the concentric shapes of the template so as to store the results obtained by the voting processing carried out by a voting part 4. The voting part 4 carries out the voting processing utilizing the template at each pixel in the edge image, and stores the result obtained thereby in the corresponding voting storage region. After the voting processing, an analysis part 5 performs cluster evaluation based on the voting results stored in the voting storage regions, and then defines the face in the target image by position and size.
    Type: Application
    Filed: November 30, 2000
    Publication date: June 7, 2001
    Inventors: Hideaki Matsuo, Kazuyuki Imagawa, Yuji Takata, Naruatsu Baba, Toshiaki Ejima
  • Patent number: 6215890
    Abstract: A hand gesture recognizing device is provided which can correctly recognize hand gestures at high speed without requiring users to be equipped with tools. A gesture of a user is stereoscopically filmed by a photographing device 1 and then stored in an image storage device 2. A feature image extracting device 3 transforms colors of the stereoscopic image data read from the image storage device 2 in accordance with color transformation tables created by a color transformation table creating device 13, and disassembles and outputs the feature image of the user in corresponding channels. A spatial position calculating device 4 calculates spatial positions of feature parts of the user by utilizing parallax of the feature image output from the feature image extracting device 3. A region dividing device 5 defines the space around the user with spatial region codes. A hand gesture detecting device 6 detects how the hands of the user move in relation to the spatial region codes.
    Type: Grant
    Filed: September 25, 1998
    Date of Patent: April 10, 2001
    Assignees: Matsushita Electric Industrial Co., Ltd., Communications Research Laboratory of Ministry of Posts and Telecommunications
    Inventors: Hideaki Matsuo, Yuji Takata, Terutaka Teshima, Seiji Igi, Shan Lu, Kazuyuki Imagawa
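The spatial region codes of U.S. 6,215,890 can be pictured as a coarse grid of cells anchored to the user's body: hand positions are converted to cell codes, and a gesture becomes the sequence of codes the hand visits. The uniform grid, cell size, and coordinate conventions below are illustrative assumptions, not the patented space division.

```python
import numpy as np

def spatial_region_code(hand_pos, body_center, cell=0.25):
    """Code the space around the user as a coarse 3-D grid cell index.

    `hand_pos` and `body_center` are (x, y, z) positions in metres; the returned
    tuple identifies which region around the body the hand currently occupies.
    """
    offset = np.asarray(hand_pos, float) - np.asarray(body_center, float)
    return tuple(np.floor(offset / cell).astype(int))

def region_code_sequence(hand_track, body_center):
    """Collapse a hand trajectory into the sequence of region codes it visits."""
    current, seq = None, []
    for pos in hand_track:
        code = spatial_region_code(pos, body_center)
        if code != current:          # record only transitions between regions
            seq.append(code)
            current = code
    return seq

# Example: a hand rising from waist to head height changes region codes.
track = [(0.3, 1.0, 0.4), (0.3, 1.2, 0.4), (0.3, 1.5, 0.4)]
print(region_code_sequence(track, body_center=(0.0, 1.2, 0.0)))
```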