Patents by Inventor Kensuke Terakawa

Kensuke Terakawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
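Several of the abstracts below describe concrete algorithmic steps; short, hypothetical code sketches illustrating some of those steps, keyed to the publication numbers, follow the listing.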

  • Publication number: 20070189609
    Abstract: A face discriminating process judges whether a discrimination target image is an image of a face, based on characteristic amounts of the discrimination target image. Gradation conversion of pixel values is administered as a preliminary process to suppress fluctuations in contrast within the discrimination target image. In the gradation conversion process, the degrees of variance of pixel values within local regions of the discrimination target image are caused to approach a predetermined level. The local regions are set to a size that includes a single eye of the face to be discriminated.
    Type: Application
    Filed: March 31, 2006
    Publication date: August 16, 2007
    Inventors: Wataru Ito, Sadato Akahori, Kensuke Terakawa, Yoshiro Kitamura
  • Publication number: 20070165951
    Abstract: To detect a face image in an inputted image, predetermined-size partial images are cut out at different positions in the inputted image. An indicator value indicating a probability of each partial image being the face image is calculated. The partial images having the indicator values not less than a first threshold are extracted as candidate face images. Each candidate is set as a candidate of interest. If any nearby candidate is present within a predetermined coordinate distance from the candidate of interest, the candidate of interest and the nearby candidate are set in one candidate group. For each candidate group, an integrated indicator value reflecting the indicator values calculated for the candidates forming the candidate group is calculated. Then, an image within a predetermined area in the inputted image containing the candidate group having the integrated indicator value not less than a second threshold is extracted as the face image.
    Type: Application
    Filed: January 12, 2007
    Publication date: July 19, 2007
    Inventors: Sadato Akahori, Kensuke Terakawa
  • Publication number: 20070104374
    Abstract: An index representing the probability that a fraction image is a face image including a face in an input image is calculated, for each of the positions of the face to be detected, on the basis of a feature value. When the sum of the indexes of the fraction images is not smaller than a first threshold value, the image formed by the fraction images is determined to be a face image.
    Type: Application
    Filed: October 13, 2006
    Publication date: May 10, 2007
    Applicant: FUJIFILM Corporation
    Inventor: Kensuke Terakawa
  • Publication number: 20070076954
    Abstract: An index representing the probability that an input image is a face image including a face oriented in a predetermined orientation is calculated, for each of different predetermined orientations, on the basis of a feature value of the input image including a face. The orientation of the face included in the input image is then identified on the basis of the ratio of the indexes calculated for the different predetermined orientations.
    Type: Application
    Filed: October 3, 2006
    Publication date: April 5, 2007
    Inventor: Kensuke Terakawa
  • Publication number: 20070071329
    Abstract: Usability of a face detecting apparatus is improved by enabling selection of a detecting mode optimal for the intended purpose when detecting facial images from within images. During detection of images of forward facing faces, the detecting mode can be switched among a detection rate mode, a false positive detection rate mode, and a processing speed mode. Face detection is then performed with detection performance optimized for the selected detecting mode.
    Type: Application
    Filed: September 28, 2006
    Publication date: March 29, 2007
    Inventor: Kensuke Terakawa
  • Publication number: 20070047822
    Abstract: False positive detection of discrimination targets within images is reduced, while detection processes are accelerated. A partial image generating means generates a plurality of partial images by scanning a subwindow over an entire image. A candidate classifier judges whether each of the partial images represents a face (the discrimination target), and candidate images that possibly represent faces are detected. A discrimination target discriminating means then judges whether each of the candidate images represents a face. The candidate classifier has performed learning employing reference sample images and in-plane rotated sample images.
    Type: Application
    Filed: August 31, 2006
    Publication date: March 1, 2007
    Inventors: Yoshiro Kitamura, Sadato Akahori, Kensuke Terakawa
  • Publication number: 20070036431
    Abstract: Different objects are detected in an input image by applying, to partial images cut at different positions in the input image, a plurality of weak classifiers that evaluate whether a detection target image is an image of a predetermined object based on a histogram of values of characteristic quantities calculated from a plurality of sample images representing the predetermined object. The histogram is extended to multiple dimensions, and the criterion for the evaluation by the weak classifiers is a multi-dimensional histogram that represents the histograms for the different objects in the form of vectors.
    Type: Application
    Filed: August 9, 2006
    Publication date: February 15, 2007
    Inventor: Kensuke Terakawa
  • Publication number: 20070036429
    Abstract: In a method of detecting a predetermined object in an input image, one or more sample image groups representing the object with a predetermined part or parts occluded are prepared, in addition to a sample image group representing the entirety of the object, by shifting the position at which sample images in the entirety sample image group are cut. A plurality of detectors are generated by causing the detectors to learn the respective types of sample image groups according to a machine learning method. The detectors are applied to partial images cut sequentially from the input image at different positions, and judgment is made as to whether each of the partial images represents the object in its entirety or in a state of partial occlusion.
    Type: Application
    Filed: August 9, 2006
    Publication date: February 15, 2007
    Inventor: Kensuke Terakawa
  • Publication number: 20060222217
    Abstract: A face discriminating process judges whether a discrimination target image is an image of a face, based on characteristic amounts of the discrimination target image. A first and a second brightness gradation converting process are administered as preliminary processes. The first brightness gradation converting process is administered on regions in which the degrees of variance of pixel values are greater than or equal to a first predetermined level. The second brightness gradation converting process is administered on regions in which the degrees of variance are less than the first predetermined level. The first brightness gradation converting process causes the degree of variance to approach a second predetermined level. The second brightness gradation converting process suppresses the degree of variance to be less than the second predetermined level.
    Type: Application
    Filed: March 31, 2006
    Publication date: October 5, 2006
    Inventors: Yoshiro Kitamura, Sadato Akahori, Kensuke Terakawa
  • Publication number: 20060215905
    Abstract: A plurality of different facial images is used to cause a face classification apparatus to learn a characteristic feature of a face by using a machine-learning method. Each of the facial images includes a face which has the same direction and the same angle of inclination as those of the faces included in the other facial images, and each of the facial images is limited to an image of a specific facial region. For example, the facial region is a predetermined region including only a specific facial part, excluding the region below the upper lip to avoid the influence of changes in facial expression. Alternatively, if the apparatus is used to detect a frontal face and to perform refined detection processing on the extracted face candidate, a region including only an eye or eyes, a nose and an upper lip is used as the facial region.
    Type: Application
    Filed: March 7, 2006
    Publication date: September 28, 2006
    Inventors: Yoshiro Kitamura, Sadato Akahori, Kensuke Terakawa
  • Publication number: 20060035259
    Abstract: Structural element candidates, estimated to be predetermined structural elements of a predetermined subject, are detected from an image that includes the subject. The subject that includes the structural element candidates is detected from the image in the vicinity of the detected structural element candidates. The characteristics of the structural elements are discriminated from the image in the vicinity of the structural element candidates, at a higher accuracy than when the structural element candidates were detected. In the case that the characteristics of the structural elements are discriminated, the structural element candidates are confirmed as being the predetermined structural elements.
    Type: Application
    Filed: August 10, 2005
    Publication date: February 16, 2006
    Inventors: Kouji Yokouchi, Sadato Akahori, Kensuke Terakawa
  • Publication number: 20050226499
    Abstract: Red-eye area detection accuracy is improved according to the pattern of appearance of red-eye areas that occur frequently in an actual photography environment. A red-eye candidate area finding unit finds red-eye candidate areas in a digital photograph image by using reference data. A display unit displays the image with the red-eye candidate areas marked and preliminarily corrected. A user can specify an unfound red-eye area and an erroneously specified area by using a specification unit. An update unit updates the reference data by learning characteristics of the unfound red-eye area and the erroneously specified area, so that the probability of detecting an area with a characteristic similar to the unfound red-eye area as a red-eye candidate area becomes higher, while the probability of detecting an area with a characteristic similar to the erroneously specified area as a red-eye area becomes lower.
    Type: Application
    Filed: March 25, 2005
    Publication date: October 13, 2005
    Inventor: Kensuke Terakawa
  • Publication number: 20050219385
    Abstract: Occurrence of red eye at the time of flash photography can be prevented with a high probability, according to the tendency of red-eye occurrence caused by various factors and to differences in that tendency among people. A memory in a red-eye prevention device stores reference data for red-eye prevention defining photography conditions for each person. An identification unit identifies a person subjected to flash photography by a camera. In the case where the identification unit cannot identify the person, a registration unit registers the person as an additional person, together with initial photography conditions, in the memory as a part of the reference data. The identification unit then identifies the person as the additional person, and a photography condition selection unit selects the actual photography conditions for red-eye prevention during flash photography of the identified person by the camera, based on the reference data in the memory.
    Type: Application
    Filed: March 25, 2005
    Publication date: October 6, 2005
    Inventor: Kensuke Terakawa
  • Publication number: 20050094869
    Abstract: A moving image generating apparatus generates a three-dimensional moving image that displays a subject as a three-dimensional object. The apparatus comprises a plurality of two-dimensional moving image generators and a three-dimensional moving image generator. Each two-dimensional moving image generator is provided at a position whose relative position with respect to the other two-dimensional moving image generators is predetermined, and intermittently captures the subject at a timing different from the others, so that each generates its own two-dimensional captured moving image. The three-dimensional moving image generator generates the three-dimensional moving image, whose frame rate is higher than the frame rate of the two-dimensional captured moving images, based on both the relative positions of the two-dimensional moving image generators and the two-dimensional captured moving images they generate.
    Type: Application
    Filed: September 27, 2004
    Publication date: May 5, 2005
    Inventors: Akira Yoda, Kensuke Terakawa
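
The abstract of publication 20070189609 centers on a preliminary gradation conversion that pushes the variance of pixel values inside eye-sized local regions toward a predetermined level. The patent text publishes no code, so the following is only a minimal sketch of what such a local normalization could look like, assuming a grayscale NumPy image and hypothetical values for the target spread (TARGET_STD) and window size (WIN).

```python
import numpy as np

TARGET_STD = 40.0   # hypothetical "predetermined level" of local variation
WIN = 16            # hypothetical window size, roughly the size of a single eye

def normalize_local_contrast(gray: np.ndarray) -> np.ndarray:
    """Push the spread of pixel values in each WIN x WIN block toward TARGET_STD."""
    out = gray.astype(np.float64)
    h, w = out.shape
    for y in range(0, h, WIN):
        for x in range(0, w, WIN):
            block = out[y:y + WIN, x:x + WIN]
            mean, std = block.mean(), block.std()
            if std > 1e-6:
                # Rescale the block around its mean so its spread approaches TARGET_STD.
                out[y:y + WIN, x:x + WIN] = mean + (block - mean) * (TARGET_STD / std)
    return np.clip(out, 0, 255).astype(np.uint8)

img = (np.random.rand(64, 64) * 255).astype(np.uint8)
print(normalize_local_contrast(img).shape)   # (64, 64)
```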
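
Publication 20070165951 extracts candidate windows whose indicator values clear a first threshold, groups candidates that lie within a predetermined coordinate distance of one another, and accepts a group when its integrated indicator value clears a second threshold. The sketch below is a loose illustration of that grouping logic only; the thresholds, the distance, and the way scores are integrated (a plain sum) are assumptions, not the patented method.

```python
import math

FIRST_THRESHOLD = 0.5    # per-window indicator value needed to become a candidate (assumed)
SECOND_THRESHOLD = 2.0   # integrated value needed for a group to count as a face (assumed)
NEAR = 24                # "nearby" coordinate distance in pixels (assumed)

def group_candidates(scored_windows):
    """scored_windows: list of (x, y, indicator). Returns centers of accepted groups."""
    candidates = [w for w in scored_windows if w[2] >= FIRST_THRESHOLD]
    groups = []
    for x, y, s in candidates:
        for group in groups:
            if any(math.hypot(x - gx, y - gy) <= NEAR for gx, gy, _ in group):
                group.append((x, y, s))
                break
        else:
            groups.append([(x, y, s)])     # no nearby candidate: start a new group
    faces = []
    for group in groups:
        integrated = sum(s for _, _, s in group)   # integrate the members' indicator values
        if integrated >= SECOND_THRESHOLD:
            cx = sum(x for x, _, _ in group) / len(group)
            cy = sum(y for _, y, _ in group) / len(group)
            faces.append((cx, cy, integrated))
    return faces

print(group_candidates([(10, 10, 0.9), (14, 12, 0.8), (13, 9, 0.7), (200, 50, 0.6)]))
# one accepted group near (12, 10); the isolated low-scoring window is dropped
```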
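
Publication 20070104374 accepts an image as a face when the sum of the indexes computed for its fraction images reaches a threshold. The sketch below shows just that accumulation step; the index values and the threshold are invented.

```python
def is_face(fraction_indexes, threshold=3.0):
    """fraction_indexes: index values computed for the fraction images at the
    positions of the face to be detected. The image they form is judged to be a
    face when their sum is not smaller than the (assumed) threshold."""
    return sum(fraction_indexes) >= threshold

print(is_face([1.2, 0.9, 1.1]))   # True  (sum 3.2)
print(is_face([0.4, 0.3]))        # False (sum 0.7)
```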
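
Publication 20070076954 computes an index for each of several predetermined face orientations and identifies the orientation of the face from the ratio of those indexes. One simple reading, shown below with invented orientation labels and scores, is to pick the orientation that takes the largest share of the total index mass; the real criterion in the patent may differ.

```python
def identify_orientation(indexes_by_orientation):
    """indexes_by_orientation: dict mapping an orientation label to its index.
    Returns the orientation whose index takes the largest ratio of the total."""
    total = sum(indexes_by_orientation.values())
    if total <= 0:
        return None
    ratios = {label: value / total for label, value in indexes_by_orientation.items()}
    return max(ratios, key=ratios.get)

print(identify_orientation({"frontal": 0.7, "left_profile": 0.2, "right_profile": 0.1}))
# frontal
```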
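
Publication 20070071329 lets the user switch a face detector among a detection rate mode, a false positive detection rate mode, and a processing speed mode. One plausible way to express such modes, purely as an illustration, is a set of preset parameter bundles; every numeric value below is invented.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DetectionMode:
    name: str
    score_threshold: float   # lower -> more detections, but more false positives
    scale_step: float        # larger -> fewer scanned scales, so faster processing

# Hypothetical presets for the three modes named in the abstract.
MODES = {
    "detection_rate":      DetectionMode("detection_rate", 0.30, 1.10),
    "false_positive_rate": DetectionMode("false_positive_rate", 0.80, 1.10),
    "processing_speed":    DetectionMode("processing_speed", 0.50, 1.50),
}

def configure_detector(mode_name: str) -> DetectionMode:
    """Select the parameter bundle for the requested detecting mode."""
    return MODES[mode_name]

print(configure_detector("processing_speed"))
```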
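
Publication 20070047822 scans a subwindow over the whole image, lets a candidate classifier (which has learned from reference and in-plane rotated sample images) keep windows that possibly show a face, and then applies a stricter discriminating step to those candidates. The sketch below shows only that two-stage control flow; both classifiers are toy stand-ins.

```python
def scan_windows(image, size=24, step=8):
    """Yield (x, y, patch) subwindows over a 2-D list-of-lists image."""
    h, w = len(image), len(image[0])
    for y in range(0, h - size + 1, step):
        for x in range(0, w - size + 1, step):
            yield x, y, [row[x:x + size] for row in image[y:y + size]]

def detect_faces(image, candidate_classifier, face_discriminator):
    """Stage 1: a permissive candidate classifier keeps possible face windows.
    Stage 2: a stricter discriminator judges each remaining candidate."""
    candidates = [(x, y, patch) for x, y, patch in scan_windows(image)
                  if candidate_classifier(patch)]
    return [(x, y) for x, y, patch in candidates if face_discriminator(patch)]

# Toy stand-ins; real classifiers would be obtained by machine learning.
image = [[0] * 64 for _ in range(64)]
found = detect_faces(image,
                     candidate_classifier=lambda patch: True,
                     face_discriminator=lambda patch: sum(map(sum, patch)) == 0)
print(len(found))   # 36 windows scanned in a 64x64 image, all kept by the toy classifiers
```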
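
Publication 20070036431 replaces each weak classifier's one-dimensional histogram with a multi-dimensional one, so the bin selected by a characteristic quantity holds a vector of scores, one per object class. The tiny sketch below hard-codes such a table with invented numbers for two classes; a real system would learn it from sample images.

```python
NUM_BINS = 4  # bins for the quantized characteristic quantity (assumed)

# Multi-dimensional histogram: each bin stores a vector of per-class scores
# (here two hypothetical classes, e.g. "frontal face" and "profile face").
MULTI_HISTOGRAM = [
    [0.9, 0.1],   # bin 0
    [0.6, 0.3],   # bin 1
    [0.2, 0.7],   # bin 2
    [0.1, 0.9],   # bin 3
]

def weak_classify(characteristic_quantity: float):
    """Quantize a characteristic quantity in [0, 1) into a bin and return the
    vector of per-class evaluation scores stored there."""
    bin_index = min(int(characteristic_quantity * NUM_BINS), NUM_BINS - 1)
    return MULTI_HISTOGRAM[bin_index]

print(weak_classify(0.05))   # [0.9, 0.1] -> leans toward the first class
print(weak_classify(0.95))   # [0.1, 0.9] -> leans toward the second class
```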
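
Publication 20070036429 pairs a detector trained on the entire object with detectors trained on sample groups in which a part of the object is occluded, and applies them to each partial image to decide whether it shows the object fully or in an occluded state. A schematic sketch of that dispatch, with stand-in detectors keyed to a single toy feature:

```python
def judge_partial_image(patch, detectors):
    """detectors: dict mapping a state label ("entire", "left_occluded", ...) to a
    callable that returns True when the patch looks like the object in that state.
    Returns the first matching state, or None when no detector fires."""
    for state, detector in detectors.items():
        if detector(patch):
            return state
    return None

# Hypothetical stand-ins; real detectors would be generated by machine learning on
# the entirety sample group and on the shifted, partially occluded sample groups.
detectors = {
    "entire":        lambda p: p["visible_fraction"] > 0.9,
    "left_occluded": lambda p: 0.5 < p["visible_fraction"] <= 0.9,
}

print(judge_partial_image({"visible_fraction": 0.95}, detectors))  # entire
print(judge_partial_image({"visible_fraction": 0.70}, detectors))  # left_occluded
print(judge_partial_image({"visible_fraction": 0.20}, detectors))  # None
```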
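
Publication 20060222217 chooses between two brightness gradation conversions per region: regions whose pixel-value variance is at or above a first level are pushed toward a second level, while regions below the first level are kept below the second level. The sketch below is one simplified reading of that rule (low-variance regions are simply left unchanged, which keeps them below the second level); the two numeric levels are invented.

```python
import numpy as np

FIRST_LEVEL = 10.0    # spread (std) threshold that selects the conversion branch (assumed)
SECOND_LEVEL = 40.0   # target spread for strongly varying regions (assumed)

def convert_region(region: np.ndarray) -> np.ndarray:
    """Apply the first or second gradation conversion depending on the region's spread."""
    mean, std = region.mean(), region.std()
    if std >= FIRST_LEVEL:
        # First conversion: move the spread toward SECOND_LEVEL.
        scale = SECOND_LEVEL / std
    else:
        # Second conversion (simplified): leave the weak spread as it is,
        # so it stays below SECOND_LEVEL.
        scale = 1.0
    return np.clip(mean + (region - mean) * scale, 0, 255)

flat = np.full((8, 8), 128.0)                  # nearly uniform region
busy = np.array([[0.0, 255.0] * 4] * 8)        # strongly varying region
print(convert_region(flat).std(), convert_region(busy).std())   # ~0 and ~SECOND_LEVEL
```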
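
Publication 20050226499 marks red-eye candidate areas using reference data, lets the user point out missed red-eye areas and wrongly marked ones, and updates the reference data so that similar areas are detected more (or less) readily afterwards. The sketch below compresses that feedback loop into a toy scorer over a single numeric feature; everything about the scoring and the "reference data" structure is an assumption made for illustration.

```python
class ReferenceData:
    """Toy stand-in for the reference data: pools of example feature values."""
    def __init__(self):
        self.positives, self.negatives = [], []

    def score(self, area):
        # Placeholder similarity score; a real system would use learned characteristics.
        pos = sum(1 for p in self.positives if abs(p - area) < 10)
        neg = sum(1 for n in self.negatives if abs(n - area) < 10)
        return (pos + 1) / (pos + neg + 2)

    def update(self, unfound_areas, erroneous_areas):
        # Learn the user's corrections: raise the detection probability for areas similar
        # to missed red eyes, lower it for areas similar to false detections.
        self.positives.extend(unfound_areas)
        self.negatives.extend(erroneous_areas)

def find_red_eye_candidates(areas, reference_data, threshold=0.5):
    """Keep the areas the reference data currently scores as likely red eyes."""
    return [a for a in areas if reference_data.score(a) >= threshold]

ref = ReferenceData()
areas = [12, 55, 90]                      # areas summarized by a single toy feature value
print(find_red_eye_candidates(areas, ref))            # before feedback: [12, 55, 90]
ref.update(unfound_areas=[56], erroneous_areas=[91])
print(find_red_eye_candidates(areas, ref))            # after feedback:  [12, 55]
```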