Patents by Inventor Miki Haseyama
Miki Haseyama has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20210056414
Abstract: Provided is a learning apparatus capable of generating learning pattern information for causing meaningful output information corresponding to input information to be accurately output, while reducing the amount of learning data needed to generate the learning pattern corresponding to the input information. When generating learning pattern data PD for obtaining meaningful output corresponding to image data GD, the learning pattern data PD corresponding to the results of deep-layer learning processing using the image data GD, the learning apparatus: acquires, from an external source, external data BD corresponding to the image data GD; on the basis of a correlation between image feature data GC indicating a feature of the image data GD and external feature data BC indicating a feature of the external data BD, converts the image feature data GC; and generates converted image feature data MC.
Type: Application
Filed: January 31, 2019
Publication date: February 25, 2021
Applicant: NATIONAL UNIVERSITY CORPORATION HOKKAIDO UNIVERSITY
Inventors: Miki HASEYAMA, Takahiro OGAWA
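The correlation-driven feature conversion described in this abstract can be illustrated with a canonical-correlation-style projection: image features GC are mapped into a subspace maximally correlated with the external features BC. The abstract does not specify CCA; this is a minimal sketch under that assumption, and all variable names besides GC/BC/MC are illustrative.

```python
import numpy as np

def cca_transform(GC, BC, dim=2):
    """Project image features GC into a subspace correlated with external
    features BC via a whitened cross-covariance SVD (CCA-style sketch;
    the patent does not fix this exact estimator)."""
    GC = GC - GC.mean(axis=0)
    BC = BC - BC.mean(axis=0)
    n = GC.shape[0]
    # Regularized covariance and cross-covariance matrices.
    Cgg = GC.T @ GC / n + 1e-6 * np.eye(GC.shape[1])
    Cbb = BC.T @ BC / n + 1e-6 * np.eye(BC.shape[1])
    Cgb = GC.T @ BC / n
    # Whiten each side, then take the top singular directions of the
    # whitened cross-covariance.
    Wg = np.linalg.inv(np.linalg.cholesky(Cgg)).T
    Wb = np.linalg.inv(np.linalg.cholesky(Cbb)).T
    U, _, _ = np.linalg.svd(Wg.T @ Cgb @ Wb)
    proj = Wg @ U[:, :dim]
    return GC @ proj          # converted image feature data MC

rng = np.random.default_rng(0)
GC = rng.normal(size=(100, 5))                        # image features
BC = GC[:, :3] + 0.1 * rng.normal(size=(100, 3))      # correlated external data
MC = cca_transform(GC, BC)
print(MC.shape)  # (100, 2)
```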
-
Patent number: 9077949
Abstract: A content search device includes a feature quantity computing unit that computes a feature quantity of at least any one of an image feature, an acoustic feature and a semantic feature included in each piece of content data, and that stores feature quantity data. The device also includes an unknown feature quantity computing unit that computes an unknown feature quantity of each feature type not associated with a content identifier in the feature quantity data by use of the feature quantity of the feature type associated with the content identifier, and that stores the unknown feature quantity as a feature estimated value in the feature quantity data. The device further includes a distance computing unit that computes a distance indicating a similarity between each two pieces of content data based on the feature quantities and the feature estimated values stored in the feature quantity data.
Type: Grant
Filed: November 6, 2009
Date of Patent: July 7, 2015
Assignee: National University Corporation Hokkaido University
Inventor: Miki Haseyama
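The two processing stages named above, estimating unknown feature quantities from known ones and then computing pairwise distances, can be sketched as follows. The abstract does not specify the estimator; this sketch assumes a simple k-nearest-neighbour imputation over a known feature type, and all names and parameter values are illustrative.

```python
import numpy as np

def fill_unknown_features(img_feat, ac_feat, known, k=3):
    """Estimate missing acoustic features from the k items whose image
    features are closest (illustrative k-NN imputation), and return the
    completed acoustic features ("feature estimated values")."""
    ac_est = ac_feat.copy()
    known_idx = np.where(known)[0]
    for i in np.where(~known)[0]:
        d = np.linalg.norm(img_feat[known_idx] - img_feat[i], axis=1)
        nn = known_idx[np.argsort(d)[:k]]
        ac_est[i] = ac_feat[nn].mean(axis=0)
    return ac_est

def pairwise_distances(feats):
    """Distance indicating a similarity between each two pieces of content."""
    diff = feats[:, None, :] - feats[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(1)
img = rng.normal(size=(6, 4))                 # image feature quantities
ac = rng.normal(size=(6, 2))                  # acoustic feature quantities
known = np.array([True, True, True, True, False, False])
ac_full = fill_unknown_features(img, ac, known)
D = pairwise_distances(np.hstack([img, ac_full]))
print(D.shape)  # (6, 6)
```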
-
Patent number: 8180162
Abstract: A similar image retrieving device (1) comprises: an image database (21) for storage of sets of image data, and sets of keywords each associated with a corresponding set of image data; a cluster classification section (11) to read the sets of image data, provide each set of image data with a compatibility value as an index representative of a set of compatibilities of a corresponding one of the sets of keywords, and classify the sets of image data into clusters in accordance with the compatibility value; an optimum cluster extracting section (12) to provide a set of query image data with a compatibility value, and select the cluster to which the query image data should belong so as to minimize the error caused in a Projection onto Convex Sets using the clusters; and a similar image extracting section (13) to output, as a similar image, a set of image data provided with a close compatibility value, among the sets of image data belonging to the cluster selected by the optimum cluster extracting section (12).
Type: Grant
Filed: October 23, 2008
Date of Patent: May 15, 2012
Assignee: National University Corporation Hokkaido University
Inventor: Miki Haseyama
-
Patent number: 8180161
Abstract: An image classification device includes a characteristic value set calculation unit 11 that calculates a characteristic value set of the whole image for each of multiple sets of image data in an image database 51, detects an edge of each set of image data, and calculates a characteristic value set of the detected edge portions; a first clustering unit 12 that classifies the multiple sets of image data into multiple clusters on the basis of the characteristic value sets of the whole images; a second clustering unit 13 that further classifies the multiple clusters classified by the first clustering unit 12 into multiple clusters on the basis of the characteristic value sets of the edge portions; and a cluster integration unit 14 that determines which pixels constitute a subject in each of the multiple sets of image data, based on the composition of the image, and integrates some of the multiple clusters classified by the second clustering unit 13 together based on the pixels constituting the subject.
Type: Grant
Filed: December 1, 2008
Date of Patent: May 15, 2012
Assignee: National University Corporation Hokkaido University
Inventor: Miki Haseyama
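The two-stage clustering described above (whole-image features first, edge features second) can be sketched with a small k-means routine. The abstract does not name the clustering algorithm; k-means and the cluster counts here are assumptions for illustration, and the final cluster-integration step is omitted.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Tiny k-means (Lloyd's algorithm) used for both clustering stages."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def two_stage_cluster(whole_feat, edge_feat, k1=2, k2=2):
    """Stage 1: cluster on whole-image characteristic value sets.
    Stage 2: refine each stage-1 cluster using edge-portion features."""
    stage1 = kmeans(whole_feat, k1)
    final = np.full(len(whole_feat), -1)
    next_id = 0
    for c in range(k1):
        idx = np.where(stage1 == c)[0]
        if len(idx) < k2:            # too small to subdivide further
            final[idx] = next_id
            next_id += 1
            continue
        sub = kmeans(edge_feat[idx], k2, seed=c + 1)
        final[idx] = sub + next_id
        next_id += k2
    return final

rng = np.random.default_rng(2)
whole = np.vstack([rng.normal(0, 0.1, (10, 3)), rng.normal(5, 0.1, (10, 3))])
edges = rng.normal(size=(20, 2))
labels = two_stage_cluster(whole, edges)
print(len(labels))  # 20
```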
-
Patent number: 8098890
Abstract: Kalman filter processing is applied to each of successive images of a scene obscured by fog, captured by an onboard camera of a vehicle. The measurement matrix for the Kalman filter is established based on currently estimated characteristics of the fog, and intrinsic luminance values of a scene portrayed by a current image constitute the state vector for the Kalman filter. Adaptive filtering for removing the effects of fog from the images is thereby achieved, with the filtering being optimized in accordance with the degree of image deterioration caused by the fog.
Type: Grant
Filed: June 12, 2008
Date of Patent: January 17, 2012
Assignees: DENSO CORPORATION, National University Corporation Hokkaido University
Inventor: Miki Haseyama
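The structure described above, intrinsic luminance as the Kalman state and a fog-dependent measurement matrix, can be sketched per pixel with a scalar filter and the common fog model y = t·J + (1−t)·A, where t is the estimated transmission and A the airlight. The fog model, the constant-state transition, and all numeric values are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def kalman_defog(frames, transmission, airlight, q=1e-3, r=1e-2):
    """Per-pixel scalar Kalman filter: the state is the intrinsic
    luminance J; each foggy measurement obeys y = t*J + (1-t)*A, so the
    measurement matrix is the estimated transmission t."""
    J = frames[0].astype(float)          # initial state estimate
    P = np.ones_like(J)                  # state covariance
    for y in frames[1:]:
        P = P + q                        # predict (identity transition)
        H = transmission                 # fog-dependent measurement matrix
        S = H * P * H + r                # innovation covariance
        K = P * H / S                    # Kalman gain
        J = J + K * (y - (H * J + (1 - H) * airlight))
        P = (1 - K * H) * P
    return J

rng = np.random.default_rng(3)
clean = rng.uniform(0.2, 0.8, size=(8, 8))
t, A = 0.6, 1.0
frames = np.array([t * clean + (1 - t) * A + 0.01 * rng.normal(size=clean.shape)
                   for _ in range(30)])
J = kalman_defog(frames, t, A)
print(float(np.abs(J - clean).mean()))  # small residual after filtering
```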
-
Publication number: 20110225196
Abstract: A moving image search device includes: a moving image database (11) for storage of sets of moving image data; a scene dividing unit (21) which divides a visual signal of the sets of moving image data into shots and outputs, as a scene, continuous shots having a small difference in the characteristic value set of the audio signal; a video signal similarity calculation unit (23) which calculates, for each of the scenes obtained by the division by the scene dividing unit (21), video signal similarities to the other scenes according to a characteristic value set of the visual signal and a characteristic value set of the audio signal, and thus generates video signal similarity data (12); a video signal similarity search unit (26) which searches the scenes according to the video signal similarity data (12) to find a scene having a smaller similarity to each scene than a certain threshold; and a video signal similarity display unit (29) which acquires and displays coordinates corresponding to the similarity […]
Type: Application
Filed: March 18, 2009
Publication date: September 15, 2011
Applicant: National University Corporation Hokkaido University
Inventor: Miki Haseyama
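The scene-dividing step above, merging consecutive shots while their audio characteristic value sets differ little, can be sketched as a simple threshold test between neighbouring shots. The threshold value and the feature representation are illustrative assumptions.

```python
import numpy as np

def shots_to_scenes(audio_feats, threshold=1.0):
    """Merge consecutive shots into one scene while the audio-feature
    difference between neighbouring shots stays below a threshold."""
    scenes, current = [], [0]
    for i in range(1, len(audio_feats)):
        if np.linalg.norm(audio_feats[i] - audio_feats[i - 1]) < threshold:
            current.append(i)           # small audio change: same scene
        else:
            scenes.append(current)      # audio changed: start a new scene
            current = [i]
    scenes.append(current)
    return scenes

feats = np.array([[0.0], [0.1], [0.15], [3.0], [3.1]])  # per-shot audio features
print(shots_to_scenes(feats))  # [[0, 1, 2], [3, 4]]
```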
-
Publication number: 20110225153
Abstract: A content search device 1 comprises: feature quantity computing means 10 for, for each piece of content data of multiple types, computing a feature quantity of at least any one of an image feature, an acoustic feature and a semantic feature included in the piece of content data, and storing feature quantity data 34a; unknown feature quantity computing means 14 for computing an unknown feature quantity of each feature type not associated with a content identifier in the feature quantity data 34a by use of the feature quantity of the feature type associated with the content identifier, and storing the unknown feature quantity as a feature estimated value in the feature quantity data 34a; distance computing means 15 for computing a distance indicating a similarity between each two pieces of content data based on the feature quantities and the feature estimated values stored in the feature quantity data 34a; and display means 16 for determining a display position of a thumbnail corresponding to each piece of content […]
Type: Application
Filed: November 6, 2009
Publication date: September 15, 2011
Applicant: National University Corporation Hokkaido University
Inventor: Miki Haseyama
-
Publication number: 20110211772
Abstract: A similar image retrieving device (1) comprises: an image database (21) for storage of sets of image data, and sets of keywords each associated with a corresponding set of image data; a cluster classification section (11) to read the sets of image data, provide each set of image data with a compatibility value as an index representative of a set of compatibilities of a corresponding one of the sets of keywords, and classify the sets of image data into clusters in accordance with the compatibility value; an optimum cluster extracting section (12) to provide a set of query image data with a compatibility value, and select the cluster to which the query image data should belong so as to minimize the error caused in a Projection onto Convex Sets using the clusters; and a similar image extracting section (13) to output, as a similar image, a set of image data provided with a close compatibility value, among the sets of image data belonging to the cluster selected by the optimum cluster extracting section (12).
Type: Application
Filed: October 23, 2008
Publication date: September 1, 2011
Applicant: National University Corporation Hokkaido University
Inventor: Miki Haseyama
-
Publication number: 20110103700
Abstract: An image classification device includes a characteristic value set calculation unit 11 that calculates a characteristic value set of the whole image for each of multiple sets of image data in an image database 51, detects an edge of each set of image data, and calculates a characteristic value set of the detected edge portions; a first clustering unit 12 that classifies the multiple sets of image data into multiple clusters on the basis of the characteristic value sets of the whole images; a second clustering unit 13 that further classifies the multiple clusters classified by the first clustering unit 12 into multiple clusters on the basis of the characteristic value sets of the edge portions; and a cluster integration unit 14 that determines which pixels constitute a subject in each of the multiple sets of image data, based on the composition of the image, and integrates some of the multiple clusters classified by the second clustering unit 13 together based on the pixels constituting the subject.
Type: Application
Filed: December 1, 2008
Publication date: May 5, 2011
Applicant: National University Corporation Hokkaido University
Inventor: Miki Haseyama
-
Publication number: 20100195736
Abstract: Matching processing reconstructs divided lost regions, which are obtained by dividing a lost region in an image of Frame t into regions each including N×N pixels as a unit, from corresponding regions of an estimated image of a previously reconstructed Frame t−1 using a boundary matching method. Estimation pre-processing calculates local regions of the estimated image of Frame t−1 which correspond to local regions of each divided lost region in the image of Frame t, using a block matching method, and calculates second motion vectors for respective pixels from the local regions associated with regions in the image of Frame t−1, for all L×L pixels included in each local region of the divided lost region. Original image estimation processing defines a transition model and an observation model from the result obtained by the estimation pre-processing, and estimates the original image using a Kalman filter algorithm.
Type: Application
Filed: April 9, 2010
Publication date: August 5, 2010
Applicant: NATIONAL UNIVERSITY CORP HOKKAIDO UNIVERSITY
Inventor: Miki Haseyama
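The block matching step named above, pairing a local region of Frame t with its best match in Frame t−1, can be sketched as an exhaustive sum-of-absolute-differences (SAD) search in a small window. The block size, search range, and SAD criterion are illustrative assumptions.

```python
import numpy as np

def block_match(prev, cur_block, top, left, search=2):
    """Find the motion vector for an N×N block by exhaustive SAD search
    in a small window of the previous frame (Frame t-1)."""
    N = cur_block.shape[0]
    best, best_mv = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + N > prev.shape[0] or x + N > prev.shape[1]:
                continue               # candidate falls outside the frame
            err = np.abs(prev[y:y+N, x:x+N] - cur_block).sum()  # SAD
            if err < best:
                best, best_mv = err, (dy, dx)
    return best_mv

rng = np.random.default_rng(5)
prev = rng.uniform(size=(12, 12))
cur = np.roll(prev, shift=(1, 2), axis=(0, 1))   # frame shifted by (1, 2)
mv = block_match(prev, cur[4:8, 4:8], top=4, left=4)
print(mv)  # (-1, -2): the block came from (1, 2) earlier in the previous frame
```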
-
Patent number: 7689053
Abstract: An image processing method is described which makes it possible to obtain an image which has a high quality and resolution without leaving boundaries of adjacent blocks detectable, even in the case where the image has already been subjected to a processing of resolution enhancement (enlargement) using a set of fractal parameters.
Type: Grant
Filed: January 13, 2006
Date of Patent: March 30, 2010
Assignee: Panasonic Corporation
Inventors: Satoshi Kondo, Miki Haseyama, Norihiro Kakukou
-
Publication number: 20080317287
Abstract: Kalman filter processing is applied to each of successive images of a scene obscured by fog, captured by an onboard camera of a vehicle. The measurement matrix for the Kalman filter is established based on currently estimated characteristics of the fog, and intrinsic luminance values of a scene portrayed by a current image constitute the state vector for the Kalman filter. Adaptive filtering for removing the effects of fog from the images is thereby achieved, with the filtering being optimized in accordance with the degree of image deterioration caused by the fog.
Type: Application
Filed: June 12, 2008
Publication date: December 25, 2008
Applicants: DENSO CORPORATION, National University Corporation Hokkaido University
Inventor: Miki Haseyama
-
Publication number: 20070230814
Abstract: An image reconstructing method for reconstructing an image accurately even if the true support is unknown. An initial image (I) is denoted by (ginitial) (S1300). A measured support is subjected to an expansion processing (S1400) to generate an image (d) showing the support (D) (S1500). Snakes are applied to the image (d) (S1700), and an extracted object (D′) is made a new support (D) (S1800). Using the obtained support (D) and the Fourier amplitude |F| of the original image, an ER algorithm is applied to (ginitial) M times to obtain an output image (gn) (S1900). The obtained (gn) is used as the new (ginitial) and (d) (S2000, S2100). Steps S1700 to S2100 are repeated a predetermined number of times (N times), thus reconstructing the image. The output image (gN) created after the N-th repetition is the reconstructed image.
Type: Application
Filed: February 20, 2005
Publication date: October 4, 2007
Inventors: Miki Haseyama, Keiko Kondo
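The ER (Error-Reduction) step S1900 above is the classic Fienup-style iteration: enforce the measured Fourier amplitude |F|, then enforce the support D in image space. This sketch shows one iteration on a toy image with a known support; the Snakes-based support refinement (S1700–S1800) is omitted, and the positivity constraint is an added assumption.

```python
import numpy as np

def er_step(g, support, F_amp):
    """One Error-Reduction iteration: impose the measured Fourier
    amplitude |F|, then zero everything outside the support D."""
    G = np.fft.fft2(g)
    G = F_amp * np.exp(1j * np.angle(G))          # keep measured amplitude
    g = np.real(np.fft.ifft2(G))
    return np.where(support, np.maximum(g, 0), 0.0)  # support + positivity

rng = np.random.default_rng(4)
true = np.zeros((16, 16))
true[4:10, 5:11] = rng.uniform(0.5, 1.0, (6, 6))
support = true > 0
F_amp = np.abs(np.fft.fft2(true))                 # measured |F|

g = np.where(support, 1.0, 0.0)                   # g_initial
err0 = np.linalg.norm(np.abs(np.fft.fft2(g)) - F_amp)
for _ in range(200):                              # apply ER M times
    g = er_step(g, support, F_amp)
err = np.linalg.norm(np.abs(np.fft.fft2(g)) - F_amp)
print(err < err0)  # the Fourier-amplitude error is non-increasing under ER
```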
-
Publication number: 20060165307
Abstract: To provide an image processing method which makes it possible to obtain an image which has a high quality and resolution without leaving boundaries of adjacent blocks detectable, even in the case where the image has already been subjected to a processing of resolution enhancement (enlargement) using a set of fractal parameters.
Type: Application
Filed: January 13, 2006
Publication date: July 27, 2006
Inventors: Satoshi Kondo, Miki Haseyama, Norihiro Kakukou