Patents by Inventor Yoshitaka Ushiku
Yoshitaka Ushiku has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11834052
Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder's parameters to converge toward higher-accuracy local solutions for estimating the target person's state, thus generating an estimator that may estimate the target person's state more accurately.
Type: Grant
Filed: March 13, 2019
Date of Patent: December 5, 2023
Assignee: OMRON Corporation
Inventors: Atsushi Hashimoto, Yoshitaka Ushiku, Yasuyo Kotake
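The shared-encoder arrangement this abstract describes (two task heads driving one encoder so that both losses shape its parameters) can be sketched roughly as follows. The layer sizes, the NumPy stand-in for face-image features, and the two linear heads are illustrative assumptions, not the patented implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# one shared encoder, two task-specific heads
W_enc = rng.normal(size=(128, 64))     # common encoder weights
W_state = rng.normal(size=(64, 3))     # first estimator: person's state
W_physio = rng.normal(size=(64, 16))   # second estimator: physiological data

def forward(x):
    z = np.maximum(x @ W_enc, 0.0)     # shared representation (ReLU)
    return z @ W_state, z @ W_physio   # both heads read the same encoding

x = rng.normal(size=(8, 128))          # stand-in for face-image features
state_logits, physio_pred = forward(x)
```

In training, the state-classification loss and the physiological-reconstruction loss would both backpropagate into `W_enc`, which is the mechanism the abstract credits for steering the encoder toward better local solutions.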
-
Publication number: 20230013870
Abstract: The accuracy of a model that extracts a graph structure as an intermediate representation from input data is improved. An encoding unit (100) extracts a feature amount for each of a plurality of vertices included in a graph structure (Tr) from input data (10), and calculates a likelihood that an edge is connected to each vertex. A sampling unit (130) determines the graph structure (Tr) based on the result of applying a Gumbel-Softmax function to the likelihood. A learning unit (150) optimizes a decoding unit (140) and the encoding unit (100) by back propagation using a loss function that includes an error (LP) between output data (20) generated from the graph structure (Tr) and correct data.
Type: Application
Filed: February 19, 2021
Publication date: January 19, 2023
Applicants: OMRON CORPORATION, KYOTO UNIVERSITY
Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Shinsuke MORI, Taichi NISHIMURA
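The Gumbel-Softmax function the sampling unit relies on yields a relaxed, differentiable sample from a categorical distribution, which is what lets edge choices be trained by back propagation. A minimal sketch, with hypothetical edge likelihoods and temperature:

```python
import numpy as np

rng = np.random.default_rng(0)

def gumbel_softmax(logits, tau=0.5):
    """Relaxed (differentiable) sample from a categorical distribution."""
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max(axis=-1, keepdims=True))         # stable softmax
    return e / e.sum(axis=-1, keepdims=True)

# edge-connection likelihoods for four candidate neighbors of one vertex
logits = np.log(np.array([0.1, 0.6, 0.2, 0.1]))
soft_edge = gumbel_softmax(logits)   # approaches a one-hot choice as tau -> 0
```

Because the output is a smooth probability vector rather than a hard index, the loss gradient can flow back through the edge selection into the encoder.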
-
Publication number: 20220139092
Abstract: A model generation apparatus according to one aspect of the present invention acquires a plurality of learning datasets, each constituted by a first sample of predetermined data obtained in time series at a first time and by feature information included in a second sample of the predetermined data at a second, future time relative to the first time, and trains a prediction model, by machine learning, to predict the feature information of the second time from the first sample of the first time, for each learning dataset. In the model generation apparatus, a rarity degree is set for each learning dataset, and, in the machine learning, the model generation apparatus trains more heavily on learning datasets having a higher rarity degree.
Type: Application
Filed: January 23, 2020
Publication date: May 5, 2022
Applicants: OMRON Corporation, NATIONAL UNIVERSITY CORPORATION, KYOTO UNIVERSITY
Inventors: Atsushi HASHIMOTO, Yuta KAMIKAWA, Yoshitaka USHIKU, Masaaki IIYAMA, Motoharu SONOGASHIRA
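Rarity-weighted training of the kind this abstract describes can be sketched as a weighted loss in which rarer datasets contribute more. The rarity scores, the per-dataset losses, and the normalization scheme below are made-up placeholders, not the patented weighting rule:

```python
import numpy as np

# rarity degree set for each learning dataset (higher = rarer pattern)
rarity = np.array([0.1, 0.3, 0.9, 2.0])

# per-dataset prediction losses from some model (placeholder values)
losses = np.array([0.5, 0.4, 0.8, 0.6])

# normalize rarity into weights so that rare datasets dominate the objective
weights = rarity / rarity.sum()
weighted_loss = float((weights * losses).sum())
```

An equivalent realization would sample minibatches with probability proportional to `weights` instead of reweighting the loss; both bias training toward the rare cases.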
-
Publication number: 20210269046
Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder's parameters to converge toward higher-accuracy local solutions for estimating the target person's state, thus generating an estimator that may estimate the target person's state more accurately.
Type: Application
Filed: March 13, 2019
Publication date: September 2, 2021
Applicant: OMRON Corporation
Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Yasuyo KOTAKE
-
Patent number: 11004239
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: December 13, 2019
Date of Patent: May 11, 2021
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
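One direction of the interconversion described above (temporal feature to image-feature component) can be sketched with a sinusoidal encoding of capture time and a least-squares fit. The 24-hour period and the synthetic image-feature component are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# capture times (hours) for images taken at one location
t = np.arange(0, 96, 1.0)

def temporal_feature(t, period=24.0):
    """Periodic encoding of capture time for a predetermined period."""
    phase = 2 * np.pi * t / period
    return np.stack([np.sin(phase), np.cos(phase)], axis=-1)

# toy image-feature component that varies with the daily cycle plus noise
component = np.sin(2 * np.pi * t / 24.0) + 0.1 * rng.normal(size=t.shape)

tf = temporal_feature(t)
# linear map from temporal feature to image-feature component: one
# direction of the learned interconversion, fit by least squares
coef, *_ = np.linalg.lstsq(tf, component, rcond=None)
```

The fitted coefficients capture the correlation of periodic change between the component and the temporal feature; the inverse map (component to phase) would give the other direction of the interconversion.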
-
Patent number: 10839561
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: December 13, 2019
Date of Patent: November 17, 2020
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
-
Publication number: 20200118297
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: December 13, 2019
Publication date: April 16, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Publication number: 20200118298
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: December 13, 2019
Publication date: April 16, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Patent number: 10580166
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: July 13, 2016
Date of Patent: March 3, 2020
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
-
Publication number: 20180204354
Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the plurality of images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the image features and the temporal features extracted for the plurality of images, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: July 13, 2016
Publication date: July 19, 2018
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Patent number: 9875301
Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
Type: Grant
Filed: April 30, 2014
Date of Patent: January 23, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku
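The corpus-refinement step (dropping items whose features are incoherent with the other items carrying the same label) can be sketched with an intra-cluster distance criterion. The toy feature vectors and the mean-plus-one-standard-deviation cutoff below are assumptions for illustration, not the patented intra-/inter-cluster features:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy feature vectors for multimedia items sharing one label:
# 20 coherent items plus 3 likely-mislabeled outliers
features = np.vstack([
    rng.normal(0.0, 0.1, size=(20, 5)),
    rng.normal(5.0, 0.1, size=(3, 5)),
])

# intra-cluster cohesion: each item's mean distance to all items
dists = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=-1)
mean_dist = dists.mean(axis=1)

# remove items far from the rest to produce the refined corpus
keep = mean_dist < mean_dist.mean() + mean_dist.std()
refined = features[keep]
```

A refined corpus built this way would then feed topic-model training, with the outlier items excluded from the label's training set.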
-
Publication number: 20150317389
Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
Type: Application
Filed: April 30, 2014
Publication date: November 5, 2015
Applicant: Microsoft Corporation
Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku