Patents by Inventor Yoshitaka Ushiku
Yoshitaka Ushiku has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240404284
Abstract: An acquisition unit (30) acquires, for a task including a plurality of steps, a material feature quantity representing each material used in the task, and a video feature quantity extracted from each clip, which is a video capturing one step of the task. An updating unit (40) identifies an action on a material included in each clip based on the clip's video feature quantity, and updates the material feature quantity of the identified material in accordance with the identified action. A generation unit (50) generates a sentence explaining the task procedure for each step based on the updated material feature quantity, the identified action, and the video feature quantity.
Type: Application
Filed: September 21, 2022
Publication date: December 5, 2024
Applicants: OMRON Corporation, KYOTO UNIVERSITY
Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Shinsuke MORI, Hirotaka KAMEKO, Taichi NISHIMURA
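The core of the updating step can be illustrated as a running state over materials: each clip's identified action rewrites the feature of the material it acts on, so later steps see materials in their transformed state. A minimal sketch, in which the feature vectors, action set, and additive update rule are all illustrative assumptions, not the patented procedure:

```python
import numpy as np

# Hypothetical action effects on a 2-d material feature (not from the patent).
ACTION_DELTAS = {"chop": np.array([1.0, 0.0]), "fry": np.array([0.0, 1.0])}

def update_materials(material_feats, step_actions):
    """step_actions: list of (action, material) pairs, one per clip."""
    feats = {m: np.array(v, dtype=float) for m, v in material_feats.items()}
    for action, material in step_actions:
        feats[material] = feats[material] + ACTION_DELTAS[action]  # apply action
    return feats

feats = update_materials({"onion": [0.0, 0.0]},
                         [("chop", "onion"), ("fry", "onion")])
print(feats["onion"])  # → [1. 1.]
```

A sentence generator conditioned on these updated features could then describe step 2 as acting on an already-chopped onion rather than a raw one.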
-
Publication number: 20240370735
Abstract: A label generation method according to one aspect of the present invention prepares a first inference model trained on a first dataset obtained from a source domain, and a second inference model trained on a second dataset including second training data generated by adding a disturbance to first training data, and generates a third correct answer label for third training data on the basis of a consensus between the trained first inference model and second inference model.
Type: Application
Filed: August 17, 2022
Publication date: November 7, 2024
Inventors: Takehiko OHKAWA, Atsushi HASHIMOTO, Yoshitaka USHIKU, Yoichi SATO, Takuma YAGI
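The consensus step can be sketched as follows: a pseudo-label is assigned only where both trained models agree, and withheld otherwise. The function and model names are hypothetical, and the toy threshold classifiers stand in for the two trained inference models:

```python
def consensus_labels(model_a, model_b, samples):
    """Assign a pseudo-label only where both models predict the same class."""
    labels = []
    for x in samples:
        pred_a = model_a(x)
        pred_b = model_b(x)
        labels.append(pred_a if pred_a == pred_b else None)  # None = no consensus
    return labels

# Toy stand-ins for the source-domain and disturbance-trained models.
model_a = lambda x: int(x > 0.5)
model_b = lambda x: int(x > 0.4)

print(consensus_labels(model_a, model_b, [0.1, 0.45, 0.9]))  # → [0, None, 1]
```

Samples left unlabeled (here the 0.45 case, where the models disagree) would simply be excluded from the third training set.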
-
Patent number: 12056936
Abstract: A model generation apparatus according to one aspect of the present invention acquires a plurality of learning datasets, each constituted by a first sample, taken at a first time, of predetermined data obtained in time series, and feature information included in a second sample of the predetermined data taken at a second time later than the first time. For each learning dataset, the apparatus trains a prediction model, by machine learning, to predict the feature information of the second time from the first sample of the first time. In the model generation apparatus, a rarity degree is set for each learning dataset, and, in the machine learning, the model generation apparatus trains preferentially on learning datasets having a higher rarity degree.
Type: Grant
Filed: January 23, 2020
Date of Patent: August 6, 2024
Assignees: OMRON Corporation, KYOTO UNIVERSITY
Inventors: Atsushi Hashimoto, Yuta Kamikawa, Yoshitaka Ushiku, Masaaki Iiyama, Motoharu Sonogashira
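One common way to train preferentially on rare datasets is to weight each sample's loss by its rarity degree; the specific weighting below is an assumption for illustration, not the patented scheme:

```python
import numpy as np

def weighted_mse(pred, target, rarity):
    """Mean squared error in which each sample's error is scaled by its
    (normalized) rarity degree, so rare samples dominate the gradient."""
    rarity = np.asarray(rarity, dtype=float)
    weights = rarity / rarity.sum()  # normalize rarity degrees to weights
    err = np.asarray(pred, dtype=float) - np.asarray(target, dtype=float)
    return float(np.sum(weights * err ** 2))

# Two samples with identical error: the rarer one (rarity 3.0) contributes
# three times as much to the loss as the common one (rarity 1.0).
print(weighted_mse([1.0, 1.0], [0.0, 0.0], [1.0, 3.0]))  # → 1.0
```

With unit errors the weighted loss equals the plain mean only by coincidence of normalization; the point is the gradient split, 25% from the common sample versus 75% from the rare one.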
-
Publication number: 20240257922
Abstract: A model generation method according to one aspect of the present invention acquires first data and second data regarding a crystal structure of a material, and performs machine learning for a first encoder and a second encoder by using the first data and the second data. The second data indicates a property of the material with an index different from that of the first data. The first encoder is configured to convert the first data into a first feature vector, and the second encoder is configured to convert the second data into a second feature vector. The dimension of the first feature vector is the same as the dimension of the second feature vector. In the machine learning, the first encoder and the second encoder are trained so that the feature vectors of positive samples are positioned close to each other, and the feature vector of a negative sample is positioned far from the feature vector of a positive sample.
Type: Application
Filed: August 17, 2022
Publication date: August 1, 2024
Inventors: Tatsunori TANIAI, Yoshitaka USHIKU, Naoya CHIBA, Yuta SUZUKI, Kanta ONO
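The pull-together/push-apart objective is commonly realized with an InfoNCE-style contrastive loss; the abstract does not name one, so treat the following as a sketch under that assumption, with both encoders elided and their output vectors given directly:

```python
import numpy as np

def contrastive_loss(z1, z2, temperature=0.1):
    """z1, z2: (n, d) batches of first/second feature vectors; row i of z1
    and row i of z2 describe the same material (a positive pair)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature              # pairwise cosine similarities
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))  # -log p(positive pair | row)

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
aligned = contrastive_loss(z, z)        # positive pairs coincide: low loss
shuffled = contrastive_loss(z, z[::-1]) # positive pairs mismatched: high loss
print(aligned < shuffled)  # → True
```

Minimizing this loss pulls matched first/second feature vectors together while the denominator pushes each row away from all mismatched (negative) vectors in the batch.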
-
Patent number: 11834052
Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder to have its parameters converging toward higher-accuracy local solutions for estimating the target person's state, thus generating the estimator that may estimate the target person's state more accurately.
Type: Grant
Filed: March 13, 2019
Date of Patent: December 5, 2023
Assignee: OMRON Corporation
Inventors: Atsushi Hashimoto, Yoshitaka Ushiku, Yasuyo Kotake
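The shared-encoder arrangement can be sketched as one encoder feeding two heads, so that gradients from the physiological-reconstruction task also shape the representation used for state estimation. All layer sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
W_enc = rng.normal(size=(16, 8))    # shared encoder: 16-d face features -> 8-d code
W_state = rng.normal(size=(8, 3))   # head 1: logits over 3 target-person states
W_physio = rng.normal(size=(8, 4))  # head 2: 4-d physiological reconstruction

def forward(x):
    code = np.tanh(x @ W_enc)        # common encoder output, shared by both heads
    state_logits = code @ W_state    # first estimator
    physio_recon = code @ W_physio   # second estimator
    return state_logits, physio_recon

state, physio = forward(rng.normal(size=(1, 16)))
print(state.shape, physio.shape)  # → (1, 3) (1, 4)
```

In training, both heads' losses would backpropagate through `W_enc`, which is the mechanism the abstract credits for the higher-accuracy local solutions.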
-
Publication number: 20230013870
Abstract: The accuracy of a model that extracts a graph structure as an intermediate representation from input data is improved. An encoding unit (100) extracts a feature amount of each of a plurality of vertices included in a graph structure (Tr) from input data (10), and calculates a likelihood that an edge is connected to each vertex. A sampling unit (130) determines the graph structure (Tr) based on the result of applying a Gumbel-Softmax function to the likelihood. A learning unit (150) optimizes a decoding unit (140) and the encoding unit (100) by back propagation using a loss function including an error (LP) between output data (20) generated from the graph structure (Tr) and correct data.
Type: Application
Filed: February 19, 2021
Publication date: January 19, 2023
Applicants: OMRON CORPORATION, KYOTO UNIVERSITY
Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Shinsuke MORI, Taichi NISHIMURA
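Gumbel-Softmax lets the sampler make near-discrete edge choices while remaining differentiable for backpropagation. A minimal sketch of the sampling itself, with the encoder that produces the logits omitted and the temperature value an assumption:

```python
import numpy as np

def gumbel_softmax(logits, temperature=0.5, rng=None):
    """Relaxed sample from a categorical over candidate edges: adds
    Gumbel(0, 1) noise to the logits, then applies a tempered softmax."""
    if rng is None:
        rng = np.random.default_rng()
    gumbel = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel noise
    y = (logits + gumbel) / temperature
    y = np.exp(y - y.max())          # numerically stable softmax
    return y / y.sum()               # soft one-hot over candidate edges

edge_logits = np.array([2.0, 0.5, -1.0])  # likelihoods for 3 candidate edges
sample = gumbel_softmax(edge_logits, rng=np.random.default_rng(42))
print(sample.round(3))
```

As the temperature is lowered the output approaches a hard one-hot edge selection, but its gradient with respect to the logits stays defined, which is what allows the encoder to be optimized end to end.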
-
Publication number: 20220139092
Abstract: A model generation apparatus according to one aspect of the present invention acquires a plurality of learning datasets, each constituted by a first sample, taken at a first time, of predetermined data obtained in time series, and feature information included in a second sample of the predetermined data taken at a second time later than the first time. For each learning dataset, the apparatus trains a prediction model, by machine learning, to predict the feature information of the second time from the first sample of the first time. In the model generation apparatus, a rarity degree is set for each learning dataset, and, in the machine learning, the model generation apparatus trains preferentially on learning datasets having a higher rarity degree.
Type: Application
Filed: January 23, 2020
Publication date: May 5, 2022
Applicants: OMRON Corporation, NATIONAL UNIVERSITY CORPORATION, KYOTO UNIVERSITY
Inventors: Atsushi HASHIMOTO, Yuta KAMIKAWA, Yoshitaka USHIKU, Masaaki IIYAMA, Motoharu SONOGASHIRA
-
Publication number: 20210269046
Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder to have its parameters converging toward higher-accuracy local solutions for estimating the target person's state, thus generating the estimator that may estimate the target person's state more accurately.
Type: Application
Filed: March 13, 2019
Publication date: September 2, 2021
Applicant: OMRON Corporation
Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Yasuyo KOTAKE
-
Patent number: 11004239
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: December 13, 2019
Date of Patent: May 11, 2021
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
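The interconversion idea behind this family of patents (this entry and the related grants and publications below) can be sketched as encoding each capture time as a periodic sin/cos temporal feature and learning a linear map between it and an image-feature component. The daily period, synthetic component, and least-squares fit are illustrative assumptions:

```python
import numpy as np

PERIOD = 24.0                    # assume a daily repetitive period, in hours
times = np.arange(0, 48, 1.0)    # capture times over two days

def temporal_feature(t):
    """Periodic temporal feature derived from the capture time."""
    phase = 2 * np.pi * t / PERIOD
    return np.stack([np.sin(phase), np.cos(phase)], axis=-1)

# Synthetic "image feature component" that varies periodically with time.
component = 3.0 * np.sin(2 * np.pi * times / PERIOD) + 1.0

T = temporal_feature(times)                    # (48, 2) design matrix
A = np.column_stack([T, np.ones(len(times))])  # add a bias column
coef, *_ = np.linalg.lstsq(A, component, rcond=None)
print(coef)  # recovers sin amplitude ≈ 3, cos term ≈ 0, offset ≈ 1
```

Once such a map is learned, it runs in both directions: a capture time predicts the periodic part of the image feature, and an observed feature component can be located within the period.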
-
Patent number: 10839561
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: December 13, 2019
Date of Patent: November 17, 2020
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
-
Publication number: 20200118298
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: December 13, 2019
Publication date: April 16, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Publication number: 20200118297
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: December 13, 2019
Publication date: April 16, 2020
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Patent number: 10580166
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Grant
Filed: July 13, 2016
Date of Patent: March 3, 2020
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
-
Publication number: 20180204354
Abstract: A repetitive structure extraction device includes an image feature extraction unit which extracts an image feature for each of a plurality of images which are captured at one or a plurality of locations and which are given different capture times, a temporal feature extraction unit which extracts, for each of the plurality of images, a temporal feature according to a predetermined period from a capture time given to the image, and a repetitive structure extraction unit which learns, on the basis of the image feature extracted for each of the plurality of images by the image feature extraction unit and the temporal feature extracted for each of the plurality of images by the temporal feature extraction unit, a repetitive structure which is used to perform interconversion between the temporal feature and a component of the image feature and which is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
Type: Application
Filed: July 13, 2016
Publication date: July 19, 2018
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
-
Patent number: 9875301
Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
Type: Grant
Filed: April 30, 2014
Date of Patent: January 23, 2018
Assignee: Microsoft Technology Licensing, LLC
Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku
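The corpus-refinement step can be sketched as clustering items by feature similarity and dropping items that sit far from their cluster centroid, using the distance-to-centroid as a simple intra-cluster noise signal. The median-based threshold and the use of centroids are illustrative assumptions, not the patented procedure:

```python
import numpy as np

def refine_corpus(features, assignments, k=2.0):
    """Keep items whose distance to their cluster centroid is at most
    k times the cluster's median distance (a robust outlier threshold)."""
    features = np.asarray(features, dtype=float)
    assignments = np.asarray(assignments)
    keep = []
    for c in np.unique(assignments):
        idx = np.where(assignments == c)[0]
        centroid = features[idx].mean(axis=0)
        dist = np.linalg.norm(features[idx] - centroid, axis=1)
        keep.extend(idx[dist <= k * np.median(dist)])
    return sorted(keep)

feats = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [6.0, 6.0],  # cluster 0 + outlier
         [9.0, 9.0], [9.1, 9.0]]                          # cluster 1
labels = [0, 0, 0, 0, 1, 1]
print(refine_corpus(feats, labels))  # → [0, 1, 2, 4, 5] (outlier at 3 dropped)
```

The surviving indices form the refined corpus on which the topic models would then be trained.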
-
Publication number: 20150317389
Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
Type: Application
Filed: April 30, 2014
Publication date: November 5, 2015
Applicant: Microsoft Corporation
Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku