Patents by Inventor Yoshitaka Ushiku

Yoshitaka Ushiku has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240404284
    Abstract: An acquisition unit (30) acquires, for a task including a plurality of steps, a material feature quantity representing each material used in the task, and a video feature quantity extracted from each clip, that is, a video in which each step of the task is captured. An updating unit (40) identifies an action for a material included in each clip based on the clip's video feature quantity, and updates the material feature quantity of the identified material in accordance with the identified action. A generation unit (50) generates a sentence explaining the task procedure for each of the steps based on the updated material feature quantity, the identified action, and the video feature quantity.
    Type: Application
    Filed: September 21, 2022
    Publication date: December 5, 2024
    Applicants: OMRON Corporation, KYOTO UNIVERSITY
    Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Shinsuke MORI, Hirotaka KAMEKO, Taichi NISHIMURA
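The flow this abstract describes (per-clip action identification, material feature update, sentence generation) can be sketched as follows. This is an illustrative toy, not the patented implementation: the action list, the argmax "classifier", the blending update, and the template sentences are all hypothetical stand-ins.

```python
# Toy sketch: identify an action per clip, update the material's feature
# vector accordingly, then emit a templated procedure sentence per step.
ACTIONS = ["chop", "mix", "fry"]

def identify_action(video_feature):
    # Stand-in for a learned classifier: pick the action whose index
    # has the largest feature value.
    return ACTIONS[max(range(len(ACTIONS)), key=lambda i: video_feature[i])]

def update_material(material_feature, video_feature, alpha=0.5):
    # Blend the material's feature toward the clip's video feature.
    return [(1 - alpha) * m + alpha * v
            for m, v in zip(material_feature, video_feature)]

def describe_steps(materials, clips):
    sentences = []
    for material_name, video_feature in clips:
        action = identify_action(video_feature)
        materials[material_name] = update_material(materials[material_name],
                                                   video_feature)
        sentences.append(f"{action.capitalize()} the {material_name}.")
    return sentences

materials = {"onion": [0.0, 0.0, 0.0]}
clips = [("onion", [0.9, 0.1, 0.0]), ("onion", [0.1, 0.8, 0.2])]
print(describe_steps(materials, clips))  # → ['Chop the onion.', 'Mix the onion.']
```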
  • Publication number: 20240370735
    Abstract: A label generation method according to one aspect of the present invention prepares a first inference model trained on a first dataset obtained from a source domain and a second inference model trained on a second dataset that includes second training data generated by adding a disturbance to first training data, and generates a third correct answer label for third training data on the basis of a consensus between the prepared first and second inference models.
    Type: Application
    Filed: August 17, 2022
    Publication date: November 7, 2024
    Inventors: Takehiko OHKAWA, Atsushi HASHIMOTO, Yoshitaka USHIKU, Yoichi SATO, Takuma YAGI
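The consensus step above can be sketched simply: a label is kept only where the two trained models agree. Both "models" below are toy threshold stand-ins, not the trained inference models the abstract refers to.

```python
# Hedged sketch of consensus-based label generation: emit a pseudo-label
# only where the source-trained and disturbance-trained models agree.
def model_source(x):
    # Stand-in for the model trained on the source-domain dataset.
    return 1 if x > 0.5 else 0

def model_disturbed(x):
    # Stand-in for the model trained on disturbance-augmented data.
    return 1 if x > 0.4 else 0

def consensus_labels(samples):
    labels = {}
    for x in samples:
        a, b = model_source(x), model_disturbed(x)
        if a == b:  # keep only agreeing (confident) labels
            labels[x] = a
    return labels

print(consensus_labels([0.2, 0.45, 0.9]))  # → {0.2: 0, 0.9: 1}
```

The sample at 0.45 is dropped because the two models disagree on it, which is the point of the consensus filter.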
  • Patent number: 12056936
    Abstract: A model generation apparatus according to one aspect of the present invention acquires a plurality of learning datasets, each constituted by a first sample of predetermined data obtained in time series at a first time and feature information included in a second sample of the predetermined data at a future second time relative to the first time, and trains a prediction model, by machine learning, to predict the feature information of the second time from the first sample of the first time, for each learning dataset. In the model generation apparatus, a rarity degree is set for each learning dataset, and, in the machine learning, the model generation apparatus trains preferentially on learning datasets having a higher rarity degree.
    Type: Grant
    Filed: January 23, 2020
    Date of Patent: August 6, 2024
    Assignees: OMRON Corporation, KYOTO UNIVERSITY
    Inventors: Atsushi Hashimoto, Yuta Kamikawa, Yoshitaka Ushiku, Masaaki Iiyama, Motoharu Sonogashira
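The rarity-weighted training idea can be sketched as a loss in which each sample's error is scaled by a rarity degree. The inverse-frequency rarity measure and the squared-error loss below are illustrative assumptions, not the patented formulation.

```python
# Sketch of rarity-weighted training: each learning dataset gets a rarity
# degree, and rare datasets weigh more heavily in the training loss.
def rarity_degree(y, all_y):
    # A simple rarity measure: inverse frequency of the target value.
    return len(all_y) / all_y.count(y)

def weighted_loss(preds, targets):
    total = 0.0
    for p, y in zip(preds, targets):
        w = rarity_degree(y, targets)  # rare targets weigh more
        total += w * (p - y) ** 2
    return total

# The single rare target (y=1) dominates the loss despite equal errors.
print(weighted_loss([0, 0, 0, 0], [0, 0, 0, 1]))  # → 4.0
```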
  • Publication number: 20240257922
    Abstract: A model generation method according to one aspect of the present invention acquires first data and second data regarding a crystal structure of a material, and performs machine learning for a first encoder and a second encoder by using the first data and the second data. The second data indicates a property of the material with an index different from that of the first data. The first encoder is configured to convert the first data into a first feature vector, and the second encoder is configured to convert the second data into a second feature vector; the dimension of the first feature vector is the same as that of the second feature vector. In the machine learning, the first encoder and the second encoder are trained so that the feature vectors of positive samples are positioned close to each other and the feature vector of a negative sample is positioned far from that of a positive sample.
    Type: Application
    Filed: August 17, 2022
    Publication date: August 1, 2024
    Inventors: Tatsunori TANIAI, Yoshitaka USHIKU, Naoya CHIBA, Yuta SUZUKI, Kanta ONO
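The push-pull objective in this abstract resembles contrastive learning, which can be sketched with an InfoNCE-style loss over the two encoders' equal-dimension outputs. Here the "encoded" first and second data are stand-in arrays, and the temperature value is an illustrative choice; the abstract does not specify this particular loss.

```python
# Contrastive sketch: matching rows of z1 and z2 are positive pairs
# (pulled together); all other rows act as negatives (pushed apart).
import numpy as np

def info_nce(z1, z2, temperature=0.1):
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature
    # Cross-entropy with the diagonal (matching pairs) as the targets.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
first = rng.normal(size=(4, 8))                   # stand-in encoded first data
second = first + 0.01 * rng.normal(size=(4, 8))   # near-identical positives
print(info_nce(first, second))  # low loss: positives are already aligned
```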
  • Patent number: 11834052
    Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder's parameters to converge toward higher-accuracy local solutions for estimating the target person's state, thus generating an estimator that may estimate the target person's state more accurately.
    Type: Grant
    Filed: March 13, 2019
    Date of Patent: December 5, 2023
    Assignee: OMRON Corporation
    Inventors: Atsushi Hashimoto, Yoshitaka Ushiku, Yasuyo Kotake
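The shared-encoder arrangement can be sketched as one encoder feeding two heads, so that both training losses shape the shared parameters. All components below are toy linear stand-ins with made-up shapes, not the patented networks.

```python
# Sketch: one shared encoder, two task heads (state estimation and
# physiological-data reconstruction) reading the same representation.
import numpy as np

rng = np.random.default_rng(1)
W_enc = rng.normal(size=(6, 4))     # shared encoder weights
W_state = rng.normal(size=(4, 2))   # head 1: state estimation
W_physio = rng.normal(size=(4, 3))  # head 2: physiological reconstruction

def forward(face_features):
    z = face_features @ W_enc         # shared representation
    return z @ W_state, z @ W_physio  # both heads read the same z

face = rng.normal(size=(1, 6))        # stand-in face-image features
state, physio = forward(face)
print(state.shape, physio.shape)      # → (1, 2) (1, 3)
```

Because gradients from both heads flow through `W_enc`, the reconstruction task acts as an auxiliary signal for the state-estimation task, which is the effect the abstract describes.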
  • Publication number: 20230013870
    Abstract: The accuracy of a model that extracts a graph structure as an intermediate representation from input data is improved. An encoding unit (100) extracts a feature amount of each of a plurality of vertices included in a graph structure (Tr) from input data (10), and calculates the likelihood that an edge is connected to each vertex. A sampling unit (130) determines the graph structure (Tr) based on the result of applying a Gumbel-Softmax function to the likelihoods. A learning unit (150) optimizes a decoding unit (140) and the encoding unit (100) by back propagation, using a loss function that includes the error (LP) between output data (20) generated from the graph structure (Tr) and correct data.
    Type: Application
    Filed: February 19, 2021
    Publication date: January 19, 2023
    Applicants: OMRON CORPORATION, KYOTO UNIVERSITY
    Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Shinsuke MORI, Taichi NISHIMURA
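The Gumbel-Softmax step is the piece that makes the discrete edge choice differentiable, so back propagation can reach the encoder. A minimal sketch, with illustrative shapes and temperature:

```python
# Gumbel-Softmax over edge likelihoods: adds Gumbel noise to the logits,
# then applies a temperature-scaled softmax, giving a soft one-hot that
# approximates discrete edge sampling but stays differentiable.
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = (logits + g) / tau
    e = np.exp(y - y.max())
    return e / e.sum()  # soft one-hot over candidate edges

edge_logits = np.array([2.0, 0.1, -1.0])  # likelihoods for 3 candidate edges
probs = gumbel_softmax(edge_logits, tau=0.5, rng=np.random.default_rng(0))
print(probs)  # a valid distribution, typically peaked at the likeliest edge
```

Lowering `tau` makes the output closer to a hard one-hot selection; raising it smooths the distribution.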
  • Publication number: 20220139092
    Abstract: A model generation apparatus according to one aspect of the present invention acquires a plurality of learning datasets, each constituted by a first sample of predetermined data obtained in time series at a first time and feature information included in a second sample of the predetermined data at a future second time relative to the first time, and trains a prediction model, by machine learning, to predict the feature information of the second time from the first sample of the first time, for each learning dataset. In the model generation apparatus, a rarity degree is set for each learning dataset, and, in the machine learning, the model generation apparatus trains preferentially on learning datasets having a higher rarity degree.
    Type: Application
    Filed: January 23, 2020
    Publication date: May 5, 2022
    Applicants: OMRON Corporation, NATIONAL UNIVERSITY CORPORATION, KYOTO UNIVERSITY
    Inventors: Atsushi HASHIMOTO, Yuta KAMIKAWA, Yoshitaka USHIKU, Masaaki IIYAMA, Motoharu SONOGASHIRA
  • Publication number: 20210269046
    Abstract: An estimator generation apparatus may include a first estimator and a second estimator sharing a common encoder. The first estimator may be trained to determine a target person's state from face image data. The second estimator may be trained to reconstruct physiological data from face image data. The machine learning may allow the common encoder's parameters to converge toward higher-accuracy local solutions for estimating the target person's state, thus generating an estimator that may estimate the target person's state more accurately.
    Type: Application
    Filed: March 13, 2019
    Publication date: September 2, 2021
    Applicant: OMRON Corporation
    Inventors: Atsushi HASHIMOTO, Yoshitaka USHIKU, Yasuyo KOTAKE
  • Patent number: 11004239
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: May 11, 2021
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
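The temporal-feature idea in this abstract (and its related family entries below) can be sketched by mapping a capture time onto periodic features, which can then be correlated with image-feature components that vary with the same period. The daily period and the sine/cosine construction are illustrative assumptions.

```python
# Encode a capture time as a point on the unit circle so that times near
# the period boundary (e.g. 23:00 and 01:00) get nearby features,
# unlike raw hour values.
import math

def temporal_feature(hour, period=24.0):
    angle = 2 * math.pi * hour / period
    return (math.cos(angle), math.sin(angle))

print(temporal_feature(0))   # → (1.0, 0.0)
print(temporal_feature(12))  # roughly (-1.0, 0.0): opposite phase of the day
```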
  • Patent number: 10839561
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Grant
    Filed: December 13, 2019
    Date of Patent: November 17, 2020
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
  • Publication number: 20200118298
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Application
    Filed: December 13, 2019
    Publication date: April 16, 2020
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
  • Publication number: 20200118297
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Application
    Filed: December 13, 2019
    Publication date: April 16, 2020
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
  • Patent number: 10580166
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Grant
    Filed: July 13, 2016
    Date of Patent: March 3, 2020
    Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato Kimura, Yoshitaka Ushiku, Kunio Kashino
  • Publication number: 20180204354
    Abstract: A repetitive structure extraction device includes an image feature extraction unit, a temporal feature extraction unit, and a repetitive structure extraction unit. The image feature extraction unit extracts an image feature for each of a plurality of images that are captured at one or a plurality of locations and are given different capture times. The temporal feature extraction unit extracts, for each of the images, a temporal feature according to a predetermined period from the capture time given to the image. On the basis of the extracted image features and temporal features, the repetitive structure extraction unit learns a repetitive structure that is used to perform interconversion between the temporal feature and a component of the image feature and that is provided according to a correlation of periodic change between the component of the image feature and the temporal feature.
    Type: Application
    Filed: July 13, 2016
    Publication date: July 19, 2018
    Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
    Inventors: Akisato KIMURA, Yoshitaka USHIKU, Kunio KASHINO
  • Patent number: 9875301
    Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
    Type: Grant
    Filed: April 30, 2014
    Date of Patent: January 23, 2018
    Assignee: Microsoft Technology Licensing, LLC
    Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku
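The corpus-refinement step this abstract describes can be sketched as clustering followed by outlier removal: items far from their cluster's centroid (a simple intra-cluster feature) are dropped before topic models are trained. The distance measure and threshold below are illustrative stand-ins for the intra-cluster and inter-cluster features in the abstract.

```python
# Sketch: drop multimedia items whose features sit far from their
# cluster centroid, treating them as likely mislabeled, to refine
# the corpus before topic-model training.
def centroid(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def refine(cluster, max_dist=1.0):
    c = centroid(cluster)
    keep = []
    for p in cluster:
        dist = sum((a - b) ** 2 for a, b in zip(p, c)) ** 0.5
        if dist <= max_dist:  # drop likely-mislabeled outliers
            keep.append(p)
    return keep

cluster = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]  # last item is an outlier
# A loose threshold, since the outlier drags the centroid toward itself.
print(refine(cluster, max_dist=3.0))  # → [[0.0, 0.0], [0.1, 0.0]]
```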
  • Publication number: 20150317389
    Abstract: Systems and methods for learning topic models from unstructured data and applying the learned topic models to recognize semantics for new data items are described herein. In at least one embodiment, a corpus of multimedia data items associated with a set of labels may be processed to generate a refined corpus of multimedia data items associated with the set of labels. Such processing may include arranging the multimedia data items in clusters based on similarities of extracted multimedia features and generating intra-cluster and inter-cluster features. The intra-cluster and the inter-cluster features may be used for removing multimedia data items from the corpus to generate the refined corpus. The refined corpus may be used for training topic models for identifying labels. The resulting models may be stored and subsequently used for identifying semantics of a multimedia data item input by a user.
    Type: Application
    Filed: April 30, 2014
    Publication date: November 5, 2015
    Applicant: Microsoft Corporation
    Inventors: Xian-Sheng Hua, Jin Li, Yoshitaka Ushiku