Patents by Inventor Takayuki Osogami

Takayuki Osogami has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20190042943
    Abstract: Deep reinforcement learning of cooperative neural networks can be performed by obtaining an action and observation sequence including a plurality of time frames, each time frame including action values and observation values. At least some of the observation values of each time frame of the action and observation sequence can be input sequentially into a first neural network including a plurality of first parameters. The action values of each time frame of the action and observation sequence and output values from the first neural network corresponding to the at least some of the observation values of each time frame of the action and observation sequence can be input sequentially into a second neural network including a plurality of second parameters. An action-value function can be approximated using the second neural network, and the plurality of first parameters of the first neural network can be updated using backpropagation.
    Type: Application
    Filed: August 4, 2017
    Publication date: February 7, 2019
    Inventors: Sakyasingha Dasgupta, Takayuki Osogami
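The two-network idea in this abstract can be sketched in a few lines: observations feed a first network, whose output features are combined with action values and fed into a second network that approximates the action-value function, with backpropagation reaching back into the first network's parameters. This is a minimal illustrative sketch, not the patented method; the dimensions, learning rate, and fixed training target are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
OBS_DIM, ACT_DIM, HID_DIM = 3, 2, 4

# First network: maps observation values to hidden features (first parameters).
W1 = rng.normal(scale=0.1, size=(HID_DIM, OBS_DIM))
# Second network: maps [action values, features] to a Q-value (second parameters).
W2 = rng.normal(scale=0.1, size=(ACT_DIM + HID_DIM,))

def q_value(obs, act):
    h = np.tanh(W1 @ obs)          # first-network output for this time frame
    x = np.concatenate([act, h])   # action values + first-network features
    return W2 @ x, h, x

def td_update(obs, act, target, lr=0.05):
    """One temporal-difference step; the gradient flows through both networks."""
    global W1, W2
    q, h, x = q_value(obs, act)
    err = q - target
    gW2 = err * x                          # gradient w.r.t. second parameters
    gh = err * W2[ACT_DIM:]                # backpropagate into the features
    gW1 = np.outer(gh * (1 - h**2), obs)   # ...and through tanh into W1
    W2 -= lr * gW2
    W1 -= lr * gW1
    return err

# Train on one fixed (observation, action) pair toward a known return of 1.0.
obs, act = np.ones(OBS_DIM), np.array([1.0, 0.0])
errs = [abs(td_update(obs, act, 1.0)) for _ in range(200)]
```

The temporal-difference error shrinks as both parameter sets are updated jointly, which is the "cooperative" aspect the abstract describes.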
  • Publication number: 20190034386
    Abstract: A method is presented for predicting values of multiple input items. The method includes allowing a user to select a first set of variables and input first values therein and predicting second values for a second set of variables, the second values predicted in real-time as the first values are being inputted by the user. A tree-based prediction model is used to predict the second values. The tree-based prediction model is a regression tree or a decision tree.
    Type: Application
    Filed: December 4, 2017
    Publication date: January 31, 2019
    Inventors: Ryo Kawahara, Takayuki Osogami
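The real-time prediction loop described in this abstract (and its sibling filing below) can be illustrated with a hand-built regression tree that re-predicts a second variable each time the user updates a first one. The variable name, thresholds, and leaf values here are invented for illustration only.

```python
# A toy regression tree: predicts a second variable ("monthly cost")
# from a first, user-entered variable ("usage hours").
tree = {
    "split_on": "usage_hours", "threshold": 10,
    "left": {"leaf": 25.0},                       # light users
    "right": {"split_on": "usage_hours", "threshold": 40,
              "left": {"leaf": 60.0},
              "right": {"leaf": 120.0}},
}

def predict(node, inputs):
    """Walk the tree with whatever the user has entered so far."""
    while "leaf" not in node:
        branch = "left" if inputs[node["split_on"]] <= node["threshold"] else "right"
        node = node[branch]
    return node["leaf"]

# Re-predict after every update of the first value, as in the abstract.
preds = {typed: predict(tree, {"usage_hours": typed}) for typed in (5, 15, 50)}
```

Because tree traversal is a handful of comparisons, the second value can be refreshed on every keystroke.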
  • Publication number: 20190034385
    Abstract: A method is presented for predicting values of multiple input items. The method includes allowing a user to select a first set of variables and input first values therein and predicting second values for a second set of variables, the second values predicted in real-time as the first values are being inputted by the user. A tree-based prediction model is used to predict the second values. The tree-based prediction model is a regression tree or a decision tree.
    Type: Application
    Filed: July 28, 2017
    Publication date: January 31, 2019
    Inventors: Ryo Kawahara, Takayuki Osogami
  • Publication number: 20190019082
    Abstract: Cooperative neural networks reinforcement learning may be performed by obtaining an action and observation sequence, inputting each time frame of the action and observation sequence sequentially into a first neural network including a plurality of first parameters and a second neural network including a plurality of second parameters, approximating an action-value function using the first neural network, and updating the plurality of second parameters to approximate a policy of actions by using updated first parameters.
    Type: Application
    Filed: July 12, 2017
    Publication date: January 17, 2019
    Inventors: Sakyasingha Dasgupta, Tetsuro Morimura, Takayuki Osogami
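The division of labor in this abstract — a first network approximating the action-value function and a second network updated to approximate a policy using the updated first parameters — resembles an actor-critic arrangement. Below is a deliberately tiny bandit-style sketch of that pattern; the "networks" are reduced to per-action parameter vectors, and all constants are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
N_ACTIONS = 2

# Critic (first parameters): per-action value estimates.
q = np.zeros(N_ACTIONS)
# Actor (second parameters): per-action preferences defining the policy.
prefs = np.zeros(N_ACTIONS)

def policy():
    e = np.exp(prefs - prefs.max())
    return e / e.sum()

# A bandit-style environment: action 1 pays 1.0, action 0 pays 0.0.
for _ in range(500):
    p = policy()
    a = rng.choice(N_ACTIONS, p=p)
    reward = 1.0 if a == 1 else 0.0
    # Update the critic toward the observed return (action-value function).
    q[a] += 0.1 * (reward - q[a])
    # Update the actor using the *updated* critic values, as in the abstract.
    adv = q[a] - (p * q).sum()
    prefs[a] += 0.1 * adv
```

The policy concentrates on the higher-value action as the critic's estimates sharpen.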
  • Patent number: 10129350
    Abstract: In a first aspect of the present invention, provided are an information processing apparatus including a behavior-history acquisition unit configured to acquire behavior histories of first users identified by first-user identification information, a transmission-history acquisition unit configured to acquire information transmission histories of second users identified by second-user identification information, and a determination unit configured to determine identity between the first users and the second users on the basis of behavior details included in the behavior histories and transmission details included in the transmission histories; a method for processing information with the information processing apparatus; and a program using the information processing apparatus.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Risa Kawanaka, Takayuki Osogami
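The determination step in this abstract (shared with patent 10122811 below) — deciding whether a first-service user and a second-service user are the same person from overlapping behavior details and transmission details — can be illustrated with a simple set-overlap matcher. The identifiers, place names, and threshold are all invented for the example.

```python
# Hypothetical histories: behavior = places a user visited, transmission =
# places the user mentioned in posts, keyed by each service's identifier.
behavior = {"member_A": {"shibuya", "ginza", "ueno"},
            "member_B": {"osaka", "kyoto"}}
transmission = {"account_X": {"shibuya", "ueno", "asakusa"},
                "account_Y": {"kyoto", "nara"}}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def match(threshold=0.3):
    """Declare identity when behavior and transmission details overlap enough."""
    pairs = {}
    for u1, beh in behavior.items():
        best = max(transmission, key=lambda u2: jaccard(beh, transmission[u2]))
        if jaccard(beh, transmission[best]) >= threshold:
            pairs[u1] = best
    return pairs
```

Any overlap measure could stand in for Jaccard similarity; the essential step is comparing details across the two kinds of history.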
  • Patent number: 10122811
    Abstract: In a first aspect of the present invention, provided are an information processing apparatus including a behavior-history acquisition unit configured to acquire behavior histories of first users identified by first-user identification information, a transmission-history acquisition unit configured to acquire information transmission histories of second users identified by second-user identification information, and a determination unit configured to determine identity between the first users and the second users on the basis of behavior details included in the behavior histories and transmission details included in the transmission histories; a method for processing information with the information processing apparatus; and a program using the information processing apparatus.
    Type: Grant
    Filed: September 5, 2017
    Date of Patent: November 6, 2018
    Assignee: International Business Machines Corporation
    Inventors: Risa Kawanaka, Takayuki Osogami
  • Publication number: 20180240040
    Abstract: A training method is provided. The training method includes clustering, by a processor, a plurality of items that each have an item attribute value, according to the item attribute value. The training method further includes generating, by the processor, for each item, a cluster attribute value corresponding to a cluster associated with the item. The training method also includes training, by the processor, an estimation model for estimating selection behavior of a target with respect to a choice set including two or more items, based on the cluster attribute value associated with each item included in the choice set, by using training data that includes a group of a choice set of items presented to the target and an item selected by the target from among the choice set.
    Type: Application
    Filed: November 13, 2017
    Publication date: August 23, 2018
    Inventors: Tetsuro Morimura, Yachiko Obara, Takayuki Osogami
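The three steps of this training method — cluster items by attribute value, attach a cluster attribute to each item, then train a selection model from (choice set, chosen item) pairs using the cluster attributes — can be sketched with a coarse bucketing scheme and a frequency-based score. The attribute (price), bucket width, and training pairs are assumptions for illustration.

```python
from collections import defaultdict

# Hypothetical items, each with one attribute value (e.g., price).
items = {"a": 98, "b": 102, "c": 310, "d": 295}

# Step 1: cluster items by attribute value (coarse price bands here).
def cluster_of(value, width=200):
    return value // width

# Step 2: generate a cluster attribute value for each item.
clusters = {name: cluster_of(v) for name, v in items.items()}

# Step 3: train a per-cluster selection score from the training data,
# each entry being (choice set presented, item the target selected).
training = [({"a", "c"}, "a"), ({"b", "d"}, "b"), ({"a", "d"}, "a")]
shown, chosen = defaultdict(int), defaultdict(int)
for choice_set, pick in training:
    for item in choice_set:
        shown[clusters[item]] += 1
    chosen[clusters[pick]] += 1

score = {c: chosen[c] / shown[c] for c in shown}
```

Working at the cluster level lets sparse per-item data pool into better-estimated per-cluster parameters, which is the point of the clustering step.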
  • Publication number: 20180240037
    Abstract: A training method is provided. The training method includes clustering, by a processor, a plurality of items that each have an item attribute value, according to the item attribute value. The training method further includes generating, by the processor, for each item, a cluster attribute value corresponding to a cluster associated with the item. The training method also includes training, by the processor, an estimation model for estimating selection behavior of a target with respect to a choice set including two or more items, based on the cluster attribute value associated with each item included in the choice set, by using training data that includes a group of a choice set of items presented to the target and an item selected by the target from among the choice set.
    Type: Application
    Filed: February 23, 2017
    Publication date: August 23, 2018
    Inventors: Tetsuro Morimura, Yachiko Obara, Takayuki Osogami
  • Publication number: 20180211160
    Abstract: A neuromorphic electric system includes a network of plural neuron circuits connected in series and in parallel to form plural layers. Each of the plural neuron circuits includes: a soma circuit that stores a charge supplied thereto and outputs a spike signal; and plural synapse circuits that supply a charge to the soma circuit according to a spike signal fed to the synapse circuits, a number of the plural synapse circuits being one more than a number of plural neuron circuits in a prior layer outputting the spike signal to the synapse circuits. One of the plural synapse circuits supplies a charge to the soma circuit in response to receiving a series of pulse signals, and the others of the plural synapse circuits supply a charge to the soma circuit in response to receiving a spike signal from corresponding neuron circuits in the prior layer.
    Type: Application
    Filed: November 3, 2017
    Publication date: July 26, 2018
    Inventors: Kohji Hosokawa, Masatoshi Ishii, Atsuya Okazaki, Junka Okazawa, Takayuki Osogami
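A software stand-in for the circuit behavior described in this abstract (and the sibling filing below): a soma accumulates charge delivered by its synapses and emits a spike when a threshold is crossed, with one extra synapse per soma driven by a constant pulse train rather than by a prior-layer neuron. The weights, threshold, and spike pattern are invented for the example; a real neuromorphic system implements this in analog hardware, not Python.

```python
class Soma:
    """Stores supplied charge; outputs a spike when the threshold is reached."""
    def __init__(self, threshold=1.0):
        self.charge, self.threshold = 0.0, threshold

    def add(self, charge):
        self.charge += charge

    def step(self):
        if self.charge >= self.threshold:
            self.charge = 0.0
            return 1                      # spike signal
        return 0

class Synapse:
    """Supplies a weighted charge to its soma when it receives a spike."""
    def __init__(self, soma, weight):
        self.soma, self.weight = soma, weight

    def receive(self, spike):
        if spike:
            self.soma.add(self.weight)

soma = Soma(threshold=1.0)
# Two synapses for the two prior-layer neurons, plus one extra synapse
# driven by a series of pulse signals -- the "one more" in the abstract.
syn_in = [Synapse(soma, 0.4), Synapse(soma, 0.5)]
syn_bias = Synapse(soma, 0.3)

input_spikes = [(1, 1), (0, 0), (0, 1)]   # prior-layer spikes per time step
spikes = []
for pattern in input_spikes:
    syn_bias.receive(1)                   # constant pulse train, every step
    for syn, sp in zip(syn_in, pattern):
        syn.receive(sp)
    spikes.append(soma.step())
```

The pulse-train synapse acts like a bias input, letting the soma fire even when prior-layer activity alone would fall short of the threshold.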
  • Publication number: 20180211159
    Abstract: A neuromorphic electric system includes a network of plural neuron circuits connected in series and in parallel to form plural layers. Each of the plural neuron circuits includes: a soma circuit that stores a charge supplied thereto and outputs a spike signal; and plural synapse circuits that supply a charge to the soma circuit according to a spike signal fed to the synapse circuits, a number of the plural synapse circuits being one more than a number of plural neuron circuits in a prior layer outputting the spike signal to the synapse circuits. One of the plural synapse circuits supplies a charge to the soma circuit in response to receiving a series of pulse signals, and the others of the plural synapse circuits supply a charge to the soma circuit in response to receiving a spike signal from corresponding neuron circuits in the prior layer.
    Type: Application
    Filed: January 20, 2017
    Publication date: July 26, 2018
    Inventors: Kohji Hosokawa, Masatoshi Ishii, Atsuya Okazaki, Junka Okazawa, Takayuki Osogami
  • Publication number: 20180197082
    Abstract: A computer-implemented method and an apparatus are provided for learning a first model. The method includes generating a second model based on the first model. The first model is configured to perform a learning process based on sequentially inputting each of a plurality of pieces of input data that include a plurality of input values and that are from a first input data sequence. The second model is configured to learn a first learning target parameter included in the first model based on inputting, in an order differing from an order in the first model, each of a plurality of pieces of input data that include a plurality of input values and are from a second input data sequence. The method further includes performing a learning process using both the first model and the second model. The method also includes storing the first model that has been learned.
    Type: Application
    Filed: November 6, 2017
    Publication date: July 12, 2018
    Inventors: Hiroshi Kajino, Takayuki Osogami
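The core move in this abstract — a second model that learns the first model's target parameter by consuming the input data in a different order (here, reversed) — can be sketched with a one-parameter autoregressive model. To keep the toy honest, the example uses a time-reversible sequence so that both directions share the same optimum; the data, learning rate, and iteration count are assumptions.

```python
# Two copies of a one-parameter model x[t+1] ~ w * x[t] share the learning
# target w; one sees the sequence forward, the other reversed, and both
# contribute gradient updates to the same parameter.
data = [1.0, -1.0, 1.0, -1.0, 1.0]   # x[t+1] = -x[t]; reversal-invariant

w = 0.0
lr = 0.2
for _ in range(100):
    for seq in (data, data[::-1]):           # first model, then second model
        for prev, cur in zip(seq, seq[1:]):
            pred = w * prev
            w -= lr * (pred - cur) * prev    # shared-parameter update
```

Training the same parameter against both orderings acts as a consistency constraint: the learned model must explain the sequence whichever way time runs.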
  • Publication number: 20180197083
    Abstract: A computer-implemented method and an apparatus are provided for neural network reinforcement learning. The method includes obtaining, by a processor, an action and observation sequence. The method further includes inputting, by the processor, each of a plurality of time frames of the action and observation sequence sequentially into a plurality of input nodes of a neural network. The method also includes updating, by the processor, a plurality of parameters of the neural network by using the neural network to approximate an action-value function of the action and observation sequence.
    Type: Application
    Filed: November 6, 2017
    Publication date: July 12, 2018
    Inventors: Sakyasingha Dasgupta, Takayuki Osogami
  • Publication number: 20180197080
    Abstract: A computer-implemented method and an apparatus are provided for learning a first model. The method includes generating a second model based on the first model. The first model is configured to perform a learning process based on sequentially inputting each of a plurality of pieces of input data that include a plurality of input values and that are from a first input data sequence. The second model is configured to learn a first learning target parameter included in the first model based on inputting, in an order differing from an order in the first model, each of a plurality of pieces of input data that include a plurality of input values and are from a second input data sequence. The method further includes performing a learning process using both the first model and the second model. The method also includes storing the first model that has been learned.
    Type: Application
    Filed: January 11, 2017
    Publication date: July 12, 2018
    Inventors: Hiroshi Kajino, Takayuki Osogami
  • Publication number: 20180197079
    Abstract: A computer-implemented method and an apparatus are provided for neural network reinforcement learning. The method includes obtaining, by a processor, an action and observation sequence. The method further includes inputting, by the processor, each of a plurality of time frames of the action and observation sequence sequentially into a plurality of input nodes of a neural network. The method also includes updating, by the processor, a plurality of parameters of the neural network by using the neural network to approximate an action-value function of the action and observation sequence.
    Type: Application
    Filed: January 11, 2017
    Publication date: July 12, 2018
    Inventors: Sakyasingha Dasgupta, Takayuki Osogami
  • Patent number: 10009377
    Abstract: An information processing apparatus includes a policy acquisition unit configured to acquire a policy on disclosure of information on a target user; a collection unit configured to collect attributes that may be related to the target user from public information disclosed on a network to create an attribute set related to the target user; and a determination unit configured to determine whether or not the attribute set satisfies the policy.
    Type: Grant
    Filed: June 23, 2015
    Date of Patent: June 26, 2018
    Assignee: International Business Machines Corporation
    Inventors: Kohichi Kamijoh, Takayuki Osogami
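The three units in this abstract map naturally onto three small functions: acquire a policy, collect attributes from public information, and determine whether the attribute set satisfies the policy. The sketch below treats the policy as a set of attributes the user forbids disclosing and "collection" as hashtag extraction; the posts, tags, and policy contents are invented for illustration.

```python
def collect_attributes(public_posts):
    """Toy collection step: every hashtag-like token becomes an attribute."""
    attrs = set()
    for post in public_posts:
        attrs |= {w.lstrip("#") for w in post.split() if w.startswith("#")}
    return attrs

def satisfies(policy_forbidden, attrs):
    """Determination step: the policy holds if no forbidden attribute leaked."""
    return policy_forbidden.isdisjoint(attrs)

posts = ["enjoying lunch #tokyo", "new job! #engineer #acme_corp"]
attrs = collect_attributes(posts)
```

A real collector would aggregate attributes across many sources on the network; the determination step stays a set comparison either way.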
  • Publication number: 20180144266
    Abstract: An apparatus, a computer readable medium, and a learning method for learning a model corresponding to time-series input data, including acquiring the time-series input data, which is a time series of input data including a plurality of input values, propagating, to a plurality of nodes in a model, each of a plurality of propagation values obtained by weighting each input value at a plurality of time points before one time point according to passage of time points, in association with the plurality of input values at the one time point, calculating a node value of a first node among the plurality of nodes by using each propagated value propagated to the first node, and updating a weight parameter used to calculate each propagation value propagated to the first node, by using a corresponding input value and a calculated error of the node value at the one time point.
    Type: Application
    Filed: November 22, 2016
    Publication date: May 24, 2018
    Inventor: Takayuki Osogami
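The propagation scheme in this abstract — past input values weighted by the passage of time and accumulated toward a node, with the weight parameter updated from the node's error — can be sketched with a single geometrically decayed trace and one learnable weight. The decay rate, learning rate, and target series are assumptions; the targets were generated with a "true" weight of 0.3 so the sketch has a known answer.

```python
decay, lr = 0.5, 0.1
weight = 0.0                         # weight parameter to be learned
series = [1.0, 0.0, 1.0, 1.0, 0.0]
# Targets generated by applying a true weight of 0.3 to the same decayed trace.
targets = [0.0, 0.3, 0.15, 0.375, 0.4875]

for _ in range(100):
    trace = 0.0                      # attenuated sum of past input values
    for x, y in zip(series, targets):
        node_value = weight * trace  # propagate time-weighted past inputs
        # Update the weight using the node's error and the propagated value.
        weight -= lr * (node_value - y) * trace
        trace = decay * trace + x    # apply the time-based weighting, add input
```

Keeping only the running trace, rather than the full history, is what makes this style of time-series learning memory-efficient.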
  • Publication number: 20180129968
    Abstract: Provided are a computer program product, a learning apparatus and a learning method. The method includes calculating, by a processor, a first propagation value that is propagated from a propagation source node to a propagation destination node in a neural network including a plurality of nodes, based on node values of the propagation source node at a plurality of time points and a weight corresponding to passage of time points based on a first attenuation coefficient. The method further includes updating, by the processor, a first update parameter, which is used for updating the first attenuation coefficient, by using the first propagation value. The method also includes updating, by the processor, the first attenuation coefficient by using the first update parameter and an error of the node value of the propagation destination node.
    Type: Application
    Filed: November 7, 2016
    Publication date: May 10, 2018
    Inventor: Takayuki Osogami
  • Patent number: 9934771
    Abstract: A computer implemented method is provided for generating a prediction of a next musical note by a computer having at least a processor and a memory. A computer processor system is also provided for generating a prediction of a next musical note. The method includes storing sequential musical notes in the memory. The method further includes dividing, by the processor, the sequential musical notes into sections of a given length based on a Generative Theory of Tonal Music. The method also includes generating, by the processor, the prediction of the next musical note based upon a music model, the sections, and the sequential musical notes stored in the memory. The given length is determined based on one or more conditions.
    Type: Grant
    Filed: June 21, 2017
    Date of Patent: April 3, 2018
    Assignee: International Business Machines Corporation
    Inventors: Yachiko Obara, Shohei Ohsawa, Takayuki Osogami
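As a rough illustration of dividing a note sequence into sections and predicting the next note from the sectioned data, here is a bigram counter that only learns transitions inside section boundaries. Fixed-length sections stand in for the Generative Theory of Tonal Music segmentation in the abstract, and the note sequence and section length are invented.

```python
from collections import Counter, defaultdict

notes = list("CDECDECDEG")           # toy note sequence stored "in memory"
SECTION_LEN = 3                      # stand-in for GTTM-derived section length

# Count note-to-note transitions, but only within a section.
bigrams = defaultdict(Counter)
for i in range(len(notes) - 1):
    if i // SECTION_LEN == (i + 1) // SECTION_LEN:
        bigrams[notes[i]][notes[i + 1]] += 1

def predict_next(note):
    """Predict the most frequent within-section successor of a note."""
    return bigrams[note].most_common(1)[0][0]
```

Restricting statistics to within-section transitions is the point of segmenting first: cross-boundary pairs would otherwise pollute the model.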
  • Publication number: 20180075341
    Abstract: Regularization of neural networks. Neural networks can be regularized by obtaining an original neural network having a plurality of first-in-first-out (FIFO) queues, each FIFO queue located between a pair of nodes among a plurality of nodes of the original neural network, generating at least one modified neural network, the modified neural network being equivalent to the original neural network with a modified length of at least one FIFO queue, evaluating each neural network among the original neural network and the at least one modified neural network, and determining which neural network among the original neural network and the at least one modified neural network is most accurate, based on the evaluation.
    Type: Application
    Filed: December 28, 2016
    Publication date: March 15, 2018
    Applicant: International Business Machines Corporation
    Inventors: Sakyasingha Dasgupta, Takayuki Osogami
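The regularization procedure in this abstract — generate variants of a network that differ only in the length of a FIFO queue on an edge, evaluate each, and keep the most accurate — can be sketched with a one-edge "network" whose queue length is the only hyperparameter. The input/target series and candidate lengths are invented for the example.

```python
from collections import deque

def run(delay, inputs, targets):
    """Mean squared error of an identity network whose edge is a FIFO queue
    of the given length (the 'modified neural network' being evaluated)."""
    queue = deque([0.0] * delay)
    err = 0.0
    for x, y in zip(inputs, targets):
        queue.append(x)
        out = queue.popleft()        # value emerging from the FIFO queue
        err += (out - y) ** 2
    return err / len(inputs)

inputs = [1.0, 2.0, 3.0, 4.0, 5.0]
targets = [0.0, 0.0, 1.0, 2.0, 3.0]  # the inputs delayed by two steps

# Evaluate the original and modified networks; keep the most accurate.
scores = {d: run(d, inputs, targets) for d in (0, 1, 2, 3)}
best_delay = min(scores, key=scores.get)
```

With the targets lagging the inputs by two steps, the two-slot queue wins outright, mirroring the select-by-evaluation step in the abstract.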
  • Patent number: 9916541
    Abstract: An information processing apparatus includes a history acquisition section configured to acquire history data including a history indicating that a plurality of selection subjects have selected selection objects; a learning processing section configured to allow a choice model to learn a preference of each selection subject for a feature and an environmental dependence of selection of each selection object in each selection environment using the history data, where the choice model uses a feature value possessed by each selection object, the preference of each selection subject for the feature, and the environmental dependence indicative of ease of selection of each selection object in each of a plurality of selection environments to calculate a selectability with which each of the plurality of selection subjects selects each selection object; and an output section configured to output results of learning by the learning processing section.
    Type: Grant
    Filed: December 14, 2015
    Date of Patent: March 13, 2018
    Assignee: International Business Machines Corporation
    Inventors: Takayuki Katsuki, Takayuki Osogami
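The selectability computation in this abstract — combining each item's feature value, each subject's preference for that feature, and a per-environment ease-of-selection term — matches a multinomial-logit form. The sketch below uses invented items, subjects, environments, and parameter values; a real system would learn these from the history data.

```python
import math

# Selectability of item j for subject s in environment e is proportional to
# exp(preference[s] * feature[j] + env_bias[e][j]).
features = {"coffee": 1.0, "tea": -1.0}          # feature value per item
preference = {"alice": 0.8, "bob": -0.5}         # per-subject preference
env_bias = {"office": {"coffee": 0.5, "tea": 0.0},   # environmental dependence
            "home":   {"coffee": 0.0, "tea": 0.4}}

def selectability(subject, env):
    scores = {j: math.exp(preference[subject] * f + env_bias[env][j])
              for j, f in features.items()}
    total = sum(scores.values())
    return {j: v / total for j, v in scores.items()}

p = selectability("alice", "office")
```

Learning would adjust `preference` and `env_bias` to maximize the likelihood of the recorded selections; the output step then simply reports the fitted parameters.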