Patents by Inventor Seiya Tokui

Seiya Tokui has filed for patents to protect the following inventions. This listing includes both pending patent applications and patents already granted by the United States Patent and Trademark Office (USPTO).

  • Patent number: 11915146
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Grant
    Filed: November 11, 2022
    Date of Patent: February 27, 2024
    Assignee: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
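    The abstract above describes a define-by-run scheme: executing the forward code itself builds the reference structure that is later walked for backward processing. A minimal Python sketch of that idea (all class and function names here are illustrative, not taken from the patent):

    ```python
    # Minimal define-by-run sketch: executing the Forward code records, on each
    # output, a reference to the function that produced it. That chain is the
    # "reference structure for Backward processing" the abstract describes.

    class Variable:
        def __init__(self, value, creator=None):
            self.value = value
            self.creator = creator  # link back to the producing function
            self.grad = None

    class Square:
        """A function pairing a Forward computation with its Backward rule."""
        def __call__(self, x):
            self.input = x
            self.output = Variable(x.value * x.value, creator=self)
            return self.output

        def backward(self, grad_out):
            return 2.0 * self.input.value * grad_out

    def backprop(y):
        """Walk the recorded reference structure from output back to input."""
        y.grad = 1.0
        f = y.creator
        while f is not None:
            f.input.grad = f.backward(f.output.grad)
            f = f.input.creator

    x = Variable(3.0)
    y = Square()(Square()(x))  # forward execution builds the backward links
    backprop(y)
    print(x.grad)  # d/dx x**4 = 4*x**3 = 108.0 at x = 3
    ```

    Note that the user never declares a graph: the chain of `creator` references comes into existence as a side effect of running the forward code, which is the essence of the claimed execution model.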
  • Publication number: 20240046104
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Application
    Filed: October 16, 2023
    Publication date: February 8, 2024
    Applicant: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Publication number: 20230129676
    Abstract: A compiler, for generating machine code to be executed in a chip including a plurality of distributed memories connected by a tree structure topology, includes at least one memory and at least one processor. The at least one processor is configured to associate each element of a tensor to be processed with an address in the plurality of memories included in the chip, based on a stride and a number of divisions in a predetermined hierarchy of the tree structure with respect to the tensor to be processed.
    Type: Application
    Filed: October 24, 2022
    Publication date: April 27, 2023
    Inventor: Seiya Tokui
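    The abstract above sketches an address-assignment rule based on strides and a number of divisions. A toy version, assuming the tensor is split evenly across the memories at a single level of the tree (the function name and flat-address layout are my own, not from the patent):

    ```python
    def element_to_address(index, shape, strides, num_divisions):
        """Map one tensor element to (memory_id, local_offset).

        The flat address is computed from the element's multi-index and the
        tensor's strides; the tensor is then divided evenly across
        `num_divisions` memories at one level of the tree hierarchy.
        Assumes the element count is divisible by num_divisions.
        """
        flat = sum(i * s for i, s in zip(index, strides))
        total = 1
        for d in shape:
            total *= d
        chunk = total // num_divisions  # elements held by each memory
        return flat // chunk, flat % chunk

    # A 4x4 row-major tensor split across 4 memories: element (2, 3) has flat
    # address 2*4 + 3 = 11, so it lands in memory 2 at local offset 3.
    print(element_to_address((2, 3), (4, 4), (4, 1), 4))  # (2, 3)
    ```

    A real compiler for a tree-topology chip would apply such a rule recursively at each hierarchy level; this sketch shows only a single division step.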
  • Publication number: 20230111538
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Application
    Filed: November 11, 2022
    Publication date: April 13, 2023
    Applicant: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Patent number: 11521070
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Grant
    Filed: September 2, 2016
    Date of Patent: December 6, 2022
    Assignee: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Publication number: 20200167657
    Abstract: A training apparatus includes one or more memories and one or more processors. The one or more processors are configured to generate a graph based on a path of an error backward propagation, assign an identifier to each node based on the path of the error backward propagation in the graph, and execute the error backward propagation based on the graph and on the identifier.
    Type: Application
    Filed: November 25, 2019
    Publication date: May 28, 2020
    Applicant: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Daisuke Nishino, Hiroyuki Vincent Yamazaki, Naotoshi Seo, Akifumi Imanishi
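    One way to assign node identifiers from the backward-propagation path, as the abstract above describes, is a reverse DFS postorder, which yields a topological order of the graph. A small sketch (graph representation and names are illustrative assumptions, not the patent's):

    ```python
    def assign_ids(output, inputs_of):
        """Assign an identifier to every node reachable along the error
        backward-propagation path from `output`.

        `inputs_of` maps a node to the nodes its gradient flows into next.
        Reverse DFS postorder is a topological order, so executing backward
        in increasing-id order visits each node only after its consumers.
        """
        post, seen = [], set()

        def dfs(node):
            if node in seen:
                return
            seen.add(node)
            for nxt in inputs_of.get(node, []):
                dfs(nxt)
            post.append(node)

        dfs(output)
        post.reverse()  # output gets id 0, leaves get the largest ids
        return {node: i for i, node in enumerate(post)}

    # loss -> h -> {w, x}: the ids order the backward pass loss, h, then w, x.
    ids = assign_ids("loss", {"loss": ["h"], "h": ["w", "x"]})
    print(ids["loss"], ids["h"])  # 0 1
    ```

    Running backward in increasing-id order then guarantees a node's gradient is fully accumulated before it propagates further, which is the property the identifier assignment exists to provide.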
  • Publication number: 20190378018
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Application
    Filed: August 26, 2019
    Publication date: December 12, 2019
    Applicant: Preferred Networks, Inc.
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Publication number: 20190325346
    Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
    Type: Application
    Filed: June 28, 2019
    Publication date: October 24, 2019
    Applicant: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
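    The filter-then-mix flow in the abstract above can be sketched in a few lines. The metadata field names and the averaging mix rule are assumptions for illustration; the patent does not fix a specific mix operation here:

    ```python
    def filter_and_mix(received, task, structure):
        """Keep only received models whose structure metadata matches the
        requested task and model structure, then mix the survivors by
        averaging their weights element-wise."""
        compatible = [m for m in received
                      if m["task"] == task and m["structure"] == structure]
        mixed = [sum(ws) / len(compatible)
                 for ws in zip(*(m["weights"] for m in compatible))]
        return {"task": task, "structure": structure, "weights": mixed}

    received = [
        {"task": "t2", "structure": "mlp-2", "weights": [1.0, 3.0]},
        {"task": "t2", "structure": "mlp-2", "weights": [3.0, 5.0]},
        {"task": "t1", "structure": "mlp-2", "weights": [9.0, 9.0]},  # filtered out
    ]
    print(filter_and_mix(received, "t2", "mlp-2")["weights"])  # [2.0, 4.0]
    ```

    Filtering by structure metadata is what lets a heterogeneous group share models safely: only models whose shapes are compatible ever enter the mix operation.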
  • Patent number: 10387794
    Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
    Type: Grant
    Filed: January 22, 2015
    Date of Patent: August 20, 2019
    Assignee: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
  • Publication number: 20180349772
    Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
    Type: Application
    Filed: September 2, 2016
    Publication date: December 6, 2018
    Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
  • Publication number: 20180253665
    Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
    Type: Application
    Filed: May 3, 2018
    Publication date: September 6, 2018
    Applicant: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Justin Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
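    The two core primitives in the abstract above, leader election and the leader's mix operation, can be sketched as follows. The patent does not fix a particular election rule or mix function; lowest-id election and element-wise averaging are simple illustrative choices:

    ```python
    def elect_leader(device_ids):
        """One simple deterministic election rule: the lowest id wins.
        Every device applies the same rule, so all agree on the leader."""
        return min(device_ids)

    def mix(model_a, model_b):
        """The leader's mix operation, here an element-wise average of two
        local models' weight vectors."""
        return [(a + b) / 2.0 for a, b in zip(model_a, model_b)]

    group = {"edge-3", "edge-1", "edge-7"}
    leader = elect_leader(group)         # "edge-1"
    mixed = mix([2.0, 4.0], [4.0, 8.0])  # [3.0, 6.0]
    print(leader, mixed)
    ```

    In the claimed flow, each non-leader device transmits its local model to the elected leader, receives the mixed result back, and replaces its local model with it.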
  • Patent number: 9990587
    Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
    Type: Grant
    Filed: January 22, 2015
    Date of Patent: June 5, 2018
    Assignee: Preferred Networks, Inc.
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
  • Publication number: 20160217387
    Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
    Type: Application
    Filed: January 22, 2015
    Publication date: July 28, 2016
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
  • Publication number: 20160217388
    Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
    Type: Application
    Filed: January 22, 2015
    Publication date: July 28, 2016
    Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui