Patents by Inventor Seiya Tokui
Seiya Tokui has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 11915146
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Grant
Filed: November 11, 2022
Date of Patent: February 27, 2024
Assignee: Preferred Networks, Inc.
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
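The abstract above describes a define-by-run scheme: simply executing the forward code computes outputs and, at the same time, records a reference structure that backward processing can later traverse. A minimal illustrative sketch of that idea (all class and function names here are hypothetical, not the claimed implementation):

```python
# Define-by-run sketch: running forward code both computes a value and
# links the result back to the operation that created it, forming the
# reference structure used later for backward processing.
class Variable:
    def __init__(self, value, creator=None):
        self.value = value
        self.creator = creator  # reference structure for backward
        self.grad = 0.0

    def backward(self, grad=1.0):
        self.grad += grad
        if self.creator is not None:
            self.creator.backward(grad)

class Mul:
    """A layer whose Forward is associated with a stored Backward rule."""
    def forward(self, a, b):
        self.a, self.b = a, b
        # Forward computes the output AND records this node as creator.
        return Variable(a.value * b.value, creator=self)

    def backward(self, grad):
        # Product rule: d(ab)/da = b, d(ab)/db = a.
        self.a.backward(grad * self.b.value)
        self.b.backward(grad * self.a.value)

x = Variable(3.0)
y = Variable(4.0)
z = Mul().forward(x, y)  # executing the code builds the backward graph
z.backward()
print(x.grad, y.grad)    # 4.0 3.0
```

Because the graph is recorded during execution rather than declared up front, ordinary control flow (loops, branches) in the source code naturally shapes the backward structure.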
-
Publication number: 20240046104
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Application
Filed: October 16, 2023
Publication date: February 8, 2024
Applicant: Preferred Networks, Inc.
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
-
Publication number: 20230129676
Abstract: A compiler, for generating machine code to be executed in a chip including a plurality of distributed memories connected by a tree structure topology, includes at least one memory and at least one processor. The at least one processor is configured to associate each element of a tensor to be processed with an address in the plurality of memories included in the chip, based on a stride and a number of divisions in a predetermined hierarchy of the tree structure with respect to the tensor to be processed.
Type: Application
Filed: October 24, 2022
Publication date: April 27, 2023
Inventor: Seiya Tokui
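The mapping this abstract describes, from tensor elements to addresses in divided memories based on a stride and a number of divisions, can be illustrated with a rough sketch. This is an assumed block-cyclic interpretation for one level of the tree, not the patented address scheme:

```python
def element_to_address(index, stride, num_divisions):
    """Map a flat tensor element index to (memory_id, local_offset).

    Assumed scheme: consecutive runs of `stride` elements form a block,
    and blocks are distributed round-robin across `num_divisions`
    memories at one hierarchy level of the tree.
    """
    block = index // stride
    memory_id = block % num_divisions
    local_offset = (block // num_divisions) * stride + index % stride
    return memory_id, local_offset

# With stride=2 and 2 memories, elements 0-7 land as:
# 0,1 -> mem 0; 2,3 -> mem 1; 4,5 -> mem 0; 6,7 -> mem 1
for i in range(8):
    print(i, element_to_address(i, stride=2, num_divisions=2))
```

Varying the stride and division count per hierarchy level is what would let a compiler tile a tensor across a whole tree of memories.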
-
Publication number: 20230111538
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Application
Filed: November 11, 2022
Publication date: April 13, 2023
Applicant: Preferred Networks, Inc.
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
-
Patent number: 11521070
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Grant
Filed: September 2, 2016
Date of Patent: December 6, 2022
Assignee: Preferred Networks, Inc.
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
-
Publication number: 20200167657
Abstract: A training apparatus includes one or more memories and one or more processors. The one or more processors are configured to generate a graph based on a path of an error backward propagation, assign an identifier to each node based on the path of the error backward propagation in the graph, and execute the error backward propagation based on the graph and on the identifier.
Type: Application
Filed: November 25, 2019
Publication date: May 28, 2020
Applicant: Preferred Networks, Inc.
Inventors: Seiya Tokui, Daisuke Nishino, Hiroyuki Vincent Yamazaki, Naotoshi Seo, Akifumi Imanishi
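The abstract describes assigning each node an identifier based on the backward-propagation path and then executing backpropagation in identifier order. A simplified sketch of that idea, assuming the identifiers come from a topological walk of the graph (the `Node` class and numbering rule are illustrative assumptions):

```python
# Sketch: assign identifiers along the backward-propagation path, then
# visit nodes in reverse-identifier order when propagating the error.
class Node:
    def __init__(self, name, inputs=()):
        self.name = name
        self.inputs = list(inputs)

def assign_ids(output):
    """Assign a topological identifier to every node reachable from
    the output; backward propagation can then run in reverse-id order."""
    ids, order = {}, []
    def visit(node):
        if node in ids:
            return
        for parent in node.inputs:
            visit(parent)
        ids[node] = len(ids)
        order.append(node)
    visit(output)
    return ids, order

a = Node("a"); b = Node("b")
c = Node("mul", [a, b])
d = Node("add", [c, b])
ids, order = assign_ids(d)
# Error backward propagation visits nodes from highest id to lowest,
# so every node's gradient is complete before its inputs are processed.
print([n.name for n in reversed(order)])  # ['add', 'mul', 'b', 'a']
```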
-
Publication number: 20190378018
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Application
Filed: August 26, 2019
Publication date: December 12, 2019
Applicant: Preferred Networks, Inc.
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
-
Publication number: 20190325346
Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
Type: Application
Filed: June 28, 2019
Publication date: October 24, 2019
Applicant: Preferred Networks, Inc.
Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
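The filter-then-mix step this abstract describes, keeping only received models whose structure metadata matches a given task, then combining them, can be sketched as follows. The dictionary layout and the choice of weight averaging as the mix operation are assumptions for illustration, not the claimed method:

```python
# Sketch: filter received local models by structure metadata, then mix
# the matching ones into a single model for the second task.
def filter_and_mix(received_models, task):
    """Keep models whose metadata matches `task`; mix by averaging
    weights (one of many possible mix operations)."""
    matching = [m for m in received_models if m["metadata"]["task"] == task]
    if not matching:
        return None
    n = len(matching)
    mixed_weights = {
        key: sum(m["weights"][key] for m in matching) / n
        for key in matching[0]["weights"]
    }
    return {"metadata": {"task": task}, "weights": mixed_weights}

received = [
    {"metadata": {"task": "detect"}, "weights": {"w": 1.0}},
    {"metadata": {"task": "classify"}, "weights": {"w": 9.0}},  # filtered out
    {"metadata": {"task": "detect"}, "weights": {"w": 3.0}},
]
mixed = filter_and_mix(received, "detect")
print(mixed["weights"])  # {'w': 2.0}
```

Filtering by structure metadata is what lets devices in a heterogeneous group cooperate: only models with compatible structure for the same task are ever mixed.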
-
Patent number: 10387794
Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
Type: Grant
Filed: January 22, 2015
Date of Patent: August 20, 2019
Assignee: Preferred Networks, Inc.
Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
-
Publication number: 20180349772
Abstract: There is provided an information processing device which efficiently executes machine learning. The information processing device according to one embodiment includes: an obtaining unit which obtains a source code including a code which defines Forward processing of each layer constituting a neural network; a storage unit which stores an association relationship between each Forward processing and Backward processing associated with each Forward processing; and an executing unit which successively executes each code included in the source code, and which calculates an output value of the Forward processing defined by the code based on an input value at a time of execution of each code, and generates a reference structure for Backward processing in a layer associated with the code based on the association relationship stored in the storage unit.
Type: Application
Filed: September 2, 2016
Publication date: December 6, 2018
Inventors: Seiya Tokui, Yuya Unno, Kenta Oono, Ryosuke Okuta
-
Publication number: 20180253665
Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
Type: Application
Filed: May 3, 2018
Publication date: September 6, 2018
Applicant: Preferred Networks, Inc.
Inventors: Daisuke Okanohara, Justin Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
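The leader-based flow in this abstract, elect a leader, send it the local models, let it mix them, and replace each member's local model with the result, can be sketched in a few lines. The lowest-id election rule and averaging mix are illustrative assumptions, not the claimed protocol:

```python
# Sketch: elect a leader edge device, mix the group's local models at
# the leader, and replace every member's local model with the result.
def elect_leader(group):
    """Assumed deterministic rule: the device with the lowest id leads."""
    return min(group, key=lambda device: device["id"])

def mix(models):
    """Leader-side mix operation: average each weight across models."""
    n = len(models)
    return {key: sum(m[key] for m in models) / n for key in models[0]}

group = [
    {"id": 7, "model": {"w": 2.0}},
    {"id": 3, "model": {"w": 4.0}},
]
leader = elect_leader(group)            # device 3 leads
mixed = mix([d["model"] for d in group])  # {'w': 3.0}
for device in group:
    device["model"] = mixed             # members adopt the mixed model
```

After the replacement step, every device in the group starts its next round of local learning from the same mixed model.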
-
Patent number: 9990587
Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
Type: Grant
Filed: January 22, 2015
Date of Patent: June 5, 2018
Assignee: Preferred Networks, Inc.
Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
-
Publication number: 20160217387
Abstract: Machine learning with model filtering and model mixing for edge devices in a heterogeneous environment is disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, and a model mixing module. The edge device analyzes collected data with a model for a first task, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group, transmits a request for local models to the heterogeneous group, and receives local models from the heterogeneous group. The edge device filters the local models by structure metadata, including second local models, which relate to a second task. The edge device performs a mix operation of the second local models to generate a mixed model which relates to the second task, and transmits the mixed model to the heterogeneous group.
Type: Application
Filed: January 22, 2015
Publication date: July 28, 2016
Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui
-
Publication number: 20160217388
Abstract: A machine learning heterogeneous edge device, method, and system are disclosed. In an example embodiment, an edge device includes a communication module, a data collection device, a memory, a machine learning module, a group determination module, and a leader election module. The edge device analyzes collected data with a model, outputs a result, and updates the model to create a local model. The edge device communicates with other edge devices in a heterogeneous group. The edge device determines group membership and determines a leader edge device. The edge device receives a request for the local model, transmits the local model to the leader edge device, receives a mixed model created by the leader edge device performing a mix operation of the local model and a different local model, and replaces the local model with the mixed model.
Type: Application
Filed: January 22, 2015
Publication date: July 28, 2016
Inventors: Daisuke Okanohara, Justin B. Clayton, Toru Nishikawa, Shohei Hido, Nobuyuki Kubota, Nobuyuki Ota, Seiya Tokui