Patents by Inventor Takeharu EDA
Takeharu EDA has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Publication number: 20240354970
Abstract: In an inference system including an edge and a server, an acquisition unit acquires information regarding movement of a predetermined subject imaged by a first camera among cameras on the edge at least at a first time point. An estimation unit estimates a second camera that images the predetermined subject at a second time point later than the first time point on the basis of a movement destination of the predetermined subject.
Type: Application
Filed: July 21, 2021
Publication date: October 24, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kyoku SHI, Takeharu EDA, Akira SAKAMOTO, Shohei ENOMOTO, Ichiro MORINAGA
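The abstract above describes predicting which camera will capture a subject next from its observed movement. The sketch below illustrates one way such an estimation could look; the ground-plane coordinates, the `Camera` class, and the nearest-field-of-view-centre heuristic are illustrative assumptions, not the patented method.

```python
# A minimal sketch, assuming a shared ground-plane coordinate system: estimate
# which camera will capture a subject at a later time from its observed
# movement. The nearest-centre heuristic is an illustrative stand-in.
from dataclasses import dataclass
import math


@dataclass
class Camera:
    name: str
    x: float  # field-of-view centre in the shared ground-plane coordinates
    y: float


def estimate_next_camera(position, velocity, dt, cameras):
    """Extrapolate the subject's position by dt seconds and return the
    camera whose field-of-view centre is closest to that point."""
    px = position[0] + velocity[0] * dt
    py = position[1] + velocity[1] * dt
    return min(cameras, key=lambda c: math.hypot(c.x - px, c.y - py))


if __name__ == "__main__":
    cameras = [Camera("cam_entrance", 0.0, 0.0),
               Camera("cam_corridor", 10.0, 0.0),
               Camera("cam_exit", 20.0, 5.0)]
    # Subject seen near cam_entrance, moving 2 m/s along the corridor.
    next_cam = estimate_next_camera((1.0, 0.0), (2.0, 0.0), dt=5.0, cameras=cameras)
    print(next_cam.name)  # -> cam_corridor
```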
-
Publication number: 20240338939
Abstract: A processing system (100) executes a processing method for performing inference processing in an edge device (20) and a server device (30). The method includes a first transmission process in which the edge device (20) transmits first data, based on data to be inferred, to the server device (30), which performs first inference, and a second transmission process in which the edge device (20) transmits second data, based on the data to be inferred, to an execution unit that performs second inference in response to a request from the server device (30). The request from the server device (30) is made when the confidence of the result of the first inference performed in the server device (30) is equal to or less than a predetermined confidence.
Type: Application
Filed: July 14, 2021
Publication date: October 10, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Ichiro MORINAGA, Takeharu EDA, Akira SAKAMOTO, Kyoku SHI, Shohei ENOMOTO
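This abstract describes a confidence-gated, two-stage inference flow: the server performs a first inference, and a second inference is requested only when the result's confidence does not exceed a threshold. A minimal sketch of that control flow follows; the threshold value, the model callables, and the data placeholders are assumptions for illustration.

```python
# A minimal sketch, assuming placeholder models: the server's first inference
# result is accepted unless its confidence is at or below a threshold, in which
# case a second inference is requested on the second data.
from typing import Callable, Tuple

CONFIDENCE_THRESHOLD = 0.8  # illustrative value


def run_pipeline(first_data, second_data,
                 server_infer: Callable[[object], Tuple[str, float]],
                 second_infer: Callable[[object], Tuple[str, float]]):
    """Return (label, confidence); each callable returns (label, confidence)."""
    label, confidence = server_infer(first_data)       # first inference on the server
    if confidence <= CONFIDENCE_THRESHOLD:             # low confidence: request second inference
        label, confidence = second_infer(second_data)
    return label, confidence


if __name__ == "__main__":
    # Toy stand-ins for the two inference stages.
    server_model = lambda x: ("cat", 0.55)
    second_model = lambda x: ("dog", 0.93)
    print(run_pipeline("compressed frame", "full frame", server_model, second_model))
```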
-
Publication number: 20240290083
Abstract: A processing system (100) executes inference processing in an edge device (30) and a server device (20). The edge device (30) includes an inference unit (31) that executes inference related to a first task on inference target data by using a first DNN (DNN1), and a determination unit (32) that transmits an intermediate output value of the DNN1 used to execute the inference related to the first task to the server device (20), so that the server device (20) executes a second task which is different from the first task and has a higher operation amount than the first task.
Type: Application
Filed: June 24, 2021
Publication date: August 29, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kyoku SHI, Takeharu EDA, Akira SAKAMOTO, Shohei ENOMOTO, Ichiro MORINAGA
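This abstract describes split computing: the edge runs DNN1 for a light first task and forwards its intermediate output to the server for a heavier second task. The sketch below shows the general shape of such a pipeline, assuming tiny numpy stand-ins for the networks rather than the actual DNN1 or server-side model.

```python
# A minimal sketch, assuming toy numpy "networks": the edge computes DNN1's
# intermediate activations and a light first-task head, and only the compact
# activations are sent to the server, which runs a heavier second task on them.
import numpy as np

rng = np.random.default_rng(0)

# Edge-side DNN1: a backbone producing intermediate features, plus a small head
# for the first task (e.g. object presence).
W_backbone = rng.normal(size=(64, 16))
W_task1 = rng.normal(size=(16, 2))

# Server-side model for the second, more expensive task (e.g. fine-grained
# classification) that consumes the intermediate features.
W_task2 = rng.normal(size=(16, 100))


def edge_inference(x):
    features = np.maximum(x @ W_backbone, 0.0)   # intermediate output of DNN1
    task1_logits = features @ W_task1            # first task, done on the edge
    return task1_logits, features


def server_inference(features):
    return features @ W_task2                    # second task, done on the server


if __name__ == "__main__":
    frame = rng.normal(size=(1, 64))
    task1_logits, features = edge_inference(frame)
    # Only the compact features cross the network, not the raw frame.
    task2_logits = server_inference(features)
    print(task1_logits.shape, task2_logits.shape)  # (1, 2) (1, 100)
```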
-
Publication number: 20240095581
Abstract: A processing method is executed by a processing system that performs first inference in an edge device and second inference in a server device. The method includes determining, in at least one of the edge device or the server device, whether the tendency of the target data group on which inference is performed has changed, on the basis of a variation in load or a decrease in inference accuracy in at least one of the edge device or the server device, and executing relearning of at least one of a first model that performs the first inference or a second model that performs the second inference when it is determined that the tendency of the target data group has changed.
Type: Application
Filed: November 24, 2020
Publication date: March 21, 2024
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Kyoku SHI, Shohei ENOMOTO, Takeharu EDA, Akira SAKAMOTO
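This abstract describes detecting a change in the tendency of the inference target data, from a load variation or an accuracy drop, and triggering relearning when such a change is found. A minimal sketch of one possible trigger follows; the window size, thresholds, and the `DriftMonitor` class are hypothetical.

```python
# A minimal sketch, assuming illustrative thresholds: watch recent inference
# accuracy and load against a baseline and, when either degrades past a limit,
# report that relearning of the edge and/or server models should be triggered.
from collections import deque
from statistics import mean

ACCURACY_DROP = 0.05   # accuracy drop that signals a tendency change
LOAD_JUMP = 1.5        # load ratio that signals a tendency change
WINDOW = 50            # number of recent samples compared against the baseline


class DriftMonitor:
    def __init__(self, baseline_accuracy, baseline_load):
        self.baseline_accuracy = baseline_accuracy
        self.baseline_load = baseline_load
        self.accuracy = deque(maxlen=WINDOW)
        self.load = deque(maxlen=WINDOW)

    def observe(self, correct: bool, load: float) -> bool:
        """Record one inference; return True if relearning should be triggered."""
        self.accuracy.append(1.0 if correct else 0.0)
        self.load.append(load)
        if len(self.accuracy) < WINDOW:
            return False
        accuracy_dropped = mean(self.accuracy) < self.baseline_accuracy - ACCURACY_DROP
        load_jumped = mean(self.load) > self.baseline_load * LOAD_JUMP
        return accuracy_dropped or load_jumped


if __name__ == "__main__":
    monitor = DriftMonitor(baseline_accuracy=0.95, baseline_load=1.0)
    for _ in range(WINDOW):
        if monitor.observe(correct=False, load=2.0):   # simulated degraded stream
            print("tendency change detected: trigger relearning")
            break
```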
-
Publication number: 20230409884
Abstract: A processing system is implemented using an edge device and a server device. The edge device includes first processing circuitry configured to input each piece of divided data, obtained by dividing processing data into a plurality of pieces, to a corresponding first model among a plurality of first models, cause inference to be executed in each of the first models, and output to the server device only those pieces of the divided data for which the inference result in the corresponding first model is determined to match a predetermined condition. The server device includes second processing circuitry configured to execute inference processing on the divided data output from the edge device by using a second model having a higher amount of computation than the first models.
Type: Application
Filed: November 24, 2020
Publication date: December 21, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Akira SAKAMOTO, Ichiro MORINAGA, Kyoku SHI, Shohei ENOMOTO, Takeharu EDA
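This abstract describes dividing the processing data into pieces, filtering them with lightweight first models on the edge, and sending only the pieces that satisfy a condition to the heavier second model on the server. The sketch below illustrates that filtering step; the piece size, the toy first model, and the condition are placeholders.

```python
# A minimal sketch, assuming placeholder models and conditions: divide the data
# into pieces, score each piece with a lightweight first model on the edge, and
# keep only the pieces that satisfy the condition for transmission to the
# server's heavier second model.
from typing import Callable, Iterable, List


def divide(data: List[float], n_pieces: int) -> List[List[float]]:
    size = max(1, len(data) // n_pieces)
    return [data[i:i + size] for i in range(0, len(data), size)]


def edge_filter(pieces: Iterable[List[float]],
                first_model: Callable[[List[float]], float],
                condition: Callable[[float], bool]) -> List[List[float]]:
    """Run the first model on every piece; keep only pieces satisfying the condition."""
    return [piece for piece in pieces if condition(first_model(piece))]


if __name__ == "__main__":
    data = [0.1, 0.2, 0.9, 0.8, 0.05, 0.95]          # e.g. per-tile scores of a frame
    pieces = divide(data, n_pieces=3)
    first_model = lambda piece: max(piece)            # toy "detection score"
    selected = edge_filter(pieces, first_model, condition=lambda s: s > 0.5)
    # Only `selected` would be transmitted to the server's second model.
    print(selected)
```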
-
Publication number: 20230224360
Abstract: A processing system performs processing using an edge device and a server device. The edge device includes processing circuitry configured to process processing target data and output a processing result of the data, determine that the server device is to execute processing related to the processing target data when an evaluation value, used to evaluate which of the edge device and the server device is to process the data, satisfies a condition, determine that the evaluation value falls within a range for which processing is to be executed by the edge device when the processing result satisfies a predetermined evaluation, output the processing result of the processed data, and transmit data that causes the server device to execute the processing related to the processing target data when it determines that the server device is to execute the processing.
Type: Application
Filed: June 15, 2020
Publication date: July 13, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Masahisa KAWASHIMA, Daisuke HAMURO, Yoshinori MATSUO, Takeharu EDA, Akira SAKAMOTO, Shohei ENOMOTO, Kyoku SHI
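This abstract describes an edge/server offloading decision driven by an evaluation value, with the decision range adjusted when the edge result turns out to satisfy a predetermined evaluation. The sketch below shows one plausible shape of such a controller; the confidence score, threshold, and adjustment step are assumptions.

```python
# A minimal sketch, assuming a confidence score as the evaluation value: scores
# above the threshold are handled on the edge, others are offloaded, and the
# threshold is relaxed when the edge result turns out to satisfy the evaluation.
class OffloadController:
    def __init__(self, threshold: float = 0.7, step: float = 0.02):
        self.threshold = threshold   # scores >= threshold are handled on the edge
        self.step = step

    def decide(self, score: float) -> str:
        return "edge" if score >= self.threshold else "server"

    def feedback(self, score: float, edge_result_was_good: bool) -> None:
        # If the edge result satisfied the predetermined evaluation, widen the
        # "edge" range so similar inputs stay on the edge next time.
        if edge_result_was_good and score < self.threshold:
            self.threshold = max(score, self.threshold - self.step)


if __name__ == "__main__":
    ctrl = OffloadController()
    print(ctrl.decide(0.65))            # -> server
    ctrl.feedback(0.65, edge_result_was_good=True)
    print(ctrl.decide(0.69))            # -> edge (threshold lowered to 0.68)
```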
-
Patent number: 11640428
Abstract: An index generation unit (15a) generates an index in which a plurality of combinations of query data that is a collation source and target data that is a collation destination are listed in a predetermined order. A batch generation unit (15b) uses a plurality of combinations of query data and target data in an order according to the index to generate a batch with a predetermined volume. A collation unit (16) calculates a degree of similarity between the query data and the target data for each combination included in the batch, in which processes are parallelized and performed.
Type: Grant
Filed: August 23, 2019
Date of Patent: May 2, 2023
Assignee: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Sanae Muramatsu, Takeharu Eda, Keita Mikami
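This patent's abstract describes listing (query, target) combinations in a fixed order, packing them into batches of a predetermined volume, and computing the similarity of every combination in a batch with parallelized processing. A minimal sketch of that flow follows; cosine similarity and the batch size are illustrative choices, not details from the patent.

```python
# A minimal sketch, assuming cosine similarity and a small batch size: build an
# index of (query, target) combinations in a fixed order, pack them into
# batches, and compute all similarities in a batch in one vectorised pass.
import numpy as np


def build_index(n_queries: int, n_targets: int):
    """All (query, target) combinations in a predetermined (row-major) order."""
    return [(q, t) for q in range(n_queries) for t in range(n_targets)]


def batches(index, batch_size: int):
    for i in range(0, len(index), batch_size):
        yield index[i:i + batch_size]


def collate(queries: np.ndarray, targets: np.ndarray, batch_size: int = 4):
    index = build_index(len(queries), len(targets))
    results = {}
    for batch in batches(index, batch_size):
        q_idx, t_idx = zip(*batch)
        q = queries[list(q_idx)]
        t = targets[list(t_idx)]
        # One vectorised pass computes every similarity in the batch.
        sims = np.sum(q * t, axis=1) / (np.linalg.norm(q, axis=1) * np.linalg.norm(t, axis=1))
        results.update(dict(zip(batch, sims)))
    return results


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    queries = rng.normal(size=(3, 8))   # e.g. feature vectors to collate
    targets = rng.normal(size=(5, 8))
    sims = collate(queries, targets)
    print(max(sims, key=sims.get))      # most similar (query, target) pair
```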
-
Publication number: 20230112076
Abstract: An estimation unit inputs learning data to a lightweight model, which outputs an estimation result in accordance with input data, and acquires a first estimation result. An updating unit then updates a parameter of the lightweight model so that a model cascade including the lightweight model and a high-accuracy model is optimized in accordance with the first estimation result and a second estimation result obtained by inputting the learning data to the high-accuracy model, which is a model that outputs an estimation result in accordance with input data and has a lower processing speed or a higher estimation accuracy than the lightweight model.
Type: Application
Filed: March 6, 2020
Publication date: April 13, 2023
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shohei ENOMOTO, Takeharu EDA, Akira SAKAMOTO, Kyoku SHI, Yoshihiro IKEDA
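This abstract describes updating the lightweight model so that the cascade of lightweight and high-accuracy models is optimized using both models' outputs. The sketch below shows one plausible instantiation of cascade-aware training, using a cross-entropy loss plus a distillation-style term toward the high-accuracy model; the loss weighting and model definitions are assumptions, not the patented formulation.

```python
# A minimal sketch, assuming PyTorch and a distillation-style cascade loss: the
# lightweight model's parameters are updated with a loss that uses both its own
# output and the frozen high-accuracy model's output.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

lightweight = torch.nn.Linear(16, 10)                 # trainable edge model
high_accuracy = torch.nn.Linear(16, 10)               # frozen, slower, more accurate model
for p in high_accuracy.parameters():
    p.requires_grad_(False)

optimizer = torch.optim.SGD(lightweight.parameters(), lr=0.1)

x = torch.randn(32, 16)                               # learning data
y = torch.randint(0, 10, (32,))                       # labels

for _ in range(10):
    light_logits = lightweight(x)                     # first estimation result
    with torch.no_grad():
        heavy_logits = high_accuracy(x)               # second estimation result
    task_loss = F.cross_entropy(light_logits, y)
    cascade_loss = F.kl_div(F.log_softmax(light_logits, dim=1),
                            F.softmax(heavy_logits, dim=1),
                            reduction="batchmean")
    loss = task_loss + 0.5 * cascade_loss             # optimise the cascade jointly
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(float(loss))
```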
-
Publication number: 20210334706
Abstract: An augmentation apparatus (10) causes a generative model that generates data from a label to learn first data and second data to which a label has been added. In addition, the augmentation apparatus (10) uses the generative model that learned the first data and the second data to generate data for augmentation from the label added to the first data. In addition, the augmentation apparatus (10) adds the label added to the first data to augmented data obtained by integrating the first data and the data for augmentation.
Type: Application
Filed: August 22, 2019
Publication date: October 28, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Shinya YAMAGUCHI, Takeharu EDA, Sanae MURAMATSU
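This abstract describes fitting a label-conditional generative model to two datasets, generating augmentation data from the label attached to the first dataset, and integrating the generated data with the first dataset under that label. The sketch below follows that flow, with a class-conditional Gaussian standing in for the generative model; all class and variable names are illustrative.

```python
# A minimal sketch, assuming a class-conditional Gaussian as a stand-in for the
# generative model: fit it to the first and second data, sample new data from
# the first data's label, and merge the samples with the first data.
import numpy as np

rng = np.random.default_rng(0)


class ConditionalGaussian:
    """Toy label-conditional generative model."""
    def fit(self, X, y):
        self.stats = {label: (X[y == label].mean(axis=0), X[y == label].std(axis=0) + 1e-6)
                      for label in np.unique(y)}
        return self

    def generate(self, label, n):
        mean, std = self.stats[label]
        return rng.normal(mean, std, size=(n, mean.shape[0]))


# First data (small, labelled) and second data (additional labelled data).
X1, y1 = rng.normal(0.0, 1.0, (20, 4)), np.zeros(20, dtype=int)
X2, y2 = rng.normal(3.0, 1.0, (200, 4)), np.ones(200, dtype=int)

model = ConditionalGaussian().fit(np.vstack([X1, X2]), np.concatenate([y1, y2]))

# Generate augmentation data from the first data's label and integrate it.
X_aug = model.generate(label=0, n=100)
X_augmented = np.vstack([X1, X_aug])
y_augmented = np.zeros(len(X_augmented), dtype=int)   # the first data's label is re-attached
print(X_augmented.shape, y_augmented.shape)
```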
-
Publication number: 20210326384
Abstract: An index generation unit (15a) generates an index in which a plurality of combinations of query data that is a collation source and target data that is a collation destination are listed in a predetermined order. A batch generation unit (15b) uses a plurality of combinations of query data and target data in an order according to the index to generate a batch with a predetermined volume. A collation unit (16) calculates a degree of similarity between the query data and the target data for each combination included in the batch, in which processes are parallelized and performed.
Type: Application
Filed: August 23, 2019
Publication date: October 21, 2021
Applicant: NIPPON TELEGRAPH AND TELEPHONE CORPORATION
Inventors: Sanae MURAMATSU, Takeharu EDA, Keita MIKAMI