Patents by Inventor Motoki Yoshinaga

Motoki Yoshinaga has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).

  • Publication number: 20240402995
    Abstract: A data processing apparatus includes a multiply-accumulate operation unit configured to perform multiply-accumulate operation on data based on a given multiply-accumulate operation parameter, an arithmetic operation result retention unit configured to retain an arithmetic operation result obtained by the multiply-accumulate operation unit, a command processing unit configured to, based on a given command, perform processing defined by the command on the arithmetic operation result retained by the arithmetic operation result retention unit, and a distribution unit configured to receive a parameter sequence obtained by a combination of the multiply-accumulate operation parameter and the command and configured to, based on a content and sequential order of the combination, give the multiply-accumulate operation parameter to the multiply-accumulate operation unit and give the command to the command processing unit.
    Type: Application
    Filed: June 3, 2024
    Publication date: December 5, 2024
    Inventors: Motoki Yoshinaga, Masami Kato
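A minimal sketch of the dispatch scheme this abstract describes: a distribution unit walks a combined sequence of multiply-accumulate parameters and commands and routes each entry by kind, with commands acting on the retained accumulation result. All class and command names here are illustrative assumptions, not taken from the patent.

```python
class MacUnit:
    """Multiply-accumulate unit; the accumulator is the retained result."""
    def __init__(self):
        self.acc = 0

    def mac(self, a, b):
        self.acc += a * b

class CommandProcessor:
    """Applies a command to the retained arithmetic operation result."""
    def process(self, command, mac_unit):
        if command == "clear":
            mac_unit.acc = 0
        elif command == "relu":
            mac_unit.acc = max(0, mac_unit.acc)

def distribute(sequence, mac_unit, cmd_proc):
    """Route each entry of the combined parameter/command sequence in order."""
    for kind, payload in sequence:
        if kind == "mac":
            mac_unit.mac(*payload)
        else:
            cmd_proc.process(payload, mac_unit)

mac = MacUnit()
distribute([("mac", (2, 3)), ("mac", (-4, 2)), ("cmd", "relu")],
           mac, CommandProcessor())
print(mac.acc)  # 2*3 + (-4)*2 = -2, then relu -> 0
```

The point of the sequence encoding is that the content and order of the combination decide which unit receives each entry, so a single stream drives both the arithmetic and the post-processing.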
  • Publication number: 20240404265
    Abstract: A computation apparatus comprises a first processing unit configured to obtain a first feature by executing computation of a neural network with use of a first coefficient that is not to be updated in online learning of the neural network, a second processing unit configured to obtain a second feature by executing the computation of the neural network with use of the first feature and a second coefficient that is to be updated in the online learning, and an update unit configured to update the second coefficient by executing the online learning with use of the second coefficient and a second feature that has been obtained by the second processing unit in the past. Processing of the first processing unit and processing of the update unit are executed in parallel.
    Type: Application
    Filed: May 22, 2024
    Publication date: December 5, 2024
    Inventors: Kazuhiro Mima, Motoki Yoshinaga
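A hedged sketch of the overlap the abstract claims: the frozen first stage (coefficient w1, never updated online) and the update of the second stage's coefficient w2 operate on different data, so they can run in parallel. Here they are simply interleaved to expose the data dependencies; the scalar model and SGD rule are stand-ins chosen for illustration.

```python
w1 = 2.0          # frozen first-stage coefficient (not updated online)
w2 = 1.0          # second-stage coefficient, updated online
lr = 0.1
past_second = None  # (feature, target) from an earlier frame

def first_stage(x):
    """First processing unit: fixed feature extraction."""
    return w1 * x

def second_stage(feat):
    """Second processing unit: uses the updatable coefficient."""
    return w2 * feat

for x, target in [(1.0, 4.0), (2.0, 8.0), (3.0, 12.0)]:
    f1 = first_stage(x)            # can overlap with the update below
    if past_second is not None:    # update unit: uses a *past* feature
        feat, tgt = past_second
        w2 -= lr * (w2 * feat - tgt) * feat   # toy SGD step
    f2 = second_stage(f1)
    past_second = (f1, target)

print(round(w2, 3))
```

Because the update consumes only a feature obtained in the past, it never waits on the current frame's first-stage output, which is the property that makes the parallel execution safe.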
  • Publication number: 20240345945
    Abstract: An apparatus includes memories that hold feature planes, each corresponding to a layer of a neural network, a calculation unit that performs calculation processing on the feature planes, and a memory control unit that reads a feature plane from any of the memories and inputs it to the calculation unit, and writes a feature plane output from the calculation unit to any of the memories. In a case where feature planes corresponding to different layers are connected and the calculation processing is performed, the memory control unit writes the feature planes to be connected to memories other than a specific memory among the memories, reads the feature planes to be connected from those memories, inputs them to the calculation unit, and writes the feature plane output from the calculation unit to the specific memory.
    Type: Application
    Filed: April 11, 2024
    Publication date: October 17, 2024
    Inventors: Yutaka Murata, Masami Kato, Tsewei Chen, Motoki Yoshinaga
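A loose sketch of the bank rule above, with all names assumed: the feature planes to be concatenated are written to memories other than one designated "specific" memory, read back from those memories into the calculation unit, and the result is written to the specific memory.

```python
banks = {0: None, 1: None, 2: None}   # on-chip memories
SPECIFIC = 0                          # bank reserved for the concat result

def write(bank, plane):
    banks[bank] = plane

def concat_and_process(src_banks):
    """Read the planes to be connected and run a toy calculation on them."""
    joined = []
    for b in src_banks:
        joined += banks[b]
    return [v * 2 for v in joined]    # stand-in for the real calculation

write(1, [1, 2])                      # layer-A plane: not the specific bank
write(2, [3, 4])                      # layer-B plane: not the specific bank
write(SPECIFIC, concat_and_process([1, 2]))  # result -> the specific bank
print(banks[SPECIFIC])
```

Keeping the sources and the destination in disjoint memories is what lets the read of the connected planes and the write of the output proceed without overwriting data still being consumed.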
  • Publication number: 20240177475
    Abstract: A data processing apparatus includes a storage unit configured to store a plurality of types of parameter groups to be used in a plurality of types of recognition tasks, a selection unit configured to select two or more recognition tasks to be executed from among the plurality of types of recognition tasks, a holding unit configured to hold parameter groups, a transfer unit configured to transfer parameter groups to be used in the two or more recognition tasks in sequence from the storage unit to the holding unit, and an execution unit configured to execute the two or more recognition tasks in sequence using the parameter groups held in the holding unit.
    Type: Application
    Filed: November 22, 2023
    Publication date: May 30, 2024
    Inventors: Masami Kato, Tsewei Chen, Motoki Yoshinaga
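A sketch of the transfer-then-execute loop described above, with assumed task names and a trivial stand-in for "executing a recognition task": parameter groups for the selected tasks are moved one at a time from bulk storage into a small holding buffer, and each task runs with the group currently held.

```python
storage = {                       # storage unit: all parameter groups
    "face":   [1, 2],
    "object": [3, 4],
    "text":   [5, 6],
}
selected = ["object", "face"]     # selection unit picks two tasks

results = []
holding = None                    # holding unit (fits one group at a time)
for task in selected:             # transfer and execute in sequence
    holding = storage[task]      # transfer unit
    results.append((task, sum(holding)))  # execution unit (toy "task")

print(results)
```

The holding buffer only needs to be as large as one parameter group, which is the usual motivation for this kind of sequential-transfer arrangement.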
  • Publication number: 20240112352
    Abstract: An information processing apparatus comprises a processing unit configured to execute, in frame periods corresponding to respective frames, an inference relating to the frame and training relating to the frame. When an inference relating to a first frame has been completed but training relating to the first frame has not been completed in a first frame period corresponding to the first frame, the processing unit executes training relating to the first frame and an inference and training relating to a second frame in a second frame period corresponding to the second frame subsequent to the first frame.
    Type: Application
    Filed: September 28, 2023
    Publication date: April 4, 2024
    Inventors: Motoki Yoshinaga, Masami Kato
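A toy model of the scheduling rule this abstract describes, under an assumed fixed work budget per frame period: if training for frame N does not finish within period N, the leftover is carried into period N+1, which then runs that leftover plus frame N+1's inference and training.

```python
BUDGET = 10                      # assumed work units per frame period

def schedule(frames):
    """frames: list of (inference_cost, training_cost) per frame."""
    periods = []
    carry = 0                    # unfinished training from the prior frame
    for infer, train in frames:
        work = []
        if carry:
            work.append(("carry_train", carry))
        work.append(("infer", infer))
        used = carry + infer
        room = max(0, BUDGET - used)
        done = min(train, room)
        if done:
            work.append(("train", done))
        carry = train - done     # leftover training slips to the next period
        periods.append(work)
    return periods

print(schedule([(4, 8), (4, 4)]))
```

In the example, frame 1 fits its inference and only 6 of 8 training units, so period 2 carries 2 units of frame 1's training alongside frame 2's own inference and training.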
  • Patent number: 11853864
    Abstract: A data processing apparatus for executing data processing using a neural network including a plurality of hierarchical levels includes an extraction unit configured to extract intermediate feature data from input feature data, a calculation unit configured to calculate output feature data by reducing the number of channels of the intermediate feature data, a storage unit configured to store the output feature data calculated by the calculation unit and provide the input feature data to the extraction unit, and a control unit configured to control the number of channels of the intermediate feature data to be extracted by the extraction unit and the number of channels of the output feature data to be calculated by the calculation unit.
    Type: Grant
    Filed: February 24, 2020
    Date of Patent: December 26, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Motoki Yoshinaga, Tsewei Chen, Masami Kato
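A shape-level sketch of this channel-control scheme: an extraction step expands input feature data into a controller-chosen number of intermediate channels, and a calculation step reduces the channel count again. The expand/average operations below are placeholders for the real extraction and reduction arithmetic.

```python
def extract(features, mid_channels):
    """Expand to mid_channels by cycling input channels (stand-in for conv)."""
    c = len(features)
    return [features[i % c] for i in range(mid_channels)]

def reduce_channels(intermediate, out_channels):
    """Reduce the channel count by averaging groups of intermediate channels."""
    group = len(intermediate) // out_channels
    return [sum(intermediate[i * group:(i + 1) * group]) / group
            for i in range(out_channels)]

features = [1.0, 3.0]                    # 2 input channels (1 value each)
mid = extract(features, mid_channels=4)  # controller chose 4 intermediate
out = reduce_channels(mid, out_channels=2)
print(len(mid), len(out))
```

Letting a controller set both channel counts per layer is what allows the same datapath to trade accuracy against memory and compute from one layer to the next.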
  • Publication number: 20230342230
    Abstract: A data processing apparatus comprises one or more memories storing instructions and one or more processors that, upon execution of the instructions, are configured to sequentially perform processing of data by a plurality of hierarchically connected processing nodes, store, in the one or more memories, processing results of the plurality of respective processing nodes, and processing statuses of and parameters for the plurality of respective processing nodes, the parameters being used to determine a processing node to perform the processing, cyclically specify processing nodes, from among the plurality of processing nodes, to perform the processing in an order based on hierarchy, determine whether the processing by a specified processing node is performable based on the stored processing statuses, and determine a processing node to perform the processing based on a result of the determination and the stored parameter for the specified processing node.
    Type: Application
    Filed: April 24, 2023
    Publication date: October 26, 2023
    Inventors: Motoki Yoshinaga, Kinya Osa, Masami Kato
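A rough sketch of the scheduler described above: nodes are visited cyclically, and a visited node runs only when its stored status says its input (the parent's result) is available; otherwise the cycle moves on. The three-node chain and status representation are illustrative assumptions.

```python
nodes = ["B", "A", "C"]           # visiting order (cyclic)
parent = {"A": None, "B": "A", "C": "B"}   # hierarchy: A -> B -> C
results = {}                      # retained processing results
order = []                        # actual execution order

i = 0
while len(results) < len(nodes):
    node = nodes[i % len(nodes)]  # cyclic specification of a node
    i += 1
    p = parent[node]
    performable = node not in results and (p is None or p in results)
    if performable:               # status check decides whether it runs
        results[node] = f"out({node})"
        order.append(node)

print(order)
```

Even though B is specified first, the status check defers it until A's result exists, so execution still respects the hierarchy: A, then B, then C.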
  • Publication number: 20230334820
    Abstract: An apparatus sets coefficients in a first array based on first information indicating an image capturing orientation of a first image, generates a first map by applying the coefficients to the first image, acquires a template feature corresponding to an object based on the first map, registers the template feature in an array based on the first information, sets coefficients in a second array based on second information indicating an image capturing orientation of a second image, generates a second map by applying the coefficients set in the second array to the second image, sets the template feature in a feature array based on the second information, performs a correlation calculation between the template feature set in the feature array and the second map, and detects the object from the second image based on a result of the correlation calculation.
    Type: Application
    Filed: March 29, 2023
    Publication date: October 19, 2023
    Inventors: Masami Kato, Tsewei Chen, Shiori Wakino, Motoki Yoshinaga
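A much-reduced sketch of the orientation handling above: capture-orientation metadata decides how values are laid into their arrays, so a template registered from one image can be aligned with a map from an image captured at a different orientation before the correlation calculation. A 1-D reversal stands in for the 2-D rotation; everything else is assumed for illustration.

```python
def set_array(values, orientation):
    """Lay values into an array according to the capture orientation."""
    return values if orientation == 0 else values[::-1]  # 0 or 180 degrees

def correlate(a, b):
    """Toy correlation calculation between two equal-length features."""
    return sum(x * y for x, y in zip(a, b))

first_map = set_array([1, 2, 3], orientation=0)
template = first_map                          # registered per first image

second_map = set_array([1, 2, 3], orientation=180)   # same scene, rotated
aligned = set_array(template, orientation=180)       # re-lay per second image
print(correlate(aligned, second_map))
```

Without the orientation-aware re-laying, correlating the raw template against the rotated map would score lower than the aligned match.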
  • Patent number: 11574188
    Abstract: There is provided with a data processing apparatus. An acquisition unit acquires feature plane data of a layer included in a neural network. A control unit outputs a first control signal corresponding to the layer for controlling first compression processing and a second control signal corresponding to the layer for controlling second compression processing. A first compression unit performs the first compression processing corresponding to the first control signal on the feature plane data. A second compression unit performs the second compression processing corresponding to the second control signal on the feature plane data after the first compression processing. The type of the second compression processing is different from that of the first compression processing.
    Type: Grant
    Filed: March 5, 2020
    Date of Patent: February 7, 2023
    Assignee: Canon Kabushiki Kaisha
    Inventors: Motoki Yoshinaga, Tsewei Chen, Masami Kato
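An illustrative sketch of this two-stage pipeline: a per-layer control signal configures a first compression and a second compression of a different type, applied to the feature-plane data after the first stage. Quantization and run-length encoding are assumed stand-ins; the patent does not specify these particular methods.

```python
def first_compression(feature_plane, step):
    """Quantize each value to a multiple of `step` (first control signal)."""
    return [round(v / step) * step for v in feature_plane]

def second_compression(values, enabled):
    """Run-length encode (second control signal may disable it per layer)."""
    if not enabled:
        return values
    out = []
    for v in values:
        if out and out[-1][0] == v:
            out[-1] = (v, out[-1][1] + 1)
        else:
            out.append((v, 1))
    return out

plane = [0.9, 1.1, 1.0, 3.2]
q = first_compression(plane, step=1.0)       # -> [1.0, 1.0, 1.0, 3.0]
print(second_compression(q, enabled=True))
```

Pairing a lossy first stage with a lossless second stage of a different type is a common division of labor: the first stage creates the redundancy (runs of equal values) that the second stage then exploits.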
  • Publication number: 20220392207
    Abstract: An information processing apparatus operable to perform computation processing in a neural network comprises a coefficient storage unit configured to store filter coefficients of the neural network, a feature storage unit configured to store feature data, a storage control unit configured to store in the coefficient storage unit a part of previously obtained feature data as template feature data, a convolution operation unit configured to compute new feature data by a convolution operation between feature data stored in the feature storage unit and filter coefficients stored in the coefficient storage unit, and compute, by a convolution operation between feature data stored in the feature storage unit and the template feature data stored in the coefficient storage unit, correlation data between the feature data stored in the feature storage unit and the template feature data.
    Type: Application
    Filed: May 26, 2022
    Publication date: December 8, 2022
    Inventors: Masami Kato, Shiori Wakino, Tsewei Chen, Kinya Osa, Motoki Yoshinaga
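A sketch of the reuse idea in this abstract: the same sliding dot-product engine that applies trained filter coefficients can also take a stored slice of earlier feature data as its "coefficients", so correlation against a template feature needs no separate hardware. The 1-D toy below assumes all names and data.

```python
def convolve(feature, coefficients):
    """Sliding dot product: the one operation the unit implements."""
    k = len(coefficients)
    return [sum(feature[i + j] * coefficients[j] for j in range(k))
            for i in range(len(feature) - k + 1)]

feature = [0, 1, 2, 1, 0, 2, 4, 2]
template = feature[5:8]       # part of earlier feature data, stored in the
                              # coefficient storage as the template feature

correlation = convolve(feature, template)   # same engine, template as weights
best = correlation.index(max(correlation))
print(best)  # position where the feature best matches the template
```

Since cross-correlation and convolution differ only in whether the kernel is flipped, loading the template into the coefficient storage turns the existing convolution datapath into a template matcher for free.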
  • Publication number: 20200293885
    Abstract: There is provided with a data processing apparatus. An acquisition unit acquires feature plane data of a layer included in a neural network. A control unit outputs a first control signal corresponding to the layer for controlling first compression processing and a second control signal corresponding to the layer for controlling second compression processing. A first compression unit performs the first compression processing corresponding to the first control signal on the feature plane data. A second compression unit performs the second compression processing corresponding to the second control signal on the feature plane data after the first compression processing. The type of the second compression processing is different from that of the first compression processing.
    Type: Application
    Filed: March 5, 2020
    Publication date: September 17, 2020
    Inventors: Motoki Yoshinaga, Tsewei Chen, Masami Kato
  • Publication number: 20200285961
    Abstract: A data processing apparatus for executing data processing using a neural network including a plurality of hierarchical levels includes an extraction unit configured to extract intermediate feature data from input feature data, a calculation unit configured to calculate output feature data by reducing the number of channels of the intermediate feature data, a storage unit configured to store the output feature data calculated by the calculation unit and provide the input feature data to the extraction unit, and a control unit configured to control the number of channels of the intermediate feature data to be extracted by the extraction unit and the number of channels of the output feature data to be calculated by the calculation unit.
    Type: Application
    Filed: February 24, 2020
    Publication date: September 10, 2020
    Inventors: Motoki Yoshinaga, Tsewei Chen, Masami Kato