Patents by Inventor Yuki Arikawa
Yuki Arikawa has filed for patents to protect the following inventions. This listing includes patent applications that are pending as well as patents that have already been granted by the United States Patent and Trademark Office (USPTO).
-
Patent number: 12210410
Abstract: A transfer processing device includes an arithmetic instruction number acquisition circuit, a buffer circuit, a transfer information acquisition circuit, and a software processing unit. The arithmetic instruction number acquisition circuit acquires a transfer instruction number corresponding to transfer information, which is information related to the next transfer destination of an arithmetic instruction. The buffer circuit is arranged between the arithmetic instruction number acquisition circuit and the transfer information acquisition circuit, and temporarily stores and relays the arithmetic instruction and the arithmetic instruction number supplied from the arithmetic instruction number acquisition circuit to the transfer information acquisition circuit. The transfer information acquisition circuit acquires transfer information on the basis of the arithmetic instruction number, and gives the acquired transfer information to the arithmetic instruction.
Type: Grant
Filed: December 10, 2020
Date of Patent: January 28, 2025
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Tsuyoshi Ito, Kenji Tanaka, Yuki Arikawa, Kazuhiko Terada, Tsutomu Takeya, Takeshi Sakamoto
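For illustration only, the sketch below is a minimal Python analogue of the staged flow this abstract describes: an instruction is tagged with a number, relayed through a buffer, and then annotated with the transfer destination looked up from that number. The class and table names (`TransferPipeline`, `transfer_table`) and the example destinations are hypothetical, not from the patent.

```python
from collections import deque
from dataclasses import dataclass

@dataclass
class Instruction:
    opcode: str
    number: int | None = None          # arithmetic instruction number
    transfer_info: str | None = None   # next transfer destination, filled in later

class TransferPipeline:
    """Toy model: number acquisition -> buffer -> transfer-information lookup."""
    def __init__(self, transfer_table: dict[int, str]):
        self.transfer_table = transfer_table
        self.buffer: deque[Instruction] = deque()   # stands in for the buffer circuit
        self._next_number = 0

    def acquire_number(self, inst: Instruction) -> None:
        # Stage 1: assign the number that keys the transfer information, then relay via the buffer.
        inst.number = self._next_number
        self._next_number += 1
        self.buffer.append(inst)

    def acquire_transfer_info(self) -> Instruction:
        # Stage 2: pop from the buffer and attach the looked-up destination to the instruction.
        inst = self.buffer.popleft()
        inst.transfer_info = self.transfer_table.get(inst.number)
        return inst

pipeline = TransferPipeline({0: "accelerator-A", 1: "accelerator-B"})
pipeline.acquire_number(Instruction("add"))
print(pipeline.acquire_transfer_info())
```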
-
Publication number: 20250002955
Abstract: The aim is to provide, for example: an exopolysaccharide (EPS) production method with improved EPS productivity; EPS produced by the production method; a method for producing a fermented product comprising EPS, the method having improved EPS productivity; a fermented product produced by the production method; an agent for promoting production of EPS by bacteria of the genus Bifidobacterium; and a method for promoting production of an exopolysaccharide (EPS) by a bacterium of the genus Bifidobacterium. The bacterium of the genus Bifidobacterium is cultured in a culture medium comprising L-fucose.
Type: Application
Filed: September 20, 2022
Publication date: January 2, 2025
Inventors: Ayaka Tamura, Wakako Ohtsubo, Haruki Kitazawa, Binghui Zhou, Yuki Arikawa
-
Publication number: 20240430131
Abstract: A data transfer device transfers a received packet. If the transfer destination of the received packet is a data processing device under its control, the data transfer device confirms the state of a shared bus and, if the shared bus is usable, transfers the packet to the data processing device of the transfer destination via the shared bus. The data transfer device also gives the use right of the shared bus to one of the data processing devices in response to communication requirements from the data processing devices, and receives, via the shared bus, a packet from the data processing device given the use right.
Type: Application
Filed: November 8, 2022
Publication date: December 26, 2024
Inventors: Naoki Miura, Takeshi Sakamoto, Yuki Arikawa, Tsuyoshi Ito, Kenji Tanaka, Masaru Katayama, Yuta Ukon
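A rough software model of the bus arbitration described above may help make the flow concrete. This is a hypothetical sketch (the `SharedBus`/`DataTransferDevice` classes, the first-come grant policy, and the packet dictionaries are illustrative assumptions, not the patented design).

```python
class SharedBus:
    """Single-owner bus: only the device holding the use right may transmit."""
    def __init__(self):
        self.owner = None
    def is_usable(self) -> bool:
        return self.owner is None
    def grant(self, device_id: str) -> None:
        self.owner = device_id
    def release(self) -> None:
        self.owner = None

class DataTransferDevice:
    def __init__(self, bus: SharedBus, controlled: dict[str, list]):
        self.bus = bus
        self.controlled = controlled          # device id -> inbox of delivered packets

    def transfer(self, packet: dict) -> bool:
        # Forward only if the destination is under this device's control and the bus is free.
        dest = packet["dest"]
        if dest in self.controlled and self.bus.is_usable():
            self.bus.grant("transfer-device")
            self.controlled[dest].append(packet)
            self.bus.release()
            return True
        return False

    def handle_request(self, requesters: list[str]) -> str:
        # Give the use right of the shared bus to one requesting data processing device.
        winner = requesters[0]
        self.bus.grant(winner)
        return winner

    def receive_from(self, device_id: str, packet: dict):
        # Accept a packet over the bus only from the device currently holding the use right.
        if self.bus.owner == device_id:
            self.bus.release()
            return packet
        return None

bus = SharedBus()
dtd = DataTransferDevice(bus, {"dpu-0": [], "dpu-1": []})
dtd.transfer({"dest": "dpu-0", "payload": b"hello"})
winner = dtd.handle_request(["dpu-1", "dpu-0"])
print(winner, dtd.receive_from(winner, {"dest": "uplink", "payload": b"reply"}))
```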
-
Publication number: 20240429920
Abstract: An FPGA includes a reconfigurable circuit region, an access acceptance unit that transfers data to be processed included in a function use request from a client to a function circuit constructed in the circuit region and returns a processing result to the client, and a function token table in which a function ID that is identification information for each portion of the circuit region, a function name representing a function of the function circuit, and a token that is identification information of the function circuit are stored in association with each other.
Type: Application
Filed: August 31, 2021
Publication date: December 26, 2024
Inventors: Kenji Tanaka, Yuki Arikawa, Tsuyoshi Ito, Takeshi Sakamoto
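A small data-structure sketch, purely illustrative, of a function token table and an access acceptance path as the abstract describes them. The token generation scheme, class names, and the callable stand-ins for function circuits are assumptions made for the example.

```python
import secrets
from typing import Callable

class FunctionTokenTable:
    """Associates a function ID of a region portion with (function name, token)."""
    def __init__(self):
        self._table: dict[int, tuple[str, str]] = {}

    def register(self, function_id: int, function_name: str) -> str:
        token = secrets.token_hex(8)                 # identification token for this circuit
        self._table[function_id] = (function_name, token)
        return token

    def lookup_by_token(self, token: str):
        for function_id, (name, stored) in self._table.items():
            if stored == token:
                return function_id, name
        return None

class AccessAcceptanceUnit:
    """Forwards client data to the function circuit named in the request and returns the result."""
    def __init__(self, table: FunctionTokenTable, circuits: dict[int, Callable]):
        self.table = table
        self.circuits = circuits                     # function_id -> stand-in for the circuit

    def use_function(self, token: str, data):
        entry = self.table.lookup_by_token(token)
        if entry is None:
            raise PermissionError("unknown token")
        function_id, _name = entry
        return self.circuits[function_id](data)      # processing result back to the client

table = FunctionTokenTable()
tok = table.register(function_id=0, function_name="vector-add")
unit = AccessAcceptanceUnit(table, {0: lambda xs: [x + 1 for x in xs]})
print(unit.use_function(tok, [1, 2, 3]))
```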
-
Publication number: 20240419403
Abstract: An embodiment is a deep learning inference system including a memory and processors configured to read operation code and parameters from a global memory space of the memory and perform an arithmetic operation of a neural network. The processors are further configured to read processing target data from local memory spaces corresponding to target clients, perform arithmetic operations, and store arithmetic operation results in the local memory spaces corresponding to the target clients.
Type: Application
Filed: December 7, 2021
Publication date: December 19, 2024
Inventors: Kenji Tanaka, Yuki Arikawa, Tsuyoshi Ito, Naoki Miura, Takeshi Sakamoto
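The memory split the abstract describes (shared operation code and parameters, per-client input and output) can be illustrated with a short sketch. The affine operation, dictionary layout, and client naming are assumptions for the example, not the claimed implementation.

```python
class InferenceMemory:
    """Toy layout: one global space shared by all clients, one local space per client."""
    def __init__(self, weights, bias):
        self.global_space = {"opcode": "affine", "params": (weights, bias)}
        self.local_spaces: dict[str, dict] = {}      # client id -> {"input": ..., "output": ...}

class InferenceProcessor:
    def __init__(self, memory: InferenceMemory):
        self.memory = memory

    def run(self, client_id: str) -> None:
        # Read operation code and parameters from the global memory space.
        opcode = self.memory.global_space["opcode"]
        weights, bias = self.memory.global_space["params"]
        # Read the processing-target data from the client's own local memory space.
        x = self.memory.local_spaces[client_id]["input"]
        if opcode == "affine":
            y = [sum(w * xi for w, xi in zip(row, x)) + b for row, b in zip(weights, bias)]
        else:
            raise ValueError(opcode)
        # Store the arithmetic result back into the same client's local space.
        self.memory.local_spaces[client_id]["output"] = y

mem = InferenceMemory(weights=[[1.0, 0.0], [0.0, 2.0]], bias=[0.5, -0.5])
mem.local_spaces["client-a"] = {"input": [3.0, 4.0]}
InferenceProcessor(mem).run("client-a")
print(mem.local_spaces["client-a"]["output"])   # [3.5, 7.5]
```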
-
Publication number: 20240414351
Abstract: An AI inference system includes an encoder circuit that encodes an image captured by a camera, a decoder circuit that decodes the image, a first inference processing unit that executes first inference processing on the decoded image for each frame, a second inference processing unit that executes second inference processing with higher accuracy than the first inference processing, only for frames whose confidence value from the first inference exceeds a predetermined threshold, and a frame rate control unit that reduces the frame rates of the encoder circuit and the decoder circuit from an initial value to a low-speed value when the confidence value is equal to or less than the predetermined threshold.
Type: Application
Filed: November 22, 2021
Publication date: December 12, 2024
Inventors: Kenji Tanaka, Yuki Arikawa, Tsuyoshi Ito, Naoki Miura, Takeshi Sakamoto
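To make the control flow easier to follow, here is a minimal sketch of the two-stage inference and frame-rate decision exactly as the abstract states it. The threshold, frame-rate values, and stub inference functions are invented for the example.

```python
from dataclasses import dataclass

THRESHOLD = 0.8
INITIAL_FPS, LOW_FPS = 30, 5      # illustrative values only

@dataclass
class Frame:
    frame_id: int
    image: object

def first_inference(frame: Frame) -> float:
    """Lightweight first-stage pass; returns a confidence value (stub)."""
    return 0.9 if frame.frame_id % 2 == 0 else 0.3

def second_inference(frame: Frame) -> str:
    """Higher-accuracy second-stage pass, run only on selected frames (stub)."""
    return f"detailed-result-for-frame-{frame.frame_id}"

def process_stream(frames: list[Frame]) -> None:
    fps = INITIAL_FPS
    for frame in frames:
        confidence = first_inference(frame)
        if confidence > THRESHOLD:
            # Only frames whose first-stage confidence exceeds the threshold
            # are handed to the higher-accuracy second stage.
            print(f"frame {frame.frame_id}: {second_inference(frame)}")
        else:
            # Low confidence: drop the encoder/decoder frame rate to the low-speed value.
            fps = LOW_FPS
            print(f"frame {frame.frame_id}: confidence {confidence:.2f} <= threshold, fps -> {fps}")

process_stream([Frame(i, None) for i in range(4)])
```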
-
Publication number: 20240414103
Abstract: A computing system includes top-of-rack switches, a spine switch, and PIN blades arranged between the top-of-rack switches and resource blades. Each of the PIN blades includes a plurality of processing blocks, and each of the processing blocks includes a functional unit that performs predetermined processing on data included in a received packet.
Type: Application
Filed: December 7, 2021
Publication date: December 12, 2024
Inventors: Kenji Tanaka, Yuki Arikawa, Tsuyoshi Ito, Naoki Miura, Takeshi Sakamoto
-
Publication number: 20240413908
Abstract: A relay system relays data transmitted to any one of a plurality of transmission destinations, and includes a reception unit that receives the data, a conversion unit that converts the data received by the reception unit into an optical signal having a wavelength varying according to a transmission destination to which the data is to be transmitted among the plurality of transmission destinations, and an output unit that outputs the optical signal converted by the conversion unit to an optical transmission line.
Type: Application
Filed: December 16, 2021
Publication date: December 12, 2024
Inventors: Tsuyoshi Ito, Yuki Arikawa, Kenji Tanaka
-
Publication number: 20240411601
Abstract: An embodiment is a computer system for processing input data, the system including a plurality of calculators and a host connected to the plurality of calculators and configured to control them. The processed data is transferred between the plurality of calculators. Each calculator includes a trace buffer configured to record trace data upon detection of a predetermined event in the input data, and the trace data has a timestamp value that is the detection time of the event.
Type: Application
Filed: November 12, 2021
Publication date: December 12, 2024
Inventors: Yuki Arikawa, Naoki Miura, Kenji Tanaka, Tsuyoshi Ito, Takeshi Sakamoto, Yusuke Muranaka
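A minimal sketch of an event-triggered trace buffer with detection-time timestamps, in the spirit of the abstract. The ring-buffer capacity, the threshold-crossing event predicate, and the record format are illustrative assumptions.

```python
import time
from collections import deque

class TraceBuffer:
    """Records a trace record, stamped with the detection time, whenever a
    predetermined event is detected in the observed input data."""
    def __init__(self, event_predicate, capacity: int = 1024):
        self.event_predicate = event_predicate
        self.records: deque = deque(maxlen=capacity)   # bounded like a hardware trace buffer

    def observe(self, sample) -> None:
        if self.event_predicate(sample):
            # Timestamp value = detection time of the event.
            self.records.append({"timestamp": time.monotonic_ns(), "data": sample})

# Hypothetical event: the monitored value crosses a threshold.
trace = TraceBuffer(event_predicate=lambda v: v > 100)
for value in [12, 250, 7, 130]:
    trace.observe(value)
for record in trace.records:
    print(record)
```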
-
Publication number: 20240411602
Abstract: A computing machine is capable of adding or deleting a computational resource R for processing input data supplied from outside, and includes: a state information acquisition unit that acquires state information indicating a state of the computing machine; and a performance estimation unit that estimates, on the basis of the state indicated by the state information, a change in processing performance of the computing machine when at least one of dynamic addition or deletion of a computational resource or an increase in data amount of the input data or output data occurs.
Type: Application
Filed: December 8, 2021
Publication date: December 12, 2024
Inventors: Yuki Arikawa, Kenji Tanaka, Tsuyoshi Ito, Naoki Miura, Takeshi Sakamoto
-
Publication number: 20240402992
Abstract: An embodiment is an electronic computer including a plurality of arithmetic circuits that sequentially execute a plurality of processings on processing data, and a controller that executes a program and controls the plurality of arithmetic circuits so that they sequentially execute the plurality of processings. An arithmetic circuit ID is assigned to each of the plurality of arithmetic circuits. The plurality of arithmetic circuits include a first arithmetic circuit that executes a first processing among the plurality of processings, and a second arithmetic circuit that executes a second processing on the processing result of the first processing. The first arithmetic circuit transmits the processing result of the first processing using the arithmetic circuit ID of the second arithmetic circuit as the destination.
Type: Application
Filed: November 12, 2021
Publication date: December 5, 2024
Inventors: Naoki Miura, Yuki Arikawa, Takeshi Sakamoto, Yusuke Muranaka, Sampath Priyankara, Teruaki Ishizaki, Tsuyoshi Ito, Kenji Tanaka
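The ID-addressed forwarding between circuits can be pictured with a short sketch. The in-process registry and the lambda operations are stand-ins invented for the example; only the idea of using the next circuit's ID as the destination comes from the abstract.

```python
class ArithmeticCircuit:
    """Each circuit has an ID and forwards its result to a destination circuit ID."""
    registry: dict[int, "ArithmeticCircuit"] = {}

    def __init__(self, circuit_id: int, operation):
        self.circuit_id = circuit_id
        self.operation = operation
        self.next_id: int | None = None       # destination circuit ID, set by the controller
        ArithmeticCircuit.registry[circuit_id] = self

    def process(self, data):
        result = self.operation(data)
        if self.next_id is None:
            return result                      # end of the processing chain
        # Transmit the result using the destination circuit's ID as the address.
        return ArithmeticCircuit.registry[self.next_id].process(result)

# Controller wiring: circuit 1 (first processing) sends its result to circuit 2 (second processing).
first = ArithmeticCircuit(1, operation=lambda x: x * 2)
second = ArithmeticCircuit(2, operation=lambda x: x + 3)
first.next_id = second.circuit_id
print(first.process(10))   # (10 * 2) + 3 = 23
```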
-
Publication number: 20240402756
Abstract: An embodiment is a computer system that processes input data and includes a plurality of arithmetic parts and a host part connected to the plurality of arithmetic parts and configured to control them. The processed data is transferred between the plurality of arithmetic parts. Each arithmetic part includes a trace part that records trace data, using detection of a predetermined event in the input data as a trigger. The trace data has a timestamp value that is the detection time of the event based on the operating frequency of the arithmetic part, and the timestamp values of the plurality of arithmetic parts are synchronized.
Type: Application
Filed: November 12, 2021
Publication date: December 5, 2024
Inventors: Yuki Arikawa, Naoki Miura, Kenji Tanaka, Tsuyoshi Ito, Takeshi Sakamoto, Yusuke Muranaka
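One way to picture timestamps that are "based on the operating frequency" yet comparable across parts is to convert each part's cycle count into a common time base. The conversion below is an assumption made for illustration; the clock frequencies, offsets, and record format are invented.

```python
class SynchronizedTracer:
    """Trace part whose cycle-counter timestamps are converted to a shared time base."""
    def __init__(self, name: str, clock_hz: float, offset_cycles: int = 0):
        self.name = name
        self.clock_hz = clock_hz            # operating frequency of this arithmetic part
        self.offset_cycles = offset_cycles  # alignment against a common epoch
        self.records = []

    def record_event(self, cycle_count: int, data) -> None:
        # Synchronized timestamp: cycles since the common epoch, expressed in nanoseconds.
        t_ns = (cycle_count - self.offset_cycles) / self.clock_hz * 1e9
        self.records.append({"part": self.name, "t_ns": t_ns, "data": data})

# Two arithmetic parts running at different frequencies still produce comparable timestamps.
part_a = SynchronizedTracer("part-a", clock_hz=200e6)
part_b = SynchronizedTracer("part-b", clock_hz=300e6)
part_a.record_event(cycle_count=400, data="event-x")   # 400 cycles / 200 MHz ~= 2000 ns
part_b.record_event(cycle_count=600, data="event-y")   # 600 cycles / 300 MHz ~= 2000 ns
print(part_a.records[0]["t_ns"], part_b.records[0]["t_ns"])   # both ~= 2000.0
```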
-
Patent number: 12131246
Abstract: A distributed deep learning system is provided that achieves speed-up by processing learning in parallel at a large number of learning nodes connected over a communication network, and that performs faster cooperative processing among the learning nodes connected through the communication network. The distributed deep learning system includes: a plurality of computing interconnect devices 1 connected with each other through a ring communication network 3 through which communication is possible in one direction; and a plurality of learning nodes 2 connected with the respective computing interconnect devices 1 in a one-to-one relation. Each computing interconnect device 1 executes communication packet transmission/reception processing between the learning nodes 2 and All-reduce processing simultaneously in parallel.
Type: Grant
Filed: May 27, 2019
Date of Patent: October 29, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Junichi Kato, Kenji Kawai, Huycu Ngo, Yuki Arikawa, Tsuyoshi Ito, Takeshi Sakamoto
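For readers unfamiliar with All-reduce on a one-directional ring, here is a minimal functional model of the operation itself (a running-sum packet goes around the ring once to reduce, then once more to distribute). It is not the patented device, which performs this in hardware, packet by packet and in parallel with transmission/reception.

```python
def ring_allreduce(node_gradients: list[list[float]]) -> list[list[float]]:
    """Minimal model of All-reduce on a one-directional ring: a packet carrying the
    running sum visits every node once (reduce), then circulates once more so every
    node receives the final sum (distribute)."""
    n = len(node_gradients)
    # Reduce pass: the packet starts at node 0 and accumulates each node's gradients in ring order.
    packet = list(node_gradients[0])
    for node in range(1, n):
        packet = [acc + g for acc, g in zip(packet, node_gradients[node])]
    # Distribute pass: the completed sum goes around the ring again; every node keeps a copy.
    return [list(packet) for _ in range(n)]

gradients = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
print(ring_allreduce(gradients))   # every node ends up with [9.0, 12.0]
```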
-
Publication number: 20240272872
Abstract: A computing system includes a first computer for writing an arithmetic circuit in a reconfigurable first region included in a first accelerator and a second computer for writing the arithmetic circuit in a reconfigurable second region included in a second accelerator different from the first accelerator and having the same circuit arrangement as the first region. When the first computer writes a new arithmetic circuit in the first region, the second computer writes the new arithmetic circuit in a partial region of the second region at the same position as the unwritten partial region of the first region. The first computer does not write the new arithmetic circuit in the first region when the new arithmetic circuit is not normally written, and writes the new arithmetic circuit in the unwritten partial region of the first region when the new arithmetic circuit is normally written.
Type: Application
Filed: June 21, 2021
Publication date: August 15, 2024
Inventors: Tsuyoshi Ito, Yuki Arikawa, Tsutomu Takeya, Kenji Tanaka
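One reading of this flow is a verify-then-commit pattern across mirrored regions: try the write on the second (mirror) region at the same position, and commit it to the first region only if that write was normal. The sketch below is a hypothetical software analogue under that reading; the region model and the injected verification callback are assumptions.

```python
class ReconfigurableRegion:
    """Grid of partial regions; each slot holds the circuit written at that position."""
    def __init__(self, size: int):
        self.slots: list[str | None] = [None] * size

    def write(self, position: int, circuit: str) -> None:
        self.slots[position] = circuit

def mirrored_write(first: ReconfigurableRegion, second: ReconfigurableRegion,
                   position: int, circuit: str, write_ok) -> bool:
    """Write the new circuit into the mirror (second) region at the same position first;
    commit it to the first region only if that write is verified as normal."""
    second.write(position, circuit)
    if not write_ok(second, position):
        return False                      # abnormal write: leave the first region untouched
    first.write(position, circuit)
    return True

first_region, second_region = ReconfigurableRegion(4), ReconfigurableRegion(4)
ok = mirrored_write(first_region, second_region, position=2, circuit="fir-filter",
                    write_ok=lambda region, pos: region.slots[pos] == "fir-filter")
print(ok, first_region.slots, second_region.slots)
```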
-
Patent number: 12056082
Abstract: Each NIC performs an aggregation calculation of data output from each processor in a normal order including a head NIC located at a head position of a first pipeline connection, an intermediate NIC located at an intermediate position, and a tail NIC located at a tail position. When the aggregation calculation in the tail NIC is completed, each NIC starts distribution of the obtained aggregation result, distributes the aggregation result in a reverse order including the tail NIC, the intermediate NIC, and the head NIC, and outputs the aggregation result to the processor of the communication interface.
Type: Grant
Filed: November 11, 2020
Date of Patent: August 6, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Kenji Tanaka, Tsuyoshi Ito, Yuki Arikawa, Tsutomu Takeya, Kazuhiko Terada, Takeshi Sakamoto
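A compact way to see the head-to-tail aggregation followed by tail-to-head distribution is the functional sketch below. It models only the data movement order, not the NIC hardware or pipelining; the list-of-vectors representation is an assumption for the example.

```python
def pipeline_allreduce(nic_data: list[list[float]]) -> list[list[float]]:
    """Pipeline aggregation across NICs: partial sums flow head -> intermediate -> tail;
    once the tail finishes, the aggregation result flows back tail -> intermediate -> head,
    with each NIC handing the result to its processor."""
    n = len(nic_data)
    # Forward pass (normal order): each NIC adds its processor's output to the running sum.
    running = list(nic_data[0])                       # head NIC
    for i in range(1, n):                             # intermediate NICs ... tail NIC
        running = [acc + x for acc, x in zip(running, nic_data[i])]
    # Backward pass (reverse order): the completed result is distributed tail -> head.
    outputs: list[list[float]] = [[] for _ in range(n)]
    for i in reversed(range(n)):
        outputs[i] = list(running)                    # each NIC outputs the result to its processor
    return outputs

data = [[1.0], [2.0], [3.0]]
print(pipeline_allreduce(data))    # every NIC delivers [6.0]
```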
-
Patent number: 12045183
Abstract: A distributed processing node includes a computing device that calculates gradient data of a loss function from an output result obtained by inputting learning data to a learning target model, an interconnect device that aggregates gradient data between the distributed processing node and other distributed processing nodes, a computing function unit that is provided in a bus device and performs processing of gradient data from the computing device, and a DMA controller that controls DMA transfer of gradient data between the computing device and the bus device and DMA transfer of gradient data between the bus device and the interconnect device.
Type: Grant
Filed: April 2, 2020
Date of Patent: July 23, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Tsuyoshi Ito, Kenji Tanaka, Yuki Arikawa, Kazuhiko Terada, Takeshi Sakamoto
-
Patent number: 12035295
Abstract: A scheduling apparatus includes: a division control device configured to divide an entire communicable area into a plurality of areas; a combination generation device (12-1 to 12-N) configured to generate candidate patterns of combinations of transmission points and user terminals for each area; a combination evaluation device (13-1 to 13-N) configured to calculate evaluation values of candidate patterns for each area; an optimal combination holding device (15-1 to 15-N) configured to hold an optimal combination pattern among candidate patterns for each area; a calculation result sharing device configured to output an evaluation value of an optimal combination pattern to the combination evaluation device (13-1 to 13-N) for sharing with the areas as shared information; and an overall transmission weight matrix calculation device configured to calculate a transmission weight matrix for the entire communicable area based on a result obtained by combining optimal combination patterns of the areas.
Type: Grant
Filed: May 31, 2019
Date of Patent: July 9, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Yuki Arikawa, Takeshi Sakamoto
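As a rough illustration of per-area candidate generation, evaluation, and result sharing, the sketch below enumerates (transmission point, terminal) pairings per area, keeps the best per area, and feeds the shared evaluation values back into later evaluations as a simple penalty. The evaluation function, gains, and penalty are invented; the patent does not specify them.

```python
from itertools import product

def schedule(areas: dict[str, dict]) -> dict[str, tuple]:
    """Per-area scheduling sketch with calculation-result sharing between areas."""
    shared_scores: dict[str, float] = {}
    best: dict[str, tuple] = {}
    for name, area in areas.items():
        # Candidate patterns: combinations of transmission points and user terminals.
        candidates = list(product(area["tx_points"], area["terminals"]))

        def evaluate(pair):
            # Toy evaluation: base gain minus interference inferred from shared information.
            return area["gain"][pair] - 0.1 * sum(shared_scores.values())

        best_pair = max(candidates, key=evaluate)
        best[name] = best_pair
        shared_scores[name] = evaluate(best_pair)    # share this area's optimal evaluation value
    return best

areas = {
    "area-1": {"tx_points": ["tp1"], "terminals": ["ue1", "ue2"],
               "gain": {("tp1", "ue1"): 0.9, ("tp1", "ue2"): 0.4}},
    "area-2": {"tx_points": ["tp2"], "terminals": ["ue3"],
               "gain": {("tp2", "ue3"): 0.7}},
}
print(schedule(areas))
```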
-
Patent number: 12010554
Abstract: In a scheduling calculation unit, a setting parameter value, such as channel information, can be set in a short time period. A scheduling system includes: a transfer source process unit that compresses M-bit information (M is an integer of two or more) obtained from user equipment to N bits (N < M, N is an integer of one or more); a transfer destination process unit that expands N-bit information transmitted from the transfer source process unit, or N-bit information read from the transfer source process unit, to L bits (L > N, L is an integer of two or more) and stores the expanded information; and a scheduling calculation unit that identifies an optimal combination pattern between a transmission point and the user equipment using L-bit information stored in the transfer destination process unit.
Type: Grant
Filed: December 13, 2019
Date of Patent: June 11, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Yuki Arikawa, Takeshi Sakamoto
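To make the M-to-N compression and N-to-L expansion concrete, here is one simple, hypothetical coding scheme (keep the top N bits, zero-fill on expansion). The patent does not specify this particular encoding; it is only an example of the bit-width relationship N < M and L > N.

```python
def compress(value: int, m_bits: int, n_bits: int) -> int:
    """Compress an M-bit measurement to N bits (N < M) by keeping the top N bits."""
    assert n_bits < m_bits
    return value >> (m_bits - n_bits)

def expand(code: int, n_bits: int, l_bits: int) -> int:
    """Expand an N-bit code to L bits (L > N) by shifting into the high bits;
    the low bits lost in compression are filled with zeros."""
    assert l_bits > n_bits
    return code << (l_bits - n_bits)

# Example: a 16-bit channel-quality value sent as 6 bits, stored as 16 bits at the receiver.
raw = 0b1011_0110_0101_0001            # M = 16 bits
code = compress(raw, m_bits=16, n_bits=6)
restored = expand(code, n_bits=6, l_bits=16)
print(f"{raw:016b} -> {code:06b} -> {restored:016b}")
```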
-
Patent number: 12008468
Abstract: Each of learning nodes calculates gradients of a loss function from an output result obtained by inputting learning data to a learning target neural network, converts a calculation result into a packet, and transmits the packet to a computing interconnect device. The computing interconnect device receives the packet transmitted from each of the learning nodes, acquires a value of the gradients stored in the packet, calculates a sum of the gradients, converts a calculation result into a packet, and transmits the packet to each of the learning nodes. Each of the learning nodes receives the packet transmitted from the computing interconnect device and updates a constituent parameter of a neural network based on a value stored in the packet.
Type: Grant
Filed: February 6, 2019
Date of Patent: June 11, 2024
Assignee: Nippon Telegraph and Telephone Corporation
Inventors: Junichi Kato, Kenji Kawai, Huycu Ngo, Yuki Arikawa, Tsuyoshi Ito, Takeshi Sakamoto
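The packet-level round trip (nodes send gradient packets, the interconnect sums them and replies) can be sketched in a few lines. The wire format (`struct` with a node ID followed by float32 gradients) is a hypothetical stand-in, not the patented packet format.

```python
import struct

def pack_gradients(node_id: int, gradients: list[float]) -> bytes:
    """Serialize a learning node's gradients into a packet (hypothetical wire format)."""
    return struct.pack(f"!I{len(gradients)}f", node_id, *gradients)

def unpack_gradients(packet: bytes) -> tuple[int, list[float]]:
    node_id = struct.unpack_from("!I", packet)[0]
    count = (len(packet) - 4) // 4
    return node_id, list(struct.unpack_from(f"!{count}f", packet, offset=4))

def interconnect_allreduce(packets: list[bytes]) -> list[bytes]:
    """Computing interconnect role: receive one packet per learning node, sum the
    gradients element-wise, and send the summed result back to every node."""
    summed = None
    for packet in packets:
        _, grads = unpack_gradients(packet)
        summed = grads if summed is None else [a + b for a, b in zip(summed, grads)]
    return [pack_gradients(node_id, summed) for node_id, _ in enumerate(packets)]

packets = [pack_gradients(0, [0.1, 0.2]), pack_gradients(1, [0.3, 0.4])]
for reply in interconnect_allreduce(packets):
    print(unpack_gradients(reply))   # each node receives the element-wise sum (float32 rounding applies)
```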
-
Publication number: 20240184965
Abstract: A calculation resource control device includes an input unit to which a processing content specified by a user is input, an equivalent circuit preparation unit that collects candidates for equivalent circuits, each being a processing circuit having a function of executing a part of the processing content, and outputs an equivalent circuit candidate group, and a function chain creation unit that determines a processing execution circuit from the equivalent circuit candidate group on the basis of a predetermined reference, determines a connection order of the processing execution circuits, and outputs a function chain for executing the processing content.
Type: Application
Filed: April 28, 2021
Publication date: June 6, 2024
Inventors: Yuki Arikawa, Kenji Tanaka, Tsuyoshi Ito, Tsutomu Takeya, Takeshi Sakamoto
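A small sketch of function-chain creation as the abstract describes it: for each step of the requested processing, pick one circuit from the equivalent-circuit candidate group by a predetermined criterion, and return the chosen circuits in execution order. The latency-based criterion, candidate names, and dictionary format are assumptions for the example.

```python
def build_function_chain(processing_steps: list[str],
                         equivalent_circuits: dict[str, list[dict]],
                         criterion: str = "latency_us") -> list[dict]:
    """Select one processing execution circuit per step from the candidate group
    according to the given criterion, preserving the connection order of the steps."""
    chain = []
    for step in processing_steps:
        candidates = equivalent_circuits[step]
        chosen = min(candidates, key=lambda c: c[criterion])   # predetermined reference
        chain.append({"step": step, **chosen})
    return chain

# Hypothetical candidate group: several circuits can each execute a part of the processing content.
candidates = {
    "decode": [{"circuit": "dec-fpga", "latency_us": 40}, {"circuit": "dec-cpu", "latency_us": 120}],
    "resize": [{"circuit": "rsz-gpu", "latency_us": 15}, {"circuit": "rsz-fpga", "latency_us": 25}],
    "detect": [{"circuit": "det-gpu", "latency_us": 300}],
}
for link in build_function_chain(["decode", "resize", "detect"], candidates):
    print(link)
```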