Patents Assigned to Stream Computing Inc.
  • Patent number: 12267152
    Abstract: The present disclosure provides a synchronization circuit, including M group synchronization signal generating circuits and a node synchronization signal generating circuit. In the synchronization circuit provided in the embodiments of the present disclosure, synchronization indication signals can be generated separately by a plurality of group synchronization signal generating circuits to drive a node synchronization signal generating circuit to generate synchronization signals, thereby efficiently implementing synchronization control over a plurality of nodes in a multi-node environment.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: April 1, 2025
    Assignee: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
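A minimal software sketch of the two-level scheme this abstract describes, assuming a simple all-reported aggregation: each group-level generator raises an indication once every node in its group has reported, and the node-level generator asserts the synchronization signal only when all M groups indicate. The class and signal names are illustrative; the patent itself claims a hardware circuit.

```python
# Software model of two-level synchronization: M group-level generators each
# raise an indication signal once every node in their group has reported; the
# node-level generator fires the global synchronization signal only when all
# M indications are present. All names are illustrative, not from the patent.

class GroupSyncSignalGenerator:
    def __init__(self, group_id, node_ids):
        self.group_id = group_id
        self.pending = set(node_ids)

    def report(self, node_id):
        """A node in this group reports that it is ready."""
        self.pending.discard(node_id)

    def indication(self):
        """Group-level synchronization indication: all nodes have reported."""
        return not self.pending


class NodeSyncSignalGenerator:
    def __init__(self, groups):
        self.groups = groups

    def sync_signal(self):
        """Global synchronization signal: every group indicates readiness."""
        return all(g.indication() for g in self.groups)


if __name__ == "__main__":
    groups = [GroupSyncSignalGenerator(0, [0, 1]), GroupSyncSignalGenerator(1, [2, 3])]
    top = NodeSyncSignalGenerator(groups)
    for node, group in [(0, 0), (1, 0), (2, 1)]:
        groups[group].report(node)
        print(f"node {node} reported -> sync = {top.sync_signal()}")
    groups[1].report(3)
    print(f"node 3 reported -> sync = {top.sync_signal()}")  # True: all groups ready
```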
  • Patent number: 12244498
    Abstract: A data transmission circuit and method, a core, a chip with a multi-core structure, an electronic device and a storage medium are provided. The data transmission circuit includes a receiver, a controller, a lookup table circuit and a selector. The receiver is configured to receive an original data packet from the Fabric; the controller is configured to determine, according to an original control bit, whether the original data packet needs to be relayed, and to control a first input terminal of the selector to be enabled in response to a determination that the original data packet needs to be relayed; the selector is configured to send a new data packet to the Fabric via the first input terminal, wherein the new data packet includes the original data and a new header acquired by the lookup table circuit according to an original index. In this way, power consumption of the data transmission circuit is reduced.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: March 4, 2025
    Assignee: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
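A hedged software sketch of the relay path this abstract outlines: a packet carries a control bit, an index, a header and a payload; if the control bit marks it for relay, a new header is looked up from the index and a new packet (new header plus the original payload) is sent back to the fabric. The field names and the dict-based lookup table are assumptions for illustration only.

```python
# Sketch of the relay decision and header lookup. A packet marked for relay
# gets a new header from the lookup table (keyed by its original index) and is
# re-emitted with the original payload; otherwise the selector path stays off.

from dataclasses import dataclass

@dataclass
class Packet:
    relay: bool      # original control bit: does this packet need relaying?
    index: int       # original index used to look up the new header
    header: int
    payload: bytes

# Lookup-table circuit modeled as a plain mapping from index to new header.
HEADER_LUT = {0: 0xA0, 1: 0xA1, 2: 0xA2}

def handle_packet(pkt: Packet):
    """Return the packet to send back to the fabric, or None if not relayed."""
    if not pkt.relay:
        return None                               # first input stays disabled
    new_header = HEADER_LUT[pkt.index]            # lookup table circuit
    return Packet(relay=False, index=pkt.index,   # selector's first input path
                  header=new_header, payload=pkt.payload)

if __name__ == "__main__":
    out = handle_packet(Packet(relay=True, index=1, header=0x10, payload=b"data"))
    print(out)  # relayed packet with the looked-up header and the original payload
```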
  • Patent number: 12072730
    Abstract: The present disclosure provides a synchronization signal generating circuit, a chip, a synchronization method, and a synchronization device based on a multi-core architecture, configured to generate a synchronization signal for M node groups, wherein each of the node groups includes at least one node, and M is an integer greater than or equal to 1. The synchronization signal generating circuit includes a synchronization signal generating sub-circuit and M group ready signal generating sub-circuits. The M group ready signal generating sub-circuits are in one-to-one correspondence with the M node groups. The synchronization signal generating sub-circuit generates a first synchronization signal based on the first to-be-started signal, wherein the first synchronization signal is configured to instruct the K nodes in the first node group to start synchronization.
    Type: Grant
    Filed: January 28, 2022
    Date of Patent: August 27, 2024
    Assignee: Stream Computing Inc.
    Inventors: Weiwei Wang, Fei Luo
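A short sketch of the per-group flow suggested by this abstract, under the assumption that a group's ready signal is the conjunction of its K node-ready reports and that the group's synchronization signal is issued when the group is both ready and marked to be started. The signal flow is inferred from the abstract; names are illustrative.

```python
# Per-group synchronization start: each group has a ready-signal sub-circuit;
# when a group's to-be-started signal is asserted and the group is ready, the
# synchronization signal generating sub-circuit issues that group's signal,
# telling its K nodes to start synchronizing.

class GroupReadySignalGenerator:
    def __init__(self, num_nodes):
        self.ready_nodes = set()
        self.num_nodes = num_nodes          # K nodes in this group

    def node_ready(self, node_id):
        self.ready_nodes.add(node_id)

    def group_ready(self):
        return len(self.ready_nodes) == self.num_nodes


class SyncSignalGenerator:
    def __init__(self, group_ready_generators):
        self.groups = group_ready_generators

    def sync_signal(self, group_idx, to_be_started):
        """Issue the group's synchronization signal when it is ready and started."""
        return to_be_started and self.groups[group_idx].group_ready()


if __name__ == "__main__":
    g0 = GroupReadySignalGenerator(num_nodes=2)
    gen = SyncSignalGenerator([g0])
    g0.node_ready(0)
    print(gen.sync_signal(0, to_be_started=True))   # False: only 1 of 2 nodes ready
    g0.node_ready(1)
    print(gen.sync_signal(0, to_be_started=True))   # True: group 0 starts sync
```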
  • Publication number: 20240119110
    Abstract: A method for generating a computation flow graph scheduling scheme includes grouping original vertexes in an original computation flow graph, so as to obtain first computation flow graphs; determining the number N of computing units required to process a single batch of computation data in parallel; copying N first computation flow graphs, so as to obtain second computation flow graphs; adding auxiliary vertexes to the second computation flow graphs, so as to obtain third computation flow graphs; constructing an integer linear programming problem according to the third computation flow graphs; and solving the integer linear programming problem, so as to obtain a scheduling scheme for the third computation flow graphs. The method converts an original computation flow graph into third computation flow graphs and constructs an integer linear programming problem to obtain a scheduling scheme.
    Type: Application
    Filed: November 30, 2023
    Publication date: April 11, 2024
    Applicant: Beijing Stream Computing Inc.
    Inventors: Rui Cao, Wenyuan Lv, Xiaoqiang Dan, Lei Liu
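A sketch of the graph-preparation steps named in this abstract, assuming the grouping into a first computation flow graph has already been done: the graph is copied N times (one copy per computing unit processing a batch) and auxiliary entry/exit vertexes tie the copies together. The abstract does not specify the integer linear programming formulation, so the sketch stops at the third computation flow graph; the entry/exit-vertex choice is an assumption for illustration.

```python
# Build the "third" computation flow graph: replicate a first graph N times
# (second graphs), then merge the copies and add auxiliary entry/exit vertexes.

def copy_graph(first_graph, n):
    """Replicate a first computation flow graph N times (the second graphs)."""
    copies = []
    for i in range(n):
        copies.append({f"{v}@{i}": [f"{w}@{i}" for w in succs]
                       for v, succs in first_graph.items()})
    return copies

def add_auxiliary_vertices(copies):
    """Merge the copies and add auxiliary entry/exit vertexes (third graph)."""
    third = {"__entry__": [], "__exit__": []}
    for g in copies:
        third.update(g)
        roots = set(g) - {w for succs in g.values() for w in succs}
        leaves = [v for v, succs in g.items() if not succs]
        third["__entry__"].extend(sorted(roots))
        for leaf in leaves:
            third[leaf] = third[leaf] + ["__exit__"]
    return third

if __name__ == "__main__":
    first = {"a": ["b"], "b": ["c"], "c": []}   # one grouped first flow graph
    third = add_auxiliary_vertices(copy_graph(first, n=2))
    for v, succs in third.items():
        print(v, "->", succs)
```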
  • Publication number: 20230067432
    Abstract: Disclosed are a task allocation method and apparatus, an electronic device, and a computer-readable storage medium. The task allocation method includes: in response to receiving a synchronization signal, executing, by the master processing core, a task update instruction to obtain a to-be-executed task segment; receiving, by a processing core for executing the task, the to-be-executed task segment, wherein the processing core for executing the task includes the master processing core and/or the slave processing core; executing, by the processing core for executing the task, the to-be-executed task segment; and in response to completion of execution of the to-be-executed task segment, sending, by the processing core for executing the task, a synchronization request signal, wherein the synchronization request signal is configured to trigger generation of the synchronization signal.
    Type: Application
    Filed: October 25, 2022
    Publication date: March 2, 2023
    Applicant: Stream Computing Inc.
    Inventors: Weiwei Wang, Fei Luo
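A minimal sequential model of the round-based flow this abstract describes: on each synchronization signal the master core runs a task update step to obtain the next task segment, every executing core runs the segment, and the completed cores' synchronization requests trigger the next synchronization signal. The barrier is modeled as a simple counter and the segment list is illustrative; this is a sketch of the control flow, not the claimed hardware/firmware implementation.

```python
# Round-based task allocation: master core reacts to each synchronization
# signal with a task update, all executing cores run the segment, and their
# synchronization requests together trigger the next round.

class MasterCore:
    def __init__(self, task_segments):
        self.segments = list(task_segments)

    def task_update(self):
        """Executed on a synchronization signal; returns the next task segment."""
        return self.segments.pop(0) if self.segments else None


def run(master, core_ids):
    round_no = 0
    while True:
        segment = master.task_update()          # master reacts to the sync signal
        if segment is None:
            break
        requests = 0
        for core in core_ids:                   # each executing core runs the segment
            print(f"round {round_no}: core {core} executes {segment!r}")
            requests += 1                       # core sends a sync request when done
        assert requests == len(core_ids)        # all requests received ->
        round_no += 1                           # next synchronization signal


if __name__ == "__main__":
    run(MasterCore(["segment-0", "segment-1"]), core_ids=[0, 1, 2])
```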
  • Publication number: 20230069032
    Abstract: Disclosed are a data processing apparatus, a chip, and a data processing method. The data processing apparatus includes a plurality of processing cores having a preset execution sequence, the plurality of processing cores including a head processing core and at least one other processing core, wherein the head processing core is configured to send an instruction, and to receive and execute a program obtained according to the instruction; and each of the other processing cores is configured to receive and execute a program sent by the previous processing core in the preset execution sequence.
    Type: Application
    Filed: October 25, 2022
    Publication date: March 2, 2023
    Applicant: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
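A sketch of the chained program distribution in this abstract: the head core obtains the program, and each core executes it and passes it on to the next core in the preset execution sequence. Modeling the program as a Python callable and forwarding as a direct method call are assumptions made only to keep the sketch self-contained.

```python
# Chained program distribution: head core fetches the program; every core
# executes the program it receives and forwards it to the next core in the
# preset execution sequence.

class Core:
    def __init__(self, core_id, next_core=None):
        self.core_id = core_id
        self.next_core = next_core

    def receive_and_execute(self, program):
        program(self.core_id)                    # execute the received program
        if self.next_core is not None:           # forward to the next core
            self.next_core.receive_and_execute(program)


def head_fetch_program():
    """Stand-in for the head core's instruction and program fetch."""
    return lambda core_id: print(f"core {core_id} runs the program")


if __name__ == "__main__":
    # Build a preset execution sequence: head -> core 1 -> core 2.
    core2 = Core(2)
    core1 = Core(1, next_core=core2)
    head = Core(0, next_core=core1)
    head.receive_and_execute(head_fetch_program())
```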
  • Publication number: 20230004429
    Abstract: Disclosed are a task allocation method and apparatus, an electronic device, and a computer-readable storage medium. The task allocation method includes: in response to receiving a synchronization signal, determining, by the microprocessor, according to the synchronization signal, whether allocation of a task segment to the processing core is required, wherein the task segment is a part of a task; in response to the allocation of the task segment to the processing core being required, instructing, by the microprocessor, that the task segment be allocated to the processing core; receiving, by the processing core, the task segment; and in response to a first preset condition being satisfied, sending, by the processing core, a synchronization request signal, wherein the synchronization request signal is configured to trigger the generation of the synchronization signal.
    Type: Application
    Filed: September 6, 2022
    Publication date: January 5, 2023
    Applicant: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
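A sketch of the microprocessor-driven variant in this abstract: on each synchronization signal the microprocessor decides whether the processing core still needs a task segment and, if so, allocates the next one; the core's synchronization request (sent once its preset condition is met) drives the next round. The "segment finished" condition and the segment queue are assumptions used only to make the decision loop concrete.

```python
# Microprocessor-gated allocation: the allocation decision is re-made on every
# synchronization signal; the core's completion acts as the preset condition
# that triggers the next synchronization signal.

def microprocessor_allocate(segments, core_state):
    """Decide, per the synchronization signal, whether allocation is required."""
    if core_state["busy"] or not segments:
        return None                       # no allocation required this round
    return segments.pop(0)                # allocate the next task segment


def run():
    segments = ["seg-0", "seg-1", "seg-2"]   # a task split into segments
    core = {"busy": False}
    while segments or core["busy"]:
        seg = microprocessor_allocate(segments, core)   # on a sync signal
        if seg is not None:
            core["busy"] = True
            print(f"core executes {seg}")
        core["busy"] = False              # first preset condition satisfied;
        # the core's synchronization request triggers the next sync signal


if __name__ == "__main__":
    run()
```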
  • Publication number: 20220269622
    Abstract: Embodiments of the present disclosure provide a data processing method, an apparatus, an electronic device and a computer-readable storage medium. The data processing method includes: receiving, by a processing core, a synchronization signal; determining, by the processing core, according to the synchronization signal, a first storage area used by a self-task of the processing core and a second storage area used by a non-self-task of the processing core, wherein the first storage area differs from the second storage area; and accessing, by the processing core, the first storage area to execute the self-task and the second storage area to execute the non-self-task. Through the above method, the storage areas corresponding to different tasks of the processing core are separated, which solves the technical problems of a complex data consistency mechanism and low processing efficiency caused by reading from and writing to the same storage area in the existing technology.
    Type: Application
    Filed: April 21, 2022
    Publication date: August 25, 2022
    Applicant: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
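A sketch of the storage-area separation this abstract describes, modeled as ping-pong buffers: on each synchronization signal the core swaps which area it uses for its own task and which it uses for the non-self task, so the two tasks never read and write the same area in the same round. The two-buffer swap is an assumption used to make the separation concrete; the patent only requires that the two areas differ.

```python
# Ping-pong model of self-task / non-self-task storage separation: the roles
# of the two storage areas swap on every synchronization signal.

class ProcessingCore:
    def __init__(self):
        self.buffers = [bytearray(16), bytearray(16)]  # two distinct storage areas
        self.phase = 0

    def on_sync_signal(self):
        """Re-determine the storage areas for the next round."""
        self.phase ^= 1

    def self_task_area(self):
        return self.buffers[self.phase]        # first storage area (self-task)

    def non_self_task_area(self):
        return self.buffers[self.phase ^ 1]    # second storage area (non-self-task)


if __name__ == "__main__":
    core = ProcessingCore()
    core.self_task_area()[0] = 1               # self-task writes its own area
    core.on_sync_signal()                      # next synchronization signal
    # what the self-task wrote is now visible to the non-self task
    print(core.non_self_task_area()[0])        # -> 1
```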
  • Publication number: 20220149963
    Abstract: The present disclosure provides a synchronization circuit, including M group synchronization signal generating circuits and a node synchronization signal generating circuit. In the synchronization circuit provided in the embodiments of the present disclosure, synchronization indication signals can be generated separately by a plurality of group synchronization signal generating circuits to drive a node synchronization signal generating circuit to generate synchronization signals, thereby efficiently implementing synchronization control over a plurality of nodes in a multi-node environment.
    Type: Application
    Filed: January 28, 2022
    Publication date: May 12, 2022
    Applicant: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang
  • Publication number: 20220147097
    Abstract: The present disclosure provides a synchronization signal generating circuit, a chip, a synchronization method, and a synchronization device based on a multi-core architecture, configured to generate a synchronization signal for M node groups, wherein each of the node groups includes at least one node, and M is an integer greater than or equal to 1. The synchronization signal generating circuit includes a synchronization signal generating sub-circuit and M group ready signal generating sub-circuits. The M group ready signal generating sub-circuits are in one-to-one correspondence with the M node groups. The synchronization signal generating sub-circuit generates a first synchronization signal based on the first to-be-started signal, wherein the first synchronization signal is configured to instruct the K nodes in the first node group to start synchronization.
    Type: Application
    Filed: January 28, 2022
    Publication date: May 12, 2022
    Applicant: Stream Computing Inc.
    Inventors: Weiwei Wang, Fei Luo
  • Publication number: 20220150168
    Abstract: A data transmission circuit and method, a core, a chip with a multi-core structure, an electronic device and a storage medium are provided. The data transmission circuit includes a receiver, a controller, a lookup table circuit and a selector. The receiver is configured to receive an original data packet from the Fabric; the controller is configured to determine, according to an original control bit, whether the original data packet needs to be relayed, and to control a first input terminal of the selector to be enabled in response to a determination that the original data packet needs to be relayed; the selector is configured to send a new data packet to the Fabric via the first input terminal, wherein the new data packet includes the original data and a new header acquired by the lookup table circuit according to an original index. In this way, power consumption of the data transmission circuit is reduced.
    Type: Application
    Filed: January 28, 2022
    Publication date: May 12, 2022
    Applicant: Stream Computing Inc.
    Inventors: Fei Luo, Weiwei Wang