Application load adaptive multi-stage parallel data processing architecture
Systems and methods provide an extensible, multi-stage, realtime application program processing load adaptive, manycore data processing architecture shared dynamically among instances of parallelized and pipelined application software programs, according to processing load variations of said programs and their tasks and instances, as well as contractual policies. The invented techniques provide, at the same time, both application software development productivity, through presenting for software a simple, virtual static view of the actually dynamically allocated and assigned processing hardware resources, together with high program runtime performance, through scalable pipelined and parallelized program execution with minimized overhead, as well as high resource efficiency, through adaptively optimized processing resource allocation.
This application is a reissue of U.S. Pat. No. 9,465,667, issued Oct. 11, 2016, which is a divisional of U.S. Utility application Ser. No. 15/042,159, filed Feb. 12, 2016, now issued as U.S. Pat. No. 9,400,694 on Jul. 26, 2016 and pending reissue as U.S. Reissue Application Ser. No. 16/046,718, filed Jul. 26, 2018, which is a continuation of U.S. application Ser. No. 14/261,384, filed Apr. 24, 2014, now issued as U.S. Pat. No. 9,262,204, which is a continuation of U.S. application Ser. No. 13/684,473, filed Nov. 23, 2012, now issued as U.S. Pat. No. 8,789,065, which is incorporated by reference in its entirety and which claims the benefit of the following provisional applications, each of which is incorporated by reference in its entirety:
 U.S. Provisional Application No. 61/657,708, filed Jun. 8, 2012;
 U.S. Provisional Application No. 61/673,725, filed Jul. 19, 2012;
 U.S. Provisional Application No. 61/721,686, filed Nov. 2, 2012; and
 U.S. Provisional Application No. 61/727,372, filed Nov. 16, 2012.
This application is also related to the following, each of which is incorporated by reference in its entirety:
 U.S. Utility application Ser. No. 13/184,028, filed Jul. 15, 2011;
 U.S. Utility application Ser. No. 13/270,194, filed Oct. 10, 2011;
 U.S. Utility application Ser. No. 13/277,739, filed Nov. 21, 2011; and
 U.S. Utility application Ser. No. 13/297,455, filed Nov. 16, 2011.

BACKGROUND
1. Technical Field
This invention pertains to the field of data processing and networking, particularly to techniques for connecting tasks of parallelized programs running on a multi-stage manycore processor with each other, as well as with external parties, with high resource efficiency and a high data processing throughput rate.
2. Description of the Related Art
Traditionally, advancements in computing technologies have fallen into two categories. First, in the field conventionally referred to as high performance computing, the main objective has been maximizing the processing speed of one given computationally intensive program running on a dedicated hardware comprising a large number of parallel processing elements. Second, in the field conventionally referred to as utility or cloud computing, the main objective has been to most efficiently share a given pool of computing hardware resources among a large number of user application programs. Thus, in effect, one branch of computing technology advancement effort has been seeking to effectively use a large number of parallel processors to accelerate execution of a single application program, while another branch of the effort has been seeking to efficiently share a single pool of computing capacity among a large number of user applications to improve the utilization of the computing resources.
However, there have not been any major synergies between these two efforts; often, pursuing any one of these traditional objectives rather happens at the expense of the other. For instance, it is clear that a practice of dedicating an entire parallel processor based (super) computer per individual application causes severely sub-optimal computing resource utilization, as much of the capacity would be idling much of the time. On the other hand, seeking to improve utilization of computing systems by sharing their processing capacity among a number of user applications using conventional technologies will cause non-deterministic and compromised performance for the individual applications, along with security concerns.
As such, the overall cost-efficiency of computing is not improving as much as any nominal improvements toward either of the two traditional objectives would imply: traditionally, single application performance maximization comes at the expense of system utilization efficiency, while overall system efficiency maximization comes at the expense of the performance of the individual application programs. There thus exists a need for a new parallel computing architecture which, at the same time, enables increasing the speed of executing application programs, including through execution of a given application in parallel across multiple processor cores, as well as improving the utilization of the available computing resources, thereby maximizing the collective application processing throughput for a given cost budget.
Moreover, even outside traditional high performance computing, the application program performance requirements will increasingly be exceeding the processing throughput achievable from a single central processing unit (CPU) core, e.g. due to the practical limits being reached on the CPU clock rates. This creates an emerging requirement for intra-application parallel processing (at ever finer grades) also for mainstream software programs (i.e. applications not traditionally considered high performance computing). Notably, these internally parallelized mainstream enterprise and web applications will be largely deployed on dynamically shared cloud computing infrastructure. Accordingly, the emerging form of mainstream computing calls for technology innovation supporting the execution of large number of internally parallelized applications on dynamically shared resource pools, such as manycore processors.
Furthermore, conventional microprocessor and computer system architectures use significant portions of their computation capacity (e.g. CPU cycles or core capacity of manycore arrays) for handling input and output (IO) communications to get data transferred between a given processor system and external sources or destinations as well as between different stages of processing within the given system. For data volume intensive computation workloads and/or manycore processor hardware with high IO bandwidth needs, the portion of computation power spent on IO and data movements can be particularly high. To allow using a maximized portion of the computing capacity of processors for processing the application programs and application data (rather than for system functions such as IO data movements), architectural innovations are also needed in the field of manycore processor IO subsystems. In particular, there is a need for a new manycore processor system data flow and IO architecture whose operation, while providing high IO data throughput performance, causes little or no overhead in terms of usage of the computation units of the processor.

SUMMARY
The invented systems and methods provide an extensible, multi-stage, application program load adaptive, parallel data processing architecture shared dynamically among a set of application software programs according to processing load variations of said programs. The invented techniques enable any program task instance to exchange data with any of the task instances of its program within the multi-stage parallel data processing platform, while allowing any of said task instances to be executing at any core of their local processors, as well as allowing any identified destination task instance to be not assigned for execution by any core for periods of time, and while said task instances lack knowledge of which core, if any, at said platform is assigned for executing any of said task instances at any given time.
An aspect of the invention provides a system for information connectivity among tasks of a set of software programs hosted on a multi-stage parallel data processing platform. Such a system comprises: 1) a set of manycore processor based processing stages, each stage providing an array of processing cores, wherein each of said tasks is hosted on one of the processing stages, with tasks hosted on a given processing stage referred to as locally hosted tasks of that stage, 2) a hardware implemented data packet switching cross-connect (XC) connecting data packets from an output port of a processing stage to an input port of a given processing stage if a destination software program task of the data packet is hosted at the given processing stage, and 3) a hardware implemented receive logic subsystem, at any given one of the processing stages, connecting data packets from input ports of the given processing stage to the array of cores of that stage, so that a given data packet is connected to such a core, if any exists at a given time, among said array that is assigned at the given time to process the program instance to which the given input packet is directed. Various embodiments of such systems further comprise features whereby: a) at a given processing stage, a hardware implemented controller i) periodically allocates the array of cores of the given stage among instances of its locally hosted tasks, at least in part based on volumes of data packets connected through the XC to its locally hosted tasks, and ii) accordingly inserts the identifications of the destination programs for the data packets passed from the given processing stage for switching at the XC, to provide isolation between different programs among the set; b) the system supports multiple instances of each of the locally hosted tasks at their processing stages, and packet switching through the XC to an identified instance of a given destination program task; c) said tasks are located across at least a certain subset of the processing stages so as to provide an equalized expected aggregate task processing load for each of the processing stages of said subset; and/or d) said tasks are identified with incrementing intra-program task IDs according to their descending processing load levels within a given program, wherein, among at least a subset of the processing stages, each processing stage of said subset hosts one of the tasks of each of the set of programs so as to equalize the sums of said task IDs of the tasks located on each of the processing stages of said subset.
An aspect of the invention further provides a method for information connectivity among tasks of a set of software programs. Such a method comprises: 1) hosting said tasks on a set of manycore processor based processing stages, each stage providing an array of processing cores, with tasks hosted on a given processing stage referred to as locally hosted tasks of that stage, 2) at a data packet switching cross-connect (XC), connecting data packets from an output port of a processing stage to an input port of a given processing stage if a destination software program task identified for a given data packet is hosted at the given processing stage, and 3) at any given one of the processing stages, connecting data packets from input ports of the given processing stage to the array of cores of that stage, so that a given data packet is connected to such a core, if any exists at a given time, among said array that is assigned at the given time to process the program instance to which the given input packet is directed. Various embodiments of the method comprise further steps and features as follows: a) periodically allocating, by a controller at a given one of the processing stages, the array of cores of the given stage among instances of its locally hosted tasks at least in part based on volumes of data packets connected through the XC to its locally hosted tasks, with the controller, according to said allocating, inserting the identifications of the destination programs for the data packets passed from the given processing stage for switching at the XC, to provide isolation between different programs among the set; b) the steps of allocating and connecting, both at the XC and the given one of the processing stages, are implemented by hardware logic that operates without software involvement; c) supporting multiple instances of each of the locally hosted tasks at their processing stages, and packet switching through the XC to an identified instance of a given destination task; d) said tasks are located across at least a certain subset of the processing stages so as to provide an equalized expected aggregate task processing load for each of the processing stages of said subset; and/or e) said tasks are identified with incrementing intra-program task IDs according to their descending processing load levels within a given program, wherein, among at least a subset of the processing stages, each processing stage of said subset hosts one of the tasks of each of the set of programs so as to equalize the sums of said task IDs of the tasks located on each of the processing stages of said subset.
A further aspect of the invention provides a hardware logic system for connecting input data to instances of a set of programs hosted on a manycore processor having an array of processing cores. Such a system comprises: 1) demultiplexing logic for connecting input data packets from a set of input data ports to destination program instance specific input port buffers based on a destination program instance identified for each given input data packet, and 2) multiplexing logic for connecting data packets from said program instance specific buffers to the array of cores based on identifications, for each given core of the array, of a program instance assigned for execution at the given core at any given time. An embodiment of the system further comprises a hardware logic controller that periodically assigns, at least in part based on volumes of input data packets at the program instance specific input port buffers, instances of the programs for execution on the array of cores, and accordingly forms, for the multiplexing logic, the identification of the program instance that is assigned for execution at each core of the array of cores.
A yet further aspect of the invention provides a method for connecting input data to instances of a set of programs hosted on a manycore processor having an array of processing cores. Such a method comprises: 1) demultiplexing input data packets from a set of input data ports to destination program instance specific input port buffers according to a destination program instance identified for each given input data packet, and 2) multiplexing data packets from said program instance specific buffers to the array of cores according to identifications, for each given core of the array, of a program instance assigned for execution at the given core at any given time. A particular embodiment of the method comprises a further step as follows: periodically forming the identifications of the program instances executing at the array of cores through i) allocating the array of cores among the set of programs at least in part based on volumes of input data packets at the input port buffers associated with individual programs of the set and ii) assigning, based at least in part on said allocating, the cores of the array for executing specific instances of the programs. Moreover, in an embodiment, the above method is implemented by hardware logic that operates without software involvement.
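To illustrate the demultiplexing and multiplexing described above, the following is a minimal behavioral sketch in Python; it models the data flow only, not the hardware implementation, and all names and data shapes (e.g. RxFabricModel, the packet dictionary fields) are illustrative assumptions:

```python
# Minimal software model of the described demux/mux RX connectivity.
# Names and data shapes are illustrative, not the patented logic design.

from collections import deque

class RxFabricModel:
    def __init__(self, num_ports, num_cores):
        # One FIFO per (program, instance, input port) triple, created on demand.
        self.buffers = {}            # (app_id, inst_id, port) -> deque of packets
        self.core_assignment = {}    # core_id -> (app_id, inst_id) or None
        self.num_ports = num_ports
        self.num_cores = num_cores

    def demux(self, port, packet):
        """Step 1: steer an input packet to its destination-instance buffer,
        based on the (app, instance) identifiers carried in packet overhead."""
        key = (packet["app_id"], packet["inst_id"], port)
        self.buffers.setdefault(key, deque()).append(packet)

    def mux_to_core(self, core_id):
        """Step 2: deliver buffered packets to the core presently assigned
        to execute the destination instance (if any core is assigned)."""
        assigned = self.core_assignment.get(core_id)
        if assigned is None:
            return None          # instance currently has no execution core
        app_id, inst_id = assigned
        for port in range(self.num_ports):
            fifo = self.buffers.get((app_id, inst_id, port))
            if fifo:
                return fifo.popleft()
        return None
```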
A yet further aspect of the invention provides a method for periodically arranging a set of executables of a given software program in an execution priority order, with an executable referring to a task, an instance, an instance of a task of the program, or equals thereof. Such a method comprises: 1) buffering input data at an array of executable specific input port buffers, wherein a buffer within said array buffers, from an input port associated with the buffer, such arriving data that is directed to the executable associated with the buffer, 2) calculating numbers of non-empty buffers associated with each of the executables, and 3) ranking the executables in their descending execution priority order at least in part according to their descending order in terms of numbers of non-empty buffers associated with each given executable. In a particular embodiment of this method, the step of ranking involves I) forming, for each given executable, a 1st phase bit vector having as many bits as there are input ports from where the buffers receive their input data, with this number of ports denoted with X, and wherein a bit at index x of said vector indicates whether the given executable has exactly x non-empty buffers, with x being an integer between 0 and X, II) forming, from bits at equal index values of the 1st phase bit vectors of each of the executables, a row of X 2nd phase bit vectors, where a bit at index y of the 2nd phase bit vector at index x of said row indicates whether an executable with ID number y within the set has exactly x non-empty buffers, wherein y is an integer from 0 to a maximum number of the executables less 1, as well as III) the following substeps: i) resetting the present priority order index to a value representing the greatest execution priority; and ii) until either all bits of each of the 2nd phase bit vectors are scanned or an executable is associated with the lowest available execution priority, scanning the row of the 2nd phase bit vectors for active-state bits, one 2nd phase bit vector at a time, starting from row index X while decrementing the row index after reaching bit index 0 of any given 2nd phase bit vector, and, based upon encountering an active-state bit: a) associating the executable with ID equal to the index of the active-state bit within its 2nd phase bit vector with the present priority order index and b) changing the present priority order index to the next lower level of execution priority. Moreover, in an embodiment, the above method is implemented by hardware logic that operates without software involvement.
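The two-phase bit vector ranking of the preceding paragraph can be modeled in software as follows; this is a behavioral sketch assuming the buffer occupancy counts are already available, not a gate-level description (function and variable names are hypothetical):

```python
# Behavioral sketch of the two-phase bit-vector priority ranking.
# X = number of input ports; executables carry IDs 0..num_execs-1.

def rank_executables(nonempty_counts, X):
    """nonempty_counts[y] = number of non-empty buffers of executable y.
    Returns executable IDs in descending execution priority order."""
    num_execs = len(nonempty_counts)
    # Phase 1: per-executable one-hot vectors over possible counts 0..X.
    phase1 = [[1 if nonempty_counts[y] == x else 0 for x in range(X + 1)]
              for y in range(num_execs)]
    # Phase 2: transpose - vector at index x has bit y set iff
    # executable y has exactly x non-empty buffers.
    phase2 = [[phase1[y][x] for y in range(num_execs)] for x in range(X + 1)]
    # Scan from the highest count downward; each active bit encountered is
    # associated with the next lower execution priority level.
    priority_order = []
    for x in range(X, -1, -1):
        for y in range(num_execs):
            if phase2[x][y]:
                priority_order.append(y)
    return priority_order

# Example: 3 executables sharing 4 ports; executable 1 has the most
# non-empty buffers, so it receives the greatest execution priority.
assert rank_executables([2, 4, 0], X=4) == [1, 0, 2]
```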
General notes about this specification (incl. text in the drawings):
- For brevity: 'application (program)' is occasionally written as 'app', 'instance' as 'inst' and 'application-task/instance' as 'app-task/inst'.
- Receive (RX) direction is toward the cores of the manycore processor of a given processing stage, and transmit (TX) direction is outward from the cores.
- The term IO refers both to the system 1 (FIG. 1) external input and output ports as well as to the ports interconnecting the processing stages 300 of the system.
- Ports, such as external or inter-stage ports of the multi-stage parallel processing system 1 (FIG. 1), can be implemented either as distinct physical ports or as e.g. time or frequency division channels on shared physical connections.
- The terms software program, application program, application and program are used interchangeably in this specification, and each generally refers to any type of computer software able to run on data processing systems based on the architecture.
- The term 'task' in this specification refers to a part of a program, and covers the meanings of related terms such as actor, thread etc.
- References to a “set of” units of a given type, such as programs, logic modules or memory segments can, depending on the nature of a particular embodiment or operating scenario, refer to any positive number of such units.
- While the term 'processor' more specifically refers to the processing core fabric 510 (FIG. 5), it will also be used, where it streamlines the text, to refer to a processor system 500 (FIGS. 3-4) and a processing stage 300 (FIGS. 1 and 3) within the system 1.
- Typically, there will be one task type per application hosted per each of the processing stages 300 in the system 1 per FIG. 1 (while the system 1 supports multiple processing stages and multiple application programs per each stage).
- A master type task of a single application-instance (app-inst) hosted at the entry stage processing system can have multiple parallel worker tasks of the same type hosted at multiple worker stage processing systems. Generally, a single upstream app-inst-task can feed data units to be processed in parallel by multiple downstream app-inst-task:s within the same system 1.
- Identifiers such as 'master' and 'worker' tasks or processing stages are not used here in a sense to restrict the nature of such tasks or processing; these identifiers are used here primarily to illustrate a possible, basic type of distribution of workloads among different actors. For instance, the entry stage processing system may host, for a given application, simply tasks that pre-process (e.g. qualify, filter, classify, format, etc.) the RX data units and pass them to the worker stage processing systems tagged with the pre-processing notations, while the worker stage processor systems may host the actual master (as well as worker) actors conducting the main data processing called for by such received data units. Generally, a key idea of the presented processing system and IO architecture is that the worker stages of processing—where the bulk of the intra-application parallel and/or pipelined processing typically is to occur, providing the performance gain of using parallel task instances and/or pipelined tasks to lower the processing latency and improve the on-time IO throughput—receive their input data units as directed to specific destination app-task instances, while the external parties are allowed to communicate with a given application program hosted on a system 1 through a single, constant contact point (the 'master' task hosted on the entry stage processor, possibly with its specified instance).
- Specifications below assume there to be X IO ports, Y core slots on a processor 500, M application programs configured and up to N instances per each application for a processor 500, and up to T tasks (or processing stages) per a given application (instance), wherein the capacity parameters X, Y, M, N and T are some positive integers, and wherein the individual ports, cores, applications, tasks and instances, are identified with their ID#s ranging from 0 to said capacity parameter value less 1 for each of the measures (ports, cores, apps, instances, tasks or processing stages).
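For illustration, the capacity parameter notation above could be captured as in the following sketch; the concrete values are arbitrary examples consistent with the running sixteen-core, sixteen-application example later in this specification, not values mandated by the architecture:

```python
# Illustrative capacity parameters for one processor 500, per the
# notation above; the concrete values are hypothetical examples only.

from dataclasses import dataclass

@dataclass(frozen=True)
class CapacityParams:
    X: int  # IO ports, IDs 0..X-1
    Y: int  # core slots, IDs 0..Y-1
    M: int  # application programs, IDs 0..M-1
    N: int  # max instances per application, IDs 0..N-1
    T: int  # max tasks (processing stages) per application, IDs 0..T-1

params = CapacityParams(X=16, Y=16, M=16, N=16, T=4)
```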
The invention is described herein in further detail by illustrating the novel concepts in reference to the drawings. General symbols and notations used in the drawings:
- Boxes indicate a functional digital logic module; unless otherwise specified for a particular embodiment, such modules may comprise both software and hardware logic functionality.
- Arrows indicate a digital signal flow. A signal flow may comprise one or more parallel bit wires. The direction of an arrow indicates the direction of the primary flow of information associated with it with regard to the discussion of the system functionality herein, but does not preclude information flow also in the opposite direction.
- A dotted line marks a border of a group of drawn elements that form a logical entity with internal hierarchy, such as the modules constituting the multi-core processing fabric 110 in
- Lines or arrows crossing in the drawings are decoupled unless otherwise marked.
- For clarity of the drawings, generally present signals for typical digital logic operation, such as clock signals, or enable, address and data bit components of write or read access buses, are not shown in the drawings.
General operation of the application load adaptive, multi-stage parallel data processing system 1 per
While the processing of any given application (server program) at a system 1 is normally parallelized and/or pipelined, and involves multiple tasks (many of which tasks and instances thereof can execute simultaneously on the manycore arrays of the processors 300), the system enables external parties to communicate with any such application hosted on the system 1 without having to know about any specifics (incl. existence, status, location) of their internal tasks or parallel instances thereof. As such, the incoming data units to the system 1 are expected to identify just their destination application (and, where it matters, the application instance number), rather than any particular task within it. Moreover, the system enables external parties to communicate with any given application hosted on a system 1 through any of the network ports 10, 50 without knowing whether or at which cores any instance of the given application task (app-task) may be executing at any time. Furthermore, the architecture enables the aforesaid flexibility and efficiency through its hardware logic functionality, so that no system or application software running on the system 1 needs to either be aware of whether or where any of the instances of any of the app-tasks may be executing at any given time, or through which port any given inter-task or external communication may have occurred or be occurring. Thus the system 1, while providing a highly dynamic, application workload adaptive usage of the system processing and communications resources, allows the software running on and/or remotely using the system to be designed with a straightforward, abstracted view of the system: the software (both the server programs hosted on a system 1 as well as the clients and other remote agents interacting with such programs hosted on the system) can assume that all applications (as well as all their tasks and instances thereof) hosted on the given system 1 are always executing on their virtual dedicated processor cores within the system. Where useful, said virtual dedicated processors can also be considered by software to be timeshare slices on a single (very high speed) processor. The architecture thereby enables achieving, at the same time, both the vital application software development productivity (a simple, virtual static view of the actually highly dynamic processing hardware) together with high program runtime performance (scalable parallel program execution with minimized overhead) and resource efficiency (adaptively optimized resource allocation) benefits. Techniques enabling such benefits of the architecture are described in the following through a more detailed technical study of the system 1 and its subsystems.
The XC 200 subsystems per
Note that in
Moreover, the set of applications 610 (
As seen in the example of the table above, the sum of the task ID#s (with each task ID# representing the workload ranking of its task within its application) is the same for any row, i.e., for each of the four processing stages of this example. Applying this load balancing scheme for differing numbers of processing stages, tasks and applications is straightforward based on the above example and the discussion herein. In such system-wide processing load balancing schemes supported by the system 1, a key idea is that each worker stage processor 300 gets one of the tasks from each of the applications, so that collectively the tasks configured for any given worker stage processor 500 have the intra-app task IDs of the full range from ID#0 through ID#T−1, with one task of each ID# value (wherein the intra-app task ID#s are assigned for each app according to their descending busyness level), so that the overall task processing load is, as much as possible, equal across all worker-stage processors 300 of the system 1. Advantages of these schemes supported by systems 1 include achieving optimal utilization efficiency of the processing resources and eliminating or at least minimizing the possibility or effects of any of the worker-stage processors 300 forming system wide performance bottlenecks. In
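One possible construction satisfying the equal-task-ID-sum property described above is a simple modulo rotation, sketched below; the modulo construction is an illustrative choice, not one mandated by the specification:

```python
# Sketch of a task-to-stage placement with equalized task-ID sums per stage.
# The modulo-rotation construction here is illustrative, not mandated.

def place_tasks(num_apps, num_stages):
    """Returns placement[stage][app] = intra-app task ID hosted there.
    Task ID 0 is the busiest task of each app, per the convention above."""
    return [[(app + stage) % num_stages for app in range(num_apps)]
            for stage in range(num_stages)]

placement = place_tasks(num_apps=4, num_stages=4)
# Every stage hosts one task of each app, and the per-stage task-ID sums
# match, equalizing the expected aggregate load across worker stages.
assert all(sum(row) == sum(placement[0]) for row in placement)
```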
In the following, we continue by exploring the internal structure and operation of a given processing stage 300, a high level functional block diagram for which is shown in
As illustrated in
The RX logic connecting the input packets from the input ports 290 to the local processing cores arranges the data from all the input ports 290 according to their indicated destination applications and then provides for each core of the manycore processor 500 read access to the input packets for the app-task instance executing on the given core at any given time. At this point, it shall be recalled that there is one app-task hosted per processing stage 500 per each of the applications 610 (
The main operation of the RX logic shown in
The input packets arriving over the input ports are demuxed by individual RX network port specific demultiplexers (demux:s) 405 to their indicated (via overhead bits) destination app-inst and input port specific FIFO buffers 410. At the RX subsystem 400, there will thus be FIFOs 410 specific to each input port 290 for each app-inst able to run on the manycore processor 500. In
Logic at each application scope FIFO module 420 signals 430 to the manycore processor system 500 the present processing load level of the application as a number of ready-to-execute instances of the given app-task, as well as the priority order of such instances. An app-inst is taken as ready to execute when it has unread input data in its FIFO 410. As discussed in greater depth in connection with
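The load signaling 430 just described can be modeled behaviorally as follows; this sketch assumes the per-port FIFO fill levels are observable, and the structure names are illustrative:

```python
# Behavioral sketch of the per-application load signaling 430: an app-inst
# is "ready" when any of its port FIFOs 410 holds unread data, and the
# application's Core Demand Figure (CDF) is its count of ready instances.

def app_load_status(inst_fifos):
    """inst_fifos[inst_id] = list of per-port FIFO depths for that instance.
    Returns (CDF, instance priority list in descending readiness)."""
    ready = {}
    for inst_id, fifo_depths in inst_fifos.items():
        nonempty = sum(1 for depth in fifo_depths if depth > 0)
        if nonempty > 0:
            ready[inst_id] = nonempty
    cdf = len(ready)  # number of ready-to-execute instances
    # Prioritize instances with data on more ports (ties by instance ID).
    priority_list = sorted(ready, key=lambda i: (-ready[i], i))
    return cdf, priority_list

cdf, prio = app_load_status({0: [3, 0], 1: [0, 0], 2: [1, 2]})
assert cdf == 2 and prio == [2, 0]
```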
For the info flow 430 (
The RX logic subsystem 400 is implemented by digital hardware logic and is able to operate without software involvement. Note that the concept of software involvement as used in this specification relates to active, dynamic software operation, not to configuration of the hardware elements according to aspects and embodiments of the invention through software, where no change in such configuration is needed to accomplish the functionality according to this specification.
This specification continues by describing the internal elements and operation of the processor system 500 (for the processing system 300 of
Any of the cores 520 of a system 500 can comprise any types of software program processing hardware resources, e.g. central processing units (CPUs), graphics processing units (GPUs), digital signal processors (DSPs) or application specific processors (ASPs) etc., and in programmable logic (FPGA) implementation, the core type for any core slot 520 is furthermore reconfigurable per expressed demands 430 of the active app-tasks.
As illustrated in
A hardware logic based controller module 540 within the processor system 500, through a periodic process, allocates and assigns the cores 520 of the processor 500 among the set of applications 610 (
Note that the verb “to assign” is used herein reciprocally, i.e., it can refer, depending on the perspective, both to assignment of cores 520 to app-inst:s 640 (see
The controller module 540 is implemented by digital hardware logic within the system, and the controller exercises its repeating algorithms, including those of process 700 per
Note also that, among the applications 620 there can be supervisory or maintenance software programs for the system 500, used for instance to support configuring other applications 620 for the system 500, as well as provide general functions such as system boot-up and diagnostics.
In the context of
The process 700, periodically selecting and mapping the to-be-executing instances of the set 610 of applications to the array of processing cores within the processor 500, involves the following steps:
- (1) allocating 710 the array 515 of cores among the set of applications 610, based on CDFs 530 and CEs 717 of the applications, to produce for each application 620 a number of cores 520 allocated to it 715 (for the time period in between the current and the next run of the process 700); and
- (2) based at least in part on the allocating 710, for each given application that was allocated at least one core: (a) selecting 720, according to the app-inst priority list 535, the highest priority instances of the given application for execution corresponding to the number of cores allocated to the given application, and (b) mapping 730 each selected app-inst to one of the available cores of the array 515, to produce, i) per each core of the array, an identification 560 of the app-inst that the given core was assigned to, as well as ii) for each app-inst selected for execution on the fabric 515, an identification 550 of its assigned core.
The periodically produced and updated outputs 550, 560 of the controller 540 process 700 will be used for periodically re-configuring connectivity through the mux:s 450 (FIG. 4) and 580 (FIG. 5) as well as the fabric memory access subsystem 800, as described in the following with references to FIGS. 8-10.
Fabric Memory Access Subsystem for Manycore Processor Per
Based on the control 560 by the controller 540 for a given core indicating that it will be subject to an app-inst switchover, the currently executing app-inst is made to stop executing and its processing state from the core is backed up 810, 940 (
Note that applying of updated app-inst ID# configurations 560 for the core specific mux:s 1020 of XC 870 (see
The XC 830 comprises a set of app-inst specific mux:s 910, each of which selects the write and read control access bus from the set 810 identified 550 to it for write direction access 940 to its associated app-inst specific segment 950 at the memory array 850. Each such app-inst specific mux 910 makes these selections based on control 550 from the controller 540 that identifies the core (if any) presently assigned to process its associated app-inst.
At the digital logic design level, the write access (incl. read control) bus instance within the set 810 from the core ID #y (y is an integer between 0 and Y−1) is connected to the data input #y of each mux 910 of XC 830, so that the identification 550 of the appropriate source core ID# by the controller to a given mux 910 causes the XC 830 to connect the write and read control buses 810 from the core array 515 to the proper app-inst specific segments 950 within the memory 850. The controller 540 uses information from an application instance ID# addressed look-up-table per the Table 4 format (shown later in this specification, under the heading 'Summary of process flow and information formats . . . ') in supplying the present processing core (if any) identifications 550 to the application instance specific mux:s 910 of XC 830. (The info flow 550 also includes a bit indicating whether a given app-inst was selected for execution at a given time; if not, this active/inactive app-inst indicator bit causes the mux:s 910 to disable write access to such app-inst's memory 950.)
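The write-direction selection behavior of the mux:s 910 can be modeled in software as below; this is a simplified behavioral sketch, with bus and table structures as illustrative assumptions and hardware details such as bus widths and timing omitted:

```python
# Simplified software model of the write-direction crossconnect XC 830:
# each app-inst specific mux 910 selects the bus of its assigned core per
# control 550; an inactive instance's memory segment is write-disabled.

def xc830_write(core_write_buses, inst_to_core, memory_segments):
    """core_write_buses[core_id] = (addr, data, write_enable) or None.
    inst_to_core[app_inst] = (core_id, active_flag), per control 550.
    memory_segments[app_inst] = dict modeling that inst's segment 950."""
    for app_inst, (core_id, active) in inst_to_core.items():
        if not active:
            continue  # inactive app-inst: write access disabled
        bus = core_write_buses[core_id]
        if bus is not None:
            addr, data, write_enable = bus
            if write_enable:
                memory_segments[app_inst][addr] = data
```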
In addition to write data, address and enable (and any other relevant write access signals), the buses 810 and 940 include the read access control signals including the read address to memory 950, from their source cores to their presently assigned processing app-inst:s' memory segments 950, to direct read access from the cores of the array 515 to the memory array 850, which function is illustrated in
The XC 870 (see
Similar to the digital logic level description of the mux 910 (in connection to
Module-Level Implementation Specifications for the Application Instance to Core Placement Process:
The steps of the process 700 (
Objectives for the core allocation algorithm 710 include maximizing the processor 500 core utilization (i.e., generally minimizing, and so long as there are ready app-inst:s, eliminating core idling), while ensuring that each application gets at least up to its entitled (e.g. a contract based minimum) share of the processor 500 core capacity whenever it has processing load to utilize such amount of cores. Each application configured for a given manycore processor 500 is specified its entitled quota 717 of the cores, at least up to which quantity of cores it is to be allocated whenever it is able to execute on such number of cores in parallel; the sum of the applications' core entitlements (CEs) 717 is not to exceed the total number of core slots in the given processor 500. Each application program on the processor 500 gets from each run of the algorithm 710:
- (1) at least the lesser of its (a) CE 717 and (b) Core Demand Figure (CDF) 530 worth of the cores (and in case (a) and (b) are equal, the ‘lesser’ shall mean either of them, e.g. (a)); plus
- (2) as much beyond that to match its CDF as is possible without violating condition (1) for any application on the processor 500; plus
- (3) the application's even division share of any cores remaining unallocated after conditions (1) and (2) are satisfied for all applications 610 sharing the processor 500.
The algorithm 710 allocating cores 520 to application programs 620 runs as follows:
- (i) First, any CDFs 530 by all application programs up to their CE 717 of the cores within the array 515 are met. E.g., if a given program #P had its CDF worth zero cores and entitlement for four cores, it will be allocated zero cores by this step (i). As another example, if a given program #Q had its CDF worth five cores and entitlement for one core, it will be allocated one core by this stage of the algorithm 710. To ensure that each app-task will be able to at least communicate with other tasks of its application at some defined minimum frequency, the step (i) of the algorithm 710 allocates for each application program, regardless of the CDFs, at least one core once in a specified number (e.g. sixteen) of process 700 runs.
- (ii) Following step (i), any processing cores remaining unallocated are allocated, one core per program at a time, among the application programs whose demand 530 for processing cores had not been met by the amounts of cores so far allocated to them by preceding iterations of this step (ii) within the given run of the algorithm 710. For instance, if after step (i) there remained eight unallocated cores and the sum of unmet portions of the program CDFs was six cores, the program #Q, based on the results of step (i) per above, will be allocated four more cores by this step (ii) to match its CDF.
- (iii) Following step (ii), any processing cores still remaining unallocated are allocated among the application programs evenly, one core per program at a time, until all the cores of the array 515 are allocated among the set of programs 610. Continuing the example case from steps (i) and (ii) above, this step (iii) will allocate the remaining two cores to certain two of the programs (one for each). Programs with zero existing allocated cores, e.g. program #P from step (i), are prioritized in allocating the remaining cores at the step (iii) stage of the algorithm 710.
Moreover, the iterations of steps (ii) and (iii) per above are started from a revolving application program ID# within the set 610, e.g. so that the application ID# to be served first by these iterations is incremented by one (returning to ID#0 after reaching the highest application ID#) for each successive run of the process 700 and the algorithm 710 as part of it. Furthermore, the revolving start app ID#s for the steps (ii) and (iii) are kept at an offset from each other equal to the number of app:s sharing the processor divided by two.
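The allocation steps (i)-(iii) can be summarized by the following behavioral sketch; the revolving-start handling is simplified (the half-set offset between steps (ii) and (iii) is omitted), as is the periodic minimum-allocation rule of step (i), and all names are illustrative:

```python
# Behavioral sketch of core allocation algorithm 710 (steps i-iii above),
# with simplified rotation handling; not the hardware implementation.

def allocate_cores(cdf, ce, total_cores, start_id=0):
    """cdf[a] = Core Demand Figure, ce[a] = Core Entitlement of app a.
    Returns cores allocated per app; the sum equals total_cores."""
    num_apps = len(cdf)
    # Step (i): meet each app's demand up to its entitlement.
    alloc = [min(cdf[a], ce[a]) for a in range(num_apps)]
    remaining = total_cores - sum(alloc)
    # Step (ii): one core per app at a time to apps with unmet CDF,
    # starting from a revolving app ID#.
    progress = True
    while remaining > 0 and progress:
        progress = False
        for k in range(num_apps):
            a = (start_id + k) % num_apps
            if remaining > 0 and alloc[a] < cdf[a]:
                alloc[a] += 1
                remaining -= 1
                progress = True
    # Step (iii): spread any cores still left evenly, prioritizing apps
    # with zero cores so far, again from the revolving start ID#.
    order = sorted(range(num_apps),
                   key=lambda a: (alloc[a] != 0, (a - start_id) % num_apps))
    while remaining > 0:
        for a in order:
            if remaining == 0:
                break
            alloc[a] += 1
            remaining -= 1
    return alloc

# Per the example in the text: app #P (CDF 0, CE 4) gets 0 cores from
# step (i); app #Q (CDF 5, CE 1) gets 1 from step (i) and 4 from step (ii).
```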
Accordingly, all cores 520 of the array 515 are allocated on each run of the related algorithms 700 according to the applications' processing load variations while honoring their contractual entitlements. The allocating of the array of cores 515 by the algorithm 710 is done so as to minimize the greatest amount of unmet demand for cores (i.e., the greatest difference between the CDF and the allocated number of cores for any given application 620) among the set of programs 610, while ensuring that any given program gets at least its entitled share of the processing cores following such runs of the algorithm for which it demanded 530 at least such entitled share 717 of the cores.
To study further details of the process 700, let us consider the cores of the processor 500 to be identified as core #0 through core #(Y−1). For simplicity and clarity of the description, we will from hereon consider an example processor 500 under study with a relatively small number Y of sixteen cores. We further assume here a scenario with a similarly small number of sixteen application programs configured to run on that processor 500, with these applications identified for the purpose of the description herein alphabetically, as application #A through application #P. Note however that the architecture presents no actual limits for the number of cores, applications or their instances for a given processor 500. For example, instances of the processor 500 can be configured with a number of applications that is lesser or greater than (as well as equal to) the number of cores.
Following the allocation 710 of the set of cores 515 among the applications 610, for each active application on the processor 500 (i.e., each application that was allocated one or more cores by the latest run of the core allocation algorithm 710), the individual ready-to-execute app-inst:s 640 are selected 720 and mapped 730 to the number of cores allocated to the given application. One schedulable 640 app-inst is assigned per one core 520 by each run of the process 700.
The app-inst selection 720 step of the process 700 produces, for each given application of the set 610, lists 725 of to-be-executing app-inst:s to be mapped 730 to the subset of cores of the array 515. Note that, as part of the periodic process 700, the selection 720 of to-be-executing app-inst:s for any given active application (one that was allocated 710 at least one core) is done, in addition to following a change in the allocation 710 of cores among applications, also following a change in the app-inst priority list 535 of the given application, including when not in connection with a reallocation 710 of cores among the applications. The active app-inst to core mapping 730 is done logically individually for each application, however keeping track of which cores are available for any given application (by first assigning for each application their respective subsets of cores among the array 515 and then running the mapping 730 in parallel for each application that has new app-inst:s to be assigned to their execution cores).
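A behavioral sketch of the selection sub-process 720 for a single application follows; the data shapes are illustrative assumptions:

```python
# Sketch of app-inst selection 720: per application, pick its highest
# priority instances up to its core allocation 715, taking ready instances
# first from the priority list 535, then other (e.g. waiting) instances.

def select_instances(allocated_cores, priority_list, all_inst_ids):
    """priority_list = ready inst IDs in descending priority (list 535).
    Returns the to-be-executing instance list 725 for this application."""
    selected = priority_list[:allocated_cores]
    if len(selected) < allocated_cores:
        # Fewer ready instances than allocated cores: fill with the highest
        # priority non-ready instances, so they can begin executing on
        # their assigned cores as soon as they become ready.
        others = [i for i in all_inst_ids if i not in selected]
        selected += others[:allocated_cores - len(selected)]
    return selected

assert select_instances(3, [7, 2], [0, 1, 2, 3, 7]) == [7, 2, 0]
```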
The app-inst to core mapping algorithm 730 for any application begins by keeping any continuing app-inst:s, i.e., app-inst:s selected to run on the array 515 both before and after the present app-inst switchovers, mapped to their current cores also on the next allocation period. After that rule is met, any newly selected app-inst:s for the application are mapped to available cores. Specifically, assuming that a given application was allocated k (a positive integer) cores beyond those used by its continuing app-inst:s, the k highest priority ready but not-yet-mapped app-inst:s of the application are mapped to the k next available (i.e. not-yet-assigned) cores within the array 515 allocated to the application. In case a given application had fewer than k ready but not-yet-mapped app-inst:s, the highest priority other (e.g. waiting, not ready) app-inst:s are mapped to the remaining available cores among the number of cores allocated to the given application; these other app-inst:s can thus directly begin executing on their assigned cores once they become ready. The placing of newly selected app-inst:s, i.e., selected instances of applications beyond the app-inst:s continuing over the switchover transition time, is done by mapping such yet-to-be-mapped app-inst:s in incrementing app-inst ID# order to available cores in incrementing core ID# order.
Summary of Process Flow and Information Formats Produced and Consumed by Main Stages of the App-Inst to Core Mapping Process:
According to an embodiment of the invention, the production of updated mappings 560, 550 between selected app-inst:s 725 and the processing core slots 520 of the processor 500 by the process 700 (
The RX logic 400 produces for each application 620 its CDF 530, e.g. an integer between 0 and the number of cores within the array 515, expressing how many concurrently executable app-inst:s 640 the application presently has ready to execute. The information format 530, as used by the core allocation phase of the process 700, is such that logic within the core allocation module 710 repeatedly samples the application CDF bits written 430 to it by the RX logic 400 (
Regarding Table 1 above, note that the values of entries shown are simply examples of possible values of some of the application CDFs, and that the CDF values of the applications can change arbitrarily for each new run of the process 700 and its algorithm 710 using snapshots of the CDFs.
Based (in part) on the application ID# indexed CDF array 530 per Table 1 above, the core allocation algorithm 710 of the process 700 produces another similarly formatted application ID indexed table, whose entries 715 at this stage are the number of cores allocated to each application on the processor 500, as shown in Table 2 below:
Regarding Table 2 above, note again that the values of entries shown are simply examples of possible numbers of cores allocated to some of the applications after a given run of the algorithm 710, as well as that in hardware logic this array 715 can be simply the numbers of cores allocated per application, as the application ID# for any given entry of this array is given by the index # of the given entry in the array 715.
The app-inst selection sub-process 720, done individually for each application of the set 610, uses as its inputs the per-application core allocations 715 per Table 2 above, as well as priority ordered lists 535 of ready app-inst IDs of any given application. Each such application specific list 535 has the (descending) app-inst priority level as its index, and, as the values stored at each such indexed element, the intra-application scope instance ID#, plus, for processors 500 supporting reconfigurable core slots, an indication of the target core type (e.g. CPU, DSP, GPU or a specified ASP) demanded by the app-inst, as shown in the example of Table 3 below:
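Since the example tables in this section carry placeholder values by the specification's own account, the following sketch merely illustrates the shapes of the Table 1-3 data as they might appear in a software model; all values are hypothetical:

```python
# Illustrative shapes of the controller's inputs (Tables 1-3 above);
# the concrete values are hypothetical examples only.

# Table 1 format: application ID# indexed CDF array 530.
cdf_530 = {"A": 3, "B": 0, "C": 5}          # cores demanded per app

# Table 2 format: application ID# indexed core allocations 715.
alloc_715 = {"A": 3, "B": 1, "C": 4}        # cores allocated per app

# Table 3 format: per-application priority-indexed list 535 of ready
# instance IDs, with a demanded core type where slots are reconfigurable.
prio_535 = {
    "A": [(2, "CPU"), (0, "DSP"), (5, "CPU")],  # (inst ID#, core type)
}
```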
Notes regarding implicit indexing and non-specific examples used for values per Tables 1-2 apply also for Table 3.
The RX logic 400 writes 430, for each application 620 of the set 610, the intra-app instance priority list 535 per Table 3 to the controller 540, to be used as an input for the active app-inst selection sub-process 720, which produces per-application listings 725 of selected app-inst:s, along with their corresponding target core types where applicable. Based at least in part on the application specific active app-inst listings 725, the core to app-inst assignment algorithm module 730 produces an array 550 indexed with the application and instance IDs, providing as its contents the assigned processing core ID (if any), per Table 4 below:
Finally, by inverting the roles of index and contents from Table 4, an array 560 expressing to which app-inst ID# each given core of the fabric 510 got assigned, per Table 5 below, is formed. Specifically, Table 5 is formed by using as its index the contents of Table 4, i.e., the core ID numbers (other than those marked 'Y'), and as its contents the app-inst ID index from Table 4 corresponding to each core ID# (along with, where applicable, the core type demanded by the given app-inst, with the core type for any given selected app-inst being denoted as part of the information flow 725 (
Regarding Tables 4 and 5 above, note that the symbolic application IDs (A through P) used here for clarity will in digital logic implementation map into numeric representations, e.g. in the range from 0 through 15. Also, the notes per Tables 1-3 above regarding the implicit indexing (i.e., core ID for any given app-inst ID entry is given by the index of the given entry, eliminating the need to store the core IDs in this array) apply for the logic implementation of Tables 4 and 5 as well.
In hardware logic implementation the application and the intra-app-inst IDs of Table 5 are bitfields of same digital entry at any given index of the array 560; the application ID bits are the most significant bits (MSBs) and the app-inst ID bits the least significant (LSBs), and together these identify the active app-inst's memory 950 in the memory array 850 (for the core with ID# equaling the given index to app-inst ID# array per Table 5).
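The described bitfield packing can be illustrated as follows; the bit widths here assume sixteen applications with sixteen instances each, per the running example, and are not fixed by the architecture:

```python
# Sketch of the array 560 entry encoding described above: application ID
# bits as MSBs, instance ID bits as LSBs (widths illustrative: 4 + 4 bits).

APP_BITS, INST_BITS = 4, 4

def pack_app_inst(app_id, inst_id):
    return (app_id << INST_BITS) | inst_id

def unpack_app_inst(entry):
    return entry >> INST_BITS, entry & ((1 << INST_BITS) - 1)

entry = pack_app_inst(app_id=3, inst_id=9)
assert unpack_app_inst(entry) == (3, 9)
# The packed value directly addresses the app-inst's segment 950 in the
# memory array 850 for the core at the given index of array 560.
```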
By comparing Tables 4 and 5 above, it is seen that the information contents at Table 4 are the same as at Table 5; the difference in purposes between them is that while Table 5 gives for any core slot 520 its active app-inst ID#560 to process (along with the demanded core type), Table 4 gives for any given app-inst its processing core 550 (if any at a given time). As seen from
Note further that, according to the process 700, when the app-inst to core placement module 730 gets an updated list of selected app-inst:s 725 for one or more applications 620 (following a change in either or both of core to application allocations 715 or app-inst priority lists 535 of one or more applications), it will be able to identify from Tables 4 and 5 the following:
- I. The set of activating, to-be-mapped, app-inst:s, i.e., app-inst:s within lists 725 not mapped to any core by the previous run of the placement algorithm 730. This set I is produced by taking those app-inst:s from the updated selected app-inst lists 725, per Table 4 format, whose core ID# was ‘Y’ (indicating app-inst not active) in the latest Table 4;
- II. The set of deactivating app-inst:s, i.e., app-inst:s that were included in the previous, but not in the latest, selected app-inst lists 725. This set II is produced by taking those app-inst:s from the latest Table 4 whose core ID# was not ‘Y’ (indicating app-inst active) but that were not included in the updated selected app-inst lists 725; and
- III. The set of available cores, i.e., cores 520 which in the latest Table 5 were assigned to the set of deactivating app-inst:s (set II above).
The placer module 730 uses the above info to map the active app-inst:s to cores of the array in a manner that keeps the continuing app-inst:s executing on their present cores, thereby maximizing utilization of the core array 515 for processing the user applications 620. Specifically, the placement algorithm 730 maps the individual app-inst:s 640 within the set I of activating app-inst:s in their increasing app-inst ID# order for processing at core instances within the set III of available cores in their increasing core ID# order.
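The bookkeeping of sets I-III and the resulting remapping can be sketched as follows; this is a behavioral model in which the core-type matching refinement described in the next paragraph is omitted and the data shapes are illustrative:

```python
# Sketch of the placement bookkeeping per sets I-III above: continuing
# app-inst:s keep their cores; activating instances (set I) are mapped in
# ascending inst ID order onto freed cores (set III) in ascending core ID
# order. NONE stands for the 'Y' marker (no core assigned).

NONE = None

def remap(prev_inst_to_core, selected_insts):
    """prev_inst_to_core: dict per Table 4 (app-inst -> core ID or NONE).
    selected_insts: updated lists 725 flattened to a set of app-inst IDs.
    Returns the new app-inst -> core mapping."""
    # Set I: selected but previously unmapped instances.
    activating = sorted(i for i in selected_insts
                        if prev_inst_to_core.get(i, NONE) is NONE)
    # Set II: previously mapped instances no longer selected.
    deactivating = [i for i, c in prev_inst_to_core.items()
                    if c is not NONE and i not in selected_insts]
    # Set III: cores freed by the deactivating instances.
    available = sorted(prev_inst_to_core[i] for i in deactivating)
    # Continuing instances keep their present cores.
    new_map = {i: c for i, c in prev_inst_to_core.items()
               if c is not NONE and i in selected_insts}
    for inst, core in zip(activating, available):
        new_map[inst] = core
    return new_map

prev = {("A", 0): 0, ("A", 1): 1, ("B", 0): 2}
new = remap(prev, {("A", 0), ("A", 2), ("B", 1)})
assert new == {("A", 0): 0, ("A", 2): 1, ("B", 1): 2}
```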
Moreover, regarding placement of activating app-inst:s (set I as discussed above), the placement algorithm 730 seeks to minimize the number of core slots for which the activating app-inst demands a different execution core type than the deactivating app-inst did. I.e., the placer will, to the extent possible, place activating app-inst:s to such core slots where the deactivating app-inst had the same execution core type. E.g., an activating app-inst demanding the DSP type execution core will be placed to a core slot where the deactivating app-inst also had run on a DSP type core. This sub-step in placing the activating app-inst:s to their target core slots uses as one of its inputs the new and preceding versions of the (core slot ID indexed) app-inst ID and core type arrays per Table 5, to allow matching activating app-inst:s and the available core slots according to the core type.

Architectural Cost-Efficiency Benefits
Advantages of the system capacity utilization and application performance optimization techniques described in the foregoing include:
- Increased user's utility, measured as demanded-and-allocated cores per unit cost, as well as, in most cases, allocated cores per unit cost
- Increased revenue generating capability for the service provider from CE based billables, per unit cost for a system 1. This enables increasing the service provider's operating cash flows generated or supported by a system 1 of certain cost level. Also, compared to a given computing service provider's revenue level, this reduces the provider's cost of revenue, allowing the provider to offer more competitive contract pricing, by passing on at least a portion of the savings to the customers (also referred to as users) running programs 620 on the system 1, thereby further increasing the customer's utility of the computing service subscribed to (in terms of compute capacity received when needed, specifically, number of cores allocated and utilized for parallel program execution) per unit cost of the service.
At a more technical level, the dynamic parallel processing techniques per
Moreover, the hardware operating system 540 and the processing fabric memory access subsystem 800 (described in relation to
To summarize, the dynamic parallel execution environment provided by the system 1 enables each application program to dynamically get a maximized number of cores that it can utilize concurrently so long as such demand-driven core allocation allows all applications on the system to get at least up to their entitled number of cores whenever their processing load actually so demands.
The presented architecture moreover provides straightforward IO as well as inter-app-task communications for the set of application (server) programs configured to run on the system per
To achieve this, the architecture involves an entry-stage ('master-stage') processing system (typically with the master tasks of the set of applications 610 hosted on it), which distributes the received data processing workloads to worker-stage processing systems, which host the rest of the tasks of the application programs, with the exception of the parts (tasks) of the programs hosted on the exit stage processing system, which typically assembles the processing results from the worker stage tasks for transmission to the appropriate external parties. External users and applications communicate directly with the entry (and, in their receive direction, exit) stage processing system, i.e., with the master tasks of each application, and these master tasks pass on data load units (requests/messages/files/streams) for processing by the worker tasks on the worker-stage processing systems, with each such data unit identified by their app-task instance ID#s, and with the app ID# bits inserted by the controllers 540, to ensure inter-task communications stay within their authorized scope, by default within the local application. There may be multiple instances of any given (locally hosted) app-task executing simultaneously on both the entry/exit as well as worker stage manycore processors, to accommodate variations in the types and volumes of the processing workloads at any given time, both between and within the applications 620 (
The received and buffered data loads to be processed drive, at least in part, the dynamic allocating and assignment of cores among the app-inst:s at any given stage of processing by the multi-stage manycore processing system, in order to maximize the total (value adding, e.g. revenue-generating) on-time IO data processing throughput of the system across all the applications on the system.
The architecture provides a straightforward way for the hosted applications to access and exchange their IO and inter-task data without concern of through which input/output ports any given IO data units may have been received or are to be transmitted at any given stage of processing, or whether or at which cores of their host processors any given source or destination app-task instances may be executing at any given time. External parties (e.g. client programs) interacting with the (server) application programs hosted on the system 1 are likewise able to transact with such applications through a virtual static contact point, i.e., the (initially non-specific, and subsequently specifiable instance of the) master task of any given application, while within the system the applications are dynamically parallelized and/or pipelined, with their app-task instances able to activate, deactivate and be located without restrictions.
The dynamic parallel program execution techniques thus enable dynamically optimizing the allocation of parallel processing capacity among a number of concurrently running application software programs, in a manner that is adaptive to the realtime processing loads of the applications, with minimized system (hardware and software) overhead costs. Furthermore, the system provides the following benefits:
- Practically all the application processing time of all the cores across the system is made available to the user applications, as no common system software needs to run on the cores (e.g., to perform traditional system software tasks such as time-tick processing, interrupt servicing, scheduling, placing applications and their tasks onto cores, billing, policing, etc.).
- The application programs experience no considerable delays in waiting for access to their (e.g., contract-based) entitled share of the system processing capacity, as any number of the processing applications configured for the system can run on it concurrently, with a dynamically optimized number of parallel (incl. pipelined) cores allocated per application.
- The allocation of the processing time across all the cores of the system among the application programs sharing the system is adaptive to realtime processing loads of these applications.
- There is inherent security (including, where desired, isolation) between the individual processing applications in the system, as each application resides in its dedicated (logical) segments of the system memories, and can safely use the shared processing system effectively as if it were the sole application running on it. This hardware-based security among the application programs and tasks sharing the manycore data processing system per FIGS. 1-10 further facilitates more straightforward, cost-efficient and faster development and testing of the applications and tasks to run on such systems, as undesired interactions between the different user application programs can be disabled already at the system hardware resource access level.
The dynamic parallel execution techniques thus enable maximizing data processing throughput per unit cost across all the user applications configured to run on the shared multi-stage manycore processing system.
The presented manycore processor architecture, with hardware-based scheduling and context switching, accordingly ensures that any given application gets at least its entitled share of the dynamically shared parallel processing capacity whenever it is actually able to utilize at least its entitled quota, and as much capacity beyond that quota as is possible without blocking access to the entitled and fair share of any other application program that is actually able, at that time, to utilize capacity it is entitled to. For instance, the dynamic parallel execution architecture thus enables any given user application to get access to the full processing capacity of the manycore system whenever that application is the sole application offering processing load for the shared manycore system.
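A concrete scenario makes this guarantee tangible. The snippet below is a toy model with assumed numbers (16 cores, two applications entitled to 8 cores each) and deliberately simplified two-round logic; it demonstrates the two boundary cases just described: a sole offered load may take the whole array, while under full contention each application is still assured exactly its entitled share.

```python
CORES = 16
ENTITLED = {"A": 8, "B": 8}  # assumed contract-based entitlements

def demand_limited_share(demand):
    # First meet each app's demand up to its entitlement, then hand
    # leftover cores to apps with unmet demand (cf. the rounds above).
    alloc = {a: min(d, ENTITLED[a]) for a, d in demand.items()}
    free = CORES - sum(alloc.values())
    for a, d in demand.items():
        extra = min(d - alloc[a], free)
        alloc[a] += extra
        free -= extra
    return alloc

# Sole offered load: application A receives the full processing capacity.
assert demand_limited_share({"A": 16, "B": 0}) == {"A": 16, "B": 0}
# Full contention: each application is held to, and assured of, its share.
assert demand_limited_share({"A": 16, "B": 16}) == {"A": 8, "B": 8}
```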
The patent documents incorporated by reference above provide further reference specifications and use cases for aspects and embodiments of the invented techniques.
This description and the drawings are included to illustrate the architecture and operation of practical, illustrative example embodiments of the invention, but are not meant to limit its scope. For instance, even though the description specifies certain system parameters to certain types and values, persons of skill in the art will realize, in view of this description, that any design utilizing the architectural or operational principles of the disclosed systems and methods, with any set of practical types and values for the system parameters, is within the scope of the invention. For instance, in view of this description, persons of skill in the art will understand that the disclosed architecture sets no actual limit on the number of cores in a given system, or on the maximum number of applications or tasks that can execute concurrently. Moreover, the system elements and process steps, though shown as distinct to clarify the illustration and the description, can in various embodiments be merged or combined with other elements, or further subdivided and rearranged, without departing from the spirit and scope of the invention. It will also be obvious that the systems and methods disclosed herein can be implemented using various combinations of software and hardware. Finally, persons of skill in the art will realize that various embodiments of the invention can use different nomenclature and terminology to describe the system elements, process phases and other technical concepts in their respective implementations. Generally, from this description many variants will be understood by one skilled in the art that are yet encompassed by the spirit and scope of the invention.
1. A system for dynamic computing resource management, the system comprising:
- a first hardware logic subsystem configured to periodically, for at least some of successive core allocation periods (CAPs), execute an allocation of an array of processing cores among a set of software programs, where each of the set of software programs has one or more instances of the corresponding program, said subsystem comprising: (i) hardware logic configured to carry out a first round of the allocation, by which round a subset of the cores are allocated among the programs so that any actually materialized demands for the cores by each of the programs up to their respective entitled shares of the cores are met; and (ii) hardware logic configured to carry out a second round of the allocation, by which round any of the cores that remain unallocated after the first round are allocated among the programs whose materialized demands for the cores had not been met by amounts of the cores so far allocated to them by the present execution of the allocation;
- a second hardware logic subsystem for buffering input data for the instances of the set of programs at an array of program instance specific input data buffers, wherein a given buffer within said array buffers such input data that is directed to the program instance associated with the given buffer, and wherein the materialized demand for the cores by a given one of the programs, for an upcoming CAP, is expressed as a digital value that is formed at least in part based on numbers of non-empty input data buffers of the given program during the ongoing CAP; and
- a third hardware logic subsystem for assigning individual program instances of the set to individual cores of the array in a manner that assigns each such instance of the programs, which was selected, following the allocation, for execution on the array of cores on consecutive CAPs, to the same one of the cores for execution on each of such consecutive CAPs.
2. The system of claim 1, wherein the materialized demand for the cores by a given one of the programs is expressed as a number of schedulable instances that the given program has ready for execution during the ongoing CAP.
3. The system of claim 2, wherein the number of schedulable instances of the given program is determined at least in part based on a number of instances of the given program that have input data available for processing during the ongoing CAP.
4. The system of claim 3, wherein an instance of the given program has dedicated for it a set of hardware based input data buffers, and wherein said instance is deemed to have data available for processing when at least one of its dedicated input data buffers is non-empty.
5. The system of claim 1, wherein the number of schedulable instances that the given program has ready for execution for the CAP following the present execution of the allocation is formed independently of (1) the respective numbers for other programs of the set, (2) the other programs' utilizations of any cores allocated to them, and (3) utilization of the cores across the array.
6. The system of claim 1, wherein, on at least some executions of the allocation, the subset of the cores allocated by the first round comprises zero cores, whereas, on at least some of the other executions of the allocation, the subset of the cores allocated by the first round comprises at least one, and up to all, of the cores.
7. The system of claim 1 further comprising hardware logic configured to carry out a third round of the allocation, by which round any of the cores that remain unallocated after the second round are allocated among the programs.
8. The system of claim 1 implemented by hardware logic.
9. The system of claim 1, wherein the second hardware logic subsystem and the third hardware logic subsystem are implemented by hardware logic that operates, at least on some of the CAPs, without software involvement.
10. A dynamic computing resource management method comprising the steps of:
- a sub-process for periodically, for at least some of successive core allocation periods (CAPs), executing, by first hardware logic, an allocation of an array of processing cores among a set of software programs, where each of the set of software programs has one or more instances of the corresponding program, said allocation comprising: (i) a first round of the allocation, by which round a subset of the cores are allocated among the programs so that any actually materialized demands for the cores by each of the programs up to their respective entitled shares of the cores are met; and (ii) a second round of the allocation, by which round any of the cores that remain unallocated after the first round are allocated among the programs whose materialized demands for the cores had not been met by amounts of the cores so far allocated to them by the present execution of the allocation; and
- a sub-process for assigning, by second hardware logic, individual program instances of the set of programs to individual cores of the array in a manner that assigns each such instance of the programs, which was selected, at least in part based on the allocation, for execution on the array of cores on consecutive CAPs, to the same one of the cores for execution on each of such consecutive CAPs,
- wherein: any given instance of a given one of the programs has an array of one or more input data buffers dedicated to the given instance, and the materialized demand for the cores by the given program is determined at least in part based on a number of instances of the given program that have input data available in at least one buffer within their respective arrays of input data buffers during the ongoing CAP.
11. The method of claim 10, further comprising buffering, by third hardware logic, input data for instances of the programs at an array of program instance specific input data buffers, wherein a given buffer within said array buffers such input data that is directed to the program instance associated with the given buffer.
12. The method of claim 11, wherein the materialized demand for the cores by a given one of the programs, for an upcoming CAP, is expressed as a digital value that is formed at least in part based on numbers of non-empty input data buffers of the given program during the ongoing CAP.
13. The method of claim 10, wherein the number of schedulable instances that the given program has ready for execution for the CAP following the present execution of the allocation is formed independently of (1) the respective numbers for other programs of the set, (2) the other programs' utilizations of any cores allocated to them, and (3) utilization of the cores across the array.
14. The method of claim 10, wherein, on at least some executions of the allocation, the subset of the cores allocated by the first round comprises zero cores, whereas, on at least some of the other executions of the allocation, the subset of the cores allocated by the first round comprises at least one, and up to all, of the cores.
15. The method of claim 10 implemented entirely by hardware logic.
16. The method of claim 10 implemented by hardware logic that operates, at least on some of the CAPs, without software involvement.
17. The method of claim 10, wherein the sub-process for executing the allocation further comprises a third round of the allocation, by which round any of the cores that remain unallocated after the second round are allocated among the programs.
18. A system for computing resource management, comprising:
- a first sub-system for periodically, for at least some of successive core allocation periods (CAPs), executing an allocation of an array of processing cores among a set of software programs, where each of the set of software programs has one or more instances of the corresponding program, said sub-system comprising: (i) a module for carrying out a first round of the allocation, by which round a subset of the cores are allocated among the programs so that any actually materialized demands for the cores by each of the programs up to their respective entitled shares of the cores are met; and (ii) a module for carrying out a second round of the allocation, by which round any of the cores that remain unallocated after the first round are allocated among the programs whose materialized demands for the cores had not been met by amounts of the cores so far allocated to them by the present execution of the allocation; and
- a second sub-system for assigning individual program instances of the set of programs to individual cores of the array in a manner that assigns each such instance of the programs, which was selected, based at least in part on the allocation, for execution on the array of cores on consecutive CAPs, to the same one of the cores for execution on each of such consecutive CAPs,
- wherein: each of the first sub-system and the second sub-system is implemented in hardware logic, any given instance of a given one of the programs has an array of one or more input data buffers dedicated to the given instance, and the materialized demand for the cores by the given program is determined at least in part based on a number of instances of the given program that have data available in at least one buffer within their respective arrays of input data buffers during the ongoing CAP.
19. The system of claim 18, wherein the first sub-system for executing the allocation further comprises a module for carrying out a third round of the allocation, by which round any of the cores that remain unallocated after the second round are allocated among the programs.
20. The system of claim 18, wherein each of the first hardware logic and the second hardware logic operates, at least on some of the CAPs, without software involvement.
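The assignment behavior recited in claims 1, 10 and 18, keeping an instance that is selected on consecutive CAPs on the same core, can be pictured with the following sketch. It is a software model only; the placement map and function name are assumptions for illustration, and it presumes the allocation never selects more instances than there are cores.

```python
def assign_cores(selected, prev_placement, num_cores):
    """Place selected app-instances onto cores for the next CAP.

    selected:       set of (app, instance) pairs chosen by the allocation
    prev_placement: {(app, instance): core} mapping of the ongoing CAP
    """
    placement = {}
    # Instances selected on consecutive CAPs keep their current cores,
    # avoiding any context transfer between cores.
    for inst in selected:
        if inst in prev_placement:
            placement[inst] = prev_placement[inst]
    # Newly activated instances fill whichever cores remain free
    # (assumes len(selected) <= num_cores).
    free_cores = [c for c in range(num_cores) if c not in placement.values()]
    for inst in selected:
        if inst not in placement:
            placement[inst] = free_cores.pop()
    return placement
```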
International Classification: G06F 9/46 (20060101); G06F 15/173 (20060101); H04L 12/28 (20060101); G06F 9/50 (20060101); G06F 9/54 (20060101); G06F 9/48 (20060101); H04L 12/933 (20130101);