Configurable logic platform with reconfigurable processing circuitry
An architecture for load-balanced groups of multi-stage manycore processors shared dynamically among a set of software applications, with capabilities for destination task defined intra-application prioritization of inter-task communications (ITC), for architecture-based ITC performance isolation between the applications, as well as for prioritizing application task instances for execution on cores of the manycore processors based at least in part on which of the task instances have available to them the input data, such as ITC data, that they need to execute.
This application is a continuation of U.S. application Ser. No. 17/470,926 filed Sep. 9, 2021, which is a continuation application of U.S. application Ser. No. 17/463,098 filed Aug. 31, 2021 (now U.S. Pat. No. 11,347,556), which is a continuation application of U.S. application Ser. No. 17/344,636 filed Jun. 10, 2021 (now U.S. Pat. No. 11,188,388), which is a continuation application of U.S. application Ser. No. 17/195,174 filed Mar. 8, 2021 (now U.S. Pat. No. 11,036,556), which is a continuation application of U.S. application Ser. No. 16/434,581 filed Jun. 7, 2019 (now U.S. Pat. No. 10,942,778), which is a continuation application of U.S. application Ser. No. 15/267,153 filed Sep. 16, 2016 (now U.S. Pat. No. 10,318,353), which is a continuation application of U.S. application Ser. No. 14/318,512 filed Jun. 27, 2014 (now U.S. Pat. No. 9,448,847), which claims the benefit and priority of the following provisional applications:
[1] U.S. Provisional Application No. 61/934,747 filed Feb. 1, 2014; and
[2] U.S. Provisional Application No. 61/869,646 filed Aug. 23, 2013;
This application is also related to the following patented applications:
[3] U.S. Utility application Ser. No. 13/184,028, filed Jul. 15, 2011;
[4] U.S. Utility application Ser. No. 13/270,194, filed Oct. 10, 2011;
[5] U.S. Utility application Ser. No. 13/277,739, filed Nov. 21, 2011;
[6] U.S. Utility application Ser. No. 13/297,455, filed Nov. 16, 2011;
[7] U.S. Utility application Ser. No. 13/684,473, filed Nov. 23, 2012;
[8] U.S. Utility application Ser. No. 13/717,649, filed Dec. 17, 2012;
[9] U.S. Utility application Ser. No. 13/901,566, filed May 24, 2013; and
[10] U.S. Utility application Ser. No. 13/906,159, filed May 30, 2013.
All above identified applications are hereby incorporated by reference in their entireties for all purposes.
BACKGROUND

Technical Field

This invention pertains to the field of information processing, particularly to techniques for managing execution of multiple concurrent, multi-task software programs on parallel processing hardware.
Description of the Related Art

Conventional microprocessor and computer system architectures rely on system software for handling runtime matters relating to sharing processing resources among multiple application programs and their instances, tasks, etc., as well as for orchestrating the concurrent (parallel and/or pipelined) execution between and within the individual applications sharing the given set of processing resources. However, the system software itself consumes an ever increasing portion of the system processing capacity as the number of applications, their instances and tasks, and the pooled processing resources grow, and as the optimizations of the dynamic resource management among the applications and their tasks need to be performed more frequently, in response to variations in the processing loads of the applications, their instances and tasks, and other variables of the processing environment. As such, the conventional approaches for supporting dynamic execution of concurrent programs on shared processing capacity pools will not scale well.
This presents significant challenges to the scalability of the networked utility (‘cloud’) computing model, in particular as there will be a continuously increasing need for greater degrees of concurrent processing also at intra-application levels, in order to increase individual applications' on-time processing throughput performance without the automatic speed-up from processor clock rates, which is no longer available due to the practical physical and economic constraints faced by semiconductor and other hardware implementation technologies.
To address the challenges per above, there is a need for inventions enabling scalable, multi-application dynamic concurrent execution on parallel processing systems, with high resource utilization efficiency, high application processing on-time throughput performance, as well as built-in, architecture-based security and reliability.
SUMMARY

An aspect of the invention provides systems and methods for arranging secure and reliable, concurrent execution of a set of internally parallelized and pipelined software programs on a pool of processing resources shared dynamically among the programs, wherein the dynamic sharing of the resources is based at least in part on i) processing input data loads for instances and tasks of the programs and ii) contractual capacity entitlements of the programs.
An aspect of the invention provides methods and systems for intelligent, destination task defined prioritization of inter-task communications (ITC) for a computer program, for architectural ITC performance isolation among a set of programs executing concurrently on a dynamically shared data processing platform, as well as for prioritizing instances of the program tasks for execution at least in part based on which of the instances have available to them their input data, including ITC data, enabling any given one of such instances to execute at the given time.
An aspect of the invention provides a system for prioritizing instances of a software program for execution. Such a system comprises: 1) a subsystem for determining which of the instances are ready to execute on an array of processing cores, at least in part based on whether a given one of the instances has available to it input data to process, and 2) a subsystem for assigning a subset of the instances for execution on the array of cores based at least in part on the determining. Various embodiments of that system include further features such as features whereby a) the input data is from a data source to which the given instance has assigned a high priority for purposes of receiving data; b) the input data is such that it enables the given program instance to execute; c) the subset includes cases of none, some, as well as all of the instances of said program; d) the instance is: a process, a job, a task, a thread, a method, a function, a procedure, or an instance of any of the foregoing, or an independent copy of the given program; and/or e) the system is implemented by hardware logic that is able to operate without software involvement.
An aspect of the invention provides a hardware logic implemented method for prioritizing instances of a software program for execution, with such a method involving: classifying instances of the program into the following classes, listed in decreasing execution priority order: (I) instances indicated as having high priority input data for processing, and (II) any other instances. Various embodiments of that method include further steps and features such as features whereby a) the other instances are further classified into the following sub-classes, listed in their decreasing execution priority order: (i) instances indicated as able to execute presently without the high priority input data, and (ii) any remaining instances; b) the high priority input data is data that is from a source from which its destination instance, of said program, is expecting high priority input data; c) a given instance of the program comprises tasks, with one of said tasks referred to as a destination task and the others as source tasks of the given instance, and for the given instance, a unit of the input data is considered high priority if it is from such one of the source tasks that the destination task has assigned a high priority for inter-task communications to it; d) for any given one of the instances, a step of computing the number of its non-empty source task specific buffers, among its input data buffers, that belong to source tasks of the given instance indicated at the time as high priority source tasks for communications to the destination task of the given instance, with this number referred to as the H number for its instance, and wherein, within class (I), the instances are prioritized for execution at least in part according to the magnitudes of their H numbers, in descending order, such that an instance with a greater H number is prioritized before an instance with a lower H number; e) in case of two or more of the instances tied for the greatest H number, such tied instances are prioritized at least in part according to their respective total numbers of non-empty input data buffers; and/or f) at least one of the instances is either a process, a job, a task, a thread, a method, a function, a procedure, or an instance of any of the foregoing, or an independent copy of the given program.
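As an illustration of this prioritization method, the following Python sketch (purely illustrative; the patent specifies this as hardware logic operating without software involvement, and all names here are hypothetical) classifies instances into classes (I) and (II) and orders them per features a), d) and e) above:

```python
from dataclasses import dataclass, field

@dataclass
class Instance:
    inst_id: int
    # Source-task ID -> count of buffered input data units.
    input_buffers: dict = field(default_factory=dict)
    # Source-task IDs the destination task has marked high priority.
    high_priority_sources: set = field(default_factory=set)
    # Whether the instance can presently execute without high priority input.
    can_execute_now: bool = False

def h_number(inst: Instance) -> int:
    # Count of non-empty input buffers that belong to high priority source tasks.
    return sum(1 for src, n in inst.input_buffers.items()
               if n > 0 and src in inst.high_priority_sources)

def non_empty(inst: Instance) -> int:
    return sum(1 for n in inst.input_buffers.values() if n > 0)

def execution_priority_order(instances):
    # Class I: H > 0, ranked by descending H number, ties broken by the
    # total number of non-empty input buffers (feature e) above).
    class_i = sorted((i for i in instances if h_number(i) > 0),
                     key=lambda i: (h_number(i), non_empty(i)), reverse=True)
    # Class II: instances able to execute presently are ranked ahead of
    # any remaining instances (sub-classes (i) and (ii) above).
    class_ii = sorted((i for i in instances if h_number(i) == 0),
                      key=lambda i: i.can_execute_now, reverse=True)
    return class_i + class_ii
```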
An aspect of the invention provides a system for processing a set of computer program instances, with inter-task communications (ITC) performance isolation among the set of program instances. Such a system comprises: 1) a number of processing stages; and 2) a group of multiplexers connecting ITC data to a given stage among the processing stages, wherein a multiplexer among said group is specific to one given program instance among said set. The system hosts each task of the given program instance at a different one of the processing stages, and supports copies of the same task software code being located at more than one of the processing stages in parallel. Various embodiments of this system include further features such as: a) a feature whereby at least one of the processing stages comprises multiple processing cores, such as CPU execution units, with, for any of the cores, at any given time, one of the program instances assigned for execution; b) a set of source task specific buffers for buffering data destined for a task of the given program instance located at the given stage, referred to as a destination task, and hardware logic for forming a hardware signal indicating whether sending ITC is presently permitted to a given buffer among the source task specific buffers, with such forming based at least in part on a fill level of the given buffer, and with such a signal being connected to the source task that the given buffer is specific to; c) a feature providing, for the destination task, a set of source task specific buffers, wherein a given buffer is specific to one of the other tasks of the program instance for buffering ITC from said other task to the destination task; d) a feature wherein the destination task provides ITC prioritization information for other tasks of the program instance located at their respective ones of the stages; e) a feature whereby the ITC prioritization information is provided by the destination task via a set of one or more hardware registers, with each register of the set specific to one of the other tasks of the program instance, and with each register configured to store a value specifying a prioritization level of the task that it is specific to, for purposes of ITC communications to the destination task; f) an arbitrator controlling from which source task of the program instance the multiplexer specific to that program instance will read its next ITC data unit for the destination task; and/or g) a feature whereby the arbitrator prioritizes source tasks of the program instance for selection by the multiplexer to read its next ITC data unit based at least in part on at least one of: (i) source task specific ITC prioritization information provided by the destination task, and (ii) source task specific availability information of ITC data for the destination task from the other tasks of the program instance.
Accordingly, aspects of the invention involve application-program-instance specific hardware logic resources for secure and reliable ITC among tasks of application program instances hosted at processing stages of a multi-stage parallel processing system. Rather than seeking to inter-connect the individual processing stages or cores of the multi-stage manycore processing system as such, the invented mechanisms efficiently inter-connect the tasks of any given application program instance using the per application program instance specific inter-processing-stage ITC hardware logic resources. Due to the ITC being handled with such application program instance specific hardware logic resources, the ITC performance experienced by one application instance does not depend on the ITC resource usage (e.g. data volume and inter-task communications intensiveness) of the other applications sharing the given data processing system per the invention. This results in effective inter-application isolation for ITC in a multi-stage parallel processing system shared dynamically among multiple application programs.
An aspect of the invention provides systems and methods for scheduling instances of software programs for execution based at least in part on (1) availability of input data of differing priorities for any given one of the instances and/or (2) availability, on their fast-access memories, of memory contents needed by any given one of the instances to execute.
An aspect of the invention provides systems and methods for optimally allocating and assigning input port capacity of a data processing system among data streams of multiple software programs based at least in part on input data load levels and contractual capacity entitlements of the programs.
An aspect of the invention provides systems and methods for resolution of resource access contentions, for resources including computing, storage and communication resources such as memories, queues, ports or processors. Such methods enable multiple potential user systems of a shared resource to avoid conflicting resource access decisions in a coordinated and fair manner, even while multiple user systems are deciding on access to a set of shared resources concurrently, including at the same clock cycle.
An aspect of the invention provides systems and methods for load balancing, whereby the first layer of the load balancer forwards any packets it receives from its network input that have no destination instance specified within their destination application (referred to as no-instance-specified packets, or NIS packets for short) to the one of the processing systems in the local load balancing group that presently has the highest score for accepting NIS packets for the destination app of the given NIS packet. The load balancers further have destination processing system (i.e., for each given application, instance group) specific sub-modules, which, for NIS packets forwarded to them by the first layer balancing logic, specify a destination instance among the available, presently inactive instance resources of the destination app of a given NIS packet to which to forward the given NIS packet. In at least some embodiments of the invention, the score for accepting NIS packets for a destination processing system among the load balancing group is based at least in part on the amount of presently inactive instance resources at the given processing system for the destination application of a given NIS packet.
The FIGS. and related descriptions in the following provide specifications for embodiments and aspects of hardware-logic based systems and methods for inter-task communications (ITC) with destination task defined source task prioritization, for input data availability based prioritization of instances of a given application task for execution on processing cores of a processing stage hosting the given task, for architecture-based application performance isolation for ITC in a multi-stage manycore data processing system, as well as for load balancing of incoming processing data units among a group of such processing systems.
The invention is described herein in further detail by illustrating the novel concepts in reference to the drawings. General symbols and notations used in the drawings:
- Boxes indicate a functional module comprising digital hardware logic.
- Arrows indicate a digital signal flow. A signal flow may comprise one or more parallel bit wires. The direction of an arrow indicates the direction of the primary flow of information associated with it with regard to the discussion of the system functionality herein, but does not preclude information flow also in the opposite direction.
- A dotted line marks a border of a group of drawn elements that form a logical entity with internal hierarchy.
- An arrow reaching to the border of a hierarchical module indicates connectivity of the associated information to/from all sub-modules of the hierarchical module.
- Lines or arrows crossing in the drawings are decoupled unless otherwise marked.
- For clarity of the drawings, generally present signals for typical digital logic operation, such as clock signals, or enable, address and data bit components of write or read access buses, are not shown in the drawings.
General notes regarding this specification (incl. text in the drawings):
- For brevity: ‘application (program)’ is occasionally written as ‘app’, ‘instance’ as ‘inst’, and ‘application-task/instance’ as ‘app-task/inst’, and so forth.
- Terms software program, application program, application and program are used interchangeably in this specification, and each generally refers to any type of executable computer program.
- In FIG. 5, and throughout the related discussions, the buffers 260 are considered to be First-in First-Out (FIFO) buffers; however, buffer types other than first-in first-out can also be used in various embodiments.
Illustrative embodiments and aspects of the invention are described in the following with references to the FIGS.
The load balancing per FIG. 1 operates as follows:
- The processing systems 1 count, for each of the application programs (apps) hosted on them:
- a number X of their presently inactive instance resources, i.e., the number of additional parallel instances of the given app at the given processing system that could be activated at the time; and
- from the above number, the portion Y (if any) of the additional activatable instances within the Core Entitlement (CE) level of the given app, wherein the CE is a number of processing cores at (any one of) the processing stages of the given processing system up to which the app in question is assured to get its requests for processing cores (to be assigned for its active instances) met;
- the difference W=X−Y. The quantities X and/or W and Y, per each of the apps hosted on the load balancing group 2, are signaled 5 from each processing system 1 to the load balancers 4.
- In addition, load balancing logic 4 computes the collective sum Z of the Y numbers across all the apps (with this across-apps-sum Z naturally being the same for all apps on a given processing system).
- From the above numbers, for each app, the load balancer module 4 computes a no-instance-specified (NIS) packet forwarding preference score (NIS score) for each processing system in the given load balancing group with the formula A*Y + B*W + C*Z, where A, B and C are software programmable coefficients, defaulting to e.g. A=4, B=1 and C=2 (a code sketch of this computation appears after this bullet list).
- In forming the NIS scores for a given app (by formula per above), a given instance of the app under study is deemed available for NIS packets at times that the app instance software has set an associated device register bit (specific to that app-inst) to an active value, and unavailable otherwise. The multiplexing (muxing) mechanism used to connect the app-instance software, from whichever core at its host manycore processor it may be executing at any given time, to its app-instance specific memory, is used also for connecting the app-instance software to its NIS-availability control device register.
- The NIS availability control register of a given app-instance is also reset automatically by the processing stage RX logic hardware whenever there is data at the input buffer for the given app-instance (even when the app-instance software would otherwise still keep its NIS availability control register in its active state).
- Each of the processing systems in the given load balancing group signals its NIS scores for each app hosted on the load balancing group to each of the load balancers 4 in front of the row 2 of processing systems. Also, the processing systems 1 provide to the load balancers app specific vectors (as part of info flows 9) indicating which of their local instance resources of the given app are available for receiving NIS packets (i.e. packets with no destination instance specified).
- Data packets from the network inputs 10 to the load balancing group include bits indicating whether a given packet is a NIS packet, i.e., one that has its destination app, but no particular instance of that app, specified. The load balancer 3 forwards any NIS packet it receives from its network input 10 to the processing system 1 in the local load balancing group 2 with the highest NIS score for the destination app of the given NIS packet. (In case of ties among the processing systems for the NIS score for the given destination app, the logic forwards the packet to the processing system among such tied systems based on their ID #, e.g. to the system with the lowest ID #.) The forwarding of a NIS packet to a particular processing system 1 (in the load balancing group 2 of such systems) is done by this first layer of load balancing logic by forming packet write enable vectors, where each given bit is a packet write enable bit specific to the processing system, within the given load balancing group, of the same system index # as the given bit in its write enable bit vector. For example, the processing system ID #2 from a load balancing group of processing systems of ID #0 through ID #4 takes the bit at index 2 of the packet write enable vectors from the load balancers of the given group. In a straightforward scheme, the processing system #K within a given load balancing group hosts the instance group #K of each of the apps hosted by this group of the processing systems (where K = 0, 1, . . . , number of processing systems in the load balancing group minus 1).
- The load balancers 3 further have destination processing system 1 (i.e., for each given app, instance group) specific submodules, which, for NIS packets forwarded to them by the first layer balancing logic (per above), specify a destination instance among the available (presently inactive) instance resources of the destination app of a given NIS packet to which to forward the given NIS packet. In a straightforward scheme, for each given NIS packet forwarded to it, this instance group specific load balancing submodule selects, from the at-the-time available instances of the destination app, within the instance group that the given submodule is specific to, the instance resource with the lowest ID #.
- For other (not NIS) packets, the load balancer logic 3 simply forwards a given (non NIS) packet to the processing system 1 in the load balancing group 2 that hosts, for the destination app of the given packet, the instance group of the identified destination instance of the packet.
- According to the forwarding decisions per the above bullet points, the (conceptual, actually distributed per the destination processing systems) packet switch module 6 filters packets from the output buses 15 of the load balancers 3 to the input buses 19 of the destination processing systems, so that each given processing system 1 in the load balancing group 2 receives as active packet transmissions (marked e.g. by write enable signaling) on its input bus 19, from the packets arriving from the load balancer inputs 10, those packets that were indicated as destined to the given system 1 at entry to the load balancers, as well as the NIS packets that the load balancers of the set 4 forwarded to that given system 1.
- Note also that the network inputs 10 to the load balancers, as well as all the bold data path arrows in the FIGS., may comprise a number of parallel (e.g. 10 Gbps) ports.
- The load balancing logic implements coordination among port modules of the same balancer, so that any given NIS packet is forwarded, according to the above destination instance selection logic, to one of such app-instances that is not, at the time of the forwarding decision, already being forwarded a packet (incl. forwarding decisions made at the same clock cycle) by port modules with higher preference rank (e.g. based on lower port #) of the same balancer. Note that each processing system supports receiving packets destined for the same app-instance concurrently from different load balancers (as explained below).
- The load balancers 3 support, per each app-inst, a dedicated input buffer per each of the external input ports (within the buses 10) to the load balancing group. The system thus supports multiple packets being received (both via the same load balancer module 3, as well as across the different load balancer modules per FIG. 1) simultaneously for the same app-instances via multiple external input ports. From the load balancer input buffers, data packets are muxed to the processing systems 1 of the load balancing group so that the entry-stage processor of each of the multi-stage systems (see FIG. 2) in such group receives data from the load balancers similarly as the non-entry-stage processors receive data from the other processing stages of the given multi-stage processing system—i.e., in a manner that the entry stage (like the other stages) will get data per each of its app-instances via at most one of its input ports per (virtual) source stage at any given time; the load balancer modules of the given load balancing group (FIG. 1) thus appear as virtual source processing stages to the entry stage of the multi-stage processing systems of such load balancing group. The aforesaid functionality is achieved by logic at module 4 as detailed below:
  - To eliminate packet drops in cases where packets directed to the same app-inst arrive in a time-overlapping manner through multiple input ports (within the buses 10) of the same balancer 3, destination processing system 1 specific submodules at modules 3 buffer input data 15 destined for the given processing system 1 at app-inst specific buffers, and assign the processing system 1 input ports (within the bus 19 connecting to their associated processing system 1) among the app-insts so that each app-inst is assigned at any given time at most one input port per load balancer 3. (Note that inputs to a processing system 1 from different load balancers 3 are handled by the entry stage (FIG. 2) the same way as the other processing stages 300 handle inputs from different source stages, as detailed in connection to FIG. 5—in a manner that supports concurrent reception of packets to the same destination app-inst from multiple source stages.) More specifically, the port capacity 19 for transfer of data from the load balancers 4 to the given processing system 1 entry-stage buffers gets assigned using the same algorithm as is used for assignment of processing cores among the app-instances at the processing stages (FIG. 7), i.e., in a realtime input data load adaptive manner, while honoring the contractual capacity entitlements and fairness among the apps for actually materialized demands. This algorithm—which allocates at most one of the cores per each of the app-insts for the core allocation periods following each of its runs, and similarly assigns at most one of the ports at buses 19 to the given processing system 1 per each of the app-inst specific buffers queuing data destined for that processing system from any given source load balancer 3—is specified in detail in [1], Appendix A, Ch. 5.2.3. By this logic, the entry stage of the processing system (FIG. 2) will get its input data the same way as the other stages, and there is thus no need to prepare for cases of multiple packets destined to the same app-inst arriving simultaneously at any destination processing stage from any of its source stages or load balancers. This logic also ensures that any app with moderate input bandwidth consumption will get its contractually entitled share of the processing system input bandwidth (i.e., the logic protects moderate bandwidth apps from more input data intensive neighbors).
- Note that since packet transfer within a load balancing group (incl. within the sub-modules of the processing systems) is between app-instance specific buffers, with all the overhead bits (incl. destination app-instance ID) transferred and buffered as parallel wires alongside the data, core allocation period (CAP) boundaries will not break the packets while being transferred from the load balancer buffers to a given processing system 1 or between the processing stages of a given multi-stage system 1.
The mechanisms per the above three bullet points are designed to eliminate all packet drops in the system that are avoidable by system design, i.e., those for reasons other than app-instance specific buffer overflows caused by systemic mismatches between the input data load to a given app-inst and the capacity entitlement level subscribed to by the given app.
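To make the NIS score computation and forwarding rule above concrete, here is a minimal Python sketch (illustrative only; the patent implements this in hardware logic, and the data structures and names below are assumptions):

```python
# Example default coefficients; A, B and C are software programmable.
A, B, C = 4, 1, 2

def nis_score(x: int, y: int, z: int) -> int:
    # X = presently inactive instance resources, Y = the portion of those
    # within the app's Core Entitlement (CE), W = X - Y, and Z = the sum of
    # the Y numbers across all apps at the given processing system.
    w = x - y
    return A * y + B * w + C * z

def forward_nis_packet(systems: list, app: str) -> int:
    """systems: per processing system, a dict of app -> (X, Y).
    Returns the ID # of the processing system to receive a NIS packet for
    'app': highest NIS score, ties resolved toward the lowest ID #."""
    scores = []
    for counts in systems:
        z = sum(y for (_x, y) in counts.values())
        x, y = counts[app]
        scores.append(nis_score(x, y, z))
    best = max(scores)
    return min(i for i, s in enumerate(scores) if s == best)

# Example: two processing systems hosting apps 'app0' and 'app1'.
systems = [{'app0': (3, 2), 'app1': (1, 1)},   # processing system ID #0
           {'app0': (2, 2), 'app1': (4, 2)}]   # processing system ID #1
print(forward_nis_packet(systems, 'app0'))     # -> 1 (score 16 beats 15)
```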
In the architecture per FIG. 2, the general operation of the application load adaptive, multi-stage parallel data processing system is as follows:
The application program tasks executing on the entry stage manycore processor are typically of ‘master’ type for parallelized/pipelined applications, i.e., they manage and distribute the processing workloads for ‘worker’ type tasks running (in pipelined and/or parallel manner) on the worker stage manycore processing systems (note that the processor system hardware is similar across all instances of the processing stages 300). The instances of master tasks typically do preliminary processing (e.g. message/request classification, data organization) and workflow management based on given input data units (packets), and then typically involve appropriate worker tasks at their worker stage processors to perform the data processing called for by the given input packet, potentially in the context of and in connection with other related input packets and/or other data elements (e.g. in memory or storage resources accessible by the system) referred to by such packets. (The processors also have access to system memories through interfaces additional to the IO ports shown in FIG. 2.)
To provide isolation among the different applications configured to run on the processors of the system, by default the hardware controller of each processor 300, rather than any application software (executing on a given processor), inserts the application ID # bits for the data packets passed to the PS 200. That way, the tasks of any given application running on the processing stages in a system can trust that the packets they receive from the PS are from their own application. Note that the controller determines, and therefore knows, the application ID # that each given core within its processor is assigned to at any given time, via the application-instance to core mapping info that the controller produces. Therefore the controller is able to insert the presently-assigned app ID # bits for the inter-task data units being sent from the cores of its processing stage over the core-specific output ports to the PS.
While the processing of any given application (server program) at a system per FIG. 2 is normally parallelized and/or pipelined, the architecture allows the placement and execution of the application's tasks and their instances across the processing stages to be varied flexibly and efficiently at runtime.
Notably, the architecture enables the aforesaid flexibility and efficiency through its hardware logic functionality, so that no system or application software running on the system needs to either keep track of whether or where any of the instances of any of the app-tasks may be executing at any given time, or of which port any given inter-task or external communication may have used. Thus the system, while providing a highly dynamic, application workload adaptive usage of the system processing and communications resources, allows the software running on and/or remotely using the system to be designed with a straightforward, abstracted view of the system: the software (both remote and local programs) can assume that all the applications, and all their tasks and instances, hosted on the given system are always executing on their virtual dedicated processor cores within the system. Where useful, said virtual dedicated processors can also be considered by software to be time-share slices on a single (unrealistically high speed) processor.
The presented architecture thereby enables achieving, at the same time, both the vital application software development productivity (simple, virtual static view of the actually highly dynamic processing hardware) together with high program runtime performance (scalable concurrent program execution with minimized overhead) and resource efficiency (adaptively optimized resource allocation) benefits. Techniques enabling such benefits of the architecture are described in the following through more detailed technical description of the system 1 and its subsystems.
The any-to-any connectivity among the app-tasks of all the processing stages 300 provided by the PS 200 enables organizing the worker tasks (located at the array of worker stage processors) flexibly to suit the individual demands (e.g. task inter-dependencies) of any given application program on the system: the worker tasks can be arranged to conduct the work flow for the given application using any desired combinations of parallel and pipelined processing. E.g., it is possible to have the same task of a given application located on any number of the worker stages in the architecture per FIG. 2.
The set of applications configured to run on the system can have their tasks identified by (intra-app) IDs according to their descending order of relative (time-averaged) workload levels. Under such an (intra-app) task ID assignment principle, the sum of the intra-application task IDs, each representing the workload ranking of its task within its application, of the app-tasks hosted at any given processing system is equalized by appropriately configuring the tasks of differing ID #s, i.e., of differing workload levels, across the applications for each processing system, to achieve optimal overall load balancing. For instance, in the case of T=4 worker stages, if the system is shared among M=4 applications and each of that set of applications has four worker tasks, then for each application of that set, the busiest task (i.e. the worker task most often called for or otherwise causing the heaviest processing load among the tasks of the app) is given task ID #0, the second busiest task ID #1, the third busiest ID #2, and the fourth ID #3. To balance the processing loads across the applications among the worker stages of the system, the worker stage #t gets task ID #(t+m) (rolling over at 3 to 0) of the application ID #m (t=0, 1, . . . , T−1; m=0, 1, . . . , M−1) (note that the master task, ID #4, of each app is located at the entry/exit stages). In this example scenario of four application streams, four worker tasks per app, as well as four worker stages, the above scheme causes the task IDs of the set of apps to be placed at the processing stages per Table 1 below:

TABLE 1

Worker stage | App ID #0 | App ID #1 | App ID #2 | App ID #3
Stage #0 | Task #0 | Task #1 | Task #2 | Task #3
Stage #1 | Task #1 | Task #2 | Task #3 | Task #0
Stage #2 | Task #2 | Task #3 | Task #0 | Task #1
Stage #3 | Task #3 | Task #0 | Task #1 | Task #2
As seen in the example of Table 1, the sum of the task ID #s (with each task ID # representing the workload ranking of its task within its app) is the same for any row, i.e., for each worker stage. This load balancing scheme can be straightforwardly applied for differing numbers of processing stages/tasks and applications, so that the overall task processing load is kept, as much as possible, equal across all worker-stage processors of the system. Advantages of such schemes include achieving optimal utilization efficiency of the processing resources and eliminating, or at least minimizing, the possibility and effects of any of the worker-stage processors forming system-wide performance bottlenecks.
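A minimal sketch of this placement formula (Python for illustration only; it assumes, as in the example above, that the number of worker tasks per app equals the number of worker stages):

```python
def task_placement(num_stages: int, num_apps: int):
    # Worker stage #t hosts task ID #((t + m) mod num_stages) of app #m,
    # so each stage gets exactly one task of each workload rank.
    return [[(t + m) % num_stages for m in range(num_apps)]
            for t in range(num_stages)]

for t, row in enumerate(task_placement(4, 4)):
    # The sum of workload ranks is identical on every stage (0+1+2+3 = 6).
    print(f"worker stage #{t}: task IDs {row}, rank sum = {sum(row)}")
```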
A non-exclusive alternative task-to-stage placement principle targets grouping tasks from the apps in order to minimize any variety among the processing core types demanded by the set of app-tasks placed on any given individual processing stage; that way, if all app-tasks placed on a given processing stage optimally run on the same processing core type, there is no need for reconfiguring the core slots of the manycore array at the given stage regardless of which of the locally hosted app-tasks get assigned to which of its core slots (see [1], Appendix A, Ch. 5.5 for task type adaptive core slot reconfiguration, which may be used when the app-tasks located on the given processing stage demand different execution core types).
Besides the division of the app-specific submodules 202 of the stage RX logic into their app-instance specific submodules 203 (per FIG. 5), the app-specific RX logic per FIG. 4 forms, for each app hosted at the local processing stage, the following info:
- Formation of a request for a number of processing cores (Core Demand Figure, CDF) at the local processing stage by the given app. The logic forms the CDF for the app based on the number of instances of the app that presently have (1) input data at their input buffers (with those buffers located at the instance specific stage RX logic submodules 203 per FIG. 5) and (2) their on-chip fast-access memory contents ready for the given instance to execute without access to the slower-access off-chip memories. In FIG. 4, (1) and (2) per above are signaled to the app-specific RX logic module 209 via the info flows 429 and 499 from the app-inst specific modules 203 (FIG. 5) and 800 (FIG. 7), respectively, per each of the insts of the app under study.
- The priority order of instances of the app for purposes of selecting such instances for execution on the cores of the local manycore processor.
The info per the above two bullet points is sent from the RX logic 202 of each app via the info flow 430 to the controller 540 (FIG. 7) of the local manycore processor 500, for the controller to assign optimal sets of the app-insts for execution on the cores 520 of the processor 500.
The app-instance specific RX logic per FIG. 5 buffers, at source stage specific FIFOs 260, the ITC packets arriving for the local task of its app-instance, and multiplexes 280, 290 packets from these FIFOs toward the local manycore processor.
Note that, when considering the case of the RX logic of the entry-stage processing system of the multi-stage architecture per FIG. 2, the source stages are the load balancers 3 of the local load balancing group, which appear to the entry stage as virtual source processing stages (per the discussion in connection with FIG. 1).
Before the actual multiplexer, the app-instance specific RX logic per FIG. 5 comprises an array 240 of source stage specific FIFO modules 245, with each such module comprising:
- The actual FIFO 260 for queuing packets from its associated source stage that are destined to the local task of the app-instance that the given module per FIG. 5 is specific to.
- A write-side multiplexer 250 (to the above referred FIFO) that (1) takes as its data inputs 20 the processing core specific data outputs 210 (see FIG. 7) from the processing stage that the given source-stage specific FIFO module is specific to, (2) monitors (via the data input overhead bits identifying the app-instance and destination task within it for any given packet transmission) from which one of its input ports 210 (within the bus 20) it may at any given time be receiving a packet destined to the local task of the app-instance that the app-instance specific RX logic under study is specific to, with such an input referred to as the selected input, and (3) connects 255 to its FIFO queue 260 the packet transmission from the present selected input. Note that at any of the processing stages, at any given time, at most one processing core will be assigned for any given app instance. Thus any of the source stage specific FIFO modules 245 of the app-instance RX logic per FIG. 5 can, at any given time, receive data destined to the local task of the app-instance that the given app-instance RX logic module is specific to from at most one of the (processing core specific) data inputs of the write-side multiplexer (mux) 250 of the given FIFO module. Thus there is no need for separate FIFOs per each of the (e.g. 16 core specific) ports of the data inputs 20 at these source stage specific FIFO modules, and instead, just one common FIFO suffices per each given source stage specific buffering module 245.
For clarity, the “local” task refers to the task of the app-instance that is located at the processing stage 300 that the RX logic under study interfaces to, with that processing stage or processor referred to as the local processing stage or processor. Please recall that, per any given app, the individual tasks are located at separate processing stages. Note though that copies of the same task of a given app can be located at multiple processing stages in parallel. Note further that, at any of the processing stages, there can be multiple parallel instances of any given app executing concurrently, and copies of a task can be located in parallel at multiple processing stages of the multi-stage architecture, allowing processing to be sped up via parallel execution at the application and task levels, in addition to between the apps.
The app-instance RX module 203 per FIG. 5 further includes an arbitrator 270 which, together with the source stage prioritization logic 285, selects the source stage specific FIFO 260 from which the multiplexer 280 reads the next ITC packet for the local task of the given app-instance.
Each given app-instance software provides a logic vector 595 to the arbitrating logic 270 of its associated app-instance RX module 203, with a priority indicator bit within it per each of its individual source stage specific FIFO modules 245: while a bit of such a vector relating to a particular source stage is at its active state (e.g. logic ‘1’), ITC from the source stage in question to the local task of the app-instance will be considered high priority, and otherwise normal priority, by the arbitrator logic in selecting the source stage specific FIFO from which to read the next ITC packet for the local (destination) task of the studied app-instance.
The arbitrator selects the source stage specific FIFO 260 (within the array 240 of the local app-instance RX module 203) for reading 265, 290 the next packet per the following source priority ranking algorithm (sketched in code after this list):
- The source priority ranking logic maintains four logic vectors as follows:
- 1) A bit vector wherein each given bit indicates whether a source stage of the same index as the given bit is both assigned by the local (ITC destination) task of the app-instance under study a high priority for ITC to it and has its FIFO 260 fill level above a configured monitoring threshold;
- 2) A bit vector wherein each given bit indicates whether a source stage of the same index as the given bit is both assigned a high priority for ITC (to the task of the studied app-instance located at the local processing stage) and has its FIFO non-empty;
- 3) A bit vector wherein each given bit indicates whether a source stage of the same index as the given bit has its FIFO fill level above the monitoring threshold; and
- 4) A bit vector wherein each given bit indicates whether a source stage of the same index as the given bit has data available for reading.
- The FIFO 260 fill level and data-availability is signaled in FIG. 5 via info flow 261, per each of the source-stage specific FIFO modules 245 of the app-inst specific array 240, to the arbitrator 270 of the app-inst RX module, for the arbitrator, together with its source stage prioritization control logic 285, to select 272 the next packet to read from the optimal source-stage specific FIFO module 245 (as detailed below).
- The arbitrator logic 270 also forms (by logic OR) an indicator bit for each of the above vectors 1) through 4), telling whether the vector associated with the given indicator has any bits in its active state. From these indicators, the algorithm searches for the first vector, starting from vector 1) and proceeding toward vector 4), that has one or more active bits; the logic keeps searching until such a vector is detected.
- From the detected highest priority ranking vector with active bit(s), the algorithm scans bits, starting from the index of the current start-source-stage (and after reaching the max bit index of the vector, continuing from bit index 0), until it finds a bit in an active state (logic ‘1’); the index of such found active bit is the index of the source stage from which the arbitrator controls its app-instance port mux 280 to read 265 its next ITC packet for the local task of the studied app-instance.
- The arbitrator logic uses a revolving (incrementing by one at each run of the algorithm, and returning to 0 from the maximum index) starting source stage number as a starting stage in its search of the next source stage for reading an ITC packet.
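The following Python sketch illustrates the above source priority ranking algorithm (illustrative only; in the patent this is hardware logic, and the function and parameter names here are hypothetical):

```python
def select_source_stage(fifo_fill, high_prio, threshold, start_stage):
    """fifo_fill: per-source-stage FIFO fill levels; high_prio: per-stage
    high priority flags set by the destination task (vector 595);
    threshold: configured FIFO monitoring threshold; start_stage: the
    revolving start index used for the round-robin scan."""
    n = len(fifo_fill)
    vectors = [
        [high_prio[s] and fifo_fill[s] > threshold for s in range(n)],  # 1)
        [high_prio[s] and fifo_fill[s] > 0 for s in range(n)],          # 2)
        [fifo_fill[s] > threshold for s in range(n)],                   # 3)
        [fifo_fill[s] > 0 for s in range(n)],                           # 4)
    ]
    for vec in vectors:              # search vectors in priority order 1)..4)
        if any(vec):                 # the OR-formed indicator bit per vector
            for i in range(n):       # scan from the revolving start index,
                s = (start_stage + i) % n   # wrapping past the max index
                if vec[s]:
                    return s         # source stage to read the next packet from
    return None                      # no ITC data available to read

# Example: 4 source stages; stage 2 is high priority and above threshold.
print(select_source_stage(fifo_fill=[1, 0, 9, 3],
                          high_prio=[False, False, True, False],
                          threshold=4, start_stage=1))   # -> 2
```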
When the arbitrator has the appropriate data source (from the array 240) thus selected for reading 265, 290 the next packet, the arbitrator 270 directs 272 the mux 280 to connect the appropriate source-stage specific signal 265 to its output 290, and accordingly activates, when enabled by the read-enable control 590 from the app-inst software, the read enable 271 signal for the FIFO 260 of the presently selected source-stage specific module 245.
Note that the ITC source task prioritization info 595 from the task software of app-instances to their RX logic modules 203 can change dynamically, as the processing state and demands of input data for a given app-instance task evolve over time, and the arbitrator modules 270 (FIG. 5) apply the prioritization info current at the time of each packet selection decision.
In addition, the app-instance RX logic per FIG. 5 provides flow control for the ITC destined to the local task of its app-instance, as follows:
Each of the source stage specific FIFO modules 245 of a given app-instance at the RX logic for a given processing stage maintains a signal 212 indicating whether the task (of the app instance under study) located at the source stage that the given FIFO 260 is specific to is presently permitted to send ITC to the local (destination) task of the app-instance under study: the logic denies the permit when the FIFO fill level is above a defined threshold, while it otherwise grants the permit.
As a result, any given (source) task, when assigned for execution at a core 520 (FIG. 7) of its local manycore processor, receives via the signals 213 the ITC send permissions from the destination tasks of its app-instance.
Each given processing stage receives and monitors the ITC permit signals 212 from those of the processing stages that the given stage actually is able to send ITC data to.
The ITC permit signal buses 212 will naturally be connected across the multi-stage system 1 between the app-instance specific modules 203 of the RX logic modules 202 of the ITC destination processing stages and the ITC source processing stages (noting that a given stage 300 will be both a source and a destination for ITC, as illustrated in FIG. 2).
Each source task applies these ITC send permission signals from a given destination task of its app-instance at times that it is about to begin sending a new packet over its (assigned execution core specific) processing stage output port 210 to that given destination task. The ITC destination FIFO 260 monitoring threshold for allowing/disallowing further ITC data to be sent to the given destination task (from the source task that the given FIFO is specific to) is set to a level where the FIFO still has room for at least one ITC packet worth of data bytes, with the size of such ITC packets being configurable for a given system implementation, and the source tasks are to restrict the remaining length of their packet transmissions to destination tasks denying the ITC permissions according to such configured limits.
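A minimal sketch of this flow control rule (Python for illustration; the capacity, threshold placement and packet size limit below are assumptions, as the patent leaves these configurable per system implementation):

```python
MAX_ITC_PACKET_BYTES = 256          # configurable per system implementation

class SourceFifo:
    def __init__(self, capacity_bytes: int):
        self.capacity = capacity_bytes
        self.fill = 0
        # The permit threshold leaves room for at least one full ITC packet.
        self.threshold = capacity_bytes - MAX_ITC_PACKET_BYTES

    def send_permitted(self) -> bool:
        """Models signal 212: deny when fill level exceeds the threshold."""
        return self.fill <= self.threshold

    def write(self, nbytes: int):
        assert self.fill + nbytes <= self.capacity
        self.fill += nbytes

fifo = SourceFifo(capacity_bytes=1024)
fifo.write(800)
print(fifo.send_permitted())   # False: the source task must hold off
                               # starting new packets to this destination
```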
The app-level RX logic per FIG. 4 also prioritizes the instances of its app for execution on the local manycore processor, based on the availability of input data at their source stage specific FIFOs 260. With H denoting, for a given instance, the number of its non-empty FIFOs 260 that belong to source tasks assigned high priority by the local (destination) task (i.e., the instance's H number discussed above), L denoting the instance's total number of non-empty FIFOs 260, and T denoting the number of source stages, the logic forms a priority score P for each instance as follows (a code sketch follows the formulas):
- for H > 0: P = (T − 1) + 2H + L; and
- for H = 0: P = L.
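A sketch of this priority scoring (Python for illustration; H, L and T per the definitions assumed above). Note that since L can be at most T, any instance with H > 0 necessarily scores above every instance with H = 0:

```python
def priority_score(h: int, l: int, t: int) -> int:
    # P = (T-1) + 2H + L when H > 0, else P = L.
    return (t - 1) + 2 * h + l if h > 0 else l

# Example: T = 4 source stages; (H, L) pairs for four instances of an app.
instances = {'inst0': (0, 3), 'inst1': (2, 2), 'inst2': (1, 1), 'inst3': (0, 0)}
ranked = sorted(instances,
                key=lambda i: priority_score(*instances[i], 4), reverse=True)
print(ranked)   # -> ['inst1', 'inst2', 'inst0', 'inst3'] (scores 9, 6, 3, 0)
```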
The logic for prioritizing the instances of the given app for its execution priority list 535, via a continually repeating process, signals (via hardware wires dedicated for the purpose) to the controller 540 of the local manycore processor 500 (FIG. 7) the instances of the app in their descending priority score order.
The process periodically starts from priority order 0 (i.e. the app's instance with the greatest priority score P), and steps through the remaining priority orders 1 through the maximum supported number of instances for the given application (specifically, for its task located at the processing stage under study) less 1, producing one instance entry per each step on the list that is sent to the controller as such individual entries. Each entry of such a priority list comprises, as its core info, simply the instance ID # (as the priority order of any given instance is known from the number of clock cycles since the bit pulse marking the priority order 0 at the start of a new list). To simplify the logic, also the priority order (i.e. the number of clock cycles since the bit pulse marking the priority order 0) of any given entry on these lists is sent along with the instance ID #.
At the beginning of its core to app-instance assignment process, the controller 540 of the manycore processor uses the most recent set of complete priority order lists 535 received from the application RX modules 202 to determine which (highest priority) instances of each given app to assign for execution for the next core allocation period on that processor.
Per the foregoing, the ITC source prioritization, program instance execution prioritization and ITC flow control techniques provide effective program execution optimization capabilities for each of a set of individual programs configured to dynamically share a given data processing system 1 per this description, without any of the programs impacting, or being impacted by, the other programs of such set in any manner. Moreover, for the ITC capabilities, the individual instances (e.g. different user sessions) of a given program are also fully independent from each other. The herein described techniques and architecture thus provide effective performance and runtime isolation between individual programs among groups of programs running on the dynamically shared parallel computing hardware.
From here, we continue by exploring the internal structure and operation of a given processing stage 300 beyond its RX logic discussed above.
The monitoring of the buffered input data availability 261 at the destination app-instance FIFOs 260 of the processing stage RX logic enables optimizing the allocation of processing core capacity of the local manycore processor among the application tasks hosted on the given processing stage. Since the controller module 540 of the local manycore processor determines which instances of the locally hosted tasks of the apps in the system 1 execute at which of the cores of the local manycore array 515, the controller is able to provide the dynamic control 560 for the muxes 450 accordingly.
Internal elements and operation of the application load adaptive manycore processor system 500 are illustrated in FIG. 7.
The hardware logic based controller 540 module within the processor system, through a periodic process, allocates and assigns the cores 520 of the processor among the set of applications and their instances based on the applications' core demand figures (CDFs) 530 as well as their contractual core capacity entitlements (CEs). This application instance to core assignment process is exercised periodically, e.g. at intervals such as once per a defined number (for instance 64, 256 or 1024, or so forth) of processing core clock or instruction cycles. The app-instance to core assignment algorithms of the controller produce, per the app-instances on the processor, identification 550 of their execution cores (if any, at any given time), as well as per the cores of the fabric, identification 560 of their respective app-instances to execute. Moreover, the assignments 550, 560 between app-insts and the cores of the array 515 control the access between the cores 520 of the fabric and the app-inst specific memories at the fabric network and memory subsystem 800 (which can be implemented e.g. per [1] Appendix A, Ch. 5.4).
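The controller's actual allocation algorithms are specified in [1], Appendix A. Purely as an illustrative stand-in (not the patented algorithm), the following Python sketch grants each app its demand up to its Core Entitlement first, then spreads any remaining cores across apps with still unmet demand; it assumes the core count covers the entitlement-level grants:

```python
def allocate_cores(num_cores: int, cdf: dict, ce: dict) -> dict:
    """cdf: app -> Core Demand Figure; ce: app -> contractual Core
    Entitlement. Returns app -> cores granted for the next period."""
    grant = {app: min(cdf[app], ce[app]) for app in cdf}
    free = num_cores - sum(grant.values())
    while free > 0:
        # Spread surplus capacity one core at a time among apps whose
        # demand is not yet met, to keep the distribution fair.
        hungry = [app for app in cdf if grant[app] < cdf[app]]
        if not hungry:
            break
        for app in hungry:
            if free == 0:
                break
            grant[app] += 1
            free -= 1
    return grant

print(allocate_cores(16, cdf={'a': 10, 'b': 4, 'c': 2},
                         ce={'a': 6, 'b': 6, 'c': 4}))
# -> {'a': 10, 'b': 4, 'c': 2}: app 'a' is served beyond its entitlement
#    only because surplus cores remain after all entitled demands are met.
```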
The app-instance to core mapping info 560 also directs the muxing 450 of input data from the RX buffers 260 of an appropriate app-instance to each core of the array 515, as well as the muxing 580 of the input data read control signals (570 to 590, and 575 to 595) from the core array to the RX logic submodules (FIG. 5) of the app-instances presently assigned to the cores.
Similarly, the core to app-inst mapping info 560 also directs the muxing 600 of the (source) app-instance specific ITC permit signals (212 to 213) from the destination processing stages to the cores 520 of the local manycore array, according to which app-instance is presently mapped to which core.
Further reference specifications for aspects and embodiments of the invention are in the references [1] through [10].
The functionality of the invented systems and methods described in this specification, where not otherwise mentioned, is implemented by hardware logic of the system (wherein hardware logic naturally also includes any necessary signal wiring, memory elements and such).
Generally, this description and drawings are included to illustrate architecture and operation of practical embodiments of the invention, but are not meant to limit the scope of the invention. For instance, even though the description does specify certain system elements to certain practical types or values, persons of skill in the art will realize, in view of this description, that any design utilizing the architectural or operational principles of the disclosed systems and methods, with any set of practical types and values for the system parameters, is within the scope of the invention. Moreover, the system elements and process steps, though shown as distinct to clarify the illustration and the description, can in various embodiments be merged or combined with other elements, or further subdivided and rearranged, etc., without departing from the spirit and scope of the invention. Finally, persons of skill in the art will realize that various embodiments of the invention can use different nomenclature and terminology to describe the system elements, process phases etc. technical concepts in their respective implementations. Generally, from this description many variants and modifications will be understood by one skilled in the art that are yet encompassed by the spirit and scope of the invention.
Claims
1. An apparatus comprising:
- a plurality of reconfigurable logic regions, each reconfigurable logic region comprising configurable hardware to implement a respective application logic design; and
- logic for separately encapsulating each of the reconfigurable logic regions, the logic comprising a host interface for communicating with a processor over a physical interconnect; and a plurality of data path functions accessible via the host interface, each data path function comprising a layer for formatting data transfers between the host interface and the application logic design of a corresponding reconfigurable logic region; and
- wherein the host interface is configured to arbitrate between resources of the application logic designs of the respective reconfigurable logic regions, wherein the host interface is configured to enforce an apportionment of bandwidth of the data transfers over the physical interconnect associated with the application logic designs of the respective reconfigurable logic regions based on a programmed value representing at least one input bandwidth share.
2. The apparatus of claim 1, wherein the logic further comprises a management function accessible via the host interface, the management function adapted to cause a reconfigurable logic region of the plurality of reconfigurable logic regions to be configured with a particular application logic design in response to an authorized transaction received at the host interface.
3. The apparatus of claim 1, wherein in at least one configuration at least two given instances of the reconfigurable logic regions are identically configured with the same application logic design, and wherein a same one of the data path functions formats data transfers between the host interface and each of the given instances of the reconfigurable logic regions.
4. The apparatus of claim 3, wherein the same one of the data path functions comprises separate logic supporting each of the given instances of the reconfigurable logic regions.
5. The apparatus of claim 1, further comprising a memory shared by the reconfigurable logic regions, and wherein the host interface configured to arbitrate between resources of the application logic designs further comprises the host interface arbitrating access to the memory by the reconfigurable logic regions.
6. The apparatus of claim 5, wherein the host interface configured to arbitrate between resources of the application logic designs further comprises dynamically assigning application logic designs to the reconfigurable logic regions based at least in part on a respective processing workload received for each of the application logic designs.
7. A method for operating a configurable hardware platform comprising reconfigurable logic, the method comprising:
- loading control logic on a first region of the reconfigurable logic so that the configurable hardware platform performs operations of the control logic, the control logic including a host interface and a control plane function enforcing restricted access for transactions from the host interface over a physical interconnect;
- loading a first application logic design on a second region of the reconfigurable logic in response to receiving a first transaction at the host interface, the first transaction satisfying access criteria of the control plane function;
- loading a second application logic design on a third region of the reconfigurable logic in response to receiving a second transaction at the host interface, the second transaction satisfying access criteria of the control plane function; and
- using the control logic to arbitrate between resources used by each of the first application logic design and the second application logic design for transmitting information from the host interface, wherein the control logic enforces an apportionment of bandwidth for data transfers over the physical interconnect associated with the first and second application logic designs.
8. The method of claim 7, further comprising:
- using the control logic as an interface between a shared peripheral and the first application logic design and the second application logic design.
9. The method of claim 8, wherein the shared peripheral is a memory.
10. The method of claim 9, further comprising:
- using the control logic to restrict access from the first application logic design to a first range of addresses of the shared peripheral and to restrict access from the second application logic design to a second range of addresses of the shared peripheral.
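To illustrate the bandwidth apportionment recited in claims 1 and 7, the following is a minimal software sketch of one conventional way a host-interface arbiter could grant transfer slots over the physical interconnect in proportion to programmed bandwidth shares. The credit-based policy and all names are assumptions of this sketch, not a definitive reading of the claims.

```python
# Illustrative weighted arbiter: grant interconnect transfer slots to the
# application logic designs in proportion to their programmed bandwidth
# shares. The credit scheme below is one conventional realization.

def arbitrate(shares, pending, slots):
    """shares[app]  -- programmed bandwidth share (weight)
    pending[app]    -- queued transfer requests for that app design
    slots           -- transfer slots available in this arbitration window
    Returns the number of slots granted to each app design."""
    total = sum(shares.values())
    credits = {a: 0.0 for a in shares}
    grants = {a: 0 for a in shares}
    for _ in range(slots):
        # Each app design accrues credit in proportion to its share...
        for a in shares:
            credits[a] += shares[a] / total
        # ...and the most-credited design with work pending wins the slot.
        eligible = [a for a in shares if pending[a] > grants[a]]
        if not eligible:
            break
        winner = max(eligible, key=lambda a: credits[a])
        grants[winner] += 1
        credits[winner] -= 1.0
    return grants
```

With shares of {A: 3, B: 1} and both designs backlogged, a four-slot window resolves to three grants for A and one for B, i.e. the programmed 3:1 apportionment of interconnect bandwidth.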
References Cited
U.S. Patent Documents
Patent Number | Issue Date | Inventor(s) |
4402046 | August 30, 1983 | Cox et al. |
4404628 | September 13, 1983 | Angelo |
4956771 | September 11, 1990 | Neustaedter |
5031146 | July 9, 1991 | Umina et al. |
5237673 | August 17, 1993 | Orbits et al. |
5303369 | April 12, 1994 | Borcherding et al. |
5452231 | September 19, 1995 | Butts et al. |
5519829 | May 21, 1996 | Wilson |
5612891 | March 18, 1997 | Butts et al. |
5752030 | May 12, 1998 | Konno et al. |
5809516 | September 15, 1998 | Ukai et al. |
5931959 | August 3, 1999 | Kwiat |
6072781 | June 6, 2000 | Feeney et al. |
6108683 | August 22, 2000 | Kamada et al. |
6212544 | April 3, 2001 | Borkenhagen et al. |
6289434 | September 11, 2001 | Roy |
6289440 | September 11, 2001 | Casselman |
6334175 | December 25, 2001 | Chih |
6345287 | February 5, 2002 | Fong et al. |
6353616 | March 5, 2002 | Elwalid et al. |
6366157 | April 2, 2002 | Abdesselem et al. |
6721948 | April 13, 2004 | Morgan |
6769017 | July 27, 2004 | Bhat et al. |
6782410 | August 24, 2004 | Bhagat et al. |
6816905 | November 9, 2004 | Sheets et al. |
6912706 | June 28, 2005 | Stamm et al. |
7058868 | June 6, 2006 | Guettaf |
7093258 | August 15, 2006 | Miller et al. |
7099813 | August 29, 2006 | Nightingale |
7110417 | September 19, 2006 | El-Hennawey et al. |
7177961 | February 13, 2007 | Brice, Jr. et al. |
7178145 | February 13, 2007 | Bono |
7315897 | January 1, 2008 | Hardee et al. |
7328314 | February 5, 2008 | Kendall et al. |
7370013 | May 6, 2008 | Aziz et al. |
7389403 | June 17, 2008 | Alpert et al. |
7406407 | July 29, 2008 | Larus |
7447873 | November 4, 2008 | Nordquist |
7461376 | December 2, 2008 | Geye et al. |
7469311 | December 23, 2008 | Tsu et al. |
7503045 | March 10, 2009 | Aziz et al. |
7518396 | April 14, 2009 | Kondapalli et al. |
7581079 | August 25, 2009 | Pechanek |
7631107 | December 8, 2009 | Pandya |
7665092 | February 16, 2010 | Vengerov |
7698541 | April 13, 2010 | Robles |
7738496 | June 15, 2010 | Raza |
7743001 | June 22, 2010 | Vermeulen et al. |
7760625 | July 20, 2010 | Miyaho et al. |
7765547 | July 27, 2010 | Cismas et al. |
7802255 | September 21, 2010 | Pilkington |
7805706 | September 28, 2010 | Ly et al. |
7818699 | October 19, 2010 | Stuber |
7861063 | December 28, 2010 | Golla et al. |
7908606 | March 15, 2011 | Depro et al. |
7984246 | July 19, 2011 | Yung et al. |
8001549 | August 16, 2011 | Henmi |
8015392 | September 6, 2011 | Naik et al. |
8018961 | September 13, 2011 | Gopinath et al. |
8024731 | September 20, 2011 | Cornwell et al. |
8032889 | October 4, 2011 | Conrad et al. |
8046766 | October 25, 2011 | Rhine |
8059674 | November 15, 2011 | Cheung et al. |
8060610 | November 15, 2011 | Herington |
8087029 | December 27, 2011 | Lindholm et al. |
8095662 | January 10, 2012 | Lappas et al. |
8098255 | January 17, 2012 | Fouladi et al. |
8136153 | March 13, 2012 | Zhang et al. |
8230070 | July 24, 2012 | Buyya et al. |
8271730 | September 18, 2012 | Piry et al. |
8296434 | October 23, 2012 | Miller et al. |
8299816 | October 30, 2012 | Yamada |
8327126 | December 4, 2012 | Bell, Jr. et al. |
8352609 | January 8, 2013 | Maclinovsky et al. |
8352611 | January 8, 2013 | Maddhuri et al. |
8429630 | April 23, 2013 | Nickolov et al. |
8447933 | May 21, 2013 | Nishihara |
8484287 | July 9, 2013 | Gavini |
8533674 | September 10, 2013 | Abrams et al. |
8539207 | September 17, 2013 | LeGrand |
8561183 | October 15, 2013 | Muth et al. |
8566836 | October 22, 2013 | Ramaraju et al. |
8595832 | November 26, 2013 | Yee et al. |
8626970 | January 7, 2014 | Craddock et al. |
8713572 | April 29, 2014 | Chambliss et al. |
8713574 | April 29, 2014 | Creamer et al. |
8738860 | May 27, 2014 | Griffin et al. |
8745241 | June 3, 2014 | Waldspurger |
8762595 | June 24, 2014 | Muller et al. |
8850574 | September 30, 2014 | Ansel et al. |
8881141 | November 4, 2014 | Koch et al. |
8935491 | January 13, 2015 | Sandstrom |
9038072 | May 19, 2015 | Nollet et al. |
9047137 | June 2, 2015 | Solihin |
9104453 | August 11, 2015 | Anand et al. |
9154442 | October 6, 2015 | Mital |
9164953 | October 20, 2015 | Lippett |
9218195 | December 22, 2015 | Anderson et al. |
9262360 | February 16, 2016 | Wagh et al. |
9323794 | April 26, 2016 | Indeck |
9348724 | May 24, 2016 | Ota et al. |
9390046 | July 12, 2016 | Wagh |
9448847 | September 20, 2016 | Sandstrom |
9503093 | November 22, 2016 | Karras et al. |
9589088 | March 7, 2017 | Mishra et al. |
9608933 | March 28, 2017 | Emaru |
9690600 | June 27, 2017 | Jung et al. |
9697161 | July 4, 2017 | Mangano et al. |
9910708 | March 6, 2018 | Williamson |
10009441 | June 26, 2018 | Xue |
10013662 | July 3, 2018 | Brandwine et al. |
10133599 | November 20, 2018 | Sandstrom |
10133600 | November 20, 2018 | Sandstrom |
10515046 | December 24, 2019 | Fleming |
10650452 | May 12, 2020 | Parsons |
10942778 | March 9, 2021 | Sandstrom |
11036556 | June 15, 2021 | Sandstrom |
11188388 | November 30, 2021 | Sandstrom |
11347556 | May 31, 2022 | Sandstrom |
U.S. Patent Application Publications
Publication Number | Publication Date | Inventor(s) |
20020040400 | April 4, 2002 | Masters |
20020056033 | May 9, 2002 | Huppenthal |
20020112091 | August 15, 2002 | Schott et al. |
20020124012 | September 5, 2002 | Liem et al. |
20020129080 | September 12, 2002 | Hentschel et al. |
20020141343 | October 3, 2002 | Bays |
20020143843 | October 3, 2002 | Mehta |
20020152305 | October 17, 2002 | Jackson |
20020169828 | November 14, 2002 | Blanchard |
20030018807 | January 23, 2003 | Larsson et al. |
20030235200 | December 25, 2003 | Kendall et al. |
20040088488 | May 6, 2004 | Ober et al. |
20040111724 | June 10, 2004 | Libby |
20040128401 | July 1, 2004 | Fallon et al. |
20040158637 | August 12, 2004 | Lee |
20040168170 | August 26, 2004 | Miller |
20040193806 | September 30, 2004 | Koga et al. |
20040210900 | October 21, 2004 | Jones et al. |
20050010502 | January 13, 2005 | Birkestrand et al. |
20050013705 | January 20, 2005 | Farkas et al. |
20050036515 | February 17, 2005 | Cheung et al. |
20050055694 | March 10, 2005 | Lee |
20050080999 | April 14, 2005 | Angsmark et al. |
20050182838 | August 18, 2005 | Sheets et al. |
20050188372 | August 25, 2005 | Inoue et al. |
20050193186 | September 1, 2005 | Gazsi et al. |
20050198476 | September 8, 2005 | Gazsi et al. |
20050235070 | October 20, 2005 | Young et al. |
20050268298 | December 1, 2005 | Hunt et al. |
20050278551 | December 15, 2005 | Goodnow et al. |
20060036774 | February 16, 2006 | Schott et al. |
20060059485 | March 16, 2006 | Onufryk et al. |
20060061794 | March 23, 2006 | Ito et al. |
20060070078 | March 30, 2006 | Dweck et al. |
20060075265 | April 6, 2006 | Hamaoka et al. |
20060179194 | August 10, 2006 | Jensen |
20060195847 | August 31, 2006 | Amano et al. |
20060212870 | September 21, 2006 | Arndt et al. |
20060218376 | September 28, 2006 | Pechanek |
20070074011 | March 29, 2007 | Borkar et al. |
20070153802 | July 5, 2007 | Anke et al. |
20070220517 | September 20, 2007 | Lippett |
20070226482 | September 27, 2007 | Borkar et al. |
20070283311 | December 6, 2007 | Karoubalis et al. |
20070291576 | December 20, 2007 | Yang |
20080046997 | February 21, 2008 | Wang |
20080077927 | March 27, 2008 | Armstrong et al. |
20080086395 | April 10, 2008 | Brenner et al. |
20080189703 | August 7, 2008 | Im et al. |
20080244588 | October 2, 2008 | Leiserson et al. |
20080256339 | October 16, 2008 | Xu et al. |
20090037554 | February 5, 2009 | Herington |
20090049443 | February 19, 2009 | Powers et al. |
20090070762 | March 12, 2009 | Franaszek et al. |
20090178047 | July 9, 2009 | Astley et al. |
20090187756 | July 23, 2009 | Nollet et al. |
20090198866 | August 6, 2009 | Chen et al. |
20090265712 | October 22, 2009 | Herington |
20090282477 | November 12, 2009 | Chen et al. |
20090327446 | December 31, 2009 | Wittenschlaeger |
20100043008 | February 18, 2010 | Marchand |
20100049963 | February 25, 2010 | Bell, Jr. et al. |
20100058346 | March 4, 2010 | Narang et al. |
20100100883 | April 22, 2010 | Booton |
20100131955 | May 27, 2010 | Brent et al. |
20100153700 | June 17, 2010 | Capps, Jr. et al. |
20100153955 | June 17, 2010 | Sirota et al. |
20100162230 | June 24, 2010 | Chen et al. |
20100192155 | July 29, 2010 | Nam et al. |
20100205602 | August 12, 2010 | Zedlewski et al. |
20100232396 | September 16, 2010 | Jing et al. |
20100268889 | October 21, 2010 | Conte et al. |
20100287320 | November 11, 2010 | Querol et al. |
20110014893 | January 20, 2011 | Davis et al. |
20110035749 | February 10, 2011 | Krishnakumar et al. |
20110047546 | February 24, 2011 | Kivity et al. |
20110050713 | March 3, 2011 | McCrary et al. |
20110055480 | March 3, 2011 | Guyetant et al. |
20110078411 | March 31, 2011 | Maclinovsky et al. |
20110096667 | April 28, 2011 | Arita et al. |
20110119674 | May 19, 2011 | Nishikawa |
20110154348 | June 23, 2011 | Elnozahy et al. |
20110161969 | June 30, 2011 | Arndt et al. |
20110173432 | July 14, 2011 | Cher et al. |
20110197048 | August 11, 2011 | Chung et al. |
20110247012 | October 6, 2011 | Uehara |
20110249678 | October 13, 2011 | Bonicatto et al. |
20110258317 | October 20, 2011 | Sinha et al. |
20110296138 | December 1, 2011 | Carter et al. |
20110321057 | December 29, 2011 | Mejdrich et al. |
20120005473 | January 5, 2012 | Hofstee et al. |
20120022832 | January 26, 2012 | Shannon et al. |
20120079501 | March 29, 2012 | Sandstrom |
20120089985 | April 12, 2012 | Adar et al. |
20120173734 | July 5, 2012 | Kimbrel et al. |
20120216012 | August 23, 2012 | Vorbach et al. |
20120221886 | August 30, 2012 | Barsness et al. |
20120222038 | August 30, 2012 | Katragadda et al. |
20120222042 | August 30, 2012 | Chess et al. |
20120246450 | September 27, 2012 | Abdallah |
20120266176 | October 18, 2012 | Vojnovic et al. |
20120303809 | November 29, 2012 | Patel et al. |
20120324458 | December 20, 2012 | Peterson et al. |
20130013903 | January 10, 2013 | Bell, Jr. et al. |
20130222402 | August 29, 2013 | Peterson et al. |
20130325998 | December 5, 2013 | Hormuth et al. |
20140123135 | May 1, 2014 | Huang et al. |
20140181501 | June 26, 2014 | Hicok et al. |
20140317378 | October 23, 2014 | Lippett |
20140331236 | November 6, 2014 | Mitra et al. |
20140372167 | December 18, 2014 | Hillier |
20150100772 | April 9, 2015 | Jung et al. |
20150178116 | June 25, 2015 | Jorgensen et al. |
20150339798 | November 26, 2015 | Peterson et al. |
20150378776 | December 31, 2015 | Lippett |
20160034295 | February 4, 2016 | Cochran |
20160048394 | February 18, 2016 | Vorbach et al. |
20160080201 | March 17, 2016 | Huang et al. |
20160378538 | December 29, 2016 | Kang |
20170024573 | January 26, 2017 | Bhattacharyya et al. |
20170097838 | April 6, 2017 | Nagapudi et al. |
20170310794 | October 26, 2017 | Smith et al. |
20180089119 | March 29, 2018 | Khan et al. |
20180097709 | April 5, 2018 | Box et al. |
20190361745 | November 28, 2019 | Sandstrom |
20200192454 | June 18, 2020 | de Rochemont |
20210191781 | June 24, 2021 | Sandstrom |
20210303361 | September 30, 2021 | Sandstrom |
20210397484 | December 23, 2021 | Sandstrom |
20210406083 | December 30, 2021 | Sandstrom |
Foreign Patent Documents
Document Number | Date | Country |
3340123 | May 1985 | DE |
255857 | February 1988 | EP |
889622 | July 1999 | EP |
2309388 | April 2011 | EP |
2704022 | March 2014 | EP |
1236177 | June 1971 | GB |
2145255 | March 1985 | GB |
2272311 | May 1994 | GB |
05197619 | August 1993 | JP |
06004314 | January 1994 | JP |
11353291 | December 1999 | JP |
1327106 | July 1987 | SU |
2000070426 | November 2000 | WO |
02/09285 | January 2002 | WO |
2011123467 | October 2011 | WO |
2012/040691 | March 2012 | WO |
Other Publications
- Al-Fares et al., “Hedera: Dynamic Flow Scheduling for Data Center Networks”, NSDI, vol. 10, no. 8, Apr. 28, 2010. (previously submitted in related U.S. Appl. No. 17/195,174).
- Binotto et al., “Dynamic Self-Rescheduling of Tasks over a Heterogeneous Platform,” 2008 International Conference on Reconfigurable Computing and FPGAs, 2008, pp. 253-258. (previously submitted in related U.S. Appl. No. 17/195,174).
- Clemente et al., “A Task-Graph Execution Manager for Reconfigurable Multi-tasking Systems,” pp. 73-83, 2010, Microprocessors and Microsystems, vol. 34, Issues 2-4. (previously submitted in related U.S. Appl. No. 17/195,174).
- Ebrahimi et al., “Fairness via Source Throttling: A Configurable and High-Performance Fairness Substrate for Multi-Core Memory Systems”, ACM SIGPLAN Notices, vol. 45, No. 3, Mar. 2010, pp. 335-346. (previously submitted in related U.S. Appl. No. 17/195,174).
- George et al., “Novo-G: At the Forefront of Scalable Reconfigurable Supercomputing”, Computing in Science Engineering, vol. 13, Issue 1, Dec. 30, 2010, pp. 82-86. (previously submitted in related U.S. Appl. No. 17/195,174).
- Gohringer et al., “CAP-OS: Operating system for runtime scheduling, task mapping and resource management on reconfigurable multiprocessor architectures,” 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010, pp. 1-8, doi: 10.1109/IPDPSW.2010.5470732. (previously submitted in related U.S. Appl. No. 17/195,174).
- Gohringer et al., “Operating System for Runtime Reconfigurable Multiprocessor Systems,” International Journal of Reconfigurable Computing, Feb. 14, 2011, pp. 1-17, vol. 2011, Hindawi Publishing Corporation. (previously submitted in related U.S. Appl. No. 17/195,174).
- Jacobs et al., “Reconfigurable Fault Tolerance: A Comprehensive Framework for Reliable and Adaptive FPGA-Based Space Computing,” ACM Trans. Reconfigurable Technol. Syst. 5, 4, Article 21 (Dec. 2012), 30 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Joselli et al., “An architecture with automatic load balancing for real-time simulation and visualization systems,” Journal of Computational Interdisciplinary Sciences, 2010, 1(3): 207-224. (previously submitted in related U.S. Appl. No. 17/195,174).
- May et al., “Queueing Theory Modeling of a CPU-GPU System,” Northrop Grumman Corporation, Electronic Systems Sector, May 11, 2010, 2 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Notice of Allowance issued in U.S. Appl. No. 16/434,581 dated Oct. 27, 2020. (previously submitted in related U.S. Appl. No. 17/195,174).
- Odajima et al., “GPU/CPU Work Sharing with Parallel Language XcalableMP-dev for Parallelized Accelerated Computing,” 2012 41st International Conference on Parallel Processing Workshops, Pittsburgh, PA, 2012, pp. 97-106, doi: 10.1109/ICPPW.2012.16. (previously submitted in related U.S. Appl. No. 17/195,174).
- Ranjan et al., “Parallelizing a Face Detection and Tracking System for Multi-Core Processors,” Proceedings of the 2012 9th Conference on Computer and Robot Vision, CRV 2012 (2012), pp. 290-297, 10.1109/CRV.2012.45. (previously submitted in related U.S. Appl. No. 17/195,174).
- Roy et al., “Efficient Autoscaling in the Cloud using Predictive Models for Workload Forecasting”, 2011 IEEE 4th International Conference on Cloud Computing, Washington DC, Jul. 4-9, 2011, pp. 500-507. (previously submitted in related U.S. Appl. No. 17/195,174).
- Supplemental Notice of Allowability issued in U.S. Appl. No. 17/195,174 dated Sep. 18, 2018, 34 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Supplemental Notice of Allowability issued in U.S. Appl. No. 17/195,174 dated Sep. 7, 2018, 26 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Toss, Julio, “Work Stealing Inside GPUs,” Universidade Federal do Rio Grande do Sul, Instituto de Informática, 39 pages, 2011, Curso de Ciência da Computação: Ênfase em Ciência da Computação: Bacharelado. (previously submitted in related U.S. Appl. No. 17/195,174).
- Wu et al., “Runtime Task Allocation in Multicore Packet Processing Systems,” IEEE Transactions on Parallel and Distributed Systems, vol. 23, No. 10, pp. 1934-1943, Oct. 2012, doi: 10.1109/TPDS.2012.56. (previously submitted in related U.S. Appl. No. 17/195,174).
- Ziermann et al., “Adaptive Traffic Scheduling Techniques for Mixed Real-Time and Streaming Applications on Reconfigurable Hardware,” 2010 IEEE International Symposium on Parallel & Distributed Processing, Workshops and Phd Forum (IPDPSW), 2010, pp. 1-4, doi: 10.1109/IPDPSW.2010.5470738 (previously submitted in related U.S. Appl. No. 17/195,174).
- Notice of Allowance issued in U.S. Appl. No. 17/195,174 dated May 14, 2021. (previously submitted in related U.S. Appl. No. 17/470,926).
- Hutchings et al., “Implementation approaches for reconfigurable logic applications,” Field-Programmable Logic and Applications, Springer Berlin/Heidelberg, 1995. <http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.3063&rep=rep1&type=pdf>. (previously submitted in related U.S. Appl. No. 17/470,926).
- “Introduction to Implementing Design Security with Microsemi SmartFusion2 and IGLOO2 FPGAs,” by Microsemi, Nov. 2013, 13 pages. (previously submitted in related U.S. Appl. No. 17/470,926).
- Shin et al., “AVANT-GUARD: Scalable and Vigilant Switch Flow Management in Software-Defined Networks,” 2013. (previously submitted in related U.S. Appl. No. 17/470,926).
- “Design of a Secure Plane Bridge,” Microsemi, 2013. (previously submitted in related U.S. Appl. No. 17/470,926).
- Unnikrishnan et al., “ReClick—A Modular Dataplane Design Framework for FPGA-Based Network Virtualization,” 2011 ACM/IEEE Seventh Symposium on Architectures for Networking and Communications Systems, 2011, pp. 145-155, doi: 10.1109/ANCS.2011.31. (previously submitted in related U.S. Appl. No. 17/470,926).
- Notice of Allowance issued in U.S. Appl. No. 17/344,636 dated Oct. 14, 2021. (previously submitted in related U.S. Appl. No. 17/470,926).
- Supplemental Notice of Allowability issued in U.S. Appl. No. 17/344,636 dated Nov. 5, 2021. (previously submitted in related U.S. Appl. No. 17/470,926).
- Non-Final Office Action issued in U.S. Appl. No. 17/463,098 dated Nov. 26, 2021. (previously submitted in related U.S. Appl. No. 17/470,926).
- [#HADOOP-3445] Implementing core scheduler functionality in Resource Manager (V1) for Hadoop, Accessed May 18, 2018, 12 pages, https://issues.apache.org/jira/si/jira.issueviews:issue-html/HADOOP-3445/HADOOP-3445.html. (previously submitted in related U.S. Appl. No. 15/267,153).
- 7 Series FPGAs Configuration User Guide, a Xilinx, Inc. User Guide UG470 (v1.4) Jul. 19, 2012. (previously submitted in related U.S. Appl. No. 15/267,153).
- Borges, et al., “Sun Grid Engine, a new scheduler for EGEE middleware,” (2018). (previously submitted in related U.S. Appl. No. 15/267,153).
- Cooper, Brian F. et al., Building a Cloud for Yahoo!, 2009, 9 pages, IEEE Computer Society Technical Committee on Data Engineering, https://www.researchgate.net/profile/Rodrigo_Fonseca3/publication/220282767_Building_a_Cloud_for_Yahoo/links/0912f5109da99ddf6a000000/Building-a-Cloud-for-Yahoo.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Dye, David, Partial Reconfiguration of Xilinx FPGAs Using ISE Design Suite, a Xilinx, Inc. White Paper WP374 (v1.2), May 30, 2012. (previously submitted in related U.S. Appl. No. 15/267,153).
- Examination Report issued in IN Application No. 1219/MUM/2012 dated Jul. 19, 2019. (previously submitted in related U.S. Appl. No. 17/195,174).
- Examination Report issued in IN Application No. 2414/MUM/2011 dated Jul. 25, 2019. (previously submitted in related U.S. Appl. No. 17/195,174).
- Examiner's Answer issued in related U.S. Appl. No. 13/297,455 dated Feb. 10, 2016, 9 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Apr. 18, 2013, 18 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Mar. 26, 2015, 14 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Sep. 3, 2014, 18 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Final Rejection issued in related U.S. Appl. No. 14/521,490 dated Jul. 28, 2017, 16 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- First Examination Report issued in IN Application No. 401/MUM/2011 dated Nov. 9, 2018. (previously submitted in related U.S. Appl. No. 15/267,153).
- Fischer, Michael J. et al., Assigning Tasks for Efficiency in Hadoop, 2010, 11 pages, https://www.researchgate.net/profile/Xueyuan_Su/publication/221257628_Assigning_tasks_for_efficiency_in_Hadoop/links/53df31100cf216e4210c5fd1/Assigning-tasks-for-efficiency-in-Hadoop.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Gentzsch, et al., “Sun Grid Engine: Towards Creating a Compute Power Grid.” IEEE Computer Society, Proceedings of the 1st International Symposium on Cluster Computing and the Grid (2001). (previously submitted in related U.S. Appl. No. 15/267,153).
- Ghodsi, Ali, et al., Dominant Resource Fairness: Fair Allocation of Multiple Resource Types, Proceedings of NSDI '11: 8th USENIX Symposium on Networked Systems Design and Implementation, Mar. 30, 2011, pp. 323-336. (previously submitted in related U.S. Appl. No. 15/267,153).
- Han, Wei, et al., Multi-core Architectures with Dynamically Reconfigurable Array Processors for the WiMAX Physical Layer, pp. 115-120, 2008. (previously submitted in related U.S. Appl. No. 15/267,153).
- Hindman, Benjamin, et al., Mesos: A Platform for Fine-Grained Resource Sharing in the Data Center, Proceedings of NSDI '11: 8th USENIX Symposium on Networked Systems Design and Implementation, Mar. 30, 2011, pp. 295-308 (previously submitted in related U.S. Appl. No. 15/267,153).
- Isard, Michael et al., Quincy: Fair Scheduling for Distributed Computing Clusters, Accessed May 18, 2018, 20 pages, https://www.sigops.org/sosp/sosp09/papers/isard-sosp09.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Ismail, M. I., et al., “Program-based static allocation policies for highly parallel computers,” Proceedings International Phoenix Conference on Computers and Communications, Scottsdale, AZ, 1995, pp. 61-68. (previously submitted in related U.S. Appl. No. 15/267,153).
- Jean, J et al., Dynamic reconfiguration to support concurrent applications, IEEE Transactions on Computers, vol. 48, Issue 6, pp. 591-602, Jun. 1999. (previously submitted in related U.S. Appl. No. 15/267,153).
- Lamonnier et al., Accelerate Partial Reconfiguration with a 100% Hardware Solution, Xcell Journal, Issue 79, Second Quarter 2012, pp. 44-49. (previously submitted in related U.S. Appl. No. 15/267,153).
- Lim, Harold C. et al., Automated Control in Cloud Computing: Challenges and Opportunities, Jun. 19, 2009, 6 pages, ACM, https://www2.cs.duke.edu/nicl/pub/papers/acdc09-lim.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Loh, Gabriel H., 3 D-Stacked Memory Architectures for Multi-Core Processors, IEEE Computer Society, pp. 453-464, 2008. (previously submitted in related U.S. Appl. No. 15/267,153).
- McCann, Cathy, et al., A Dynamic Processor Allocation Policy for Multiprogrammed Shared-Memory Multiprocessors, 1993, ACM, 33 pages (146-178). (previously submitted in related U.S. Appl. No. 15/267,153).
- Mohan, Shiwali et al., Towards a Resource Aware Scheduler in Hadoop, Dec. 21, 2009, 10 pages, Computer Science and Engineering, University of Michigan, Ann Arbor, https://pdfs.semanticscholar.org/d2e3/c7b60967934903f0837219772c6972ede93e.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Morishita, et al., Design of a multiprocessor system supporting interprocess message communication, Journal of the Faculty of Engineering, University of Tokyo, Series A, No. 24, 1986, pp. 36-37. (previously submitted in related U.S. Appl. No. 15/267,153).
- Murthy, Arun C., et al., Architecture of Next Generation Apache Hadoop MapReduce Framework, 2011, 14 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Jun. 19, 2014, 15 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Mar. 14, 2013, 23 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 13/297,455 dated Oct. 3, 2014, 29 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 14/318,512 dated Feb. 12, 2016, 25 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 14/318,512 dated Jun. 1, 2016, 18 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 14/521,490 dated May 17, 2018, 23 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 14/521,490 dated May 4, 2017, 19 pages. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 15/267,153 dated Aug. 24, 2018, 54 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Non-Final Rejection issued in related U.S. Appl. No. 15/267,153 dated Mar. 9, 2018, 23 pages. (previously submitted in related U.S. Appl. No. 17/195,174).
- Notice of Allowance issued in U.S. Appl. No. 15/267,153 dated Jan. 17, 2019. (previously submitted in related U.S. Appl. No. 17/195,174).
- Partial Reconfiguration Tutorial, PlanAhead Design Tool, a Xilinx, Inc. User Guide UG743 (v14.1) May 8, 2012. (previously submitted in related U.S. Appl. No. 15/267,153).
- Partial Reconfiguration User Guide, a Xilinx, Inc. user document UG702 (v14.2) Jul. 25, 2012. (previously submitted in related U.S. Appl. No. 15/267,153).
- Sandholm, Thomas et al., Dynamic Proportional Share Scheduling in Hadoop, Accessed May 18, 2018, 20 pages, Hewlett-Packard Laboratories, http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.591.4477&rep=rep1&type=pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Shankar, Uma, Oracle Grid Engine Administration Guide, Release 6.2 Update 7, Aug. 2011, 202 pages, Oracle Corporation. (previously submitted in related U.S. Appl. No. 15/267,153).
- Shieh, Alan, et al., Sharing the Data Center Network, Proceedings of NSDI '11: 8th USENIX Symposium on Networked Systems Design and Implementation, Mar. 30, 2011, pp. 309-322. (previously submitted in related U.S. Appl. No. 15/267,153).
- Singh, Deshanand, Implementing FPGA Design with the OpenCL Standard, an Altera Corporation White Paper WP-01173-2.0, Nov. 2012. (previously submitted in related U.S. Appl. No. 15/267,153).
- Tam et al., Fast Configuration of PCI Express Technology through Partial Reconfiguration, a Xilinx, Inc. Application Note XAPP883 (v1.0) Nov. 19, 2010. (previously submitted in related U.S. Appl. No. 15/267,153).
- Tian, Chao et al., A Dynamic MapReduce Scheduler for Heterogeneous Workloads, 2009, pp. 218-224, IEEE Computer Society, https://pdfs.semanticscholar.org/679f/73d810e2ac9e2e84de798d853b6fb0b0206a.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Tsai, Chang-Hao, System Architectures with Virtualized Resources in a Large-Scale Computing Infrastructure, 2009, 146 pages, Computer Science and Engineering, The University of Michigan, https://kabru.eecs.umich.edu/papers/thesis/chtsai-thesis.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Warneke et al., “Nephele: efficient parallel data processing in the cloud,” MTAGS '09 Proceedings of the 2nd Workshop on Many-Task Computing on Grids and Supercomputers, Article No. 8 (2009). (previously submitted in related U.S. Appl. No. 17/195,174).
- Wen et al., “Minimizing Migration on Grid Environments: an Experience on Sun Grid Engine” Journal of Information Technology and Applications, vol. 1, No. 4, pp. 297-304 (2007). (previously submitted in related U.S. Appl. No. 15/267,153).
- Zaharia, Matei et al., Job Scheduling for Multi-User MapReduce Clusters, Apr. 30, 2009, actual publication date unknown, 18 pages, Electrical Engineering and Computer Sciences, University of California at Berkeley, https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-55.pdf. (previously submitted in related U.S. Appl. No. 15/267,153).
- Non-Final Rejection issued in related U.S. Appl. No. 17/747,839 dated Aug. 16, 2022.
Type: Grant
Filed: Jul 7, 2022
Date of Patent: Nov 15, 2022
Assignee: ThroughPuter, Inc. (Williamsburg, VA)
Inventor: Mark Henrik Sandstrom (Alexandria, VA)
Primary Examiner: Brian T O Connor
Application Number: 17/859,657
International Classification: G06F 9/50 (20060101); G06F 9/48 (20060101); G06F 8/656 (20180101); G06F 15/80 (20060101); H04L 47/78 (20220101); G06F 15/173 (20060101);